A novel global Harmony Search method based on Ant Colony Optimisation algorithm
NASA Astrophysics Data System (ADS)
Fouad, Allouani; Boukhetala, Djamel; Boudjema, Fares; Zenger, Kai; Gao, Xiao-Zhi
2016-03-01
The Global-best Harmony Search (GHS) is a recently developed stochastic optimisation algorithm that hybridises the Harmony Search (HS) method with the swarm intelligence concept of particle swarm optimisation (PSO) to enhance its performance. In this article, a new optimisation algorithm called GHSACO is developed by combining the GHS with the Ant Colony Optimisation (ACO) algorithm. Our method introduces a novel improvisation process, which differs from that of the GHS in the following aspects: (i) a modified harmony memory (HM) representation and design; (ii) a global random switching mechanism that governs the choice between the ACO and GHS operators; and (iii) an additional memory consideration selection rule using the ACO random proportional transition rule with a pheromone trail update mechanism. The proposed GHSACO algorithm has been applied to various benchmark functions and constrained optimisation problems. Simulation results demonstrate that it can find significantly better solutions than the original HS and some of its variants.
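A minimal sketch of the switching idea described above: during memory consideration, each dimension is drawn either from the global-best harmony (GHS branch) or via a pheromone-weighted roulette over the harmony memory (ACO branch). The sphere test function, all parameter values and the pheromone update rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    return float(np.sum(x ** 2))

def improvise(hm, fitness, pheromone, hmcr=0.9, par=0.3, p_aco=0.5, lo=-5.0, hi=5.0):
    """One GHSACO-style improvisation step (illustrative)."""
    dim = hm.shape[1]
    best = hm[np.argmin(fitness)]
    new = np.empty(dim)
    for j in range(dim):
        if rng.random() < hmcr:                      # memory consideration
            if rng.random() < p_aco:                 # ACO branch: pheromone roulette
                probs = pheromone[:, j] / pheromone[:, j].sum()
                new[j] = hm[rng.choice(len(hm), p=probs), j]
            else:                                    # GHS branch: copy from global best
                new[j] = best[j]
            if rng.random() < par:                   # pitch adjustment
                new[j] += rng.uniform(-0.1, 0.1)
        else:                                        # random selection
            new[j] = rng.uniform(lo, hi)
    return new

# tiny driver: replace the worst harmony, reinforce pheromone of the best
hm = rng.uniform(-5, 5, size=(10, 4))
pher = np.ones_like(hm)
fit = np.array([sphere(x) for x in hm])
for _ in range(2000):
    cand = improvise(hm, fit, pher)
    f = sphere(cand)
    w = int(np.argmax(fit))
    if f < fit[w]:
        hm[w], fit[w] = cand, f
    pher *= 0.99                                     # evaporation
    pher[int(np.argmin(fit))] += 0.1                 # deposit on the current best harmony
print(fit.min())
```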
Multiobjective optimisation of bogie suspension to boost speed on curves
NASA Astrophysics Data System (ADS)
Milad Mousavi-Bideleh, Seyed; Berbyuk, Viktor
2016-01-01
To improve safety and maximum admissible speed in different operational scenarios, multiobjective optimisation of the bogie suspension components of a one-car railway vehicle model is considered. The vehicle model has 50 degrees of freedom and is developed in the multibody dynamics software SIMPACK. Track shift force, running stability and risk of derailment are selected as safety objective functions. The improved maximum admissible speeds of the vehicle on curves are determined based on track plane accelerations of up to 1.5 m/s². To reduce the number of design parameters for optimisation and improve computational efficiency, a global sensitivity analysis is carried out using the multiplicative dimensional reduction method (M-DRM). A multistep optimisation routine based on a genetic algorithm (GA) and MATLAB/SIMPACK co-simulation is executed at three levels. The conventional secondary and primary suspension components of the bogie are chosen as the design parameters in the first two steps, respectively, and the last step focuses on semi-active suspension: the input electrical current to magnetorheological yaw dampers is optimised to guarantee an appropriate safety level. Semi-active controllers are also applied and their effects on bogie dynamics are explored. The safety Pareto-optimised results are compared with those associated with in-service values. The global sensitivity analysis and the multistep approach significantly reduced the number of design parameters and improved the computational efficiency of the optimisation. Furthermore, using the optimised values of the design parameters makes it possible to run the vehicle up to 13% faster on curves while a satisfactory safety level is guaranteed. The results obtained can be used in Pareto optimisation and active bogie suspension design problems.
NASA Astrophysics Data System (ADS)
Harré, Michael S.
2013-02-01
Two aspects of modern economic theory have dominated the recent discussion on the state of the global economy: Crashes in financial markets and whether or not traditional notions of economic equilibrium have any validity. We have all seen the consequences of market crashes: plummeting share prices, businesses collapsing and considerable uncertainty throughout the global economy. This seems contrary to what might be expected of a system in equilibrium where growth dominates the relatively minor fluctuations in prices. Recent work from within economics as well as by physicists, psychologists and computational scientists has significantly improved our understanding of the more complex aspects of these systems. With this interdisciplinary approach in mind, a behavioural economics model of local optimisation is introduced and three general properties are proven. The first is that under very specific conditions local optimisation leads to a conventional macro-economic notion of a global equilibrium. The second is that if both global optimisation and economic growth are required then under very mild assumptions market catastrophes are an unavoidable consequence. Third, if only local optimisation and economic growth are required then there is sufficient parametric freedom for macro-economic policy makers to steer an economy around catastrophes without overtly disrupting local optimisation.
NASA Astrophysics Data System (ADS)
Vanhuyse, Johan; Deckers, Elke; Jonckheere, Stijn; Pluymers, Bert; Desmet, Wim
2016-02-01
The Biot theory is commonly used for the simulation of the vibro-acoustic behaviour of poroelastic materials. However, it relies on a number of material parameters. These can be hard to characterize and require dedicated measurement setups, yielding a time-consuming and costly characterisation. This paper presents a characterisation method which is able to identify all material parameters using only an impedance tube. The method relies on the assumption that the sample is clamped within the tube, that the shear wave is excited and that the acoustic field is no longer one-dimensional. This paper numerically shows the potential of the developed method. It therefore performs a sensitivity analysis of the quantification parameters, i.e. reflection coefficients and relative pressures, and a parameter estimation using global optimisation methods. A 3-step procedure is developed and validated. It is shown that even in the presence of numerically simulated noise this procedure leads to a robust parameter estimation.
NASA Astrophysics Data System (ADS)
van Haveren, Rens; Ogryczak, Włodzimierz; Verduijn, Gerda M.; Keijzer, Marleen; Heijmen, Ben J. M.; Breedveld, Sebastiaan
2017-06-01
Previously, we have proposed Erasmus-iCycle, an algorithm for fully automated IMRT plan generation based on prioritised (lexicographic) multi-objective optimisation with the 2-phase ɛ-constraint (2pɛc) method. For each patient, the output of Erasmus-iCycle is a clinically favourable, Pareto optimal plan. The 2pɛc method uses a list of objective functions that are consecutively optimised, following a strict, user-defined prioritisation. The novel lexicographic reference point method (LRPM) is capable of solving multi-objective problems in a single optimisation, using a fuzzy prioritisation of the objectives. Trade-offs are made globally, aiming for large favourable gains for lower prioritised objectives at the cost of only slight degradations for higher prioritised objectives, or vice versa. In this study, the LRPM is validated for 15 head and neck cancer patients receiving bilateral neck irradiation. The generated plans using the LRPM are compared with the plans resulting from the 2pɛc method. Both methods were capable of automatically generating clinically relevant treatment plans for all patients. For some patients, the LRPM allowed large favourable gains in some treatment plan objectives at the cost of only small degradations for the others. Moreover, because of the applied single optimisation instead of multiple optimisations, the LRPM reduced the average computation time from 209.2 to 9.5 min, a speed-up factor of 22 relative to the 2pɛc method.
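The prioritised optimisation idea can be illustrated with a generic lexicographic ε-constraint loop: each objective is minimised in turn, and every higher-priority objective is then bounded near its achieved optimum before the next one is tackled. This is a toy sketch with made-up objectives and a relative slack parameter, not the Erasmus-iCycle 2pɛc implementation.

```python
import numpy as np
from scipy.optimize import minimize

# toy plan with 2 free variables; objectives listed in priority order
objectives = [
    lambda x: (x[0] - 1.0) ** 2,          # highest priority
    lambda x: (x[1] + 2.0) ** 2,
    lambda x: x[0] ** 2 + x[1] ** 2,      # lowest priority
]

def lexicographic_eps_constraint(objs, x0, slack=0.05):
    """Sequentially minimise each objective while bounding all
    higher-priority objectives near their achieved optima."""
    x, cons = np.asarray(x0, float), []
    for f in objs:
        res = minimize(f, x, method="SLSQP", constraints=cons)
        x = res.x
        bound = res.fun * (1.0 + slack) + 1e-9
        # inequality constraint: f(x) <= bound, written as bound - f(x) >= 0
        cons = cons + [{"type": "ineq", "fun": lambda x, f=f, b=bound: b - f(x)}]
    return x

print(lexicographic_eps_constraint(objectives, [0.0, 0.0]))
```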
Optimisation by hierarchical search
NASA Astrophysics Data System (ADS)
Zintchenko, Ilia; Hastings, Matthew; Troyer, Matthias
2015-03-01
Finding optimal values for a set of variables relative to a cost function gives rise to some of the hardest problems in physics, computer science and applied mathematics. Although often very simple in their formulation, these problems have a complex cost function landscape which prevents currently known algorithms from efficiently finding the global optimum. Countless techniques have been proposed to partially circumvent this problem, but an efficient method is yet to be found. We present a heuristic, general purpose approach to potentially improve the performance of conventional algorithms or special purpose hardware devices by optimising groups of variables in a hierarchical way. We apply this approach to problems in combinatorial optimisation, machine learning and other fields.
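The hierarchical idea, optimising groups of variables from coarse to fine, can be sketched on an Ising-style cost, with greedy single-variable sweeps standing in for the conventional inner solver. The cost function, block schedule and sweep counts are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
J = rng.normal(size=(n, n)); J = (J + J.T) / 2; np.fill_diagonal(J, 0)

def energy(s):
    return -0.5 * s @ J @ s              # Ising-style cost, s in {-1,+1}^n

def local_sweep(s, idx):
    """Greedy single-spin updates restricted to the block `idx`."""
    for i in idx:
        s[i] = 1 if J[i] @ s > 0 else -1  # align spin with its local field
    return s

def hierarchical_search(s, depth=4, sweeps=5):
    """Optimise blocks of variables from coarse (all) to fine (small groups)."""
    for level in range(depth):
        blocks = np.array_split(rng.permutation(n), 2 ** level)
        for idx in blocks:
            for _ in range(sweeps):
                s = local_sweep(s, idx)
    return s

s = rng.choice([-1, 1], size=n)
print(energy(s), energy(hierarchical_search(s.copy())))
```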
Van Geit, Werner; Gevaert, Michael; Chindemi, Giuseppe; Rössert, Christian; Courcol, Jean-Denis; Muller, Eilif B; Schürmann, Felix; Segev, Idan; Markram, Henry
2016-01-01
At many scales in neuroscience, appropriate mathematical models take the form of complex dynamical systems. Parameterizing such models to conform to the multitude of available experimental constraints is a global non-linear optimisation problem with a complex fitness landscape, requiring numerical techniques to find suitable approximate solutions. Stochastic optimisation approaches, such as evolutionary algorithms, have been shown to be effective, but often the setting up of such optimisations and the choice of a specific search algorithm and its parameters is non-trivial, requiring domain-specific expertise. Here we describe BluePyOpt, a Python package targeted at the broad neuroscience community to simplify this task. BluePyOpt is an extensible framework for data-driven model parameter optimisation that wraps and standardizes several existing open-source tools. It simplifies the task of creating and sharing these optimisations, and the associated techniques and knowledge. This is achieved by abstracting the optimisation and evaluation tasks into various reusable and flexible discrete elements according to established best-practices. Further, BluePyOpt provides methods for setting up both small- and large-scale optimisations on a variety of platforms, ranging from laptops to Linux clusters and cloud-based compute infrastructures. The versatility of the BluePyOpt framework is demonstrated by working through three representative neuroscience specific use cases.
A New Computational Technique for the Generation of Optimised Aircraft Trajectories
NASA Astrophysics Data System (ADS)
Chircop, Kenneth; Gardi, Alessandro; Zammit-Mangion, David; Sabatini, Roberto
2017-12-01
A new computational technique based on Pseudospectral Discretisation (PSD) and adaptive bisection ɛ-constraint methods is proposed to solve multi-objective aircraft trajectory optimisation problems formulated as nonlinear optimal control problems. This technique is applicable to a variety of next-generation avionics and Air Traffic Management (ATM) Decision Support Systems (DSS) for strategic and tactical replanning operations. These include the future Flight Management Systems (FMS) and the 4-Dimensional Trajectory (4DT) planning and intent negotiation/validation tools envisaged by SESAR and NextGen for a global implementation. In particular, after describing the PSD method, the adaptive bisection ɛ-constraint method is presented to allow an efficient solution of problems in which two or multiple performance indices are to be minimized simultaneously. Initial simulation case studies were performed adopting suitable aircraft dynamics models and addressing a classical vertical trajectory optimisation problem with two objectives simultaneously. Subsequently, a more advanced 4DT simulation case study is presented with a focus on representative ATM optimisation objectives in the Terminal Manoeuvring Area (TMA). The simulation results are analysed in-depth and corroborated by flight performance analysis, supporting the validity of the proposed computational techniques.
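The adaptive bisection ε-constraint strategy can be sketched on a toy bi-objective problem: f1 is minimised subject to f2 ≤ ε, and the ε range is bisected recursively wherever the front still changes. The objectives, tolerance and plain-bisection refinement rule are assumptions for illustration, not the paper's trajectory-optimisation implementation.

```python
import numpy as np
from scipy.optimize import minimize

# two competing performance indices for a toy "trajectory" variable x
f1 = lambda x: (x[0] - 1.0) ** 2 + 0.1 * x[1] ** 2
f2 = lambda x: (x[0] + 1.0) ** 2 + 0.1 * (x[1] - 1.0) ** 2

def eps_constraint_point(eps, x0):
    """Minimise f1 subject to f2 <= eps."""
    cons = [{"type": "ineq", "fun": lambda x: eps - f2(x)}]
    return minimize(f1, x0, method="SLSQP", constraints=cons)

def bisect_pareto(lo, hi, tol=0.1, x0=(0.0, 0.0)):
    """Bisect the epsilon range, refining only where the front varies."""
    pts, stack = {}, [(lo, hi)]
    while stack:
        a, b = stack.pop()
        for e in (a, b, 0.5 * (a + b)):
            if e not in pts:
                pts[e] = eps_constraint_point(e, x0).fun
        if b - a > tol and abs(pts[a] - pts[b]) > tol:
            m = 0.5 * (a + b)
            stack += [(a, m), (m, b)]
    return sorted(pts.items())

for eps, v in bisect_pareto(0.5, 8.0):
    print(f"f2 <= {eps:5.2f}  ->  f1 = {v:6.3f}")
```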
Genetic algorithm-based improved DOA estimation using fourth-order cumulants
NASA Astrophysics Data System (ADS)
Ahmed, Ammar; Tufail, Muhammad
2017-05-01
Genetic algorithm (GA)-based direction of arrival (DOA) estimation is proposed using fourth-order cumulants (FOC) and the ESPRIT principle, resulting in a Multiple Invariance Cumulant ESPRIT algorithm. In existing FOC ESPRIT formulations, only one invariance is utilised to estimate DOAs; the unused multiple invariances (MIs) should be exploited simultaneously to improve the estimation accuracy. In this paper, a fitness function based on a carefully designed cumulant matrix is developed which incorporates the MIs present in the sensor array, and better DOA estimation is achieved by minimising this fitness function. Moreover, the effectiveness of both Newton's method and the GA for this optimisation problem is illustrated. Simulation results show that the proposed algorithm provides improved estimation accuracy compared with existing algorithms, especially in the case of low SNR, a small number of snapshots, closely spaced sources and high signal and noise correlation. It is also observed that optimisation using Newton's method is more likely to converge to false local optima, yielding erroneous results, whereas GA-based optimisation is attractive due to its global optimisation capability.
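A compact real-coded GA of the kind used for such fitness minimisation might look as follows. The fitness here is a deterministic placeholder for the cumulant-matrix criterion (which would require array snapshots), so only the GA mechanics, tournament selection, arithmetic crossover, mutation and elitism, are meaningful.

```python
import numpy as np

rng = np.random.default_rng(2)
true_doas = np.deg2rad([18.0, 25.0])

def fitness(theta):
    """Placeholder for the cumulant-matrix criterion."""
    return float(np.sum((np.sort(theta) - true_doas) ** 2))

def real_ga(fit, dim, pop=40, gens=200, lo=0.0, hi=np.pi / 2):
    P = rng.uniform(lo, hi, (pop, dim))
    best, fbest = P[0].copy(), fit(P[0])
    for _ in range(gens):
        F = np.array([fit(p) for p in P])
        k = int(np.argmin(F))
        if F[k] < fbest:
            best, fbest = P[k].copy(), F[k]
        i, j = rng.integers(pop, size=(2, pop))
        parents = np.where((F[i] < F[j])[:, None], P[i], P[j])  # tournaments
        mates = parents[rng.permutation(pop)]
        a = rng.random((pop, 1))
        P = a * parents + (1 - a) * mates                       # crossover
        P += rng.normal(0, 0.01, P.shape) * (rng.random(P.shape) < 0.2)
        P = np.clip(P, lo, hi)
        P[0] = best                                             # elitism
    return best

print(np.rad2deg(np.sort(real_ga(fitness, dim=2))))
```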
NASA Astrophysics Data System (ADS)
Grimminck, Dennis L. A. G.; Vasa, Suresh K.; Meerts, W. Leo; Kentgens, P. M.
2011-06-01
A global optimisation scheme for phase-modulated proton homonuclear decoupling sequences in solid-state NMR is presented. Phase modulations, parameterised by DUMBO Fourier coefficients, were optimised using a Covariance Matrix Adaptation Evolution Strategy (CMA-ES) algorithm [1]. Our method, denoted EASY-GOING homonuclear decoupling, starts with featureless spectra and optimises proton-proton decoupling during either proton or carbon signal detection. On the one hand, our solutions closely resemble (e)DUMBO [2] for moderate sample spinning frequencies and medium radio-frequency (rf) field strengths. On the other hand, the EASY-GOING approach found a superior solution, achieving significantly better resolved proton spectra at a very high rf field strength of 680 kHz. [1] N. Hansen and A. Ostermeier, Evol. Comput. 9 (2001) 159-195. [2] B. Elena, G. de Paepe and L. Emsley, Chem. Phys. Lett. 398 (2004) 532-538.
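A simplified (μ, λ) evolution strategy, standing in for the CMA-ES actually used in the paper, can illustrate the optimisation of a vector of Fourier coefficients against a spectrum-quality objective. The objective below is synthetic; in the real scheme it would be computed from an acquired spectrum.

```python
import numpy as np

rng = np.random.default_rng(3)
target = rng.uniform(-1, 1, 12)          # stand-in "good" Fourier coefficients

def spectrum_quality(c):
    """Synthetic stand-in for a measured line-width/resolution score."""
    return float(np.sum((c - target) ** 2))

def mu_lambda_es(f, dim, mu=5, lam=20, sigma=0.3, gens=300):
    """Simplified (mu, lambda) evolution strategy; the paper uses CMA-ES."""
    mean = np.zeros(dim)
    for _ in range(gens):
        pop = mean + sigma * rng.normal(size=(lam, dim))
        scores = np.array([f(x) for x in pop])
        elite = pop[np.argsort(scores)[:mu]]
        mean = elite.mean(axis=0)          # recombination
        sigma *= 0.995                     # naive step-size decay
    return mean

best = mu_lambda_es(spectrum_quality, dim=12)
print(spectrum_quality(best))
```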
An Optimised System for Generating Multi-Resolution DTMs Using NASA MRO Datasets
NASA Astrophysics Data System (ADS)
Tao, Y.; Muller, J.-P.; Sidiropoulos, P.; Veitch-Michaelis, J.; Yershov, V.
2016-06-01
Within the EU FP-7 iMars project, a fully automated multi-resolution DTM processing chain, called Co-registration ASP-Gotcha Optimised (CASP-GO), has been developed, based on the open-source NASA Ames Stereo Pipeline (ASP). CASP-GO includes tiepoint-based multi-resolution image co-registration and an adaptive least-squares correlation-based sub-pixel refinement method called Gotcha. The implemented system guarantees global geo-referencing compliance with respect to HRSC (and thence to MOLA) and provides refined stereo matching completeness and accuracy based on the ASP normalised cross-correlation. We summarise issues discovered while experimenting with the open-source ASP DTM processing chain and introduce our new working solutions. These issues include global co-registration accuracy, de-noising, dealing with matching failures, matching confidence estimation, outlier definition and rejection schemes, various DTM artefacts, uncertainty estimation, and quality-efficiency trade-offs.
MacBean, Natasha; Maignan, Fabienne; Bacour, Cédric; Lewis, Philip; Peylin, Philippe; Guanter, Luis; Köhler, Philipp; Gómez-Dans, Jose; Disney, Mathias
2018-01-31
Accurate terrestrial biosphere model (TBM) simulations of gross carbon uptake (gross primary productivity - GPP) are essential for reliable future terrestrial carbon sink projections. However, uncertainties in TBM GPP estimates remain. Newly-available satellite-derived sun-induced chlorophyll fluorescence (SIF) data offer a promising direction for addressing this issue by constraining regional-to-global scale modelled GPP. Here, we use monthly 0.5° GOME-2 SIF data from 2007 to 2011 to optimise GPP parameters of the ORCHIDEE TBM. The optimisation reduces GPP magnitude across all vegetation types except C4 plants. Global mean annual GPP therefore decreases from 194 ± 57 PgC yr⁻¹ to 166 ± 10 PgC yr⁻¹, bringing the model more in line with an up-scaled flux tower estimate of 133 PgC yr⁻¹. The strongest reductions in GPP are seen in boreal forests: the result is a shift in global GPP distribution, with a ~50% increase in the tropical-to-boreal productivity ratio. The optimisation resulted in a greater reduction in GPP than similar ORCHIDEE parameter optimisation studies using satellite-derived NDVI from MODIS and eddy covariance measurements of net CO2 fluxes from the FLUXNET network. Our study shows that SIF data will be instrumental in constraining TBM GPP estimates, with a consequent improvement in global carbon cycle projections.
Storms, S M; Feltus, A; Barker, A R; Joly, M-A; Girard, M
2009-03-01
Measurement of somatropin charged variants by isoelectric focusing was replaced with capillary zone electrophoresis in the January 2006 European Pharmacopoeia Supplement 5.3, based on results from an interlaboratory collaborative study. Due to incompatibilities and method-robustness issues encountered prior to verification, a number of method parameters required optimisation. As the use of a diode array detector at 195 nm or 200 nm led to a loss of resolution, a variable wavelength detector using a 200 nm filter was employed. Improved injection repeatability was obtained by increasing the injection time and pressure, and changing the sample diluent from water to running buffer. Finally, definition of capillary pre-treatment and rinse procedures resulted in more consistent separations over time. Method verification data are presented demonstrating linearity, specificity, repeatability, intermediate precision, limit of quantitation, sample stability, solution stability, and robustness. Based on these experiments, several modifications to the current method have been recommended and incorporated into the European Pharmacopoeia to help improve method performance across laboratories globally.
NASA Astrophysics Data System (ADS)
Hoell, Simon; Omenzetter, Piotr
2018-02-01
To advance the concept of smart structures in large systems, such as wind turbines (WTs), it is desirable to be able to detect structural damage early while using minimal instrumentation. Data-driven vibration-based damage detection methods can be competitive in that respect because global vibrational responses encompass the entire structure. Multivariate damage sensitive features (DSFs) extracted from acceleration responses enable the detection of changes in a structure via statistical methods. However, even though such DSFs contain information about the structural state, they may not be optimised for the damage detection task. This paper addresses this shortcoming by exploring a DSF projection technique specialised for statistical structural damage detection. High-dimensional initial DSFs are projected onto a low-dimensional space for improved damage detection performance and a simultaneous reduction of the computational burden. The technique is based on sequential projection pursuit, where the projection vectors are optimised one by one using an advanced evolutionary strategy. The approach is applied to laboratory experiments with a small-scale WT blade under wind-like excitations. Autocorrelation function coefficients calculated from acceleration signals are employed as DSFs. The optimal numbers of projection vectors are identified with the help of a fast forward selection procedure. To benchmark the proposed method, selections of the original DSFs as well as principal component analysis scores derived from them are additionally investigated. The optimised DSFs are tested for damage detection on previously unseen data from the healthy state and a wide range of damage scenarios. It is demonstrated that using selected subsets of the initial and transformed DSFs improves damage detectability compared with the full set of features. Furthermore, superior results can be achieved by projecting the autocorrelation coefficients onto just a single optimised projection vector.
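The sequential projection pursuit step can be sketched as follows: unit projection vectors are found one at a time, each maximising a separation index between healthy and damaged feature clouds while staying orthogonal to earlier vectors. Plain random search stands in for the advanced evolution strategy, and the Fisher-like index and toy DSFs are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
# toy DSFs: 200 healthy and 200 "damaged" feature vectors (dim 20)
healthy = rng.normal(0.0, 1.0, (200, 20))
damaged = rng.normal(0.3, 1.2, (200, 20))

def detectability(v, X0, X1):
    """Fisher-like separation of the two classes along direction v."""
    p0, p1 = X0 @ v, X1 @ v
    return (p0.mean() - p1.mean()) ** 2 / (p0.var() + p1.var() + 1e-12)

def next_projection(X0, X1, prev, iters=3000, step=0.2):
    """Random-search stand-in for the paper's evolution strategy:
    find one unit vector, orthogonal to those already chosen."""
    v = rng.normal(size=X0.shape[1])
    for w in prev:                         # deflate: enforce orthogonality
        v -= (v @ w) * w
    v /= np.linalg.norm(v)
    best = detectability(v, X0, X1)
    for _ in range(iters):
        c = v + step * rng.normal(size=v.shape)
        for w in prev:
            c -= (c @ w) * w
        c /= np.linalg.norm(c)
        s = detectability(c, X0, X1)
        if s > best:
            v, best = c, s
    return v

vecs = []
for _ in range(2):                         # sequential projection pursuit
    vecs.append(next_projection(healthy, damaged, vecs))
print([round(float(detectability(v, healthy, damaged)), 3) for v in vecs])
```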
Ghosh, Ranadhir; Yearwood, John; Ghosh, Moumita; Bagirov, Adil
2006-06-01
In this paper we investigate a hybrid model based on the Discrete Gradient method and an evolutionary strategy for determining the weights in a feed-forward artificial neural network, and we discuss different variants of such hybrid models. The Discrete Gradient method has the advantage of being able to jump over many local minima and find very deep local minima. However, earlier research has shown that a good starting point for the Discrete Gradient method can improve the quality of the solution point. Evolutionary algorithms are well suited to global optimisation problems, but they suffer from long training times and are often unsuitable for real-world applications. For optimisation problems such as weight optimisation for ANNs in real-world applications, the dimensions are large and time complexity is critical; hence a hybrid model can be a suitable option. In this paper we propose different fusion strategies for hybrid models combining the evolutionary strategy with the Discrete Gradient method to obtain an optimal solution much more quickly. Three fusion strategies are discussed: a linear hybrid model, an iterative hybrid model and a restricted local search hybrid model. Comparative results on a range of standard datasets are provided for the different hybrid models.
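The linear hybrid model can be sketched generically: a coarse evolution strategy supplies a good starting point, which a local derivative-free optimiser then refines. Nelder-Mead is used below as a stand-in for the Discrete Gradient method, and the tiny network and data are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# tiny regression task for a 2-3-1 network; weights are flattened into w
X = rng.uniform(-1, 1, (64, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

def loss(w):
    W1, b1 = w[:6].reshape(3, 2), w[6:9]
    W2, b2 = w[9:12], w[12]
    h = np.tanh(X @ W1.T + b1)
    return float(np.mean((h @ W2 + b2 - y) ** 2))

def es_seed(f, dim, lam=30, gens=60, sigma=0.5):
    """Coarse evolutionary search for a good starting point."""
    mean = np.zeros(dim)
    for _ in range(gens):
        pop = mean + sigma * rng.normal(size=(lam, dim))
        mean = pop[np.argmin([f(p) for p in pop])]
        sigma *= 0.97
    return mean

w0 = es_seed(loss, 13)
# local refinement; Nelder-Mead stands in for the Discrete Gradient method
res = minimize(loss, w0, method="Nelder-Mead", options={"maxiter": 5000})
print(loss(w0), res.fun)
```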
NASA Astrophysics Data System (ADS)
Wang, Hui; Chen, Huansheng; Wu, Qizhong; Lin, Junmin; Chen, Xueshun; Xie, Xinwei; Wang, Rongrong; Tang, Xiao; Wang, Zifa
2017-08-01
The Global Nested Air Quality Prediction Modeling System (GNAQPMS) is the global version of the Nested Air Quality Prediction Modeling System (NAQPMS), a multi-scale chemical transport model used for air quality forecasting and atmospheric environmental research. In this study, we present the porting and optimisation of GNAQPMS on a second-generation Intel Xeon Phi processor, codenamed Knights Landing (KNL). Compared with the first-generation Xeon Phi coprocessor (codenamed Knights Corner, KNC), KNL has many new hardware features such as a bootable processor, high-performance in-package memory and ISA compatibility with Intel Xeon processors. In particular, we describe the five optimisations we applied to the key modules of GNAQPMS, including the CBM-Z gas-phase chemistry, advection, convection and wet deposition modules. These optimisations work well on both the KNL 7250 processor and the Intel Xeon E5-2697 V4 processor. They include (1) updating the pure Message Passing Interface (MPI) parallel mode to a hybrid MPI/OpenMP parallel mode in the emission, advection, convection and gas-phase chemistry modules; (2) fully employing the 512-bit-wide vector processing units (VPUs) on the KNL platform; (3) reducing unnecessary memory access to improve cache efficiency; (4) reducing thread local storage (TLS) in the CBM-Z gas-phase chemistry module to improve its OpenMP performance; and (5) changing the global communication from writing/reading interface files to MPI functions to improve performance and parallel scalability. These optimisations greatly improved GNAQPMS performance, and they also work well for the Intel Xeon Broadwell processor, specifically the E5-2697 v4. Compared with the baseline version of GNAQPMS, the optimised version was 3.51× faster on KNL and 2.77× faster on the CPU. Moreover, the optimised version ran at 26% lower average power on KNL than on the CPU. With the combined performance and energy improvement, the KNL platform was 37.5% more efficient in power consumption than the CPU platform. The optimisations also enabled much better parallel scalability on both the CPU cluster and the KNL cluster, which scaled to 40 CPU nodes and 30 KNL nodes with parallel efficiencies of 70.4% and 42.2%, respectively.
A new effective operator for the hybrid algorithm for solving global optimisation problems
NASA Astrophysics Data System (ADS)
Duc, Le Anh; Li, Kenli; Nguyen, Tien Trong; Yen, Vu Minh; Truong, Tung Khac
2018-04-01
Hybrid algorithms have recently been used to solve complex single-objective optimisation problems, the ultimate goal being to find an optimal global solution. Building on existing algorithms (HP_CRO, PSO, RCCRO), this study proposes a new hybrid algorithm called MPC (Mean-PSO-CRO), which utilises a new mean-search operator. By employing this operator, the proposed algorithm improves the search ability on areas of the solution space that the operators of previous algorithms do not explore, which helps it find better solutions than the other algorithms. Moreover, we propose two parameters for balancing local and global search, and for balancing between different types of local search. In addition, three versions of this operator, which use different constraints, are introduced. Experimental results on 23 benchmark functions used in previous works show that our framework can find better optimal or close-to-optimal solutions with faster convergence for most of the benchmark functions, especially the high-dimensional ones. Thus, the proposed algorithm is more effective at solving single-objective optimisation problems than the other existing algorithms.
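The flavour of a mean-search operator can be sketched as sampling a candidate around the mean of the current elite solutions, with a radius that shrinks as the search proceeds. The operator form, elite size, schedule and the Rastrigin test function are assumptions, not the paper's exact operator.

```python
import numpy as np

rng = np.random.default_rng(6)

def rastrigin(x):
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

def mean_search(pop, fit, k=5, scale=0.5):
    """Illustrative mean-search operator: sample a candidate around the
    mean of the k best solutions, scaled by their spread."""
    elite = pop[np.argsort(fit)[:k]]
    centre = elite.mean(axis=0)
    spread = elite.std(axis=0) + 1e-6
    return centre + scale * spread * rng.normal(size=centre.shape)

pop = rng.uniform(-5.12, 5.12, (40, 10))
fit = np.array([rastrigin(p) for p in pop])
for t in range(5000):
    # shrink the sampling radius from global toward local search
    cand = mean_search(pop, fit, scale=0.9 * (1 - t / 5000) + 0.1)
    f = rastrigin(cand)
    w = int(np.argmax(fit))
    if f < fit[w]:                      # replace the worst solution
        pop[w], fit[w] = cand, f
print(fit.min())
```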
NASA Astrophysics Data System (ADS)
Kaliszewski, M.; Mazuro, P.
2016-09-01
A Simulated Annealing optimisation method for the sealing piston ring geometry is tested. The aim of the optimisation is to develop a ring geometry that exerts the demanded pressure on the cylinder simply by being bent to fit it. A method for the FEM analysis of an arbitrary piston ring geometry is implemented in the ANSYS software. The demanded pressure function (based on formulae presented by A. Iskra) as well as the objective function are introduced. A geometry definition constructed from polynomials in a radial coordinate system is presented and discussed. A possible application of the Simulated Annealing method to the piston ring optimisation task is proposed and visualised, and difficulties leading to a possible lack of convergence are presented. An example of an unsuccessful optimisation performed in APDL is discussed, and a possible line of further improvement of the optimisation is proposed.
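A generic simulated annealing loop of the kind applied here is easy to sketch; the objective below is a toy stand-in for the FEM-based deviation between exerted and demanded ring pressure, and the cooling schedule is an assumption.

```python
import math
import random

random.seed(7)

def objective(x):
    """Toy stand-in for the FEM-based pressure-deviation objective."""
    return sum((xi - 1.0) ** 2 for xi in x) + math.sin(5 * x[0])

def simulated_annealing(f, x0, T0=1.0, cooling=0.995, steps=20000, step=0.1):
    x, fx = list(x0), f(x0)
    best, fbest, T = list(x0), fx, T0
    for _ in range(steps):
        cand = [xi + random.gauss(0, step) for xi in x]
        fc = f(cand)
        # accept better moves always, worse moves with Boltzmann probability
        if fc < fx or random.random() < math.exp(-(fc - fx) / T):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        T *= cooling
    return best, fbest

print(simulated_annealing(objective, [3.0, -2.0, 0.5]))
```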
Dynamic least-cost optimisation of wastewater system remedial works requirements.
Vojinovic, Z; Solomatine, D; Price, R K
2006-01-01
In recent years, there has been increasing concern about wastewater system failure and the identification of an optimal set of remedial works requirements. Several methodologies have been developed and applied in asset management activities by water companies worldwide, but often with limited success. A number of research projects have explored algorithms to optimise remedial works requirements, but mostly for drinking water supply systems; very limited work has been carried out for wastewater assets. The major deficiencies of commonly used methods lie in one or more of the following aspects: inadequate representation of system complexity, the incorporation of a dynamic model into the decision-making loop, the choice of an appropriate optimisation technique, and experience in applying that technique. This paper addresses these issues and discusses a new approach to the optimisation of wastewater system remedial works requirements. It is proposed that the search for optimal solutions is performed by a global optimisation tool (with various random search algorithms) while system performance is simulated by a hydrodynamic pipe network model. The work of assembling all required elements and developing appropriate interface protocols between the two tools, which decode potential remedial solutions into the pipe network model and calculate the corresponding scenario costs, is currently underway.
CAMELOT: Computational-Analytical Multi-fidElity Low-thrust Optimisation Toolbox
NASA Astrophysics Data System (ADS)
Di Carlo, Marilena; Romero Martin, Juan Manuel; Vasile, Massimiliano
2018-03-01
Computational-Analytical Multi-fidElity Low-thrust Optimisation Toolbox (CAMELOT) is a toolbox for the fast preliminary design and optimisation of low-thrust trajectories. It solves highly complex combinatorial problems to plan multi-target missions characterised by long spirals including different perturbations. To do so, CAMELOT implements a novel multi-fidelity approach combining analytical surrogate modelling and accurate computational estimations of the mission cost. Decisions are then made using two optimisation engines included in the toolbox, a single-objective global optimiser, and a combinatorial optimisation algorithm. CAMELOT has been applied to a variety of case studies: from the design of interplanetary trajectories to the optimal de-orbiting of space debris and from the deployment of constellations to on-orbit servicing. In this paper, the main elements of CAMELOT are described and two examples, solved using the toolbox, are presented.
Using Optimisation Techniques to Granulise Rough Set Partitions
NASA Astrophysics Data System (ADS)
Crossingham, Bodie; Marwala, Tshilidzi
2007-11-01
This paper presents an approach to optimising rough set partition sizes using various optimisation techniques. Three optimisation techniques are implemented to perform the granularisation process, namely a genetic algorithm (GA), hill climbing (HC) and simulated annealing (SA). These optimisation methods maximise the classification accuracy of the rough sets. The proposed rough set partition method is tested on a set of demographic properties of individuals obtained from the South African antenatal survey. The three techniques are compared in terms of their computational time, accuracy and number of rules produced when applied to the Human Immunodeficiency Virus (HIV) data set. The optimised methods' results are compared with those of a well-known non-optimised discretisation method, equal-width-bin (EWB) partitioning. The accuracies achieved after optimising the partitions using GA, HC and SA are 66.89%, 65.84% and 65.48% respectively, compared with an accuracy of 59.86% for EWB. In addition to providing the plausibilities of the estimated HIV status, the rough sets also provide linguistic rules describing how the demographic parameters drive the risk of HIV.
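The granularisation idea can be sketched with hill climbing over partition cut points, scoring each candidate partition by the majority-vote classification accuracy it induces. The one-attribute toy data, step size and acceptance rule are assumptions, not the paper's HIV-survey setup.

```python
import numpy as np

rng = np.random.default_rng(8)
# toy data: one continuous attribute, binary label correlated with it
x = rng.uniform(0, 1, 300)
y = (x + rng.normal(0, 0.15, 300) > 0.5).astype(int)

def accuracy(cuts):
    """Majority-vote accuracy of the partition induced by the cut points."""
    bins = np.digitize(x, np.sort(cuts))
    correct = 0
    for b in np.unique(bins):
        lab = y[bins == b]
        correct += max(lab.sum(), len(lab) - lab.sum())
    return correct / len(y)

def hill_climb(n_cuts=3, iters=2000, step=0.05):
    cuts = rng.uniform(0, 1, n_cuts)
    best = accuracy(cuts)
    for _ in range(iters):
        cand = np.clip(cuts + rng.normal(0, step, n_cuts), 0, 1)
        a = accuracy(cand)
        if a >= best:                     # accept ties to keep moving
            cuts, best = cand, a
    return np.sort(cuts), best

print(hill_climb())
```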
Guillaume, Y C; Peyrin, E
2000-03-06
A chemometric methodology is proposed to study the separation of seven p-hydroxybenzoic esters in reversed-phase liquid chromatography (RPLC). Fifteen experiments were found to be necessary to build a mathematical model linking a novel chromatographic response function (CRF) with the column temperature, the water fraction in the mobile phase and its flow rate. The CRF optimum was determined using a new algorithm based on Glover's taboo search (TS). A flow rate of 0.9 ml min⁻¹, a water fraction of 0.64 in the ACN-water mixture and a column temperature of 10 °C gave the most efficient separation conditions. The usefulness of TS was compared with pure random search (PRS) and simplex search (SS). As demonstrated by the calculations, the algorithm avoids entrapment in local minima and continues the search to give a near-optimal final solution. Unlike other methods of global optimisation, this procedure is generally applicable, easy to implement, derivative-free and conceptually simple, and could be used in the future for much more complex optimisation problems.
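A minimal taboo search over a discretised grid of separation conditions illustrates the mechanics: the best non-taboo neighbour is always taken, even if worse, and recently visited points are forbidden for a fixed tenure. The factor levels, the toy response surface standing in for the fitted CRF model, and the neighbourhood rule are assumptions.

```python
import itertools
import random

random.seed(9)

# discretised factor levels: (flow rate, water fraction, temperature)
grid = list(itertools.product(
    [0.7, 0.8, 0.9, 1.0, 1.1],          # ml/min
    [0.55, 0.60, 0.64, 0.70],           # water fraction
    [10, 15, 20, 25, 30],               # deg C
))

def crf(cond):
    """Toy response surface standing in for the fitted CRF model."""
    q, w, t = cond
    return -((q - 0.9) ** 2 + 4 * (w - 0.64) ** 2 + 0.001 * (t - 10) ** 2)

def neighbours(cond):
    """Conditions differing from `cond` in exactly one factor."""
    return [g for g in grid
            if sum(a != b for a, b in zip(g, cond)) == 1]

def taboo_search(start, iters=60, tenure=8):
    current, best, taboo = start, start, []
    for _ in range(iters):
        cands = [n for n in neighbours(current) if n not in taboo]
        current = max(cands, key=crf)      # best admissible neighbour
        taboo = (taboo + [current])[-tenure:]
        if crf(current) > crf(best):
            best = current
    return best

print(taboo_search(grid[0]))
```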
Quadratic Optimisation with One Quadratic Equality Constraint
2010-06-01
This report presents a theoretical framework for minimising a quadratic objective function subject to a quadratic equality constraint. The first part of the report gives a detailed algorithm which computes the global minimiser without calling special nonlinear optimisation solvers. The second part of the report shows how the developed theory can be applied to solve the time of arrival geolocation problem.
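In standard notation, the problem treated by the report can be written as the following program (the symbols A, b, C, d, e are my labels, assuming symmetric A and C):

```latex
\min_{x \in \mathbb{R}^n} \ \tfrac{1}{2} x^{\mathsf T} A x + b^{\mathsf T} x
\qquad \text{s.t.} \qquad
\tfrac{1}{2} x^{\mathsf T} C x + d^{\mathsf T} x + e = 0 .
```

Stationary points satisfy (A + λC)x = -(b + λd) for a Lagrange multiplier λ, so the global minimiser can be located by a one-dimensional search over λ — the usual Lagrangian route for such problems, which is presumably what allows the report's algorithm to avoid special nonlinear solvers.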
Optimising predictor domains for spatially coherent precipitation downscaling
NASA Astrophysics Data System (ADS)
Radanovics, S.; Vidal, J.-P.; Sauquet, E.; Ben Daoud, A.; Bontron, G.
2013-10-01
Statistical downscaling is widely used to overcome the scale gap between predictors from numerical weather prediction models or global circulation models and predictands like local precipitation, required for example for medium-term operational forecasts or climate change impact studies. The predictors are considered over a given spatial domain which is rarely optimised with respect to the target predictand location. In this study, an extended version of the growing rectangular domain algorithm is proposed to provide an ensemble of near-optimum predictor domains for a statistical downscaling method. This algorithm is applied to find five-member ensembles of near-optimum geopotential predictor domains for an analogue downscaling method for 608 individual target zones covering France. Results first show that very similar downscaling performances based on the continuous ranked probability score (CRPS) can be achieved by different predictor domains for any specific target zone, demonstrating the need for considering alternative domains in this context of high equifinality. A second result is the large diversity of optimised predictor domains over the country that questions the commonly made hypothesis of a common predictor domain for large areas. The domain centres are mainly distributed following the geographical location of the target location, but there are apparent differences between the windward and the lee side of mountain ridges. Moreover, domains for target zones located in southeastern France are centred more east and south than the ones for target locations on the same longitude. The size of the optimised domains tends to be larger in the southeastern part of the country, while domains with a very small meridional extent can be found in an east-west band around 47° N. Sensitivity experiments finally show that results are rather insensitive to the starting point of the optimisation algorithm except for zones located in the transition area north of this east-west band. Results also appear generally robust with respect to the archive length considered for the analogue method, except for zones with high interannual variability like in the Cévennes area. This study paves the way for defining regions with homogeneous geopotential predictor domains for precipitation downscaling over France, and therefore de facto ensuring the spatial coherence required for hydrological applications.
NASA Astrophysics Data System (ADS)
Fritzsche, Matthias; Kittel, Konstantin; Blankenburg, Alexander; Vajna, Sándor
2012-08-01
The focus of this paper is to present a method of multidisciplinary design optimisation based on the autogenetic design theory (ADT), whose methods are partially implemented in the optimisation software described here. The main thesis of the ADT is that biological evolution and the process of developing products are substantially similar, i.e. procedures from biological evolution can be transferred into product development. In order to fulfil requirements and boundary conditions of any kind (which may change at any time), both biological evolution and product development look for appropriate solution possibilities in a certain area and try to optimise those that are actually promising by varying parameters and combinations of these solutions. As the time necessary for multidisciplinary design optimisations is a critical aspect of product development, distributing the optimisation process to make effective use of idle computing capacity can reduce the optimisation time drastically. Finally, a practical example shows how ADT methods and distributed optimisation are applied to improve a product.
Asselineau, Charles-Alexis; Zapata, Jose; Pye, John
2015-06-01
A stochastic optimisation method adapted to illumination and radiative heat transfer problems involving Monte-Carlo ray-tracing is presented. A solar receiver shape optimisation case study illustrates the advantages of the method and its potential: efficient receivers are identified using a moderate computational cost.
Li, Jinyan; Fong, Simon; Wong, Raymond K; Millham, Richard; Wong, Kelvin K L
2017-06-28
Due to the high-dimensional characteristics of the datasets, we propose a new method based on the Wolf Search Algorithm (WSA) for optimising the feature selection problem. The proposed approach follows the natural strategy articulated by Charles Darwin: 'It is not the strongest of the species that survives, but the most adaptable.' This means that in the evolution of a swarm, the elitists are motivated to quickly obtain more and better resources. A memory function helps the proposed method avoid repeat searches of the worst positions in order to enhance the effectiveness of the search, while a binary strategy reduces the feature selection problem to an analogous function optimisation problem. Furthermore, a wrapper strategy combines these strengthened wolves with an extreme learning machine classifier to find a sub-dataset with a reasonable number of features that offers the maximum correctness of global classification models. Experimental results on six public high-dimensional bioinformatics datasets demonstrate that the proposed method can outperform some conventional feature selection methods by up to 29% in classification accuracy, and previous WSAs by up to 99.81% in computational time.
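A binary wolf-search-style wrapper can be sketched as follows: each wolf is a feature mask, the fitness is the accuracy of a simple classifier on the selected features (a nearest-centroid scorer stands in for the extreme learning machine), and a memory set records rejected positions so they are not revisited. The synthetic data, move rates and feature-count penalty are assumptions.

```python
import numpy as np

rng = np.random.default_rng(10)
# synthetic high-dimensional data: only the first 5 of 50 features matter
n, d = 200, 50
X = rng.normal(size=(n, d))
y = (X[:, :5].sum(axis=1) > 0).astype(int)

def score(mask):
    """Wrapper fitness: nearest-centroid accuracy on the selected features,
    lightly penalised by the number of features kept."""
    if not mask.any():
        return 0.0
    Z = X[:, mask]
    c0, c1 = Z[y == 0].mean(0), Z[y == 1].mean(0)
    pred = np.linalg.norm(Z - c1, axis=1) < np.linalg.norm(Z - c0, axis=1)
    return float((pred.astype(int) == y).mean()) - 0.002 * mask.sum()

def binary_wsa(n_wolves=12, iters=150, flip=0.1):
    wolves = rng.random((n_wolves, d)) < 0.5
    fit = np.array([score(w) for w in wolves])
    memory = set()                          # rejected positions, never revisited
    for _ in range(iters):
        best = wolves[np.argmax(fit)]
        for i in range(n_wolves):
            cand = wolves[i].copy()
            move = rng.random(d) < flip
            cand[move] = best[move]         # move toward the current best wolf
            cand ^= rng.random(d) < 0.02    # small random escape step
            key = cand.tobytes()
            if key in memory:
                continue
            s = score(cand)
            if s > fit[i]:
                wolves[i], fit[i] = cand, s
            else:
                memory.add(key)
    return wolves[np.argmax(fit)], fit.max()

mask, acc = binary_wsa()
print(acc, np.flatnonzero(mask)[:10])
```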
Prediction of road traffic death rate using neural networks optimised by genetic algorithm.
Jafari, Seyed Ali; Jahandideh, Sepideh; Jahandideh, Mina; Asadabadi, Ebrahim Barzegari
2015-01-01
Road traffic injuries (RTIs) are recognised as a main cause of public health problems at global, regional and national levels; prediction of the road traffic death rate will therefore be helpful in its management. On this basis, we used an artificial neural network model optimised through a genetic algorithm to predict mortality. In this study, a five-fold cross-validation procedure on a data set containing a total of 178 countries was used to verify the performance of the models. The best-fit model was selected according to the root mean square error (RMSE). The genetic algorithm, which has not previously been applied to mortality prediction to this extent, showed high performance: the lowest RMSE obtained was 0.0808. Such satisfactory results can be attributed to the use of the genetic algorithm as a powerful optimiser that selects the best input feature set to be fed into the neural networks. Seven factors were identified with high accuracy as the most influential on the road traffic mortality rate. The results show that our model is promising and may play a useful role in developing a better method for assessing the influence of road traffic mortality risk factors.
NASA Astrophysics Data System (ADS)
Xiao, Long; Liu, Xinggao; Ma, Liang; Zhang, Zeyin
2018-03-01
Dynamic optimisation problems with characteristic times, which arise widely in many areas, are one of the frontiers and hotspots of dynamic optimisation research. This paper considers a class of dynamic optimisation problems with constraints that depend on interior points, either fixed or variable, and presents a novel direct pseudospectral method using Legendre-Gauss (LG) collocation points for solving them. A formula for the state at the terminal time of each subdomain is derived, expressing it as a linear combination of the states at the LG points in the subdomain so as to avoid a complex nonlinear integral. The sensitivities of the states at the collocation points with respect to the variable characteristic times are derived to improve the efficiency of the method. Three well-known characteristic-time dynamic optimisation problems are solved and compared in detail against methods reported in the literature. The results show the effectiveness of the proposed method.
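Because the LG points exclude the subdomain's right endpoint, the terminal state there must be recovered from the collocated dynamics. Assuming the standard LG collocation relations, with τ_j the LG points on [-1, 1], w_j the Gauss weights and D the pseudospectral differentiation matrix:

```latex
x(t_f) \;=\; x(t_0) \;+\; \frac{t_f - t_0}{2} \sum_{j=1}^{N} w_j\, \dot{x}(\tau_j),
\qquad
\dot{x}(\tau_j) \;=\; \sum_{k=0}^{N} D_{jk}\, x(\tau_k),
```

so x(t_f) is a linear combination of the states at the collocation points, as the abstract notes, and no nonlinear integral needs to be evaluated.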
NASA Astrophysics Data System (ADS)
Nickless, A.; Rayner, P. J.; Erni, B.; Scholes, R. J.
2018-05-01
The design of an optimal network of atmospheric monitoring stations for the observation of carbon dioxide (CO2) concentrations can be obtained by applying an optimisation algorithm to a cost function based on minimising the posterior uncertainty in the CO2 fluxes obtained from a Bayesian inverse modelling solution. Two candidate optimisation methods were assessed: an evolutionary algorithm, the genetic algorithm (GA), and a deterministic algorithm, the incremental optimisation (IO) routine. This paper assessed the ability of the IO routine, in comparison with the more computationally demanding GA routine, to optimise the placement of a five-member network of CO2 monitoring sites located in South Africa. The comparison considered the reduction in uncertainty of the overall flux estimate, the spatial similarity of solutions, and computational requirements. Although the IO routine failed to find the solution with the globally maximal uncertainty reduction, its solution had only fractionally lower uncertainty reduction than the GA's, at only a quarter of the computational cost of the smallest GA configuration tested. The GA solution set showed more inconsistency if the number of iterations or the population size was small, and more so for a complex prior flux covariance matrix. When the GA completed with a sub-optimal solution, these solutions were similar in fitness to the best available solution. Two additional scenarios were considered with the objective of creating circumstances under which the GA might outperform the IO. The first scenario considered an established network, where the optimisation was required to add five stations to an existing five-member network. In the second scenario the optimisation was based only on the uncertainty reduction within a subregion of the domain. The GA was able to find a better solution than the IO under both scenarios, but with only a marginal improvement in the uncertainty reduction. These results suggest that, for the network design problem, resources would be better spent on improving the prior estimates of the flux uncertainties than on running a complex evolutionary optimisation algorithm. The authors recommend, if time and computational resources allow, that multiple optimisation techniques be used as part of a comprehensive suite of sensitivity tests when performing such an optimisation exercise. This will provide a selection of best solutions which can be ranked by their utility and practicality.
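The IO routine is essentially greedy forward selection. A toy sketch with a linear-Gaussian posterior shows the mechanics: at each step, the station whose addition most reduces the trace of the posterior flux covariance is added. The observation operator, prior covariance and noise level below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(11)
# toy setup: 30 candidate sites observing 12 flux regions
H = rng.normal(size=(30, 12))             # observation operator, one row per site
B = np.eye(12) * 4.0                      # prior flux error covariance
R = 1.0                                   # per-site observation error variance

def posterior_trace(sites):
    """Trace of the posterior flux covariance for a chosen site set."""
    if not sites:
        return np.trace(B)
    Hs = H[sites]
    A = np.linalg.inv(np.linalg.inv(B) + Hs.T @ Hs / R)
    return np.trace(A)

def incremental_design(k=5):
    """Greedy IO routine: add whichever site reduces uncertainty most."""
    chosen = []
    for _ in range(k):
        remaining = [s for s in range(H.shape[0]) if s not in chosen]
        chosen.append(min(remaining, key=lambda s: posterior_trace(chosen + [s])))
    return chosen

sites = incremental_design()
print(sites, round(posterior_trace(sites), 3), round(np.trace(B), 3))
```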
Optimisation in radiotherapy. III: Stochastic optimisation algorithms and conclusions.
Ebert, M
1997-12-01
This is the final article in a three-part examination of optimisation in radiotherapy. Previous articles established the bases and form of the radiotherapy optimisation problem and examined certain types of optimisation algorithm, namely those which perform some form of ordered search of the solution space (mathematical programming) and those which attempt to find the closest feasible solution to the inverse planning problem (deterministic inversion). The current paper examines algorithms which search the space of possible irradiation strategies by stochastic methods. The resulting iterative search methods move about the solution space by sampling random variates, which gradually become more constricted as the algorithm converges upon the optimal solution. This paper also discusses the implementation of optimisation in radiotherapy practice.
Lu, Jia-Yang; Cheung, Michael Lok-Man; Huang, Bao-Tian; Wu, Li-Li; Xie, Wen-Jia; Chen, Zhi-Jian; Li, De-Rui; Xie, Liang-Xi
2015-01-01
This study assesses the performance of a simple optimisation method for improving target coverage and organ-at-risk (OAR) sparing in intensity-modulated radiotherapy (IMRT) for cervical oesophageal cancer. For 20 selected patients, clinically acceptable original IMRT plans (Original plans) were created, and two optimisation methods were adopted to improve them: 1) a base dose function (BDF)-based method, in which the treatment plans were re-optimised based on the original plans, and 2) a dose-controlling structure (DCS)-based method, in which the original plans were re-optimised by assigning additional constraints for hot and cold spots. The Original, BDF-based and DCS-based plans were compared with regard to target dose homogeneity, conformity, OAR sparing, planning time and monitor units (MUs). Dosimetric verifications were performed and delivery times were recorded for the BDF-based and DCS-based plans. The BDF-based plans provided significantly superior dose homogeneity and conformity compared with both the DCS-based and Original plans. The BDF-based method further reduced the doses delivered to the OARs by approximately 1-3%. The re-optimisation time was reduced by approximately 28%, but the MUs and delivery time were slightly increased. All verification tests were passed and no significant differences were found. The BDF-based method for the optimisation of IMRT for cervical oesophageal cancer can achieve significantly better dose distributions with better planning efficiency at the expense of slightly more MUs.
NASA Astrophysics Data System (ADS)
Fouladi, Ehsan; Mojallali, Hamed
2018-01-01
In this paper, an adaptive backstepping controller is tuned to synchronise two chaotic Colpitts oscillators in a master-slave configuration. The parameters of the controller are determined using the shark smell optimisation (SSO) algorithm. Numerical results are presented and compared with those of the particle swarm optimisation (PSO) algorithm. Simulation results show better performance in terms of accuracy and convergence for the proposed optimised method compared with the PSO-optimised controller and with a non-optimised backstepping controller.
Cultural-based particle swarm for dynamic optimisation problems
NASA Astrophysics Data System (ADS)
Daneshyari, Moayed; Yen, Gary G.
2012-07-01
Many practical optimisation problems involve uncertainties, and a significant number of these belong to the dynamic optimisation problem (DOP) category, in which the fitness function changes through time. In this study, we propose a cultural-based particle swarm optimisation (PSO) to solve DOPs. A cultural framework is adopted which incorporates the required information from the PSO into five sections of the belief space, namely situational, temporal, domain, normative and spatial knowledge. The stored information is used to detect changes in the environment and assists the response to change through diversity-based repulsion among particles and migration among swarms in the population space; it also helps in selecting the leading particles at three levels: personal, swarm and global. Comparison of the proposed heuristic on several difficult dynamic benchmark problems demonstrates better or equal performance with respect to most of the other selected state-of-the-art dynamic PSO heuristics.
A management and optimisation model for water supply planning in water deficit areas
NASA Astrophysics Data System (ADS)
Molinos-Senante, María; Hernández-Sancho, Francesc; Mocholí-Arce, Manuel; Sala-Garrido, Ramón
2014-07-01
The integrated water resources management approach has proven to be a suitable option for efficient, equitable and sustainable water management. In water-poor regions experiencing acute and/or chronic shortages, optimisation techniques are a useful tool for supporting the decision process of water allocation. In order to maximise the value of water use, an optimisation model was developed which involves multiple supply sources (conventional and non-conventional) and multiple users. Penalties, representing monetary losses in the event of an unfulfilled water demand, have been incorporated into the objective function. This model represents a novel approach which considers water distribution efficiency and the physical connections between water supply and demand points. Subsequent empirical testing using data from a Spanish Mediterranean river basin demonstrated the usefulness of the global optimisation model to solve existing water imbalances at the river basin level.
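A hedged sketch of such an allocation model as a linear programme: deliveries from each source to each user earn a benefit, unmet demand incurs a penalty, and source capacities bound the deliveries. The toy network, coefficients and the use of scipy's linprog are illustrative assumptions, not the paper's model.

```python
import numpy as np
from scipy.optimize import linprog

# toy network: 2 sources (reservoir, reused water) x 3 users
supply_cap = [60.0, 25.0]                  # available volume per source
demand     = [30.0, 40.0, 20.0]            # user demands
benefit    = [[9, 7, 5],                   # value per unit delivered
              [6, 6, 4]]                   # (reused water is worth less)
penalty    = [12.0, 8.0, 5.0]              # cost per unit of unmet demand

nS, nU = 2, 3
# variables: deliveries x[s,u] (flattened), then shortfalls d[u]
c = np.concatenate([-np.ravel(benefit), penalty])  # minimise -benefit + penalty

A_ub, b_ub = [], []
for s in range(nS):                        # source capacity rows
    row = np.zeros(nS * nU + nU)
    row[s * nU:(s + 1) * nU] = 1.0
    A_ub.append(row); b_ub.append(supply_cap[s])

A_eq, b_eq = [], []
for u in range(nU):                        # delivered + shortfall = demand
    row = np.zeros(nS * nU + nU)
    row[u::nU][:nS] = 1.0                  # picks x[0,u] and x[1,u]
    row[nS * nU + u] = 1.0
    A_eq.append(row); b_eq.append(demand[u])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.x[:nS * nU].reshape(nS, nU))     # optimal deliveries
print(res.x[nS * nU:])                     # unmet demand per user
```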
A New Multiconstraint Method for Determining the Optimal Cable Stresses in Cable-Stayed Bridges
Asgari, B.; Osman, S. A.; Adnan, A.
2014-01-01
Cable-stayed bridges are one of the most popular types of long-span bridges. The structural behaviour of cable-stayed bridges is sensitive to the load distribution between the girder, pylons, and cables. The determination of pretensioning cable stresses is critical in the cable-stayed bridge design procedure. By finding the optimum stresses in cables, the load and moment distribution of the bridge can be improved. In recent years, different research works have studied iterative and modern methods to find optimum stresses of cables. However, most of the proposed methods have limitations in optimising the structural performance of cable-stayed bridges. This paper presents a multiconstraint optimisation method to specify the optimum cable forces in cable-stayed bridges. The proposed optimisation method produces less bending moments and stresses in the bridge members and requires shorter simulation time than other proposed methods. The results of the comparative study show that the proposed method is more successful in restricting the deck and pylon displacements and providing uniform deck moment distribution than the unit load method (ULM). The final design of cable-stayed bridges can be optimised considerably through the proposed multiconstraint optimisation method. PMID:25050400
Optimisation of logistics processes of energy grass collection
NASA Astrophysics Data System (ADS)
Bányai, Tamás
2010-05-01
The collection of energy grass is a logistics-intensive process [1]. The optimal design and control of transportation and collection subprocesses is a critical point of the supply chain. To avoid irresponsible decisions based only on experience and intuition, the optimisation and analysis of collection processes using mathematical models and methods is the scientifically sound way. Within the frame of this work, the author focuses on the optimisation possibilities of the collection processes, especially from the point of view of transportation and related warehousing operations. The optimisation methods developed in the literature [2] take into account harvesting processes, county-specific yields, transportation distances, erosion constraints, machinery specifications and other key variables, but the possibility of multiple collection points and multi-level collection has not been taken into consideration. The possible areas of use of energy grass are very wide (energetic use, biogas and bio-alcohol production, paper and textile industry, industrial fibre material, foddering purposes, biological soil protection [3], etc.), so not only a single-level but also a multi-level collection system with several collection and production facilities has to be taken into consideration. The input parameters of the optimisation problem are the following: total amount of energy grass to be harvested in each region; specific facility costs of collection, warehousing and production units; specific costs of transportation resources; pre-scheduling of the harvesting process; specific transportation and warehousing costs; and pre-scheduling of the processing of energy grass at each facility (exclusive warehousing). The model takes into consideration the following assumptions: (1) cooperative relations among processing and production facilities; (2) capacity constraints are not ignored; (3) the cost function of transportation is non-linear; (4) driver conditions are ignored. The objective function of the optimisation is the maximisation of profit, i.e. the maximisation of the difference between revenue and cost; it trades off the income of the assigned transportation demands against the logistics costs. The constraints are the following: (1) the free capacity of the assigned transportation resource is not less than the requested capacity of the transportation demand; (2) the calculated arrival time of the transportation resource at the harvesting place is not later than its requested arrival time; (3) the calculated arrival time of the transportation demand at the processing and production facility is not later than the requested arrival time; (4) one transportation demand is assigned to one transportation resource, and vice versa. The decision variables of the optimisation problem are the set of scheduling variables and the assignment of resources to transportation demands. The evaluation parameters of the optimised system are the following: total cost of the collection process; utilisation of transportation resources and warehouses; and efficiency of production and/or processing facilities. The multidimensional heuristic optimisation method is based on a genetic algorithm, while the routing sequence of the optimisation works on the basis of an ant colony algorithm.
The optimal routes are calculated with the aid of the ant colony algorithm as a subroutine of the global optimisation method, and the optimal assignment is given by the genetic algorithm. An important part of the mathematical method is the sensitivity analysis of the objective function, which shows the degree of influence of the different input parameters. Acknowledgements: This research was implemented within the frame of the project entitled "Development and operation of the Technology and Knowledge Transfer Centre of the University of Miskolc", with support from the European Union and co-funding from the European Social Fund. References: [1] P. R. Daniel: The Economics of Harvesting and Transporting Corn Stover for Conversion to Fuel Ethanol: A Case Study for Minnesota. University of Minnesota, Department of Applied Economics, 2006. http://ideas.repec.org/p/ags/umaesp/14213.html [2] T. G. Douglas, J. Brendan, D. Erin & V.-D. Becca: Energy and Chemicals from Native Grasses: Production, Transportation and Processing Technologies Considered in the Northern Great Plains. University of Minnesota, Department of Applied Economics, 2006. http://ideas.repec.org/p/ags/umaesp/13838.html [3] Homepage of energy grass: www.energiafu.hu
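To make the hybrid concrete, the following is a minimal sketch (not the author's implementation) of an ant-colony routing subroutine of the kind described above, which a genetic algorithm could call to price each candidate assignment of demands to resources. The parameter values, the dict-of-dicts distance matrix and the simplified pheromone update are all assumptions for illustration.

import random

def aco_route_cost(stops, dist, n_ants=20, n_iter=50, alpha=1.0, beta=2.0, rho=0.1):
    """Approximate cost of the cheapest visiting order of 'stops' by ant colony search."""
    tau = {(i, j): 1.0 for i in stops for j in stops if i != j}  # pheromone trails
    best = float("inf")
    for _ in range(n_iter):
        for _ in range(n_ants):
            unvisited = set(stops)
            cur = random.choice(stops)
            unvisited.discard(cur)
            tour = [cur]
            while unvisited:
                # random proportional rule: weight by pheromone and closeness
                w = {j: tau[(cur, j)] ** alpha * (1.0 / dist[cur][j]) ** beta
                     for j in unvisited}
                cur = random.choices(list(w), weights=list(w.values()))[0]
                unvisited.discard(cur)
                tour.append(cur)
            cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
            best = min(best, cost)
            for a, b in zip(tour, tour[1:]):
                # simplified update: evaporate and reinforce only the edges just used
                tau[(a, b)] = (1.0 - rho) * tau[(a, b)] + 1.0 / cost
    return best

In the outer genetic algorithm, the fitness of a chromosome (an assignment of demands to resources) would then be the revenue minus the sum of such routing costs over all resources.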
Comparative and Quantitative Global Proteomics Approaches: An Overview
Deracinois, Barbara; Flahaut, Christophe; Duban-Deweer, Sophie; Karamanos, Yannis
2013-01-01
Proteomics has become a key tool for the study of biological systems. Comparing two different physiological states makes it possible to unravel the cellular and molecular mechanisms involved in a biological process. Proteomics can confirm the presence of proteins suggested by their mRNA content and provides a direct measure of the quantity present in a cell. Both global and targeted proteomics strategies can be applied. Targeted strategies limit the number of features to be monitored and then optimise the methods to obtain the highest sensitivity and throughput for large numbers of samples. The advantage of global strategies is that no hypothesis is required other than a measurable difference in one or more protein species between the samples. Global proteomics methods attempt to separate, quantify and identify all the proteins from a given sample. This review highlights only the different techniques for the separation and quantification of proteins and peptides, with a view to comparative and quantitative global proteomics analysis. In-gel and off-gel quantification of proteins is discussed, as well as the corresponding mass spectrometry technology. The overview focuses on the most widespread techniques, while keeping in mind that the approaches are modular and often overlap. PMID:28250403
NASA Astrophysics Data System (ADS)
Rayhana, N.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.; Sazli, M.; Yahya, Z. R.
2017-09-01
This study presents the application of an optimisation method to reduce the warpage of a side arm part. Autodesk Moldflow Insight software was used to analyse the warpage. A design of experiments (DOE) for response surface methodology (RSM) was constructed, and particle swarm optimisation (PSO) was applied to the regression equation obtained from the RSM. The optimisation yields processing parameters that minimise warpage. Mould temperature, melt temperature, packing pressure, packing time and cooling time were selected as the variable parameters; the selection was based on the factors reported by previous researchers as most significantly affecting warpage. The results show that warpage was improved by 28.16% with RSM and 28.17% with PSO. Since the improvement of PSO over RSM is only 0.01%, optimisation using RSM alone is already sufficient to give the best combination of parameters and an optimum warpage value for the side arm part. The most significant parameter affecting warpage is packing pressure.
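As an illustration of this workflow, here is a minimal sketch of PSO minimising a generic second-order RSM surface; the coefficient values, bounds, swarm size and inertia/acceleration constants are illustrative assumptions, not the paper's.

import numpy as np

def warpage_rsm(x):
    # generic quadratic response surface y = b0 + b.x + x'Bx (illustrative coefficients)
    b0, b = 1.2, np.array([-0.3, 0.1, -0.2, 0.05, -0.1])
    B = 0.05 * np.eye(5)
    return b0 + b @ x + x @ B @ x

rng = np.random.default_rng(0)
lo, hi = np.zeros(5), np.ones(5)                  # normalised parameter ranges
x = rng.uniform(lo, hi, (30, 5))                  # swarm positions
v = np.zeros_like(x)                              # swarm velocities
pbest, pval = x.copy(), np.array([warpage_rsm(p) for p in x])
g = pbest[pval.argmin()]                          # global best
for _ in range(200):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
    x = np.clip(x + v, lo, hi)
    f = np.array([warpage_rsm(p) for p in x])
    improved = f < pval
    pbest[improved], pval[improved] = x[improved], f[improved]
    g = pbest[pval.argmin()]
print(g, pval.min())   # optimised settings and predicted minimum warpage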
McEvoy, Eamon; Donegan, Sheila; Power, Joe; Altria, Kevin
2007-05-09
A rapid and efficient oil-in-water microemulsion liquid chromatographic (MELC) method has been optimised and validated for the analysis of paracetamol in a suppository formulation. Excellent linearity, accuracy, precision and assay results were obtained. Lengthy sample pre-treatment/extraction procedures were eliminated owing to the solubilising power of the microemulsion, and rapid analysis times were achieved. The method was optimised to achieve rapid analysis times and relatively high peak efficiencies. A standard microemulsion composition of 33 g SDS, 66 g butan-1-ol and 8 g n-octane in 1 L of 0.05% TFA, modified with acetonitrile, has been shown to be suitable for the rapid analysis of paracetamol in highly hydrophobic preparations under isocratic conditions. Validated assay results and the overall analysis time of the optimised method were compared to British Pharmacopoeia reference methods. Sample preparation and analysis times for the MELC analysis of paracetamol in a suppository were extremely short compared to the reference method, and similar assay results were achieved. A gradient MELC method using the same microemulsion has been optimised for the resolution of paracetamol and five of its related substances in approximately 7 min.
A Method for Decentralised Optimisation in Networks
NASA Astrophysics Data System (ADS)
Saramäki, Jari
2005-06-01
We outline a method for distributed Monte Carlo optimisation of computational problems in networks of agents, such as peer-to-peer networks of computers. The optimisation and messaging procedures are inspired by gossip protocols and epidemic data dissemination, and are decentralised, i.e. no central overseer is required. In the outlined method, each agent follows simple local rules and searches for better solutions to the optimisation problem by Monte Carlo trials, as well as by querying other agents in its local neighbourhood. With a proper network topology, good solutions spread rapidly through the network for further improvement. Furthermore, the system retains its functionality even in realistic settings where agents are randomly switched on and off.
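A minimal sketch of the scheme as described, assuming a ring topology and a toy objective: each randomly activated agent makes a Monte Carlo trial on its own solution and adopts a queried neighbour's solution when it is better.

import random

def f(x):                       # objective to minimise (illustrative)
    return sum(xi * xi for xi in x)

n, dim = 50, 10
neighbours = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}   # ring topology
state = {i: [random.uniform(-5, 5) for _ in range(dim)] for i in range(n)}

for step in range(2000):
    i = random.randrange(n)                      # an agent wakes up
    trial = [xi + random.gauss(0, 0.1) for xi in state[i]]
    if f(trial) < f(state[i]):                   # local Monte Carlo trial
        state[i] = trial
    j = random.choice(neighbours[i])             # gossip query to a neighbour
    if f(state[j]) < f(state[i]):
        state[i] = list(state[j])                # adopt the better solution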
Higton, D M
2001-01-01
An improvement to the procedure for the rapid optimisation of mass spectrometry (PROMS), for the development of multiple reaction monitoring (MRM) methods for quantitative bioanalytical liquid chromatography/tandem mass spectrometry (LC/MS/MS), is presented. PROMS is an automated protocol that uses flow-injection analysis (FIA) and AppleScripts to create methods and acquire the data for optimisation. The protocol determines the optimum orifice potential and the MRM conditions for each compound, and finally creates the MRM methods needed for sample analysis. The sensitivities of the MRM methods created by PROMS approach those created manually. MRM method development using PROMS currently takes less than three minutes per compound, compared to at least fifteen minutes manually. To further enhance throughput, approaches to MRM optimisation using one injection per compound, two injections per pool of five compounds and one injection per pool of five compounds have been investigated. No significant difference in the optimised instrumental parameters of the MRM methods was found between the original PROMS approach and these new methods, which are up to ten times faster. The time taken for an AppleScript to determine the optimum conditions and build the MRM methods is the same with all approaches. Copyright 2001 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Zhang, Langwen; Xie, Wei; Wang, Jingcheng
2017-11-01
In this work, the synthesis of robust distributed model predictive control (MPC) is presented for a class of linear systems subject to structured time-varying uncertainties. By decomposing a global system into smaller-dimensional subsystems, a set of distributed MPC controllers, instead of a centralised controller, is designed. To ensure the robust stability of the closed-loop system with respect to model uncertainties, distributed state feedback laws are obtained by solving a min-max optimisation problem. The design of robust distributed MPC is then transformed into a minimisation problem with linear matrix inequality constraints. An iterative online algorithm with an adjustable maximum number of iterations is proposed to coordinate the distributed controllers to achieve global performance. The simulation results show the effectiveness of the proposed robust distributed MPC algorithm.
Optimising operational amplifiers by evolutionary algorithms and gm/Id method
NASA Astrophysics Data System (ADS)
Tlelo-Cuautle, E.; Sanabria-Borbon, A. C.
2016-10-01
The evolutionary algorithm known as the non-dominated sorting genetic algorithm (NSGA-II) is applied herein to the optimisation of operational transconductance amplifiers. NSGA-II is accelerated by applying the gm/Id method to estimate reduced search spaces associated with the widths (W) and lengths (L) of the metal-oxide-semiconductor field-effect transistors (MOSFETs) and to guarantee appropriate bias conditions. In addition, we introduce an integer encoding for the W/L sizes of the MOSFETs to avoid a post-processing step for rounding off their values to multiples of the integrated circuit fabrication technology. Finally, from the feasible solutions generated by NSGA-II, we introduce a second optimisation stage to guarantee that the final feasible W/L solutions support process, voltage and temperature (PVT) variations. The optimisation results lead us to conclude that the gm/Id method and integer encoding are quite useful for accelerating the convergence of NSGA-II, while the second optimisation stage guarantees the robustness of the feasible solutions to PVT variations.
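The integer-encoding idea can be illustrated with a short sketch: chromosome genes are integer multiples of the fabrication grid, so decoded W/L values never need rounding. The grid value and the example chromosome below are assumptions, not values from the paper.

GRID = 0.18e-6                      # assumed technology grid, metres

def decode(genes):
    """Map integer genes to physical (W, L) values on the fabrication grid."""
    return [(w_int * GRID, l_int * GRID) for w_int, l_int in genes]

chromosome = [(12, 2), (40, 2), (8, 4)]      # (W, L) multiples, one pair per MOSFET
sizes = decode(chromosome)                   # e.g. W = 2.16e-6 m, L = 0.36e-6 m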
Multi-Optimisation Consensus Clustering
NASA Astrophysics Data System (ADS)
Li, Jian; Swift, Stephen; Liu, Xiaohui
Ensemble Clustering has been developed to provide an alternative way of obtaining more stable and accurate clustering results. It aims to avoid the biases of individual clustering algorithms. However, it is still a challenge to develop an efficient and robust method for Ensemble Clustering. Based on an existing ensemble clustering method, Consensus Clustering (CC), this paper introduces an advanced Consensus Clustering algorithm called Multi-Optimisation Consensus Clustering (MOCC), which utilises an optimised Agreement Separation criterion and a Multi-Optimisation framework to improve the performance of CC. Fifteen different data sets are used for evaluating the performance of MOCC. The results reveal that MOCC can generate more accurate clustering results than the original CC algorithm.
NASA Astrophysics Data System (ADS)
Chu, Xiaoyu; Zhang, Jingrui; Lu, Shan; Zhang, Yao; Sun, Yue
2016-11-01
This paper presents a trajectory planning algorithm to optimise the collision avoidance of a chasing spacecraft operating in an ultra-close proximity to a failed satellite. The complex configuration and the tumbling motion of the failed satellite are considered. The two-spacecraft rendezvous dynamics are formulated based on the target body frame, and the collision avoidance constraints are detailed, particularly concerning the uncertainties. An optimisation solution of the approaching problem is generated using the Gauss pseudospectral method. A closed-loop control is used to track the optimised trajectory. Numerical results are provided to demonstrate the effectiveness of the proposed algorithms.
Lévy flight artificial bee colony algorithm
NASA Astrophysics Data System (ADS)
Sharma, Harish; Bansal, Jagdish Chand; Arya, K. V.; Yang, Xin-She
2016-08-01
The artificial bee colony (ABC) optimisation algorithm is a relatively simple and recent population-based probabilistic approach for global optimisation. The solution search equation of ABC is significantly influenced by a random quantity which helps exploration at the cost of exploitation of the search space. In ABC, there is a high chance of skipping the true solution due to large step sizes. In order to balance diversity and convergence in ABC, a Lévy flight inspired search strategy is proposed and integrated with ABC. The proposed strategy, named Lévy Flight ABC (LFABC), provides both local and global search capability simultaneously, achieved by tuning the Lévy flight parameters and thus automatically tuning the step sizes. In LFABC, new solutions are generated around the best solution, which enhances the exploitation capability of ABC. Furthermore, to improve the exploration capability, the number of scout bees is increased. Experiments on 20 test problems of different complexities and five real-world engineering optimisation problems show that the proposed strategy outperforms the basic ABC and recent variants of ABC, namely Gbest-guided ABC, best-so-far ABC and modified ABC, in most of the experiments.
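For reference, Lévy-flight steps are commonly drawn with Mantegna's algorithm; the sketch below shows such a step and an assumed form of the update around the best solution (the paper's exact update equation is not reproduced here, and the scale factor is illustrative).

import math, random

def levy_step(beta=1.5):
    """Draw one heavy-tailed step via Mantegna's algorithm."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u, v = random.gauss(0, sigma_u), random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def lfabc_update(x_i, x_best, scale=0.01):
    # new food source generated around the current best, as the abstract describes
    return [xb + scale * levy_step() * (xb - xi) for xi, xb in zip(x_i, x_best)]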
Modelling the protocol stack in NCS with deterministic and stochastic Petri net
NASA Astrophysics Data System (ADS)
Hui, Chen; Chunjie, Zhou; Weifeng, Zhu
2011-06-01
The protocol stack is the basis of networked control systems (NCS). Full or partial reconfiguration of the protocol stack offers both optimised communication services and improved system performance. Nowadays, field testing is unrealistic for determining the performance of a reconfigurable protocol stack, and the Petri net formal description technique offers the best combination of intuitive representation, tool support and analytical capabilities. Traditionally, separation between the different layers of the OSI model has been common practice. Nevertheless, such a layered modelling and analysis framework of the protocol stack precludes global optimisation for protocol reconfiguration. In this article, we propose a general modelling and analysis framework for NCS based on the cross-layer concept, which establishes an efficient system scheduling model by abstracting the time constraints, the task interrelations, and the processor and bus sub-models from the upper and lower layers (application, data link and physical layers). Cross-layer design, based on information sharing between protocol layers, can help to overcome this lack of global optimisation. To illustrate the framework, we take the controller area network (CAN) as a case study. The simulation results of the deterministic and stochastic Petri net (DSPN) model can help us adjust the message scheduling scheme and obtain better system performance.
Stochastic optimisation of water allocation on a global scale
NASA Astrophysics Data System (ADS)
Schmitz, Oliver; Straatsma, Menno; Karssenberg, Derek; Bierkens, Marc F. P.
2014-05-01
Climate change, an increasing population and further economic developments are expected to increase water scarcity in many regions of the world. Optimal water management strategies are required to minimise the gap between water supply and domestic, industrial and agricultural water demand. A crucial aspect of water allocation is the spatial scale of optimisation. Blue water supply peaks in the upstream parts of large catchments, whereas demands are often largest in the industrialised downstream parts. Two extremes exist in water allocation: (i) 'first come, first served', which allows upstream water demands to be fulfilled without consideration of downstream demands, and (ii) 'all for one, one for all', which satisfies water allocation over the whole catchment. In practice, water treaties govern intermediate solutions. The objective of this study is to determine the effect of these two end members on water allocation optimisation with respect to water scarcity. We conduct this study on a global scale with the year 2100 as the temporal horizon. Water supply is calculated using the hydrological model PCR-GLOBWB, operating at a 5 arcminute resolution and a daily time step. PCR-GLOBWB is forced with temperature and precipitation fields from the HadGEM2-ES global circulation model, which participated in the latest Coupled Model Intercomparison Project (CMIP5). Water demands are calculated for representative concentration pathway 6.0 (RCP 6.0) and shared socio-economic pathway scenario 2 (SSP2). To enable fast computation of the optimisation, we developed a hydrologically correct network of 1800 basin segments with an average size of 100 000 square kilometres; the maximum number of nodes in a network was 140, for the Amazon Basin. Water demands and supplies are aggregated to cubic kilometres per month per segment. A new open-source implementation was developed for the stochastic optimisation of the water allocation: we apply a genetic algorithm to each segment to estimate the set of parameters that distributes the water supply over its nodes. We use the Python programming language and a flexible software architecture that allows us to straightforwardly (1) exchange the process description of the nodes so that different water allocation schemes can be tested, (2) exchange the objective function, (3) apply the optimisation either to the whole catchment or to different sub-levels, and (4) use multi-core CPUs concurrently, thereby reducing computation time. We demonstrate the application of the scientific workflow to the model outputs of PCR-GLOBWB and present first results on how water scarcity depends on the choice between the two extremes in water allocation.
Automated model optimisation using the Cylc workflow engine (Cyclops v1.0)
NASA Astrophysics Data System (ADS)
Gorman, Richard M.; Oliver, Hilary J.
2018-06-01
Most geophysical models include many parameters that are not fully determined by theory and can be tuned to improve the model's agreement with available data. We might attempt to automate this tuning process in an objective way by employing an optimisation algorithm to find the set of parameters that minimises a cost function derived from comparing model outputs with measurements. A number of algorithms are available for solving optimisation problems, in various programming languages, but interfacing such software to a complex geophysical model simulation presents certain challenges. To tackle this problem, we have developed an optimisation suite (Cyclops) based on the Cylc workflow engine that implements a wide selection of optimisation algorithms from the NLopt Python toolbox (Johnson, 2014). The Cyclops optimisation suite can be used to calibrate any modelling system that has itself been implemented as a (separate) Cylc model suite, provided it includes computation and output of the desired scalar cost function. A growing number of institutions are using Cylc to orchestrate complex distributed suites of interdependent cycling tasks within their operational forecast systems, and in such cases application of the optimisation suite is particularly straightforward. As a test case, we applied Cyclops to calibrate a global implementation of the WAVEWATCH III (v4.18) third-generation spectral wave model, forced by ERA-Interim input fields. The model was calibrated over a 1-year period (1997) before applying the calibrated configuration to a full (1979-2016) wave hindcast. The chosen error metric was the spatial average of the root mean square error of hindcast significant wave height compared with collocated altimeter records. We describe the results of a calibration in which up to 19 parameters were optimised.
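As a sketch of what such a suite automates, the following shows how an NLopt gradient-free optimiser is driven from Python. Here a placeholder quadratic stands in for the full model-suite run that evaluates the hindcast error, and the algorithm choice, bounds and tolerance are illustrative assumptions, not Cyclops defaults.

import nlopt

def cost(x, grad):
    # in Cyclops this would trigger a full Cylc suite run and read back the
    # scalar error metric; a placeholder quadratic stands in for it here
    return sum((xi - 0.5) ** 2 for xi in x)

n = 19                                 # number of tunable parameters, as in the test case
opt = nlopt.opt(nlopt.LN_SBPLX, n)     # a gradient-free algorithm from NLopt
opt.set_min_objective(cost)
opt.set_lower_bounds([0.0] * n)
opt.set_upper_bounds([1.0] * n)
opt.set_xtol_rel(1e-3)
x_opt = opt.optimize([0.2] * n)
print(x_opt, opt.last_optimum_value())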
UAV path planning using artificial potential field method updated by optimal control theory
NASA Astrophysics Data System (ADS)
Chen, Yong-bo; Luo, Guan-chen; Mei, Yue-song; Yu, Jian-qiao; Su, Xiao-long
2016-04-01
The unmanned aerial vehicle (UAV) path planning problem is an important task in UAV mission planning. Starting from the artificial potential field (APF) UAV path planning method, the problem is reconstructed as a constrained optimisation problem by introducing an additional control force. The constrained optimisation problem is translated into an unconstrained optimisation problem with the help of slack variables, and the functional optimisation method is applied to reformulate it as an optimal control problem. The whole transformation process is deduced in detail, based on a discrete UAV dynamic model. The path planning problem is then solved with the help of the optimal control method. A path-following process based on a six-degrees-of-freedom simulation model of a quadrotor helicopter is introduced to verify the practicability of this method. Finally, the simulation results show that the improved method is more effective in path planning: in the planning space, the calculated path is shorter and smoother than that obtained with the traditional APF method. In addition, the improved method can solve the dead-point problem effectively.
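For context, a minimal sketch of the classical APF step that the paper builds on (not its optimal-control reformulation): an attractive pull towards the goal plus a short-range repulsive push from obstacles. Gains, influence range and geometry are illustrative assumptions.

import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=5.0, dt=0.1):
    force = k_att * (goal - pos)                       # attractive component
    for obs in obstacles:
        d = np.linalg.norm(pos - obs)
        if d < d0:                                     # repulsion only within range d0
            force += k_rep * (1.0 / d - 1.0 / d0) / d ** 3 * (pos - obs)
    return pos + dt * force / np.linalg.norm(force)    # unit-speed step downhill

pos, goal = np.array([0.0, 0.0]), np.array([50.0, 50.0])
obstacles = [np.array([25.0, 24.0])]
for _ in range(1000):
    pos = apf_step(pos, goal, obstacles)
    if np.linalg.norm(pos - goal) < 1.0:
        break

The dead-point problem mentioned above arises when the attractive and repulsive terms cancel, which is precisely what the additional control force in the paper's reformulation is introduced to overcome.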
Convex relaxations for gas expansion planning
Borraz-Sanchez, Conrado; Bent, Russell Whitford; Backhaus, Scott N.; ...
2016-01-01
Expansion of natural gas networks is a critical process involving substantial capital expenditures with complex decision-support requirements. Given the non-convex nature of gas transmission constraints, global optimality and infeasibility guarantees can only be offered by global optimisation approaches. Unfortunately, state-of-the-art global optimisation solvers are unable to scale up to real-world size instances. In this study, we present a convex mixed-integer second-order cone relaxation for the gas expansion planning problem under steady-state conditions. The underlying model offers tight lower bounds with high computational efficiency. In addition, the optimal solution of the relaxation can often be used to derive high-quality solutions to the original problem, leading to provably tight optimality gaps and, in some cases, globally optimal solutions. The convex relaxation is based on a few key ideas, including the introduction of flux direction variables, exact McCormick relaxations, on/off constraints, and integer cuts. Numerical experiments are conducted on the traditional Belgian gas network, as well as other real, larger networks. The results demonstrate both the accuracy and computational speed of the relaxation and its ability to produce high-quality solutions.
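For reference, the McCormick relaxation of a bilinear term \(w = xy\) over box bounds replaces it with four linear inequalities:

\[
\begin{aligned}
w &\ge x^{L} y + x\, y^{L} - x^{L} y^{L}, &\qquad w &\ge x^{U} y + x\, y^{U} - x^{U} y^{U},\\
w &\le x^{U} y + x\, y^{L} - x^{U} y^{L}, &\qquad w &\le x^{L} y + x\, y^{U} - x^{L} y^{U},
\end{aligned}
\qquad x \in [x^{L}, x^{U}],\; y \in [y^{L}, y^{U}].
\]

These envelopes are exact when either variable sits at one of its bounds, which is why tightening the bounds (for example with the flux direction variables mentioned above) strengthens the relaxation.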
NASA Astrophysics Data System (ADS)
Jian, Le; Cao, Wang; Jintao, Yang; Yinge, Wang
2018-04-01
This paper describes the design of a dynamic voltage restorer (DVR) that can simultaneously protect several sensitive loads from voltage sags in a region of an MV distribution network. A novel reference voltage calculation method based on zero-sequence voltage optimisation is proposed for this DVR to optimise the cost-effectiveness of compensating voltage sags with different characteristics in an ungrounded neutral system. Based on a detailed analysis of the characteristics of voltage sags caused by different types of faults, and of the effect of the transformer wiring mode on these characteristics, the optimisation target of the reference voltage calculation is formulated with several constraints. The reference voltages under all types of voltage sags are calculated by optimising the zero-sequence component, which reduces the swell in the phase-to-ground voltage after compensation to the greatest possible extent and improves the symmetry of the DVR output voltages, thereby effectively increasing the compensation capability. The validity and effectiveness of the proposed method are verified by simulation and experimental results.
Fuss, Franz Konstantin
2013-01-01
Standard methods for computing the fractal dimensions of time series are usually tested with continuous nowhere-differentiable functions, but not benchmarked against actual signals; they can therefore produce opposite results for extreme signals. These methods also use different scaling approaches, that is, different amplitude multipliers, which makes it difficult to compare fractal dimensions obtained from different methods. The purpose of this research was to develop an optimisation method that computes the fractal dimension of a normalised (dimensionless) and modified time series signal with a robust algorithm and a running-average method, and that maximises the difference between two fractal dimensions, for example a minimum and a maximum one. The signal is modified by transforming its amplitude by a multiplier, which has a non-linear effect on the signal's time derivative. The optimisation method identifies the optimal multiplier of the normalised amplitude for targeted decision-making based on fractal dimensions. It also provides an additional filter effect and makes the fractal dimensions less noisy. The method is exemplified by, and explained with, different signals, such as human movement, EEG, and acoustic signals. PMID:24151522
NASA Astrophysics Data System (ADS)
Mallick, S.; Kar, R.; Mandal, D.; Ghoshal, S. P.
2016-07-01
This paper proposes a novel hybrid optimisation algorithm which combines the recently proposed evolutionary algorithm Backtracking Search Algorithm (BSA) with another widely accepted evolutionary algorithm, namely Differential Evolution (DE). The proposed algorithm, called BSA-DE, is employed for the optimal design of two commonly used analogue circuits, namely a complementary metal oxide semiconductor (CMOS) differential amplifier circuit with current mirror load and a CMOS two-stage operational amplifier (op-amp) circuit. BSA has a simple structure that is effective, fast and capable of solving multimodal problems. DE is a stochastic, population-based heuristic approach with the capability of solving global optimisation problems. In this paper, the transistor sizes are optimised using the proposed BSA-DE to minimise the areas occupied by the circuits and to improve the performance of the circuits. The simulation results justify the superiority of BSA-DE in global convergence properties and fine-tuning ability, and prove it to be a promising candidate for the optimal design of analogue CMOS amplifier circuits. The simulation results obtained for both amplifier circuits prove the effectiveness of the proposed BSA-DE-based approach over DE, harmony search (HS), artificial bee colony (ABC) and PSO in terms of convergence speed, design specifications and design parameters. It is shown that the BSA-DE-based design technique yields the least MOS transistor area for each amplifier circuit, and each designed circuit is shown to have the best performance parameters, such as gain and power dissipation, compared with those of other recently reported designs.
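For concreteness, a minimal sketch of the DE ingredients (DE/rand/1 mutation with binomial crossover) that the hybrid borrows is given below; the control parameters are illustrative, and the BSA side and the switching logic of BSA-DE are not shown.

import numpy as np
rng = np.random.default_rng(1)

def de_trial(pop, i, F=0.8, CR=0.9):
    """Build a DE/rand/1/bin trial vector for population member i."""
    n, d = pop.shape
    r1, r2, r3 = rng.choice([k for k in range(n) if k != i], 3, replace=False)
    mutant = pop[r1] + F * (pop[r2] - pop[r3])     # differential mutation
    cross = rng.random(d) < CR                     # binomial crossover mask
    cross[rng.integers(d)] = True                  # guarantee at least one mutant gene
    return np.where(cross, mutant, pop[i])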
NASA Astrophysics Data System (ADS)
Jin, Chenxia; Li, Fachao; Tsang, Eric C. C.; Bulysheva, Larissa; Kataev, Mikhail Yu
2017-01-01
In many real industrial applications, the integration of raw data with a suitable methodology can support economically sound decision-making, and most of these tasks involve complex optimisation problems, so seeking better solutions is critical. As an intelligent search optimisation technique, the genetic algorithm (GA) is an important tool for complex system optimisation, but it has internal drawbacks such as low computational efficiency and premature convergence. Improving the performance of GA is a vital topic in academic and applied research. In this paper, a new real-coded crossover operator, called the compound arithmetic crossover operator (CAC), is proposed. CAC is used in conjunction with a uniform mutation operator to define a new genetic algorithm, CAC10-GA. This GA is compared with an existing genetic algorithm (AC10-GA) that comprises an arithmetic crossover operator and a uniform mutation operator. To judge the performance of CAC10-GA, two kinds of analysis are performed: first, the convergence of CAC10-GA is analysed using Markov chain theory; second, a pair-wise comparison is carried out between CAC10-GA and AC10-GA on two test problems available in the global optimisation literature. The overall comparative study shows that CAC performs quite well and that CAC10-GA outperforms AC10-GA.
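A plain arithmetic crossover, the baseline operator used in AC10-GA, can be sketched as follows; the compound variant (CAC) composes such convex combinations, but its exact form is not reproduced here.

import random

def arithmetic_crossover(p1, p2):
    """Blend two real-coded parents into two children with a random weight."""
    lam = random.random()                  # mixing weight in [0, 1]
    c1 = [lam * a + (1 - lam) * b for a, b in zip(p1, p2)]
    c2 = [(1 - lam) * a + lam * b for a, b in zip(p1, p2)]
    return c1, c2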
Global root zone storage capacity from satellite-based evaporation data
NASA Astrophysics Data System (ADS)
Wang-Erlandsson, Lan; Bastiaanssen, Wim; Gao, Hongkai; Jägermeyr, Jonas; Senay, Gabriel; van Dijk, Albert; Guerschman, Juan; Keys, Patrick; Gordon, Line; Savenije, Hubert
2016-04-01
We present an "Earth observation-based" method for estimating root zone storage capacity, a critical yet uncertain parameter in hydrological and land surface modelling. By assuming that vegetation optimises its root zone storage capacity to bridge critical dry periods, we were able to use state-of-the-art satellite-based evaporation data, computed with independent energy balance equations, to derive gridded root zone storage capacity at the global scale. This approach does not require soil or vegetation information, is model independent, and is in principle scale independent. In contrast to traditional look-up table approaches, our method captures the variability in root zone storage capacity within land cover types, including in rainforests where direct measurements of root depth are otherwise scarce. Implementing the estimated root zone storage capacity in the global hydrological model STEAM improved evaporation simulation overall, and in particular during the least evaporating months in sub-humid to humid regions with moderate to high seasonality. We find that evergreen forests are able to create a storage large enough to buffer extreme droughts (with a return period of up to 60 years), in contrast to short vegetation and crops (which seem to adapt to a drought return period of about 2 years). The presented method eliminates the need for soil and rooting depth information, which could be a game-changer in global land surface modelling.
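The deficit bookkeeping implied by this assumption can be sketched in a few lines: accumulate the running shortfall of precipitation against satellite-based evaporation, and take the largest shortfall over the record as the storage the roots must bridge. Units and time step are assumed consistent, and the paper's return-period normalisation is not shown.

def root_zone_capacity(precip, evap):
    """Largest cumulative water deficit over paired precipitation/evaporation series."""
    deficit, capacity = 0.0, 0.0
    for p, e in zip(precip, evap):          # daily or monthly series, same units
        deficit = max(0.0, deficit + e - p) # shortfall grows when E exceeds P
        capacity = max(capacity, deficit)
    return capacity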
Optimisation of assembly scheduling in VCIM systems using genetic algorithm
NASA Astrophysics Data System (ADS)
Dao, Son Duy; Abhary, Kazem; Marian, Romeo
2017-09-01
Assembly plays an important role in any production system, as it constitutes a significant portion of the lead time and cost of a product. The virtual computer-integrated manufacturing (VCIM) system is a modern production system being conceptually developed to extend the application of the traditional computer-integrated manufacturing (CIM) system to a global level. Assembly scheduling in VCIM systems is quite different from that in traditional production systems because of the difference in the working principles of the two systems. In this article, the assembly scheduling problem in VCIM systems is modelled, and an integrated approach based on a genetic algorithm (GA) is proposed to search for a globally optimised solution to the problem. Because of the dynamic nature of the scheduling problem, a novel GA with a unique chromosome representation and modified genetic operations is developed. The robustness of the proposed approach is verified by a numerical example.
Relative electronic and free energies of octane's unique conformations
NASA Astrophysics Data System (ADS)
Kirschner, Karl N.; Heiden, Wolfgang; Reith, Dirk
2017-06-01
This study reports the geometries and electronic energies of n-octane's unique conformations using perturbation methods that best mimic CCSD(T) results. In total, the fully optimised minima of n-butane (2 conformations), n-pentane (4 conformations), n-hexane (12 conformations) and n-octane (96 conformations) were investigated at several different levels of theory and basis sets. We find that DF-MP2.5/aug-cc-pVTZ is in very good agreement with the more expensive CCSD(T) results. At this level, we can clearly confirm the 96 stable minima that were previously found using a reparameterised density functional theory (DFT). Excellent agreement was found between those DFT results and our DF-MP2.5 perturbation results. Subsequent Gibbs free energy calculations, using scaled MP2/aug-cc-pVTZ zero-point vibrational energies and frequencies, indicate a significant temperature dependence of the relative energies, with a change in the predicted global minimum. The results of this work will be important for future computational investigations of fuel-related octane reactions and for the optimisation of molecular force fields (e.g. for lipids).
NASA Astrophysics Data System (ADS)
Wang, Congsi; Wang, Yan; Wang, Zhihai; Wang, Meng; Yuan, Shuai; Wang, Weifeng
2018-04-01
It is well known that calculating and reducing the radar cross section (RCS) of an active phased array antenna (APAA) are both difficult and complicated, and balancing radiating and scattering performance when the RCS is reduced remains an unresolved problem. Therefore, this paper develops a coupled structure-scattering array factor model of the APAA based on the phase errors of the radiating elements generated by structural distortion and installation errors of the array. To obtain optimal radiating and scattering performance, an integrated optimisation model is built to optimise the installation height of all the radiating elements in the normal direction of the array, in which the particle swarm optimisation method is adopted and the gain loss and scattering array factor are selected as the fitness function. The simulation indicates that the proposed coupling model and integrated optimisation method can effectively decrease the RCS while simultaneously guaranteeing the necessary radiating performance, demonstrating important application value in the engineering design and structural evaluation of APAAs.
Andrighetto, Luke M; Stevenson, Paul G; Pearson, James R; Henderson, Luke C; Conlan, Xavier A
2014-11-01
In-silico optimised two-dimensional high performance liquid chromatography (2D-HPLC) separations of a model methamphetamine seizure sample are described, where an excellent match between simulated and real separations was observed. Targeted separation of the model compounds was completed with significantly reduced method development time. The separation was carried out in the heart-cutting mode of 2D-HPLC, with C18 columns in both dimensions, taking advantage of the selectivity difference between methanol and acetonitrile as the mobile phases. This method development protocol is most significant when optimising the separation of chemically similar compounds, as it eliminates potentially hours of trial-and-error injections to identify the optimised experimental conditions. After only four screening injections, the gradient profile for both 2D-HPLC dimensions could be optimised via simulations, ensuring the baseline resolution of the diastereomers (ephedrine and pseudoephedrine) in 9.7 min. Depending on which diastereomer is present, the potential synthetic pathway can be categorised.
NASA Astrophysics Data System (ADS)
Astley, R. J.; Sugimoto, R.; Mustafi, P.
2011-08-01
Novel techniques are presented to reduce noise from turbofan aircraft engines by optimising the acoustic treatment in engine ducts. The application of Computational Aero-Acoustics (CAA) to predict acoustic propagation and absorption in turbofan ducts is reviewed, and a critical assessment of performance indicates that validated and accurate techniques are now available for realistic engine predictions. A procedure for integrating CAA methods with state-of-the-art optimisation techniques is proposed in the remainder of the article. This is achieved by embedding advanced computational methods for noise prediction within automated and semi-automated optimisation schemes. Two different strategies are described and applied to realistic nacelle geometries and fan sources to demonstrate the feasibility of this approach for industry-scale problems.
Boundary element based multiresolution shape optimisation in electrostatics
NASA Astrophysics Data System (ADS)
Bandara, Kosala; Cirak, Fehmi; Of, Günther; Steinbach, Olaf; Zapletal, Jan
2015-09-01
We consider the shape optimisation of high-voltage devices subject to electrostatic field equations by combining fast boundary elements with multiresolution subdivision surfaces. The geometry of the domain is described with subdivision surfaces and different resolutions of the same geometry are used for optimisation and analysis. The primal and adjoint problems are discretised with the boundary element method using a sufficiently fine control mesh. For shape optimisation the geometry is updated starting from the coarsest control mesh with increasingly finer control meshes. The multiresolution approach effectively prevents the appearance of non-physical geometry oscillations in the optimised shapes. Moreover, there is no need for mesh regeneration or smoothing during the optimisation due to the absence of a volume mesh. We present several numerical experiments and one industrial application to demonstrate the robustness and versatility of the developed approach.
Coil optimisation for transcranial magnetic stimulation in realistic head geometry.
Koponen, Lari M; Nieminen, Jaakko O; Mutanen, Tuomas P; Stenroos, Matti; Ilmoniemi, Risto J
Transcranial magnetic stimulation (TMS) allows focal, non-invasive stimulation of the cortex. A TMS pulse is inherently weakly coupled to the cortex; thus, magnetic stimulation requires both high current and high voltage to reach sufficient intensity. These requirements limit, for example, the maximum repetition rate and the maximum number of consecutive pulses with the same coil due to the rise of its temperature. Our objective was to develop methods to optimise, design, and manufacture energy-efficient TMS coils in realistic head geometry with an arbitrary overall coil shape. We derive a semi-analytical integration scheme for computing the magnetic field energy of an arbitrary surface current distribution, compute the electric field induced by this distribution with a boundary element method, and optimise a TMS coil for focal stimulation. Additionally, we introduce a method for manufacturing such a coil using Litz wire and a coil former machined from polyvinyl chloride. We designed, manufactured, and validated an optimised TMS coil and applied it to brain stimulation. Our simulations indicate that this coil requires less than half the power of a commercial figure-of-eight coil, with a 41% reduction due to the optimised winding geometry and a partial contribution from our thinner coil former and reduced conductor height. With the optimised coil, the resting motor threshold of abductor pollicis brevis was reached with a capacitor voltage below 600 V and a peak current below 3000 A. The described method allows the design of practical TMS coils that have considerably higher efficiency than conventional figure-of-eight coils. Copyright © 2017 Elsevier Inc. All rights reserved.
Metaheuristic optimisation methods for approximate solving of singular boundary value problems
NASA Astrophysics Data System (ADS)
Sadollah, Ali; Yadav, Neha; Gao, Kaizhou; Su, Rong
2017-07-01
This paper presents a novel approximation technique based on metaheuristics and a weighted residual function (WRF) for tackling singular boundary value problems (BVPs) arising in engineering and science. With the aid of certain fundamental concepts of mathematics, Fourier series expansion, and metaheuristic optimisation algorithms, singular BVPs can be approximated as an optimisation problem with the boundary conditions as constraints. The target is to minimise the WRF (i.e. the error function) constructed in the approximation of the BVP. The scheme uses the generational distance metric to evaluate the quality of the approximate solutions against exact solutions. Four test problems, including two linear and two non-linear singular BVPs, are considered in this paper to check the efficiency and accuracy of the proposed algorithm. The optimisation task is performed using three different optimisers: particle swarm optimisation, the water cycle algorithm, and the harmony search algorithm. The optimisation results obtained show that the suggested technique can be successfully applied for the approximate solving of singular BVPs.
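A minimal sketch of the weighted-residual idea for a toy singular BVP y'' + y'/x = f(x) with y(0) = y(1) = 0: a truncated sine series satisfies the boundary conditions exactly, and a metaheuristic minimises the squared residual at interior collocation points. SciPy's differential evolution stands in here for the PSO/WCA/HS optimisers used in the paper; the source term and series length are illustrative.

import numpy as np
from scipy.optimize import differential_evolution

xs = np.linspace(0.05, 0.95, 40)                 # interior points avoid the singularity at x = 0

def residual(c):
    # trial solution y = sum_k c_k sin(k*pi*x) vanishes at both boundaries
    k = np.arange(1, len(c) + 1)[:, None] * np.pi
    y1 = (c[:, None] * k * np.cos(k * xs)).sum(0)        # y'
    y2 = (-c[:, None] * k ** 2 * np.sin(k * xs)).sum(0)  # y''
    f = np.ones_like(xs)                                 # illustrative source term
    return ((y2 + y1 / xs - f) ** 2).sum()               # squared weighted residual

res = differential_evolution(residual, bounds=[(-1, 1)] * 5, seed=0)
print(res.x, res.fun)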
Fabrication of Organic Radar Absorbing Materials: A Report on the TIF Project
2005-05-01
thickness, permittivity and permeability. The ability to measure the permittivity and permeability is an essential requirement for designing an optimised...absorber. And good optimisation codes are required in order to achieve the best possible absorber designs. In this report, the results from a...through measurement of their conductivity and permittivity at microwave frequencies. Methods were then developed for optimising the design of
Application of Three Existing Stope Boundary Optimisation Methods in an Operating Underground Mine
NASA Astrophysics Data System (ADS)
Erdogan, Gamze; Yavuz, Mahmut
2017-12-01
The underground mine planning and design optimisation process has received little attention because of the complexity and variability of problems in underground mines. Although a number of optimisation studies and software tools are available, and some of them have been implemented effectively to determine the ultimate pit limits of open pit mines, there is still a lack of studies on the optimisation of ultimate stope boundaries in underground mines. The proposed approaches for this purpose aim at maximising economic profit by selecting the best possible layout under operational, technical and physical constraints. In this paper, three existing heuristic techniques, the Floating Stope Algorithm, the Maximum Value Algorithm and the Mineable Shape Optimiser (MSO), are examined for the optimisation of stope layout in a case study. Each technique is assessed in terms of applicability, algorithm capabilities and limitations, considering the challenges of underground mine planning. Finally, the results are evaluated and compared.
Robles, A; Ruano, M V; Ribes, J; Seco, A; Ferrer, J
2014-04-01
The results of a global sensitivity analysis of a filtration model for submerged anaerobic MBRs (AnMBRs) are assessed in this paper. This study aimed to (1) identify the less (or non-) influential factors of the model in order to facilitate model calibration, and (2) validate the modelling approach (i.e. determine the need for each of the proposed factors to be included in the model). The sensitivity analysis was conducted using a revised version of the Morris screening method. The dynamic simulations were conducted using long-term data obtained from an AnMBR plant fitted with industrial-scale hollow-fibre membranes. Of the 14 factors in the model, six were identified as influential, i.e. those calibrated using off-line protocols. A dynamic calibration (based on optimisation algorithms) of these influential factors was conducted. The resulting estimated model factors accurately predicted membrane performance. Copyright © 2014 Elsevier Ltd. All rights reserved.
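For illustration, a simplified radial one-factor-at-a-time variant of Morris elementary-effects screening can be sketched as follows; the placeholder model, dimensions and step size are assumptions, and the revised Morris method used in the paper differs in its trajectory design.

import numpy as np
rng = np.random.default_rng(0)

def model(x):
    # placeholder standing in for the AnMBR filtration model
    return x[0] ** 2 + 3 * x[1] + 0.1 * x[2]

def morris_mu_star(k=3, r=20, delta=0.1):
    """Mean absolute elementary effect per factor over r random base points."""
    ee = np.zeros((r, k))
    for t in range(r):
        x = rng.uniform(0, 1 - delta, k)
        y0 = model(x)
        for i in range(k):
            xp = x.copy()
            xp[i] += delta                       # perturb one factor at a time
            ee[t, i] = abs(model(xp) - y0) / delta
    return ee.mean(axis=0)                       # mu*: large values flag influential factors

print(morris_mu_star())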
Strain, William David; Paldánius, Päivi M
2017-08-01
The last decade has witnessed the role of dipeptidyl peptidase-4 (DPP-4) inhibitors in producing a conceptual change in the early management of type 2 diabetes mellitus (T2DM) by shifting emphasis from a gluco-centric approach to holistically treating the underlying pathophysiological processes. DPP-4 inhibitors highlighted the importance of acknowledging hypoglycaemia and weight gain as barriers to optimised care in T2DM; these complications were an integral part of diabetes management before the introduction of DPP-4 inhibitors. During the development of DPP-4 inhibitors, regulatory requirements for introducing new agents underwent substantial changes, with increased emphasis on safety. This led to the systematic collection of adjudicated cardiovascular (CV) safety data and, where 95% confidence of a lack of harm could not be demonstrated, to standardised CV safety studies. Furthermore, the growing awareness of the worldwide extent of T2DM demanded a more diverse approach to recruitment and participation in clinical trials. Finally, the global financial crisis placed a new awareness on the health economics of diabetes, which rapidly became the most expensive disease in the world. This review encompasses unique developments in the global landscape, and the role DPP-4 inhibitors, specifically vildagliptin, have played in research advancement and the optimisation of diabetes care in a diverse population with T2DM worldwide.
Global root zone storage capacity from satellite-based evaporation
NASA Astrophysics Data System (ADS)
Wang-Erlandsson, Lan; Bastiaanssen, Wim G. M.; Gao, Hongkai; Jägermeyr, Jonas; Senay, Gabriel B.; van Dijk, Albert I. J. M.; Guerschman, Juan P.; Keys, Patrick W.; Gordon, Line J.; Savenije, Hubert H. G.
2016-04-01
This study presents an "Earth observation-based" method for estimating root zone storage capacity - a critical, yet uncertain parameter in hydrological and land surface modelling. By assuming that vegetation optimises its root zone storage capacity to bridge critical dry periods, we were able to use state-of-the-art satellite-based evaporation data computed with independent energy balance equations to derive gridded root zone storage capacity at global scale. This approach does not require soil or vegetation information, is model independent, and is in principle scale independent. In contrast to a traditional look-up table approach, our method captures the variability in root zone storage capacity within land cover types, including in rainforests where direct measurements of root depths otherwise are scarce. Implementing the estimated root zone storage capacity in the global hydrological model STEAM (Simple Terrestrial Evaporation to Atmosphere Model) improved evaporation simulation overall, and in particular during the least evaporating months in sub-humid to humid regions with moderate to high seasonality. Our results suggest that several forest types are able to create a large storage to buffer for severe droughts (with a very long return period), in contrast to, for example, savannahs and woody savannahs (medium length return period), as well as grasslands, shrublands, and croplands (very short return period). The presented method to estimate root zone storage capacity eliminates the need for poor resolution soil and rooting depth data that form a limitation for achieving progress in the global land surface modelling community.
On the design and optimisation of new fractal antenna using PSO
NASA Astrophysics Data System (ADS)
Rani, Shweta; Singh, A. P.
2013-10-01
An optimisation technique for a newly shaped fractal structure using particle swarm optimisation with curve fitting is presented in this article. The aim of the particle swarm optimisation is to find the geometry of the antenna for the required user-defined frequency. To assess the effectiveness of the presented method, a set of representative numerical simulations has been carried out, and the results are compared with measurements from experimental prototypes built according to the design specifications coming from the optimisation procedure. The proposed fractal antenna resonates at the 5.8 GHz industrial, scientific and medical band, which is suitable for wireless telemedicine applications. The antenna characteristics have been studied using extensive numerical simulations and are experimentally verified. The antenna exhibits well-defined radiation patterns over the band.
Smaggus, Andrew; Mrkobrada, Marko; Marson, Alanna; Appleton, Andrew
2018-01-01
The quality and safety movement has reinvigorated interest in optimising morbidity and mortality (M&M) rounds. We performed a systematic review to identify effective means of updating M&M rounds to (1) identify and address quality and safety issues, and (2) address contemporary educational goals. Relevant databases (Medline, Embase, PubMed, Education Resource Information Centre, Cumulative Index to Nursing and Allied Health Literature, Healthstar, and Global Health) were searched to identify primary sources. Studies were included if they (1) investigated an intervention applied to M&M rounds, (2) reported outcomes relevant to the identification of quality and safety issues, or educational outcomes relevant to quality improvement (QI), patient safety or general medical education and (3) included a control group. Study quality was assessed using the Medical Education Research Study Quality Instrument and Newcastle-Ottawa Scale-Education instruments. Given the heterogeneity of interventions and outcome measures, results were analysed thematically. The final analysis included 19 studies. We identified multiple effective strategies (updating objectives, standardising elements of rounds and attaching rounds to a formal quality committee) to optimise M&M rounds for a QI/safety purpose. These efforts were associated with successful integration of quality and safety content into rounds, and increased implementation of QI interventions. Consistent effects on educational outcomes were difficult to identify, likely due to the use of methodologies ill-fitted for educational research. These results are encouraging for those seeking to optimise the quality and safety mission of M&M rounds. However, the inability to identify consistent educational effects suggests the investigation of M&M rounds could benefit from additional methodologies (qualitative, mixed methods) in order to understand the complex mechanisms driving learning at M&M rounds. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Aungkulanon, Pasura; Luangpaiboon, Pongchanun
2016-01-01
Response surface methods via first- or second-order models are important in manufacturing processes. This study, however, proposes differently structured mechanisms of vertical transportation systems (VTS) embedded in a shuffled frog leaping-based approach. There are three VTS scenarios: a motion reaching a normal operating velocity, and motions both reaching and not reaching the transitional motion. These variants were performed to simultaneously inspect multiple responses affected by machining parameters in multi-pass turning processes. The numerical results of two machining optimisation problems demonstrated the high performance of the proposed methods when compared to other optimisation algorithms for an actual deep-cut design.
Bradley, Beverly D; Jung, Tiffany; Tandon-Verma, Ananya; Khoury, Bassem; Chan, Timothy C Y; Cheng, Yu-Ling
2017-04-18
Operations research (OR) is a discipline that uses advanced analytical methods (e.g. simulation, optimisation, decision analysis) to better understand complex systems and aid in decision-making. Herein, we present a scoping review of the use of OR to analyse issues in global health, with an emphasis on health equity and research impact. A systematic search of five databases was designed to identify relevant published literature. A global overview of 1099 studies highlights the geographic distribution of OR and common OR methods used. From this collection of literature, a narrative description of the use of OR across four main application areas of global health - health systems and operations, clinical medicine, public health and health innovation - is also presented. The theme of health equity is then explored in detail through a subset of 44 studies. Health equity is a critical element of global health that cuts across all four application areas, and is an issue particularly amenable to analysis through OR. Finally, we present seven select cases of OR analyses that have been implemented or have influenced decision-making in global health policy or practice. Based on these cases, we identify three key drivers for success in bridging the gap between OR and global health policy, namely international collaboration with stakeholders, use of contextually appropriate data, and varied communication outlets for research findings. Such cases, however, represent a very small proportion of the literature found. Poor availability of representative and quality data, and a lack of collaboration between those who develop OR models and stakeholders in the contexts where OR analyses are intended to serve, were found to be common challenges for effective OR modelling in global health.
NASA Astrophysics Data System (ADS)
Tugores, M. Pilar; Iglesias, Magdalena; Oñate, Dolores; Miquel, Joan
2016-02-01
In the Mediterranean Sea, the European anchovy (Engraulis encrasicolus) plays a key role in ecological and economic terms. Ensuring stock sustainability requires the provision of crucial information, such as the species' spatial distribution or unbiased abundance and precision estimates, so that management strategies can be defined (e.g. fishing quotas, temporal closure areas or marine protected areas, MPAs). Furthermore, the estimation of the precision of global abundance at different sampling intensities can be used for survey design optimisation. Geostatistics provides a priori unbiased estimates of the spatial structure, global abundance and precision for autocorrelated data. However, its application to non-Gaussian data introduces difficulties into the analysis, together with reduced robustness or unbiasedness. The present study applied intrinsic geostatistics in two dimensions in order to (i) analyse the spatial distribution of anchovy in Spanish Western Mediterranean waters during the species' recruitment season, (ii) produce distribution maps, (iii) estimate global abundance and its precision, (iv) analyse the effect of changing the sampling intensity on the precision of global abundance estimates and (v) evaluate the effects of several methodological options on the robustness of all the analysed parameters. The results suggested that while the spatial structure was usually non-robust to the tested methodological options when working with the original dataset, it became more robust for the transformed datasets (especially for the log-backtransformed dataset). The global abundance was always highly robust, and the global precision was highly or moderately robust to most of the methodological options, except for data transformation.
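The first step of such an analysis, the empirical semivariogram, can be sketched as follows: half the mean squared difference of (possibly log-transformed) values, binned by pair separation. The coordinates and acoustic densities below are synthetic stand-ins for the survey data.

import numpy as np
rng = np.random.default_rng(2)
coords = rng.uniform(0, 100, (200, 2))           # sample positions (km), synthetic
z = np.log1p(rng.gamma(2.0, 1.0, 200))           # log-transformed densities, synthetic

d = np.linalg.norm(coords[:, None] - coords[None], axis=-1)   # pairwise distances
sq = 0.5 * (z[:, None] - z[None]) ** 2                        # semivariance per pair
bins = np.linspace(1, 50, 11)
gamma = [sq[(d >= a) & (d < b)].mean() if ((d >= a) & (d < b)).any() else float("nan")
         for a, b in zip(bins, bins[1:])]        # empirical semivariogram per lag bin

A model (e.g. spherical) fitted to gamma then supplies the spatial-structure parameters used for kriging and for the global abundance variance.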
Automation of route identification and optimisation based on data-mining and chemical intuition.
Lapkin, A A; Heer, P K; Jacob, P-M; Hutchby, M; Cunningham, W; Bull, S D; Davidson, M G
2017-09-21
Data-mining of Reaxys and network analysis of the combined literature and in-house reaction set were used to generate multiple possible reaction routes to convert a bio-waste feedstock, limonene, into a pharmaceutical API, paracetamol. The network analysis of the data provides a rich knowledge-base for generating the initial reaction screening and development programme. Based on the literature and the in-house data, an overall flowsheet for the conversion of limonene to paracetamol was proposed. Each individual reaction-separation step in the sequence was simulated as a combination of continuous flow and batch steps. The linear model generation methodology allowed us to identify the reaction steps requiring further chemical optimisation. The generated model can be used for global optimisation and for generating environmental and other performance indicators, such as cost indicators. However, a further challenge identified is to automate model generation so as to evolve optimal multi-step chemical routes and optimal process configurations.
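As an illustration of the kind of network analysis described, a reaction network can be held in a weighted directed graph and candidate routes ranked by cumulative edge cost. The sketch below uses networkx with a purely hypothetical set of intermediates and weights; it is not the mined Reaxys network.

```python
# Illustrative sketch only: ranking candidate routes in a reaction network.
# The graph below is hypothetical, not the data mined by the authors.
import networkx as nx

G = nx.DiGraph()
# Edges are known transformations; weights could encode yield, cost or step count.
G.add_weighted_edges_from([
    ("limonene", "intermediate_A", 1.0),
    ("limonene", "intermediate_B", 2.0),
    ("intermediate_A", "intermediate_C", 1.5),
    ("intermediate_B", "intermediate_C", 0.5),
    ("intermediate_C", "paracetamol", 1.0),
])

# Cheapest route, plus all routes ranked by total weight for screening.
best = nx.shortest_path(G, "limonene", "paracetamol", weight="weight")
ranked = list(nx.shortest_simple_paths(G, "limonene", "paracetamol", weight="weight"))
print(best)
print(ranked)
```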
Use of game-theoretical methods in biochemistry and biophysics.
Schuster, Stefan; Kreft, Jan-Ulrich; Schroeter, Anja; Pfeiffer, Thomas
2008-04-01
Evolutionary game theory can be considered as an extension of the theory of evolutionary optimisation in that two or more organisms (or more generally, units of replication) tend to optimise their properties in an interdependent way. Thus, the outcome of the strategy adopted by one species (e.g., as a result of mutation and selection) depends on the strategy adopted by the other species. In this review, the use of evolutionary game theory for analysing biochemical and biophysical systems is discussed. The presentation is illustrated by a number of instructive examples such as the competition between microorganisms using different metabolic pathways for adenosine triphosphate production, the secretion of extracellular enzymes, the growth of trees and photosynthesis. These examples show that, due to conflicts of interest, the global optimum (in the sense of being the best solution for the whole system) is not always obtained. For example, some yeast species use metabolic pathways that waste nutrients, and in a dense tree canopy, trees grow taller than would be optimal for biomass productivity. From the viewpoint of game theory, the examples considered can be described by the Prisoner's Dilemma, snowdrift game, Tragedy of the Commons and rock-scissors-paper game.
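The competition between a nutrient-efficient and a nutrient-wasting strategy can be illustrated with replicator dynamics on a Prisoner's-Dilemma-style payoff matrix. The sketch below is a generic textbook example with invented payoffs, showing how defection can take over even though mutual cooperation would be globally better.

```python
# Minimal sketch: replicator dynamics for a two-strategy game with a
# Prisoner's-Dilemma-like payoff matrix (values are illustrative only).
import numpy as np

A = np.array([[3.0, 0.0],    # row payoffs against [cooperate, defect]
              [5.0, 1.0]])

def replicator(x, A, dt=0.01, steps=2000):
    for _ in range(steps):
        f = A @ x                          # fitness of each strategy
        x = x + dt * x * (f - x @ f)       # dx_i/dt = x_i (f_i - mean fitness)
        x = np.clip(x, 0, 1); x /= x.sum()
    return x

# Starting from 90% cooperators, defection takes over despite a lower mean payoff.
print(replicator(np.array([0.9, 0.1]), A))
```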
The seasonal behaviour of carbon fluxes in the Amazon: fusion of FLUXNET data and the ORCHIDEE model
NASA Astrophysics Data System (ADS)
Verbeeck, H.; Peylin, P.; Bacour, C.; Ciais, P.
2009-04-01
Eddy covariance measurements at the Santarém (km 67) site revealed an unexpected seasonal pattern in carbon fluxes which could not be simulated by existing state-of-the-art global ecosystem models (Saleska et al., Science 2003). An unexpectedly high carbon uptake was measured during the dry season, whereas carbon release was observed in the wet season. There are several possible (combined) underlying mechanisms for this phenomenon: (1) increased soil respiration due to higher soil moisture in the wet season; (2) increased photosynthesis during the dry season due to deep rooting, hydraulic lift, increased radiation and/or a leaf flush. The objective of this study is to optimise the ORCHIDEE model using eddy covariance data so that it can mimic the seasonal response of carbon fluxes to dry/wet conditions in tropical forest ecosystems, and in doing so to identify the underlying mechanisms of this seasonal response. The ORCHIDEE model is a state-of-the-art mechanistic global vegetation model that can be run at local or global scale. It calculates the carbon and water cycles in the different soil and vegetation pools and resolves the diurnal cycle of fluxes. ORCHIDEE is built on the concept of plant functional types (PFTs) to describe vegetation. To bring the different carbon pool sizes to realistic values, spin-up runs are used. ORCHIDEE uses climate variables as drivers together with a number of ecosystem parameters that have been assessed from laboratory and in situ experiments. These parameters are still associated with large uncertainty and may vary between and within PFTs in a way that is currently not informed or captured by the model. Recent developments in assimilation techniques allow the objective use of eddy covariance data to improve our knowledge of these parameters in a statistically coherent approach. We use a Bayesian optimisation approach based on the minimisation of a cost function containing the mismatch between simulated model output and observations, as well as the mismatch between a priori and optimised parameters. The parameters can be optimised on different time scales (annually, monthly, daily). For this study the model is optimised at local scale for five eddy flux sites: four in Brazil and one in French Guiana. The seasonal behaviour of carbon fluxes in response to wet and dry conditions differs among these sites. Key processes that are optimised include the effect of soil water on heterotrophic soil respiration, the effect of soil water availability on stomatal conductance and photosynthesis, and phenology. By optimising several key parameters we could significantly improve the simulation of the seasonal pattern of NEE. Nevertheless, posterior parameters should be interpreted with care, because the resulting parameter values might compensate for uncertainties in the model structure or in other parameters. Moreover, several critical issues appeared during this study, e.g. how to assimilate latent and sensible heat data when the energy balance is not closed in the data. Optimisation of the Q10 parameter showed that at some sites respiration was not sensitive at all to temperature, which shows only small variations in this region. Considering this, one could question the reliability of the partitioned fluxes (GPP/Reco) at these sites. This study also tests whether there is coherence between optimised parameter values of different sites within the tropical forest PFT and whether the forward model response to climate variations is similar between sites.
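The cost function of such a Bayesian optimisation typically combines an observation-mismatch term weighted by the observation error covariance with a prior-mismatch term weighted by the parameter error covariance. The following sketch shows that generic form; the toy model and identity covariances are placeholders, not ORCHIDEE quantities.

```python
# Sketch of the generic Bayesian cost function used in such assimilation
# schemes (conventions sometimes include a factor 1/2); the model wrapper
# and covariance matrices here are placeholders, not ORCHIDEE code.
import numpy as np

def cost(p, p_prior, B_inv, y_obs, R_inv, model):
    """J(p) = (y - H(p))' R^-1 (y - H(p)) + (p - p0)' B^-1 (p - p0)."""
    r = y_obs - model(p)                       # model-data mismatch
    dp = p - p_prior                           # departure from the prior
    return float(r @ R_inv @ r + dp @ B_inv @ dp)

# Toy usage: a linear "model" with two parameters.
model = lambda p: np.array([p[0] + p[1], 2 * p[0]])
J = cost(np.array([1.0, 0.5]), np.zeros(2), np.eye(2),
         np.array([1.4, 2.1]), np.eye(2), model)
print(J)
```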
Bahia, Daljit; Cheung, Robert; Buchs, Mirjam; Geisse, Sabine; Hunt, Ian
2005-01-01
This report describes a method to culture insect cells in 24 deep-well blocks for the routine small-scale optimisation of baculovirus-mediated protein expression experiments. Miniaturisation of this process provides the necessary reduction in resource allocation, reagents and labour to allow extensive and rapid optimisation of expression conditions, with a concomitant reduction in lead-time before commencement of large-scale bioreactor experiments. This greatly simplifies the optimisation process and allows the use of liquid-handling robotics in much of the initial optimisation stages, thereby greatly increasing the throughput of the laboratory. We present several examples of the use of deep-well block expression studies in the optimisation of therapeutically relevant protein targets. We also discuss how the enhanced throughput offered by this approach can be adapted to robotic handling systems and the implications this has for the capacity to conduct multi-parallel protein expression studies.
Integration of environmental aspects in modelling and optimisation of water supply chains.
Koleva, Mariya N; Calderón, Andrés J; Zhang, Di; Styan, Craig A; Papageorgiou, Lazaros G
2018-04-26
Climate change is becoming increasingly relevant in the context of water systems planning. Tools are necessary to identify the most economic investment option while considering the reliability of the infrastructure from technical and environmental perspectives. Accordingly, in this work, an optimisation approach, formulated as a spatially explicit multi-period Mixed Integer Linear Programming (MILP) model, is proposed for the design of water supply chains at regional and national scales. The optimisation framework encompasses decisions such as the installation of new purification plants, capacity expansion, and raw water trading schemes. The objective is to minimise the total cost incurred from capital and operating expenditures. Assessment of the resources available for withdrawal is performed based on hydrological balances, governmental rules and sustainable limits. In the light of the increasing importance of reliability of water supply, a second objective, seeking to maximise the reliability of the supply chains, is introduced. The epsilon-constraint method is used as a solution procedure for the multi-objective formulation, and a Nash bargaining approach is applied to investigate fair trade-offs between the two objectives and find Pareto-optimal solutions. The models' capability is demonstrated through a case study based on Australia. The impact of variability in key input parameters is tackled through the implementation of a rigorous global sensitivity analysis (GSA). The findings suggest that variations in water demand can be more disruptive for the water supply chain than scenarios in which rainfall is reduced. The frameworks can facilitate governmental multi-aspect decision-making processes for adequate and strategic investment in regional water supply infrastructure. Copyright © 2018. Published by Elsevier B.V.
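The epsilon-constraint procedure itself is compact: one objective is minimised while the other is bounded by a sweep of epsilon values, each solve contributing one Pareto point. The toy sketch below applies it to an invented two-objective LP with scipy; it shares only the mechanics, not the data, of the water supply model.

```python
# Toy sketch of the epsilon-constraint method on a two-objective LP; the
# objective and constraint data are invented and unrelated to the water model.
import numpy as np
from scipy.optimize import linprog

c1 = np.array([1.0, 2.0])    # objective 1: cost (minimised)
c2 = np.array([-1.0, -1.0])  # objective 2: -reliability (minimised)
A = np.array([[1.0, 1.0]]); b = np.array([10.0])   # shared constraint x1 + x2 <= 10

pareto = []
for eps in np.linspace(-10, 0, 6):
    # minimise c1'x subject to c2'x <= eps (epsilon bound on the second objective)
    res = linprog(c1, A_ub=np.vstack([A, c2]), b_ub=np.append(b, eps),
                  bounds=[(0, None)] * 2)
    if res.success:
        pareto.append((res.fun, c2 @ res.x))   # one Pareto point per epsilon
print(pareto)
```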
An improved design method based on polyphase components for digital FIR filters
NASA Astrophysics Data System (ADS)
Kumar, A.; Kuldeep, B.; Singh, G. K.; Lee, Heung No
2017-11-01
This paper presents an efficient design method for digital finite impulse response (FIR) filters, based on polyphase components and swarm optimisation techniques (SOTs). For this purpose, the design problem is formulated as the mean square error between the actual and ideal responses in the frequency domain, expressed using the polyphase components of a prototype filter. To achieve a more precise frequency response at specified frequencies, fractional derivative constraints (FDCs) have been applied, and optimal FDCs are computed using SOTs such as the cuckoo search and modified cuckoo search algorithms. A comparative study with well-proven swarm optimisation techniques, namely particle swarm optimisation and the artificial bee colony algorithm, is made. The proposed method is evaluated using several important filter attributes, and the comparative study demonstrates its effectiveness for the design of FIR filters.
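A stripped-down version of the underlying idea, minimising the frequency-domain mean square error of an FIR filter with a particle swarm, can be sketched as follows. The polyphase decomposition and fractional derivative constraints of the paper are omitted, and all settings (filter length, cutoff, swarm constants) are arbitrary choices.

```python
# Highly simplified sketch: PSO over FIR coefficients to minimise the
# frequency-domain MSE against an ideal lowpass response.
import numpy as np

rng = np.random.default_rng(1)
N, n_freq = 16, 256                          # filter length, frequency grid
w = np.linspace(0, np.pi, n_freq)
H_ideal = (w <= 0.4 * np.pi).astype(float)   # ideal lowpass, cutoff 0.4*pi
E = np.exp(-1j * np.outer(w, np.arange(N)))  # Fourier matrix: H(w) = E @ h

def mse(h):
    return np.mean(np.abs(E @ h - H_ideal) ** 2)

# Standard PSO loop: inertia + cognitive + social terms.
P = rng.normal(0, 0.1, (30, N)); V = np.zeros_like(P)
pbest, pbest_f = P.copy(), np.array([mse(h) for h in P])
g = pbest[pbest_f.argmin()].copy()
for _ in range(300):
    r1, r2 = rng.random(P.shape), rng.random(P.shape)
    V = 0.7 * V + 1.5 * r1 * (pbest - P) + 1.5 * r2 * (g - P)
    P = P + V
    f = np.array([mse(h) for h in P])
    better = f < pbest_f
    pbest[better], pbest_f[better] = P[better], f[better]
    g = pbest[pbest_f.argmin()].copy()
print("best MSE:", pbest_f.min())
```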
Das, Anup Kumar; Mandal, Vivekananda; Mandal, Subhash C
2014-01-01
Extraction forms the very basic step in research on natural products for drug discovery; a poorly optimised and planned extraction methodology can jeopardise the entire mission. The aim of this review is to provide a vivid picture of the different chemometric tools and planning involved in process optimisation and method development for the extraction of botanical material, with emphasis on microwave-assisted extraction (MAE). Studies involving the application of chemometric tools in combination with MAE of botanical materials were reviewed in order to identify the significant extraction factors. To optimise a response by fine-tuning those factors, experimental design or statistical design of experiments (DoE), a core area of chemometrics, was then used for statistical analysis and interpretation. In this review a brief explanation of the different aspects and methodologies related to MAE of botanical materials that were subjected to experimental design, along with some general chemometric tools and the steps involved in the practice of MAE, is presented. A detailed study of the various factors and responses involved in the optimisation is also presented. This article will assist in obtaining better insight into chemometric strategies for process optimisation and method development, which will in turn improve decision-making in the selection of influential extraction parameters. Copyright © 2013 John Wiley & Sons, Ltd.
Global health diplomacy, 'smart power', and the new world order.
Kevany, Sebastian
2014-01-01
Both the theory and practice of foreign policy and diplomacy, including systems of hard and soft power, are undergoing paradigm shifts, with an increasing number of innovative actors and strategies contributing to international relations outcomes in the 'New World Order'. Concurrently, global health programmes continue to ascend the political spectrum in scale, scope and influence. This concatenation of circumstances has demanded a re-examination of the existing and potential effectiveness of global health programmes in the 'smart power' context, based on adherence to a range of design, implementation and assessment criteria, which may simultaneously optimise their humanitarian, foreign policy and diplomatic effectiveness. A synthesis of contemporary characteristics of 'global health diplomacy' and 'global health as foreign policy', grouped by common themes and generated in the context of related field experiences, are presented in the form of 'Top Ten' criteria lists for optimising both diplomatic and foreign policy effectiveness of global health programmes, and criteria are presented in concert with an examination of implications for programme design and delivery. Key criteria for global health programmes that are sensitised to both diplomatic and foreign policy goals include visibility, sustainability, geostrategic considerations, accountability, effectiveness and alignment with broader policy objectives. Though diplomacy is a component of foreign policy, criteria for 'diplomatically-sensitised' versus 'foreign policy-sensitised' global health programmes were not always consistent, and were occasionally in conflict, with each other. The desirability of making diplomatic and foreign policy criteria explicit, rather than implicit, in the context of global health programme design, delivery and evaluation are reflected in the identified implications for (1) international security, (2) programme evaluation, (3) funding and resource allocation decisions, (4) approval systems and (5) training. On this basis, global health programmes are shown to provide a valuable, yet underutilised, tool for diplomacy and foreign policy purposes, including their role in the pursuit of benign international influence. A corresponding alignment of resources between 'hard' and 'smart' power options is encouraged.
NASA Astrophysics Data System (ADS)
Yadav, Naresh Kumar; Kumar, Mukesh; Gupta, S. K.
2017-03-01
The general strategic bidding procedure has been formulated in the literature as a bi-level search problem, in which the offer curve tends to minimise the market clearing function and to maximise the profit. Computationally, this is complex; hence, researchers have adopted Karush-Kuhn-Tucker (KKT) optimality conditions to transform the model into a single-level maximisation problem. However, the profit maximisation problem with KKT optimality conditions poses a great challenge to classical optimisation algorithms, and the problem has become more complex after the inclusion of transmission constraints. This paper simplifies the profit maximisation problem as a minimisation function, in which the transmission constraints, the operating limits and the ISO market clearing functions are considered with no KKT optimality conditions. The derived function is solved using the group search optimiser (GSO), a robust population-based optimisation algorithm. Experimental investigation is carried out on the IEEE 14-bus and IEEE 30-bus systems, and the performance is compared against differential evolution-based, genetic algorithm-based and particle swarm optimisation-based strategic bidding methods. The simulation results demonstrate that the profit obtained through GSO-based bidding strategies is higher than with the other three methods.
Arjunan, V; Raj, Arushma; Ravindran, P; Mohan, S
2014-01-24
The vibrational fundamental modes of 2-(methylthio)benzimidazole (2MTBI) have been analysed by combining FTIR, FT-Raman and quantum chemical calculations. The structural parameters of the compound were determined from the geometry optimised by B3LYP with the 6-31G**, 6-311++G** and cc-pVTZ basis sets, giving energies, harmonic vibrational frequencies, depolarisation ratios, IR intensities and Raman activities. The ¹H and ¹³C NMR spectra have been analysed, and ¹H and ¹³C nuclear magnetic resonance chemical shifts were calculated using the gauge-independent atomic orbital (GIAO) method. The structure-activity relationship of the compound is also investigated by conceptual DFT methods. The chemical reactivity and site selectivity of the molecule have been determined with the help of global and local reactivity descriptors. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Sheikhan, Mansour; Abbasnezhad Arabi, Mahdi; Gharavian, Davood
2015-10-01
Artificial neural networks are efficient models in pattern recognition applications, but their performance depends on employing a suitable structure and connection weights. This study used a hybrid method for obtaining the optimal weight set and architecture of a recurrent neural emotion classifier, based on the gravitational search algorithm (GSA) and its binary version (BGSA), respectively. By considering features of the speech signal related to prosody, voice quality and spectrum, a rich feature set was constructed, and a fast feature selection method was employed to select the more efficient features. The performance of the proposed hybrid GSA-BGSA method was compared with that of similar hybrid methods based on the particle swarm optimisation (PSO) algorithm and its binary version, PSO and the discrete firefly algorithm, and a hybrid of error back-propagation and the genetic algorithm. Experimental tests on the Berlin emotional database demonstrated the superior performance of the proposed method using a lighter network structure.
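The core GSA iteration can be sketched compactly: agent masses are derived from fitness, gravitational attraction between agents yields accelerations, and stochastic velocity updates move the agents. The version below is a simplified continuous GSA on a placeholder objective (no Kbest schedule, no network training), with typical parameter values rather than the paper's settings.

```python
# Compact sketch of a gravitational search algorithm iteration (continuous
# version, after Rashedi et al.); the objective is a placeholder.
import numpy as np

def gsa(cost, bounds, n=20, iters=300, G0=100.0, alpha=20.0,
        rng=np.random.default_rng(0)):
    lo, hi = bounds[:, 0], bounds[:, 1]
    X = rng.uniform(lo, hi, (n, len(lo))); V = np.zeros_like(X)
    for t in range(iters):
        f = np.array([cost(x) for x in X])
        G = G0 * np.exp(-alpha * t / iters)          # decaying gravity constant
        m = (f.max() - f) / (f.max() - f.min() + 1e-12)
        M = m / (m.sum() + 1e-12)                    # normalised masses
        acc = np.zeros_like(X)
        for i in range(n):                           # attraction from all agents
            diff = X - X[i]
            dist = np.linalg.norm(diff, axis=1) + 1e-12
            acc[i] = np.sum(rng.random((n, 1)) * G * M[:, None]
                            / dist[:, None] * diff, axis=0)
        V = rng.random(X.shape) * V + acc            # stochastic velocity update
        X = np.clip(X + V, lo, hi)
    f = np.array([cost(x) for x in X])
    return X[f.argmin()], f.min()

bounds = np.array([[-5.0, 5.0]] * 4)
print(gsa(lambda x: np.sum(x ** 2), bounds))
```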
NASA Astrophysics Data System (ADS)
Munk, David J.; Kipouros, Timoleon; Vio, Gareth A.; Steven, Grant P.; Parks, Geoffrey T.
2017-11-01
Recently, the study of microfluidic devices has gained much interest in various fields, from biology to engineering. In the constant development cycle, the need to optimise the topology of the interior of these devices, where there are two or more optimality criteria, is always present. In this work, twin physical situations are considered, whereby optimal fluid mixing in the form of vorticity maximisation is accompanied by the requirement that the casing in which the mixing takes place has the best structural performance in terms of the greatest specific stiffness. In the steady state of mixing this also means that the stresses in the casing are as uniform as possible, thus giving a desired operating life with minimum weight. The ultimate aim of this research is to couple two key disciplines, fluids and structures, into a topology optimisation framework that shows fast convergence for multidisciplinary optimisation problems. This is achieved by developing a bi-directional evolutionary structural optimisation algorithm that is directly coupled to the Lattice Boltzmann method, used for simulating the flow in the microfluidic device, for the objectives of minimum compliance and maximum vorticity. The need to explore larger design spaces and to produce innovative designs makes meta-heuristic algorithms, such as genetic algorithms, particle swarms and Tabu searches, less efficient for this task. The multidisciplinary topology optimisation framework presented in this article is shown to increase the stiffness of the structure from the datum case and to produce physically acceptable designs. Furthermore, the topology optimisation method outperforms a Tabu search algorithm in designing the baffle to maximise the mixing of the two fluids.
Optimisation of reconstruction-reprojection-based motion correction for cardiac SPECT.
Kangasmaa, Tuija S; Sohlberg, Antti O
2014-07-01
Cardiac motion is a challenging cause of image artefacts in myocardial perfusion SPECT. A wide range of motion correction methods have been developed over the years, and so far automatic algorithms based on the reconstruction-reprojection principle have proved to be the most effective. However, these methods have not been fully optimised in terms of their free parameters and implementation details. Two slightly different implementations of reconstruction-reprojection-based motion correction were optimised for effective, good-quality motion correction and then compared with each other. The first method (Method 1) was the traditional reconstruction-reprojection motion correction algorithm, where the motion correction is done in projection space, whereas the second (Method 2) performed motion correction in reconstruction space. The parameters that were optimised include the type of cost function (squared difference, normalised cross-correlation and mutual information) used to compare measured and reprojected projections, and the number of iterations needed. The methods were tested with motion-corrupted projection datasets, which were generated by adding three different types of motion (lateral shift, vertical shift and vertical creep) to motion-free cardiac perfusion SPECT studies. Method 2 performed slightly better overall than Method 1, but the difference between the two implementations was small, and the execution time for Method 2 was much longer, which limits its clinical usefulness. The mutual information cost function clearly gave the best results for all three motion sets and both correction methods. Three iterations were sufficient for a good-quality correction using Method 1. The traditional reconstruction-reprojection-based method with three update iterations and a mutual information cost function is a good option for motion correction in clinical myocardial perfusion SPECT.
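A histogram-based mutual information cost of the kind compared here can be computed in a few lines. The sketch below scores a measured projection against a noisy stand-in reprojection; the image sizes, counts and bin numbers are arbitrary choices, not the study's settings.

```python
# Sketch of a histogram-based mutual information cost between a measured
# projection and a reprojection; inputs are synthetic stand-ins.
import numpy as np

def mutual_information(a, b, bins=32):
    """MI(a, b) from the joint histogram of two equally shaped images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0                                  # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))

rng = np.random.default_rng(0)
measured = rng.poisson(50, (64, 64)).astype(float)
reproj = measured + rng.normal(0, 5, (64, 64))    # imperfect reprojection
print(mutual_information(measured, reproj))
```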
Chiroma, Haruna; Abdul-kareem, Sameem; Khan, Abdullah; Nawi, Nazri Mohd.; Gital, Abdulsalam Ya’u; Shuib, Liyana; Abubakar, Adamu I.; Rahman, Muhammad Zubair; Herawan, Tutut
2015-01-01
Background: Global warming is attracting attention from policy makers due to its impacts such as floods, extreme weather, increases in temperature by 0.7°C, heat waves, storms, etc. These disasters result in loss of human life and billions of dollars in property. Global warming is believed to be caused by the emissions of greenhouse gases due to human activities, including the emissions of carbon dioxide (CO2) from petroleum consumption. Limitations of the previous methods of predicting CO2 emissions, and the lack of work on predicting the CO2 emissions of the Organization of the Petroleum Exporting Countries (OPEC) from petroleum consumption, motivated this research. Methods/Findings: The OPEC CO2 emissions data were collected from the Energy Information Administration. The adaptability and performance of the Artificial Neural Network (ANN) motivated its choice for this study. To improve the effectiveness of the ANN, the cuckoo search algorithm was hybridised with accelerated particle swarm optimisation for training the ANN to build a model for the prediction of OPEC CO2 emissions. The proposed model predicts OPEC CO2 emissions for 3, 6, 9, 12 and 16 years with improved accuracy and speed over the state-of-the-art methods. Conclusion: An accurate prediction of OPEC CO2 emissions can serve as a reference point for propagating the reorganisation of economic development in OPEC member countries with a view to reducing CO2 emissions to Kyoto benchmarks, hence reducing global warming. The policy implications are discussed in the paper.
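The Lévy-flight step that distinguishes cuckoo search is usually generated with Mantegna's algorithm. The sketch below shows that step inside a bare-bones cuckoo search on a placeholder objective; the nest-abandonment step of the full algorithm and the ANN training objective of the paper are omitted.

```python
# Sketch of the Levy-flight step (Mantegna's algorithm) inside a bare-bones
# cuckoo search; the fitness function is a stand-in, not the ANN loss.
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, rng, beta=1.5):
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)            # heavy-tailed step

fitness = lambda x: np.sum(x ** 2)                # placeholder objective
rng = np.random.default_rng(0)
nests = rng.uniform(-5, 5, (15, 4)); f = np.array([fitness(n) for n in nests])
for _ in range(200):
    best = nests[f.argmin()]
    for i in range(len(nests)):
        trial = nests[i] + 0.01 * levy_step(4, rng) * (nests[i] - best)
        if fitness(trial) < f[i]:                 # greedy replacement
            nests[i], f[i] = trial, fitness(trial)
print("best:", f.min())
```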
Optimisation of wire-cut EDM process parameter by Grey-based response surface methodology
NASA Astrophysics Data System (ADS)
Kumar, Amit; Soota, Tarun; Kumar, Jitendra
2018-03-01
Wire electric discharge machining (WEDM) is one of the advanced machining processes. Response surface methodology coupled with Grey relational analysis has been proposed and used to optimise the machining parameters of WEDM. A face-centred cubic design is used for conducting experiments on a high-speed steel (HSS) M2 grade workpiece. A regression model of the significant factors, namely pulse-on time, pulse-off time, peak current and wire feed, is considered for optimising the response variables: material removal rate (MRR), surface roughness and kerf width. The optimal machining conditions were obtained using the Grey relational grade, and ANOVA was applied to determine the significance of the input parameters for the Grey relational grade.
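The Grey relational grade used to collapse the three responses into a single score follows a standard recipe: normalise each response, compute its deviation from the ideal sequence, form Grey relational coefficients and average them per run. The sketch below implements that recipe on invented response values, with the customary distinguishing coefficient zeta = 0.5.

```python
# Sketch of the Grey relational grade computation; response values are invented.
import numpy as np

def grey_relational_grade(X, larger_better, zeta=0.5):
    """X: experiments x responses. Normalise, then average Grey coefficients."""
    Xn = np.where(larger_better,
                  (X - X.min(0)) / (X.max(0) - X.min(0)),    # larger-the-better
                  (X.max(0) - X) / (X.max(0) - X.min(0)))    # smaller-the-better
    delta = np.abs(1.0 - Xn)                  # deviation from the ideal sequence
    xi = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return xi.mean(axis=1)                    # grade = mean coefficient per run

# Three runs with responses [MRR (maximise), roughness, kerf (both minimise)].
X = np.array([[2.1, 1.9, 0.30],
              [2.8, 2.4, 0.28],
              [2.4, 1.6, 0.33]])
grade = grey_relational_grade(X, larger_better=np.array([True, False, False]))
print(grade, "best run:", grade.argmax())
```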
Global Corporate Priorities and Demand-Led Learning Strategies
ERIC Educational Resources Information Center
Dealtry, Richard
2008-01-01
Purpose: The purpose of this article is to start the process of exploring how to optimise connections between the strategic needs of an organisation as directed by top management and its learning management structures and strategies. Design/methodology/approach: The article takes a broad brush approach to a complex and large subject area that is…
Optimising Meritocratic Advantage with the International Baccalaureate Diploma in Australian Schools
ERIC Educational Resources Information Center
Doherty, Catherine
2012-01-01
This paper explores two of the tensions Tarc identifies in the history of the International Baccalaureate (IB) Diploma: firstly, between its design for meritocratic competition and its internationalist vision and, secondly, between the IB as a global commodity and its localised interpretations. Using data from three case studies of Australian…
Use of a genetic algorithm to improve the rail profile on Stockholm underground
NASA Astrophysics Data System (ADS)
Persson, Ingemar; Nilsson, Rickard; Bik, Ulf; Lundgren, Magnus; Iwnicki, Simon
2010-12-01
In this paper, a genetic algorithm optimisation method has been used to develop an improved rail profile for Stockholm underground. An inverted penalty index based on a number of key performance parameters was generated as a fitness function and vehicle dynamics simulations were carried out with the multibody simulation package Gensys. The effectiveness of each profile produced by the genetic algorithm was assessed using the roulette wheel method. The method has been applied to the rail profile on the Stockholm underground, where problems with rolling contact fatigue on wheels and rails are currently managed by grinding. From a starting point of the original BV50 and the UIC60 rail profiles, an optimised rail profile with some shoulder relief has been produced. The optimised profile seems similar to measured rail profiles on the Stockholm underground network and although initial grinding is required, maintenance of the profile will probably not require further grinding.
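Roulette-wheel selection simply draws parents with probability proportional to fitness, here illustrated with an inverted penalty index as the fitness measure. The values below are invented; only the mechanism mirrors the one named in the abstract.

```python
# Minimal sketch of roulette-wheel (fitness-proportionate) selection; the
# penalty values are illustrative, not the study's performance index.
import numpy as np

def roulette_select(fitness, n_parents, rng=np.random.default_rng(0)):
    p = np.asarray(fitness, dtype=float)
    p = p / p.sum()                       # selection probability ~ fitness share
    return rng.choice(len(p), size=n_parents, p=p)

# Inverted penalty index as fitness: lower penalty -> higher selection chance.
penalty = np.array([4.2, 1.3, 2.7, 0.9, 3.1])
print(roulette_select(1.0 / penalty, n_parents=4))
```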
rPM6 parameters for phosphorous and sulphur-containing open-shell molecules
NASA Astrophysics Data System (ADS)
Saito, Toru; Takano, Yu
2018-03-01
In this article, we introduce a reparameterisation of PM6 (rPM6) for phosphorus and sulphur to achieve a better description of open-shell species containing these two elements. Two sets of parameters have been optimised separately using our training sets. The performance of the spin-unrestricted rPM6 (UrPM6) method with the optimised parameters is evaluated on 14 radical species, each containing either a phosphorus or a sulphur atom, in comparison with the original UPM6 and spin-unrestricted density functional theory (UDFT) methods. The standard UPM6 calculations fail to describe the adiabatic singlet-triplet energy gaps correctly and may cause significant structural mismatches with UDFT-optimised geometries. Leaving aside three difficult cases, tests on 11 open-shell molecules strongly indicate the superior performance of UrPM6, which provides much better agreement with the results of UDFT methods for geometric and electronic properties.
A target recognition method for maritime surveillance radars based on hybrid ensemble selection
NASA Astrophysics Data System (ADS)
Fan, Xueman; Hu, Shengliang; He, Jingbo
2017-11-01
In order to improve the generalisation ability of the maritime surveillance radar, a novel ensemble selection technique, termed Optimisation and Dynamic Selection (ODS), is proposed. During the optimisation phase, the non-dominated sorting genetic algorithm II for multi-objective optimisation is used to find the Pareto front, i.e. a set of ensembles of classifiers representing different trade-offs between classification error and diversity. During the dynamic selection phase, a meta-learning method is used to predict whether a candidate ensemble is competent enough to classify a query instance, based on three different aspects, namely the feature space, the decision space and the extent of consensus. The classification performance and time complexity of ODS are compared against nine other ensemble methods using a self-built fully polarimetric high-resolution range profile dataset. The experimental results clearly show the effectiveness of ODS. In addition, the influence of the selection of diversity measures is studied concurrently.
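The optimisation phase rests on the notion of Pareto dominance over the (error, diversity) objectives. A minimal dominance check that extracts the non-dominated set, the building block inside NSGA-II's non-dominated sorting, can be sketched as follows with invented objective values.

```python
# Sketch of extracting the Pareto front over (error, 1 - diversity) scores for
# candidate ensembles; both objectives are minimised and the values are made up.
import numpy as np

def pareto_front(F):
    """Return indices of non-dominated rows of F (all objectives minimised)."""
    n = len(F)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if j != i and np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                keep[i] = False               # i is dominated by j
                break
    return np.flatnonzero(keep)

# (classification error, 1 - diversity) for six candidate ensembles
F = np.array([[0.10, 0.8], [0.12, 0.5], [0.15, 0.4],
              [0.11, 0.9], [0.20, 0.3], [0.13, 0.6]])
print(pareto_front(F))
```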
Bonmati, Ester; Hu, Yipeng; Gibson, Eli; Uribarri, Laura; Keane, Geri; Gurusami, Kurinchi; Davidson, Brian; Pereira, Stephen P; Clarkson, Matthew J; Barratt, Dean C
2018-06-01
Navigation of endoscopic ultrasound (EUS)-guided procedures of the upper gastrointestinal (GI) system can be technically challenging due to the small fields-of-view of ultrasound and optical devices, as well as the anatomical variability and limited number of orienting landmarks during navigation. Co-registration of an EUS device and a pre-procedure 3D image can enhance the ability to navigate. However, the fidelity of this contextual information depends on the accuracy of registration. The purpose of this study was to develop and test the feasibility of a simulation-based planning method for pre-selecting patient-specific EUS-visible anatomical landmark locations to maximise the accuracy and robustness of a feature-based multimodality registration method. A registration approach was adopted in which landmarks are registered to anatomical structures segmented from the pre-procedure volume. The predicted target registration errors (TREs) of EUS-CT registration were estimated using simulated visible anatomical landmarks and a Monte Carlo simulation of landmark localisation error. The optimal planes were selected based on the 90th percentile of TREs, which provides a robust and more accurate EUS-CT registration initialisation. The method was evaluated by comparing the accuracy and robustness of registrations initialised using optimised planes versus non-optimised planes, using manually segmented CT images and simulated or retrospective clinical EUS landmarks. The results show a statistically significantly lower 90th percentile TRE when registration is initialised using the optimised planes compared with a non-optimised initialisation approach. The proposed simulation-based method to find optimised EUS planes and landmarks for EUS-guided procedures may have the potential to improve registration accuracy. Further work will investigate applying the technique in a clinical setting.
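The Monte Carlo estimation of a 90th-percentile TRE can be sketched generically: perturb landmark positions with an assumed localisation noise, re-fit a rigid transform, and record the error at a target point. The geometry, noise level and landmark count below are all invented for illustration.

```python
# Sketch of a Monte Carlo TRE estimate: landmarks are perturbed with Gaussian
# localisation noise, a rigid transform is re-fitted (Kabsch/Procrustes), and
# the 90th-percentile error at a target point is reported. All values invented.
import numpy as np

def fit_rigid(P, Q):
    """Least-squares rotation R and translation t with R @ P[i] + t ~ Q[i]."""
    cP, cQ = P.mean(0), Q.mean(0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

rng = np.random.default_rng(0)
landmarks = rng.uniform(-30, 30, (6, 3))     # mm, hypothetical visible points
target = np.array([10.0, 5.0, 40.0])         # point of interest away from them
tres = []
for _ in range(2000):
    noisy = landmarks + rng.normal(0, 2.0, landmarks.shape)  # 2 mm picking noise
    R, t = fit_rigid(noisy, landmarks)       # registration from noisy picks
    tres.append(np.linalg.norm((R @ target + t) - target))
print("90th percentile TRE [mm]:", np.percentile(tres, 90))
```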
Zarb, Francis; McEntee, Mark F; Rainford, Louise
2015-06-01
The aim was to evaluate visual grading characteristics (VGC) and ordinal regression analysis during head CT optimisation as a potential alternative to visual grading assessment (VGA), traditionally employed to score anatomical visualisation. Patient images (n = 66) were obtained using current and optimised imaging protocols from two CT suites: a 16-slice scanner at the national Maltese centre for trauma and a 64-slice scanner in a private centre. Local resident radiologists (n = 6) performed VGA, followed by VGC and ordinal regression analysis. VGC alone indicated that the optimised protocols had image quality similar to the current protocols. Ordinal logistic regression analysis provided an in-depth, criterion-by-criterion evaluation, allowing the selective implementation of the protocols. The local radiology review panel supported the implementation of optimised protocols for brain CT examinations (including trauma) in one centre, achieving radiation dose reductions ranging from 24% to 36%. In the second centre a 29% reduction in radiation dose was achieved for follow-up cases. The combined use of VGC and ordinal logistic regression analysis led to clinical decisions being taken on the implementation of the optimised protocols. This improved method of image quality analysis provided the evidence to support imaging protocol optimisation, resulting in significant radiation dose savings. • There is a need for scientifically based image quality evaluation during CT optimisation. • VGC and ordinal regression analysis in combination led to better-informed clinical decisions. • VGC and ordinal regression analysis led to dose reductions without compromising diagnostic efficacy.
Song, Jing-Zheng; Han, Quan-Bin; Qiao, Chun-Feng; But, Paul Pui-Hay; Xu, Hong-Xi
2010-01-01
Aconites, with aconite alkaloids as the major therapeutic and toxic components, are used for analgesic, antirheumatic and neurological indications. Quantification of the aconite alkaloids is important for the quality control of aconite-containing drugs. The objective was to establish a validated capillary zone electrophoresis (CZE) method for the simultaneous determination of six major alkaloids, namely aconitine, mesaconitine, hypaconitine, benzoylaconine, benzoylmesaconine and benzoylhypaconine, in crude and processed aconite roots. The CZE method was optimised and validated using a stability-indicating approach. The optimised running buffer was a mixture of 200 mM Tris, 150 mM perchloric acid and 40% 1,4-dioxane (pH 7.8), with the capillary thermostated at 25 °C. Using the optimised method, the six aconite alkaloids were well separated, and the established method showed good precision, accuracy and recovery. The contents of these alkaloids in crude and processed aconites were determined, and the levels of individual alkaloids were observed to vary between samples. The developed CZE method is reliable for the quality control of aconites contained in herbal medicines and could also be used as an approach for toxicological studies.
Le, Van So; Do, Zoe Phuc-Hien; Le, Minh Khoi; Le, Vicki; Le, Natalie Nha-Truc
2014-06-10
Methods of increasing the performance of radionuclide generators used in nuclear medicine radiotherapy and SPECT/PET imaging were developed and detailed, taking the 99Mo/99mTc and 68Ge/68Ga generators as cases. Optimisation methods relating daughter nuclide build-up to stand-by time and/or specific activity, using mean progress functions, were developed to increase generator performance. As a result of this optimisation, the separation of the daughter nuclide from its parent should be performed at a defined optimal time to avoid deterioration in the specific activity of the daughter nuclide and wasted generator stand-by time, while keeping the daughter nuclide yield reasonably high. A new characteristic parameter of the formation-decay kinetics of the parent/daughter nuclide system was found and effectively used in the practice of generator production and utilisation. A method of "early elution schedule" was also developed for increasing the daughter nuclide production yield and specific radioactivity, thus saving the cost of the generator and improving the quality of the daughter radionuclide solution. These newly developed optimisation methods, in combination with a recently developed integrated elution-purification-concentration system, offer the most suitable way to operate the generator effectively, combining economic use with improved, fit-for-purpose quality and specific activity of the produced daughter radionuclides. All these features benefit the economic use of the generator, the quality of labelling/scans, and the cost of nuclear medicine procedures. In addition, a new method of setting up a quality control protocol for post-delivery testing of radionuclidic purity has been developed, based on the relationship between the gamma-ray spectrometric detection limit, the required limit on impure radionuclide activity, and the certainty of its measurement, with respect to optimising the decay/measurement time and the product sample activity used for quality control. The optimisation ensures certainty in the measurement of the specific impure radionuclide and avoids wasting useful amounts of the valuable purified/concentrated daughter nuclide product. This process is important for the spectrometric measurement of very low activities of impure radionuclide contamination in radioisotope products of much higher activity used in medical imaging and targeted radiotherapy.
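For the 99Mo/99mTc case, the trade-off between stand-by time and daughter build-up follows directly from the Bateman relation, whose maximum fixes the longest sensible elution interval. The sketch below uses the approximate physical half-lives but ignores the roughly 88% branching of 99Mo decays that feed 99mTc, so the activities shown are relative only.

```python
# Sketch of 99mTc build-up after elution via the Bateman relation; half-lives
# are approximate physical constants, activities are relative (branching to
# 99mTc is ignored for simplicity).
import numpy as np

T_MO, T_TC = 66.0, 6.01                    # approximate half-lives in hours
lp, ld = np.log(2) / T_MO, np.log(2) / T_TC

def tc99m_activity(t, A_mo0=1.0):
    """Relative 99mTc activity t hours after a complete elution."""
    return A_mo0 * ld / (ld - lp) * (np.exp(-lp * t) - np.exp(-ld * t))

# Analytic optimum of the build-up curve:
t_max = np.log(ld / lp) / (ld - lp)
print(f"maximum build-up at ~{t_max:.1f} h")   # ~22.9 h, hence daily elution
print(tc99m_activity(np.array([6.0, 12.0, 22.9, 24.0])))
```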
Comparison of simulation and experimental results for a gas puff nozzle on Ambiorix
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnier, J-N.; Chevalier, J-M.; Dubroca, B.
One of the source terms of Z-pinch experiments is the gas puff density profile. In order to characterise the gas jet, an experiment based on interferometry was performed. The first study was a point measurement (a section density profile), which led us to develop a global and instantaneous interferometric imaging method. In order to optimise the nozzle, we simulated the experiment with a flow calculation code (ARES). In this paper, the experimental results are compared with simulations. The different gas properties (He, Ne, Ar) and the flow duration lead us to take care, on the one hand, of the gas viscosity and, on the other, of modifying the code for unsteady flow.
Reactive power planning under high penetration of wind energy using Benders decomposition
Xu, Yan; Wei, Yanli; Fang, Xin; ...
2015-11-05
This study addresses the optimal allocation of reactive power (volt-ampere reactive, VAR) sources under the paradigm of high penetration of wind energy. Reactive power planning (RPP) in this condition involves a high level of uncertainty because of the characteristics of wind power. To properly model wind generation uncertainty, a multi-scenario framework optimal power flow is developed that considers the voltage stability constraint under the worst wind scenario and transmission N-1 contingency. The objective of RPP in this study is to minimise the total cost, including the VAR investment cost and the expected generation cost. RPP under this condition is therefore modelled as a two-stage stochastic programming problem: the VAR location and size are optimised in one stage, the fuel cost is minimised in the other, and the global optimal RPP result is found iteratively. Benders decomposition is used to solve this model, with an upper-level problem (master problem) for VAR allocation optimisation and a lower-level problem (sub-problem) for generation cost minimisation. The impact of potential reactive power support from doubly-fed induction generators (DFIGs) is also analysed. Lastly, case studies on the IEEE 14-bus and 118-bus systems are provided to verify the proposed method.
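The Benders loop alternates between a master problem that fixes the binary investment decisions and a subproblem whose dual solution generates an optimality cut. The toy sketch below reproduces that mechanics on invented data, enumerating a tiny binary master instead of calling a MILP solver; it is not the paper's RPP formulation.

```python
# Toy sketch of the Benders loop: a master places binary "VAR" units, a linear
# subproblem prices the recourse cost, and subproblem duals feed optimality
# cuts back to the master. All data invented; the binary master is enumerated.
import itertools
import numpy as np
from scipy.optimize import linprog

f = np.array([3.0, 2.0])                       # investment cost of each unit
c = np.array([1.0, 4.0])                       # recourse (operating) costs
A = np.array([[1.0, 1.0], [2.0, 0.5]])         # subproblem: A x >= b - B y, x >= 0
b = np.array([6.0, 5.0])
B = np.array([[2.0, 1.0], [1.0, 3.0]])

def dual_sub(y):
    rhs = b - B @ y                            # dual: max rhs'u s.t. A'u <= c, u >= 0
    res = linprog(-rhs, A_ub=A.T, b_ub=c, bounds=[(0, None)] * 2)
    return res.x, -res.fun

cuts, best = [], (np.inf, None)
for it in range(10):
    # Master: enumerate binary y; theta is bounded below by all cuts so far.
    cand = []
    for y in itertools.product([0, 1], repeat=2):
        y = np.array(y, float)
        theta = max(((b - B @ y) @ u for u in cuts), default=0.0)
        cand.append((f @ y + theta, y))
    lower, y = min(cand, key=lambda t: t[0])
    u, sub_val = dual_sub(y)                   # price the chosen design
    upper = f @ y + sub_val
    if upper < best[0]:
        best = (upper, y)
    if upper - lower < 1e-6:                   # bounds have met: optimal
        break
    cuts.append(u)                             # add the optimality cut
print("optimal design:", best[1], "total cost:", round(best[0], 3))
```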
Reverse engineering a gene network using an asynchronous parallel evolution strategy
2010-01-01
Background: The use of reverse engineering methods to infer gene regulatory networks by fitting mathematical models to gene expression data is becoming increasingly popular and successful. However, increasing model complexity means that more powerful global optimisation techniques are required for model fitting. The parallel Lam Simulated Annealing (pLSA) algorithm has been used in such approaches, but recent research has shown that island Evolutionary Strategies can produce faster, more reliable results. However, no parallel island Evolutionary Strategy (piES) has yet been demonstrated to be effective for this task. Results: Here, we present synchronous and asynchronous versions of the piES algorithm, and apply them to a real reverse engineering problem: inferring parameters in the gap gene network. We find that the asynchronous piES exhibits very little communication overhead, and shows significant speed-up for up to 50 nodes: the piES running on 50 nodes is nearly 10 times faster than the best serial algorithm. We compare the asynchronous piES to pLSA on the same test problem, measuring the time required to reach particular levels of residual error, and show that it shows much faster convergence than pLSA across all optimisation conditions tested. Conclusions: Our results demonstrate that the piES is consistently faster and more reliable than the pLSA algorithm on this problem, and scales better with increasing numbers of nodes. In addition, the piES is especially well suited to further improvements and adaptations: firstly, the algorithm's fast initial descent speed and high reliability make it a good candidate for use as part of a global/local hybrid search algorithm; secondly, it has the potential to be used as part of a hierarchical evolutionary algorithm, which takes advantage of modern multi-core computing architectures.
Santonastaso, Giovanni Francesco; Bortone, Immacolata; Chianese, Simeone; Di Nardo, Armando; Di Natale, Michele; Erto, Alessandro; Karatza, Despina; Musmarra, Dino
2017-09-19
This paper presents a method to optimise a discontinuous permeable adsorptive barrier (PAB-D). The method is based on the comparison of different PAB-D configurations obtained by changing some of the main PAB-D design parameters: the well diameters, the distance between two consecutive passive wells, and the distance between two consecutive well lines were varied, and a cost analysis for each configuration was carried out in order to identify the best-performing and most cost-effective PAB-D configuration. As a case study, a benzene-contaminated aquifer located in an urban area in the north of Naples (Italy) was considered. The PAB-D configuration with a well diameter of 0.8 m proved to be the best layout in terms of performance and cost-effectiveness. Moreover, in order to identify the best configuration for the remediation of the studied aquifer, a comparison with a continuous permeable adsorptive barrier (PAB-C) was added; this showed a 40% reduction in total remediation costs when using the optimised PAB-D.
Optimisation of vertical trajectories using the harmony search method
NASA Astrophysics Data System (ADS)
Ruby, Margaux
In the face of global warming, there is an urgent need for solutions that reduce CO2 emissions. Trajectory optimisation is one way to reduce fuel consumption during a flight. Different algorithms have been developed to determine the optimal trajectory of the aircraft. Their goal is to minimise the total cost of a flight, which is directly linked to fuel consumption and flight time. Another parameter, called the cost index, is considered in the definition of the flight cost. Fuel consumption is obtained from performance data for each flight phase. In this thesis, the phases of a complete flight are studied: climb, cruise and descent. Step climbs, defined as climbs of 2,000 ft during the cruise phase, are also studied. The algorithm developed in this thesis is a metaheuristic called harmony search, which combines two types of search: local search and population-based search. The algorithm is inspired by the behaviour of musicians in a concert, or more precisely by the ability of music to find its best harmony, which in optimisation terms is the lowest cost. Various inputs, including the aircraft weight, the destination, the initial aircraft speed and the number of iterations, must be supplied to the algorithm so that it can determine the optimal solution, defined as: [climb speed, altitude, cruise speed, descent speed]. The algorithm was implemented in MATLAB and tested for several destinations and several weights for a single aircraft type. For validation, the results of the algorithm were first compared with those of an exhaustive search over all possible combinations. The exhaustive search provides the global optimum; the solution of our algorithm must therefore come as close as possible to it to demonstrate that its results are near the global optimum. A second comparison was made between the results of the algorithm and those of the Flight Management System (FMS), an avionics system located in the cockpit that provides the route to follow in order to optimise the trajectory. The goal is to show that the harmony search algorithm gives better results than the algorithm implemented in the FMS.
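For reference, the basic harmony search loop (memory consideration, pitch adjustment, random selection, replacement of the worst harmony) can be sketched as follows. The quadratic stand-in cost and the bounds on [climb speed, altitude, cruise speed, descent speed] are invented, and the HMCR/PAR settings are typical textbook values, not the thesis's tuned parameters.

```python
# Minimal sketch of the harmony search metaheuristic on a stand-in flight-cost
# objective; bounds, cost and parameter settings are illustrative only.
import numpy as np

def harmony_search(cost, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=3000,
                   rng=np.random.default_rng(0)):
    lo, hi = bounds[:, 0], bounds[:, 1]
    hm = rng.uniform(lo, hi, (hms, len(lo)))          # harmony memory
    f = np.array([cost(h) for h in hm])
    for _ in range(iters):
        new = np.empty(len(lo))
        for j in range(len(lo)):
            if rng.random() < hmcr:                   # memory consideration
                new[j] = hm[rng.integers(hms), j]
                if rng.random() < par:                # pitch adjustment
                    new[j] += bw * (hi[j] - lo[j]) * rng.uniform(-1, 1)
            else:                                     # random selection
                new[j] = rng.uniform(lo[j], hi[j])
        new = np.clip(new, lo, hi)
        fn, worst = cost(new), f.argmax()
        if fn < f[worst]:                             # replace worst harmony
            hm[worst], f[worst] = new, fn
    return hm[f.argmin()], f.min()

# Stand-in cost over [climb speed, altitude, cruise speed, descent speed]:
cost = lambda v: np.sum((v - np.array([290.0, 36000.0, 310.0, 280.0])) ** 2 / 1e4)
bounds = np.array([[250, 340], [30000, 41000], [250, 340], [250, 340]], float)
print(harmony_search(cost, bounds))
```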
Advanced treatment planning using direct 4D optimisation for pencil-beam scanned particle therapy
NASA Astrophysics Data System (ADS)
Bernatowicz, Kinga; Zhang, Ye; Perrin, Rosalind; Weber, Damien C.; Lomax, Antony J.
2017-08-01
We report on the development of a new four-dimensional (4D) optimisation approach for scanned proton beams, which incorporates both irregular motion patterns and the delivery dynamics of the treatment machine into the plan optimiser. Furthermore, we assess the effectiveness of this technique in reducing dose to critical structures in proximity to moving targets, while maintaining effective target dose homogeneity and coverage. The proposed approach has been tested using both a simulated phantom and a clinical liver cancer case, and allows for realistic 4D calculations and optimisation using irregular breathing patterns extracted from e.g. 4DCT-MRI (4D computed tomography-magnetic resonance imaging). 4D dose distributions resulting from our 4D optimisation can achieve almost the same quality as static plans, independent of the studied geometry/anatomy or the selected motion (regular or irregular). Additionally, the current implementation of the 4D optimisation approach requires less than 3 min to find the solution for a single field planned on the 4DCT of a liver cancer patient. Although 4D optimisation allows for realistic calculations using irregular breathing patterns, it is very sensitive to deviations from the planned motion. Based on a sensitivity analysis, target dose homogeneity comparable to static plans (D5-D95 <5%) was found only for differences in amplitude of up to 1 mm, changes in respiratory phase of <200 ms and changes in the breathing period of <20 ms in comparison to the motions used during optimisation. As such, methods to robustly deliver 4D optimised plans employing 4D intensity-modulated delivery are discussed.
Global optimization of multimode interference structure for ratiometric wavelength measurement
NASA Astrophysics Data System (ADS)
Wang, Qian; Farrell, Gerald; Hatta, Agus Muhamad
2007-07-01
The multimode interference structure is conventionally used as a splitter/combiner. In this paper, it is optimised as an edge filter for ratiometric wavelength measurement, which can be used in the demodulation of fiber Bragg grating sensors. The global optimisation algorithm, adaptive simulated annealing, is introduced in the design of the multimode interference structure, including the length and width of the multimode waveguide section and the positions of the input and output waveguides. The designed structure shows a suitable spectral response for wavelength measurement and a good fabrication tolerance.
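Adaptive simulated annealing adds parameter-wise temperature schedules and re-annealing to the plain annealing loop sketched below; the simple geometric-cooling version shown here conveys the core accept/reject mechanics on a stand-in objective, not the actual transfer-matrix model of the MMI device.

```python
# Sketch of a plain simulated-annealing loop of the kind generalised by ASA;
# the objective and design variables are stand-ins for the MMI design problem.
import numpy as np

def simulated_annealing(cost, x0, step=0.1, T0=1.0, alpha=0.995, iters=5000,
                        rng=np.random.default_rng(0)):
    x, fx, T = np.array(x0, float), cost(np.array(x0, float)), T0
    best, fbest = x.copy(), fx
    for _ in range(iters):
        cand = x + rng.normal(0, step, x.shape)       # random perturbation
        fc = cost(cand)
        if fc < fx or rng.random() < np.exp(-(fc - fx) / T):  # Metropolis rule
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x.copy(), fx
        T *= alpha                                    # geometric cooling
    return best, fbest

# Stand-in variables: [MMI length, width, input offset, output offset].
cost = lambda p: np.sum((p - np.array([1.2, 0.4, 0.1, -0.1])) ** 2)
print(simulated_annealing(cost, [1.0, 0.5, 0.0, 0.0]))
```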
ERIC Educational Resources Information Center
Yasumoto, Seiko
2014-01-01
"Blended learning" has been attracting academic interest catalysed by the advance of mixed-media technology and has significance for the global educational community and evolutionary development of pedagogical approaches to optimise student learning. This paper examines one aspect of blended teaching of Japanese language and culture in…
Optimisation of novel method for the extraction of steviosides from Stevia rebaudiana leaves.
Puri, Munish; Sharma, Deepika; Barrow, Colin J; Tiwary, A K
2012-06-01
Stevioside, a diterpene glycoside, is well known for its intense sweetness and is used as a non-caloric sweetener. Its potential widespread use requires an easy and effective extraction method. Enzymatic extraction of stevioside from Stevia rebaudiana leaves with cellulase, pectinase and hemicellulase was optimised over various parameters, such as enzyme concentration, incubation time and temperature. Hemicellulase was observed to give the highest stevioside yield (369.23±0.11 μg) in 1 h, in comparison to cellulase (359±0.30 μg) and pectinase (333±0.55 μg). Extraction from leaves under optimised conditions showed a remarkable increase in yield (35 times) compared with a control experiment. The extraction conditions were further optimised using response surface methodology (RSM), with a central composite design (CCD) used for experimental design and analysis of the results to obtain optimal extraction conditions. Based on the RSM analysis, a temperature of 51-54°C, a time of 36-45 min and a cocktail of pectinase, cellulase and hemicellulase, each set at 2%, gave the best results. Under the optimised conditions, the experimental values were in close agreement with the prediction model and resulted in a three-fold enhancement of stevioside yield. The isolated stevioside was characterised by ¹H-NMR spectroscopy, by comparison with a stevioside standard. Copyright © 2011 Elsevier Ltd. All rights reserved.
Evaluation of a High Throughput Starch Analysis Optimised for Wood
Bellasio, Chandra; Fini, Alessio; Ferrini, Francesco
2014-01-01
Starch is the most important long-term reserve in trees, and the analysis of starch is therefore a useful source of physiological information. Currently published protocols for wood starch analysis impose several limitations, such as long procedures and a neutralisation step. The high-throughput standard protocols for starch analysis in food and feed represent a valuable alternative. However, they have not been optimised or tested with woody samples, which have particular chemical and structural characteristics, including the presence of interfering secondary metabolites, low reactivity of starch, and low starch content. In this study, a standard method for starch analysis used for food and feed (AOAC standard method 996.11) was optimised to improve precision and accuracy for the analysis of starch in wood. Key modifications were introduced in the digestion conditions and in the glucose assay. The optimised protocol was then evaluated through 430 starch analyses of standards with known starch content, matrix polysaccharides, and wood collected from three organs (roots, twigs, mature wood) of four species (coniferous and flowering plants). The optimised protocol proved to be remarkably precise and accurate (3%), and suitable for high-throughput routine analysis (35 samples a day) of specimens with a starch content between 40 mg and 21 µg. Samples may include lignified organs of coniferous and flowering plants and non-lignified organs, such as leaves, fruits and rhizomes.
Haering, Diane; Huchez, Aurore; Barbier, Franck; Holvoët, Patrice; Begon, Mickaël
2017-01-01
Introduction: Teaching acrobatic skills with a minimal amount of repetition is a major challenge for coaches. Biomechanical, statistical or computer simulation tools can help them identify the most determinant factors of performance. Release parameters, change in moment of inertia and segmental momentum transfers have been identified as predictors of success in acrobatics. The purpose of the present study was to evaluate the relative contribution of these parameters to performance throughout expertise- or optimisation-based improvements. The counter movement forward in flight (CMFIF) was chosen for the intrinsic dichotomy between the accessibility of its attempt and the complexity of its mastery. Methods: Three repetitions of the CMFIF performed by eight novice and eight advanced female gymnasts were recorded using a motion capture system. Optimal aerial techniques that maximise rotation potential at regrasp were also computed. A 14-segment multibody model defined through the Rigid Body Dynamics Library was used to compute recorded and optimal kinematics and biomechanical parameters. A stepwise multiple linear regression was used to determine the relative contribution of these parameters in novice recorded, novice optimised, advanced recorded and advanced optimised trials. Finally, fixed effects of expertise and optimisation were tested through a mixed-effects analysis. Results and discussion: Variation in release state only contributed to performance in novice recorded trials. The contribution of the moment of inertia to performance increased from novice recorded, to novice optimised, advanced recorded, and advanced optimised trials. The contribution of momentum transfer to the trunk during flight prevailed in all recorded trials. Although optimisation decreased the transfer contribution, momentum transfer to the arms appeared. Conclusion: The findings suggest that novices should be coached on both contact and aerial technique, whereas improved aerial technique was the main route by which advanced gymnasts increased their performance. For both groups, reducing the moment of inertia should be a focus. The method proposed in this article could be generalised to any investigation of aerial skill learning.
Alejo, L; Corredoira, E; Sánchez-Muñoz, F; Huerga, C; Aza, Z; Plaza-Núñez, R; Serrada, A; Bret-Zurita, M; Parrón, M; Prieto-Areyano, C; Garzón-Moll, G; Madero, R; Guibelalde, E
2018-04-09
Objective: The new 2013/59 EURATOM Directive (ED) demands dosimetric optimisation procedures without undue delay. The aim of this study was to optimise paediatric conventional radiology examinations applying the ED without compromising clinical diagnosis. Automatic dose management software (ADMS) was used to analyse 2678 studies of children from birth to 5 years of age, obtaining local diagnostic reference levels (DRLs) in terms of entrance surface air kerma. Because the local DRL for infant chest examinations exceeded the European Commission (EC) DRL, an optimisation was performed by decreasing the kVp and applying automatic exposure control. To assess image quality, an analysis of high-contrast spatial resolution (HCSR), signal-to-noise ratio (SNR) and figure of merit (FOM) was performed, as well as a blind test based on the generalised estimating equations method. For newborn chest examinations, the local DRL exceeded the EC DRL by 113%; after the optimisation, a reduction of 54% was obtained. No significant differences were found in the image quality blind test. A decrease in SNR (-37%) and HCSR (-68%), and an increase in FOM (42%), were observed. ADMS allows the fast calculation of local DRLs and the performance of optimisation procedures in babies without delay. However, physical and clinical analyses of image quality are still needed to ensure diagnostic integrity after the optimisation process. Advances in knowledge: ADMS is useful for detecting radiation protection problems and for performing optimisation procedures in paediatric conventional imaging without undue delay, as the ED requires.
High-resolution global irradiance monitoring from photovoltaic systems
NASA Astrophysics Data System (ADS)
Buchmann, Tina; Pfeilsticker, Klaus; Siegmund, Alexander; Meilinger, Stefanie; Mayer, Bernhard; Pinitz, Sven; Steinbrecht, Wolfgang
2016-04-01
Reliable and regionally differentiated power forecasts are required to guarantee an efficient and economic energy transition towards renewable energies. Amongst renewable energy technologies such as wind turbines, photovoltaic systems are an essential component of this transition, being cost-efficient and simple to install. Reliable power forecasts are, however, required for grid integration of photovoltaic systems, which among other data requires high-resolution spatio-temporal global irradiance data. Hence the generation of robust, reviewed global irradiance data is an essential contribution to the energy transition. To achieve this goal, our studies introduce a novel method which makes use of photovoltaic power generation in order to infer global irradiance. The method allows high-resolution temporal global irradiance data (one data point every 15 minutes at each location) to be determined from the power data of operated photovoltaic systems. Due to the multitude of installed photovoltaic systems (in Germany), the spatial coverage is much better than, for example, using only global irradiance data from conventional pyranometer networks (e.g. from the German Weather Service). Our method is composed of two components: a forward component, which infers photovoltaic (PV) power from predicted global irradiance, and a backward component, which infers global irradiance from PV power with suitable calibration. The forward process is modelled using the radiative transfer model libRadtran (B. Mayer and A. Kylling (1)) for clear skies to obtain the characteristics (orientation, size, temperature dependence, …) of individual PV systems. For PV systems in the vicinity of a meteorological station, these data are validated against calibrated pyranometer readings. The forward-modelled global irradiance is used to determine the power efficiency of each photovoltaic system using non-linear optimisation techniques. The backward component uses the power efficiency and meteorological parameters (e.g. from the model COSMO-DE) to calculate global irradiance from the generated power of individual photovoltaic systems. For the year 2012, our method is tested for PV systems in the Allgäu region (southern Germany), the distribution area of the system operator "AllgäuNetz GmbH & Co". The test region includes 215 online-monitored photovoltaic systems and one pyranometer station located at the DWD (Deutscher Wetterdienst) weather station Hohenpeißenberg (operated by the German Weather Service). The present talk provides an introduction to the newly developed method along with first results for clear-sky scenarios. (1) B. Mayer and A. Kylling (2005): Technical note: The libRadtran software package for radiative transfer calculations - description and examples of use. Atmospheric Chemistry and Physics, 5, 1855-1877.
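As a rough illustration of the forward/backward idea (all numbers and the linear PV model below are assumptions for the sketch, not values from the study), the calibration and inversion can be prototyped in a few lines:

```python
import numpy as np
from scipy.optimize import curve_fit

# Forward model (simplified assumption): PV power = efficiency * area * irradiance.
def pv_power(irradiance, efficiency, area=10.0):
    return efficiency * area * irradiance

# Clear-sky irradiance from a radiative-transfer model (synthetic stand-in for
# libRadtran output) and the corresponding measured PV power of one system.
g_model = np.linspace(100.0, 900.0, 33)                      # W m^-2
p_meas = 0.17 * 10.0 * g_model + np.random.normal(0.0, 20.0, g_model.size)

# Forward step: calibrate the system efficiency by non-linear least squares.
(eta_fit,), _ = curve_fit(lambda g, eta: pv_power(g, eta), g_model, p_meas)

# Backward step: infer global irradiance from new 15-minute power readings.
p_new = np.array([350.0, 780.0, 1200.0])                     # W
g_inferred = p_new / (eta_fit * 10.0)                        # W m^-2
print(f"fitted efficiency: {eta_fit:.3f}", g_inferred)
```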
Optimising Service Delivery of AAC AT Devices and Compensating AT for Dyslexia.
Roentgen, Uta R; Hagedoren, Edith A V; Horions, Katrien D L; Dalemans, Ruth J P
2017-01-01
To promote successful use of Assistive Technology (AT) supporting Augmentative and Alternative Communication (AAC) and compensating for dyslexia, the last steps of their provision (delivery and instruction, use, maintenance and evaluation) were optimised. In co-creation with all stakeholders, an integral method and tools were developed based on a list of requirements.
NASA Astrophysics Data System (ADS)
Sundaramoorthy, Kumaravel
2017-02-01
Electricity generation based on hybrid energy systems (HESs) has become an attractive solution for rural electrification nowadays. Economically feasible and technically reliable HESs are solidly based on an optimisation stage. This article discusses an optimal unit-sizing model with the objective function of minimising the total cost of the HES. Three typical rural sites from the southern part of India have been selected for the application of the developed optimisation methodology. Feasibility studies and sensitivity analysis on the optimal HES are discussed in detail in this article. A comparison has been carried out with the Hybrid Optimization Model for Electric Renewable (HOMER) optimisation model for the three sites. The optimal HES is found to have a lower total net present cost and cost of energy compared with the existing method.
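A toy version of the unit-sizing search, under entirely hypothetical component costs, yields and a crude energy-balance feasibility check (real tools such as HOMER simulate dispatch in far more detail):

```python
import itertools

# Hypothetical annualised costs per unit and per-unit energy yields (assumptions).
COST = {"pv": 700.0, "wind": 1100.0, "battery": 300.0}   # $ per unit-year
YIELD = {"pv": 1500.0, "wind": 2600.0}                   # kWh per unit-year
DEMAND = 20000.0                                         # kWh per year

def total_cost(n_pv, n_wind, n_batt):
    return n_pv * COST["pv"] + n_wind * COST["wind"] + n_batt * COST["battery"]

def feasible(n_pv, n_wind, n_batt):
    # Crude check: enough annual energy, plus some storage per generating unit.
    energy_ok = n_pv * YIELD["pv"] + n_wind * YIELD["wind"] >= DEMAND
    storage_ok = n_batt >= (n_pv + n_wind) // 2
    return energy_ok and storage_ok

best = min(
    (c for c in itertools.product(range(16), range(9), range(13)) if feasible(*c)),
    key=lambda c: total_cost(*c),
)
print("optimal sizing (pv, wind, battery):", best, "cost:", total_cost(*best))
```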
NASA Astrophysics Data System (ADS)
Rodigast, M.; Mutzel, A.; Iinuma, Y.; Haferkorn, S.; Herrmann, H.
2015-01-01
Carbonyl compounds are ubiquitous in the atmosphere; they are either emitted primarily from anthropogenic and biogenic sources or produced secondarily from the oxidation of volatile organic compounds (VOCs). Despite a number of studies on the quantification of carbonyl compounds, a comprehensive description of optimised methods for the quantification of atmospherically relevant carbonyl compounds is scarce. Thus, a method to quantify carbonyl compounds was systematically characterised and improved. Quantification with the present method can be carried out for any carbonyl compound sampled in the aqueous phase, regardless of its source. The method optimisation was conducted for seven atmospherically relevant carbonyl compounds: acrolein, benzaldehyde, glyoxal, methyl glyoxal, methacrolein, methyl vinyl ketone and 2,3-butanedione. O-(2,3,4,5,6-pentafluorobenzyl)hydroxylamine hydrochloride (PFBHA) was used as the derivatisation reagent and the formed oximes were detected by gas chromatography/mass spectrometry (GC/MS). The main advantage of the improved method presented in this study is the low detection limits, in the range of 0.01-0.17 μmol L-1 depending on the carbonyl compound. Furthermore, the best results were found for extraction with dichloromethane for 30 min, followed by derivatisation with PFBHA for 24 h with 0.43 mg mL-1 PFBHA at a pH value of 3. The optimised method was evaluated in the present study by the OH-radical-initiated oxidation of 3-methylbutanone in the aqueous phase. Methyl glyoxal and 2,3-butanedione were found to be oxidation products in the samples, with yields of 2% for methyl glyoxal and 14% for 2,3-butanedione.
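For context, detection limits of this kind are commonly estimated from a calibration curve as 3.3·σ/slope; a minimal sketch with invented calibration data:

```python
import numpy as np

# Hypothetical calibration data: concentration (umol/L) vs. GC/MS peak area.
conc = np.array([0.05, 0.1, 0.5, 1.0, 2.0, 5.0])
area = np.array([1.1e3, 2.2e3, 1.05e4, 2.1e4, 4.3e4, 1.04e5])

slope, intercept = np.polyfit(conc, area, 1)
residuals = area - (slope * conc + intercept)
sigma = residuals.std(ddof=2)            # standard error of the regression

lod = 3.3 * sigma / slope                # common 3.3*sigma/slope convention
loq = 10.0 * sigma / slope
print(f"LOD = {lod:.3f} umol/L, LOQ = {loq:.3f} umol/L")
```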
NASA Astrophysics Data System (ADS)
Rodigast, M.; Mutzel, A.; Iinuma, Y.; Haferkorn, S.; Herrmann, H.
2015-06-01
Carbonyl compounds are ubiquitous in the atmosphere; they are either emitted primarily from anthropogenic and biogenic sources or produced secondarily from the oxidation of volatile organic compounds. Despite a number of studies on the quantification of carbonyl compounds, a comprehensive description of optimised methods for the quantification of atmospherically relevant carbonyl compounds is scarce. The method optimisation was conducted for seven atmospherically relevant carbonyl compounds: acrolein, benzaldehyde, glyoxal, methyl glyoxal, methacrolein, methyl vinyl ketone and 2,3-butanedione. O-(2,3,4,5,6-pentafluorobenzyl)hydroxylamine hydrochloride (PFBHA) was used as the derivatisation reagent and the formed oximes were detected by gas chromatography/mass spectrometry (GC/MS). With the present method, quantification can be carried out for carbonyl compounds originating from fog, cloud and rain water, or sampled from the gas and particle phases into water. Detection limits between 0.01 and 0.17 μmol L-1 were found, depending on the carbonyl compound. Furthermore, the best results were found for derivatisation with a PFBHA concentration of 0.43 mg mL-1 for 24 h, followed by extraction with dichloromethane for 30 min at pH = 1. The optimised method was evaluated in the present study by the OH-radical-initiated oxidation of 3-methylbutanone in the aqueous phase. Methyl glyoxal and 2,3-butanedione were found to be oxidation products in the samples, with yields of 2% for methyl glyoxal and 14% for 2,3-butanedione after a reaction time of 5 h.
NASA Astrophysics Data System (ADS)
Hadia, Sarman K.; Thakker, R. A.; Bhatt, Kirit R.
2016-05-01
The study proposes an application of evolutionary algorithms, specifically an artificial bee colony (ABC), a variant ABC and particle swarm optimisation (PSO), to extract the parameters of a metal oxide semiconductor field effect transistor (MOSFET) model. These algorithms are applied to the MOSFET parameter extraction problem using a Pennsylvania surface potential model. MOSFET parameter extraction procedures involve reducing the error between measured and modelled data. This study shows that the ABC algorithm optimises the parameter values based on the intelligent foraging behaviour of honey bee swarms. Some modifications have also been applied to the basic ABC algorithm. Particle swarm optimisation is a population-based stochastic optimisation method inspired by bird flocking. The performances of these algorithms are compared with respect to the quality of the solutions. The simulation results of this study show that the PSO algorithm performs better than the variant ABC and basic ABC algorithms for the parameter extraction of the MOSFET model; the implementation of the ABC algorithm is also shown to be simpler than that of the PSO algorithm.
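A compact global-best PSO sketch for this kind of extraction problem, assuming a stand-in two-parameter device model and synthetic 'measured' data (the study itself fits a surface potential MOSFET model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in model (assumption): drain current as a toy function of two parameters.
def model(vg, vth, k):
    return k * np.maximum(vg - vth, 0.0) ** 2

vg = np.linspace(0.0, 3.0, 31)
i_meas = model(vg, 0.7, 0.4) + rng.normal(0.0, 1e-3, vg.size)

def rmse(p):
    return np.sqrt(np.mean((model(vg, *p) - i_meas) ** 2))

# Minimal global-best PSO over (vth, k).
lo, hi = np.array([0.0, 0.0]), np.array([2.0, 2.0])
x = rng.uniform(lo, hi, (30, 2)); v = np.zeros_like(x)
pbest, pcost = x.copy(), np.array([rmse(p) for p in x])
gbest = pbest[pcost.argmin()]
for _ in range(200):
    r1, r2 = rng.random((2, 30, 1))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    cost = np.array([rmse(p) for p in x])
    better = cost < pcost
    pbest[better], pcost[better] = x[better], cost[better]
    gbest = pbest[pcost.argmin()]
print("extracted (vth, k):", gbest)
```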
Resource acquisition, distribution and end-use efficiencies and the growth of industrial society
NASA Astrophysics Data System (ADS)
Jarvis, A. J.; Jarvis, S. J.; Hewitt, C. N.
2015-10-01
A key feature of the growth of industrial society is the acquisition of increasing quantities of resources from the environment and their distribution for end-use. With respect to energy, the growth of industrial society appears to have been near-exponential for the last 160 years. We provide evidence that indicates that the global distribution of resources that underpins this growth may be facilitated by the continual development and expansion of near-optimal directed networks (roads, railways, flight paths, pipelines, cables etc.). However, despite this continual striving for optimisation, the distribution efficiencies of these networks must decline over time as they expand due to path lengths becoming longer and more tortuous. Therefore, to maintain long-term exponential growth the physical limits placed on the distribution networks appear to be counteracted by innovations deployed elsewhere in the system, namely at the points of acquisition and end-use of resources. We postulate that the maintenance of the growth of industrial society, as measured by global energy use, at the observed rate of ~ 2.4 % yr-1 stems from an implicit desire to optimise patterns of energy use over human working lifetimes.
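As a quick consistency check on the quoted rate (our arithmetic, not the paper's), exponential growth at r ≈ 2.4 % yr-1 implies a doubling time of roughly 29 years:

```latex
% Doubling time for exponential growth E(t) = E_0 e^{rt}:
T_2 = \frac{\ln 2}{r} = \frac{0.693}{0.024\ \mathrm{yr}^{-1}} \approx 29\ \mathrm{yr}
```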
Fox-7 for Insensitive Boosters
2010-08-01
cavitation, and therefore nucleation, to occur at each frequency. As well as producing ultrasound at different frequencies, the method of delivery of… FOX-7 booster formulations. Also included are particle processing techniques using ultrasound, designed to optimise FOX-7 crystal size and morphology to improve booster formulations, and results from these…
Design of distributed PID-type dynamic matrix controller for fractional-order systems
NASA Astrophysics Data System (ADS)
Wang, Dawei; Zhang, Ridong
2018-01-01
With the continuous requirements for product quality and safe operation in industrial production, it is difficult to describe complex large-scale processes with integer-order differential equations, whereas fractional differential equations may precisely represent the intrinsic characteristics of such systems. In this paper, a distributed PID-type dynamic matrix control method for fractional-order systems is proposed. First, a high-order integer-order approximation of the fractional-order model is obtained using the Oustaloup method. Then, the step response model vectors of the plant are obtained on the basis of the high-order model, and the online optimisation of the multivariable process is transformed into the optimisation of each small-scale subsystem, regarded as a sub-plant controlled in the distributed framework. Furthermore, the PID operator is introduced into the performance index of each subsystem and the fractional-order PID-type dynamic matrix controller is designed based on the Nash optimisation strategy. The information exchange among the subsystems is realised through the distributed control structure so as to complete the optimisation task of the whole large-scale system. Finally, the control performance of the designed controller is verified by an example.
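A bare-bones, unconstrained single-loop dynamic matrix control step, assuming a known step-response model; the paper's distributed, PID-type, fractional-order formulation builds additional structure on top of this basic idea:

```python
import numpy as np

# Assumed step-response coefficients s_1..s_N of the (approximated) plant.
s = 1.0 - np.exp(-0.2 * np.arange(1, 31))      # toy first-order step response
P, M = 10, 3                                   # prediction and control horizons

# Dynamic matrix: A[i, j] = s_{i-j+1} for i >= j.
A = np.zeros((P, M))
for i in range(P):
    for j in range(min(i + 1, M)):
        A[i, j] = s[i - j]

y_free = np.zeros(P)                           # free response (no future moves)
r = np.ones(P)                                 # setpoint trajectory

# Unconstrained DMC: minimise ||r - y_free - A*du||^2 + lam*||du||^2.
lam = 0.1
du = np.linalg.solve(A.T @ A + lam * np.eye(M), A.T @ (r - y_free))
print("first control move:", du[0])            # only du[0] is applied (receding horizon)
```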
Ashrafi, Parivash; Sun, Yi; Davey, Neil; Adams, Roderick G; Wilkinson, Simon C; Moss, Gary Patrick
2018-03-01
The aim of this study was to investigate how to improve predictions from Gaussian Process models by optimising the model hyperparameters. Optimisation methods, including Grid Search, Conjugate Gradient, Random Search, Evolutionary Algorithm and Hyper-prior, were evaluated and applied to previously published data. Data sets were also altered in a structured manner to reduce their size, retaining the range, or 'chemical space', of the key descriptors to assess the effect of the data range on model quality. The Hyper-prior Smoothbox kernel results in the best models for the majority of data sets, and these exhibited significantly better performance than benchmark quantitative structure-permeability relationship (QSPR) models. When the data sets were systematically reduced in size, the different optimisation methods generally retained their statistical quality, whereas benchmark QSPR models performed poorly. The design of the data set, and possibly also the approach to validation of the model, is critical in the development of improved models. The size of the data set, if carefully controlled, was not generally a significant factor for these models, and models of excellent statistical quality could be produced from substantially smaller data sets. © 2018 Royal Pharmaceutical Society.
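A minimal sketch of one of the simpler strategies mentioned above, random search over Gaussian Process hyperparameters scored by log marginal likelihood, using scikit-learn and synthetic stand-in data:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(1)

# Synthetic stand-in for a QSPR-style data set (descriptors X, permeability y).
X = rng.uniform(-2.0, 2.0, (40, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0.0, 0.1, 40)

# Random search over hyperparameters, scored by log marginal likelihood.
best_lml, best_params = -np.inf, None
for _ in range(50):
    length_scale = 10 ** rng.uniform(-1.0, 1.0)
    noise = 10 ** rng.uniform(-3.0, -0.5)
    gpr = GaussianProcessRegressor(
        kernel=ConstantKernel(1.0) * RBF(length_scale=length_scale),
        alpha=noise,
        optimizer=None,       # keep the sampled hyperparameters fixed
    ).fit(X, y)
    if gpr.log_marginal_likelihood_value_ > best_lml:
        best_lml = gpr.log_marginal_likelihood_value_
        best_params = (length_scale, noise)

print("best (length_scale, noise):", best_params, "LML:", round(best_lml, 2))
```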
Neural network feedforward control of a closed-circuit wind tunnel
NASA Astrophysics Data System (ADS)
Sutcliffe, Peter
Accurate control of wind-tunnel test conditions can be dramatically enhanced using feedforward control architectures which allow operating conditions to be maintained at a desired setpoint through the use of mathematical models as the primary source of prediction. However, as the desired accuracy of the feedforward prediction increases, the model complexity also increases, so that an ever increasing computational load is incurred. This drawback can be avoided by employing a neural network that is trained offline using the output of a high fidelity wind-tunnel mathematical model, so that the neural network can rapidly reproduce the predictions of the model with a greatly reduced computational overhead. A novel neural network database generation method, developed through the use of fractional factorial arrays, was employed such that a neural network can accurately predict wind-tunnel parameters across a wide range of operating conditions whilst trained upon a highly efficient database. The subsequent network was incorporated into a Neural Network Model Predictive Control (NNMPC) framework to allow an optimised output schedule capable of providing accurate control of the wind-tunnel operating parameters. Facilitation of an optimised path through the solution space is achieved through the use of a chaos optimisation algorithm such that a more globally optimum solution is likely to be found with less computational expense than the gradient descent method. The parameters associated with the NNMPC such as the control horizon are determined through the use of a Taguchi methodology enabling the minimum number of experiments to be carried out to determine the optimal combination. The resultant NNMPC scheme was employed upon the Hessert Low Speed Wind Tunnel at the University of Notre Dame to control the test-section temperature such that it follows a pre-determined reference trajectory during changes in the test-section velocity. Experimental testing revealed that the derived NNMPC controller provided an excellent level of control over the test-section temperature in adherence to a reference trajectory even when faced with unforeseen disturbances such as rapid changes in the operating environment.
Multi-objective optimisation and decision-making of space station logistics strategies
NASA Astrophysics Data System (ADS)
Zhu, Yue-he; Luo, Ya-zhong
2016-10-01
Space station logistics strategy optimisation is a complex engineering problem with multiple objectives. Finding a decision-maker-preferred compromise solution becomes more significant when solving such a problem. However, the designer-preferred solution is not easy to determine using the traditional method. Thus, a hybrid approach that combines the multi-objective evolutionary algorithm, physical programming, and differential evolution (DE) algorithm is proposed to deal with the optimisation and decision-making of space station logistics strategies. A multi-objective evolutionary algorithm is used to acquire a Pareto frontier and help determine the range parameters of the physical programming. Physical programming is employed to convert the four-objective problem into a single-objective problem, and a DE algorithm is applied to solve the resulting physical programming-based optimisation problem. Five kinds of objective preference are simulated and compared. The simulation results indicate that the proposed approach can produce good compromise solutions corresponding to different decision-makers' preferences.
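For flavour, here is scipy's differential evolution applied to a scalarised four-objective toy problem; the weighted sum below is a crude stand-in for the physical-programming conversion used in the paper:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy logistics-style objectives of a design vector x (all hypothetical).
def objectives(x):
    cost = x[0] ** 2 + 2.0 * x[1] ** 2
    delay = (x[0] - 1.0) ** 2 + x[2] ** 2
    risk = abs(x[1] - 0.5) + abs(x[2] - 0.5)
    mass = (x[0] + x[1] + x[2] - 1.5) ** 2
    return np.array([cost, delay, risk, mass])

# Scalarisation: a weighted sum expressing one decision-maker's preference.
weights = np.array([0.4, 0.3, 0.2, 0.1])

result = differential_evolution(
    lambda x: weights @ objectives(x),
    bounds=[(0.0, 2.0)] * 3,
    seed=2,
)
print("compromise solution:", result.x, "objectives:", objectives(result.x))
```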
Achieving optimal SERS through enhanced experimental design
Fisk, Heidi; Westley, Chloe; Turner, Nicholas J.; Goodacre, Royston
2016-01-01
One of the current limitations surrounding surface‐enhanced Raman scattering (SERS) is the perceived lack of reproducibility. SERS is indeed challenging, and for analyte detection, it is vital that the analyte interacts with the metal surface. However, as this is analyte dependent, there is not a single set of SERS conditions that are universal. This means that experimental optimisation for optimum SERS response is vital. Most researchers optimise one factor at a time, where a single parameter is altered first before going onto optimise the next. This is a very inefficient way of searching the experimental landscape. In this review, we explore the use of more powerful multivariate approaches to SERS experimental optimisation based on design of experiments and evolutionary computational methods. We particularly focus on colloidal‐based SERS rather than thin film preparations as a result of their popularity. © 2015 The Authors. Journal of Raman Spectroscopy published by John Wiley & Sons, Ltd. PMID:27587905
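A sketch of the multivariate alternative to one-factor-at-a-time optimisation: enumerate a small full-factorial design over hypothetical SERS factors and select the best-scoring condition (in practice the response would be a measured SERS intensity, not the stand-in function below):

```python
import itertools

# Hypothetical factor levels for a colloidal SERS experiment.
design_space = {
    "aggregating_salt_mM": [5, 10, 20],
    "ph": [3, 5, 7, 9],
    "analyte_uM": [1, 10],
}

def response(salt, ph, conc):
    # Stand-in for the measured SERS intensity of one experimental run.
    return -((salt - 10) ** 2) - (ph - 5) ** 2 + 10 * conc

runs = list(itertools.product(*design_space.values()))
best = max(runs, key=lambda run: response(*run))
print(f"{len(runs)} runs; best condition (salt, pH, conc):", best)
```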
Topology Optimisation of Wideband Coaxial-to-Waveguide Transitions
NASA Astrophysics Data System (ADS)
Hassan, Emadeldeen; Noreland, Daniel; Wadbro, Eddie; Berggren, Martin
2017-03-01
To maximise the matching between a coaxial cable and rectangular waveguides, we present a computational topology optimisation approach that decides, for each point in a given domain, whether to hold a good conductor or a good dielectric. The conductivity is determined by a gradient-based optimisation method that relies on finite-difference time-domain solutions to the 3D Maxwell's equations. Unlike previously reported results in the literature for this kind of problem, our design algorithm can efficiently handle tens of thousands of design variables, which allows novel conceptual waveguide designs. We demonstrate the effectiveness of the approach by presenting optimised transitions with reflection coefficients lower than -15 dB over more than a 60% bandwidth, both for right-angle and end-launcher configurations. The performance of the proposed transitions is cross-verified with commercial software, and one design case is validated experimentally.
Optimal design and operation of a photovoltaic-electrolyser system using particle swarm optimisation
NASA Astrophysics Data System (ADS)
Sayedin, Farid; Maroufmashat, Azadeh; Roshandel, Ramin; Khavas, Sourena Sattari
2016-07-01
In this study, hydrogen generation is maximised by optimising the size and the operating conditions of an electrolyser (EL) directly connected to a photovoltaic (PV) module at different irradiances. Due to the variation of the maximum power points of the PV module during a year and the complexity of the system, a nonlinear approach is considered. A mathematical model has been developed to determine the performance of the PV/EL system. The optimisation methodology presented here is based on the particle swarm optimisation algorithm. By this method, for a given number of PV modules, the optimal size and operating conditions of a PV/EL system are achieved. The approach can be applied to different sizes of PV systems, various ambient temperatures and different locations with various climatic conditions. The results show that, for the given location and PV system, the energy transfer efficiency of the PV/EL system can reach up to 97.83%.
NASA Astrophysics Data System (ADS)
Böing, F.; Murmann, A.; Pellinger, C.; Bruckmeier, A.; Kern, T.; Mongin, T.
2018-02-01
The expansion of capacities in the German transmission grid is a necessity for further integration of renewable energy sources into the electricity sector. In this paper, the grid optimisation measures ‘Overhead Line Monitoring’, ‘Power-to-Heat’ and ‘Demand Response in the Industry’ are evaluated and compared against conventional grid expansion for the year 2030. Initially, the methodological approach of the simulation model is presented and detailed descriptions of the grid model and the grid data used, which partly originate from open-source platforms, are provided. Further, this paper explains how ‘Curtailment’ and ‘Redispatch’ can be reduced by implementing grid optimisation measures and how the depreciation of economic costs can be determined considering construction costs. The simulations developed show that conventional grid expansion is more efficient and has a greater grid-relieving effect than the evaluated grid optimisation measures.
Topology Optimisation of Wideband Coaxial-to-Waveguide Transitions
Hassan, Emadeldeen; Noreland, Daniel; Wadbro, Eddie; Berggren, Martin
2017-01-01
To maximize the matching between a coaxial cable and rectangular waveguides, we present a computational topology optimisation approach that decides for each point in a given domain whether to hold a good conductor or a good dielectric. The conductivity is determined by a gradient-based optimisation method that relies on finite-difference time-domain solutions to the 3D Maxwell’s equations. Unlike previously reported results in the literature for this kind of problems, our design algorithm can efficiently handle tens of thousands of design variables that can allow novel conceptual waveguide designs. We demonstrate the effectiveness of the approach by presenting optimised transitions with reflection coefficients lower than −15 dB over more than a 60% bandwidth, both for right-angle and end-launcher configurations. The performance of the proposed transitions is cross-verified with a commercial software, and one design case is validated experimentally. PMID:28332585
VLSI Technology for Cognitive Radio
NASA Astrophysics Data System (ADS)
VIJAYALAKSHMI, B.; SIDDAIAH, P.
2017-08-01
One of the most challenging tasks in cognitive radio is achieving an efficient spectrum sensing scheme to overcome the spectrum scarcity problem. The most popular and widely used spectrum sensing technique is the energy detection scheme, as it is very simple and does not require any prior information about the signal. We propose one such approach: an optimised spectrum sensing scheme with a reduced filter structure. The optimisation is done in terms of the area and power performance of the sensing architecture. Simulations of the VLSI structure of the optimised flexible spectrum sensing scheme are performed in Verilog using the Xilinx ISE software. Our method achieves a 13% reduction in area and a 66% reduction in power consumption compared with the flexible spectrum sensing scheme. All results are tabulated and compared. Our model opens up a new scheme for optimised and effective spectrum sensing.
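For reference, the energy detection decision rule underlying such schemes can be stated in a few lines (the threshold and signal below are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

def energy_detect(samples, threshold):
    # Test statistic: average signal energy over the sensing window.
    return np.mean(np.abs(samples) ** 2) > threshold

n = 1024
noise = rng.normal(0.0, 1.0, n)                        # noise-only observation
signal = np.sin(2 * np.pi * 0.1 * np.arange(n))        # primary user present
print("noise only  ->", energy_detect(noise, threshold=1.1))
print("with signal ->", energy_detect(noise + signal, threshold=1.1))
```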
Treatment planning optimisation in proton therapy
McGowan, S E; Burnet, N G; Lomax, A J
2013-01-01
The goal of radiotherapy is to achieve uniform target coverage while sparing normal tissue. In proton therapy, the same sources of geometric uncertainty are present as in conventional radiotherapy. However, an important and fundamental difference in proton therapy is that protons have a finite range, highly dependent on the electron density of the material they are traversing, resulting in a steep dose gradient at the distal edge of the Bragg peak. Therefore, accurate knowledge of the sources and magnitudes of the uncertainties affecting the proton range is essential for producing plans which are robust to these uncertainties. This review describes the current knowledge of the geometric uncertainties and discusses their impact on proton dose plans. Patient-specific validation is essential, and in cases of complex intensity-modulated proton therapy plans the use of a planning target volume (PTV) may fail to ensure coverage of the target. In cases where a PTV cannot be used, other methods of quantifying plan quality have been investigated. A promising option is to incorporate uncertainties directly into the optimisation algorithm. A further development is the inclusion of robustness in a multicriteria optimisation framework, allowing a multi-objective Pareto optimisation function to balance robustness and conformity. The question remains as to whether adaptive therapy can become an integral part of proton therapy, allowing re-optimisation during the course of a patient's treatment. The challenge of ensuring that plans are robust to range uncertainties in proton therapy remains, although these methods can provide practical solutions. PMID:23255545
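A schematic of incorporating uncertainties directly into the optimisation, as suggested above: choose beamlet weights minimising the worst-case deviation from the prescribed dose over a handful of range-error scenarios (the dose matrices here are random stand-ins, not clinical data):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# Hypothetical dose-influence matrices for 3 scenarios (nominal, +range, -range):
# scenarios[s] @ w gives the dose to 20 voxels from 5 beamlet weights.
scenarios = [rng.uniform(0.0, 1.0, (20, 5)) for _ in range(3)]
prescribed = np.ones(20)

def worst_case(w):
    # Robust objective: largest dose deviation over all scenarios and voxels.
    return max(np.max(np.abs(D @ w - prescribed)) for D in scenarios)

res = minimize(worst_case, x0=np.full(5, 0.5),
               bounds=[(0.0, 5.0)] * 5, method="Powell")
print("robust beamlet weights:", np.round(res.x, 3), "worst case:", res.fun)
```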
Sterckx, Femke L; Saison, Daan; Delvaux, Freddy R
2010-08-31
Monophenols are widespread compounds contributing to the flavour of many foods and beverages. They are most likely present in beer but, so far, little is known about their influence on beer flavour. To quantify these monophenols in beer, we optimised a headspace solid-phase microextraction method coupled to gas chromatography-mass spectrometry. To improve their isolation from the beer matrix and their chromatographic properties, the monophenols were acetylated using acetic anhydride and KHCO(3) as derivatising agent and base catalyst, respectively. Derivatisation conditions were optimised with attention to the pH of the reaction medium. Additionally, different parameters affecting extraction efficiency were optimised, including fibre coating, extraction time and temperature, and salt addition. Afterwards, we calibrated and validated the method successfully and applied it to the analysis of monophenols in beer samples. 2010 Elsevier B.V. All rights reserved.
Dwell time-based stabilisation of switched delay systems using free-weighting matrices
NASA Astrophysics Data System (ADS)
Koru, Ahmet Taha; Delibaşı, Akın; Özbay, Hitay
2018-01-01
In this paper, we present a quasi-convex optimisation method to minimise an upper bound of the dwell time for stability of switched delay systems. Piecewise Lyapunov-Krasovskii functionals are introduced, and the upper bound for the derivative of the Lyapunov functionals is estimated by the free-weighting matrices method to investigate the non-switching stability of each candidate subsystem. Then, a sufficient condition on the dwell time is derived to guarantee the asymptotic stability of the switched delay system. Once these conditions are represented by a set of linear matrix inequalities (LMIs), the dwell time optimisation problem can be formulated as a standard quasi-convex optimisation problem. Numerical examples are given to illustrate the improvements over previously obtained dwell time bounds. Using the results obtained in the stability case, we present a nonlinear minimisation algorithm to synthesise dwell-time-minimising controllers. The algorithm solves the problem with successive linearisation of the nonlinear conditions.
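Since the problem is quasi-convex, a bisection over the dwell time suffices once a feasibility oracle is available; a toy delay-free version, where the oracle is a crude spectral-radius check on the two-mode cycle rather than the paper's LMI conditions:

```python
import numpy as np
from scipy.linalg import expm

# Two individually stable modes of a switched linear system (toy example).
A1 = np.array([[-0.1, 1.0], [-10.0, -0.1]])
A2 = np.array([[-0.1, 10.0], [-1.0, -0.1]])

def feasible(tau):
    # Stand-in stability certificate: the cycle map over one dwell period in
    # each mode must be a contraction (spectral radius < 1).
    M = expm(A1 * tau) @ expm(A2 * tau)
    return np.max(np.abs(np.linalg.eigvals(M))) < 1.0

# Bisection on the dwell time bound; assumes feasibility holds at hi and,
# as a simplification, that it is monotone in tau.
lo, hi = 0.0, 10.0
for _ in range(50):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if feasible(mid) else (mid, hi)
print(f"estimated minimum dwell time ~ {hi:.4f}")
```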
NASA Astrophysics Data System (ADS)
Miza, A. T. N. A.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.
2017-09-01
In this study, computer-aided engineering was used for injection moulding simulation. A design of experiments (DOE) was set up according to a Latin square orthogonal array. The relationship between the injection moulding parameters and warpage was identified based on the experimental data. Response surface methodology (RSM) was used to validate the model accuracy. The RSM and GA methods were then combined to determine the optimum injection moulding process parameters. The optimisation of injection moulding is thereby largely improved, and the results show increased accuracy and reliability. The proposed method combining the RSM and GA methods also contributes to minimising the warpage that occurs.
Wiener-Hammerstein system identification - an evolutionary approach
NASA Astrophysics Data System (ADS)
Naitali, Abdessamad; Giri, Fouad
2016-01-01
The problem of identifying parametric Wiener-Hammerstein (WH) systems is addressed within the evolutionary optimisation context. Specifically, a hybrid culture identification method is developed that involves model structure adaptation using genetic recombination and model parameter learning using particle swarm optimisation. The method enjoys three interesting features: (1) the risk of premature convergence of model parameter estimates to local optima is significantly reduced, due to the constantly maintained diversity of model candidates; (2) no prior knowledge is needed except for upper bounds on the system structure indices; (3) the method is fully autonomous, as no interaction with the user is needed during the optimum search process. The performance of the proposed method is illustrated and compared to alternative methods using a well-established WH benchmark.
NASA Astrophysics Data System (ADS)
Arjunan, V.; Devi, L.; Mohan, S.
2018-05-01
The FT-IR and FT-Raman spectra of 4-trifluoromethylbenzylamine (TFMBA) have been recorded in the ranges 4000-450 and 4000-100 cm-1, respectively. A conformational analysis of the compound has been carried out to attain its stable geometry. The complete vibrational assignment and analysis of the fundamental modes of the compound are carried out using the experimental FT-IR and FT-Raman data and quantum chemical studies. The experimental vibrational frequencies are compared with the wavenumbers obtained theoretically from B3LYP gradient calculations employing the standard high-level 6-311++G** and cc-pVTZ basis sets for the optimised geometry of the compound. The structural parameters, thermodynamic properties and vibrational frequencies of the normal modes obtained from the B3LYP method are in good agreement with the experimental data. The 1H (400 MHz; CDCl3) and 13C (100 MHz; CDCl3) nuclear magnetic resonance (NMR) spectra were also recorded. The electronic properties, i.e. the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) energies, are determined by the DFT approach. The charges of the atoms are determined by natural bond orbital (NBO) analysis at the B3LYP/cc-pVTZ level. The structure-chemical reactivity relations of the compound are determined through chemical potential, global hardness, global softness, electronegativity, electrophilicity and local reactivity descriptors using conceptual DFT methods.
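The global reactivity descriptors listed at the end follow directly from the frontier orbital energies via Koopmans-type approximations; a small worked example with placeholder energies (not the paper's values):

```python
# Conceptual-DFT global descriptors from HOMO/LUMO energies (eV).
e_homo, e_lumo = -6.8, -1.2          # placeholder values, not from the paper

ionisation = -e_homo                  # I  (Koopmans approximation)
affinity = -e_lumo                    # A
chem_potential = -(ionisation + affinity) / 2          # mu = -(I + A) / 2
hardness = (ionisation - affinity) / 2                 # eta = (I - A) / 2
softness = 1.0 / (2.0 * hardness)                      # S = 1 / (2 * eta)
electronegativity = -chem_potential                    # chi = -mu
electrophilicity = chem_potential**2 / (2 * hardness)  # omega = mu^2 / (2 * eta)

print(f"mu={chem_potential:.2f} eV, eta={hardness:.2f} eV, "
      f"S={softness:.3f} eV^-1, chi={electronegativity:.2f} eV, "
      f"omega={electrophilicity:.2f} eV")
```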
Aslan, Mikail; Davis, Jack B A; Johnston, Roy L
2016-03-07
The global optimisation of small bimetallic Pd-Co binary nanoalloys is systematically investigated using the Birmingham Cluster Genetic Algorithm (BCGA). The effect of size and composition on the structures, stability, and magnetic and electronic properties, including the binding energies, second finite difference energies and mixing energies, of Pd-Co binary nanoalloys is discussed. A detailed analysis of Pd-Co structural motifs and segregation effects is also presented. The maximal mixing energy corresponds to Pd atom compositions for which the number of mixed Pd-Co bonds is maximised. Global minimum clusters are distinguished from transition states by vibrational frequency analysis. HOMO-LUMO gap, electric dipole moment and vibrational frequency analyses are made to enable correlation with future experiments.
ERIC Educational Resources Information Center
Reinfried, Sibylle; Tempelmann, Sebastian
2014-01-01
This paper provides a video-based learning process study that investigates the kinds of mental models of the atmospheric greenhouse effect 13-year-old learners have and how these mental models change with a learning environment, which is optimised in regard to instructional psychology. The objective of this explorative study was to observe and…
Optimisation techniques in vaginal cuff brachytherapy.
Tuncel, N; Garipagaoglu, M; Kizildag, A U; Andic, F; Toy, A
2009-11-01
The aim of this study was to explore whether an in-house dosimetry protocol and optimisation method are able to produce a homogeneous dose distribution in the target volume, and how often optimisation is required in vaginal cuff brachytherapy. Treatment planning was carried out for 109 fractions in 33 patients who underwent high dose rate iridium-192 (Ir(192)) brachytherapy using Fletcher ovoids. Dose prescription and normalisation were performed to catheter-oriented lateral dose points (dps) within a range of 90-110% of the prescribed dose. The in-house vaginal apex point (Vk), alternative vaginal apex point (Vk'), International Commission on Radiation Units and Measurements (ICRU) rectal point (Rg) and bladder point (Bl) doses were calculated. Time-position optimisations were made considering dps, Vk and Rg doses. Keeping the Vk dose higher than 95% and the Rg dose less than 85% of the prescribed dose was intended. Target dose homogeneity, optimisation frequency and the relationship between prescribed dose, Vk, Vk', Rg and ovoid diameter were investigated. The mean target dose was 99+/-7.4% of the prescription dose. Optimisation was required in 92 out of 109 (83%) fractions. Ovoid diameter had a significant effect on Rg (p = 0.002), Vk (p = 0.018), Vk' (p = 0.034), minimum dps (p = 0.021) and maximum dps (p<0.001). Rg, Vk and Vk' doses with 2.5 cm diameter ovoids were significantly higher than with 2 cm and 1.5 cm ovoids. Catheter-oriented dose point normalisation provided a homogeneous dose distribution with a 99+/-7.4% mean dose within the target volume, requiring time-position optimisation.
Schutyser, M A I; Straatsma, J; Keijzer, P M; Verschueren, M; De Jong, P
2008-11-30
In the framework of a cooperative EU research project (MILQ-QC-TOOL), a web-based modelling tool (WebSim-MILQ) was developed for the optimisation of thermal treatments in the dairy industry. The web-based tool enables optimisation of thermal treatments with respect to product safety, quality and costs. It can be applied to existing products and processes, but also to reduce the time to market for new products. Important aspects of the tool are its user-friendliness and its specifications customised to the needs of small dairy companies. To challenge the web-based tool, it was applied to the optimisation of thermal treatments in 16 dairy companies producing yoghurt, fresh cream, chocolate milk and cheese. Optimisation with WebSim-MILQ resulted in concrete improvements with respect to the risk of microbial contamination, cheese yield, fouling and production costs. In this paper, we illustrate the use of WebSim-MILQ for the optimisation of a cheese milk pasteurisation process, where we could increase the cheese yield (one extra cheese for every 100 cheeses produced from the same amount of milk) and reduce the risk of contamination of pasteurised cheese milk with thermoresistant streptococci from critical to negligible. In another case, we demonstrate the advantage of changing from an indirect to a direct heating method for a UHT process, resulting in 80% less fouling while improving product quality and maintaining product safety.
Energy landscapes for a machine learning application to series data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ballard, Andrew J.; Stevenson, Jacob D.; Das, Ritankar
2016-03-28
Methods developed to explore and characterise potential energy landscapes are applied to the corresponding landscapes obtained from optimisation of a cost function in machine learning. We consider neural network predictions for the outcome of local geometry optimisation in a triatomic cluster, where four distinct local minima exist. The accuracy of the predictions is compared for fits using data from single and multiple points in the series of atomic configurations resulting from local geometry optimisation and for alternative neural networks. The machine learning solution landscapes are visualised using disconnectivity graphs, and signatures in the effective heat capacity are analysed in terms of distributions of local minima and their properties.
Olivero, Sergio J Pérez; Trujillo, Juan P Pérez
2011-06-24
A new analytical method for the determination of nine short-chain fatty acids (acetic, propionic, isobutyric, butyric, isovaleric, 2-methylbutyric, hexanoic, octanoic and decanoic acids) in wines using the automated HS/SPME-GC-ITMS technique was developed and optimised. Five different SPME fibres were tested, and the influence of different factors such as temperature and time of extraction, temperature and time of desorption, pH, ionic strength, tannins, anthocyanins, SO(2), and sugar and ethanol content was studied and optimised using model solutions. Some analytes showed a matrix effect, so a study of recoveries was performed. The proposed HS/SPME-GC-ITMS method, which covers the concentration range of the different analytes in wines, showed wide linear ranges, repeatability and reproducibility values lower than 4.0% RSD, and detection limits between 3 and 257 μg L(-1), lower than the olfactory thresholds. The optimised method is a suitable technique for the quantitative analysis of short-chain fatty acids of the aliphatic series in real samples of white, rosé and red wines. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Wang, Qianren; Chen, Xing; Yin, Yuehong; Lu, Jian
2017-08-01
With the increasing complexity of mechatronic products, traditional empirical or step-by-step design methods are facing great challenges, as various factors and different stages have inevitably become coupled during the design process. Management of massive information, or big data, as well as the efficient operation of information flow, is deeply involved in the process of coupled design. Designers have to address increasingly sophisticated situations when coupled optimisation is also engaged. Aiming at overcoming these difficulties in the design of the spindle box system of an ultra-precision optical grinding machine, this paper proposes a coupled optimisation design method based on state-space analysis, with the design knowledge represented by ontologies and their semantic networks. An electromechanical coupled model integrating the mechanical structure, control system and motor driving system is established, mainly concerning the stiffness matrices of the hydrostatic bearings, ball screw nut and rolling guide sliders. The effectiveness and precision of the method are validated by simulation results for the natural frequency and deformation of the spindle box when an impact force is applied to the grinding wheel.
Setyaningsih, W; Saputro, I E; Palma, M; Barroso, C G
2015-02-15
A new microwave-assisted extraction (MAE) method has been investigated for the extraction of phenolic compounds from rice grains. The experimental conditions studied included temperature (125-175°C), microwave power (500-1000W), time (5-15min), solvent (10-90% EtOAc in MeOH) and solvent-to-sample ratio (10:1 to 20:1). The extraction variables were optimised by the response surface methodology. Extraction temperature and solvent were found to have a highly significant effect on the response value (p<0.0005) and the extraction time also had a significant effect (p<0.05). The optimised MAE conditions were as follows: extraction temperature 185°C, microwave power 1000W, extraction time 20min, solvent 100% MeOH, and solvent-to-sample ratio 10:1. The developed method had a high precision (in terms of CV: 5.3% for repeatability and 5.5% for intermediate precision). Finally, the new method was applied to real samples in order to investigate the presence of phenolic compounds in a wide variety of rice grains. Copyright © 2014 Elsevier Ltd. All rights reserved.
Tuning without over-tuning: parametric uncertainty quantification for the NEMO ocean model
NASA Astrophysics Data System (ADS)
Williamson, Daniel B.; Blaker, Adam T.; Sinha, Bablu
2017-04-01
In this paper we discuss climate model tuning and present an iterative automatic tuning method from the statistical science literature. The method, which we refer to here as iterative refocussing (though it is also known as history matching), avoids many of the common pitfalls of automatic tuning procedures that are based on optimisation of a cost function, principally the over-tuning of a climate model due to using only partial observations. This avoidance comes by seeking to rule out parameter choices that we are confident could not reproduce the observations, rather than seeking the model that is closest to them (a procedure that risks over-tuning). We comment on the state of climate model tuning and illustrate our approach through three waves of iterative refocussing of the NEMO (Nucleus for European Modelling of the Ocean) ORCA2 global ocean model run at 2° resolution. We show how at certain depths the anomalies of global mean temperature and salinity in a standard configuration of the model exceed 10 standard deviations from observations, and show the extent to which this can be alleviated by iterative refocussing without compromising model performance spatially. We show how model improvements can be achieved by simultaneously perturbing multiple parameters, and illustrate the potential of using low-resolution ensembles to tune NEMO ORCA configurations at higher resolutions.
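A minimal sketch of the ruling-out step in iterative refocussing (history matching): score each candidate parameter setting by an implausibility measure and retain only those not confidently ruled out; the simulator, observation and variances below are synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(5)

def simulator(theta):
    # Stand-in for an expensive climate model output (e.g. mean temperature).
    return 2.0 * theta[0] - theta[1] ** 2

obs, obs_var, discrepancy_var = 1.3, 0.05, 0.1

candidates = rng.uniform(-2.0, 2.0, (1000, 2))
outputs = np.array([simulator(t) for t in candidates])

# Implausibility: standardised distance between model output and observation.
implausibility = np.abs(outputs - obs) / np.sqrt(obs_var + discrepancy_var)

not_ruled_out = candidates[implausibility < 3.0]   # conventional 3-sigma cutoff
print(f"{len(not_ruled_out)} of {len(candidates)} parameter settings retained")
```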
Optimisation of active suspension control inputs for improved performance of active safety systems
NASA Astrophysics Data System (ADS)
Čorić, Mirko; Deur, Joško; Xu, Li; Tseng, H. Eric; Hrovat, Davor
2018-01-01
A collocation-type control variable optimisation method is used to investigate the extent to which the fully active suspension (FAS) can be applied to improve the vehicle electronic stability control (ESC) performance and reduce the braking distance. First, the optimisation approach is applied to the scenario of vehicle stabilisation during the sine-with-dwell manoeuvre. The results are used to provide insights into different FAS control mechanisms for vehicle performance improvements related to responsiveness and yaw rate error reduction indices. The FAS control performance is compared to performances of the standard ESC system, optimal active brake system and combined FAS and ESC configuration. Second, the optimisation approach is employed to the task of FAS-based braking distance reduction for straight-line vehicle motion. Here, the scenarios of uniform and longitudinally or laterally non-uniform tyre-road friction coefficient are considered. The influences of limited anti-lock braking system (ABS) actuator bandwidth and limit-cycle ABS behaviour are also analysed. The optimisation results indicate that the FAS can provide competitive stabilisation performance and improved agility when compared to the ESC system, and that it can reduce the braking distance by up to 5% for distinctively non-uniform friction conditions.
Analysis and optimisation of a mixed fluid cascade (MFC) process
NASA Astrophysics Data System (ADS)
Ding, He; Sun, Heng; Sun, Shoujun; Chen, Cheng
2017-04-01
A mixed fluid cascade (MFC) process comprising three refrigeration cycles has great capacity for large-scale LNG production, which consumes a great amount of energy. Therefore, any performance enhancement of the liquefaction process will significantly reduce the energy consumption. The MFC process is simulated and analysed using proprietary software, Aspen HYSYS. The effects of feed gas pressure, LNG storage pressure, water-cooler outlet temperature, different pre-cooling regimes, and liquefaction and sub-cooling refrigerant compositions on MFC performance are investigated and presented. The excellent numerical calculation ability and user-friendly interface of MATLAB™ are combined with the powerful thermo-physical property package of Aspen HYSYS. A genetic algorithm is then invoked to optimise the MFC process globally. After optimisation, the unit power consumption can be reduced to 4.655 kW h/kmol, or 4.366 kW h/kmol, on condition that the compressor adiabatic efficiency is 80% or 85%, respectively. Additionally, to further improve the thermodynamic efficiency of the process, configuration optimisation is conducted for the MFC process and several configurations are established. By analysing heat transfer and thermodynamic performance, the configuration entailing a pre-cooling cycle with three pressure levels and liquefaction and sub-cooling cycles with one pressure level is identified as the most efficient and thus optimal: its unit power consumption is 4.205 kW h/kmol. Additionally, the mechanism responsible for the weak performance of the suggested liquefaction cycle configuration lies in the unbalanced distribution of cold energy in the liquefaction temperature range.
Xiaoli Sun; Wengang Li; Jian Li; Yuangang Zu; Chung-Yun Hse; Jiulong Xie; Xiuhua Zhao
2016-01-01
A microwave-assisted extraction (MAE) method using an ethanol-hexane solvent mixture was employed to extract peony (Paeonia suffruticosa Andr.) seed oil (PSO). The aim of the study was to optimise the extraction for both yield and energy consumption in mixed-solvent MAE. The highest oil yield (34.49%) and lowest unit energy consumption (14 125.4 J g-1)...
Single tube genotyping of sickle cell anaemia using PCR-based SNP analysis
Waterfall, Christy M.; Cobb, Benjamin D.
2001-01-01
Allele-specific amplification (ASA) is a generally applicable technique for the detection of known single nucleotide polymorphisms (SNPs), deletions, insertions and other sequence variations. Conventionally, two reactions are required to determine the zygosity of DNA in a two-allele system, along with significant upstream optimisation to define the specific test conditions. Here, we combine single tube bi-directional ASA with a ‘matrix-based’ optimisation strategy, speeding up the whole process in a reduced reaction set. We use sickle cell anaemia as our model SNP system, a genetic disease that is currently screened using ASA methods. Discriminatory conditions were rapidly optimised enabling the unambiguous identification of DNA from homozygous sickle cell patients (HbS/S), heterozygous carriers (HbA/S) or normal DNA in a single tube. Simple downstream mathematical analyses based on product yield across the optimisation set allow an insight into the important aspects of priming competition and component interactions in this competitive PCR. This strategy can be applied to any polymorphism, defining specific conditions using a multifactorial approach. The inherent simplicity and low cost of this PCR-based method validates bi-directional ASA as an effective tool in future clinical screening and pharmacogenomic research where more expensive fluorescence-based approaches may not be desirable. PMID:11726702
Ye, Haoyu; Ignatova, Svetlana; Peng, Aihua; Chen, Lijuan; Sutherland, Ian
2009-06-26
This paper builds on previous modelling research with short single-layer columns to develop rapid methods for optimising high-performance counter-current chromatography at constant stationary phase retention. Benzyl alcohol and p-cresol are used as model compounds to rapidly optimise first the flow and then the rotational speed operating conditions at a preparative scale with long columns for a given phase system, using a Dynamic Extractions Midi-DE centrifuge. The transfer to a high-value extract, such as the crude ethanol extract of the Chinese herbal medicine Millettia pachycarpa Benth., is then demonstrated and validated using the same phase system. The results show that constant stationary phase modelling of flow and speed with long multilayer columns works well as a cheap, quick and effective method of optimising operating conditions for the chosen phase system, hexane-ethyl acetate-methanol-water (1:0.8:1:0.6, v/v). Optimum conditions for resolution were a flow of 20 ml/min and a speed of 1200 rpm, but for throughput the optimum flow was 80 ml/min at the same speed. The results show that 80 ml/min gave the best throughputs for tephrosin (518 mg/h), pyranoisoflavone (47.2 mg/h) and dehydrodeguelin (10.4 mg/h), whereas for deguelin (100.5 mg/h) the best flow rate was 40 ml/min.
Giri, Anupam; Zelinkova, Zuzana; Wenzl, Thomas
2017-12-01
For the implementation of Regulation (EC) No 2065/2003 on smoke flavourings used or intended for use in or on foods, a method based on solid-phase microextraction (SPME) GC/MS was developed for the characterisation of liquid smoke products. A statistically based experimental design (DoE) was used for method optimisation. The best general conditions for quantitative analysis of the liquid smoke compounds were obtained with a polydimethylsiloxane/divinylbenzene (PDMS/DVB) fibre, 60°C extraction temperature, 30 min extraction time, 250°C desorption temperature, 180 s desorption time, 15 s agitation time and 250 rpm agitation speed. Under the optimised conditions, 119 wood pyrolysis products were identified, including furan/pyran derivatives; phenols; guaiacol, syringol and benzenediol and their derivatives; cyclic ketones; and several other heterocyclic compounds. The proposed method was repeatable (RSD < 5%) and the calibration functions were linear for all compounds under study. Nine isotopically labelled internal standards were used to improve the quantification of analytes by compensating for matrix effects that might affect headspace equilibrium and the extractability of compounds. The optimised isotope dilution SPME-GC/MS analytical method proved to be fit for purpose, allowing the rapid identification and quantification of volatile compounds in liquid smoke flavourings.
Bauld, T; Teasdale, P; Stratton, H; Uwins, H
2007-01-01
The presence of unpleasant taste and odour in drinking water is an ongoing aesthetic concern for water providers worldwide. A sensitive and robust method capable of analysis in both natural and treated waters is essential for the early detection of taste and odour events. The purpose of this study was to develop and optimise a fast stir bar sorptive extraction (SBSE) method for the analysis of geosmin and 2-methylisoborneol (MIB) in both natural water and drinking water. Limits of detection with the optimised fast method (45 min extraction time at 60 degrees C using 24 microL stir bars) were 1.1 ng/L for geosmin and 4.2 ng/L for MIB. Relative standard deviations at the detection limits were under 17% for both compounds. Use of multiple stir bars can decrease the detection limits further. The use of 25% NaCl and 5% methanol sample modifiers decreased the experimental recoveries. Likewise, addition of 1 mg/L and 1.5 mg/L NaOCl decreased the recoveries, and this effect was not reversed by the addition of 10% thiosulphate. The optimised method was used to measure geosmin concentrations in treated and untreated drinking water. MIB concentrations were below the detection limits in these waters.
Tunesi, Simonetta; Baroni, Sergio; Boarini, Sandro
2016-09-01
The results of this case study are used to argue that waste management planning should follow a detailed process, adequately confronting the complexity of waste management problems and the specificity of each urban area and of regional/national situations. To support the development or completion of integrated waste management systems, this article proposes a planning method based on: (1) the detailed analysis of waste flows; and (2) the application of a life cycle assessment to compare alternative scenarios and optimise solutions. The evolution of the City of Bologna waste management system is used to show how this approach can be applied to assess which elements improve environmental performance. The assessment of the contribution of each waste management phase in the Bologna integrated waste management system has proven that the changes applied from 2013 to 2017 result in a significant improvement of the environmental performance, mainly as a consequence of the optimised integration between materials and energy recovery: Global Warming Potential at 100 years (GWP100) diminishes from 21,949 to -11,169 t CO2-eq y(-1) and abiotic resource depletion from -403 to -520 t antimony-eq y(-1). This study analyses the collection phase in great detail. The outcomes provide specific operational recommendations to policy makers, showing: (a) the relevance of the choice of the materials forming the bags for 'door to door' collection (for non-recycled low-density polyethylene bags, 22 kg CO2-eq (tonne of waste)(-1)); (b) the relatively low environmental impacts associated with underground tanks (3.9 kg CO2-eq (tonne of waste)(-1)); (c) the relatively low impact of big street containers with respect to plastic bags (2.6 kg CO2-eq (tonne of waste)(-1)). © The Author(s) 2016.
Bock, I; Raveh-Amit, H; Losonczi, E; Carstea, A C; Feher, A; Mashayekhi, K; Matyas, S; Dinnyes, A; Pribenszky, C
2016-04-01
The efficiency of various assisted reproductive techniques can be improved by preconditioning the gametes and embryos with sublethal hydrostatic pressure treatment. However, the underlying molecular mechanism responsible for this protective effect remains unknown and requires further investigation. Here, we studied the effect of optimised hydrostatic pressure treatment on the global gene expression of mouse oocytes after embryonic genome activation. Based on a gene expression microarray analysis, a significant effect of treatment was observed in 4-cell embryos derived from treated oocytes, revealing a transcriptional footprint of hydrostatic pressure-affected genes. Functional analysis identified numerous genes involved in protein synthesis that were downregulated in 4-cell embryos in response to hydrostatic pressure treatment, suggesting that regulation of translation has a major role in optimised hydrostatic pressure-induced stress tolerance. We present a comprehensive microarray analysis and further delineate a potential mechanism responsible for the protective effect of hydrostatic pressure treatment.
Global preamplification simplifies targeted mRNA quantification
Kroneis, Thomas; Jonasson, Emma; Andersson, Daniel; Dolatabadi, Soheila; Ståhlberg, Anders
2017-01-01
The need to perform gene expression profiling using next generation sequencing and quantitative real-time PCR (qPCR) on small sample sizes and single cells is rapidly expanding. However, to analyse few molecules, preamplification is required. Here, we studied global and target-specific preamplification using 96 optimised qPCR assays. To evaluate the preamplification strategies, we monitored the reactions in real-time using SYBR Green I detection chemistry followed by melting curve analysis. Next, we compared yield and reproducibility of global preamplification to that of target-specific preamplification by qPCR using the same amount of total RNA. Global preamplification generated 9.3-fold lower yield and 1.6-fold lower reproducibility than target-specific preamplification. However, the performance of global preamplification is sufficient for most downstream applications and offers several advantages over target-specific preamplification. To demonstrate the potential of global preamplification we analysed the expression of 15 genes in 60 single cells. In conclusion, we show that global preamplification simplifies targeted gene expression profiling of small sample sizes by a flexible workflow. We outline the pros and cons for global preamplification compared to target-specific preamplification. PMID:28332609
Optimisation of warpage on thin shell part by using particle swarm optimisation (PSO)
NASA Astrophysics Data System (ADS)
Norshahira, R.; Shayfull, Z.; Nasir, S. M.; Saad, S. M. Sazli; Fathullah, M.
2017-09-01
As products nowadays move towards thinner designs, the production of plastic products faces many difficulties, because the likelihood of defects increases as the wall thickness decreases. Demand for techniques to reduce these defects is therefore increasing. The defects arise from several factors in the injection moulding process. In this study, Moldflow software was used to simulate the injection moulding process, while response surface methodology (RSM) was used to produce the mathematical model used as the fitness function input for MATLAB. The particle swarm optimisation (PSO) technique was then used to optimise the processing conditions to reduce the shrinkage and warpage of the plastic part. The results show warpage reductions of 17.60% in the x direction, 18.15% in the y direction and 10.25% in the z direction, demonstrating the reliability of this artificial-intelligence method in minimising product warpage.
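To make the optimisation step concrete, here is a minimal PSO sketch in Python that minimises a second-order RSM-style polynomial; the polynomial coefficients, variable bounds and swarm hyper-parameters are illustrative assumptions, not the study's fitted model.

```python
import numpy as np

# Hypothetical second-order RSM model of warpage (mm) as a function of two
# process settings x = (melt temperature, packing pressure), coded to [-1, 1].
def warpage(x):
    x1, x2 = x
    return 0.52 - 0.11*x1 - 0.07*x2 + 0.09*x1*x1 + 0.06*x2*x2 + 0.04*x1*x2

def pso(f, dim=2, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-1.0, 1.0)):
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n, dim))            # particle positions
    v = np.zeros((n, dim))                       # particle velocities
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()              # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w*v + c1*r1*(pbest - x) + c2*r2*(g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

best_x, best_w = pso(warpage)
print(f"optimal coded settings {best_x}, predicted warpage {best_w:.3f} mm")
```

The inertia weight w trades global exploration against local refinement, which is why studies of this kind typically tune it alongside the acceleration coefficients c1 and c2.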
Rani, K; Jahnen, A; Noel, A; Wolf, D
2015-07-01
In the last decade, several studies have emphasised the need to understand and optimise computed tomography (CT) procedures in order to reduce the radiation dose applied to paediatric patients. To evaluate the influence of the technical parameters on the radiation dose and the image quality, a statistical model has been developed using the design of experiments (DOE) method, which has been used successfully in various fields (industry, biology and finance), applied here to abdominal CT procedures for paediatric patients. A Box-Behnken DOE was used in this study. Three mathematical models (contrast-to-noise ratio, noise and CTDIvol) depending on three factors (tube current, tube voltage and level of iterative reconstruction) were developed and validated. They will serve as a basis for the development of a CT protocol optimisation model. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
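For readers unfamiliar with the Box-Behnken layout, the sketch below generates the coded design matrix in plain Python; the factor names come from the abstract, while the number of centre points is an assumption for illustration.

```python
from itertools import combinations, product
import numpy as np

def box_behnken(k, n_center=3):
    """Coded Box-Behnken design for k factors: every pair of factors takes
    the four (+/-1, +/-1) combinations while all other factors sit at 0,
    plus replicated centre points."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a, b in product((-1, 1), repeat=2):
            row = [0] * k
            row[i], row[j] = a, b
            runs.append(row)
    runs += [[0] * k for _ in range(n_center)]
    return np.array(runs)

# Three CT factors: tube current, tube voltage, iterative-reconstruction level.
design = box_behnken(3)
print(design.shape)   # (15, 3): 12 edge runs + 3 centre points
```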
NASA Astrophysics Data System (ADS)
Bagheri, H.; Sadjadi, S. Y.; Sadeghian, S.
2013-09-01
One of the most significant tools for many engineering projects is three-dimensional modelling of the Earth, which has many applications in Geospatial Information Systems (GIS), e.g. creating a Digital Terrain Model (DTM). DTMs have numerous applications in the fields of science, engineering, design and project administration. One of the most significant steps in the DTM technique is the interpolation of elevations to create a continuous surface. There are several interpolation methods, which yield different results depending on the environmental conditions and the input data. The interpolation methods used in this study, consisting of polynomials and the Inverse Distance Weighting (IDW) method, were optimised with Genetic Algorithms (GA). In this paper, Artificial Intelligence (AI) techniques such as GA and Neural Networks (NN) are applied to the samples to optimise the interpolation methods and the production of a Digital Elevation Model (DEM). The aim is to evaluate the accuracy of the interpolation methods. Universal interpolation over entire neighbouring regions can be suggested for larger regions, which can be divided into smaller regions. The results obtained from applying GA and ANN individually are compared with the typical interpolation methods for the creation of elevations. The results showed that AI methods have high potential for the interpolation of elevations, and that using neural network algorithms for interpolation, together with GA-based optimisation of the IDW method, allows highly precise elevations to be estimated.
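As a concrete reference point, a minimal IDW interpolator is sketched below; the sample coordinates and elevations are invented, and the exponent `power` is exactly the kind of parameter a GA could tune against check points.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse Distance Weighting: each queried elevation is a weighted mean
    of the known elevations, with weights 1/d**power."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps)**power          # eps guards against division by zero
    return (w @ z_known) / w.sum(axis=1)

# Toy sample: five surveyed points and two query locations.
pts = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [.5, .5]])
z   = np.array([10., 12., 11., 13., 12.5])
print(idw(pts, z, np.array([[.25, .25], [.75, .75]])))
```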
NASA Astrophysics Data System (ADS)
Rahmani, Kianoosh; Kavousifard, Farzaneh; Abbasi, Alireza
2017-09-01
This article proposes a novel probabilistic Distribution Feeder Reconfiguration (DFR) method that takes uncertainty impacts into account with high accuracy. To achieve this aim, different scenarios are generated to represent the degree of uncertainty in the investigated elements, namely the active and reactive load consumption and the active power generation of the wind power units. Notably, a normal Probability Density Function (PDF) is divided into several class intervals for each uncertain parameter, according to the desired accuracy. Besides, the Weibull PDF is utilised to model the variation of power production in the wind generators. The proposed problem is solved using Fuzzy Adaptive Modified Particle Swarm Optimisation to find the optimal switching scheme in the multi-objective DFR. Moreover, this paper proposes two new mutation methods that adjust the inertia weight of PSO via fuzzy rules to enhance its global-search ability within the entire search space.
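A minimal sketch of the scenario-generation step described above, assuming illustrative class-interval widths, Weibull parameters and turbine cut-in/rated speeds (none of which are taken from the paper):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Load uncertainty: split a normal PDF (mean 1.0 p.u., sigma 5%) into class
# intervals and represent each interval by its midpoint and probability mass.
edges = np.array([-3, -2, -1, 0, 1, 2, 3]) * 0.05 + 1.0
probs = np.diff(stats.norm.cdf(edges, loc=1.0, scale=0.05))
probs /= probs.sum()                 # renormalise the truncated tails
mids = 0.5 * (edges[:-1] + edges[1:])

# Wind production: sample wind speed from a Weibull PDF (shape k, scale c are
# assumed) and map it through a simple linear turbine power curve.
k, c = 2.0, 8.0                      # illustrative Weibull parameters (m/s)
v = c * rng.weibull(k, size=1000)
p_wind = np.clip((v - 3.0) / (12.0 - 3.0), 0.0, 1.0)   # cut-in 3, rated 12 m/s

# One scenario = (load level, wind output) drawn from the two models.
i = rng.choice(len(mids), p=probs)
j = rng.integers(len(p_wind))
print("load scenario:", mids[i], " wind output (p.u.):", p_wind[j])
```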
An analysis of parameter sensitivities of preference-inspired co-evolutionary algorithms
NASA Astrophysics Data System (ADS)
Wang, Rui; Mansor, Maszatul M.; Purshouse, Robin C.; Fleming, Peter J.
2015-10-01
Many-objective optimisation problems remain challenging for many state-of-the-art multi-objective evolutionary algorithms. Preference-inspired co-evolutionary algorithms (PICEAs) which co-evolve the usual population of candidate solutions with a family of decision-maker preferences during the search have been demonstrated to be effective on such problems. However, it is unknown whether PICEAs are robust with respect to the parameter settings. This study aims to address this question. First, a global sensitivity analysis method - the Sobol' variance decomposition method - is employed to determine the relative importance of the parameters controlling the performance of PICEAs. Experimental results show that the performance of PICEAs is controlled for the most part by the number of function evaluations. Next, we investigate the effect of key parameters identified from the Sobol' test and the genetic operators employed in PICEAs. Experimental results show improved performance of the PICEAs as more preferences are co-evolved. Additionally, some suggestions for genetic operator settings are provided for non-expert users.
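A compact illustration of the Sobol' variance decomposition workflow, assuming the open-source SALib package (its sampling module has been reorganised across versions) and a cheap analytical stand-in for the expensive PICEA performance function:

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Three hypothetical control parameters, all scaled to [0, 1].
problem = {
    "num_vars": 3,
    "names": ["n_evaluations", "n_preferences", "crossover_rate"],
    "bounds": [[0, 1], [0, 1], [0, 1]],
}

X = saltelli.sample(problem, 1024)            # Saltelli sampling scheme
# Toy response dominated by the first input, mimicking the paper's finding.
Y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * X[:, 2] * X[:, 0]
Si = sobol.analyze(problem, Y)
print(dict(zip(problem["names"], Si["S1"])))  # first-order indices
print(dict(zip(problem["names"], Si["ST"])))  # total-order indices
```

First-order indices S1 capture each parameter's direct contribution to output variance, while total-order ST adds its interactions; a dominant index for one input mirrors the paper's conclusion that the number of function evaluations controls most of the performance.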
On simulated annealing phase transitions in phylogeny reconstruction.
Strobl, Maximilian A R; Barker, Daniel
2016-08-01
Phylogeny reconstruction with global criteria is NP-complete or NP-hard, hence in general requires a heuristic search. We investigate the powerful, physically inspired, general-purpose heuristic simulated annealing, applied to phylogeny reconstruction. Simulated annealing mimics the physical process of annealing, where a liquid is gently cooled to form a crystal. During the search, periods of elevated specific heat occur, analogous to physical phase transitions. These simulated annealing phase transitions play a crucial role in the outcome of the search. Nevertheless, they have received comparably little attention, for phylogeny or other optimisation problems. We analyse simulated annealing phase transitions during searches for the optimal phylogenetic tree for 34 real-world multiple alignments. In the same way in which melting temperatures differ between materials, we observe distinct specific heat profiles for each input file. We propose this reflects differences in the search landscape and can serve as a measure for problem difficulty and for suitability of the algorithm's parameters. We discuss application in algorithmic optimisation and as a diagnostic to assess parameterisation before computationally costly, large phylogeny reconstructions are launched. Whilst the focus here lies on phylogeny reconstruction under maximum parsimony, it is plausible that our results are more widely applicable to optimisation procedures in science and industry. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
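The specific-heat diagnostic is simple to reproduce: at each temperature one estimates C(T) = Var(E)/T^2 from the energies visited at equilibrium, and peaks in C(T) mark the phase transitions. A toy sketch follows, with an illustrative one-dimensional landscape standing in for a parsimony score:

```python
import numpy as np

rng = np.random.default_rng(2)

def energy(x):
    # Toy rugged landscape standing in for a tree-score function.
    return x**2 + 10 * np.sin(3 * x)**2

x, T, alpha = rng.uniform(-5, 5), 10.0, 0.9
while T > 1e-2:
    E_samples = []
    for _ in range(500):                    # equilibrate at this temperature
        cand = x + rng.normal(0, 0.5)
        dE = energy(cand) - energy(x)
        if dE < 0 or rng.random() < np.exp(-dE / T):
            x = cand                        # Metropolis acceptance rule
        E_samples.append(energy(x))
    E = np.array(E_samples)
    C = E.var() / T**2                      # specific heat: peaks mark transitions
    print(f"T={T:7.4f}  <E>={E.mean():7.3f}  C={C:9.3f}")
    T *= alpha                              # geometric cooling schedule
```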
Distributed support vector machine in master-slave mode.
Chen, Qingguo; Cao, Feilong
2018-05-01
It is well known that the support vector machine (SVM) is an effective learning algorithm. The alternating direction method of multipliers (ADMM) has emerged as a powerful technique for solving distributed optimisation models. This paper proposes a distributed SVM algorithm in a master-slave mode (MS-DSVM), which integrates a distributed SVM and ADMM acting in a master-slave configuration where the master node and slave nodes are connected, meaning the results can be broadcast. The distributed SVM is regarded as a regularised optimisation problem and modelled as a series of convex optimisation sub-problems that are solved by ADMM. Additionally, an over-relaxation technique is utilised to accelerate the convergence rate of the proposed MS-DSVM. Our theoretical analysis demonstrates that the proposed MS-DSVM achieves linear convergence, the fastest convergence rate established for standard distributed ADMM algorithms. Numerical examples demonstrate that the convergence and accuracy of the proposed MS-DSVM are superior to those of existing methods under the ADMM framework. Copyright © 2018 Elsevier Ltd. All rights reserved.
Semantic distance as a critical factor in icon design for in-car infotainment systems.
Silvennoinen, Johanna M; Kujala, Tuomo; Jokinen, Jussi P P
2017-11-01
In-car infotainment systems require icons that enable fluent cognitive information processing and safe interaction while driving. An important issue is how to find an optimised set of icons for different functions in terms of semantic distance. In an optimised icon set, every icon needs to be semantically as close as possible to the function it visually represents and semantically as far as possible from the other functions represented concurrently. In three experiments (N = 21 each), semantic distances of 19 icons to four menu functions were studied with preference rankings, verbal protocols, and the primed product comparisons method. The results show that the primed product comparisons method can be efficiently utilised for finding an optimised set of icons for time-critical applications out of a larger set of icons. The findings indicate the benefits of the novel methodological perspective into the icon design for safety-critical contexts in general. Copyright © 2017 Elsevier Ltd. All rights reserved.
Optimisation of substrate blends in anaerobic co-digestion using adaptive linear programming.
García-Gen, Santiago; Rodríguez, Jorge; Lema, Juan M
2014-12-01
Anaerobic co-digestion of multiple substrates has the potential to enhance biogas productivity by making use of the complementary characteristics of different substrates. A blending strategy based on a linear programming optimisation method is proposed, aiming at maximising COD conversion into methane while simultaneously maintaining digestate and biogas quality. The method incorporates experimental and heuristic information to define the objective function and the linear restrictions. The active constraints are continuously adapted (by relaxing the restriction boundaries) such that further gains in methane productivity can be achieved. The feasibility of the blends calculated with this methodology was previously tested and accurately predicted with an ADM1-based co-digestion model. This was validated in a continuously operated pilot plant, treating for several months different mixtures of glycerine, gelatine and pig manure at organic loading rates from 1.50 to 4.93 gCOD/Ld and hydraulic retention times between 32 and 40 days under mesophilic conditions. Copyright © 2014 Elsevier Ltd. All rights reserved.
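A minimal sketch of the linear-programming core, using scipy and purely illustrative methane yields and restriction boundaries; the paper's adaptive step would correspond to iteratively relaxing entries of b_ub between solves:

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables: daily feed of glycerine, gelatine and pig manure (kg COD/d).
# All coefficients below are invented for illustration, not the paper's values.
methane_yield = np.array([0.30, 0.25, 0.20])   # m3 CH4 per kg COD fed

c = -methane_yield                              # maximise CH4 => minimise -yield

# Heuristic/experimental restrictions, e.g. total organic load and a cap on
# the glycerine fraction to protect digestate quality.
A_ub = np.array([
    [1.0,  1.0,  1.0],    # total load <= 100 kg COD/d
    [0.8, -0.2, -0.2],    # glycerine <= 20 % of the blend
])
b_ub = np.array([100.0, 0.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3, method="highs")
print("blend (kg COD/d):", res.x, " CH4 (m3/d):", -res.fun)
```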
Systemic solutions for multi-benefit water and environmental management.
Everard, Mark; McInnes, Robert
2013-09-01
The environmental and financial costs of inputs to, and unintended consequences arising from narrow consideration of outputs from, water and environmental management technologies highlight the need for low-input solutions that optimise outcomes across multiple ecosystem services. Case studies examining the inputs and outputs associated with several ecosystem-based water and environmental management technologies reveal a range from those that differ little from conventional electro-mechanical engineering techniques through methods, such as integrated constructed wetlands (ICWs), designed explicitly as low-input systems optimising ecosystem service outcomes. All techniques present opportunities for further optimisation of outputs, and hence for greater cumulative public value. We define 'systemic solutions' as "…low-input technologies using natural processes to optimise benefits across the spectrum of ecosystem services and their beneficiaries". They contribute to sustainable development by averting unintended negative impacts and optimising benefits to all ecosystem service beneficiaries, increasing net economic value. Legacy legislation addressing issues in a fragmented way, associated 'ring-fenced' budgets and established management assumptions represent obstacles to implementing 'systemic solutions'. However, flexible implementation of legacy regulations recognising their primary purpose, rather than slavish adherence to detailed sub-clauses, may achieve greater overall public benefit through optimisation of outcomes across ecosystem services. Systemic solutions are not a panacea if applied merely as 'downstream' fixes, but are part of, and a means to accelerate, broader culture change towards more sustainable practice. This necessarily entails connecting a wider network of interests in the formulation and design of mutually-beneficial systemic solutions, including for example spatial planners, engineers, regulators, managers, farming and other businesses, and researchers working on ways to quantify and optimise delivery of ecosystem services. Copyright © 2013 Elsevier B.V. All rights reserved.
Robustness analysis of bogie suspension components Pareto optimised values
NASA Astrophysics Data System (ADS)
Mousavi Bideleh, Seyed Milad
2017-08-01
The bogie suspension system of high-speed trains can significantly affect vehicle performance. Multiobjective optimisation problems are often formulated and solved to find Pareto optimised values of the suspension components and improve cost efficiency in railway operations from different perspectives. Uncertainties in the design parameters of the suspension system can negatively influence the dynamic behaviour of railway vehicles. In this regard, robustness analysis of a bogie dynamics response with respect to uncertainties in the suspension design parameters is considered. A one-car railway vehicle model with 50 degrees of freedom and wear/comfort Pareto optimised values of bogie suspension components is chosen for the analysis. Longitudinal and lateral primary stiffnesses, longitudinal and vertical secondary stiffnesses, as well as yaw damping are considered as the five design parameters. The effects of parameter uncertainties on wear, ride comfort, track shift force, stability, and risk of derailment are studied by varying the design parameters around their respective Pareto optimised values according to a lognormal distribution with different coefficients of variation (COVs). The robustness analysis is carried out based on the maximum entropy concept. The multiplicative dimensional reduction method is utilised to simplify the calculation of fractional moments and improve the computational efficiency. The results showed that the dynamics response of the vehicle with wear/comfort Pareto optimised values of bogie suspension is robust against uncertainties in the design parameters, and the probability of failure is small for parameter uncertainties with COV up to 0.1.
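The lognormal perturbation scheme is easy to reproduce: given a nominal (mean) value and a COV, the underlying normal parameters follow from sigma^2 = ln(1 + COV^2) and mu = ln(mean) - sigma^2/2. The sketch below uses invented nominal suspension values and a toy response limit, not the paper's vehicle model:

```python
import numpy as np

rng = np.random.default_rng(3)

def lognormal_about(mean, cov, size):
    """Sample a lognormal whose arithmetic mean and coefficient of variation
    match the nominal value and the assumed scatter."""
    sigma2 = np.log(1.0 + cov**2)
    mu = np.log(mean) - 0.5 * sigma2
    return rng.lognormal(mu, np.sqrt(sigma2), size)

# Illustrative nominal suspension values (N/m and Ns/m), not the paper's data.
nominal = {"kx_primary": 3.0e7, "ky_primary": 6.0e6, "c_yaw": 4.0e5}
samples = {k: lognormal_about(v, cov=0.1, size=10000) for k, v in nominal.items()}

# Monte Carlo robustness check against a toy normalised response limit.
response = 0.4 * samples["kx_primary"] / 3.0e7 + 0.6 * samples["c_yaw"] / 4.0e5
print("P(failure) ~", np.mean(response > 1.2))
```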
Satellite Vibration Testing: Angle optimisation method to Reduce Overtesting
NASA Astrophysics Data System (ADS)
Knight, Charly; Remedia, Marcello; Aglietti, Guglielmo S.; Richardson, Guy
2018-06-01
Spacecraft overtesting is a long-running problem, and the main focus of most attempts to reduce it has been to adjust the base vibration input (i.e. notching). Instead, this paper examines testing alternatives for secondary structures (equipment) coupled to the main structure (satellite) when they are tested separately. Even if the vibration source is applied along one of the orthogonal axes at the base of the coupled system (satellite plus equipment), the dynamics of the system, and potentially the interface configuration, mean the vibration at the interface may not occur all along one axis, much less the corresponding orthogonal axis of the base excitation. This paper proposes an alternative testing methodology in which a piece of equipment is tested at an offset angle. This Angle Optimisation method may involve multiple tests, each with an altered input direction, allowing the best match between all specified equipment responses and the coupled-system tests. An optimisation process compares the calculated equipment RMS values for a range of inputs with the maximum coupled-system RMS values and is used to find the optimal testing configuration for the given parameters. A case study was performed to find the best testing angles to match the acceleration responses of the centre of mass and the sum of interface forces for all three axes, as well as the von Mises stress for an element by a fastening point. The angle optimisation method resulted in RMS values and PSD responses much closer to those of the coupled system than traditional testing. The optimum testing configuration resulted in an overall average error significantly smaller than the traditional method. Crucially, this case study shows that the optimum test campaign could be a single equipment-level test as opposed to the traditional three orthogonal-direction tests.
Griffanti, Ludovica; Zamboni, Giovanna; Khan, Aamira; Li, Linxin; Bonifacio, Guendalina; Sundaresan, Vaanathi; Schulz, Ursula G; Kuker, Wilhelm; Battaglini, Marco; Rothwell, Peter M; Jenkinson, Mark
2016-11-01
Reliable quantification of white matter hyperintensities of presumed vascular origin (WMHs) is increasingly needed, given the presence of these MRI findings in patients with several neurological and vascular disorders, as well as in elderly healthy subjects. We present BIANCA (Brain Intensity AbNormality Classification Algorithm), a fully automated, supervised method for WMH detection, based on the k-nearest neighbour (k-NN) algorithm. Relative to previous k-NN based segmentation methods, BIANCA offers different options for weighting the spatial information, local spatial intensity averaging, and different options for the choice of the number and location of the training points. BIANCA is multimodal and highly flexible so that the user can adapt the tool to their protocol and specific needs. We optimised and validated BIANCA on two datasets with different MRI protocols and patient populations (a "predominantly neurodegenerative" and a "predominantly vascular" cohort). BIANCA was first optimised on a subset of images for each dataset in terms of overlap and volumetric agreement with a manually segmented WMH mask. The correlation between the volumes extracted with BIANCA (using the optimised set of options), the volumes extracted from the manual masks and visual ratings showed that BIANCA is a valid alternative to manual segmentation. The optimised set of options was then applied to the whole cohorts and the resulting WMH volume estimates showed good correlations with visual ratings and with age. Finally, we performed a reproducibility test, to evaluate the robustness of BIANCA, and compared BIANCA performance against existing methods. Our findings suggest that BIANCA, which will be freely available as part of the FSL package, is a reliable method for automated WMH segmentation in large cross-sectional cohort studies. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
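A minimal sketch of the k-NN voxel-classification idea underlying this kind of tool, using scikit-learn and synthetic features; the feature construction, spatial weighting factor and labels are illustrative assumptions rather than BIANCA's actual implementation:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(4)

# Toy training data: one row per voxel with intensity features (e.g. FLAIR,
# T1) plus weighted spatial coordinates; labels come from manual WMH masks.
n = 5000
X_intensity = rng.normal(size=(n, 2))
X_spatial = rng.uniform(size=(n, 3))
sw = 0.5                                   # spatial weighting hyper-parameter
X = np.hstack([X_intensity, sw * X_spatial])
y = (X_intensity[:, 0] + 0.3 * rng.normal(size=n) > 1.0).astype(int)

clf = KNeighborsClassifier(n_neighbors=15).fit(X, y)
query = np.hstack([rng.normal(size=(3, 2)), sw * rng.uniform(size=(3, 3))])
print(clf.predict_proba(query)[:, 1])      # per-voxel lesion probability
```

Thresholding the per-voxel probability map then yields the binary lesion mask, which is where overlap and volumetric agreement with the manual segmentation can be optimised.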
Optimisation of a double-centrifugation method for preparation of canine platelet-rich plasma.
Shin, Hyeok-Soo; Woo, Heung-Myong; Kang, Byung-Jae
2017-06-26
Platelet-rich plasma (PRP) has attracted interest in regenerative medicine because of its growth factors. However, there is considerable variability in the recovery and yield of platelets and the concentration of growth factors in PRP preparations. The aim of this study was to identify the optimal relative centrifugal force and spin time for the preparation of PRP from canine blood using a double-centrifugation tube method. Whole blood samples were collected in citrate blood collection tubes from 12 healthy beagles. For the first centrifugation step, 10 different run conditions were compared to determine which condition produced optimal recovery of platelets. Once the optimal condition was identified, platelet-containing plasma prepared using that condition was subjected to a second centrifugation to pellet the platelets. For the second centrifugation, 12 different run conditions were compared to identify the centrifugal force and spin time producing maximal pellet recovery and concentration increase. Growth factor levels were estimated using ELISA to measure platelet-derived growth factor-BB (PDGF-BB) concentrations in optimised CaCl2-activated platelet fractions. The highest platelet recovery rate and yield were obtained by first centrifuging whole blood at 1000 g for 5 min and then centrifuging the recovered platelet-enriched plasma at 1500 g for 15 min. This protocol recovered 80% of platelets from whole blood, increased the platelet concentration six-fold, and produced the highest concentration of PDGF-BB in activated fractions. We have described an optimised double-centrifugation tube method for the preparation of PRP from canine blood. This optimised method does not require particularly expensive equipment or high technical ability and can readily be carried out in a veterinary clinical setting.
Saint-Pierre, S
2012-01-01
Over the last few decades, the steady progress achieved in reducing planned exposures of both workers and the public has been admirable in the nuclear sector. However, the disproportionate focus on tiny public exposures and radioactive discharges associated with normal operations came at a high price, and the quasi-denial of a risk of major accident and related weaknesses in emergency preparedness and response came at an even higher price. Fukushima has unfortunately taught us that radiological protection (RP) for emergency and post-emergency situations can be much more than a simple evacuation that lasts 24-48 h, with people returning safely to their homes soon afterwards. On optimisation of emergency and post-emergency exposures, the only 'show in town' in terms of international RP policy improvements has been the issuance of the 2007 Recommendations of the International Commission on Radiological Protection (ICRP). However, no matter how genuine these improvements are, they have not been 'road tested' on the practical reality of severe accidents. Post-Fukushima, there is a compelling case to review the practical adequacy of key RP notions such as optimisation, evacuation, sheltering, and reference levels for workers and the public, and to amend these notions with a view to making the international RP system more useful in the event of a severe accident. On optimisation of planned exposures, the reality is that, nowadays, margins for further reductions of public doses in the nuclear sector are very small, and the smaller the dose, the greater the extra effort needed to reduce the dose further. If sufficient caution is not exercised in the use of RP notions such as dose constraints, there is a real risk of challenging nuclear power technologies beyond safety reasons. For nuclear new build, it is the optimisation of key operational parameters of nuclear power technologies (not RP) that is of paramount importance to improve their overall efficiency. In pursuing further improvements in the international RP system, it should be clearly borne in mind that the system is generally based on protection against the risk of cancer and hereditary diseases. The system also protects against deterministic non-cancer effects on tissues and organs. In seeking refinements of such protective notions, ICRP is invited to pay increased attention to the fact that a continued balance must be struck between beneficial activities that cause exposures and protection. The global nuclear industry is committed to help overcome these key RP issues as part of the RP community's upcoming international deliberations towards a more efficient international RP system. Copyright © 2012. Published by Elsevier Ltd.
Kalin, Robert M.; Hamilton, John T.G.; Harper, David B.; Miller, Laurence G.; Lamb, Clare; Kennedy, James T.; Downey, Angela; McCauley, Sean; Goldstein, Allen H.
2001-01-01
Gas chromatography/mass spectrometry/isotope ratio mass spectrometry (GC/MS/IRMS) methods for δ13C measurement of the halomethanes CH3Cl, CH3Br, CH3I and methanethiol (CH3SH) during studies of their biological production, biological degradation, and abiotic reactions are presented. Optimisation of gas chromatographic parameters allowed the identification and quantification of CO2, O2, CH3Cl, CH3Br, CH3I and CH3SH from a single sample, and also the concurrent measurement of δ13C for each of the halomethanes and methanethiol. Precision of δ13C measurements for halomethane standards decreased (±0.3, ±0.5 and ±1.3‰) with increasing mass (CH3Cl, CH3Br, CH3I, respectively). Given that carbon isotope effects during biological production, biological degradation and some chemical (abiotic) reactions can be as much as 100‰, stable isotope analysis offers a precise method to study the global sources and sinks of these halogenated compounds that are of considerable importance to our understanding of stratospheric ozone destruction.
Intelligent optimisation algorithm for a large-scale structural design
NASA Astrophysics Data System (ADS)
Dominique, Stephane
The implementation of an automated decision support system in the field of design and structural optimisation can give a significant advantage to any industry working on mechanical designs. Indeed, by providing solution ideas to a designer, or by upgrading existing design solutions while the designer is not at work, the system may reduce the project cycle time or allow more time to produce a better design. This thesis presents a new approach to automating a design process based on Case-Based Reasoning (CBR), in combination with a new genetic algorithm named Genetic Algorithm with Territorial core Evolution (GATE). This approach was developed in order to reduce the operating cost of the process. However, as the system implementation cost is quite high, the approach is better suited to large-scale design problems, and particularly to design problems that the designer plans to solve for many different specification sets. First, the CBR process uses a databank filled with every known solution to similar design problems. Then, the closest solutions to the current problem in terms of specifications are selected. After this, during the adaptation phase, an artificial neural network (ANN) interpolates amongst known solutions to produce an additional solution to the current problem, using the current specifications as inputs. Each solution produced and selected by the CBR is then used to initialise the population of an island of the genetic algorithm, which optimises the solution further during the refinement phase. Using progressive refinement, the algorithm starts with only the most important variables for the problem; then, as the optimisation progresses, the remaining variables are gradually introduced, layer by layer. The genetic algorithm used is a new algorithm specifically created during this thesis to solve optimisation problems in the field of mechanical device structural design. The algorithm, named GATE, is essentially a real-number genetic algorithm that prevents new individuals from being born too close to previously evaluated solutions. The restricted area becomes smaller or larger during the optimisation to allow global or local search when necessary. Also, a new search operator named the Substitution Operator is incorporated in GATE. This operator allows an ANN surrogate model to guide the algorithm toward the most promising areas of the design space. The suggested CBR approach and GATE were tested on several simple test problems, as well as on the industrial problem of designing a gas turbine engine rotor disc. The results are compared with those obtained for the same problems by many other popular optimisation algorithms, such as (depending on the problem) gradient algorithms, a binary genetic algorithm, a real-number genetic algorithm, a genetic algorithm using multiple-parent crossovers, a differential evolution genetic algorithm, the Hooke & Jeeves generalised pattern search method, and POINTER from the software I-SIGHT 3.5. Results show that GATE is quite competitive, giving the best results for 5 of the 6 constrained optimisation problems. GATE also provided the best results of all on problems produced by a Maximum Set Gaussian landscape generator. Finally, GATE provided a disc 4.3% lighter than the best other tested algorithm (POINTER) for the gas turbine engine rotor disc problem. One drawback of GATE is its lower efficiency on highly multimodal unconstrained problems, for which it gave quite poor results relative to its implementation cost.
To conclude, according to the preliminary results obtained during this thesis, the suggested CBR process, combined with GATE, seems to be a very good candidate to automate and accelerate the structural design of mechanical devices, potentially reducing significantly the cost of industrial preliminary design processes.
Omar, J; Boix, A; Kerckhove, G; von Holst, C
2016-12-01
Titanium dioxide (TiO2) has various applications in consumer products and is also used as an additive in food and feeding stuffs. For the characterisation of this product, including the determination of nanoparticles, there is a strong need for the availability of corresponding methods of analysis. This paper presents an optimisation process for the characterisation of polydisperse-coated TiO2 nanoparticles. As a first step, probe ultrasonication was optimised using a central composite design in which the amplitude and time were the selected variables to disperse, i.e., to break up agglomerates and/or aggregates of the material. The results showed that high amplitudes (60%) favoured a better dispersion, and time was fixed at mid-values (5 min). In a next step, key factors of asymmetric flow field-flow fractionation (AF4), namely cross-flow (CF), detector flow (DF), exponential decay of the cross-flow (CFexp) and focus time (Ft), were studied through experimental design. Firstly, a full-factorial design was employed to establish the statistically significant factors (p < 0.05). Then, the information obtained from the full-factorial design was utilised by applying a central composite design to obtain the following optimum conditions of the system: CF, 1.6 ml min-1; DF, 0.4 ml min-1; Ft, 5 min; and CFexp, 0.6. Once the optimum conditions were obtained, the stability of the dispersed sample was measured for 24 h by analysing 10 replicates with AF4 in order to assess the performance of the optimised dispersion protocol. Finally, the recovery of the optimised method, particle shape and particle size distribution were estimated. PMID:27650879
Saikia, Sangeeta; Mahnot, Nikhil Kumar; Mahanta, Charu Lata
2015-03-15
Optimisation of the extraction of polyphenols from star fruit (Averrhoa carambola) pomace using response surface methodology was carried out. Two variables, temperature (°C) and ethanol concentration (%), each at 5 levels (-1.414, -1, 0, +1 and +1.414), were used to build the optimisation model using a central composite rotatable design, where -1.414 and +1.414 are the axial points, -1 and +1 the factorial points, and 0 the centre point of the design. A temperature of 40°C and an ethanol concentration of 65% were the optimised conditions for the response variables of total phenolic content, ferric reducing antioxidant capacity and 2,2-diphenyl-1-picrylhydrazyl scavenging activity. The reverse-phase high-pressure liquid chromatography chromatogram of the polyphenol extract showed eight phenolic acids and ascorbic acid. The extract was then encapsulated with maltodextrin (≤ DE 20) by spray and freeze drying methods at three different concentrations. The highest encapsulating efficiency was obtained in freeze-dried encapsulates (78-97%). The optimised model can be used for polyphenol extraction from star fruit pomace, and the microencapsulates can be incorporated into different food systems to enhance their antioxidant properties. Copyright © 2014 Elsevier Ltd. All rights reserved.
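For reference, the coded central composite rotatable design (CCRD) layout is easy to generate; for k factors the rotatable axial distance is alpha = (2^k)^(1/4), which gives the 1.414 quoted above for k = 2. In the sketch below, the centre-point mapping to real units and the step sizes are assumptions for illustration:

```python
from itertools import product
import numpy as np

def ccrd(k=2, n_center=5):
    """Central composite rotatable design in coded units: 2**k factorial
    points, 2*k axial points at +/-alpha with alpha = (2**k)**0.25,
    and replicated centre points."""
    factorial = np.array(list(product((-1, 1), repeat=k)), dtype=float)
    alpha = (2**k) ** 0.25                       # 1.414 for k = 2
    axial = np.vstack([a * np.eye(k)[i] for i in range(k) for a in (-alpha, alpha)])
    center = np.zeros((n_center, k))
    return np.vstack([factorial, axial, center])

# Two factors: extraction temperature (deg C) and ethanol concentration (%).
coded = ccrd()
# Assumed centre (40 deg C, 65 %) and step sizes (10 deg C, 15 %) for mapping.
real = np.array([40.0, 65.0]) + coded * np.array([10.0, 15.0])
print(real)
```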
Vázquez-Rowe, Ian; Iribarren, Diego
2015-01-01
Life-cycle (LC) approaches play a significant role in energy policy making to determine the environmental impacts associated with the choice of energy source. Data envelopment analysis (DEA) can be combined with LC approaches to provide quantitative benchmarks that orientate the performance of energy systems towards environmental sustainability, with different implications depending on the selected LC + DEA method. The present paper examines currently available LC + DEA methods and develops a novel method combining carbon footprinting (CFP) and DEA. Thus, the CFP + DEA method is proposed, a five-step structure including data collection for multiple homogenous entities, calculation of target operating points, evaluation of current and target carbon footprints, and result interpretation. As the current context for energy policy implies an anthropocentric perspective with focus on the global warming impact of energy systems, the CFP + DEA method is foreseen to be the most consistent LC + DEA approach to provide benchmarks for energy policy making. The fact that this method relies on the definition of operating points with optimised resource intensity helps to moderate the concerns about the omission of other environmental impacts. Moreover, the CFP + DEA method benefits from CFP specifications in terms of flexibility, understanding, and reporting. PMID:25654136
Brown, Susan J; Selbie, W Scott; Wallace, Eric S
2013-01-01
A common biomechanical feature of a golf swing, described in various ways in the literature, is the interaction between the thorax and pelvis, often termed the X-Factor. However, there is no consistent method within the golf biomechanics literature for calculating these segment interactions. The purpose of this study was to examine X-Factor data calculated using three reported methods in order to determine the similarity or otherwise of the data calculated using each method. A twelve-camera three-dimensional motion capture system was used to capture the driver swings of 19 participants, and a subject-specific three-dimensional biomechanical model was created, with the position and orientation of each model estimated using a global optimisation algorithm. Comparison of the X-Factor methods showed significant differences for events during the swing (P < 0.05). Data for each kinematic measure were derived as a time series for all three methods, and regression analysis of these data showed that whilst one method could be successfully mapped to another, the mappings between methods are subject dependent (P < 0.05). Findings suggest that a consistent methodology considering the X-Factor from a joint angle approach is most insightful in describing a golf swing.
Design and optimisation of a composite skin for a morphing wing
NASA Astrophysics Data System (ADS)
Michaud, Francois
Economic and environmental concerns are major drivers for the development of new technologies in aeronautics. It is in this context that the MDO-505 project, entitled Morphing Architectures and Related Technologies for Wing Efficiency Improvement, was born. The objective of this project is to design an active morphing wing that improves laminarity and thereby reduces the aircraft's fuel consumption and emissions. The research work carried out led to the design and optimisation of an adaptive composite skin that improves laminarity while preserving structural integrity. First, a three-step optimisation method was developed with the objective of minimising the mass of the composite skin while ensuring that, through active control of the morphing surface, it conforms to the desired aerodynamic profiles. The optimisation process also included strength, stability and stiffness constraints on the composite skin. Following the optimisation, the optimised skin was simplified to ease manufacturing and to comply with Bombardier Aerospace design rules. This optimisation process produced a composite skin whose deviations, or errors, from the optimised aerodynamic profiles were greatly reduced. The aerodynamic analyses performed on the resulting shapes predicted good improvements in laminarity. Subsequently, a series of analytical validations was carried out to verify the structural integrity of the composite skin, following the methods generally used by Bombardier Aerospace. First, a comparative finite element analysis validated that the morphing wing has a stiffness equivalent to that of the original wing section. The finite element model was then coupled in a loop with calculation spreadsheets to validate the stability and strength of the composite skin under the real aerodynamic load cases. Finally, a bolted-joint analysis was carried out using an in-house tool named LJ 85 BJSFM GO.v9 developed by Bombardier Aerospace. These analyses numerically validated the structural integrity of the composite skin for typical aeronautical loadings and material allowables.
Quantum chemical calculations of Cr2O3/SnO2 using density functional theory method
NASA Astrophysics Data System (ADS)
Jawaher, K. Rackesh; Indirajith, R.; Krishnan, S.; Robert, R.; Das, S. Jerome
2018-03-01
Quantum chemical calculations have been employed to study the molecular effects produced by the optimised Cr2O3/SnO2 structure. The theoretical parameters of the transparent conducting metal oxides were calculated using the DFT/B3LYP/LANL2DZ method. The optimised bond parameters, such as bond lengths, bond angles and dihedral angles, were calculated at the same level of theory. The non-linear optical property of the title compound was assessed through a first-order hyperpolarisability calculation. The HOMO-LUMO analysis explains the charge transfer interaction within the molecule. In addition, MEP and Mulliken atomic charges were also calculated and analysed.
NASA Astrophysics Data System (ADS)
Liou, Cheng-Dar
2015-09-01
This study investigates an infinite-capacity Markovian queue with a single unreliable service station, in which customers may balk (not enter) and renege (leave the queue after entering). The unreliable service station is subject to working breakdowns even when no customers are in the system. The matrix-analytic method is used to compute the steady-state probabilities for the number of customers, the rate matrix and the stability condition of the system. A single-objective model for cost and a bi-objective model for cost and expected waiting time are derived for the system to fit practical applications. The particle swarm optimisation algorithm is implemented to find the optimal combinations of parameters in pursuit of minimum cost. Two different approaches to identifying the Pareto optimal set are used and compared: the epsilon-constraint method and the non-dominated sorting genetic algorithm. The comparison supports using the traditional epsilon-constraint method, which is computationally faster and permits a direct sensitivity analysis of the solution under constraint or parameter perturbation. The Pareto front and the set of non-dominated solutions are obtained and illustrated. Decision makers can use these to improve their decision-making quality.
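A minimal epsilon-constraint sketch on a toy M/M/1-style trade-off (the paper's actual model is an unreliable-server queue solved by the matrix-analytic method, which is not reproduced here): minimise cost subject to an upper bound on expected waiting time, then sweep the bound to trace the Pareto front.

```python
import numpy as np
from scipy.optimize import minimize

# Decision variable: service rate mu. Cost grows with mu, queue wait falls
# with mu. The arrival rate and the cost coefficient are assumed values.
lam = 2.0
cost = lambda mu: 5.0 * mu                    # f1: operating cost
wait = lambda mu: lam / (mu * (mu - lam))     # f2: M/M/1 expected queue wait

pareto = []
for eps in np.linspace(0.05, 1.0, 10):        # sweep the waiting-time bound
    res = minimize(lambda m: cost(m[0]), x0=[4.0],
                   constraints=[{"type": "ineq",
                                 "fun": lambda m: eps - wait(m[0])}],
                   bounds=[(lam + 1e-3, 20.0)])
    pareto.append((res.x[0], cost(res.x[0]), wait(res.x[0])))

for mu, f1, f2 in pareto:
    print(f"mu={mu:6.3f}  cost={f1:7.3f}  Wq={f2:6.3f}")
```

Each epsilon value yields one non-dominated (cost, waiting-time) pair, which is what makes the method amenable to direct sensitivity analysis under constraint perturbation.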
Tsipa, Argyro; Koutinas, Michalis; Usaku, Chonlatep; Mantalaris, Athanasios
2018-05-02
Currently, design and optimisation of biotechnological bioprocesses is performed either through exhaustive experimentation and/or with the use of empirical, unstructured growth kinetics models. Whereas elaborate systems biology approaches have recently been explored, mixed-substrate utilisation is predominantly ignored despite its significance in enhancing bioprocess performance. Herein, bioprocess optimisation for an industrially relevant bioremediation process involving a mixture of highly toxic substrates, m-xylene and toluene, was achieved through application of a novel experimental-modelling gene regulatory network - growth kinetic (GRN-GK) hybrid framework. The GRN model described the TOL and ortho-cleavage pathways in Pseudomonas putida mt-2 and captured the transcriptional kinetics expression patterns of the promoters. The GRN model informed the formulation of the growth kinetics model, replacing the empirical and unstructured Monod kinetics. The framework's predictive capability and potential as a systematic optimal bioprocess design tool were demonstrated by effectively predicting bioprocess performance in agreement with experimental values, whereas four commonly used models deviated significantly from the experimental values. Significantly, a fed-batch biodegradation process was designed and optimised through model-based control of TOL Pr promoter expression, resulting in 61% and 60% enhanced pollutant removal and biomass formation, respectively, compared to the batch process. This provides strong evidence of model-based bioprocess optimisation at the gene level, rendering the GRN-GK framework a novel and applicable approach to optimal bioprocess design. Finally, model analysis using global sensitivity analysis (GSA) suggests an alternative, systematic approach for model-driven strain modification for synthetic biology and metabolic engineering applications. Copyright © 2018. Published by Elsevier Inc.
[The acute phase, a time which determines the outcome of a patient with a head trauma].
Jeauneaux, Olivier; Bony, Maylis; Giroud, Olivier; Chabert, Flavien; Pagnier, Daniel; Mansuy, Charlène; Quélin, Pauline; Lemperrière, Héloïse; Grodecœur, Caroline; Armonia, Cécile
2017-03-01
As soon as their prehospital care begins, patients with a serious head injury are given intensive care to offset the systemic failures observed and minimise secondary brain damage. In intensive care, monitoring is continuous and neuroprotection optimised. While the prognosis of the patient remains uncertain, their family are included and involved in their global care. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Savini, H; Maugey, N; Aletti, M; Facon, A; Koulibaly, F; Cotte, J; Janvier, F; Cordier, P Y; Dampierre, H; Ramade, S; Foissaud, V; Granier, H; Sagui, E; Carmoi, T
2016-10-01
The Healthcare Workers Treatment Center of Conakry, Guinea, was inaugurated in January 2015. It is dedicated to the diagnosis and treatment of healthcare workers with probable or confirmed Ebola virus disease. It is staffed by the French Army Health Service. The French military team must reconcile medical practice with ethno-cultural imperatives in order to optimise patient adherence during hospitalisation.
Lewczuk, Piotr; Riederer, Peter; O'Bryant, Sid E; Verbeek, Marcel M; Dubois, Bruno; Visser, Pieter Jelle; Jellinger, Kurt A; Engelborghs, Sebastiaan; Ramirez, Alfredo; Parnetti, Lucilla; Jack, Clifford R; Teunissen, Charlotte E; Hampel, Harald; Lleó, Alberto; Jessen, Frank; Glodzik, Lidia; de Leon, Mony J; Fagan, Anne M; Molinuevo, José Luis; Jansen, Willemijn J; Winblad, Bengt; Shaw, Leslie M; Andreasson, Ulf; Otto, Markus; Mollenhauer, Brit; Wiltfang, Jens; Turner, Martin R; Zerr, Inga; Handels, Ron; Thompson, Alexander G; Johansson, Gunilla; Ermann, Natalia; Trojanowski, John Q; Karaca, Ilker; Wagner, Holger; Oeckl, Patrick; van Waalwijk van Doorn, Linda; Bjerke, Maria; Kapogiannis, Dimitrios; Kuiperij, H Bea; Farotti, Lucia; Li, Yi; Gordon, Brian A; Epelbaum, Stéphane; Vos, Stephanie J B; Klijn, Catharina J M; Van Nostrand, William E; Minguillon, Carolina; Schmitz, Matthias; Gallo, Carla; Lopez Mato, Andrea; Thibaut, Florence; Lista, Simone; Alcolea, Daniel; Zetterberg, Henrik; Blennow, Kaj; Kornhuber, Johannes
2018-06-01
In the 12 years since the publication of the first Consensus Paper of the WFSBP on biomarkers of neurodegenerative dementias, enormous advancement has taken place in the field, and the Task Force takes now the opportunity to extend and update the original paper. New concepts of Alzheimer's disease (AD) and the conceptual interactions between AD and dementia due to AD were developed, resulting in two sets for diagnostic/research criteria. Procedures for pre-analytical sample handling, biobanking, analyses and post-analytical interpretation of the results were intensively studied and optimised. A global quality control project was introduced to evaluate and monitor the inter-centre variability in measurements with the goal of harmonisation of results. Contexts of use and how to approach candidate biomarkers in biological specimens other than cerebrospinal fluid (CSF), e.g. blood, were precisely defined. Important development was achieved in neuroimaging techniques, including studies comparing amyloid-β positron emission tomography results to fluid-based modalities. Similarly, development in research laboratory technologies, such as ultra-sensitive methods, raises our hopes to further improve analytical and diagnostic accuracy of classic and novel candidate biomarkers. Synergistically, advancement in clinical trials of anti-dementia therapies energises and motivates the efforts to find and optimise the most reliable early diagnostic modalities. Finally, the first studies were published addressing the potential of cost-effectiveness of the biomarkers-based diagnosis of neurodegenerative disorders.
Is globalization healthy: a statistical indicator analysis of the impacts of globalization on health
Martens, Pim; Akin, Su-Mia; Huynen, Maud; Raza, Mohsin
2010-09-17
It is clear that globalization is something more than a purely economic phenomenon manifesting itself on a global scale. Among the visible manifestations of globalization are the greater international movement of goods and services, financial capital, information and people. In addition, there are technological developments, more transboundary cultural exchanges, facilitated by the freer trade of more differentiated products as well as by tourism and immigration, changes in the political landscape and ecological consequences. In this paper, we link the Maastricht Globalization Index with health indicators to analyse if more globalized countries are doing better in terms of infant mortality rate, under-five mortality rate, and adult mortality rate. The results indicate a positive association between a high level of globalization and low mortality rates. In view of the arguments that globalization provides winners and losers, and might be seen as a disequalizing process, we should perhaps be careful in interpreting the observed positive association as simple evidence that globalization is mostly good for our health. It is our hope that a further analysis of health impacts of globalization may help in adjusting and optimising the process of globalization on every level in the direction of a sustainable and healthy development for all. PMID:20849605
NASA Astrophysics Data System (ADS)
Li, Guiqiang; Zhao, Xudong; Jin, Yi; Chen, Xiao; Ji, Jie; Shittu, Samson
2018-06-01
Geometrical optimisation is a valuable way to improve the efficiency of a thermoelectric element (TE). In a hybrid photovoltaic-thermoelectric (PV-TE) system, the photovoltaic (PV) and thermoelectric (TE) components have a relatively complex relationship; their individual effects mean that geometrical optimisation of the TE element alone may not be sufficient to optimise the entire PV-TE hybrid system. In this paper, we introduce a parametric optimisation of the geometry of the thermoelectric element footprint for a PV-TE system. A uni-couple TE model was built for the PV-TE using the finite element method and temperature-dependent thermoelectric material properties. Two types of PV cells were investigated, and the performance of the PV-TE with different lengths of TE elements and different footprint areas was analysed. The outcome showed that, regardless of the TE element length and footprint area, the maximum power output occurs when An/Ap = 1. This finding is useful, as it provides a reference whenever PV-TE optimisation is investigated.
3D printed fluidics with embedded analytic functionality for automated reaction optimisation
Capel, Andrew J; Wright, Andrew; Harding, Matthew J; Weaver, George W; Li, Yuqi; Harris, Russell A; Edmondson, Steve; Goodridge, Ruth D
2017-01-01
Additive manufacturing or ‘3D printing’ is being developed as a novel manufacturing process for the production of bespoke micro- and milliscale fluidic devices. When coupled with online monitoring and optimisation software, this offers an advanced, customised method for performing automated chemical synthesis. This paper reports the use of two additive manufacturing processes, stereolithography and selective laser melting, to create multifunctional fluidic devices with embedded reaction monitoring capability. The selectively laser melted parts are the first published examples of multifunctional 3D printed metal fluidic devices. These devices allow high temperature and pressure chemistry to be performed in solvent systems destructive to the majority of devices manufactured via stereolithography, polymer jetting and fused deposition modelling processes previously utilised for this application. These devices were integrated with commercially available flow chemistry, chromatographic and spectroscopic analysis equipment, allowing automated online and inline optimisation of the reaction medium. This set-up allowed the optimisation of two reactions, a ketone functional group interconversion and a fused polycyclic heterocycle formation, via spectroscopic and chromatographic analysis. PMID:28228852
NASA Astrophysics Data System (ADS)
Aungkulanon, P.; Luangpaiboon, P.
2010-10-01
Nowadays, engineering problem systems are large and complicated. Effective finite sequences of instructions for solving these problems can be categorised into optimisation and meta-heuristic algorithms. Although the best decision variable levels cannot always be determined exactly from the sets of available alternatives, meta-heuristics offer an alternative to experience-based techniques that rapidly help in problem solving, learning and discovery, in the hope of obtaining a more efficient or more robust procedure. All meta-heuristics provide auxiliary procedures in terms of their own toolbox functions. It has been shown that the effectiveness of meta-heuristics depends almost exclusively on these auxiliary functions. In fact, an auxiliary procedure from one meta-heuristic can be implemented in others. The well-known meta-heuristics of harmony search (HSA) and shuffled frog-leaping algorithms (SFLA) are compared with their hybridisations. HSA produces a near-optimal solution by mimicking the perfect state of harmony reached in the improvisation process of musicians. The SFLA, a population-based meta-heuristic, is a cooperative search metaphor inspired by natural memetics that includes elements of local search and global information exchange. This study presents solution procedures for constrained and unconstrained problems with single- and multi-peak surfaces, including a curved ridge surface. Both meta-heuristics are modified via the variable neighbourhood search method (VNSM) philosophy, including a modified simplex method (MSM). The basic idea is to change neighbourhoods while searching for a better solution: the hybridisations proceed by a descent method to a local minimum and then explore, systematically or at random, increasingly distant neighbourhoods of this local solution. The results show that the variant of HSA with VNSM and MSM performs better in terms of the mean and variance of design points and yields.
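To make the harmony search mechanics above concrete, the following is a minimal Python sketch of the basic HSA improvisation loop (memory consideration, pitch adjustment and random selection). It is an illustrative baseline only, not the hybrid HSA-VNSM-MSM of the study; the parameter names (hmcr, par, bw) and the sphere test function are assumptions for the example.

import numpy as np

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.05, iters=2000, seed=0):
    """Minimise f over box bounds with a basic harmony search (illustrative only)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T      # bounds: [(lo, hi), ...] per variable
    dim = len(lo)
    hm = rng.uniform(lo, hi, size=(hms, dim))     # harmony memory
    cost = np.apply_along_axis(f, 1, hm)
    for _ in range(iters):
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:               # memory consideration
                new[j] = hm[rng.integers(hms), j]
                if rng.random() < par:            # pitch adjustment
                    new[j] += bw * (hi[j] - lo[j]) * rng.uniform(-1, 1)
            else:                                 # random selection
                new[j] = rng.uniform(lo[j], hi[j])
        new = np.clip(new, lo, hi)
        c = f(new)
        worst = np.argmax(cost)
        if c < cost[worst]:                       # replace the worst harmony
            hm[worst], cost[worst] = new, c
    best = np.argmin(cost)
    return hm[best], cost[best]

# usage: minimise the sphere function in 5 dimensions
x, fx = harmony_search(lambda v: float(np.sum(v ** 2)), [(-5, 5)] * 5)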
Sun, Jingcan; Yu, Bin; Curran, Philip; Liu, Shao-Quan
2012-12-15
Coconut cream and fusel oil, two low-cost natural substances, were used as starting materials for the biosynthesis of flavour-active octanoic acid esters (ethyl-, butyl-, isobutyl- and (iso)amyl octanoate) using lipase Palatase as the biocatalyst. The Taguchi design method was used for the first time to optimise the biosynthesis of esters by a lipase in an aqueous system of coconut cream and fusel oil. Temperature, time and enzyme amount were found to be statistically significant factors and the optimal conditions were determined to be as follows: temperature 30°C, fusel oil concentration 9% (v/w), reaction time 24 h, pH 6.2 and enzyme amount 0.26 g. Under the optimised conditions, a yield of 14.25 mg/g (based on cream weight) and a signal-to-noise (S/N) ratio of 23.07 dB were obtained. The results indicate that the Taguchi design method is an efficient and systematic approach to the optimisation of lipase-catalysed biological processes.
Moss and peat hydraulic properties are optimised to maximise peatland water use efficiency
NASA Astrophysics Data System (ADS)
Kettridge, Nicholas; Tilak, Amey; Devito, Kevin; Petrone, Rich; Mendoza, Carl; Waddington, Mike
2016-04-01
Peatland ecosystems are globally important carbon and terrestrial surface water stores that have formed over millennia. These ecosystems have likely optimised their ecohydrological function over the long-term development of their soil hydraulic properties. Through a theoretical ecosystem approach, applying hydrological modelling integrated with known ecological thresholds and concepts, the optimisation of peat hydraulic properties is examined to determine which of the following conditions peatland ecosystems target during this development: i) maximise carbon accumulation, ii) maximise water storage, or iii) balance carbon profit across hydrological disturbances. Saturated hydraulic conductivity (Ks) and the empirical van Genuchten water retention parameter α are shown to provide a first order control on simulated water tensions. Across parameter space, peat profiles with hypothetical combinations of Ks and α show a strong binary tendency towards targeting either water or carbon storage. Actual hydraulic properties from five northern peatlands fall at the interface between these goals, balancing the competing demands of carbon accumulation and water storage. We argue that peat hydraulic properties are thus optimised to maximise water use efficiency and that this optimisation occurs over a centennial to millennial timescale as the peatland develops. This provides a new conceptual framework to characterise peat hydraulic properties across climate zones and between a range of different disturbances, which can be used to provide benchmarks for peatland design and reclamation.
NASA Astrophysics Data System (ADS)
Milic, Vladimir; Kasac, Josip; Novakovic, Branko
2015-10-01
This paper is concerned with L2-gain optimisation of input-affine nonlinear systems controlled by an analytic fuzzy logic system. Unlike conventional fuzzy-based strategies, the non-conventional analytic fuzzy control method does not require an explicit fuzzy rule base. As the first contribution of this paper, we prove, by using the Stone-Weierstrass theorem, that the proposed fuzzy system without a rule base is a universal approximator. The second contribution of this paper is an algorithm for solving a finite-horizon minimax problem for L2-gain optimisation. The proposed algorithm combines a recursive chain rule for first- and second-order derivatives, Newton's method, the multi-step Adams method and automatic differentiation. Finally, the results of this paper are evaluated on a second-order nonlinear system.
Optimal control of LQR for discrete time-varying systems with input delays
NASA Astrophysics Data System (ADS)
Yin, Yue-Zhu; Yang, Zhong-Lian; Yin, Zhi-Xiang; Xu, Feng
2018-04-01
In this work, we consider the optimal control problem of linear quadratic regulation for discrete time-varying systems with single input and multiple input delays. An innovative and simple method to derive the optimal controller is given. The studied problem is first equivalently converted into a problem subject to a constraint condition. Then, using the established duality, the problem is transformed into a static mathematical optimisation problem without input delays. The optimal control input minimising the performance index function is derived by solving this optimisation problem with two methods. A numerical simulation example is carried out and its results show that both approaches are feasible and very effective.
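The delay-free problem that the delayed formulation is reduced to can be solved by the standard backward Riccati recursion for finite-horizon time-varying LQR, sketched below in Python. This is the textbook baseline, not the authors' delay-conversion derivation; the system matrices and weights are placeholders.

import numpy as np

def lqr_finite_horizon(A_seq, B_seq, Q, R, QN):
    """Backward Riccati recursion for delay-free time-varying LQR."""
    P = QN
    gains = []
    for k in reversed(range(len(A_seq))):
        A, B = A_seq[k], B_seq[k]
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # u_k = -K x_k
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]

# usage with a placeholder double-integrator model over 50 steps
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
K = lqr_finite_horizon([A] * 50, [B] * 50, np.eye(2), np.eye(1), np.eye(2))
# apply as u_k = -K[k] @ x_k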
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rozhdestvensky, Yu V
The possibility is studied for obtaining intense cold atomic beams by using the Renyi entropy to optimise the laser cooling process. It is shown that, in the case of a Gaussian velocity distribution of atoms, the Renyi entropy coincides with the density of particles in the phase space. The optimisation procedure for cooling atoms by resonance optical radiation is described, which is based on the thermodynamic law of increasing Renyi entropy in time. Our method is compared with known methods for increasing the laser cooling efficiency, such as the tuning of the laser frequency in time and a change of the atomic transition frequency in an inhomogeneous transverse field of a magnetic solenoid.
Optimal control of Formula One car energy recovery systems
NASA Astrophysics Data System (ADS)
Limebeer, D. J. N.; Perantoni, G.; Rao, A. V.
2014-10-01
The utility of orthogonal collocation methods in the solution of optimal control problems relating to Formula One racing is demonstrated. These methods can be used to optimise driver controls such as the steering, braking and throttle usage, and to optimise vehicle parameters such as the aerodynamic downforce and mass distributions. Of particular interest is the optimal usage of energy recovery systems (ERSs). Contemporary kinetic energy recovery systems are studied and compared with future hybrid kinetic and thermal/heat ERSs known as ERS-K and ERS-H, respectively. It is demonstrated that these systems, when properly controlled, can produce contemporary lap times using approximately two-thirds of the fuel required by earlier generation (2013 and prior) vehicles.
A centre-free approach for resource allocation with lower bounds
NASA Astrophysics Data System (ADS)
Obando, Germán; Quijano, Nicanor; Rakoto-Ravalontsalama, Naly
2017-09-01
Since the complexity and scale of systems are continuously increasing, there is a growing interest in developing distributed algorithms that are capable of addressing information constraints, especially for solving optimisation and decision-making problems. In this paper, we propose a novel method to solve distributed resource allocation problems that include lower bound constraints. The optimisation process is carried out by a set of agents that use a communication network to coordinate their decisions. Convergence and optimality of the method are guaranteed under some mild assumptions related to the convexity of the problem and the connectivity of the underlying graph. Finally, we compare our approach with other techniques reported in the literature, and we present some engineering applications.
NASA Astrophysics Data System (ADS)
Fu, Shihua; Li, Haitao; Zhao, Guodong
2018-05-01
This paper investigates the evolutionary dynamics and strategy optimisation of a class of networked evolutionary games whose strategy updating rules incorporate a 'bankruptcy' mechanism, where a player goes bankrupt after a run of consecutive low profits from the game. First, using the semi-tensor product of matrices, the evolutionary dynamics of this class of games is expressed as a higher-order logical dynamic system and then converted into its algebraic form, based on which the evolutionary dynamics of the given games can be analysed. Second, the strategy optimisation problem is investigated, and free-type control sequences are designed to maximise the total payoff of the whole game. Finally, an illustrative example is given to show that the new results are effective.
NASA Astrophysics Data System (ADS)
Eriksen, Janus J.
2017-09-01
It is demonstrated how the non-proprietary OpenACC standard of compiler directives may be used to compactly and efficiently accelerate the rate-determining steps of two of the most routinely applied many-body methods of electronic structure theory, namely the second-order Møller-Plesset (MP2) model in its resolution-of-the-identity approximated form and the (T) triples correction to the coupled cluster singles and doubles model (CCSD(T)). By means of compute directives as well as the use of optimised device math libraries, the operations involved in the energy kernels have been ported to graphics processing unit (GPU) accelerators, and the associated data transfers correspondingly optimised to such a degree that the final implementations (using double- and/or single-precision arithmetic) are capable of scaling to as large systems as allowed for by the capacity of the host central processing unit (CPU) main memory. The performance of the hybrid CPU/GPU implementations is assessed through calculations on test systems of alanine amino acid chains using one-electron basis sets of increasing size (ranging from double- to pentuple-ζ quality). For all but the smallest problem sizes of the present study, the optimised accelerated codes (using a single multi-core CPU host node in conjunction with six GPUs) are found to be capable of reducing the total time-to-solution by at least an order of magnitude over optimised, OpenMP-threaded CPU-only reference implementations.
Bisele, Maria; Bencsik, Martin; Lewis, Martin G C; Barnett, Cleveland T
2017-01-01
Assessment methods in human locomotion often involve the description of normalised graphical profiles and/or the extraction of discrete variables. Whilst useful, these approaches may not represent the full complexity of gait data. Multivariate statistical methods, such as Principal Component Analysis (PCA) and Discriminant Function Analysis (DFA), have been adopted since they have the potential to overcome these data handling issues. The aim of the current study was to develop and optimise a specific machine learning algorithm for processing human locomotion data. Twenty participants ran at a self-selected speed across a 15 m runway in barefoot and shod conditions. Ground reaction forces (BW) and kinematics were measured at 1000 Hz and 100 Hz, respectively, from which joint angles (°), joint moments (N.m.kg-1) and joint powers (W.kg-1) for the hip, knee and ankle joints were calculated in all three anatomical planes. Using PCA and DFA, power spectra of the kinematic and kinetic variables were used as a training database for the development of a machine learning algorithm. All possible combinations of 10 out of 20 participants were explored to find the iteration of individuals that would optimise the machine learning algorithm. The results showed that the algorithm was able to successfully predict whether a participant ran shod or barefoot in 93.5% of cases. To the authors' knowledge, this is the first study to optimise the development of a machine learning algorithm.
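The PCA-plus-DFA pipeline described above can be approximated in a few lines of Python using scikit-learn, with linear discriminant analysis standing in for classical DFA. The data below are synthetic placeholders for the power-spectrum features and shod/barefoot labels, so the reported score is illustrative only.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# X: hypothetical (n_trials, n_spectral_features) power spectra; y: 0 = shod, 1 = barefoot
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 200))
y = rng.integers(0, 2, size=40)

# dimensionality reduction (PCA) followed by a discriminant classifier (DFA analogue)
clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
print(cross_val_score(clf, X, y, cv=5).mean())   # classification-rate analogue of the 93.5%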
NASA Astrophysics Data System (ADS)
Barberis, Stefano; Carminati, Leonardo; Leveraro, Franco; Mazza, Simone Michele; Perini, Laura; Perlz, Francesco; Rebatto, David; Tura, Ruggero; Vaccarossa, Luca; Villaplana, Miguel
2015-12-01
We present the approach of the University of Milan Physics Department and the local unit of INFN to allow and encourage the sharing among different research areas of computing, storage and networking resources (the largest ones being those composing the Milan WLCG Tier-2 centre and tailored to the needs of the ATLAS experiment). Computing resources are organised as independent HTCondor pools, with a global master in charge of monitoring them and optimising their usage. The configuration has to provide satisfactory throughput for both serial and parallel (multicore, MPI) jobs. A combination of local, remote and cloud storage options are available. The experience of users from different research areas operating on this shared infrastructure is discussed. The promising direction of improving scientific computing throughput by federating access to distributed computing and storage also seems to fit very well with the objectives listed in the European Horizon 2020 framework for research and development.
Adaptive neuro fuzzy inference system-based power estimation method for CMOS VLSI circuits
NASA Astrophysics Data System (ADS)
Vellingiri, Govindaraj; Jayabalan, Ramesh
2018-03-01
Recent advancements in very large scale integration (VLSI) technologies have made it feasible to integrate millions of transistors on a single chip. This greatly increases circuit complexity, and hence there is a growing need for less tedious and low-cost power estimation techniques. The proposed work employs a Back-Propagation Neural Network (BPNN) and an Adaptive Neuro Fuzzy Inference System (ANFIS), which are capable of estimating the power precisely for complementary metal oxide semiconductor (CMOS) VLSI circuits, without requiring any knowledge of circuit structure and interconnections. The application of ANFIS to power estimation is relatively new. Power estimation using ANFIS is carried out by creating initial FIS models using hybrid optimisation and back-propagation (BP) techniques employing constant and linear methods. It is inferred that ANFIS with the hybrid optimisation technique employing the linear method produces better results, with testing error varying from 0% to 0.86% when compared to BPNN, as it takes the initial fuzzy model and tunes it by means of a hybrid technique combining gradient-descent BP and least-mean-squares optimisation algorithms. ANFIS is best suited for the power estimation application, with a low RMSE of 0.0002075 and a high coefficient of determination (R) of 0.99961.
Pardo, O; Yusà, V; Coscollà, C; León, N; Pastor, A
2007-07-01
A selective and sensitive procedure has been developed and validated for the determination of acrylamide in difficult matrices, such as coffee and chocolate. The proposed method includes pressurised fluid extraction (PFE) with acetonitrile, florisil clean-up purification inside the PFE extraction cell and detection by liquid chromatography (LC) coupled to atmospheric pressure ionisation in positive mode tandem mass spectrometry (APCI-MS-MS). Ionisation sources (atmospheric pressure chemical ionisation (APCI), atmospheric pressure photoionisation (APPI) and the combined APCI/APPI) and clean-up procedures were compared to improve the analytical signal. The main parameters affecting the performance of the different ionisation sources were first optimised using statistical design of experiments (DOE). PFE parameters were also optimised by DOE. For quantitation, an isotope dilution approach was used. The limit of quantification (LOQ) of the method was 1 µg kg⁻¹ for coffee and 0.6 µg kg⁻¹ for chocolate. Recoveries ranged from 81% to 105% in coffee and from 87% to 102% in chocolate. The accuracy was evaluated using the coffee reference test material FAPAS T3008. Using the optimised method, 20 coffee and 15 chocolate samples collected from Valencian (Spain) supermarkets were investigated for acrylamide, yielding median levels of 146 µg kg⁻¹ in coffee and 102 µg kg⁻¹ in chocolate.
Improving Vector Evaluated Particle Swarm Optimisation by incorporating nondominated solutions.
Lim, Kian Sheng; Ibrahim, Zuwairie; Buyamin, Salinda; Ahmad, Anita; Naim, Faradila; Ghazali, Kamarul Hawari; Mokhtar, Norrima
2013-01-01
The Vector Evaluated Particle Swarm Optimisation algorithm is widely used to solve multiobjective optimisation problems. This algorithm optimises one objective using a swarm of particles whose movements are guided by the best solution found by another swarm. However, the best solution of a swarm is only updated when a newly generated solution has better fitness than the best solution at the objective function optimised by that swarm, yielding poor solutions for multiobjective optimisation problems. Thus, an improved Vector Evaluated Particle Swarm Optimisation algorithm is introduced by incorporating the nondominated solutions as the guidance for a swarm rather than using the best solution from another swarm. In this paper, the performance of the improved Vector Evaluated Particle Swarm Optimisation algorithm is investigated using performance measures such as the number of nondominated solutions found, the generational distance, the spread, and the hypervolume. The results suggest that the improved Vector Evaluated Particle Swarm Optimisation algorithm has impressive performance compared with the conventional Vector Evaluated Particle Swarm Optimisation algorithm.
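The core ingredient of the improved guidance scheme, maintaining a set of nondominated solutions, reduces to a Pareto-dominance filter such as the Python sketch below. This is a generic minimisation-convention implementation, not the authors' code.

import numpy as np

def nondominated(F):
    """Return a boolean mask of Pareto-nondominated rows of F (minimisation)."""
    F = np.asarray(F, dtype=float)
    mask = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        if not mask[i]:
            continue
        # j dominates i if it is no worse in all objectives and better in at least one
        dominates = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if np.any(dominates):
            mask[i] = False
    return mask

# usage: keep only the nondominated objective vectors of a swarm
archive = F[nondominated(F)] if (F := np.random.rand(50, 2)) is not None else None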
NASA Astrophysics Data System (ADS)
Tsai, Jinn-Tsong; Chou, Ping-Yi; Chou, Jyh-Horng
2015-11-01
The aim of this study is to generate vector quantisation (VQ) codebooks by integrating the principal component analysis (PCA) algorithm, the Linde-Buzo-Gray (LBG) algorithm, and evolutionary algorithms (EAs). The EAs include the genetic algorithm (GA), particle swarm optimisation (PSO), honey bee mating optimisation (HBMO), and the firefly algorithm (FF). The study provides performance comparisons between PCA-EA-LBG and PCA-LBG-EA approaches. The PCA-EA-LBG approaches comprise PCA-GA-LBG, PCA-PSO-LBG, PCA-HBMO-LBG, and PCA-FF-LBG, while the PCA-LBG-EA approaches comprise PCA-LBG, PCA-LBG-GA, PCA-LBG-PSO, PCA-LBG-HBMO, and PCA-LBG-FF. All training vectors of the test images are grouped according to PCA. The PCA-EA-LBG approaches use the vectors grouped by PCA as initial individuals, and the best solution gained by the EAs is given to LBG to discover a codebook. The PCA-LBG approach uses PCA to select vectors as initial individuals for LBG to find a codebook. The PCA-LBG-EA approaches use the final result of PCA-LBG as an initial individual for the EAs to find a codebook. The search schemes in PCA-EA-LBG first use global search and then apply local search, while those in PCA-LBG-EA first use local search and then employ global search. The results verify that PCA-EA-LBG indeed gains superior results compared to PCA-LBG-EA, because PCA-EA-LBG explores a global area to find a solution and then exploits a better one from the local area of that solution. Furthermore, the proposed PCA-EA-LBG approaches to designing VQ codebooks outperform existing approaches reported in the literature.
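For reference, the LBG stage shared by all of the compared pipelines is essentially a k-means-style alternation between nearest-codeword assignment and centroid update, as in this simplified numpy sketch (the PCA grouping and EA stages of the paper are omitted).

import numpy as np

def lbg(vectors, codebook_size, iters=50, seed=0):
    """Basic LBG codebook design on a set of training vectors (illustrative only)."""
    vectors = np.asarray(vectors, dtype=float)
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), codebook_size, replace=False)].copy()
    for _ in range(iters):
        # assign each training vector to its nearest codeword
        d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each codeword to the centroid of its assignment cell
        for k in range(codebook_size):
            cell = vectors[labels == k]
            if len(cell):
                codebook[k] = cell.mean(axis=0)
    return codebook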
Perestrelo, Rosa; Barros, António S; Rocha, Sílvia M; Câmara, José S
2011-09-15
The volatile (VOCs) and semi-volatile organic compounds (SVOCs) responsible for aroma are mainly present in the skin of grape varieties. Thus, the present investigation is directed towards the optimisation of a solvent-free methodology based on headspace solid-phase microextraction (HS-SPME) combined with gas chromatography-quadrupole mass spectrometry (GC-qMS) in order to establish the global volatile composition of the pulp and skin of Bual and Bastardo Vitis vinifera L. varieties. A detailed study of the extraction-influencing parameters was performed, and the best results, expressed as GC peak area, number of identified compounds and reproducibility, were obtained using 4 g of sample homogenised in 5 mL of ultra-pure Milli-Q water in a 20 mL glass vial with the addition of 2 g of sodium chloride (NaCl). A divinylbenzene/carboxen/polydimethylsiloxane fibre was selected for extraction at 60°C for 45 min under continuous stirring at 800 rpm. More than 100 VOCs and SVOCs were identified, including 27 monoterpenoids, 27 sesquiterpenoids, 21 carbonyl compounds, 17 alcohols (of which 2 aromatic), 10 C(13) norisoprenoids and 5 acids. The results showed that, for both grape varieties, the levels and number of volatiles in the skin were considerably higher than those observed in the pulp. According to principal component analysis (PCA) of the data, establishing the global volatile signature of a grape and the relationship between different parts of the grape (pulp and skin) may be a useful tool for winemakers when defining vinification procedures that improve the organoleptic characteristics of the corresponding wines, and may consequently contribute to economic valorisation and consumer acceptance.
3D Reconstruction of human bones based on dictionary learning.
Zhang, Binkai; Wang, Xiang; Liang, Xiao; Zheng, Jinjin
2017-11-01
An effective method for reconstructing a 3D model of human bones from computed tomography (CT) image data based on dictionary learning is proposed. In this study, the dictionary comprises the vertices of triangular meshes, and the sparse coefficient matrix indicates the connectivity information. For better reconstruction performance, we proposed a balance coefficient between the approximation and regularisation terms and a method for optimisation. Moreover, we applied a local updating strategy and a mesh-optimisation method to update the dictionary and the sparse matrix, respectively. The two updating steps are iterated alternately until the objective function converges. Thus, a reconstructed mesh could be obtained with high accuracy and regularisation. The experimental results show that the proposed method has the potential to obtain high precision and high-quality triangular meshes for rapid prototyping, medical diagnosis, and tissue engineering.
Assessing the global reach and value of a provider-facing healthcare app using large-scale analytics
Easton, George; Gillespie, Scott
2017-01-01
Background: The rapid global adoption of mobile health (mHealth) smartphone apps by healthcare providers presents challenges and opportunities in medicine. Challenges include ensuring the delivery of high-quality, up-to-date and optimised information. Opportunities include the ability to study global practice patterns, access to medical and surgical care and continuing medical education needs. Methods: We studied users of a free anaesthesia calculator app used worldwide. We combined traditional app analytics with in-app surveys to collect user demographics and feedback. Results: 31 173 subjects participated. Users were from 206 countries and represented a spectrum of healthcare provider roles. Low-income country users had greater rates of app use (p<0.001) and ascribed greater importance of the app to their practice (p<0.001). Physicians from low-income countries were more likely to adopt the app (p<0.001). The app was used primarily for paediatric patients. The app was used around the clock, peaking during times typical for first start cases. Conclusions: This mHealth app is a valuable decision support tool for global healthcare providers, particularly those in more resource-limited settings and with less training. App adoption and use may provide a mechanism for measuring longitudinal changes in access to surgical care and engaging providers in resource-limited settings. In-app surveys and app analytics provide a window into healthcare provider behaviour at a breadth and level of detail previously impossible to achieve. Given the potentially immense value of crowdsourced information, healthcare providers should be encouraged to participate in these types of studies. PMID:29082007
Warpage analysis on thin shell part using response surface methodology (RSM)
NASA Astrophysics Data System (ADS)
Zulhasif, Z.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.
2017-09-01
The optimisation of moulding parameters to reduce warpage defects is performed using Autodesk Moldflow Insight (AMI) 2012 software. The product is injection moulded using Acrylonitrile-Butadiene-Styrene (ABS) material. The analysis varies the processing parameters of melt temperature, mould temperature, packing pressure and packing time. Design of Experiments (DOE) is integrated to obtain a polynomial model using Response Surface Methodology (RSM), and the Glowworm Swarm Optimisation (GSO) method is then used to predict the best combination of parameters to minimise warpage defects and produce high-quality parts.
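A second-order RSM model of warpage can be fitted by ordinary least squares on a full quadratic design matrix, as in the Python sketch below. The coded factor levels and synthetic response are placeholders, and the GSO search over the fitted surface is omitted.

import numpy as np
from itertools import combinations_with_replacement

def rsm_design_matrix(X):
    """Full quadratic (second-order RSM) basis: intercept, x_i, and x_i * x_j terms."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i, j in combinations_with_replacement(range(k), 2)]
    return np.column_stack(cols)

# Hypothetical coded DOE settings for melt temperature, mould temperature,
# packing pressure and packing time, with a synthetic warpage response.
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(30, 4))
y = 1.0 + X @ np.array([0.5, -0.3, 0.2, 0.1]) + rng.normal(0.0, 0.05, 30)
beta, *_ = np.linalg.lstsq(rsm_design_matrix(X), y, rcond=None)
# beta holds the fitted polynomial coefficients; a swarm optimiser (GSO in the
# paper) would then search this fitted surface for the minimum-warpage settings.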
Harms and benefits from social imitation
NASA Astrophysics Data System (ADS)
Slanina, František
2001-10-01
We study the role of imitation within a model of economics with adaptive agents. The basic ingredients are those of the minority game. We add the possibility of local information exchange and imitation of a neighbour's strategy. Imitators pay a fee to the imitated. Connected groups are formed, which act as if they were single players. Coherent spatial areas of rich and poor agents result, leading to a decrease in local social tensions. The size and stability of these areas depend on the parameters of the model. Global performance, measured by the attendance volatility, is optimised at a certain value of the imitation probability. Social tensions are suppressed for large imitation probability, but because of the price paid by the imitators, the requirements of high global effectivity and low social tensions are in conflict, as are the requirements of low global and low local wealth differences.
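For readers unfamiliar with the underlying model, the following Python sketch implements the plain minority game, without the imitation, fee and neighbourhood mechanisms added in the paper, and returns the attendance volatility per agent used as the global performance measure. All parameter values are illustrative.

import numpy as np

def minority_game(n_agents=101, memory=3, n_strategies=2, steps=5000, seed=0):
    """Plain minority game: returns attendance volatility sigma^2 / N."""
    rng = np.random.default_rng(seed)
    n_hist = 2 ** memory
    # each agent holds fixed random strategies: history index -> action in {-1, +1}
    strategies = rng.choice([-1, 1], size=(n_agents, n_strategies, n_hist))
    scores = np.zeros((n_agents, n_strategies))
    history = 0
    attendance = []
    for _ in range(steps):
        best = scores.argmax(axis=1)                      # each agent plays its best strategy
        actions = strategies[np.arange(n_agents), best, history]
        A = int(actions.sum())
        attendance.append(A)
        # strategies that would have chosen the minority side gain virtual points
        scores -= strategies[:, :, history] * np.sign(A)
        bit = 1 if A < 0 else 0                           # record the winning side
        history = ((history << 1) | bit) % n_hist
    return np.var(attendance) / n_agents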
NASA Astrophysics Data System (ADS)
Huang, Guoqin; Zhang, Meiqin; Huang, Hui; Guo, Hua; Xu, Xipeng
2018-04-01
Circular sawing is an important method for the processing of natural stone. The ability to predict sawing power is important in the optimisation, monitoring and control of the sawing process. In this paper, a predictive model (PFD) of sawing power, which is based on the tangential force distribution at the sawing contact zone, was proposed, experimentally validated and modified. With regard to the influence of sawing speed on tangential force distribution, the modified PFD (MPFD) performed with high predictive accuracy across a wide range of sawing parameters, including sawing speed. The mean maximum absolute error rate was within 6.78%, and the maximum absolute error rate was within 11.7%. The practicability of predicting sawing power with the MPFD from few initial experimental samples was proved in case studies. Provided sample measurement accuracy is high, only two samples are required for a fixed sawing speed. The feasibility of applying the MPFD to optimise sawing parameters while lowering the energy consumption of the sawing system was also validated: in the case study, energy use was reduced by 28% by optimising the sawing parameters. The MPFD model can be used to predict sawing power, optimise sawing parameters and control energy consumption.
Chatzistergos, Panagiotis E; Naemi, Roozbeh; Healy, Aoife; Gerth, Peter; Chockalingam, Nachiappan
2017-08-01
Current selection of cushioning materials for therapeutic footwear and orthoses is based on empirical and anecdotal evidence. The aim of this investigation is to assess the biomechanical properties of carefully selected cushioning materials and to establish the basis for patient-specific material optimisation. For this purpose, bespoke cushioning materials with qualitatively similar mechanical behaviour but different stiffness were produced. Healthy volunteers were asked to stand and walk on materials with varying stiffness and their capacity for pressure reduction was assessed. Mechanical testing using a surrogate heel model was employed to investigate the effect of loading on optimum stiffness. Results indicated that optimising the stiffness of cushioning materials improved pressure reduction during standing and walking by at least 16 and 19% respectively. Moreover, the optimum stiffness was strongly correlated to body mass (BM) and body mass index (BMI), with stiffer materials needed in the case of people with higher BM or BMI. Mechanical testing confirmed that optimum stiffness increases with the magnitude of compressive loading. For the first time, this study provides quantitative data to support the importance of stiffness optimisation in cushioning materials and sets the basis for methods to inform optimum material selection in the clinic.
SLA-based optimisation of virtualised resource for multi-tier web applications in cloud data centres
NASA Astrophysics Data System (ADS)
Bi, Jing; Yuan, Haitao; Tie, Ming; Tan, Wei
2015-10-01
Dynamic virtualised resource allocation is the key to quality-of-service assurance for multi-tier web application services in cloud data centres. In this paper, we develop a self-management architecture of cloud data centres with a virtualisation mechanism for multi-tier web application services. Based on this architecture, we establish a flexible hybrid queueing model to determine the number of virtual machines for each tier of the virtualised application service environments. Besides, we propose a non-linear constrained optimisation problem with restrictions defined in the service level agreement. Furthermore, we develop a heuristic mixed optimisation algorithm to maximise the profit of cloud infrastructure providers while meeting performance requirements from different clients. Finally, we compare the effectiveness of our dynamic allocation strategy with two other allocation strategies. The simulation results show that the proposed resource allocation method is efficient in improving the overall performance and reducing the resource energy cost.
Hybrid real-code ant colony optimisation for constrained mechanical design
NASA Astrophysics Data System (ADS)
Pholdee, Nantiwat; Bureerat, Sujin
2016-01-01
This paper proposes a hybrid meta-heuristic based on integrating a local search simplex downhill (SDH) method into the search procedure of real-code ant colony optimisation (ACOR). This hybridisation leads to five hybrid algorithms where a Monte Carlo technique, a Latin hypercube sampling technique (LHS) and a translational propagation Latin hypercube design (TPLHD) algorithm are used to generate an initial population. Also, two numerical schemes for selecting an initial simplex are investigated. The original ACOR and its hybrid versions along with a variety of established meta-heuristics are implemented to solve 17 constrained test problems where a fuzzy set theory penalty function technique is used to handle design constraints. The comparative results show that the hybrid algorithms are the top performers. Using the TPLHD technique gives better results than the other sampling techniques. The hybrid optimisers are a powerful design tool for constrained mechanical design problems.
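As a rough illustration of the hybrid idea, an ACOR-style solution archive polished by a simplex downhill search, the Python sketch below uses a Latin hypercube initial population and SciPy's Nelder-Mead as the SDH stage. The Gaussian sampling around archive members is a simplification of ACOR's per-solution kernel, and all parameter values are assumptions; the constraint-handling penalty of the paper is omitted.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import qmc

def hybrid_acor_sdh(f, bounds, archive=20, draws=10, iters=50, q=0.2, seed=0):
    """Simplified ACOR-style archive sampling with a simplex downhill polish."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(lo)
    # Latin hypercube initial population (the paper also tests Monte Carlo and TPLHD)
    X = qmc.scale(qmc.LatinHypercube(d=dim, seed=seed).random(archive), lo, hi)
    F = np.apply_along_axis(f, 1, X)
    for _ in range(iters):
        order = np.argsort(F)
        X, F = X[order], F[order]
        w = np.exp(-np.arange(archive) ** 2 / (2 * (q * archive) ** 2))
        w /= w.sum()                                  # rank-based selection weights
        sigma = X.std(axis=0) + 1e-12                 # crude stand-in for ACOR kernel widths
        for _ in range(draws):                        # construct new candidate solutions
            mean = X[rng.choice(archive, p=w)]
            x = np.clip(rng.normal(mean, sigma), lo, hi)
            fx = f(x)
            worst = np.argmax(F)
            if fx < F[worst]:
                X[worst], F[worst] = x, fx
    # simplex downhill (Nelder-Mead) refinement of the best archive member
    res = minimize(f, X[np.argmin(F)], method="Nelder-Mead")
    return res.x, res.fun

# usage: x_best, f_best = hybrid_acor_sdh(lambda v: float(np.sum(v**2)), [(-5, 5)] * 10)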
On the analysis of using 3-coil wireless power transfer system in retinal prosthesis.
Bai, Shun; Skafidas, Stan
2014-01-01
Designing a wireless power transmission system (WPTS) using inductive coupling has been investigated extensively in the last decade. Depending on the configuration of the coupling system, various design methods have been proposed to optimise the power transmission efficiency, based on tuning circuitry, quality-factor optimisation and geometrical configuration. Recently, a 3-coil WPTS was introduced in retinal prostheses to overcome the low power transfer efficiency caused by a low coupling coefficient. Here we present a method to analyse this 3-coil WPTS using the S-parameters to directly obtain the maximum achievable power transfer efficiency. Through electromagnetic simulation, we raise the question of the conditions under which a 3-coil WPTS actually improves the powering of retinal prostheses.
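One standard way to extract a maximum achievable power-transfer figure from simulated two-port S-parameters is the maximum available gain; whether this is exactly the figure used by the authors is an assumption. A Python sketch:

import numpy as np

def max_power_gain(S):
    """Maximum available gain of a 2x2 S-parameter matrix (requires K > 1)."""
    S11, S12, S21, S22 = S[0, 0], S[0, 1], S[1, 0], S[1, 1]
    delta = S11 * S22 - S12 * S21
    K = (1 - abs(S11) ** 2 - abs(S22) ** 2 + abs(delta) ** 2) / (2 * abs(S12 * S21))
    return abs(S21 / S12) * (K - np.sqrt(K ** 2 - 1))

# S = np.array([[S11, S12], [S21, S22]]) taken from an EM solver at the operating frequency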
Gigliarelli, Giulia; Pagiotti, Rita; Persia, Diana; Marcotullio, Maria Carla
2017-01-01
Studies were made to increase the yield of piperine extraction from fruits of Piper longum using the Naviglio Extractor®, a solid-liquid dynamic extractor (SLDE). The effects of the sample-to-solvent ratio (w/v) were investigated and optimised for the best method. The maximum yield of piperine (317.7 mg/g) from P. longum fruits was obtained in the SLDE 1:50 ethanol extract. Extraction yields of piperine obtained by Soxhlet extraction, decoction (International Organization for Standardization method) and conventional maceration were found to be 233.7, 231.8 and 143.6 mg/g, respectively. The results of the present study indicate that the Naviglio Extractor® is an effective technique for the extraction of piperine from long pepper.
Optimisation of solar synoptic observations
NASA Astrophysics Data System (ADS)
Klvaña, Miroslav; Sobotka, Michal; Švanda, Michal
2012-09-01
The development of instrumental and computer technologies is connected with steadily increasing needs for archiving large data volumes. The current trend to meet this requirement includes data compression and growth of storage capacities. This approach, however, has technical and practical limits. A further reduction of the archived data volume can be achieved by means of an optimisation of the archiving that consists in data selection without losing useful information. We describe a method of optimised archiving of solar images, based on the selection of images that contain new information. The new information content is evaluated by analysing the changes detected in the images. We present characteristics of different kinds of image changes and divide them into fictitious changes, which have a disturbing effect, and real changes, which provide new information. In block diagrams describing the selection and archiving, we demonstrate the influence of clouds, the recording of images during an active event on the Sun (including a period before the event onset), and the archiving of the long-term history of solar activity. The described optimisation technique is not suitable for helioseismology, because it does not conserve a uniform time step in the archived sequence and removes the information about solar oscillations. In the case of long-term synoptic observations, optimised archiving can save a large amount of storage capacity. The actual saving will depend on the setting of the change-detection sensitivity and on the capability to exclude fictitious changes.
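A minimal version of the selection idea, archiving a frame only when it differs sufficiently from the last archived frame, can be sketched in a few lines of Python. The mean-absolute-difference measure and threshold are placeholder choices, not the paper's change detector, and cloud rejection would sit upstream of this step.

import numpy as np

def is_new(prev, curr, threshold=0.02):
    """Flag curr as containing new information if it differs enough from prev."""
    diff = np.mean(np.abs(curr.astype(float) - prev.astype(float))) / 255.0
    return diff > threshold

def select_for_archive(frames, threshold=0.02):
    """Keep a frame only when it changes sufficiently w.r.t. the last kept frame."""
    kept = [frames[0]]
    for frame in frames[1:]:
        if is_new(kept[-1], frame, threshold):
            kept.append(frame)
    return kept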
Game Methodology for Design Methods and Tools Selection
ERIC Educational Resources Information Center
Ahmad, Rafiq; Lahonde, Nathalie; Omhover, Jean-françois
2014-01-01
Design process optimisation and intelligence are the key words of today's scientific community. A proliferation of methods has made design a convoluted area. Designers are usually afraid of selecting one method/tool over another and even expert designers may not necessarily know which method is the best to use in which circumstances. This…
Illias, Hazlee Azil; Zhao Liang, Wee
2018-01-01
Early detection of power transformer faults is important because it can reduce the maintenance cost of the transformer and ensure continuous electricity supply in power systems. The Dissolved Gas Analysis (DGA) technique is commonly used to identify oil-filled power transformer fault types, but the utilisation of artificial intelligence methods combined with optimisation methods has shown convincing results. In this work, a hybrid support vector machine (SVM) with a modified evolutionary particle swarm optimisation (EPSO) algorithm was proposed to determine the transformer fault type. The superiority of the modified PSO technique with SVM was evaluated by comparing the results with the actual fault diagnosis, unoptimised SVM and previously reported works. Data reduction was also applied using stepwise regression prior to the training process of the SVM to reduce the training time. It was found that the proposed hybrid SVM-Modified EPSO (MEPSO)-Time Varying Acceleration Coefficient (TVAC) technique results in the highest percentage of correctly identified faults in a power transformer compared to other PSO algorithms. Thus, the proposed technique can be one of the potential solutions to identify the transformer fault type based on DGA data on site.
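A stripped-down version of the approach, a plain PSO (without the EPSO modification or TVAC coefficients) tuning the C and gamma hyperparameters of an RBF SVM by cross-validation, might look like the following Python sketch, where X and y stand for a hypothetical DGA feature matrix and fault-type labels.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def pso_svm(X, y, swarm=10, iters=20, seed=0):
    """Tune (log10 C, log10 gamma) of an RBF SVM with a basic PSO (illustrative only)."""
    rng = np.random.default_rng(seed)
    low, high = np.array([-1.0, -4.0]), np.array([3.0, 0.0])   # search box in log space
    pos = rng.uniform(low, high, size=(swarm, 2))
    vel = np.zeros_like(pos)

    def fitness(p):
        clf = SVC(C=10 ** p[0], gamma=10 ** p[1])
        return cross_val_score(clf, X, y, cv=3).mean()

    pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
    g = pbest[pbest_f.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, swarm, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        pos = np.clip(pos + vel, low, high)
        f = np.array([fitness(p) for p in pos])
        better = f > pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        g = pbest[pbest_f.argmax()].copy()
    return 10 ** g[0], 10 ** g[1]   # best C and gamma found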
Optimisation of active suspension control inputs for improved vehicle ride performance
NASA Astrophysics Data System (ADS)
Čorić, Mirko; Deur, Joško; Xu, Li; Tseng, H. Eric; Hrovat, Davor
2016-07-01
A collocation-type control variable optimisation method is used in the paper to analyse to what extent a fully active suspension (FAS) can improve vehicle ride comfort while preserving wheel holding ability. The method is first applied to a cosine-shaped bump road disturbance of different heights, for both quarter-car and full 10 degree-of-freedom vehicle models. A nonlinear anti-wheel-hop constraint is considered, and the influence of the bump preview time period is analysed. The analysis is then extended to the case of a square- or cosine-shaped pothole of different lengths, using the quarter-car model. In this case, the cost function is extended with FAS energy consumption and wheel damage resilience costs. The FAS action is found to be such as to make the wheel hop over the pothole, in order to avoid or minimise the damage at the pothole trailing edge. In the case of a long pothole, when the FAS cannot provide the wheel hop, the wheel travels over the pothole bottom and then hops over the pothole trailing edge. The numerical optimisation results are accompanied by a simplified algebraic analysis.
NASA Astrophysics Data System (ADS)
Muller, Wayne; Scheuermann, Alexander
2016-04-01
Measuring the electrical permittivity of civil engineering materials is important for a range of ground penetrating radar (GPR) and pavement moisture measurement applications. Compacted unbound granular (UBG) pavement materials present a number of preparation and measurement challenges using conventional characterisation techniques. As an alternative to these methods, a modified free-space (MFS) characterisation approach has previously been investigated. This paper describes recent work to optimise and validate the MFS technique. The research included finite difference time domain (FDTD) modelling to better understand the nature of wave propagation within material samples and the test apparatus. This research led to improvements in the test approach and optimisation of sample sizes. The influence of antenna spacing and sample thickness on the permittivity results was investigated by a series of experiments separating antennas and measuring samples of nylon and water. Permittivity measurements of samples of nylon and water approximately 100 mm and 170 mm thick were also compared, showing consistent results. These measurements also agreed well with surface probe measurements of the nylon sample and literature values for water. The results indicate permittivity estimates of acceptable accuracy can be obtained using the proposed approach, apparatus and sample sizes.
Path integrals with higher order actions: Application to realistic chemical systems
NASA Astrophysics Data System (ADS)
Lindoy, Lachlan P.; Huang, Gavin S.; Jordan, Meredith J. T.
2018-02-01
Quantum thermodynamic parameters can be determined using path integral Monte Carlo (PIMC) simulations. These simulations, however, become computationally demanding as the quantum nature of the system increases, although their efficiency can be improved by using higher order approximations to the thermal density matrix, specifically the action. Here we compare the standard, primitive approximation to the action (PA) and three higher order approximations, the Takahashi-Imada action (TIA), the Suzuki-Chin action (SCA) and the Chin action (CA). The resulting PIMC methods are applied to two realistic potential energy surfaces, for H2O and HCN-HNC, both of which are spectroscopically accurate and contain three-body interactions. We further numerically optimise, for each potential, the SCA parameter and the two free parameters in the CA, obtaining more significant improvements in efficiency than seen previously in the literature. For both H2O and HCN-HNC, accounting for all required potential and force evaluations, the optimised CA formalism is approximately twice as efficient as the TIA formalism and approximately an order of magnitude more efficient than the PA. The optimised SCA formalism shows similar efficiency gains to the CA for HCN-HNC but has similar efficiency to the TIA for H2O at low temperature. In the H2O and HCN-HNC systems, the optimal value of the a1 CA parameter is approximately 1/3, corresponding to an equal weighting of all force terms in the thermal density matrix, and, similar to previous studies, the optimal α parameter in the SCA was approximately 0.31. Importantly, poor choice of parameter significantly degrades the performance of the SCA and CA methods. In particular, for the CA, setting a1 = 0 is not efficient: the reduction in convergence efficiency is not offset by the lower number of force evaluations. We also find that the harmonic approximation to the CA parameters, whilst providing a fourth order approximation to the action, is not optimal for these realistic potentials: numerical optimisation leads to better approximate cancellation of the fifth order terms, with deviation between the harmonic and numerically optimised parameters more marked in the more quantum H2O system. This suggests that numerically optimising the CA or SCA parameters, which can be done at high temperature, will be important in fully realising the efficiency gains of these formalisms for realistic potentials.
NASA Astrophysics Data System (ADS)
Hadade, Ioan; di Mare, Luca
2016-08-01
Modern multicore and manycore processors exhibit multiple levels of parallelism through a wide range of architectural features such as SIMD for data parallel execution or threads for core parallelism. The exploitation of multi-level parallelism is therefore crucial for achieving superior performance on current and future processors. This paper presents the performance tuning of a multiblock CFD solver on Intel SandyBridge and Haswell multicore CPUs and the Intel Xeon Phi Knights Corner coprocessor. Code optimisations have been applied on two computational kernels exhibiting different computational patterns: the update of flow variables and the evaluation of the Roe numerical fluxes. We discuss at great length the code transformations required for achieving efficient SIMD computations for both kernels across the selected devices including SIMD shuffles and transpositions for flux stencil computations and global memory transformations. Core parallelism is expressed through threading based on a number of domain decomposition techniques together with optimisations pertaining to alleviating NUMA effects found in multi-socket compute nodes. Results are correlated with the Roofline performance model in order to assert their efficiency for each distinct architecture. We report significant speedups for single thread execution across both kernels: 2-5X on the multicore CPUs and 14-23X on the Xeon Phi coprocessor. Computations at full node and chip concurrency deliver a factor of three speedup on the multicore processors and up to 24X on the Xeon Phi manycore coprocessor.
Dual ant colony operational modal analysis parameter estimation method
NASA Astrophysics Data System (ADS)
Sitarz, Piotr; Powałka, Bartosz
2018-01-01
Operational Modal Analysis (OMA) is a common technique used to examine the dynamic properties of a system. Contrary to experimental modal analysis, the input signal is generated in the object's ambient environment. Operational modal analysis mainly aims at determining the number of pole pairs and estimating the modal parameters. Many methods are used for parameter identification; some operate in the time domain, others in the frequency domain. The former use correlation functions, the latter spectral density functions. While some methods require the user to select poles from a stabilisation diagram, others try to automate the selection process. The dual ant colony operational modal analysis parameter estimation method (DAC-OMA) presents a new approach to the problem, avoiding the issues involved in the stabilisation diagram. The presented algorithm is fully automated. It uses deterministic methods to define the intervals of the estimated parameters, thus reducing the problem to an optimisation task which is conducted with dedicated software based on an ant colony optimisation algorithm. The combination of deterministic methods restricting parameter intervals and artificial intelligence yields very good results, also for closely spaced modes and significantly varied mode shapes within one measurement point.
NASA Astrophysics Data System (ADS)
Katata, Lebogang; Tshweu, Lesego; Naidoo, Saloshnee; Kalombo, Lonji; Swai, Hulda
2012-11-01
Efavirenz (EFV) is one of the first-line antiretroviral drugs recommended by the World Health Organisation for treating HIV. It is a hydrophobic drug that suffers from low aqueous solubility (4 μg/mL), which leads to limited oral absorption and low bioavailability. In order to improve its oral bioavailability, nano-sized polymeric delivery systems are suggested. Spray-dried polycaprolactone-efavirenz (PCL-EFV) nanoparticles were prepared by the double emulsion method. The Taguchi method, a statistical design with an L8 orthogonal array, was implemented to optimise the formulation parameters of the PCL-EFV nanoparticles. The type of sugar (lactose or trehalose), surfactant concentration and solvent (dichloromethane or ethyl acetate) were chosen as significant parameters affecting the particle size and polydispersity index (PDI). Small nanoparticles, with an average particle size of less than 254 ± 0.95 nm, were obtained with ethyl acetate as the organic solvent, compared to more than 360 ± 19.96 nm for dichloromethane. In this study, the type of solvent and sugar were the parameters that most influenced the particle size and PDI. The Taguchi method proved to be a quick, valuable tool for optimising the particle size and PDI of PCL-EFV nanoparticles. The optimised experimental values for the nanoparticle size and PDI were 217 ± 2.48 nm and 0.093 ± 0.02, respectively.
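For reference, an L8 (2^7) orthogonal array and a smaller-the-better signal-to-noise ratio of the kind used to score particle size and PDI can be written down directly in Python; the factor-to-column assignment below is illustrative, not the authors' exact layout.

import numpy as np

# Standard L8 (2^7) orthogonal array, levels coded 0/1; in this study only the
# first few columns would carry factors such as sugar type, surfactant level
# and solvent, with the remaining columns unused or assigned to interactions.
L8 = np.array([[0, 0, 0, 0, 0, 0, 0],
               [0, 0, 0, 1, 1, 1, 1],
               [0, 1, 1, 0, 0, 1, 1],
               [0, 1, 1, 1, 1, 0, 0],
               [1, 0, 1, 0, 1, 0, 1],
               [1, 0, 1, 1, 0, 1, 0],
               [1, 1, 0, 0, 1, 1, 0],
               [1, 1, 0, 1, 0, 0, 1]])

def sn_smaller_is_better(y):
    """Taguchi S/N ratio in dB for responses to be minimised (e.g. size, PDI)."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))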
Development of a competency framework for the nutrition in emergencies sector.
Meeker, Jessica; Perry, Abigail; Dolan, Carmel; Emary, Colleen; Golden, Kate; Abla, Caroline; Walsh, Anne; Maclaine, Ali; Seal, Andrew
2014-03-01
There is a recognised need to strengthen capacity in the nutrition in emergencies sector and for greater clarity on the role of emergency nutritionists and the skills they require. Competency frameworks are an important tool for human resource development and have been developed for several other humanitarian sectors. We therefore developed a technical competency framework for practitioners in nutrition in emergencies. Existing competency frameworks were reviewed and interviews conducted to explore methods used in developing competency frameworks for other sectors. Competencies were identified through interviews with field experts, feedback from course trainees, academic course content and job specifications. Competencies were then categorised and behavioural indicators developed for each. The draft framework was then reviewed by members of the Global Nutrition Cluster and modified in an iterative process. A wide range of competencies were identified as essential for nutritionists working in emergencies, covering technical skills and general core competencies. The proposed framework contains twenty competency areas with 161 behavioural indicators categorised into three levels, corresponding to the requirements of progressively more senior roles. Many of the competencies are common across development and emergency nutrition. The proposed technical competency framework should prove to be a valuable tool in creating standards within the sector and promoting effective capacity strengthening and professionalisation. Continued research is needed to validate the framework, optimise methods for assessment, develop approaches to integrate it within the sector and measure its impact on performance.
Oliveri, Paolo
2017-08-22
Qualitative data modelling is a fundamental branch of pattern recognition, with many applications in analytical chemistry, and embraces two main families: discriminant and class-modelling methods. The first strategy is appropriate when at least two classes are meaningfully defined in the problem under study, while the second strategy is the right choice when the focus is on a single class. For this reason, class-modelling methods are also referred to as one-class classifiers. Although, in the food analytical field, most of the issues would be properly addressed by class-modelling strategies, the use of such techniques is rather limited and, in many cases, discriminant methods are forcedly used for one-class problems, introducing a bias in the outcomes. Key aspects related to the development, optimisation and validation of suitable class models for the characterisation of food products are critically analysed and discussed.
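A minimal example of the class-modelling (one-class) strategy, using scikit-learn's OneClassSVM as a stand-in for the chemometric class models discussed: the model is trained on the target class only and then accepts or rejects new samples, with no second class ever required. The data here are synthetic placeholders.

import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
authentic = rng.normal(0.0, 1.0, size=(100, 5))        # training: target class only
model = OneClassSVM(nu=0.05, gamma="scale").fit(authentic)

new_samples = rng.normal(0.0, 1.0, size=(10, 5))
accepted = model.predict(new_samples) == 1             # +1 means inside the class model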
Optimised Iteration in Coupled Monte Carlo - Thermal-Hydraulics Calculations
NASA Astrophysics Data System (ADS)
Hoogenboom, J. Eduard; Dufek, Jan
2014-06-01
This paper describes an optimised iteration scheme for the number of neutron histories and the relaxation factor in successive iterations of coupled Monte Carlo and thermal-hydraulic reactor calculations based on the stochastic iteration method. The scheme results in an increasing number of neutron histories for the Monte Carlo calculation in successive iteration steps and a decreasing relaxation factor for the spatial power distribution to be used as input to the thermal-hydraulics calculation. The theoretical basis is discussed in detail and practical consequences of the scheme are shown, including a nearly linear increase per iteration in the number of cycles of the Monte Carlo calculation. The scheme is demonstrated for a full PWR-type fuel assembly. Results are shown for the axial power distribution during several iteration steps. A few alternative iteration methods are also tested, and it is concluded that the presented iteration method is near optimal.
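The flavour of such a scheme can be sketched with a toy coupled iteration in which the relaxation factor decays as 1/i while the per-step number of histories grows roughly linearly. The stand-in "Monte Carlo" estimator and feedback below are invented for illustration and do not reproduce the paper's reactor physics.

```python
import numpy as np

rng = np.random.default_rng(1)

def monte_carlo_power(temperature, n_histories):
    # Toy stand-in for a Monte Carlo power estimate: the true fixed-point
    # response plus statistical noise shrinking with the history count.
    true_power = 1.0 + 0.5 * np.tanh(1.0 - temperature)
    return true_power + rng.normal(0.0, 1.0 / np.sqrt(n_histories))

power, temperature = 1.0, 1.0
n_histories = 1000                 # histories in the first step
for i in range(1, 11):
    alpha = 1.0 / i                # decreasing relaxation factor
    estimate = monte_carlo_power(temperature, n_histories)
    power += alpha * (estimate - power)    # relaxed power update
    temperature = 0.8 + 0.2 * power        # toy thermal-hydraulic feedback
    n_histories += 1000            # roughly linear growth in histories
    print(f"step {i:2d}: power={power:.4f}, histories={n_histories}")
```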
Multi-terminal pipe routing by Steiner minimal tree and particle swarm optimisation
NASA Astrophysics Data System (ADS)
Liu, Qiang; Wang, Chengen
2012-08-01
Computer-aided design of pipe routing is of fundamental importance for the development of complex equipment. In this article, non-rectilinear branch pipe routing with multiple terminals, which can be formulated as a Euclidean Steiner Minimal Tree with Obstacles (ESMTO) problem, is studied in the context of aeroengine-integrated design engineering. Unlike traditional methods that connect pipe terminals sequentially, this article presents a new branch pipe routing algorithm based on Steiner tree theory. The article begins with a new algorithm for solving the ESMTO problem using particle swarm optimisation (PSO), and then extends the method to surface cases by using geodesics to meet the requirements of routing non-rectilinear pipes on the surfaces of aeroengines. Subsequently, the adaptive region strategy and the basic visibility graph method are adopted to increase computational efficiency. Numerical computations show that the proposed routing algorithm can find satisfactory routing layouts while running in polynomial time.
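A bare-bones global-best PSO placing a single Steiner point for three terminals (the planar Fermat point) illustrates the inner optimisation; obstacle handling, geodesics and the paper's adaptive region strategy are omitted, and all PSO constants are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)
terminals = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])

def tree_length(p):
    # Length of the star tree joining one Steiner point to all terminals.
    return np.linalg.norm(terminals - p, axis=1).sum()

# Plain global-best PSO over the Steiner point coordinates.
n, dim, w, c1, c2 = 30, 2, 0.7, 1.5, 1.5
x = rng.uniform(0.0, 4.0, (n, dim))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([tree_length(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random((n, 1)), rng.random((n, 1))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = np.array([tree_length(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("Steiner point ~", gbest, "tree length:", tree_length(gbest))
```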
Lataoui, Mohammed; Seffen, Mongi; Aliakbarian, Bahar; Casazza, Alessandro Alberto; Converti, Attilio; Perego, Patrizia
2014-01-01
To optimise recovery of phenolics from Vitex agnus-castus Linn., a non-conventional high-pressure (2-24 bar) and high-temperature (100-180°C) extraction method was used under a nitrogen atmosphere with methanol as solvent. The optimal temperature was between 100 and 140°C, and the optimal extraction time was about half that of conventional solid/liquid extraction at room temperature. Final yields of total polyphenols, total flavonoids, o-diphenols and anthocyanins were 2.0-, 3.0-, 2.5- and 11-fold those obtained by conventional extraction.
Global impact of accelerated plant breeding: Evidence from a meta-analysis on rice breeding.
Lenaerts, Bert; de Mey, Yann; Demont, Matty
2018-01-01
Rice breeders in Asia and elsewhere in the world have long overlooked opportunities to shorten the time it takes to develop new varieties. Plant breeders have proposed a technique called Rapid Generation Advance (RGA) as a way to accelerate the results of public rice breeding programs. However, little is known about RGA's potential impact. Here, we present the first results of a global impact study of RGA. More specifically, we calculated the multiplier effects of RGA on the research benefits generated by conventional rice breeding programs and applied them to a meta-analysis of selected impact studies in the literature. These insights are a first crucial step in developing a targeted approach for disseminating RGA technology among rice breeders to accelerate the impact of their public rice breeding programs around the world. We show that the additional benefits due to time savings are considerable and offer some insights into the economics of breeding. Our results confirm that the adoption of accelerated breeding would lead to substantial advantages for rice breeding programs and that earlier variety release leads to significant economic benefits to society. This can be important to policy makers when reshaping their public breeding methods and optimising their return on research investments in breeding.
Laurent, Cédric P; Latil, Pierre; Durville, Damien; Rahouadj, Rachid; Geindreau, Christian; Orgéas, Laurent; Ganghoffer, Jean-François
2014-12-01
The use of biodegradable scaffolds seeded with cells in order to regenerate functional tissue-engineered substitutes offers an interesting alternative to common medical approaches for ligament repair. In particular, the finite element (FE) method makes it possible to predict and optimise both the macroscopic behaviour of these scaffolds and the local mechanical signals that control cell activity. In this study, we investigate the ability of a dedicated FE code to predict the geometrical evolution of a new braided and biodegradable polymer scaffold for ligament tissue engineering by comparing scaffold geometries obtained from FE simulations and from X-ray tomographic imaging during a tensile test. Moreover, we compare two types of FE simulations whose initial geometries are derived either from X-ray imaging or from a computed idealised configuration. We report that dedicated FE simulations from an idealised reference configuration can reasonably be used in the future to predict the global and local mechanical behaviour of the braided scaffold. A valuable and original dialogue between the fields of experimental and numerical characterisation of such fibrous media is thus achieved. In the future, this approach should enable more accurate characterisation of the local and global behaviour of tissue-engineering scaffolds. Copyright © 2014 Elsevier Ltd. All rights reserved.
On the performances of different IMRT treatment planning systems for selected paediatric cases
Fogliata, Antonella; Nicolini, Giorgia; Alber, Markus; Åsell, Mats; Clivio, Alessandro; Dobler, Barbara; Larsson, Malin; Lohr, Frank; Lorenz, Friedlieb; Muzik, Jan; Polednik, Martin; Vanetti, Eugenio; Wolff, Dirk; Wyttenbach, Rolf; Cozzi, Luca
2007-01-01
Background To evaluate the performance of seven different TPS (Treatment Planning Systems: Corvus, Eclipse, Hyperion, KonRad, Oncentra Masterplan, Pinnacle and PrecisePLAN) when intensity modulated (IMRT) plans are designed for paediatric tumours. Methods Datasets (CT images and volumes of interest) of four patients were used to design IMRT plans. The tumour types were: one extraosseous, intrathoracic Ewing Sarcoma; one mediastinal Rhabdomyosarcoma; one metastatic Rhabdomyosarcoma of the anus; one Wilms' tumour of the left kidney with multiple liver metastases. Prescribed doses ranged from 18 to 54.4 Gy. To minimise variability, the same beam geometry and clinical goals were imposed on all systems for every patient. Results were analysed in terms of dose distributions and dose volume histograms. Results For all patients, IMRT plans led to acceptable treatments in terms of conformal avoidance, since most of the dose objectives for Organs At Risk (OARs) were met, and the Conformity Index (averaged over all TPS and patients) ranged from 1.14 to 1.58 on primary target volumes and from 1.07 to 1.37 on boost volumes. The healthy tissue involvement was measured in terms of several parameters, and the average mean dose ranged from 4.6 to 13.7 Gy. A global scoring method was developed to evaluate plans according to their degree of success in meeting dose objectives (lower scores are better than higher ones). For OARs, the range of scores was from 0.75 ± 0.15 (Eclipse) to 0.92 ± 0.18 (Pinnacle3 with physical optimisation). For target volumes, the score ranged from 0.05 ± 0.05 (Pinnacle3 with physical optimisation) to 0.16 ± 0.07 (Corvus). Conclusion A set of complex paediatric cases presented a variety of individual treatment planning challenges. Despite the large spread of results, inverse planning systems offer promising results for IMRT delivery, hence widening the treatment strategies for this very sensitive class of patients. PMID:17302972
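The published scoring formula is not reproduced above; as a loose illustration of the idea that a plan score aggregates normalised violations of dose objectives, with lower scores being better, consider the following sketch, in which both the form of the score and the numbers are invented:

```python
def plan_score(objectives):
    # Toy normalised score: 0 when every dose objective is met, growing
    # with the relative violation; lower is better. This is an invented
    # illustration, not the paper's published scoring formula.
    violations = []
    for achieved, objective, kind in objectives:
        if kind == "max":   # OAR-type: dose must stay below the objective
            violations.append(max(0.0, (achieved - objective) / objective))
        else:               # target-type: dose must reach the objective
            violations.append(max(0.0, (objective - achieved) / objective))
    return sum(violations) / len(violations)

# (achieved dose, objective dose, objective type) -- illustrative numbers
print(plan_score([(22.0, 20.0, "max"), (52.0, 54.4, "min")]))
```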
NASA Astrophysics Data System (ADS)
Hazwan, M. H. M.; Shayfull, Z.; Sharif, S.; Nasir, S. M.; Zainal, N.
2017-09-01
In the injection moulding process, quality and productivity are notably important and must be controlled for each product type produced. Quality is measured by the extent of warpage of moulded parts, while productivity is measured by the duration of the moulding cycle time. To control quality, many researchers have introduced various optimisation approaches that have been proven to enhance the quality of the moulded parts produced. To improve the productivity of the injection moulding process, some researchers have proposed the application of conformal cooling channels, which have been proven to reduce the moulding cycle time. Therefore, this paper presents an alternative optimisation approach, Response Surface Methodology (RSM) with Glowworm Swarm Optimisation (GSO), applied to a moulded part with straight-drilled and conformal cooling channel moulds. This study examined the warpage of the moulded parts before and after optimisation for both cooling channel types. A front panel housing was selected as the specimen, and the performance of the proposed optimisation approach was analysed for conventional straight-drilled cooling channels and for Milled Groove Square Shape (MGSS) conformal cooling channels by simulation analysis using Autodesk Moldflow Insight (AMI) 2013. Based on the results, for the straight-drilled cooling channels, melt temperature was the most significant factor contributing to warpage, and warpage was reduced by 39.1% after optimisation; for the MGSS conformal cooling channels, cooling time was the most significant factor, and warpage was reduced by 38.7% after optimisation. In addition, the findings show that applying the optimisation to the conformal cooling channels offers better quality and productivity for the moulded part produced.
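A generic Glowworm Swarm Optimisation loop is sketched below on a stand-in objective; the luciferin decay, neighbourhood-range update and step size follow the standard GSO template with arbitrary constants, not the paper's RSM-coupled setup.

```python
import numpy as np

rng = np.random.default_rng(3)

def fitness(x):
    # Toy stand-in for the RSM warpage model (lower warpage is better),
    # so the negated response is maximised.
    return -np.sum((x - 0.3) ** 2, axis=-1)

n, dim, iters = 40, 2, 100
rho, gamma, step, beta, r_max, n_des = 0.4, 0.6, 0.03, 0.08, 1.0, 5
x = rng.uniform(-1.0, 1.0, (n, dim))
luciferin = np.full(n, 5.0)
r = np.full(n, r_max)

for _ in range(iters):
    # Luciferin update: decay plus fitness-proportional reinforcement.
    luciferin = (1 - rho) * luciferin + gamma * fitness(x)
    for i in range(n):
        d = np.linalg.norm(x - x[i], axis=1)
        nbrs = np.where((d < r[i]) & (luciferin > luciferin[i]))[0]
        if len(nbrs):
            # Move towards a probabilistically chosen brighter neighbour.
            p = luciferin[nbrs] - luciferin[i]
            j = rng.choice(nbrs, p=p / p.sum())
            x[i] += step * (x[j] - x[i]) / (np.linalg.norm(x[j] - x[i]) + 1e-12)
        # Adapt the neighbourhood range towards the desired neighbour count.
        r[i] = min(r_max, max(0.0, r[i] + beta * (n_des - len(nbrs))))

print("best point:", x[np.argmax(fitness(x))])
```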
NASA Astrophysics Data System (ADS)
Li, Dewei; Li, Jiwei; Xi, Yugeng; Gao, Furong
2017-12-01
In practical applications, systems are always influenced by parameter uncertainties and external disturbances. Both the H2 performance and the H∞ performance are important in real applications. For a constrained system, previous designs of mixed H2/H∞ robust model predictive control (RMPC) optimise one performance with the other performance requirement as a constraint, so the two performances cannot be optimised at the same time. In this paper, an improved design of mixed H2/H∞ RMPC for polytopic uncertain systems with external disturbances is proposed to optimise them simultaneously. In the proposed design, the original uncertain system is decomposed into two subsystems by the additive property of linear systems. Two different Lyapunov functions are used to separately formulate the two performance indices for the two subsystems. The proposed RMPC then optimises both performances by the weighting method while satisfying the H∞ performance requirement. Meanwhile, to make the design more practical, a simplified design is also developed. The recursive feasibility conditions of the proposed RMPC are discussed and closed-loop input-to-state practical stability is proven. Numerical examples illustrate the enlarged feasible region and the improved performance of the proposed design.
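The weighting method can be illustrated with a toy scalarisation: minimise w*J2 + (1-w)*Jinf over a single gain, subject to an H∞-style level constraint. The cost shapes and the level gamma_max below are invented stand-ins for the paper's LMI-based formulation.

```python
from scipy.optimize import minimize

# Toy static surrogate: one gain k trades off a nominal-performance cost
# J2(k) against a worst-case disturbance cost Jinf(k). These cost shapes
# are illustrative only, not the paper's Lyapunov/LMI formulation.
J2 = lambda k: (k - 2.0) ** 2
Jinf = lambda k: (k - 0.5) ** 2 + 1.0
gamma_max = 2.0                     # assumed required H-infinity level

for w in (0.2, 0.5, 0.8):
    res = minimize(
        lambda k: w * J2(k[0]) + (1 - w) * Jinf(k[0]),  # weighted sum
        x0=[1.0],
        constraints=[{"type": "ineq",
                      "fun": lambda k: gamma_max - Jinf(k[0])}],
    )
    k = res.x[0]
    print(f"w={w}: k={k:.3f}, J2={J2(k):.3f}, Jinf={Jinf(k):.3f}")
```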
Staples, Emily; Ingram, Richard James Michael; Atherton, John Christopher; Robinson, Karen
2013-01-01
Sensitive measurement of multiple cytokine profiles from small mucosal tissue biopsies, for example human gastric biopsies obtained through an endoscope, is technically challenging. Multiplex methods such as Luminex assays offer an attractive solution but standard protocols are not available for tissue samples. We assessed the utility of three commercial Luminex kits (VersaMAP, Bio-Plex and MILLIPLEX) to measure interleukin-17A (IL-17) and interferon-gamma (IFNγ) concentrations in human gastric biopsies and we optimised preparation of mucosal samples for this application. First, we assessed the technical performance, limits of sensitivity and linear dynamic ranges for each kit. Next we spiked human gastric biopsies with recombinant IL-17 and IFNγ at a range of concentrations (1.5 to 1000 pg/mL) and assessed kit accuracy for spiked cytokine recovery and intra-assay precision. We also evaluated the impact of different tissue processing methods and extraction buffers on our results. Finally we assessed recovery of endogenous cytokines in unspiked samples. In terms of sensitivity, all of the kits performed well within the manufacturers' recommended standard curve ranges but the MILLIPLEX kit provided most consistent sensitivity for low cytokine concentrations. In the spiking experiments, the MILLIPLEX kit performed most consistently over the widest range of concentrations. For tissue processing, manual disruption provided significantly improved cytokine recovery over automated methods. Our selected kit and optimised protocol were further validated by measurement of relative cytokine levels in inflamed and uninflamed gastric mucosa using Luminex and real-time polymerase chain reaction. In summary, with proper optimisation Luminex kits (and for IL-17 and IFNγ the MILLIPLEX kit in particular) can be used for the sensitive detection of cytokines in mucosal biopsies. Our results should help other researchers seeking to quantify multiple low concentration cytokines in small tissue samples. PMID:23644159
A density functional global optimisation study of neutral 8-atom Cu-Ag and Cu-Au clusters
NASA Astrophysics Data System (ADS)
Heard, Christopher J.; Johnston, Roy L.
2013-02-01
The effect of doping on the energetics and dimensionality of eight-atom coinage metal subnanometre particles is fully resolved using a genetic algorithm in tandem with on-the-fly density functional theory calculations to determine the global minima (GM) for Cu_nAg_(8-n) and Cu_nAu_(8-n) clusters. Comparisons are made to previous ab initio work on mono- and bimetallic clusters, with excellent agreement found. Charge transfer and geometric arguments are considered to rationalise the stability of the particular permutational isomers found. An interesting transition between three-dimensional and two-dimensional GM structures is observed for copper-gold clusters, which is sharper and appears earlier in the doping series than is known for gold-silver particles.
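A minimal sketch of a genetic-algorithm loop for cluster global optimisation is shown below, with a Lennard-Jones potential standing in for the on-the-fly DFT energies used in the paper; the plane-cut style crossover, mutation scale and population sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def lj_energy(coords):
    # Lennard-Jones pair energy as a cheap stand-in for DFT evaluations.
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    r = d[np.triu_indices(len(coords), k=1)]
    return np.sum(4.0 * (r ** -12 - r ** -6))

def random_cluster(n=8):
    return rng.uniform(-1.5, 1.5, (n, 3))

def crossover(a, b):
    # Plane-cut style mixing: take the lower half (by z) from one parent
    # and the upper half from the other.
    order_a, order_b = a[np.argsort(a[:, 2])], b[np.argsort(b[:, 2])]
    cut = len(a) // 2
    return np.vstack([order_a[:cut], order_b[cut:]])

pop = [random_cluster() for _ in range(20)]
for gen in range(200):
    pop.sort(key=lj_energy)
    pop = pop[:10]                                  # truncation selection
    while len(pop) < 20:
        a, b = rng.choice(10, 2, replace=False)
        child = crossover(pop[a], pop[b])
        child += rng.normal(0.0, 0.05, child.shape)  # mutation
        pop.append(child)

print("best LJ energy found for 8 atoms:", lj_energy(min(pop, key=lj_energy)))
```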
Dolan, Gerry
2017-10-01
The 7th Haemophilia Global Summit was held in Madrid, Spain, in September 2016. With a programme designed, for the 6th consecutive year, by a Scientific Steering Committee of haemophilia experts, the aim of the summit was to share optimal management strategies for haemophilia at all life stages and to provide an opportunity for specialists from across the haemophilia multidisciplinary care team to engage in discussion and debate with leading international experts on current and future areas of research. Topics covered ranged from the optimisation of haemophilia management and emerging issues in clinical care to practical approaches and future perspectives, in addition to patient engagement and empowerment in modern haemophilia care. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Sensor selection cost optimisation for tracking structurally cyclic systems: a P-order solution
NASA Astrophysics Data System (ADS)
Doostmohammadian, M.; Zarrabi, H.; Rabiee, H. R.
2017-08-01
Measurements and sensing implementations impose certain costs in sensor networks. Sensor selection cost optimisation is the problem of minimising the sensing cost of monitoring a physical (or cyber-physical) system. Consider a given set of sensors tracking states of a dynamical system for estimation purposes. For each sensor, assume different costs to measure different (realisable) states. The idea is to assign sensors to measure states such that the global cost is minimised. The number and selection of sensor measurements need to ensure observability, so that the dynamic state of the system can be tracked with bounded estimation error. The main question we address is how to select the state measurements to minimise the cost while satisfying the observability conditions. Relaxing the observability condition for structurally cyclic systems, the main contribution is a graph-theoretic approach that solves the problem in polynomial time. Note that polynomial-time algorithms are suitable for large-scale systems, as their running time is upper-bounded by a polynomial expression in the size of the algorithm's input. We frame the problem as a linear sum assignment problem, which is solvable in polynomial time.
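The linear sum assignment formulation can be illustrated directly with scipy.optimize.linear_sum_assignment, which solves it in polynomial time. The cost matrix below is invented, with np.inf marking state measurements a sensor cannot realise.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i, j] = cost of sensor i measuring state j (illustrative values;
# np.inf marks a state that sensor cannot realise).
cost = np.array([
    [4.0,    1.0, np.inf],
    [2.0,    3.0, 5.0   ],
    [np.inf, 2.0, 1.0   ],
])

rows, cols = linear_sum_assignment(cost)   # Hungarian-type polynomial solver
for i, j in zip(rows, cols):
    print(f"sensor {i} -> state {j} (cost {cost[i, j]})")
print("total sensing cost:", cost[rows, cols].sum())
```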
Hu, Kang; Fiedler, Thorsten; Blanco, Laura; Geissen, Sven-Uwe; Zander, Simon; Prieto, David; Blanco, Angeles; Negro, Carlos; Swinnen, Nathalie
2017-11-10
A pilot-scale reverse osmosis (RO) unit operated downstream of a membrane bioreactor (MBR) was developed for desalination to reuse wastewater at a PVC production site. The solution-diffusion-film model (SDFM), based on the solution-diffusion model (SDM) and film theory, was proposed to describe the rejection of electrolyte mixtures in the MBR effluent, which consists of dominant ions (Na⁺ and Cl⁻) and several trace ions (Ca²⁺, Mg²⁺, K⁺ and SO₄²⁻). The universal global optimisation method was used to estimate the ion permeability coefficients (B) and mass transfer coefficients (K) in the SDFM. The membrane performance was then evaluated based on the estimated parameters, which demonstrated that the theoretical simulations were in line with the experimental results for the dominant ions. Moreover, an energy analysis model accounting for the limitation imposed by the thermodynamic restriction was proposed to analyse the specific energy consumption of the pilot-scale RO system in various scenarios.
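As a sketch of how an SDFM-type model combines solution-diffusion rejection with film-theory concentration polarisation, the snippet below fits a permeability B and a mass transfer coefficient k for a single ion by least squares. The standard relations are assumed (real rejection R = Jv/(Jv + B), observed rejection reduced by the factor exp(Jv/k)), and the data are synthetic, not the pilot-plant measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def observed_rejection(jv, B, k):
    # Solution-diffusion real rejection R = Jv/(Jv + B), reduced by
    # film-theory concentration polarisation through exp(Jv/k).
    r_real = jv / (jv + B)
    f = (1.0 - r_real) / r_real * np.exp(jv / k)
    return 1.0 / (1.0 + f)

# Synthetic flux/rejection data generated from assumed "true" values
# (B = 2e-7 m/s, k = 8e-6 m/s) plus noise -- not pilot-plant data.
rng = np.random.default_rng(5)
jv = np.linspace(2e-6, 1e-5, 8)
r_obs = observed_rejection(jv, 2e-7, 8e-6) + rng.normal(0.0, 0.002, jv.size)

(B, k), _ = curve_fit(observed_rejection, jv, r_obs,
                      p0=[1e-7, 1e-5], bounds=(0, np.inf))
print(f"permeability B = {B:.2e} m/s, mass transfer k = {k:.2e} m/s")
```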
Devos, Olivier; Downey, Gerard; Duponchel, Ludovic
2014-04-01
Classification is an important task in chemometrics. For several years now, support vector machines (SVMs) have proven to be powerful for infrared spectral data classification. However, such methods require optimisation of parameters in order to control the risk of overfitting and the complexity of the boundary. Furthermore, it is established that the prediction ability of classification models can be improved using pre-processing in order to remove unwanted variance in the spectra. In this paper we propose a new methodology based on a genetic algorithm (GA) for the simultaneous optimisation of SVM parameters and pre-processing (GENOPT-SVM). The method has been tested for the discrimination of the geographical origin of Italian olive oil (Ligurian and non-Ligurian) on the basis of near infrared (NIR) or mid infrared (FTIR) spectra. Different classification models (PLS-DA, SVM with mean-centred data, GENOPT-SVM) were tested and statistically compared using McNemar's test. For the two datasets, SVM with optimised pre-processing gave models with higher accuracy than those obtained with PLS-DA on pre-processed data. In the case of the NIR dataset, most of this accuracy improvement (86.3% compared with 82.8% for PLS-DA) occurred using only a single pre-processing step. For the FTIR dataset, three optimised pre-processing steps were required to obtain an SVM model with a significant accuracy improvement (82.2%) compared with the one obtained with PLS-DA (78.6%). Furthermore, this study demonstrates that even SVM models have to be developed on the basis of well-corrected spectral data in order to obtain higher classification rates. Copyright © 2013 Elsevier Ltd. All rights reserved.
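A rough sketch of the idea, a simultaneous GA search over a pre-processing choice and SVM hyperparameters scored by cross-validation, is shown below. The candidate pre-processing steps, genome encoding and GA settings are illustrative assumptions, not the published GENOPT-SVM configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer, StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(6)
X, y = make_classification(n_samples=200, n_features=30, random_state=0)

# Candidate pre-processing steps standing in for spectral corrections
# such as derivatives or scatter correction.
preprocs = [FunctionTransformer(),                         # none
            StandardScaler(),
            FunctionTransformer(lambda x: np.gradient(x, axis=1))]

def fitness(genome):
    # Genome = (pre-processing index, log10 C, log10 gamma).
    p, log_c, log_g = genome
    model = make_pipeline(preprocs[int(p) % len(preprocs)],
                          SVC(C=10.0 ** log_c, gamma=10.0 ** log_g))
    return cross_val_score(model, X, y, cv=3).mean()

pop = [np.array([rng.integers(3), rng.uniform(-1, 3), rng.uniform(-4, 0)])
       for _ in range(12)]
for gen in range(15):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:4]
    children = []
    while len(children) < 8:
        a, b = rng.choice(4, 2, replace=False)
        child = np.where(rng.random(3) < 0.5, parents[a], parents[b])
        child = child + rng.normal(0.0, 0.2, 3) * [0, 1, 1]  # mutate C, gamma
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print("best (preproc idx, log10 C, log10 gamma):", best,
      "CV accuracy:", fitness(best))
```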
Rowlands, Ceri; Rooshenas, Leila; Fairhurst, Katherine; Rees, Jonathan; Gamble, Carrol; Blazeby, Jane M
2018-02-02
To examine the design and findings of recruitment studies in randomised controlled trials (RCTs) involving patients with an unscheduled hospital admission (UHA), to consider how to optimise recruitment in future RCTs of this nature. Studies within the ORRCA database (Online Resource for Recruitment Research in Clinical TriAls; www.orrca.org.uk) that reported on recruitment to RCTs involving UHAs in patients >18 years were included. Extracted data included trial clinical details, and the rationale and main findings of the recruitment study. Of 3114 articles populating ORRCA, 39 recruitment studies were eligible, focusing on 68 real and 13 hypothetical host RCTs. Four studies were prospectively planned investigations of recruitment interventions, one of which was a nested RCT. Most recruitment papers were reports of recruitment experiences from one or more 'real' RCTs (n=24) or studies using hypothetical RCTs (n=11). Rationales for conducting recruitment studies included limited time for informed consent (IC) and patients being too unwell to provide IC. Methods to optimise recruitment included providing patients with trial information in the prehospital setting, technology to allow recruiters to cover multiple sites, screening logs to uncover recruitment barriers, and verbal rather than written information and consent. There is a paucity of high-quality research into recruitment in RCTs involving UHAs with only one nested randomised study evaluating a recruitment intervention. Among the remaining studies, methods to optimise recruitment focused on how to improve information provision in the prehospital setting and use of screening logs. Future research in this setting should focus on the prospective evaluation of the well-developed interventions to optimise recruitment. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
A world without bacterial meningitis: how genomic epidemiology can inform vaccination strategy.
Rodrigues, Charlene M C; Maiden, Martin C J
2018-01-01
Bacterial meningitis remains an important cause of global morbidity and mortality. Although effective vaccinations exist and are being increasingly used worldwide, bacterial diversity threatens their impact and the ultimate goal of eliminating the disease. Through genomic epidemiology, we can appreciate bacterial population structure and its consequences for transmission dynamics, virulence, antimicrobial resistance, and development of new vaccines. Here, we review what we have learned through genomic epidemiological studies, following the rapid implementation of whole genome sequencing that can help to optimise preventative strategies for bacterial meningitis.
NASA Astrophysics Data System (ADS)
Kutsch, W. L.; Zhao, Z.; Hardisty, A.; Hellström, M.; Chin, Y.; Magagna, B.; Asmi, A.; Papale, D.; Pfeil, B.; Atkinson, M.
2017-12-01
Environmental Research Infrastructures (ENVRIs) are expected to become important pillars not only for supporting their own scientific communities, but also a) for inter-disciplinary research and b) for the European Earth Observation Program Copernicus as a contribution to the Global Earth Observation System of Systems (GEOSS) or global thematic data networks. As such, it is very important that data-related activities of the ENVRIs will be well integrated. This requires common policies, models and e-infrastructure to optimise technological implementation, define workflows, and ensure coordination, harmonisation, integration and interoperability of data, applications and other services. The key is interoperating common metadata systems (utilising a richer metadata model as the 'switchboard' for interoperation with formal syntax and declared semantics). The metadata characterises data, services, users and ICT resources (including sensors and detectors). The European Cluster Project ENVRIplus has developed a reference model (ENVRI RM) for common data infrastructure architecture to promote interoperability among ENVRIs. The presentation will provide an overview of recent progress and give examples for the integration of ENVRI data in global integration networks.
NASA Astrophysics Data System (ADS)
Ghasemy Yaghin, R.; Fatemi Ghomi, S. M. T.; Torabi, S. A.
2015-10-01
In most markets, price differentiation mechanisms enable manufacturers to offer different prices for their products or services in different customer segments; however, perfect price discrimination is usually impossible for manufacturers. The importance of accounting for uncertainty in such environments spurs interest in developing appropriate decision-making tools to deal with uncertain and ill-defined parameters in joint pricing and lot-sizing problems. This paper proposes a hybrid bi-objective credibility-based fuzzy optimisation model, including both quantitative and qualitative objectives, to cope with these issues. Taking marketing and lot-sizing decisions into account simultaneously, the model aims to maximise the total profit of the manufacturer and to improve the service aspects of retailing while setting different prices with arbitrage considerations. After applying appropriate strategies to defuzzify the original model, the resulting non-linear multi-objective crisp model is solved by a fuzzy goal programming method. An efficient stochastic search procedure using particle swarm optimisation is also proposed to solve the non-linear crisp model.
NASA Astrophysics Data System (ADS)
Han, Ke-Zhen; Feng, Jian; Cui, Xiaohong
2017-10-01
This paper considers the fault-tolerant optimised tracking control (FTOTC) problem for unknown discrete-time linear systems. A research scheme is proposed on the basis of data-based parity space identification, reinforcement learning and residual compensation techniques. The main characteristic of this scheme lies in parity-space-identification-based simultaneous tracking control and residual compensation. The technical approach consists of four main components: a subspace-aided method to design an observer-based residual generator; a reinforcement Q-learning approach to solve the optimised tracking control policy; robust H∞ theory to achieve noise attenuation; and fault estimation triggered by the residual generator to perform fault compensation. To clarify the design and implementation procedures, an integrated algorithm is further constructed to link these four functional units. Detailed analysis and proofs are subsequently given to explain the guaranteed FTOTC performance of the proposed scheme. Finally, a case simulation is provided to verify its effectiveness.
Sharma, Shivani; Khanna, P K; Kapoor, S
2016-01-01
Mycelial growth in a defined medium by submerged fermentation is a rapid, alternative method for obtaining fungal biomass of consistent quality. Production of biomass, exopolysaccharides (EPS) and intracellular polysaccharides (IPS) was optimised by response surface methodology in Lentinula edodes strain LeS (NCBI JX915793). The optimised conditions were pH 5.0, temperature 26°C, an incubation period of 25 days and an agitation rate of 52 r/min. Under the calculated optimal culture conditions, biomass production (5.88 mg mL(-1)), EPS production (0.40 mg mL(-1)) and IPS production (12.45 mg g(-1)) were in agreement with the predicted values for biomass (5.93 mg mL(-1)), EPS (0.55 mg mL(-1)) and IPS (12.64 mg g(-1)). Crude lentinan exhibited the highest antibacterial effects, followed by the alcoholic, crude and aqueous extracts. The results obtained may be useful for highly effective production of biomass and bioactive metabolites.
Mediani, Ahmed; Abas, Faridah; Khatib, Alfi; Tan, Chin Ping
2013-08-29
The aim of the study was to analyze the influence of oven thermal processing of Cosmos caudatus on the total polyphenolic content (TPC) and antioxidant capacity (DPPH) of two different solvent extracts (80% methanol, and 80% ethanol). Sonication was used to extract bioactive compounds from this herb. The results showed that the optimised conditions for the oven drying method for 80% methanol and 80% ethanol were 44.5 °C for 4 h with an IC₅₀ of 0.045 mg/mL and 43.12 °C for 4.05 h with an IC₅₀ of 0.055 mg/mL, respectively. The predicted values for TPC under the optimised conditions for 80% methanol and 80% ethanol were 16.5 and 15.8 mg GAE/100 g DW, respectively. The results obtained from this study demonstrate that Cosmos caudatus can be used as a potential source of antioxidants for food and medicinal applications.
Kershaw, Stephen; Cummings, Jeffrey; Morris, Karen; Tugwood, Jonathan; Dive, Caroline
2015-05-10
The monocarboxylate transporter-1 (MCT1) represents a novel target in rational anticancer drug design, while AZD3965 was developed as an inhibitor of this transporter and is undergoing Phase I clinical trials (http://www.clinicaltrials.gov/show/NCT01791595). We describe the optimisation of an immunofluorescence (IF) method for determination of MCT1 and MCT4 in circulating tumour cells (CTC) as potential prognostic and predictive biomarkers of AZD3965 in cancer patients. Antibody selectivity was investigated by western blotting (WB) in K562 and MDAMB231 cell lines acting as positive controls for MCT1 and MCT4, respectively, and by flow cytometry also employing the control cell lines. The ability to detect MCT1 and MCT4 in CTC as a 4th channel marker utilising the Veridex™ CellSearch system was tested both in human volunteer blood spiked with control cells and in samples collected from small cell lung cancer (SCLC) patients. Experimental conditions were established which yielded a 10-fold dynamic range (DR) for detection of MCT1 over MCT4 (antibody concentration 6.25 μg/mL; integration time 0.4 seconds) and a 5-fold DR of MCT4 over MCT1 (8 μg/100 μL and 0.8 seconds). The IF method was sufficiently sensitive to detect both MCT1 and MCT4 in CTCs harvested from cancer patients. This first IF method has been developed and optimised for the detection of MCT1 and MCT4 in cancer patient CTC.
Mokhtarzadeh, Hossein; Perraton, Luke; Fok, Laurence; Muñoz, Mario A; Clark, Ross; Pivonka, Peter; Bryant, Adam L
2014-09-22
The aim of this paper was to compare the effect of different optimisation methods and different knee joint degrees of freedom (DOF) on muscle force predictions during a single legged hop. Nineteen subjects performed single-legged hopping manoeuvres and subject-specific musculoskeletal models were developed to predict muscle forces during the movement. Muscle forces were predicted using static optimisation (SO) and computed muscle control (CMC) methods using either 1 or 3 DOF knee joint models. All sagittal and transverse plane joint angles calculated using inverse kinematics or CMC in a 1 DOF or 3 DOF knee were well-matched (RMS error<3°). Biarticular muscles (hamstrings, rectus femoris and gastrocnemius) showed more differences in muscle force profiles when comparing between the different muscle prediction approaches where these muscles showed larger time delays for many of the comparisons. The muscle force magnitudes of vasti, gluteus maximus and gluteus medius were not greatly influenced by the choice of muscle force prediction method with low normalised root mean squared errors (<48%) observed in most comparisons. We conclude that SO and CMC can be used to predict lower-limb muscle co-contraction during hopping movements. However, care must be taken in interpreting the magnitude of force predicted in the biarticular muscles and the soleus, especially when using a 1 DOF knee. Despite this limitation, given that SO is a more robust and computationally efficient method for predicting muscle forces than CMC, we suggest that SO can be used in conjunction with musculoskeletal models that have a 1 or 3 DOF knee joint to study the relative differences and the role of muscles during hopping activities in future studies. Copyright © 2014 Elsevier Ltd. All rights reserved.
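The static optimisation step can be illustrated on a toy one-joint problem: minimise the sum of squared muscle activations subject to reproducing the joint moment obtained from inverse dynamics. The moment arms and maximum isometric forces below are invented, not subject-specific model values.

```python
import numpy as np
from scipy.optimize import minimize

# One-joint toy problem: three muscles must jointly produce a required
# knee moment. Moment arms (m) and maximum isometric forces (N) are
# illustrative placeholders.
moment_arm = np.array([0.05, 0.04, 0.03])
f_max = np.array([3000.0, 2000.0, 1500.0])
required_moment = 150.0   # N*m, as would come from inverse dynamics

# Static optimisation: minimise the sum of squared activations while
# exactly reproducing the joint moment, with activations in [0, 1].
res = minimize(
    fun=lambda a: np.sum(a ** 2),
    x0=np.full(3, 0.5),
    bounds=[(0.0, 1.0)] * 3,
    constraints=[{"type": "eq",
                  "fun": lambda a: moment_arm @ (f_max * a) - required_moment}],
)
print("activations:", res.x)
print("muscle forces (N):", f_max * res.x)
```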
NASA Astrophysics Data System (ADS)
Suja Priyadharsini, S.; Edward Rajan, S.; Femilin Sheniha, S.
2016-03-01
Electroencephalogram (EEG) is the recording of the electrical activity of the brain. It is contaminated by other biological signals, such as the cardiac signal (electrocardiogram), signals generated by eye movements/eye blinks (electrooculogram) and muscular signals (electromyogram), collectively called artefacts. Optimisation is an important tool for solving many real-world problems. In the proposed work, artefact removal based on the adaptive neuro-fuzzy inference system (ANFIS) is employed, with the parameters of the ANFIS optimised. The Artificial Immune System (AIS) algorithm is used to optimise the parameters of the ANFIS (ANFIS-AIS). Implementation results show that ANFIS-AIS is more effective than plain ANFIS in removing artefacts from EEG signals. Furthermore, in the proposed work, an improved AIS (IAIS) is developed by including suitable selection processes in the AIS algorithm. The performance of the proposed IAIS method is compared with AIS and with a genetic algorithm (GA). Measures such as signal-to-noise ratio, mean square error (MSE), correlation coefficient, power spectral density plots and convergence time are used to analyse the performance of the proposed method. From the results, it is found that the IAIS algorithm converges faster than AIS and performs better than both AIS and GA. Hence, the IAIS-tuned ANFIS (ANFIS-IAIS) is effective in removing artefacts from EEG signals.
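As an indication of how an AIS with an added selection step can tune parameters, here is a minimal clonal-selection sketch on a toy objective standing in for the ANFIS training error; the cloning and mutation schedule is a generic assumption, not the paper's IAIS.

```python
import numpy as np

rng = np.random.default_rng(7)

def affinity(x):
    # Toy objective standing in for the ANFIS training error (minimised);
    # higher affinity means lower error.
    return -np.sum(x ** 2, axis=-1)

pop_size, dim, clones_per, iters = 20, 4, 5, 100
pop = rng.uniform(-5.0, 5.0, (pop_size, dim))

for _ in range(iters):
    order = np.argsort(-affinity(pop))      # best antibodies first
    new_pop = []
    for rank, idx in enumerate(order):
        # Clone each antibody and hypermutate; worse ranks mutate more,
        # which provides the selection pressure of a clonal algorithm.
        sigma = 0.1 * (1 + rank)
        clones = pop[idx] + rng.normal(0.0, sigma, (clones_per, dim))
        candidates = np.vstack([pop[idx][None], clones])
        new_pop.append(candidates[np.argmax(affinity(candidates))])
    pop = np.array(new_pop)

best = pop[np.argmax(affinity(pop))]
print("best parameters:", best, "affinity:", affinity(best))
```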
Fractures in sport: Optimising their management and outcome
Robertson, Greg AJ; Wood, Alexander M
2015-01-01
Fractures in sport are a specialised cohort of fracture injuries, occurring in a high-functioning population, in which the goals are rapid restoration of function and return to play with the minimal symptom profile possible. While the general principles of fracture management, namely accurate fracture reduction, appropriate immobilisation and timely rehabilitation, guide the treatment of these injuries, management of fractures in athletic populations can differ significantly from that in the general population, due to the need to facilitate a rapid return to high-demand activities. However, despite fractures comprising up to 10% of all sporting injuries, dedicated research into the management and outcome of sport-related fractures is limited. In order to assess the optimal methods of treating such injuries, and so allow optimisation of their outcome, the evidence for the management of each specific sport-related fracture type requires assessment and analysis. We present and review the current evidence directing the management of fractures in athletes, with the aim of promoting valid innovative methods and optimising the outcome of such injuries. From this, key recommendations are provided for the management of the common fracture types seen in the athlete. Six case reports are also presented to illustrate the management planning and application of sport-focussed fracture management in the clinical setting. PMID:26716081
Using modified fruit fly optimisation algorithm to perform the function test and case studies
NASA Astrophysics Data System (ADS)
Pan, Wen-Tsao
2013-06-01
Evolutionary computation is a computing paradigm established by simulating natural evolutionary processes, based on Darwinian theory, and is a common research method. The main contribution of this paper is to reinforce the search for the optimal solution in the fruit fly optimization algorithm (FOA), in order to avoid getting trapped in local extrema. Evolutionary computation has grown to include the concepts of animal foraging behaviour and group behaviour. This study discussed three common evolutionary computation methods and compared them with the modified fruit fly optimization algorithm (MFOA). It further investigated their ability to compute extreme values of three mathematical functions, as well as the algorithm execution speed and the forecasting ability of models built using the optimised general regression neural network (GRNN) parameters. The findings indicated that there was no obvious difference between particle swarm optimization and the MFOA with regard to the ability to compute extreme values; however, both were better than the artificial fish swarm algorithm and the FOA. In addition, the MFOA performed better than particle swarm optimization in algorithm execution speed, and the forecasting ability of the model built using the MFOA-optimised GRNN parameters was better than that of the other three forecasting models.
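A minimal sketch of the basic FOA loop, osphresis (smell-based) search followed by vision-based movement to the best fly, is shown below on a one-dimensional test objective; the swarm size and step range are arbitrary choices, and the paper's modifications are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(8)

def objective(s):
    # Test function of the smell concentration S to be minimised.
    return (s - 3.0) ** 2

x, y = rng.uniform(0.0, 1.0, 2)            # initial swarm location
for _ in range(200):
    # Osphresis search: random flights around the swarm location.
    xi = x + rng.uniform(-1.0, 1.0, 20)
    yi = y + rng.uniform(-1.0, 1.0, 20)
    s = 1.0 / (np.sqrt(xi ** 2 + yi ** 2) + 1e-12)   # smell concentration
    fit = objective(s)
    j = np.argmin(fit)
    # Vision step: the swarm flies to the best fly if it improves.
    if fit[j] < objective(1.0 / (np.sqrt(x ** 2 + y ** 2) + 1e-12)):
        x, y = xi[j], yi[j]

best_s = 1.0 / (np.sqrt(x ** 2 + y ** 2) + 1e-12)
print(f"best S = {best_s:.4f}, objective = {objective(best_s):.2e}")
```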
Soh, Josephine Lay Peng; Grachet, Maud; Whitlock, Mark; Lukas, Timothy
2013-02-01
This is a study to fully assess a commercially available co-processed mannitol for its usefulness as an off-the-shelf excipient for developing orally disintegrating tablets (ODTs) by direct compression on a pilot scale (up to 4 kg). This work encompassed material characterization, formulation optimisation and process robustness. Overall, this co-processed mannitol possessed favourable physical attributes including low hygroscopicity and good compactibility. Two designs of experiments (DoEs) were used to screen and optimise the placebo formulation. Xylitol and crospovidone concentrations were found to have the most significant impact on disintegration time (p < 0.05). Higher xylitol concentrations retarded disintegration. Avicel PH102 promoted faster disintegration than PH101 at higher levels of xylitol. Without xylitol, higher crospovidone concentrations yielded faster disintegration and reduced tablet friability. Lubrication sensitivity studies were later conducted at two fill loads and three levels of lubricant concentration and blend rotations. Even at 75% fill load, the design space plot showed that 1.5% lubricant and 300 blend revolutions were sufficient to manufacture ODTs with ≤ 0.1% friability that disintegrated within 15 s. This study also describes results using a modified disintegration method based on the texture analyzer as an alternative to the USP method.
Statistical methods for convergence detection of multi-objective evolutionary algorithms.
Trautmann, H; Wagner, T; Naujoks, B; Preuss, M; Mehnen, J
2009-01-01
In this paper, two approaches for estimating the generation in which a multi-objective evolutionary algorithm (MOEA) shows statistically significant signs of convergence are introduced. A set-based perspective is taken where convergence is measured by performance indicators. The proposed techniques fulfill the requirements of proper statistical assessment on the one hand and efficient optimisation for real-world problems on the other hand. The first approach accounts for the stochastic nature of the MOEA by repeating the optimisation runs for increasing generation numbers and analysing the performance indicators using statistical tools. This technique results in a very robust offline procedure. Moreover, an online convergence detection method is introduced as well. This method automatically stops the MOEA when either the variance of the performance indicators falls below a specified threshold or a stagnation of their overall trend is detected. Both methods are analysed and compared for two MOEA and on different classes of benchmark functions. It is shown that the methods successfully operate on all stated problems needing less function evaluations while preserving good approximation quality at the same time.
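A sketch of such an online stopping rule is given below: the MOEA is halted when a recent window of indicator values either has variance below a threshold or shows no statistically significant trend. The window length, tolerance and the toy indicator trajectory are assumptions made for illustration, not the paper's calibrated settings.

```python
import numpy as np
from scipy import stats

def converged(history, window=20, var_tol=1e-6, alpha=0.05):
    # Stop when the indicator's variance over the window drops below
    # var_tol, or when the regression slope over the window is not
    # distinguishable from zero (stagnation of the overall trend).
    if len(history) < window:
        return False
    recent = np.asarray(history[-window:])
    if recent.var() < var_tol:
        return True
    slope, _, _, p_value, _ = stats.linregress(np.arange(window), recent)
    return p_value > alpha

# Usage inside an MOEA loop, with a performance indicator (e.g. a
# hypervolume-like value) recorded each generation:
history = []
for gen in range(1000):
    indicator = 1.0 - np.exp(-gen / 50)   # toy indicator trajectory
    history.append(indicator)
    if converged(history):
        print("stopping at generation", gen)
        break
```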
Learning Content and Software Evaluation and Personalisation Problems
ERIC Educational Resources Information Center
Kurilovas, Eugenijus; Serikoviene, Silvija
2010-01-01
The paper aims to analyse several scientific approaches how to evaluate, implement or choose learning content and software suitable for personalised users/learners needs. Learning objects metadata customisation method as well as the Method of multiple criteria evaluation and optimisation of learning software represented by the experts' additive…
Rodriguez-Nogales, J M; Garcia, M C; Marina, M L
2006-02-03
A perfusion reversed-phase high-performance liquid chromatography (RP-HPLC) method has been designed to allow rapid (3.4 min) separation of maize proteins with high resolution. Several factors, such as extraction conditions, temperature, detection wavelength and the type and concentration of ion-pairing agent, were optimised. A fine optimisation of the gradient elution was also performed by applying experimental design. Commercial maize products for human consumption (flours, precooked flours, fried snacks and extruded snacks) were characterised for the first time by perfusion RP-HPLC, and their chromatographic profiles allowed differentiation among products according to the technological processes used in their preparation. Furthermore, applying discriminant analysis made it possible to group the samples according to the technological process applied to the maize products, giving a correct prediction for 92% of the samples.
Streese-Kleeberg, Jan; Rachor, Ingke; Gebert, Julia; Stegmann, Rainer
2011-05-01
In order to optimise methane oxidation in landfill cover soils, it is important to be able to accurately quantify the amount of methane oxidised. This research considers the gas push-pull test (GPPT) as a possible method to quantify oxidation rates in situ. During a GPPT, a gas mixture consisting of one or more reactive gases (e.g., CH₄, O₂) and one or more conservative tracers (e.g., argon), is injected into the soil. Following this, the mixture of injected gas and soil air is extracted from the same location and periodically sampled. The kinetic parameters for the biological oxidation taking place in the soil can be derived from the differences in the breakthrough curves. The original method of Urmann et al. (2005) was optimised for application in landfill cover soils and modified to reduce the analytical effort required. Optimised parameters included the flow rate during the injection phase and the duration of the experiment. 50 GPPTs have been conducted at different landfills in Germany during different seasons. Generally, methane oxidation rates ranged between 0 and 150 g m⁻³ (soil air) h⁻¹. At one location, rates up to 440 g m⁻³ (soil air) h⁻¹ were measured under particularly favourable conditions. The method is simple in operation and does not require expensive equipment besides standard laboratory gas chromatographs. Copyright © 2010 Elsevier Ltd. All rights reserved.
Derimay, François; Souteyrand, Geraud; Motreff, Pascal; Rioufol, Gilles; Finet, Gerard
2017-10-13
The rePOT (proximal optimisation technique) sequence proved significantly more effective than final kissing balloon (FKB) with two drug-eluting stents (DES) in a bench test. We sought to validate efficacy experimentally in a large range of latest-generation DES. On left main fractal coronary bifurcation bench models, five samples of each of the six main latest-generation DES (Coroflex ISAR, Orsiro, Promus PREMIER, Resolute Integrity, Ultimaster, XIENCE Xpedition) were implanted on rePOT (initial POT, side branch inflation, final POT). Proximal elliptical ratio, side branch obstruction (SBO), stent overstretch and strut malapposition were quantified on 2D and 3D OCT. Results were compared to FKB with Promus PREMIER. Whatever the design, rePOT maintained vessel circularity compared to FKB: elliptical ratio, 1.02±0.01 to 1.04±0.01 vs. 1.26±0.02 (p<0.05). Global strut malapposition was much lower: 2.6±1.4% to 0.1±0.2% vs. 40.4±8.4% for FKB (p<0.05). However, only Promus PREMIER and XIENCE Xpedition achieved significantly less SBO: respectively, 5.6±3.5% and 10.0±5.3% vs. 23.5±5.7% for FKB (p<0.05). Platform design differences had little influence on the excellent results of rePOT versus FKB. RePOT optimised strut apposition without proximal elliptical deformation in the six main latest-generation DES. Thickness and design characteristics seemed relevant for optimising SBO.
Global reaction mechanism for the auto-ignition of full boiling range gasoline and kerosene fuels
NASA Astrophysics Data System (ADS)
Vandersickel, A.; Wright, Y. M.; Boulouchos, K.
2013-12-01
Compact reaction schemes capable of predicting auto-ignition are a prerequisite for the development of strategies to control and optimise homogeneous charge compression ignition (HCCI) engines. In particular for full boiling range fuels exhibiting two-stage ignition, a tremendous demand exists in the engine development community. The present paper therefore meticulously assesses a previous 7-step reaction scheme developed to predict auto-ignition for four hydrocarbon blends and proposes an important extension of the model constant optimisation procedure, allowing the model to capture not only ignition delays, but also the evolutions of representative intermediates and heat release rates for a variety of full boiling range fuels. Additionally, an extensive validation of the latter evolutions by means of various detailed n-heptane reaction mechanisms from the literature has been presented, both for perfectly homogeneous and for non-premixed/stratified HCCI conditions. Finally, the model's potential to simulate the auto-ignition of various full boiling range fuels is demonstrated by means of experimental shock tube data for six strongly differing fuels, containing e.g. up to 46.7% cyclo-alkanes, 20% naphthalenes or complex branched aromatics such as methyl- or ethyl-naphthalene. The good predictive capability observed for each of the validation cases, as well as the successful parameterisation for each of the six fuels, indicates that the model could, in principle, be applied to any hydrocarbon fuel, provided suitable adjustments to the model parameters are made. Combined with the optimisation strategy presented, the model therefore constitutes a major step towards the inclusion of real fuel kinetics into full-scale HCCI engine simulations.
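For orientation, even a single-step global mechanism reproduces the basic ignition-delay computation: integrate an Arrhenius fuel-consumption rate and its heat release in a constant-volume adiabatic reactor, then read off the time of a fixed temperature rise. The parameters below are generic placeholders, not the 7-step scheme's fitted constants, and two-stage ignition is not captured by this single step.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Single-step global mechanism (fuel -> products) in a constant-volume
# adiabatic reactor. All parameter values are illustrative placeholders.
A, Ea, R = 2e8, 1.2e5, 8.314      # 1/s, J/mol, J/(mol K)
q, cv = 4.5e7, 1000.0             # J per kg fuel, J/(kg K)

def rhs(t, s):
    Y, T = s                      # fuel mass fraction, temperature (K)
    rate = A * Y * np.exp(-Ea / (R * T))   # Arrhenius consumption rate
    return [-rate, q * rate / cv]          # fuel depletion, heating

sol = solve_ivp(rhs, [0.0, 0.02], [0.05, 900.0],
                method="LSODA", max_step=1e-5)
T = sol.y[1]
tau = sol.t[np.argmax(T > T[0] + 400.0)]   # 400 K temperature-rise criterion
print(f"ignition delay ~ {tau * 1e3:.2f} ms")
```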
Reyes, Antonio Jose; Ramcharan, Kanterpersad
2016-08-02
We report a patient-driven home care system that successfully assisted 24/7 with the management of a 68-year-old woman after a stroke, a global illness. The patient's caregiver and physician used computer devices, smartphones and internet access for information exchange. Patient, caregiver, family and physician satisfaction, together with outcome and cost, were indicators of quality of care. The novelty of this basic model of teleneurology lies in implementing a patient/caregiver-driven system designed to improve access to cost-efficient neurological care, with potential for use at the primary, secondary and tertiary levels of healthcare in rural and underserved regions of the world. We suggest involvement of healthcare stakeholders in teleneurology to address the global problem of limited access to neurological care. This model can facilitate the management of neurological diseases, improve outcomes, reduce the frequency of consultations and hospitalisations, facilitate the teaching of healthcare workers and promote research. 2016 BMJ Publishing Group Ltd.
Oral health promotion and education messages in Live.Learn.Laugh. projects.
Horn, Virginie; Phantumvanit, Prathip
2014-10-01
The FDI-Unilever Live.Learn.Laugh. phase 2 partnership involved dissemination of the key oral health message of encouraging 'twice-daily toothbrushing with fluoride toothpaste' and education of people worldwide by FDI, National Dental Associations, the Unilever Oral Care global team and local brands. The dissemination and education process used different methodologies, each targeting specific groups, namely: mother and child (Project option A); schoolchildren (Project option B); dentists and patients (Project option C); and specific communities (Project option D). Altogether, the partnership implemented 29 projects in 27 countries. These consisted of educational interventions, evaluations including (in some cases) clinical assessment, together with communication activities at both global and local levels, to increase the reach of the message to a broader population worldwide. The phase 2 experience reveals the strength of such a public-private partnership approach in tackling global oral health issues by creating synergies between partners and optimising the promotion and education process. © 2014 FDI World Dental Federation.
Quail, Michael A; Gu, Yong; Swerdlow, Harold; Mayho, Matthew
2012-12-01
Size selection can be a critical step in the preparation of next-generation sequencing libraries. Traditional methods employing gel electrophoresis lack reproducibility, are labour intensive, do not scale well and employ hazardous intercalating dyes. In a high-throughput setting, solid-phase reversible immobilisation beads are commonly used for size selection, but result in quite a broad fragment size range. We have evaluated and optimised the use of two semi-automated preparative DNA electrophoresis systems, the Caliper Labchip XT and the Sage Science Pippin Prep, for size selection of Illumina sequencing libraries. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Syed, Zeeshan; Moscucci, Mauro; Share, David; Gurm, Hitinder S
2015-01-01
Background Clinical tools to stratify patients for emergency coronary artery bypass graft (ECABG) after percutaneous coronary intervention (PCI) create the opportunity to selectively assign patients undergoing procedures to hospitals with and without onsite surgical facilities for dealing with potential complications while balancing load across providers. The goal of our study was to investigate the feasibility of a computational model directly optimised for cohort-level performance to predict ECABG in PCI patients for this application. Methods Blue Cross Blue Shield of Michigan Cardiovascular Consortium registry data with 69 pre-procedural and angiographic risk variables from 68 022 PCI procedures in 2004–2007 were used to develop a support vector machine (SVM) model for ECABG. The SVM model was optimised for the area under the receiver operating characteristic curve (AUROC) at the level of the training cohort and validated on 42 310 PCI procedures performed in 2008–2009. Results There were 87 cases of ECABG (0.21%) in the validation cohort. The SVM model achieved an AUROC of 0.81 (95% CI 0.76 to 0.86). Patients in the predicted top decile were at a significantly increased risk relative to the remaining patients (OR 9.74, 95% CI 6.39 to 14.85, p<0.001) for ECABG. The SVM model optimised for the AUROC on the training cohort significantly improved discrimination, net reclassification and calibration over logistic regression and traditional SVM classification optimised for univariate performance. Conclusions Computational risk stratification directly optimising cohort-level performance holds the potential of high levels of discrimination for ECABG following PCI. This approach has value in selectively referring PCI patients to hospitals with and without onsite surgery. PMID:26688738
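A hedged sketch of the cohort-level idea, selecting SVM hyperparameters by cross-validated AUROC rather than per-sample accuracy, is shown below using scikit-learn. The synthetic cohort, imbalance level and parameter grid are assumptions; the registry data and the paper's exact training procedure are not reproduced.

```python
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Imbalanced synthetic cohort standing in for the PCI registry (the real
# event rate of 0.21% is far lower than this toy example's ~3%).
X, y = make_classification(n_samples=4000, n_features=20,
                           weights=[0.97], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Select hyperparameters by cross-validated AUROC, a cohort-level
# discrimination criterion, instead of per-sample classification accuracy.
search = GridSearchCV(
    SVC(class_weight="balanced"),
    param_grid={"C": [0.1, 1, 10], "gamma": ["scale", 0.01]},
    scoring="roc_auc", cv=3,
)
search.fit(X_tr, y_tr)
scores = search.decision_function(X_te)   # continuous risk scores
print("test AUROC:", roc_auc_score(y_te, scores))
```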
Sampling design optimisation for rainfall prediction using a non-stationary geostatistical model
NASA Astrophysics Data System (ADS)
Wadoux, Alexandre M. J.-C.; Brus, Dick J.; Rico-Ramirez, Miguel A.; Heuvelink, Gerard B. M.
2017-09-01
The accuracy of spatial predictions of rainfall by merging rain-gauge and radar data is partly determined by the sampling design of the rain-gauge network. Optimising the locations of the rain-gauges may increase the accuracy of the predictions. Existing spatial sampling design optimisation methods are based on minimisation of the spatially averaged prediction error variance under the assumption of intrinsic stationarity. Over the past years, substantial progress has been made to deal with non-stationary spatial processes in kriging. Various well-documented geostatistical models relax the assumption of stationarity in the mean, while recent studies show the importance of considering non-stationarity in the variance for environmental processes occurring in complex landscapes. We optimised the sampling locations of rain-gauges using an extension of the Kriging with External Drift (KED) model for prediction of rainfall fields. The model incorporates both non-stationarity in the mean and in the variance, which are modelled as functions of external covariates such as radar imagery, distance to radar station and radar beam blockage. Spatial predictions are made repeatedly over time, each time recalibrating the model. The space-time averaged KED variance was minimised by Spatial Simulated Annealing (SSA). The methodology was tested using a case study predicting daily rainfall in the north of England for a one-year period. Results show that (i) the proposed non-stationary variance model outperforms the stationary variance model, and (ii) a small but significant decrease of the rainfall prediction error variance is obtained with the optimised rain-gauge network. In particular, it pays off to place rain-gauges at locations where the radar imagery is inaccurate, while keeping the distribution over the study area sufficiently uniform.
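A minimal Spatial Simulated Annealing loop is sketched below. As a stand-in for the space-time averaged KED variance, it minimises a simple distance-based coverage criterion; that substitute criterion is an assumption made only to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(9)

def criterion(gauges):
    # Stand-in design criterion: mean distance from a grid of prediction
    # points to the nearest rain-gauge (a rough proxy for the averaged
    # kriging variance minimised in the paper).
    gx, gy = np.meshgrid(np.linspace(0, 1, 25), np.linspace(0, 1, 25))
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    d = np.linalg.norm(grid[:, None] - gauges[None], axis=-1)
    return d.min(axis=1).mean()

gauges = rng.uniform(0.0, 1.0, (15, 2))      # initial gauge network
current = criterion(gauges)
temp = 0.01
for _ in range(3000):
    cand = gauges.copy()
    i = rng.integers(len(cand))
    cand[i] = np.clip(cand[i] + rng.normal(0.0, 0.1, 2), 0, 1)  # move a gauge
    new = criterion(cand)
    # Metropolis acceptance: always accept improvements, sometimes worse.
    if new < current or rng.random() < np.exp((current - new) / temp):
        gauges, current = cand, new
    temp *= 0.999                            # cooling schedule
print("optimised criterion:", current)
```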
Influence of some design parameters on the thermal performance of domestic refrigerator appliances
NASA Astrophysics Data System (ADS)
Rebora, Alessandro; Senarega, Maurizio; Tagliafico, Luca A.
2006-07-01
This paper presents a thermal study on chest-freezers, the small refrigerators used in domestic and supermarket applications. A thermal and energy model of a particular kind of these refrigerators, the “hot-wall” (or “skin condenser”) refrigerator, is developed and used to perform sensitivity and design optimisation analysis for given working temperatures and useful volume of the refrigerated cell. A finite-element heat transfer model of the refrigerator box is coupled to the complete thermodynamic model of the refrigerating plant, including real working conditions (compressor efficiency, friction pressure losses and so on). A sensitivity study of the main design parameters affecting the global refrigerator performance has been developed (for fixed working temperatures) with reference to the thickness of the metallic plates, to the evaporator and condenser tube diameters and to the evaporator tube pitch (with fixed evaporator-to-condenser tube pitch ratio). The results obtained show that the proposed sensitivity analysis can yield quite reliable results (in comparison with much more complex, albeit more accurate mathematical optimisation algorithms) using small computational resources. The great importance of 2-D heat conduction in the metallic plates is shown, evidencing how the plate thickness and the evaporator and condenser tube diameters affect the global performance of the system according to the well-known “fin efficiency” effect. The influence of the evaporator and condenser tube diameters on the friction pressure losses is also outlined. Some practical suggestions are made in conclusion, regarding the criteria which should be adopted in the thermal design of a hot-wall refrigerator.
Vozeh, S; Steimer, J L
1985-01-01
The concept of feedback control methods for drug dosage optimisation is described from the viewpoint of control theory. The control system consists of 5 parts: (a) patient (the controlled process); (b) response (the measured feedback); (c) model (the mathematical description of the process); (d) adaptor (to update the parameters); and (e) controller (to determine the optimum dosing strategy). In addition to the conventional distinction between open-loop and closed-loop control systems, a classification is proposed for dosage optimisation techniques which distinguishes between tight-loop and loose-loop methods depending on whether the physician's interaction is absent or included as part of the control step. Unlike engineering problems where the process can usually be controlled by fully automated devices, therapeutic situations often require that the physician be included in the decision-making process to determine the 'optimal' dosing strategy. Tight-loop and loose-loop methods can be further divided into adaptive and non-adaptive, depending on the presence of the adaptor. The main application areas of tight-loop feedback control methods are general anaesthesia, control of blood pressure, and insulin delivery devices. Loose-loop feedback methods have been used for oral anticoagulation and in therapeutic drug monitoring. The methodology, advantages and limitations of the different approaches are reviewed. A general feature common to all application areas could be observed: to perform well under routine clinical conditions, which are characterised by large interpatient variability and sometimes also intrapatient changes, control systems should be adaptive. Apart from application in routine drug treatment, feedback control methods represent an important research tool. They can be applied to the investigation of pathophysiological and pharmacodynamic processes. A most promising application is the evaluation of the relationship between an intermediate response (e.g. drug level), which is often used as feedback for dosage adjustment, and the final therapeutic goal.
O'Brien, Rosaleen; Fitzpatrick, Bridie; Higgins, Maria; Guthrie, Bruce; Watt, Graham; Wyke, Sally
2016-01-01
Objectives To develop and optimise a primary care-based complex intervention (CARE Plus) to enhance the quality of life of patients with multimorbidity in deprived areas. Methods Six co-design discussion groups involving 32 participants were held separately with multimorbid patients from deprived areas, voluntary organisations, general practitioners and practice nurses working in deprived areas. This was followed by piloting in two practices and further optimisation based on interviews with 11 general practitioners, 2 practice nurses and 6 participating multimorbid patients. Results Participants endorsed the need for longer consultations, relational continuity and a holistic approach. All felt that training and support of the health care staff was important. Most participants welcomed the idea of additional self-management support, though some practitioners were dubious about whether patients would use it. The pilot study led to changes including a revised care plan, the inclusion of mindfulness-based stress reduction techniques in the support of practitioners and patients, and the streamlining of the written self-management support material for patients. Discussion We have co-designed and optimised an augmented primary care intervention involving a whole-system approach to enhance quality of life in multimorbid patients living in deprived areas. CARE Plus will next be tested in a phase 2 cluster randomised controlled trial. PMID:27068113
Badham, George E; Dos Santos, Scott J; Lloyd, Lucinda Ba; Holdstock, Judy M; Whiteley, Mark S
2018-06-01
Background In previous in vitro and ex vivo studies, we have shown increased thermal spread can be achieved with radiofrequency-induced thermotherapy when using a low power and slower, discontinuous pullback. We aimed to determine the clinical success rate of radiofrequency-induced thermotherapy using this optimised protocol for the treatment of superficial venous reflux in truncal veins. Methods Sixty-three patients were treated with radiofrequency-induced thermotherapy using the optimised protocol and were followed up after one year (mean 16.3 months). Thirty-five patients returned for audit, giving a response rate of 56%. Duplex ultrasonography was employed to check for truncal reflux and compared to initial scans. Results In the 35 patients studied, there were 48 legs, with 64 truncal veins treated by radiofrequency-induced thermotherapy (34 great saphenous, 15 small saphenous and 15 anterior accessory saphenous veins). One year post-treatment, complete closure of all previously refluxing truncal veins was demonstrated on ultrasound, giving a success rate of 100%. Conclusions Using a previously reported optimised, low power/slow pullback radiofrequency-induced thermotherapy protocol, we have shown it is possible to achieve a 100% ablation at one year. This compares favourably with results reported at one year post-procedure using the high power/fast pullback protocols that are currently recommended for this device.
Formulation and optimisation of raft-forming chewable tablets containing H2 antagonist
Prajapati, Shailesh T; Mehta, Anant P; Modhia, Ishan P; Patel, Chhagan N
2012-01-01
Purpose: The purpose of this research work was to formulate raft-forming chewable tablets of an H2 antagonist (famotidine) using a raft-forming agent along with an antacid- and gas-generating agent. Materials and Methods: Tablets were prepared by wet granulation and evaluated for raft strength, acid neutralisation capacity, weight variation, % drug content, thickness, hardness, friability and in vitro drug release. Various raft-forming agents were used in preliminary screening. A 2³ full-factorial design was used in the present study for optimisation. The amount of sodium alginate, amount of calcium carbonate and amount of sodium bicarbonate were selected as independent variables. Raft strength, acid neutralisation capacity and drug release at 30 min were selected as responses. Results: Tablets containing sodium alginate had the maximum raft strength compared with the other raft-forming agents. Acid neutralisation capacity and in vitro drug release of all factorial batches were found to be satisfactory. The F5 batch was optimised based on maximum raft strength and good acid neutralisation capacity. A drug–excipient compatibility study showed no interaction between the drug and excipients. A stability study of the optimised formulation showed that the tablets were stable at accelerated environmental conditions. Conclusion: It was concluded that raft-forming chewable tablets prepared using an optimum amount of sodium alginate, calcium carbonate and sodium bicarbonate could be an efficient dosage form in the treatment of gastro-oesophageal reflux disease. PMID:23580933
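For reference, the 2³ full-factorial design mentioned above is simply the eight combinations of the three factor amounts at two coded levels each. A minimal enumeration of the runs (factor names from the abstract; the coded levels are illustrative):

```python
from itertools import product

# Three factors at two coded levels each (-1 = low amount, +1 = high amount).
factors = {
    "sodium_alginate": (-1, +1),
    "calcium_carbonate": (-1, +1),
    "sodium_bicarbonate": (-1, +1),
}

# A 2^3 full-factorial design enumerates all 2*2*2 = 8 level combinations.
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
for i, run in enumerate(runs, 1):
    print(f"run {i}: {run}")
```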
Multistrategy Self-Organizing Map Learning for Classification Problems
Hasan, S.; Shamsuddin, S. M.
2011-01-01
Multistrategy Learning of Self-Organizing Map (SOM) and Particle Swarm Optimization (PSO) is commonly implemented in the clustering domain due to its capabilities in handling complex data characteristics. However, some of these multistrategy learning architectures have weaknesses, such as slow convergence and a tendency to become trapped in local minima. This paper proposes multistrategy learning of the SOM lattice structure with Particle Swarm Optimisation, called ESOMPSO, for solving various classification problems. The enhancement of the SOM lattice structure is implemented by introducing a new hexagon formulation for better mapping quality in data classification and labeling. The weights of the enhanced SOM are optimised using PSO to obtain better output quality. The proposed method has been tested on various standard datasets with substantial comparisons with the existing SOM network and various distance measurements. The results show that our proposed method yields promising results, with better average accuracy and quantisation errors than the other methods, as confirmed by significance tests. PMID:21876686
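The weight-tuning step follows the standard global-best PSO update: each particle is pulled towards its own best position and towards the swarm's best. A generic sketch, not the authors' implementation (the inertia and acceleration constants below are common defaults, assumed here):

```python
import numpy as np

def pso_minimise(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Global-best PSO: velocities blend inertia, a pull towards each
    particle's personal best, and a pull towards the swarm's global best."""
    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))   # positions, e.g. SOM weights
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

# Example: minimise a simple quadratic in 5 dimensions.
best, value = pso_minimise(lambda z: float(np.sum(z ** 2)), dim=5)
```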
Nevado, Juan José Berzas; Robledo, Virginia Rodríguez; Callado, Carolina Sánchez-Carnerero
2012-07-15
The enrichment of virgin olive oil (VOO) with natural antioxidants contained in various herbs (rosemary, thyme and oregano) was studied. Three different enrichment procedures were used for the solid-liquid extraction of antioxidants from the herbs into VOO. One involved simply bringing the herbs into contact with the VOO for 190 days; another, keeping the herb-VOO mixture under stirring at room temperature (25°C) for 11 days; and the third, stirring at temperatures above room level (35-40°C). The efficiency of each procedure was assessed by using a reproducible, efficient, reliable analytical capillary zone electrophoresis (CZE) method to separate and determine selected phenolic compounds (rosmarinic and caffeic acid) in the oil. Prior to electrophoretic separation, the studied antioxidants were isolated from the VOO matrix by using an optimised preconcentration procedure based on solid phase extraction (SPE). The CZE method was optimised and validated.
H2/H∞ control for grid-feeding converter considering system uncertainty
NASA Astrophysics Data System (ADS)
Li, Zhongwen; Zang, Chuanzhi; Zeng, Peng; Yu, Haibin; Li, Shuhui; Fu, Xingang
2017-05-01
Three-phase grid-feeding converters (GFCs) are key components for integrating distributed generation and renewable power sources into the power utility. Conventionally, proportional integral (PI) and proportional resonant-based control strategies are applied to control the output power or current of a GFC. However, those control strategies have poor transient performance and are not robust against uncertainties and volatilities in the system. This paper proposes an H2/H∞-based control strategy, which can mitigate the above restrictions. The uncertainty and disturbance are included in the formulation of the GFC state-space model, making it reflect practical system conditions more accurately. The paper uses a convex optimisation method to design the H2/H∞-based optimal controller. Instead of using a guess-and-check method, the paper uses particle swarm optimisation to search for an H2/H∞ optimal controller. Several case studies, implemented by both simulation and experiment, verify the superiority of the proposed control strategy over the traditional PI control methods, especially under dynamic and variable system conditions.
Very high frame rate volumetric integration of depth images on mobile devices.
Kähler, Olaf; Adrian Prisacariu, Victor; Yuheng Ren, Carl; Sun, Xin; Torr, Philip; Murray, David
2015-11-01
Volumetric methods provide efficient, flexible and simple ways of integrating multiple depth images into a full 3D model. They provide dense and photorealistic 3D reconstructions, and parallelised implementations on GPUs achieve real-time performance on modern graphics hardware. Running such methods on mobile devices, providing users with freedom of movement and instantaneous reconstruction feedback, however, remains challenging. In this paper we present a range of modifications to existing volumetric integration methods based on voxel block hashing, considerably improving their performance and making them applicable to tablet computer applications. We present (i) optimisations for the basic data structure, and its allocation and integration; (ii) a highly optimised raycasting pipeline; and (iii) extensions to the camera tracker to incorporate IMU data. In total, our system thus achieves frame rates of up to 47 Hz on a Nvidia Shield Tablet and 910 Hz on a Nvidia GTX Titan X GPU, or even beyond 1.1 kHz without visualisation.
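Voxel block hashing keys fixed-size blocks of voxels (typically 8×8×8) by a spatial hash of their integer block coordinates. A sketch of the hash conventionally used in this family of methods; the prime multipliers are the usual choice in the voxel-hashing literature, and the sizes below are illustrative rather than taken from the paper:

```python
import math

def world_to_block(x, y, z, voxel_size=0.005, block_side=8):
    """Map a metric point to the integer coordinates of its voxel block."""
    s = voxel_size * block_side
    return (math.floor(x / s), math.floor(y / s), math.floor(z / s))

def block_hash(bx, by, bz, n_buckets=2**20):
    """Spatial hash of block coordinates: XOR of prime-multiplied components,
    folded into the hash-table size."""
    return ((bx * 73856093) ^ (by * 19349669) ^ (bz * 83492791)) % n_buckets

print(block_hash(*world_to_block(0.31, -0.07, 1.24)))
```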
NASA Astrophysics Data System (ADS)
Parasyris, Antonios E.; Spanoudaki, Katerina; Kampanis, Nikolaos A.
2016-04-01
Groundwater level monitoring networks provide essential information for water resources management, especially in areas with significant groundwater exploitation for agricultural and domestic use. Given the high maintenance costs of these networks, development of tools that can be used by regulators for efficient network design is essential. In this work, a monitoring network optimisation tool is presented. The network optimisation tool couples geostatistical modelling based on the Spartan family variogram with a genetic algorithm method and is applied to the Mires basin in Crete, Greece, an area of high socioeconomic and agricultural interest, which suffers from groundwater overexploitation leading to a dramatic decrease of groundwater levels. The purpose of the optimisation tool is to determine which wells to exclude from the monitoring network because they add little or no beneficial information to groundwater level mapping of the area. Unlike previous relevant investigations, the network optimisation tool presented here uses Ordinary Kriging with the recently established non-differentiable Spartan variogram for groundwater level mapping, which, based on a previous geostatistical study in the area, leads to optimal groundwater level mapping. Seventy boreholes operate in the area for groundwater abstraction and water level monitoring. The Spartan variogram gives overall the most accurate groundwater level estimates, followed closely by the power-law model. The geostatistical model is coupled to an integer genetic algorithm method programmed in MATLAB 2015a. The algorithm is used to find the set of wells whose removal leads to the minimum error between the original water level mapping using all the available wells in the network and the groundwater level mapping using the reduced well network (error is defined as the 2-norm of the difference between the original mapping matrix with 70 wells and the mapping matrix of the reduced well network). The solution to the optimisation problem (the best wells to retain in the monitoring network) depends on the total number of wells removed; this number is a management decision. The water level monitoring network of the Mires basin has been optimised six times, by removing 5, 8, 12, 15, 20 and 25 wells from the original network. In order to achieve the optimum solution in the minimum possible computational time, a stall-generations criterion was set for each optimisation scenario. An improvement made to the classic genetic algorithm was to vary the mutation and crossover fractions with the change of the mean fitness value. This introduces randomness into reproduction when the solution converges, to avoid local minima, or more educated reproduction (a higher crossover ratio) when the mean fitness value is still changing considerably. The integer genetic algorithm in MATLAB 2015a restricts the direct use of custom selection and crossover-mutation functions. Therefore, custom population and crossover-mutation-selection functions were created to set the initial population type to custom and to allow the mutation and crossover probabilities to change with the convergence of the genetic algorithm, thus achieving higher accuracy. The application of the network optimisation tool to the Mires basin indicates that 25 wells can be removed with a relatively small deterioration of the groundwater level map.
The results indicate the robustness of the network optimisation tool: wells were removed from high well-density areas while preserving the spatial pattern of the original groundwater level map.
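The fitness the genetic algorithm minimises can be written directly from the definition above. A minimal sketch, assuming a hypothetical `krige_map` function that returns the gridded groundwater-level map for a given well set:

```python
import numpy as np

def network_error(all_wells, keep_mask, krige_map):
    """Error of a reduced monitoring network, as defined in the abstract:
    the 2-norm of the difference between the map interpolated from all wells
    and the map interpolated from the retained subset. `krige_map`
    (hypothetical) returns a 2-D array of groundwater-level estimates."""
    full = krige_map(all_wells)
    reduced = krige_map([w for w, keep in zip(all_wells, keep_mask) if keep])
    return np.linalg.norm(full - reduced)
```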
Optimisation study of a vehicle bumper subsystem with fuzzy parameters
NASA Astrophysics Data System (ADS)
Farkas, L.; Moens, D.; Donders, S.; Vandepitte, D.
2012-10-01
This paper deals with the design and optimisation for crashworthiness of a vehicle bumper subsystem, which is a key scenario for vehicle component design. The automotive manufacturers and suppliers have to find optimal design solutions for such subsystems that comply with the conflicting requirements of the regulatory bodies regarding functional performance (safety and repairability) and regarding the environmental impact (mass). For the bumper design challenge, an integrated methodology for multi-attribute design engineering of mechanical structures is set up. The integrated process captures the various tasks that are usually performed manually, this way facilitating the automated design iterations for optimisation. Subsequently, an optimisation process is applied that takes the effect of parametric uncertainties into account, such that the system level of failure possibility is acceptable. This optimisation process is referred to as possibility-based design optimisation and integrates the fuzzy FE analysis applied for the uncertainty treatment in crash simulations. This process is the counterpart of the reliability-based design optimisation used in a probabilistic context with statistically defined parameters (variabilities).
Sentinel-3 coverage-driven mission design: Coupling of orbit selection and instrument design
NASA Astrophysics Data System (ADS)
Cornara, S.; Pirondini, F.; Palmade, J. L.
2017-11-01
The first satellite of the Sentinel-3 series was launched in February 2016. The Sentinel-3 payload suite encompasses the Ocean and Land Colour Instrument (OLCI) with a swath of 1270 km, the Sea and Land Surface Temperature Radiometer (SLSTR) yielding a dual-view scan with swaths of 1420 km (nadir) and 750 km (oblique view), the Synthetic Aperture Radar Altimeter (SRAL) working in Ku-band and C-band, and the dual-frequency Microwave Radiometer (MWR). In the early stages of mission and system design, the main driver for the Sentinel-3 reference orbit selection was the requirement to achieve a revisit time of two days or less globally over ocean areas with two satellites (i.e. 4-day global coverage with one satellite). The orbit selection was seamlessly coupled with the OLCI instrument design in terms of field of view (FoV) definition, driven by the observation zenith angle (OZA) and sunglint constraints applied to ocean observations. The criticality of the global coverage requirement for ocean monitoring derives from the sunglint phenomenon, i.e. the impact on visible channels of the solar ray reflection on the water surface. This constraint was finally overcome thanks to the concurrent optimisation of the orbit parameters, notably the Local Time at Descending Node (LTDN), and the OLCI instrument FoV definition. The orbit selection process started with the identification of orbits with a short repeat cycle (2-4 days), firstly to minimise the time required to achieve global coverage with existing constraints, and then to minimise the swath required to obtain global coverage and the maximum required OZA. This step yielded the selection of a 4-day repeat cycle orbit, thus allowing 2-day coverage with two adequately spaced satellites. Then suitable candidate orbits with longer repeat cycles were identified in the proximity of the selected altitudes and the reference orbit was ultimately chosen. The rationale was to keep the swath for global coverage as close as possible to the previous optimum value, but to tailor the repeat cycle length (i.e. the ground-track grid) to optimise the topography mission performance. The final choice converged on the sun-synchronous orbit 14 + 7/27, reference altitude ∼800 km, LTDN = 10h00. Extensive coverage analyses were carried out to characterise the mission performance and the fulfilment of the requirements, encompassing revisit time, number of acquisitions, observation viewing geometry and swath properties. This paper presents a comprehensive overview of the Sentinel-3 orbit selection, starting from coverage requirements and highlighting the close interaction with the instrument design activity.
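As a quick check of the 14 + 7/27 notation (assuming the usual reading: revolutions per day, i.e. 385 orbits per 27-day repeat cycle), the implied nodal period is consistent with the quoted ~800 km altitude:

```python
revs_per_day = 14 + 7 / 27                 # 385 revolutions in a 27-day repeat cycle
assert abs(revs_per_day * 27 - 385) < 1e-9
nodal_period_min = 24 * 60 / revs_per_day  # mean nodal period in minutes
print(f"{nodal_period_min:.2f} min")       # ~100.99 min, typical of ~800 km sun-synchronous orbits
```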
Tan, Weng Chun; Mat Isa, Nor Ashidi
2016-01-01
In human sperm motility analysis, sperm segmentation plays an important role in determining the location of multiple sperms. To ensure an improved segmentation result, the Laplacian of Gaussian filter is implemented as a kernel in a pre-processing step before applying the image segmentation process to automatically segment and detect human spermatozoa. This study proposes an intersecting cortical model (ICM), which was derived from several visual cortex models, to segment the sperm head region. However, the proposed method suffered from the difficulty of parameter selection; thus, the ICM network is optimised using particle swarm optimisation, where feature mutual information is introduced as the new fitness function. The final results showed that the proposed method is more accurate and robust than four state-of-the-art segmentation methods. The proposed method achieved rates of 98.14%, 98.82%, 86.46% and 99.81% in accuracy, sensitivity, specificity and precision, respectively, after testing with 1200 sperms. The proposed algorithm is expected to be implemented in analysing sperm motility because of the robustness and capability of this algorithm.
Radiation exposure in X-ray-based imaging techniques used in osteoporosis
Adams, Judith E.; Guglielmi, Giuseppe; Link, Thomas M.
2010-01-01
Recent advances in medical X-ray imaging have enabled the development of new techniques capable of assessing not only bone quantity but also structure. This article provides (a) a brief review of the current X-ray methods used for quantitative assessment of the skeleton, (b) data on the levels of radiation exposure associated with these methods and (c) information about radiation safety issues. Radiation doses associated with dual-energy X-ray absorptiometry are very low. However, as with any X-ray imaging technique, each particular examination must always be clinically justified. When an examination is justified, the emphasis must be on dose optimisation of imaging protocols. Dose optimisation is more important for paediatric examinations because children are more vulnerable to radiation than adults. Methods based on multi-detector CT (MDCT) are associated with higher radiation doses. New 3D volumetric hip and spine quantitative computed tomography (QCT) techniques and high-resolution MDCT for evaluation of bone structure deliver doses to patients from 1 to 3 mSv. Low-dose protocols are needed to reduce radiation exposure from these methods and minimise associated health risks. PMID:20559834
Optimisation of quantitative lung SPECT applied to mild COPD: a software phantom simulation study.
Norberg, Pernilla; Olsson, Anna; Alm Carlsson, Gudrun; Sandborg, Michael; Gustafsson, Agnetha
2015-01-01
The amount of inhomogeneity in a (99m)Tc Technegas single-photon emission computed tomography (SPECT) lung image, caused by reduced ventilation in lung regions affected by chronic obstructive pulmonary disease (COPD), is correlated with disease advancement. A quantitative analysis method measuring these inhomogeneities, the CVT method, was proposed in earlier work. To detect mild COPD, which is a difficult task, optimised parameter values are needed. In this work, the CVT method was optimised with respect to the parameter values of acquisition, reconstruction and analysis. The ordered subset expectation maximisation (OSEM) algorithm was used for reconstructing the lung SPECT images. As a first step towards clinical application of the CVT method in detecting mild COPD, this study was based on simulated SPECT images of an advanced anthropomorphic lung software phantom including respiratory and cardiac motion, where the mild COPD lung had an overall ventilation reduction of 5%. The best separation between healthy and mild COPD lung images, as determined using the CVT measure of ventilation inhomogeneity and 125 MBq of (99m)Tc, was obtained using a low-energy high-resolution (LEHR) collimator and a power-6 Butterworth post-filter with a cutoff frequency of 0.6 to 0.7 cm⁻¹. Sixty-four reconstruction updates and a small kernel size should be used when the whole lung is analysed; for the reduced lung, a greater number of updates and a larger kernel size are needed. A LEHR collimator and 125 MBq of (99m)Tc, together with an optimal combination of cutoff frequency, number of updates and kernel size, gave the best result. Suboptimal selection of cutoff frequency, number of updates or kernel size will reduce the imaging system's ability to detect mild COPD in the lung phantom.
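The post-filter referred to above is a Butterworth low-pass applied on spatial frequency, fully determined by its cutoff and power. A sketch using one common SPECT convention for the gain (conventions differ between implementations, so the exact form is an assumption):

```python
import numpy as np

def butterworth_gain(f, cutoff=0.6, power=6):
    """Low-pass Butterworth response on spatial frequency f (cm^-1), in one
    common SPECT convention: 1 / (1 + (f/cutoff)^(2*power))."""
    f = np.asarray(f, dtype=float)
    return 1.0 / (1.0 + (f / cutoff) ** (2 * power))

print(butterworth_gain([0.3, 0.6, 0.9]))  # ~[0.9998, 0.5, 0.0076]: sharp roll-off at the cutoff
```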
NASA Astrophysics Data System (ADS)
Del Vescovo, D.; D'Ambrogio, W.
1995-01-01
A frequency domain method is presented to design a closed-loop control for vibration reduction in flexible mechanisms. The procedure is developed on a single-link flexible arm, driven by a servomotor with one rotary degree of freedom, although the same technique may be applied to similar systems such as supports for aerospace antennae or solar panels. The method uses the structural frequency response functions (FRFs), thus avoiding system identification, which would introduce modelling uncertainties. Two closed loops are implemented: the inner loop uses acceleration feedback with the aim of making the FRF similar to that of an equivalent rigid link; the outer loop feeds back displacements to achieve a fast positioning response and null steady-state error. In both cases, the controller type is established a priori, while the actual characteristics are defined by an optimisation procedure in which the relevant FRF is constrained into prescribed bounds and stability is taken into account.
Simplex-centroid mixture formulation for optimised composting of kitchen waste.
Abdullah, N; Chin, N L
2010-11-01
Composting is a good recycling method to fully utilise the organic wastes present in kitchen waste, owing to the high content of nutritious matter within the waste. In the present study, the optimised mixture proportions of kitchen waste containing vegetable scraps (V), fish processing waste (F) and newspaper (N) or onion peels (O) were determined by applying the simplex-centroid mixture design method to achieve the desired initial moisture content and carbon-to-nitrogen (CN) ratio for an effective composting process. The best mixture was 48.5% V, 17.7% F and 33.7% N for blends with newspaper, while for blends with onion peels the mixture proportion was 44.0% V, 19.7% F and 36.2% O. The predicted responses from these mixture proportions fall within the acceptable limits of 50% to 65% moisture content and a CN ratio of 20-40, and were also validated experimentally.
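A simplex-centroid design for q mixture components consists of the 2^q − 1 equal-proportion blends of every non-empty component subset; for the three components above that gives seven candidate blends. A minimal generator:

```python
from itertools import combinations

def simplex_centroid(q=3):
    """All 2^q - 1 points of a simplex-centroid mixture design: for every
    non-empty subset of components, an equal-proportion blend of that subset."""
    points = []
    for r in range(1, q + 1):
        for subset in combinations(range(q), r):
            blend = [0.0] * q
            for i in subset:
                blend[i] = 1.0 / r
            points.append(tuple(blend))
    return points

for p in simplex_centroid(3):   # 7 runs over V, F, N (or O): pure, binary, centroid blends
    print(p)
```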
Tail mean and related robust solution concepts
NASA Astrophysics Data System (ADS)
Ogryczak, Włodzimierz
2014-01-01
Robust optimisation might be viewed as a multicriteria optimisation problem where objectives correspond to the scenarios, although their probabilities are unknown or imprecise. The simplest robust solution concept represents a conservative approach focused on optimisation of the worst-case scenario results. A softer concept allows one to optimise the tail mean, thus combining performances under multiple worst scenarios. We show that while considering robust models allowing the probabilities to vary only within given intervals, the tail mean represents the robust solution only for upper-bounded probabilities. For arbitrary intervals of probabilities the corresponding robust solution may be expressed by the optimisation of appropriately combined mean and tail mean criteria, thus remaining easily implementable with auxiliary linear inequalities. Moreover, we use the tail mean concept to develop linear programming implementable robust solution concepts related to risk-averse optimisation criteria.
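For concreteness, one standard LP-implementable expression for the tail β-mean of scenario outcomes y_i with probabilities p_i (notation assumed here, not copied from the article) is

```latex
M_\beta(\mathbf{y}) \;=\; \max_{t \in \mathbb{R}} \Big[\, t \;-\; \tfrac{1}{\beta} \sum_i p_i \,(t - y_i)_+ \Big],
\qquad (u)_+ = \max(u, 0),
```

which becomes a linear programme after introducing deficiency variables d_i ≥ t − y_i, d_i ≥ 0. Setting β = 1 recovers the ordinary mean, while a sufficiently small β recovers the worst-case outcome, which is the sense in which the tail mean softens the conservative worst-case concept.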
Multi-objective optimisation of aircraft flight trajectories in the ATM and avionics context
NASA Astrophysics Data System (ADS)
Gardi, Alessandro; Sabatini, Roberto; Ramasamy, Subramanian
2016-05-01
The continuous increase of air transport demand worldwide and the push for a more economically viable and environmentally sustainable aviation are driving significant evolutions of aircraft, airspace and airport systems design and operations. Although extensive research has been performed on the optimisation of aircraft trajectories and very efficient algorithms were widely adopted for the optimisation of vertical flight profiles, it is only in the last few years that higher levels of automation were proposed for integrated flight planning and re-routing functionalities of innovative Communication Navigation and Surveillance/Air Traffic Management (CNS/ATM) and Avionics (CNS+A) systems. In this context, the implementation of additional environmental targets and of multiple operational constraints introduces the need to efficiently deal with multiple objectives as part of the trajectory optimisation algorithm. This article provides a comprehensive review of Multi-Objective Trajectory Optimisation (MOTO) techniques for transport aircraft flight operations, with a special focus on the recent advances introduced in the CNS+A research context. In the first section, a brief introduction is given, together with an overview of the main international research initiatives where this topic has been studied, and the problem statement is provided. The second section introduces the mathematical formulation and the third section reviews the numerical solution techniques, including discretisation and optimisation methods for the specific problem formulated. The fourth section summarises the strategies to articulate the preferences and to select optimal trajectories when multiple conflicting objectives are introduced. The fifth section introduces a number of models defining the optimality criteria and constraints typically adopted in MOTO studies, including fuel consumption, air pollutant and noise emissions, operational costs, condensation trails, airspace and airport operations. A brief overview of atmospheric and weather modelling is also included. Key equations describing the optimality criteria are presented, with a focus on the latest advancements in the respective application areas. In the sixth section, a number of MOTO implementations in the CNS+A systems context are mentioned with relevant simulation case studies addressing different operational tasks. The final section draws some conclusions and outlines guidelines for future research on MOTO and associated CNS+A system implementations.
Almén, Anja; Båth, Magnus
2016-06-01
The overall aim of the present work was to develop a conceptual framework for managing radiation dose in diagnostic radiology with the intention to support optimisation. An optimisation process was first derived. The framework for managing radiation dose, based on the derived optimisation process, was then outlined. The optimisation process is built on four stages: providing equipment, establishing methodology, performing examinations and ensuring quality. The optimisation process comprises a series of activities and actions at these stages. The current system of diagnostic reference levels is an activity in the last stage, ensuring quality. The system is thus a reactive activity that engages the core activity of the radiology department, performing examinations, only to a certain extent. Three reference dose levels (possible, expected and established) were assigned to the three stages in the optimisation process, excluding ensuring quality. A reasonably achievable dose range is also derived, indicating an acceptable deviation from the established dose level. A reasonable radiation dose for a single patient is within this range. The suggested framework for managing radiation dose should be regarded as one part of the optimisation process. The optimisation process constitutes a variety of complementary activities, where managing radiation dose is only one part. This emphasises the need to take a holistic approach, integrating the optimisation process in different clinical activities.
Turner, Andrew D; Waack, Julia; Lewis, Adam; Edwards, Christine; Lawton, Linda
2018-02-01
A simple, rapid UHPLC-MS/MS method has been developed and optimised for the quantitation of microcystins and nodularin in a wide variety of sample matrices. The microcystin analogues targeted were MC-LR, MC-RR, MC-LA, MC-LY, MC-LF, MC-LW, MC-YR, MC-WR, [Asp3] MC-LR, [Dha7] MC-LR, MC-HilR and MC-HtyR. Optimisation studies were conducted to develop a simple, quick and efficient extraction protocol without the need for complex pre-analysis concentration procedures, together with a rapid sub-5 min chromatographic separation of toxins in shellfish and algal supplement tablet powders, as well as water and cyanobacterial bloom samples. Validation studies were undertaken on each matrix-analyte combination against the full method performance characteristics following international guidelines. The method was found to be specific and linear over the full calibration range. Method sensitivity in terms of limits of detection, quantitation and reporting was found to be significantly improved in comparison to LC-UV methods and applicable to the analysis of each of the four matrices. Overall, acceptable recoveries were determined for each of the matrices studied, with associated precision and within-laboratory reproducibility well within expected guidance limits. Results from the formalised ruggedness analysis of all available cyanotoxins showed that the method was robust for all parameters investigated. The results presented here show that the optimised LC-MS/MS method for cyanotoxins is fit for the purpose of detection and quantitation of a range of microcystins and nodularin in shellfish, algal supplement tablet powder, water and cyanobacteria. The method provides a valuable early warning tool for the rapid, routine extraction and analysis of natural waters, cyanobacterial blooms, algal powders, food supplements and shellfish tissues, enabling monitoring labs to supplement traditional microscopy techniques and report toxicity results within a short timeframe of sample receipt. The new method, now accredited to the ISO 17025 standard, is simple, quick, applicable to multiple matrices and highly suitable for use as a routine, high-throughput, fast-turnaround regulatory monitoring tool.
Modelling soil water retention using support vector machines with genetic algorithm optimisation.
Lamorski, Krzysztof; Sławiński, Cezary; Moreno, Felix; Barna, Gyöngyi; Skierucha, Wojciech; Arrue, José L
2014-01-01
This work presents point pedotransfer function (PTF) models of the soil water retention curve. The developed models allow for estimation of the soil water content at specified soil water potentials (-0.98, -3.10, -9.81, -31.02, -491.66, and -1554.78 kPa), based on the following soil characteristics: soil granulometric composition, total porosity, and bulk density. Support Vector Machines (SVM) methodology was used for model development. A new methodology for elaboration of retention function models is proposed. As an alternative to previous attempts known from the literature, the ν-SVM method was used for model development and the results were compared with the formerly used C-SVM method. For the model parameter search, genetic algorithms were used as an optimisation framework. A new form of the aim function used for the model parameter search is proposed, which allowed for the development of models with better prediction capabilities. This new aim function avoids the overestimation of models that is typically encountered when root mean squared error is used as an aim function. The elaborated models showed good agreement with measured soil water retention data. The coefficients of determination achieved were in the range 0.67-0.92. The studies demonstrated the usability of the ν-SVM methodology together with genetic algorithm optimisation for retention modelling, which gave better-performing models than other tested approaches.
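A compact sketch of the fitting loop described above, with scikit-learn's NuSVR standing in for the ν-SVM regressor and a toy GA over (ν, C, γ). The plain cross-validated squared-error fitness shown here is a placeholder: the article proposes its own aim function precisely to avoid this criterion's weaknesses. Ranges and GA settings are illustrative:

```python
import numpy as np
from sklearn.svm import NuSVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

def fitness(params, X, y):
    """Negative cross-validated MSE (placeholder aim function)."""
    nu, c, gamma = params
    model = NuSVR(nu=nu, C=c, gamma=gamma)
    return cross_val_score(model, X, y, scoring="neg_mean_squared_error", cv=5).mean()

def ga_search(X, y, pop_size=20, gens=30):
    """Toy GA over (nu, C, gamma): tournament selection, blend crossover and
    Gaussian mutation, with parameters clipped to valid SVR ranges."""
    lo = np.array([0.05, 0.1, 1e-3])
    hi = np.array([0.95, 100.0, 1.0])
    pop = rng.uniform(lo, hi, (pop_size, 3))
    for _ in range(gens):
        fit = np.array([fitness(p, X, y) for p in pop])
        new = []
        for _ in range(pop_size):
            a, b = rng.integers(pop_size, size=2)
            p1 = pop[a] if fit[a] > fit[b] else pop[b]   # tournament pick 1
            a, b = rng.integers(pop_size, size=2)
            p2 = pop[a] if fit[a] > fit[b] else pop[b]   # tournament pick 2
            child = 0.5 * (p1 + p2) + rng.normal(0, 0.05, 3) * (hi - lo)
            new.append(np.clip(child, lo, hi))
        pop = np.array(new)
    fit = np.array([fitness(p, X, y) for p in pop])
    return pop[fit.argmax()]
```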
Gordon, G T; McCann, B P
2015-01-01
This paper describes the basis of a stakeholder-based sustainable optimisation indicator (SOI) system to be developed for small-to-medium sized activated sludge (AS) wastewater treatment plants (WwTPs) in the Republic of Ireland (ROI). Key technical publications relating to best practice plant operation, performance audits and optimisation, and indicator and benchmarking systems for wastewater services are identified. Optimisation studies were developed at a number of Irish AS WwTPs and key findings are presented. A national AS WwTP manager/operator survey was carried out to verify the applied operational findings and identify the key operator stakeholder requirements for this proposed SOI system. It was found that most plants require more consistent operational data-based decision-making, monitoring and communication structures to facilitate optimised, sustainable and continuous performance improvement. The applied optimisation and stakeholder consultation phases form the basis of the proposed stakeholder-based SOI system. This system will allow for continuous monitoring and rating of plant performance, facilitate optimised operation and encourage the prioritisation of performance improvement through tracking key operational metrics. Plant optimisation has become a major focus due to the transfer of all ROI water services to a national water utility from individual local authorities and the implementation of the EU Water Framework Directive.
“Key to the Future”: British American Tobacco and Cigarette Smuggling in China
Lee, Kelley; Collin, Jeff
2006-01-01
Background Cigarette smuggling is a major public health issue, stimulating increased tobacco consumption and undermining tobacco control measures. China is the ultimate prize among tobacco's emerging markets, and is also believed to have the world's largest cigarette smuggling problem. Previous work has demonstrated the complicity of British American Tobacco (BAT) in this illicit trade within Asia and the former Soviet Union. Methods and Findings This paper analyses internal documents of BAT available on site from the Guildford Depository and online from the BAT Document Archive. Documents dating from the early 1900s to 2003 were searched and indexed on a specially designed project database to enable the construction of an historical narrative. Document analysis incorporated several validation techniques within a hermeneutic process. This paper describes the huge scale of this illicit trade in China, amounting to billions of (United States) dollars in sales, and the key supply routes by which it has been conducted. It examines BAT's efforts to optimise earnings by restructuring operations, and controlling the supply chain and pricing of smuggled cigarettes. Conclusions Our research shows that smuggling has been strategically critical to BAT's ongoing efforts to penetrate the Chinese market, and to its overall goal to become the leading company within an increasingly global industry. These findings support the need for concerted efforts to strengthen global collaboration to combat cigarette smuggling. PMID:16834455
Floating-to-Fixed-Point Conversion for Digital Signal Processors
NASA Astrophysics Data System (ADS)
Menard, Daniel; Chillet, Daniel; Sentieys, Olivier
2006-12-01
Digital signal processing applications are specified with floating-point data types but are usually implemented in embedded systems with fixed-point arithmetic to minimise cost and power consumption. Thus, methodologies which automatically establish the fixed-point specification are required to reduce the application time-to-market. In this paper, a new methodology for floating-to-fixed-point conversion is proposed for software implementations. The aim of our approach is to determine the fixed-point specification which minimises the code execution time for a given accuracy constraint. Compared to previous methodologies, our approach takes the DSP architecture into account to optimise the fixed-point formats, and the floating-to-fixed-point conversion process is coupled with the code generation process. The fixed-point data types and the positions of the scaling operations are optimised to reduce the code execution time. To evaluate the fixed-point computation accuracy, an analytical approach is used to reduce the optimisation time compared to the existing methods based on simulation. The methodology stages are described and several experimental results are presented to underline the efficiency of this approach.
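At the heart of any floating-to-fixed-point conversion is quantising each datum to a Qm.n format. A minimal sketch (saturating rounding shown here is one common choice, not necessarily the paper's):

```python
def float_to_q(x, int_bits, frac_bits):
    """Quantise a float to two's-complement fixed point Q(int_bits.frac_bits):
    scale by 2^frac_bits, round, and saturate to the representable range."""
    scale = 1 << frac_bits
    lo = -(1 << (int_bits + frac_bits))          # most negative code
    hi = (1 << (int_bits + frac_bits)) - 1       # most positive code
    code = max(lo, min(hi, round(x * scale)))
    return code, code / scale                    # integer code and quantised value

code, xq = float_to_q(0.7071, int_bits=0, frac_bits=15)   # Q0.15, a common DSP format
print(code, xq)   # 23170  0.70709...
```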
Variability estimation of urban wastewater biodegradable fractions by respirometry.
Lagarde, Fabienne; Tusseau-Vuillemin, Marie-Hélène; Lessard, Paul; Héduit, Alain; Dutrop, François; Mouchel, Jean-Marie
2005-11-01
This paper presents a methodology for assessing the variability of biodegradable chemical oxygen demand (COD) fractions in urban wastewaters. Thirteen raw wastewater samples from combined and separate sewers feeding the same plant were characterised, and two optimisation procedures were applied in order to evaluate the variability in biodegradable fractions and related kinetic parameters. Through an overall optimisation on all the samples, a unique kinetic parameter set was obtained with a three-substrate model including an adsorption stage. This method required powerful numerical treatment, but improved the identifiability problem compared to the usual sample-to-sample optimisation. The results showed that the fractionation of samples collected in the combined sewer was much more variable (standard deviation of 70% of the mean values) than the fractionation of the separate sewer samples, and the slowly biodegradable COD fraction was the most significant fraction (45% of the total COD on average). Because these samples were collected under various rain conditions, the standard deviations obtained here on the combined sewer biodegradable fractions could be used as a first estimation of the variability of this type of sewer system.
Roosta, Mostafa; Ghaedi, Mehrorang; Daneshfar, Ali
2014-10-15
A novel approach, ultrasound-assisted reverse micelles dispersive liquid-liquid microextraction (USA-RM-DLLME) followed by high performance liquid chromatography (HPLC), was developed for the selective determination of acetoin in butter. The melted butter sample was diluted and homogenised with n-hexane and Triton X-100, respectively. Subsequently, 400 μL of distilled water was added and the microextraction was accelerated by 4 min of sonication. After 8.5 min of centrifugation, the sedimented phase (surfactant-rich phase) was withdrawn with a microsyringe and injected into the HPLC system for analysis. The influence of the effective variables was optimised using a Box-Behnken design (BBD) combined with a desirability function (DF). Under the optimised experimental conditions, the calibration graph was linear over the range 0.6-200 mg L⁻¹. The detection limit of the method was 0.2 mg L⁻¹ and the coefficient of determination was 0.9992. The relative standard deviations (RSDs) were less than 5% (n = 5), while the recoveries were in the range 93.9-107.8%.
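The desirability-function step collapses several responses into one score to maximise. A minimal sketch of the usual Derringer-style construction (bounds, targets and weights are hypothetical, not taken from the paper):

```python
import numpy as np

def desirability_max(y, lo, target, w=1.0):
    """Larger-is-better desirability: 0 at/below `lo`, 1 at/above `target`,
    power-law ramp in between (exponent `w` weights the response)."""
    d = np.clip((y - lo) / (target - lo), 0.0, 1.0)
    return d ** w

def overall_desirability(ds):
    """Composite desirability: geometric mean of the individual d_i, so any
    response at zero desirability vetoes the whole setting."""
    ds = np.asarray(ds, dtype=float)
    return ds.prod() ** (1.0 / len(ds))

# Example: combine a recovery response and a peak-area response.
d1 = desirability_max(98.0, lo=90.0, target=105.0)
d2 = desirability_max(0.8, lo=0.0, target=1.0)
print(overall_desirability([d1, d2]))
```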
Gómez-Romano, Fernando; Villanueva, Beatriz; Fernández, Jesús; Woolliams, John A; Pong-Wong, Ricardo
2016-01-13
Optimal contribution methods have proved to be very efficient for controlling the rates at which coancestry and inbreeding increase and therefore, for maintaining genetic diversity. These methods have usually relied on pedigree information for estimating genetic relationships between animals. However, with the large amount of genomic information now available such as high-density single nucleotide polymorphism (SNP) chips that contain thousands of SNPs, it becomes possible to calculate more accurate estimates of relationships and to target specific regions in the genome where there is a particular interest in maximising genetic diversity. The objective of this study was to investigate the effectiveness of using genomic coancestry matrices for: (1) minimising the loss of genetic variability at specific genomic regions while restricting the overall loss in the rest of the genome; or (2) maximising the overall genetic diversity while restricting the loss of diversity at specific genomic regions. Our study shows that the use of genomic coancestry was very successful at minimising the loss of diversity and outperformed the use of pedigree-based coancestry (genetic diversity even increased in some scenarios). The results also show that genomic information allows a targeted optimisation to maintain diversity at specific genomic regions, whether they are linked or not. The level of variability maintained increased when the targeted regions were closely linked. However, such targeted management leads to an important loss of diversity in the rest of the genome and, thus, it is necessary to take further actions to constrain this loss. Optimal contribution methods also proved to be effective at restricting the loss of diversity in the rest of the genome, although the resulting rate of coancestry was higher than the constraint imposed. The use of genomic matrices when optimising contributions permits the control of genetic diversity and inbreeding at specific regions of the genome through the minimisation of partial genomic coancestry matrices. The formula used to predict coancestry in the next generation produces biased results and therefore it is necessary to refine the theory of genetic contributions when genomic matrices are used to optimise contributions.
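At its core, optimal contribution selection with a genomic coancestry matrix G minimises the group coancestry c'Gc over contribution vectors c that sum to one. A sketch of the equality-constrained solution only (the full problem adds non-negativity and sex constraints, omitted here; the matrix below is made up):

```python
import numpy as np

def min_coancestry_contributions(G):
    """Contributions c minimising c'Gc subject to sum(c) = 1 (Lagrangian
    solution c = G^{-1} 1 / (1' G^{-1} 1); further constraints omitted)."""
    ones = np.ones(G.shape[0])
    w = np.linalg.solve(G, ones)   # proportional to G^{-1} 1
    return w / (ones @ w)

# Toy genomic coancestry matrix for four selection candidates (made-up numbers).
G = np.array([[1.0, 0.2, 0.1, 0.0],
              [0.2, 1.0, 0.3, 0.1],
              [0.1, 0.3, 1.0, 0.2],
              [0.0, 0.1, 0.2, 1.0]])
c = min_coancestry_contributions(G)
print(c, c @ G @ c)   # contributions and the resulting group coancestry
```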
Kassem, Abdulsalam M; Ibrahim, Hany M; Samy, Ahmed M
2017-05-01
The objective of this study was to develop and optimise a self-nanoemulsifying drug delivery system (SNEDDS) of atorvastatin calcium (ATC) for improving the dissolution rate and eventually the oral bioavailability. Ternary phase diagrams were constructed on the basis of solubility and emulsification studies. The composition of ATC-SNEDDS was optimised using the Box-Behnken optimisation design. The optimised ATC-SNEDDS was characterised for various physicochemical properties. Pharmacokinetic, pharmacodynamic and histological studies were performed in rats. The optimised ATC-SNEDDS resulted in a droplet size of 5.66 nm, a zeta potential of -19.52 mV and a t90 of 5.43 min, and completely released ATC within 30 min irrespective of the pH of the medium. The area under the curve of the optimised ATC-SNEDDS in rats was 2.34-fold higher than that of the ATC suspension. Pharmacodynamic studies revealed a significant reduction in the serum lipids of rats with fatty liver. Photomicrographs showed improvement in hepatocyte structure. In this study, we confirmed that ATC-SNEDDS would be a promising approach for improving the oral bioavailability of ATC.
Optimisation of 12 MeV electron beam simulation using variance reduction technique
NASA Astrophysics Data System (ADS)
Jayamani, J.; Termizi, N. A. S. Mohd; Kamarulzaman, F. N. Mohd; Aziz, M. Z. Abdul
2017-05-01
Monte Carlo (MC) simulation for electron beam radiotherapy consumes a long computation time. An algorithm called the variance reduction technique (VRT) was implemented in MC to shorten this duration. This work focused on optimisation of the VRT parameters, which refer to electron range rejection and particle history. The EGSnrc MC source code was used to simulate (BEAMnrc code) and validate (DOSXYZnrc code) the Siemens Primus linear accelerator model with non-VRT parameters. The validated MC model simulation was repeated by applying the VRT parameter (electron range rejection), controlled by global electron cut-off energies of 1, 2 and 5 MeV, using 20 × 10⁷ particle histories. The 5 MeV range rejection generated the fastest MC simulation, with a 50% reduction in computation time compared to the non-VRT simulation. Thus, 5 MeV electron range rejection was utilised in the particle history analysis, which ranged from 7.5 × 10⁷ to 20 × 10⁷. In this study, with a 5 MeV electron cut-off and 10 × 10⁷ particle histories, the simulation was four times faster than the non-VRT calculation, with 1% deviation. Proper understanding and use of VRT can significantly reduce the MC electron beam calculation duration while preserving its accuracy.
Tobe, Shanan S; Bailey, Stuart; Govan, James; Welch, Lindsey A
2013-03-01
Although poaching is a common wildlife crime, the high and prohibitive cost of specialised animal testing means that many cases are left un-investigated. We previously described a novel approach to wildlife crime investigation that looked at the identification of human DNA on poached animal remains (Tobe, Govan and Welch, 2011). Human DNA was successfully isolated and amplified from simulated poaching incidents; however, a low template protocol was required, which made this method unsuitable for use in many laboratories. We now report on an optimised recovery and amplification protocol which removes the need for low template analysis. Samples from 10 deer (40 samples in total, one from each leg) analysed in the original study were re-analysed in the current study, with an additional 11 deer samples. Four samples analysed using Chelex did not show any results, and a new method was devised whereby the available DNA was concentrated. By combining the DNA extracts from all tapings of the same deer remains followed by concentration, the recovered quantity of human DNA was found to be 29.5 pg ± 43.2 pg, 31× greater than in the previous study. The Investigator Decaplex SE (QIAGEN) STR kit provided better results, in the form of more complete profiles, than did the AmpFℓSTR® SGM Plus® kit at 30 cycles (Applied Biosystems). Re-analysis of the samples from the initial study using the new, optimised protocol resulted in an average increase of 18% in recovered alleles. Over 17 samples, 71% of the samples analysed using the optimised protocol showed sufficient amplification for comparison to a reference profile and gave match probabilities ranging from 7.7690×10⁻⁵ to 2.2706×10⁻¹⁴. The removal of low template analysis means this optimised method provides evidence of high probative value and is suitable for immediate use in forensic laboratories. All methods and techniques used are standard and are compatible with current SOPs. As no high-cost non-human DNA analysis is required, the overall process is no more expensive than the investigation of other volume crime samples. The technique is suitable for immediate use in poaching incidents.
Design Optimisation of a Magnetic Field Based Soft Tactile Sensor
Raske, Nicholas; Kow, Junwai; Alazmani, Ali; Ghajari, Mazdak; Culmer, Peter; Hewson, Robert
2017-01-01
This paper investigates the design optimisation of a magnetic field based soft tactile sensor, comprised of a magnet and Hall effect module separated by an elastomer. The aim was to minimise sensitivity of the output force with respect to the input magnetic field; this was achieved by varying the geometry and material properties. Finite element simulations determined the magnetic field and structural behaviour under load. Genetic programming produced phenomenological expressions describing these responses. Optimisation studies constrained by a measurable force and stable loading conditions were conducted; these produced Pareto sets of designs from which the optimal sensor characteristics were selected. The optimisation demonstrated a compromise between sensitivity and the measurable force; a fabricated version of the optimised sensor validated the improvements made using this methodology. The approach presented can be applied in general for optimising soft tactile sensor designs over a range of applications and sensing modes. PMID:29099787
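Extracting a Pareto set from a batch of candidate designs reduces to a dominance filter. A generic sketch, not the authors' code, with both objectives framed so that smaller is better:

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated rows of `points` (every objective minimised).
    A design is kept if no other design is at least as good in all objectives
    and strictly better in at least one."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = any(
            np.all(q <= p) and np.any(q < p)
            for j, q in enumerate(pts) if j != i
        )
        if not dominated:
            keep.append(i)
    return pts[keep]

# Two objectives, e.g. (-sensitivity, -measurable force) so both are minimised.
print(pareto_front([[1.0, 3.0], [2.0, 2.0], [3.0, 1.0], [2.5, 2.5]]))
```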
Choosing the appropriate forecasting model for predictive parameter control.
Aleti, Aldeida; Moser, Irene; Meedeniya, Indika; Grunske, Lars
2014-01-01
All commonly used stochastic optimisation algorithms have to be parameterised to perform effectively. Adaptive parameter control (APC) is an effective method used for this purpose. APC repeatedly adjusts parameter values during the optimisation process for optimal algorithm performance. The assignment of parameter values for a given iteration is based on previously measured performance. In recent research, time series prediction has been proposed as a method of projecting the probabilities to use for parameter value selection. In this work, we examine the suitability of a variety of prediction methods for the projection of future parameter performance based on previous data. All considered prediction methods make assumptions that the time series data has to conform to for the prediction method to provide accurate projections. Looking specifically at parameters of evolutionary algorithms (EAs), we find that all standard EA parameters, with the exception of population size, conform largely to the assumptions made by the considered prediction methods. Evaluating the performance of these prediction methods, we find that linear regression provides the best results by a very small and statistically insignificant margin. Regardless of the prediction method, predictive parameter control outperforms state-of-the-art parameter control methods when the performance data adheres to the assumptions made by the prediction method. When a parameter's performance data does not adhere to the assumptions made by the forecasting method, the use of prediction does not have a notable adverse impact on the algorithm's performance.
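A minimal sketch of the best-performing forecaster in the study: a straight line fitted to a parameter value's recent performance history, extrapolated one step ahead. The normalisation into selection probabilities is one simple choice, not necessarily the authors' exact scheme:

```python
import numpy as np

def predict_next_performance(history, window=10):
    """Fit a line to the last `window` performance measurements of one
    parameter value and extrapolate one step ahead."""
    y = np.asarray(history[-window:], dtype=float)
    t = np.arange(len(y))
    slope, intercept = np.polyfit(t, y, 1)
    return slope * len(y) + intercept

def selection_probabilities(histories):
    """Selection probabilities proportional to each parameter value's
    predicted next performance (floored to keep them positive)."""
    preds = np.array([max(predict_next_performance(h), 1e-9) for h in histories])
    return preds / preds.sum()

# Example: three candidate parameter values with toy performance histories.
print(selection_probabilities([[0.2, 0.3, 0.4], [0.5, 0.45, 0.4], [0.1, 0.1, 0.1]]))
```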
USDA-ARS?s Scientific Manuscript database
Quantitative PCR (qPCR) can be used to detect and monitor pathogen colonization, but early attempts to apply the technology to Botrytis cinerea infection of grape berries have identified limitations to current techniques. In this study, four DNA extraction methods, two grinding methods, two grape or...
Efficient methods for enol phosphate synthesis using carbon-centred magnesium bases.
Kerr, William J; Lindsay, David M; Patel, Vipulkumar K; Rajamanickam, Muralikrishnan
2015-10-28
Efficient conversion of ketones into kinetic enol phosphates under mild and accessible conditions has been realised using the developed methods with di-tert-butylmagnesium and bismesitylmagnesium. Optimisation of the quench protocol resulted in high yields of enol phosphates from a range of cyclohexanones and aryl methyl ketones, with tolerance of a range of additional functional units.
The 5C Concept and 5S Principles in Inflammatory Bowel Disease Management
Hibi, Toshifumi; Panaccione, Remo; Katafuchi, Miiko; Yokoyama, Kaoru; Watanabe, Kenji; Matsui, Toshiyuki; Matsumoto, Takayuki; Travis, Simon; Suzuki, Yasuo
2017-01-01
Abstract Background and Aims The international Inflammatory Bowel Disease [IBD] Expert Alliance initiative [2012–2015] served as a platform to define and support areas of best practice in IBD management to help improve outcomes for all patients with IBD. Methods During the programme, IBD specialists from around the world established by consensus two best practice charters: the 5S Principles and the 5C Concept. Results The 5S Principles were conceived to provide health care providers with key guidance for improving clinical practice based on best management approaches. They comprise the following categories: Stage the disease; Stratify patients; Set treatment goals; Select appropriate treatment; and Supervise therapy. Optimised management of patients with IBD based on the 5S Principles can be achieved most effectively within an optimised clinical care environment. Guidance on optimising the clinical care setting in IBD management is provided through the 5C Concept, which encompasses: Comprehensive IBD care; Collaboration; Communication; Clinical nurse specialists; and Care pathways. Together, the 5C Concept and 5S Principles provide structured recommendations on organising the clinical care setting and developing best-practice approaches in IBD management. Conclusions Consideration and application of these two dimensions could help health care providers optimise their IBD centres and collaborate more effectively with their multidisciplinary team colleagues and patients, to provide improved IBD care in daily clinical practice. Ultimately, this could lead to improved outcomes for patients with IBD. PMID:28981622
The use of surrogates for an optimal management of coupled groundwater-agriculture hydrosystems
NASA Astrophysics Data System (ADS)
Grundmann, J.; Schütze, N.; Brettschneider, M.; Schmitz, G. H.; Lennartz, F.
2012-04-01
To ensure optimal sustainable water resources management in arid coastal environments, we develop a new simulation-based integrated water management system. It aims at achieving the best possible solutions for groundwater withdrawals for agricultural and municipal water use, including saline water management, together with a substantial increase of the water use efficiency in irrigated agriculture. To achieve a robust and fast operation of the management system regarding water quality and water quantity, we develop appropriate surrogate models by combining physically based process modelling with methods of artificial intelligence. We use an artificial neural network for modelling the aquifer response, inclusive of the seawater interface, which was trained on a scenario database generated by a numerical density-dependent groundwater flow model. For simulating the behaviour of highly productive agricultural farms, crop water production functions are generated by means of soil-vegetation-atmosphere-transport (SVAT) models, adapted to the regional climate conditions, and a novel evolutionary optimisation algorithm for optimal irrigation scheduling and control. We apply both surrogates exemplarily within a simulation-based optimisation environment using the characteristics of the south Batinah region in the Sultanate of Oman, which is affected by saltwater intrusion into the coastal aquifer due to excessive groundwater withdrawal for irrigated agriculture. We demonstrate the effectiveness of our methodology for the evaluation and optimisation of different irrigation practices, cropping patterns and the resulting abstraction scenarios. Due to contradicting objectives, such as profit-oriented agriculture vs. aquifer sustainability, a multi-criteria optimisation is performed.
Statistical optimisation techniques in fatigue signal editing problem
NASA Astrophysics Data System (ADS)
Nopiah, Z. M.; Osman, M. H.; Baharin, N.; Abdullah, S.
2015-02-01
Success in fatigue signal editing is determined by the level of length reduction achieved without compromising statistical constraints. A large reduction rate can be achieved by removing small-amplitude cycles from the recorded signal. Long recorded signals can make the cycle-to-cycle editing process daunting, which has encouraged researchers to focus on the segment-based approach. This paper discusses the joint application of the Running Damage Extraction (RDE) technique and a constrained single-objective Genetic Algorithm (GA) in fatigue signal editing optimisation. In the first stage, the RDE technique is used to restructure and summarise the fatigue strain. This technique combines the overlapping window and fatigue strain-life models. It is designed to identify and isolate the fatigue events that exist in the variable-amplitude strain data into different segments, whereby the retention of statistical parameters and the vibration energy are considered. In the second stage, the fatigue data editing problem is formulated as a constrained single-objective optimisation problem that can be solved using the GA. The GA produces the shortest edited fatigue signal by selecting appropriate segments from a pool of labelled segments. Challenges arise from constraints on the segment selection, expressed as deviation limits over three signal properties, namely cumulative fatigue damage, root mean square and kurtosis values. Experimental results over several case studies show that solving fatigue signal editing within an optimisation framework is effective and automatic, and that the GA is robust for constrained segment selection.
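To make the segment-selection formulation concrete, the following minimal sketch evolves a binary keep/drop mask over segments, penalising violations of retention constraints. All segment statistics, deviation limits and GA settings are illustrative assumptions, not the authors' implementation.

```python
# Minimal constrained GA for segment selection on synthetic segment statistics.
import numpy as np

rng = np.random.default_rng(1)
n_seg = 40
length = rng.integers(50, 200, n_seg)          # samples per segment
damage = rng.uniform(0, 1, n_seg)              # fatigue damage per segment
energy = rng.uniform(0, 1, n_seg)              # stand-in for RMS contribution

def fitness(mask):
    """Shorter signal is better, but retained damage must stay within 5%
    and retained 'RMS' within 10% of the original (penalty otherwise)."""
    kept_damage = damage[mask].sum() / damage.sum()
    kept_energy = energy[mask].sum() / energy.sum()
    penalty = 0.0
    if kept_damage < 0.95:
        penalty += 1e3 * (0.95 - kept_damage)
    if kept_energy < 0.90:
        penalty += 1e3 * (0.90 - kept_energy)
    return length[mask].sum() / length.sum() + penalty

pop = rng.random((60, n_seg)) < 0.8            # initial population of masks
for gen in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    pop = pop[np.argsort(scores)]              # elitist sort, best first
    for i in range(30, 60):                    # replace the worst half
        a, b = pop[rng.integers(0, 30, 2)]
        cut = rng.integers(1, n_seg)
        child = np.concatenate([a[:cut], b[cut:]])   # one-point crossover
        flip = rng.random(n_seg) < 0.02              # bit-flip mutation
        pop[i] = child ^ flip

best = pop[0]
print("kept fraction of length:", length[best].sum() / length.sum())
```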
Morton, Katherine; Band, Rebecca; van Woezik, Anne; Grist, Rebecca; McManus, Richard J.; Little, Paul; Yardley, Lucy
2018-01-01
Background For behaviour-change interventions to be successful they must be acceptable to users and overcome barriers to behaviour change. The Person-Based Approach can help to optimise interventions to maximise acceptability and engagement. This article presents a novel, efficient and systematic method that can be used as part of the Person-Based Approach to rapidly analyse data from development studies to inform intervention modifications. We describe how we used this approach to optimise a digital intervention for patients with hypertension (HOME BP), which aims to implement medication and lifestyle changes to optimise blood pressure control. Methods In study 1, hypertensive patients (N = 12) each participated in three think-aloud interviews, providing feedback on a prototype of HOME BP. In study 2 patients (N = 11) used HOME BP for three weeks and were then interviewed about their experiences. Studies 1 and 2 were used to identify detailed changes to the intervention content and potential barriers to engagement with HOME BP. In study 3 (N = 7) we interviewed hypertensive patients who were not interested in using an intervention like HOME BP to identify potential barriers to uptake, which informed modifications to our recruitment materials. Analysis in all three studies involved detailed tabulation of patient data and comparison to our modification criteria. Results Studies 1 and 2 indicated that the HOME BP procedures were generally viewed as acceptable and feasible, but also highlighted concerns about monitoring blood pressure correctly at home and making medication changes remotely. Patients in study 3 had additional concerns about the safety and security of the intervention. Modifications improved the acceptability of the intervention and recruitment materials. Conclusions This paper provides a detailed illustration of how to use the Person-Based Approach to refine a digital intervention for hypertension. The novel, efficient approach to analysis and criteria for deciding when to implement intervention modifications described here may be useful to others developing interventions. PMID:29723262
Hastings, Gareth D.; Marsack, Jason D.; Nguyen, Lan Chi; Cheng, Han; Applegate, Raymond A.
2017-01-01
Purpose To prospectively examine whether using the visual image quality metric, visual Strehl (VSX), to optimise objective refraction from wavefront error measurements can provide equivalent or better visual performance than subjective refraction and which refraction is preferred in free viewing. Methods Subjective refractions and wavefront aberrations were measured on 40 visually-normal eyes of 20 subjects, through natural and dilated pupils. For each eye a sphere, cylinder, and axis prescription was also objectively determined that optimised visual image quality (VSX) for the measured wavefront error. High contrast (HC) and low contrast (LC) logMAR visual acuity (VA) and short-term monocular distance vision preference were recorded and compared between the VSX-objective and subjective prescriptions both undilated and dilated. Results For 36 myopic eyes, clinically equivalent (and not statistically different) HC VA was provided with both the objective and subjective refractions (undilated mean ±SD was −0.06 ±0.04 with both refractions; dilated was −0.05 ±0.04 with the objective, and −0.05 ±0.05 with the subjective refraction). LC logMAR VA provided by the objective refraction was also clinically equivalent and not statistically different to that provided by the subjective refraction through both natural and dilated pupils for myopic eyes. In free viewing the objective prescription was preferred over the subjective by 72% of myopic eyes when not dilated. For four habitually undercorrected high hyperopic eyes, the VSX-objective refraction was more positive in spherical power and VA poorer than with the subjective refraction. Conclusions A method of simultaneously optimising sphere, cylinder, and axis from wavefront error measurements, using the visual image quality metric VSX, is described. In myopic subjects, visual performance, as measured by HC and LC VA, with this VSX-objective refraction was found equivalent to that provided by subjective refraction, and was typically preferred over subjective refraction. Subjective refraction was preferred by habitually undercorrected hyperopic eyes. PMID:28370389
Gladman, John; Buckell, John; Young, John; Smith, Andrew; Hulme, Clare; Saggu, Satti; Godfrey, Mary; Enderby, Pam; Teale, Elizabeth; Longo, Roberto; Gannon, Brenda; Holditch, Claire; Eardley, Heather; Tucker, Helen
2017-01-01
Introduction To understand the variation in performance between community hospitals, our objectives are: to measure the relative performance (cost efficiency) of rehabilitation services in community hospitals; to identify the characteristics of community hospital rehabilitation that optimise performance; to investigate the current impact of community hospital inpatient rehabilitation for older people on secondary care and the potential impact if community hospital rehabilitation was optimised to best practice nationally; to examine the relationship between the configuration of intermediate care and secondary care bed use; and to develop toolkits for commissioners and community hospital providers to optimise performance. Methods and analysis 4 linked studies will be performed. Study 1: cost efficiency modelling will apply econometric techniques to data sets from the National Health Service (NHS) Benchmarking Network surveys of community hospital and intermediate care. This will identify community hospitals' performance and estimate the gap between high and low performers. Analyses will determine the potential impact if the performance of all community hospitals nationally was optimised to best performance, and examine the association between community hospital configuration and secondary care bed use. Study 2: a national community hospital survey gathering detailed cost data and efficiency variables will be performed. Study 3: in-depth case studies of 3 community hospitals, 2 high and 1 low performing, will be undertaken. Case studies will gather routine hospital and local health economy data. Ward culture will be surveyed. Content and delivery of treatment will be observed. Patients and staff will be interviewed. Study 4: co-designed web-based quality improvement toolkits for commissioners and providers will be developed, including indicators of performance and the gap between local and best community hospitals performance. Ethics and dissemination Publications will be in peer-reviewed journals, reports will be distributed through stakeholder organisations. Ethical approval was obtained from the Bradford Research Ethics Committee (reference: 15/YH/0062). PMID:28242766
Ribera, Esteban; Martínez-Sesmero, José Manuel; Sánchez-Rubio, Javier; Rubio, Rafael; Pasquau, Juan; Poveda, José Luis; Pérez-Mitru, Alejandro; Roldán, Celia; Hernández-Novoa, Beatriz
2018-03-01
The objective of this study is to estimate the economic impact associated with the optimisation of triple antiretroviral treatment (ART) in patients with undetectable viral load, according to the recommendations from the GeSIDA/PNS (2015) Consensus, and their applicability in Spanish clinical practice. A pharmacoeconomic model was developed based on data from a National Hospital Prescription Survey on ART (2014) and the A-I evidence recommendations for the optimisation of ART from the GeSIDA/PNS (2015) Consensus. The optimisation model took into account the willingness to optimise a particular regimen and other assumptions, and the results were validated by an expert panel in HIV infection (Infectious Disease Specialists and Hospital Pharmacists). The analysis was conducted from the NHS perspective, considering the annual wholesale price and accounting for the deductions stated in the RD-Law 8/2010 and VAT. The expert panel selected six optimisation strategies, and estimated that 10,863 (13.4%) of the 80,859 patients in Spain currently on triple ART would be candidates to optimise their ART, leading to savings of €15.9M/year (2.4% of the total triple ART drug cost). The most feasible strategies (>40% of patients who were candidates for optimisation, n=4,556) would be optimisations to ATV/r+3TC therapy. These would produce savings between €653 and €4,797 per patient per year, depending on the baseline triple ART. Implementation of the main optimisation strategies recommended in the GeSIDA/PNS (2015) Consensus into Spanish clinical practice would lead to considerable savings, especially those based on dual therapy with ATV/r+3TC, thus contributing to the control of pharmaceutical expenditure and NHS sustainability. Copyright © 2016 Elsevier España, S.L.U. and Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica. All rights reserved.
Optimisation of the hybrid renewable energy system by HOMER, PSO and CPSO for the study area
NASA Astrophysics Data System (ADS)
Khare, Vikas; Nema, Savita; Baredar, Prashant
2017-04-01
This study is based on the simulation and optimisation of the renewable energy system of the police control room at Sagar in central India. To analyse this hybrid system, meteorological data of solar insolation and hourly wind speeds at Sagar (longitude 78°45′ and latitude 23°50′) have been considered. The pattern of load consumption is studied and suitably modelled for optimisation of the hybrid energy system using HOMER software. The results are compared with those of the particle swarm optimisation and chaotic particle swarm optimisation algorithms. The use of these two algorithms to optimise the hybrid system leads to higher quality results with faster convergence. Based on the optimisation results, it has been found that replacing conventional energy sources with the solar-wind hybrid renewable energy system is a feasible solution for the distribution of electric power as a stand-alone application at the police control room. This system is more environmentally friendly than a conventional diesel generator and reduces fuel costs by approximately 70-80% relative to the diesel generator.
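For readers unfamiliar with particle swarm optimisation, the toy sketch below minimises a placeholder annualised-cost function over PV, wind and battery sizes. The cost model, bounds and PSO coefficients are assumptions for illustration; the paper's actual objective comes from the HOMER model of the site.

```python
# Minimal PSO sketch for sizing a PV/wind/battery system on a toy cost model.
import numpy as np

rng = np.random.default_rng(2)

def annual_cost(x):
    pv_kw, wind_kw, batt_kwh = x
    capex = 1000 * pv_kw + 1500 * wind_kw + 300 * batt_kwh
    supplied = 4.5 * pv_kw + 6.0 * wind_kw        # crude energy proxy (kWh/day)
    shortfall = max(0.0, 100.0 - supplied - 0.2 * batt_kwh)
    return capex / 20 + 5000 * shortfall          # annualised cost + penalty

lo, hi = np.array([0.0, 0.0, 0.0]), np.array([50.0, 50.0, 200.0])
n, dim = 30, 3
x = rng.uniform(lo, hi, (n, dim))
v = np.zeros((n, dim))
pbest = x.copy()
pcost = np.array([annual_cost(p) for p in x])
g = pbest[pcost.argmin()].copy()                  # global best

for it in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)  # velocity update
    x = np.clip(x + v, lo, hi)
    cost = np.array([annual_cost(p) for p in x])
    better = cost < pcost
    pbest[better], pcost[better] = x[better], cost[better]
    g = pbest[pcost.argmin()].copy()

print("best sizing [PV kW, wind kW, battery kWh]:", g.round(1))
```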
Optimisation of nano-silica modified self-compacting high-volume fly ash mortar
NASA Astrophysics Data System (ADS)
Achara, Bitrus Emmanuel; Mohammed, Bashar S.; Fadhil Nuruddin, Muhd
2017-05-01
The effects of nano-silica content and superplasticizer (SP) dosage on the compressive strength, porosity and slump flow of high-volume fly ash self-consolidating mortar were investigated. A multiobjective optimisation technique using the Design-Expert software was applied to obtain a solution based on a desirability function that simultaneously optimises the variables and the responses. A desirability value of 0.811 gave the optimised solution. The experimental and predicted results showed minimal errors in all the measured responses.
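The desirability approach referred to here (in the style of Derringer and Suich, as implemented in packages such as Design-Expert) can be illustrated with a short sketch: each response is mapped to a [0, 1] desirability and the geometric mean is maximised. The fitted response models, targets and ranges below are invented placeholders.

```python
# Sketch of the desirability-function approach on invented response models.
import numpy as np

def d_max(y, lo, hi):          # larger-is-better desirability
    return np.clip((y - lo) / (hi - lo), 0.0, 1.0)

def d_min(y, lo, hi):          # smaller-is-better desirability
    return np.clip((hi - y) / (hi - lo), 0.0, 1.0)

def overall_D(x):
    ns, sp = x                 # nano-silica %, superplasticizer %
    strength = 40 + 8 * ns - 1.2 * ns ** 2 + 3 * sp          # assumed models
    porosity = 18 - 2.5 * ns + 0.3 * ns ** 2 - 0.8 * sp
    flow = 240 + 15 * sp - 4 * ns
    d = [d_max(strength, 35, 60), d_min(porosity, 8, 18), d_max(flow, 230, 280)]
    return np.prod(d) ** (1 / 3)                             # geometric mean

# Grid search over the design space for the most desirable settings.
grid = [(ns, sp) for ns in np.linspace(0, 4, 41) for sp in np.linspace(0, 2, 21)]
best = max(grid, key=lambda x: overall_D(np.array(x)))
print("best (nano-silica %, SP %):", best, "D =", round(float(overall_D(np.array(best))), 3))
```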
The Flipped Classroom for pre-clinical dental skills teaching - a reflective commentary.
Crothers, A J; Bagg, J; McKerlie, R
2017-05-12
A Flipped Classroom method for teaching of adult practical pre-clinical dental skills was introduced to the BDS curriculum in Glasgow during the 2015/2016 academic session. This report provides a commentary of the first year of employing this method - from the identification of the need to optimise teaching resources, through the planning, implementation and development of the method, with an early indication of performance.
Restivo, Annalaura; Degano, Ilaria; Ribechini, Erika; Colombini, Maria Perla
2014-01-01
A method for the HPLC-MS/MS analysis of phenols, including phenolic acids and naphthoquinones, using an amide-embedded phase column was developed and compared to literature methods based on classical C18 stationary phase columns. RP-Amide is a recently developed polar embedded stationary phase, whose wetting properties mean that up to 100% water can be used as an eluent. The increased retention and selectivity for polar compounds and the possibility of working in 100% water conditions make this column particularly interesting for the HPLC analysis of phenolic acids and derivatives. In this study, the chromatographic separation was optimised on an HPLC-DAD and was used to separate 13 standard phenolic acids and derivatives. The method was validated on an HPLC-ESI-Q-ToF. The acquisition was performed in negative polarity and MS/MS target mode. Ionisation conditions and acquisition parameters for the Q-ToF detector were investigated by working on collision energies and fragmentor potentials. The performance of the method was fully evaluated on standards. Moreover, several raw materials containing phenols were analysed: walnut, gall, wine, malbec grape, French oak, red henna and propolis. Our method allowed us to characterise the phenolic composition in a wide range of matrices and to highlight possible matrix effects.
NASA Astrophysics Data System (ADS)
Zhu, Kaiqun; Song, Yan; Zhang, Sunjie; Zhong, Zhaozhun
2017-07-01
In this paper, a non-fragile observer-based output feedback control problem for polytopic uncertain systems under a distributed model predictive control (MPC) approach is discussed. By decomposing the global system into subsystems, the computational complexity is reduced, so the online design time can be saved. Moreover, an observer-based output feedback control algorithm is proposed in the framework of distributed MPC to deal with the difficulty of obtaining state measurements. In this way, the presented observer-based output-feedback MPC strategy is more flexible and applicable in practice than the traditional state-feedback one. Furthermore, the non-fragility of the controller is taken into consideration to increase the robustness of the polytopic uncertain system. A sufficient stability criterion is then presented using a Lyapunov-like functional approach; meanwhile, the corresponding control law and the upper bound of the quadratic cost function are derived by solving an optimisation problem subject to convex constraints. Finally, simulation examples are employed to show the effectiveness of the method.
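As a much-simplified illustration of solving one MPC step as a convex optimisation problem, the sketch below poses a finite-horizon quadratic cost with input constraints for a single toy subsystem using cvxpy. It is centralised and state-feedback, so it omits the paper's observer, non-fragility and distributed aspects entirely.

```python
# Toy receding-horizon MPC step: quadratic cost under convex input constraints.
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # double-integrator-like subsystem
B = np.array([[0.005], [0.1]])
Q, R, N = np.eye(2), np.eye(1), 20        # stage weights, horizon length

x0 = np.array([1.0, 0.0])
x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
cost, cons = 0, [x[:, 0] == x0]
for k in range(N):
    cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
    cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
             cp.abs(u[:, k]) <= 0.5]      # convex input constraint

prob = cp.Problem(cp.Minimize(cost), cons)
prob.solve()
print("first control move:", u.value[:, 0])  # apply, then re-solve next step
```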
Optimisation des structures métalliques fléchies dans un calcul plastique
NASA Astrophysics Data System (ADS)
Geara, F.; Raphael, W.; Kaddah, F.
2005-05-01
Steel construction is widely used in civil engineering. During the design study and the subsequent execution and erection of a steel structure, the conceptual phase is often the source of discontinuities that prevent global optimisation of the steel material. In this study, we used the traditional optimisation approach, which is essentially based on minimising the weight of the structure while exploiting the plastic properties of steel in bending. This was made possible by the relation found between the cross-sectional areas of the steel elements and the plastic moments of those sections. These relations were derived for different types of steel. To take advantage of linear programming, the relations were simplified into linear form, which allows simple methods such as the simplex algorithm to be used. This procedure proves very useful in the early design phases and gives promising results.
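A minimal sketch of the resulting linear programme is given below: minimise member weight subject to linearised plastic-collapse mechanism constraints, solved with scipy's LP routine. The Mp ≈ k·A coefficients, member lengths and mechanism inequalities are illustrative assumptions, not the paper's data.

```python
# Minimum-weight plastic design as a linear programme (illustrative numbers).
import numpy as np
from scipy.optimize import linprog

# Two member groups (e.g. beams and columns); decision variables are areas A1, A2.
k = np.array([95.0, 80.0])    # linearised Mp = k*A (kN*m per cm^2), assumed
L = np.array([6.0, 4.0])      # total member length per group (m)
rho = 0.785                   # kg per (cm^2 * m), from steel density 7850 kg/m^3

# Plastic collapse mechanisms give linear inequalities sum_i c_ij * Mp_i >= W_j:
#   mechanism 1: 2*Mp1 + 2*Mp2 >= 400 kN*m;  mechanism 2: 4*Mp1 >= 520 kN*m
A_ub = -np.array([[2 * k[0], 2 * k[1]],
                  [4 * k[0], 0.0]])        # negated for linprog's <= convention
b_ub = -np.array([400.0, 520.0])

res = linprog(c=rho * L, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * 2, method="highs")
print("areas (cm^2):", res.x, " weight objective (kg):", res.fun)
```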
An efficient fermentation method for the degradation of cyanogenic glycosides in flaxseed.
Wu, C-F; Xu, X-M; Huang, S-H; Deng, M-C; Feng, A-J; Peng, J; Yuan, J-P; Wang, J-H
2012-01-01
Recently, flaxseed has become increasingly popular in the health food market because it contains a considerable amount of specific beneficial nutrients such as lignans and omega-3 fatty acids. However, the presence of cyanogenic glycosides (CGs) in flaxseed severely limits the exploitation of its health benefits and nutritive value. We therefore developed an effective fermentation method, optimised by response surface methodology (RSM), for degrading CGs with an enzymatic preparation that includes 12.5% β-glucosidase and 8.9% cyanide hydratase. The optimised conditions resulted in a maximum CG degradation level of 99.3%, reducing the concentration of cyanide in the flaxseed powder from 1.156 to 0.015 mg g(-1) after 48 h of fermentation. Avoiding the use of steam heat to evaporate hydrocyanic acid (HCN) results in lower energy consumption and no environmental pollution. In addition, the detoxified flaxseed retained the beneficial nutrients, lignans and fatty acids, at the same level as untreated flaxseed, and this method could provide a new means of removing CGs from other edible plants, such as cassava, almond and sorghum, by simultaneously expressing cyanide hydratase and β-glucosidase.
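As a generic illustration of the RSM step, the sketch below fits a quadratic response surface to synthetic degradation data and solves for its stationary point. The factors, model and data are stand-ins; the study's actual design and coefficients are not reproduced here.

```python
# Quadratic response-surface fit and stationary point on synthetic data.
import numpy as np

rng = np.random.default_rng(3)
# Factors: enzyme dose (x1) and fermentation time (x2), in coded units.
X = rng.uniform(-1, 1, (30, 2))
y = (99 - 4 * (X[:, 0] - 0.3) ** 2 - 3 * (X[:, 1] - 0.5) ** 2
     + rng.normal(0, 0.2, 30))                  # % CG degradation

# Design matrix for y = b0 + b1 x1 + b2 x2 + b11 x1^2 + b22 x2^2 + b12 x1 x2.
D = np.column_stack([np.ones(30), X[:, 0], X[:, 1],
                     X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
b, *_ = np.linalg.lstsq(D, y, rcond=None)

# Stationary point: solve [2*b11, b12; b12, 2*b22] x = -[b1; b2].
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
x_opt = np.linalg.solve(H, -b[1:3])
print("optimum (coded units):", x_opt.round(2))
```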
Mutual information-based LPI optimisation for radar network
NASA Astrophysics Data System (ADS)
Shi, Chenguang; Zhou, Jianjiang; Wang, Fei; Chen, Jun
2015-07-01
Radar networks can offer significant performance improvements for target detection and information extraction by employing spatial diversity. For a fixed number of radars, the achievable mutual information (MI) for estimating the target parameters may extend beyond a predefined threshold with full power transmission. In this paper, an effective low probability of intercept (LPI) optimisation algorithm is presented to improve the LPI performance of a radar network. Based on the radar network system model, we first adopt the Schleher intercept factor of the network as the optimisation metric for LPI performance. Then, a novel LPI optimisation algorithm is presented in which, for a predefined MI threshold, the Schleher intercept factor of the network is minimised by optimising the transmission power allocation among the radars, so that enhanced LPI performance is achieved. A genetic algorithm based on nonlinear programming (GA-NP) is employed to solve the resulting nonconvex and nonlinear optimisation problem. Simulations demonstrate that the proposed algorithm is valuable and effective in improving the LPI performance of the radar network.
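The structure of the constrained allocation problem can be sketched as follows: minimise a total-power proxy for the Schleher intercept factor subject to a mutual-information floor. The MI model, gains and threshold are illustrative, and a standard SLSQP solver stands in for the paper's GA-NP.

```python
# Sketch of LPI power allocation with a mutual-information constraint.
import numpy as np
from scipy.optimize import minimize

n = 4                                    # radars in the network
g = np.array([1.0, 0.8, 1.3, 0.6])       # per-radar SNR gains (assumed)

def mutual_info(p):                      # MI grows with log(1 + SNR)
    return np.sum(np.log2(1.0 + g * p))

intercept = lambda p: p.sum()            # intercept-factor proxy: total power

res = minimize(intercept, x0=np.full(n, 1.0), method="SLSQP",
               bounds=[(0.0, 2.0)] * n,
               constraints=[{"type": "ineq",
                             "fun": lambda p: mutual_info(p) - 3.0}])  # MI >= 3 bits
print("power allocation:", res.x.round(3), " MI:", round(mutual_info(res.x), 3))
```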
Breaking free from chemical spreadsheets.
Segall, Matthew; Champness, Ed; Leeding, Chris; Chisholm, James; Hunt, Peter; Elliott, Alex; Garcia-Martinez, Hector; Foster, Nick; Dowling, Samuel
2015-09-01
Drug discovery scientists often consider compounds and data in terms of groups, such as chemical series, and relationships, representing similarity or structural transformations, to aid compound optimisation. This is often supported by chemoinformatics algorithms, for example clustering and matched molecular pair analysis. However, chemistry software packages commonly present these data as spreadsheets or form views that make it hard to find relevant patterns or compare related compounds conveniently. Here, we review common data visualisation and analysis methods used to extract information from chemistry data. We introduce a new framework that enables scientists to work flexibly with drug discovery data to reflect their thought processes and interact with the output of algorithms to identify key structure-activity relationships and guide further optimisation intuitively. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Webb, Matthew; Tang, Hua
2016-08-01
In the past decade or two, due to constant and rapid technology changes, analog design re-use, or design retargeting to newer technologies, has been brought to the table in order to expedite the design process and improve time-to-market. If properly conducted, analog design retargeting can significantly cut down the design cycle compared to designs started from scratch. In this article, we present an empirical and general method for efficient analog design retargeting by design knowledge re-use and circuit synthesis (CS). The method first identifies the circuit blocks that compose the source system and extracts the performance parameter specifications of each circuit block. Then, for each circuit block, it scales the values of the design variables (DV) from the source design to derive an initial design in the target technology. Depending on the performance of this initial target design, a design space is defined for synthesis. Subsequently, each circuit block is automatically synthesised using state-of-the-art analog synthesis tools based on a combination of global and local optimisation techniques, to achieve performance specifications comparable to those extracted from the source system. Finally, the overall system is composed of the synthesised circuit blocks in the target technology. We illustrate the method using a practical example of a complex Delta-Sigma modulator (DSM) circuit.
McLellan, Iain; Hursthouse, Andrew; Morrison, Calum; Varela, Adélia; Pereira, Cristina Silva
2014-02-01
A major concern for the cork and wine industry is 'cork taint', which is associated with chloroanisoles, the microbial degradation metabolites of chlorophenols. The use of chlorophenolic compounds as pesticides within cork forests was prohibited in the European Union (EU) in 1993, following the introduction of industry guidance. However, cork produced outside the EU is still thought to be affected, and simple, robust methods for chlorophenol analysis are required for wider environmental assessment by industry and local environmental regulators. Soil samples were collected from three common-use forests in Tunisia and from one privately owned forest in Sardinia, providing examples of varied management practice and degree of human intervention. These provided challenge samples for the optimisation of an HPLC-UV detection method, which produced recoveries consistently >75% against a soil CRM (ERM-CC008) for pentachlorophenol. The optimised method, with ultraviolet (diode array) detection, is able to separate and quantify 16 different chlorophenols at field concentrations above limits of detection ranging from 6.5 to 191.3 μg/kg (dry weight). Application to a range of field samples demonstrated the absence of widespread contamination in the forest soils sampled in Sardinia and Tunisia.
Optimisation of nasal swab analysis by liquid scintillation counting.
Dai, Xiongxin; Liblong, Aaron; Kramer-Tremblay, Sheila; Priest, Nicholas; Li, Chunsheng
2012-06-01
When responding to an emergency radiological incident, rapid methods are needed to provide physicians and radiation protection personnel with an early estimate of the possible internal dose resulting from the inhalation of radionuclides. This information is needed so that appropriate medical treatment and radiological protection control procedures can be implemented. Nasal swab analysis, in which swabs swiped inside a nostril are followed by liquid scintillation counting of the alpha and beta activity on the swab, can provide valuable information to quickly identify contamination of the affected population. In this study, various parameters (such as alpha/beta discrimination, swab materials, counting time and volume of scintillation cocktail) were evaluated in order to optimise the effectiveness of the nasal swab analysis method. An improved nasal swab procedure was developed by replacing cotton swabs with polyurethane-tipped swabs. Liquid scintillation counting was performed using a Hidex 300SL counter with alpha/beta pulse shape discrimination capability. Results show that the new method is more reliable than existing methods using cotton swabs and effectively meets the analysis requirements for screening personnel in an emergency situation. This swab analysis procedure is also applicable to wipe tests of surface contamination, to minimise the effect of source self-absorption on liquid scintillation counting.
Cha, Thye San; Yee, Willy; Aziz, Ahmad
2012-04-01
The successful establishment of an Agrobacterium-mediated transformation method and the optimisation of six critical parameters known to influence the efficacy of Agrobacterium T-DNA transfer in the unicellular microalga Chlorella vulgaris (UMT-M1) are reported. Agrobacterium tumefaciens strain LBA4404, harbouring the binary vector pCAMBIA1304 containing the gfp:gusA fusion reporter and a hygromycin phosphotransferase (hpt) selectable marker driven by the CaMV35S promoter, was used for transformation. Transformation frequency was assessed by monitoring transient β-glucuronidase (GUS) expression 2 days post-infection. It was found that a co-cultivation temperature of 24°C, a co-cultivation medium at pH 5.5, 3 days of co-cultivation, 150 μM acetosyringone, an Agrobacterium density of 1.0 units (OD(600)) and 2 days of pre-culture were the optimum variables, producing the highest proportions of GUS-positive cells (8.8-20.1%) when each parameter was optimised individually. Transformation conducted with the combination of all of the optimal parameters above produced 25.0% GUS-positive cells, almost a threefold increase from the 8.9% obtained with un-optimised parameters. Evidence of transformation was further confirmed in 30% of 30 randomly selected hygromycin B (20 mg L(-1)) resistant colonies by polymerase chain reaction (PCR) using gfp:gusA and hpt-specific primers. The developed transformation method is expected to facilitate the genetic improvement of this commercially important microalga.
Probabilistic Sizing and Verification of Space Ceramic Structures
NASA Astrophysics Data System (ADS)
Denaux, David; Ballhause, Dirk; Logut, Daniel; Lucarelli, Stefano; Coe, Graham; Laine, Benoit
2012-07-01
Sizing of ceramic parts is best optimised using a probabilistic approach which takes into account the preexisting flaw distribution in the ceramic part to compute a probability of failure of the part depending on the applied load, instead of a maximum allowable load as for a metallic part. This requires extensive knowledge of the material itself but also an accurate control of the manufacturing process. In the end, risk reduction approaches such as proof testing may be used to lower the final probability of failure of the part. Sizing and verification of ceramic space structures have been performed by Astrium for more than 15 years, both with Zerodur and SiC: Silex telescope structure, Seviri primary mirror, Herschel telescope, Formosat-2 instrument, and other ceramic structures flying today. Throughout this period of time, Astrium has investigated and developed experimental ceramic analysis tools based on the Weibull probabilistic approach. In the scope of the ESA/ESTEC study: “Mechanical Design and Verification Methodologies for Ceramic Structures”, which is to be concluded in the beginning of 2012, existing theories, technical state-of-the-art from international experts, and Astrium experience with probabilistic analysis tools have been synthesized into a comprehensive sizing and verification method for ceramics. Both classical deterministic and more optimised probabilistic methods are available, depending on the criticality of the item and on optimisation needs. The methodology, based on proven theory, has been successfully applied to demonstration cases and has shown its practical feasibility.
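A textbook two-parameter Weibull sketch of the key quantities, the probability of failure and the benefit of a proof test, is given below; the modulus, characteristic strength and stress values are illustrative, not Astrium's qualified data.

```python
# Weibull probability of failure for a ceramic part, with proof-test update.
import numpy as np

def p_fail(sigma, m=10.0, sigma0=300.0, v_ratio=1.0):
    """P_f = 1 - exp[-(V/V0) * (sigma/sigma0)^m] for uniform stress sigma (MPa)."""
    return 1.0 - np.exp(-v_ratio * (sigma / sigma0) ** m)

def p_fail_after_proof(sigma, sigma_proof, **kw):
    """Conditional failure probability given survival of a proof test:
    P(fail at sigma | survived sigma_proof) = 1 - S(sigma)/S(sigma_proof)."""
    if sigma <= sigma_proof:
        return 0.0
    return 1.0 - (1.0 - p_fail(sigma, **kw)) / (1.0 - p_fail(sigma_proof, **kw))

print("P_f at 200 MPa:            %.2e" % p_fail(200.0))
print("after 180 MPa proof test:  %.2e" % p_fail_after_proof(200.0, 180.0))
```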
Tredwin, Christopher J; Young, Anne M; Georgiou, George; Shin, Song-Hee; Kim, Hae-Won; Knowles, Jonathan C
2013-02-01
Currently, most titanium implant coatings are made using hydroxyapatite and a plasma spraying technique. There are, however, limitations associated with plasma spraying processes, including poor adherence, high porosity and cost. An alternative method utilising the sol-gel technique offers many potential advantages but currently lacks research data for this application. The objective of this study was to characterise and optimise the production of hydroxyapatite (HA), fluorhydroxyapatite (FHA) and fluorapatite (FA) using a sol-gel technique, and to assess the rheological properties of these materials. HA, FHA and FA were synthesised by a sol-gel method. Calcium nitrate and triethylphosphite were used as precursors in an ethanol-water based solution. Different amounts of ammonium fluoride (NH4F) were incorporated for the preparation of the sol-gel derived FHA and FA. Optimisation of the chemistry and subsequent characterisation of the sol-gel derived materials were carried out using X-ray diffraction (XRD) and differential thermal analysis (DTA). The rheology of the sol-gels was investigated using a viscometer and contact angle measurements. A protocol was established that allowed synthesis of HA, FHA and FA that were at least 99% phase pure. The more fluoride incorporated into the apatite structure, the lower the crystallisation temperature, the smaller the unit cell size (changes in the a-axis), and the higher the viscosity and contact angle of the sol-gel derived apatite. A technique has been developed for the production of HA, FHA and FA by the sol-gel technique. Increasing fluoride substitution in the apatite structure alters the potential coating properties. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.
Intelligent inversion method for pre-stack seismic big data based on MapReduce
NASA Astrophysics Data System (ADS)
Yan, Xuesong; Zhu, Zhixin; Wu, Qinghua
2018-01-01
Seismic exploration is a method of oil exploration that uses seismic data: by inverting the seismic data, useful information about the reservoir parameters can be obtained to guide exploration effectively. Pre-stack data are characterised by large data volumes and rich information content, and their inversion yields detailed reservoir parameters. Owing to the sheer volume of pre-stack seismic data, existing single-machine environments cannot meet the computational needs; thus, an efficient and fast method for solving the inversion problem of pre-stack seismic data is urgently needed. Optimisation of the elastic parameters using a genetic algorithm easily falls into a local optimum, which degrades the inversion results, especially for the density. Therefore, an intelligent optimisation algorithm is proposed in this paper and used for the elastic parameter inversion of pre-stack seismic data. This algorithm improves the population initialisation strategy by using the Gardner formula and improves the genetic operators of the algorithm, and the improved algorithm obtains better inversion results in a model test with logging data. The elastic parameters obtained by inversion fit the logging curves of the theoretical model well, which effectively improves the inversion precision of the density. The algorithm was implemented with a MapReduce model to solve the seismic big-data inversion problem. The experimental results show that the parallel model can effectively reduce the running time of the algorithm.
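The Gardner-based initialisation can be sketched directly: density candidates are seeded from the widely used empirical relation ρ ≈ 0.31·Vp^0.25 (Vp in m/s, ρ in g/cc) and perturbed to form the initial GA population. The perturbation scale and layer velocities below are assumptions for illustration.

```python
# Seeding a GA population's density genes from Vp via Gardner's relation.
import numpy as np

rng = np.random.default_rng(4)

def init_density(vp, pop_size=50, spread=0.05):
    """Return pop_size density candidates per layer, centred on Gardner."""
    rho_gardner = 0.31 * vp ** 0.25            # g/cc, with Vp in m/s
    noise = rng.normal(1.0, spread, (pop_size, vp.size))
    return rho_gardner * noise                 # perturbed initial population

vp = np.array([2500.0, 3200.0, 4100.0])        # layer velocities (m/s), assumed
pop = init_density(vp)
print("Gardner densities (g/cc):", (0.31 * vp ** 0.25).round(3))
print("population shape:", pop.shape)
```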
Evaluation of FSK models for radiative heat transfer under oxyfuel conditions
NASA Astrophysics Data System (ADS)
Clements, Alastair G.; Porter, Rachael; Pranzitelli, Alessandro; Pourkashanian, Mohamed
2015-01-01
Oxyfuel is a promising technology for carbon capture and storage (CCS) applied to combustion processes. It would be highly advantageous for the deployment of CCS to be able to model and optimise oxyfuel combustion; however, the increased concentrations of CO2 and H2O under oxyfuel conditions modify several fundamental processes of combustion, including radiative heat transfer. This study uses benchmark narrow-band radiation models to evaluate the influence of the assumptions in global full-spectrum k-distribution (FSK) models and whether they are suitable for modelling radiation in computational fluid dynamics (CFD) calculations of oxyfuel combustion. The statistical narrow band (SNB) and correlated-k (CK) models are used to calculate benchmark data for the radiative source term and heat flux, which are then compared to the results calculated from FSK models. Both the full-spectrum correlated-k (FSCK) and the full-spectrum scaled-k (FSSK) models are applied using up-to-date spectral data. The results show that the FSCK and FSSK methods achieve good agreement in the test cases. The FSCK method using a five-point Gauss quadrature scheme is recommended for CFD calculations under oxyfuel conditions; however, there are still potential inaccuracies in cases with very wide variations in the ratio between CO2 and H2O concentrations.
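The five-point quadrature mentioned above amounts to evaluating the reordered spectral integral at a handful of Gauss points in the cumulative k-distribution variable g ∈ [0, 1]. The sketch below uses an invented k(g) curve purely to show the mechanics.

```python
# FSK mechanics: integrate over the cumulative k-distribution with 5 Gauss points.
import numpy as np

def k_of_g(g):
    """Monotone reordered absorption coefficient (1/m), illustrative shape."""
    return 0.01 * np.exp(6.0 * g)

# Map 5-point Gauss-Legendre nodes from [-1, 1] to g in [0, 1].
nodes, weights = np.polynomial.legendre.leggauss(5)
g = 0.5 * (nodes + 1.0)
w = 0.5 * weights

L = 1.0                                           # path length (m)
emissivity = np.sum(w * (1.0 - np.exp(-k_of_g(g) * L)))
print("path emissivity via 5-point quadrature: %.4f" % emissivity)
```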
The development of response surface pathway design to reduce animal numbers in toxicity studies
2014-01-01
Background This study describes the development of Response Surface Pathway (RSP) design, assesses its performance and effectiveness in estimating LD50, and compares RSP with Up and Down Procedures (UDPs) and Random Walk (RW) design. Methods A basic 4-level RSP design was used on 36 male ICR mice given intraperitoneal doses of Yessotoxin. Simulations were performed to optimise the design. A k-adjustment factor was introduced to ensure coverage of the dose window and calculate the dose steps. Instead of using equal numbers of mice on all levels, the number of mice was increased at each design level. Additionally, the binomial outcome variable was changed to multinomial. The performance of the RSP designs and a comparison of UDPs and RW were assessed by simulations. The optimised 4-level RSP design was used on 24 female NMRI mice given Azaspiracid-1 intraperitoneally. Results The in vivo experiment with basic 4-level RSP design estimated the LD50 of Yessotoxin to be 463 μg/kgBW (95% CI: 383–535). By inclusion of the k-adjustment factor with equal or increasing numbers of mice on increasing dose levels, the estimate changed to 481 μg/kgBW (95% CI: 362–566) and 447 μg/kgBW (95% CI: 378–504 μg/kgBW), respectively. The optimised 4-level RSP estimated the LD50 to be 473 μg/kgBW (95% CI: 442–517). A similar increase in power was demonstrated using the optimised RSP design on real Azaspiracid-1 data. The simulations showed that the inclusion of the k-adjustment factor, reduction in sample size by increasing the number of mice on higher design levels and incorporation of a multinomial outcome gave estimates of the LD50 that were as good as those with the basic RSP design. Furthermore, optimised RSP design performed on just three levels reduced the number of animals from 36 to 15 without loss of information, when compared with the 4-level designs. Simulated comparison of the RSP design with UDPs and RW design demonstrated the superiority of RSP. Conclusion Optimised RSP design reduces the number of animals needed. The design converges rapidly on the area of interest and is at least as efficient as both the UDPs and RW design. PMID:24661560
Automated identification and tracking of polar-cap plasma patches at solar minimum
NASA Astrophysics Data System (ADS)
Burston, R.; Hodges, K.; Astin, I.; Jayachandran, P. T.
2014-03-01
A method of automatically identifying and tracking polar-cap plasma patches, utilising data inversion and feature-tracking methods, is presented. A well-established and widely used 4-D ionospheric imaging algorithm, the Multi-Instrument Data Assimilation System (MIDAS), inverts slant total electron content (TEC) data from ground-based Global Navigation Satellite System (GNSS) receivers to produce images of the free electron distribution in the polar-cap ionosphere. These are integrated to form vertical TEC maps. A flexible feature-tracking algorithm, TRACK, previously used extensively in meteorological storm-tracking studies is used to identify and track maxima in the resulting 2-D data fields. Various criteria are used to discriminate between genuine patches and "false-positive" maxima such as the continuously moving day-side maximum, which results from the Earth's rotation rather than plasma motion. Results for a 12-month period at solar minimum, when extensive validation data are available, are presented. The method identifies 71 separate structures consistent with patch motion during this time. The limitations of solar minimum and the consequent small number of patches make climatological inferences difficult, but the feasibility of the method for patches larger than approximately 500 km in scale is demonstrated and a larger study incorporating other parts of the solar cycle is warranted. Possible further optimisation of discrimination criteria, particularly regarding the definition of a patch in terms of its plasma concentration enhancement over the surrounding background, may improve results.
Rotational degree-of-freedom synthesis: An optimised finite difference method for non-exact data
NASA Astrophysics Data System (ADS)
Gibbons, T. J.; Öztürk, E.; Sims, N. D.
2018-01-01
Measuring the rotational dynamic behaviour of a structure is important for many areas of dynamics such as passive vibration control, acoustics, and model updating. Specialist and dedicated equipment is often needed, unless the rotational degree-of-freedom is synthesised based upon translational data. However, this involves numerically differentiating the translational mode shapes to approximate the rotational modes, for example using a finite difference algorithm. A key challenge with this approach is choosing the measurement spacing between the data points, an issue which has often been overlooked in the published literature. The present contribution will for the first time prove that the use of a finite difference approach can be unstable when using non-exact measured data and a small measurement spacing, for beam-like structures. Then, a generalised analytical error analysis is used to propose an optimised measurement spacing, which balances the numerical error of the finite difference equation with the propagation error from the perturbed data. The approach is demonstrated using both numerical and experimental investigations. It is shown that by obtaining a small number of test measurements it is possible to optimise the measurement accuracy, without any further assumptions on the boundary conditions of the structure.
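The trade-off the paper formalises can be seen in a short worked example: for a central difference on noisy data, the truncation error grows as h²·M₃/6 while the noise error shrinks as ε/h, so the total error h²·M₃/6 + ε/h is minimised at h* = (3ε/M₃)^(1/3). The sketch below checks this numerically on f = sin with assumed M₃ and ε.

```python
# Optimal central-difference spacing under measurement noise, checked on sin.
import numpy as np

M3 = 1.0           # bound on |f'''| for f = sin (assumed)
eps = 1e-6         # measurement noise amplitude (assumed)

h_opt = (3.0 * eps / M3) ** (1.0 / 3.0)
print("predicted optimal spacing:", h_opt)

rng = np.random.default_rng(5)
x = 1.0
for h in [1e-4, h_opt, 1e-1]:
    fp = np.sin(x + h) - np.sin(x - h)             # exact samples of f = sin
    fp += rng.uniform(-eps, eps) - rng.uniform(-eps, eps)  # add noise
    est = fp / (2.0 * h)                           # central-difference estimate
    print(f"h={h:.2e}  error={abs(est - np.cos(x)):.2e}")
```

Too small a spacing amplifies the noise, too large a spacing incurs truncation error; the printed errors should be smallest near h_opt.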
Haworth, Annette; Mears, Christopher; Betts, John M; Reynolds, Hayley M; Tack, Guido; Leo, Kevin; Williams, Scott; Ebert, Martin A
2016-01-07
Treatment plans for ten patients, initially treated with a conventional approach to low dose-rate brachytherapy (LDR, 145 Gy to the entire prostate), were compared with plans for the same patients created with an inverse-optimisation planning process utilising a biologically-based objective. The 'biological optimisation' considered a non-uniform distribution of tumour cell density through the prostate based on known and expected locations of the tumour. Using dose planning objectives derived from our previous biological-model validation study, the volume of the urethra receiving 125% of the conventional prescription (145 Gy) was reduced from a median value of 64% to less than 8%, whilst maintaining high values of tumour control probability (TCP). On average, the number of planned seeds was reduced from 85 to less than 75. The robustness of plans to random seed displacements needs to be carefully considered when using contemporary seed placement techniques. We conclude that an inverse planning approach to LDR treatments, based on a biological objective, has the potential to maintain high rates of tumour control whilst minimising dose to healthy tissue. In future, the radiobiological model will be informed using multi-parametric MRI to provide a personalised medicine approach.
Tang, Phooi Wah; Choon, Yee Wen; Mohamad, Mohd Saberi; Deris, Safaai; Napis, Suhaimi
2015-03-01
Metabolic engineering is a research field that focuses on the design of models for metabolism and uses computational procedures to suggest genetic manipulations. It aims to improve the yield of particular chemical or biochemical products. Several traditional metabolic engineering methods are commonly used to increase the production of a desired target, but the products are always far below their theoretical maximums. Using numerical optimisation algorithms to identify gene knockouts may stall at a local minimum of a multivariable function. This paper proposes a hybrid of the artificial bee colony (ABC) algorithm and the minimisation of metabolic adjustment (MOMA) to predict an optimal set of solutions in order to optimise the production rates of succinate and lactate. The dataset used in this work was from the iJO1366 Escherichia coli metabolic network. The experimental results include the production rate, growth rate and a list of knockout genes. From the comparative analysis, ABCMOMA produced better results compared to previous works, showing potential for solving genetic engineering problems. Copyright © 2014 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
A novel swarm intelligence algorithm for finding DNA motifs.
Lei, Chengwei; Ruan, Jianhua
2009-01-01
Discovering DNA motifs from co-expressed or co-regulated genes is an important step towards deciphering complex gene regulatory networks and understanding gene functions. Despite significant improvement in the last decade, it still remains one of the most challenging problems in computational molecular biology. In this work, we propose a novel motif finding algorithm that finds consensus patterns using a population-based stochastic optimisation technique called Particle Swarm Optimisation (PSO), which has been shown to be effective in optimising difficult multidimensional problems in continuous domains. We propose to use a word dissimilarity graph to remap the neighborhood structure of the solution space of DNA motifs, and propose a modification of the naive PSO algorithm to accommodate discrete variables. In order to improve efficiency, we also propose several strategies for escaping from local optima and for automatically determining the termination criteria. Experimental results on simulated challenge problems show that our method is both more efficient and more accurate than several existing algorithms. Applications to several sets of real promoter sequences also show that our approach is able to detect known transcription factor binding sites, and outperforms two of the most popular existing algorithms.
Infrastructure optimisation via MBR retrofit: a design guide.
Bagg, W K
2009-01-01
Wastewater management is continually evolving with the development and implementation of new, more efficient technologies. One of these is the Membrane Bioreactor (MBR). Although a relatively new technology in Australia, MBR wastewater treatment has been widely used elsewhere for over 20 years, with thousands of MBRs now in operation worldwide. Over the past 5 years, MBR technology has been enthusiastically embraced in Australia as a potential treatment upgrade option, and via retrofit typically offers two major benefits: (1) more capacity using mostly existing facilities, and (2) very high quality treated effluent. However, infrastructure optimisation via MBR retrofit is not a simple or low-cost solution and there are many factors which should be carefully evaluated before deciding on this method of plant upgrade. The paper reviews a range of design parameters which should be carefully evaluated when considering an MBR retrofit solution. Several actual and conceptual case studies are considered to demonstrate both advantages and disadvantages. Whilst optimising existing facilities and production of high quality water for reuse are powerful drivers, it is suggested that MBRs are perhaps not always the most sustainable Whole-of-Life solution for a wastewater treatment plant upgrade, especially by way of a retrofit.
Application of the adjoint optimisation of shock control bump for ONERA-M6 wing
NASA Astrophysics Data System (ADS)
Nejati, A.; Mazaheri, K.
2017-11-01
This article is devoted to the numerical investigation of the shock wave/boundary layer interaction (SWBLI) as the main factor influencing the aerodynamic performance of transonic bumped airfoils and wings. The numerical analysis is conducted for the ONERA-M6 wing through a shock control bump (SCB) shape optimisation process using the adjoint optimisation method. SWBLI is analyzed for both clean and bumped airfoils and wings, and it is shown how the modified wave structure originating from upstream of the SCB reduces the wave drag, by improving the boundary layer velocity profile downstream of the shock wave. The numerical simulation of the turbulent viscous flow and a gradient-based adjoint algorithm are used to find the optimum location and shape of the SCB for the ONERA-M6 airfoil and wing. Two different geometrical models are introduced for the 3D SCB, one with linear variations, and another with periodic variations. Both configurations result in drag reduction and improvement in the aerodynamic efficiency, but the periodic model is more effective. Although the three-dimensional flow structure involves much more complexities, the overall results are shown to be similar to the two-dimensional case.
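For reference, the discrete adjoint gradient used in this class of methods takes the standard form below (a general formulation, not the authors' specific implementation); one adjoint solve yields sensitivities with respect to all SCB shape parameters at once.

```latex
% J(\alpha, u): aerodynamic objective; R(\alpha, u) = 0: discretised flow residual;
% \alpha: SCB design variables; \lambda: adjoint variable.
\left(\frac{\partial R}{\partial u}\right)^{T}\lambda
    = \left(\frac{\partial J}{\partial u}\right)^{T},
\qquad
\frac{\mathrm{d}J}{\mathrm{d}\alpha}
    = \frac{\partial J}{\partial \alpha}
    - \lambda^{T}\,\frac{\partial R}{\partial \alpha}.
```

Because the cost of the adjoint solve is essentially independent of the number of design variables, gradient-based shape optimisation remains tractable even for finely parameterised 3D bumps.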
Computer-aided diagnosis of melanoma using border and wavelet-based texture analysis.
Garnavi, Rahil; Aldeen, Mohammad; Bailey, James
2012-11-01
This paper presents a novel computer-aided diagnosis system for melanoma. The novelty lies in the optimised selection and integration of features derived from the textural, border-based and geometrical properties of the melanoma lesion. The texture features are derived using wavelet decomposition, the border features are derived by constructing a boundary-series model of the lesion border and analysing it in the spatial and frequency domains, and the geometry features are derived from shape indexes. The optimised selection of features is achieved by using the Gain-Ratio method, which is shown to be computationally efficient for the melanoma diagnosis application. Classification is done through the use of four classifiers: Support Vector Machine, Random Forest, Logistic Model Tree and Hidden Naive Bayes. The proposed diagnostic system is applied to a set of 289 dermoscopy images (114 malignant, 175 benign) partitioned into train, validation and test image sets. The system achieves an accuracy of 91.26% and an AUC value of 0.937 when 23 features are used. Other important findings include (i) the clear advantage gained by complementing texture with border and geometry features, compared to using texture information only, and (ii) the higher contribution of texture features than border-based features in the optimised feature set.
NASA Astrophysics Data System (ADS)
Arjunan, V.; Santhanam, R.; Marchewka, M. K.; Mohan, S.; Yang, Haifeng
2015-11-01
Tapentadol is a novel opioid pain reliever with a dual mechanism of action, having a potency between those of morphine and tramadol. Quantum chemical calculations have been carried out for tapentadol hydrochloride (TAP.Cl) to determine its properties. The geometry was optimised, and the structural properties of the compound were determined from the optimised geometry by the B3LYP method using the 6-311++G(d,p), 6-31G(d,p) and cc-pVDZ basis sets. FT-IR and FT-Raman spectra were recorded in the solid phase in the regions of 4000-400 and 4000-100 cm-1, respectively. Frontier molecular orbital energies, the LUMO-HOMO energy gap, ionisation potential, electron affinity, electronegativity, hardness and chemical potential were also calculated. The stability of the molecule arising from hyperconjugative interactions and charge delocalisation has been analysed using NBO analysis. The 1H and 13C nuclear magnetic resonance chemical shifts of the molecule were analysed.
Arcos-García, Álvaro; Álvarez-García, Juan A; Soria-Morillo, Luis M
2018-03-01
This paper presents a Deep Learning approach for traffic sign recognition systems. Several classification experiments are conducted over publicly available traffic sign datasets from Germany and Belgium using a Deep Neural Network which comprises Convolutional layers and Spatial Transformer Networks. These trials are designed to measure the impact of diverse factors, with the end goal of designing a Convolutional Neural Network that can improve the state-of-the-art in the traffic sign classification task. First, different adaptive and non-adaptive stochastic gradient descent optimisation algorithms, such as SGD, SGD-Nesterov, RMSprop and Adam, are evaluated. Subsequently, multiple combinations of Spatial Transformer Networks placed at distinct positions within the main neural network are analysed. The recognition rate of the proposed Convolutional Neural Network reports an accuracy of 99.71% on the German Traffic Sign Recognition Benchmark, outperforming previous state-of-the-art methods and also being more efficient in terms of memory requirements. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, Haifeng; Zhu, Qing; Yang, Xiaoxia; Xu, Linrong
2012-10-01
Typical characteristics of remote sensing applications are concurrent tasks, such as those found in disaster rapid response. The existing composition approach for geographical information processing service chains searches for an optimal solution for each chain in what can be deemed a 'selfish' way. This leads to conflicts amongst concurrent tasks and decreases the performance of all service chains. In this study, a non-cooperative game-based mathematical model to analyse the competitive relationships between tasks is proposed. A best-response function is used to ensure that each task maintains utility optimisation by considering the composition strategies of other tasks and quantifying the conflicts between tasks. Based on this, an iterative algorithm that converges to a Nash equilibrium is presented, the aim being to provide good convergence and to maximise the utility of all tasks under concurrent conditions. Theoretical analyses and experiments show that the newly proposed method, when compared to existing service composition methods, has better practical utility across all tasks.
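Best-response dynamics of the kind described can be sketched in a few lines: each task repeatedly switches to the service minimising its own congestion-dependent latency given the others' choices, stopping when no task can improve (a Nash equilibrium). The latency model and problem sizes below are synthetic assumptions.

```python
# Best-response iteration for concurrent service selection (congestion game).
import numpy as np

rng = np.random.default_rng(6)
n_tasks, n_services = 6, 3
base = rng.uniform(1.0, 3.0, (n_tasks, n_services))  # task-service base latency

def latency(task, svc, choice):
    load = np.sum(choice == svc)                      # congestion on service svc
    return base[task, svc] * (1.0 + 0.5 * load)

choice = rng.integers(0, n_services, n_tasks)
for _ in range(100):
    changed = False
    for t in range(n_tasks):
        # Evaluate each service with task t hypothetically switched to it.
        costs = [latency(t, s, np.where(np.arange(n_tasks) == t, s, choice))
                 for s in range(n_services)]
        best = int(np.argmin(costs))
        if best != choice[t]:
            choice[t], changed = best, True
    if not changed:                                   # no profitable deviation
        break

print("equilibrium assignment:", choice)
```

Since this is a congestion game, best-response dynamics are guaranteed to terminate; the paper's contribution lies in designing the utility and quantifying conflicts so that the resulting equilibrium is good for all tasks.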
Aberl, A; Coelhan, M
2013-01-01
Sulfites are routinely added as preservatives and antioxidants in wine production. By law, the total sulfur dioxide content in wine is restricted and therefore must be monitored. Currently, the method of choice for determining the total content of sulfur dioxide in wine is the optimised Monier-Williams method, which is time consuming and laborious. The headspace gas chromatographic method described in this study offers a fast and reliable alternative method for the detection and quantification of the sulfur dioxide content in wine. The analysis was performed using an automatic headspace injection sampler, coupled with a gas chromatograph and an electron capture detector. The method is based on the formation of gaseous sulfur dioxide subsequent to acidification and heating of the sample. In addition to free sulfur dioxide, reversibly bound sulfur dioxide in carbonyl compounds, such as acetaldehyde, was also measured with this method. A total of 20 wine samples produced using diverse grape varieties and vintages of varied provenance were analysed using the new method. For reference and comparison purposes, 10 of the results obtained by the proposed method were compared with those acquired by the optimised Monier-Williams method. Overall, the results from the headspace analysis showed good correlation (R = 0.9985) when compared with the conventional method. This new method requires minimal sample preparation and is simple to perform, and the analysis can also be completed within a short period of time.
Bryant, Maria; Burton, Wendy; Cundill, Bonnie; Farrin, Amanda J; Nixon, Jane; Stevens, June; Roberts, Kim; Foy, Robbie; Rutter, Harry; Hartley, Suzanne; Tubeuf, Sandy; Collinson, Michelle; Brown, Julia
2017-01-24
Family-based interventions to prevent childhood obesity depend upon parents' taking action to improve diet and other lifestyle behaviours in their families. Programmes that attract and retain high numbers of parents provide an enhanced opportunity to improve public health and are also likely to be more cost-effective than those that do not. We have developed a theory-informed optimisation intervention to promote parent engagement within an existing childhood obesity prevention group programme, HENRY (Health Exercise Nutrition for the Really Young). Here, we describe a proposal to evaluate the effectiveness of this optimisation intervention in regard to the engagement of parents and cost-effectiveness. The Optimising Family Engagement in HENRY (OFTEN) trial is a cluster randomised controlled trial being conducted across 24 local authorities (approximately 144 children's centres) which currently deliver HENRY programmes. The primary outcome will be parental enrolment and attendance at the HENRY programme, assessed using routinely collected process data. Cost-effectiveness will be presented in terms of primary outcomes using acceptability curves and through eliciting the willingness to pay for the optimisation from HENRY commissioners. Secondary outcomes include the longitudinal impact of the optimisation, parent-reported infant intake of fruits and vegetables (as a proxy to compliance) and other parent-reported family habits and lifestyle. This innovative trial will provide evidence on the implementation of a theory-informed optimisation intervention to promote parent engagement in HENRY, a community-based childhood obesity prevention programme. The findings will be generalisable to other interventions delivered to parents in other community-based environments. This research meets the expressed needs of commissioners, children's centres and parents to optimise the potential impact that HENRY has on obesity prevention. A subsequent cluster randomised controlled pilot trial is planned to determine the practicality of undertaking a definitive trial to robustly evaluate the effectiveness and cost-effectiveness of the optimised intervention on childhood obesity prevention. ClinicalTrials.gov identifier: NCT02675699 . Registered on 4 February 2016.
Optimising the Inflammatory Bowel Disease Unit to Improve Quality of Care: Expert Recommendations
Dotan, Iris; Ghosh, Subrata; Mlynarsky, Liat; Reenaers, Catherine; Schreiber, Stefan
2015-01-01
Introduction: The best care setting for patients with inflammatory bowel disease [IBD] may be in a dedicated unit. Whereas not all gastroenterology units have the same resources to develop dedicated IBD facilities and services, there are steps that can be taken by any unit to optimise patients’ access to interdisciplinary expert care. A series of pragmatic recommendations relating to IBD unit optimisation have been developed through discussion among a large panel of international experts. Methods: Suggested recommendations were extracted through systematic search of published evidence and structured requests for expert opinion. Physicians [n = 238] identified as IBD specialists by publications or clinical focus on IBD were invited for discussion and recommendation modification [Barcelona, Spain; 2014]. Final recommendations were voted on by the group. Participants also completed an online survey to evaluate their own experience related to IBD units. Results: A total of 60% of attendees completed the survey, with 15% self-classifying their centre as a dedicated IBD unit. Only half of respondents indicated that they had a defined IBD treatment algorithm in place. Key recommendations included the need to develop a multidisciplinary team covering specifically defined specialist expertise in IBD, to instil processes that facilitate cross-functional communication and to invest in shared care models of IBD management. Conclusions: Optimising the setup of IBD units will require progressive leadership and willingness to challenge the status quo in order to provide better quality of care for our patients. IBD units are an important step towards harmonising care for IBD across Europe and for establishing standards for disease management programmes. PMID:25987349
Topology optimisation for natural convection problems
NASA Astrophysics Data System (ADS)
Alexandersen, Joe; Aage, Niels; Andreasen, Casper Schousboe; Sigmund, Ole
2014-12-01
This paper demonstrates the application of the density-based topology optimisation approach for the design of heat sinks and micropumps based on natural convection effects. The problems are modelled under the assumptions of steady-state laminar flow using the incompressible Navier-Stokes equations coupled to the convection-diffusion equation through the Boussinesq approximation. In order to facilitate topology optimisation, the Brinkman approach is taken to penalise velocities inside the solid domain and the effective thermal conductivity is interpolated in order to accommodate differences in thermal conductivity of the solid and fluid phases. The governing equations are discretised using stabilised finite elements and topology optimisation is performed for two different problems using discrete adjoint sensitivity analysis. The study shows that topology optimisation is a viable approach for designing heat sink geometries cooled by natural convection and micropumps powered by natural convection.
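To make the interpolation ingredients concrete, the sketch below shows one common way the two mechanisms mentioned above are parameterised in density-based flow topology optimisation: a Brinkman inverse-permeability term that suppresses velocity inside solid regions, and an effective thermal conductivity interpolated between solid and fluid values. The functional forms and all constants here are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def brinkman_alpha(gamma, alpha_max=1e5, q=0.01):
    """Inverse permeability: large where gamma -> 0 (solid) to penalise flow,
    zero where gamma -> 1 (fluid). RAMP-style convex interpolation."""
    return alpha_max * q * (1.0 - gamma) / (q + gamma)

def effective_conductivity(gamma, k_solid=10.0, k_fluid=1.0, p=3.0):
    """Interpolate thermal conductivity between the solid and fluid phases."""
    return k_fluid + (k_solid - k_fluid) * (1.0 - gamma) ** p

gamma = np.linspace(0.0, 1.0, 5)        # design field samples: 0 = solid, 1 = fluid
print(brinkman_alpha(gamma))            # decreases from alpha_max to 0
print(effective_conductivity(gamma))    # decreases from k_solid to k_fluid
```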
A supportive architecture for CFD-based design optimisation
NASA Astrophysics Data System (ADS)
Li, Ni; Su, Zeya; Bi, Zhuming; Tian, Chao; Ren, Zhiming; Gong, Guanghong
2014-03-01
Multi-disciplinary design optimisation (MDO) is one of the critical methodologies for the implementation of enterprise systems (ES). MDO that requires the analysis of fluid dynamics poses a special challenge due to its extremely intensive computation. The rapid development of computational fluid dynamics (CFD) techniques has led to a rise in their application in various fields. Especially for the exterior design of vehicles, CFD has become one of the three main design tools, comparable to analytical approaches and wind tunnel experiments. CFD-based design optimisation is an effective way to achieve the desired performance under the given constraints. However, due to the complexity of CFD, integrating CFD analysis into an intelligent optimisation algorithm is not straightforward. It is a challenge to solve a CFD-based design problem, which usually has high dimensionality and multiple objectives and constraints. It is therefore desirable to have an integrated architecture for CFD-based design optimisation. However, our review of existing works has found that very few researchers have studied assistive tools to facilitate CFD-based design optimisation. In this paper, a multi-layer architecture and a general procedure are proposed to integrate different CFD toolsets with intelligent optimisation algorithms, parallel computing techniques and other techniques for efficient computation. In the proposed architecture, the integration is performed either at the code level or at the data level to fully utilise the capabilities of the different assistive tools. Two intelligent algorithms are developed and embedded with parallel computing. These algorithms, together with the supportive architecture, lay a solid foundation for various applications of CFD-based design optimisation. To illustrate the effectiveness of the proposed architecture and algorithms, case studies on the aerodynamic shape design of a hypersonic cruising vehicle are provided, and the results show that the proposed architecture and developed algorithms performed successfully and efficiently in dealing with a design optimisation involving over 200 design variables.
Calibrating reaction rates for the CREST model
NASA Astrophysics Data System (ADS)
Handley, Caroline A.; Christie, Michael A.
2017-01-01
The CREST reactive-burn model uses entropy-dependent reaction rates that, until now, have been manually tuned to fit shock-initiation and detonation data in hydrocode simulations. This paper describes the initial development of an automatic method for calibrating CREST reaction-rate coefficients, using particle swarm optimisation. The automatic method is applied to EDC32, to help develop the first CREST model for this conventional high explosive.
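As a concrete illustration of the calibration loop, here is a minimal particle swarm optimiser of the kind that could drive such a fit. In the real workflow the objective would run a hydrocode simulation with candidate CREST reaction-rate coefficients and score the misfit against shock-initiation and detonation data; the quadratic stand-in, bounds and PSO constants below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def misfit(coeffs):
    # Placeholder for the real objective: run a hydrocode simulation with
    # these reaction-rate coefficients and compare against experimental data.
    target = np.array([0.3, 1.7, 0.9])      # arbitrary "true" coefficients
    return np.sum((coeffs - target) ** 2)

def pso(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    lo, hi = bounds
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()          # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)           # keep particles inside the bounds
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

best, err = pso(misfit, (np.zeros(3), 2 * np.ones(3)))
print(best, err)
```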
Solving Fuzzy Fractional Differential Equations Using Zadeh's Extension Principle
Ahmad, M. Z.; Hasan, M. K.; Abbasbandy, S.
2013-01-01
We study a fuzzy fractional differential equation (FFDE) and present its solution using Zadeh's extension principle. The proposed study extends the case of fuzzy differential equations of integer order. We also propose a numerical method to approximate the solution of FFDEs. To solve nonlinear problems, the proposed numerical method is then incorporated into an unconstrained optimisation technique. Several numerical examples are provided. PMID:24082853
Yang, Lingjian; Ainali, Chrysanthi; Tsoka, Sophia; Papageorgiou, Lazaros G
2014-12-05
Applying machine learning methods to microarray gene expression profiles for disease classification problems is a popular way to derive biomarkers, i.e. sets of genes that can predict disease state or outcome. Traditional approaches, in which the expression of genes was treated independently, suffer from low prediction accuracy and difficulty of biological interpretation. Current research efforts focus on integrating information on protein interactions through biochemical pathway datasets with expression profiles to propose pathway-based classifiers that can enhance disease diagnosis and prognosis. As most of the pathway activity inference methods in the literature are either unsupervised or applied to two-class datasets, there is good scope to address such limitations by proposing novel methodologies. A supervised multiclass pathway activity inference method using optimisation techniques is reported. For each pathway expression dataset, the patterns of its constituent genes are summarised into one composite feature, termed pathway activity, and a novel mathematical programming model is proposed to infer this feature as a weighted linear summation of the expression of its constituent genes. Gene weights are determined by the optimisation model in such a way that the resulting pathway activity has the optimal discriminative power with regard to disease phenotypes. Classification is then performed on the resulting low-dimensional pathway activity profile. The model was evaluated on a variety of published gene expression profiles covering different types of disease. We show that not only does it improve classification accuracy, but it can also perform well on multiclass disease datasets, a limitation of other approaches from the literature. Desirable features of the model include the ability to control the maximum number of genes that may participate in determining pathway activity, which may be pre-specified by the user. Overall, this work highlights the potential of building pathway-based multi-phenotype classifiers for accurate disease diagnosis and prognosis problems.
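As a rough, simplified stand-in for the idea, the sketch below infers a pathway activity as a weighted sum of member-gene expression by maximising a Fisher-style between/within class variance ratio. The objective, optimiser and toy data are assumptions made here for illustration; the paper's actual formulation is a dedicated mathematical programming model with constraints such as a cap on the number of participating genes.

```python
import numpy as np
from scipy.optimize import minimize

def pathway_activity_weights(X, y):
    """Find gene weights w so that the activity a = X @ w separates classes.
    Simplified surrogate objective: Fisher-style between/within variance ratio."""
    classes = np.unique(y)
    def neg_fisher(w):
        a = X @ w
        means = np.array([a[y == c].mean() for c in classes])
        within = sum(a[y == c].var() for c in classes) + 1e-9
        return -means.var() / within          # minimise the negative ratio
    w0 = np.ones(X.shape[1]) / X.shape[1]     # start from equal weights
    return minimize(neg_fisher, w0, method="Nelder-Mead").x

# Toy data: 30 samples, 5 genes in one pathway, 3 phenotypes.
rng = np.random.default_rng(1)
y = np.repeat([0, 1, 2], 10)
X = rng.normal(size=(30, 5)) + y[:, None] * np.array([0.5, 1.0, 0.0, 0.0, 0.2])
w = pathway_activity_weights(X, y)
activity = X @ w                              # one composite feature per sample
```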
Tat Trung, Ngo; Van Tong, Hoang; Lien, Tran Thi; Van Son, Trinh; Thanh Huyen, Tran Thi; Quyen, Dao Thanh; Hoan, Phan Quoc; Meyer, Christian G; Song, Le Huu
2018-02-01
For the identification of bacterial pathogens, blood culture is still the gold standard diagnostic method. However, blood cultures have several disadvantages, such as the time and the rather large volumes of blood sample required. We have previously established an optimised multiplex real-time PCR method for diagnosing bloodstream infections. In the present study, we evaluated the diagnostic performance of this optimised multiplex RT-PCR in blood samples collected from 110 septicaemia patients enrolled at the 108 Military Central Hospital, Hanoi, Vietnam. Positive results were obtained by blood culture, the LightCycler-based SeptiFast® assay and our multiplex RT-PCR in 35 (32%), 31 (28%), and 31 (28%) samples, respectively. Combined use of the three methods confirmed 50 (45.5%) positive cases of bloodstream infection, a rate significantly higher compared to the exclusive use of one of the three methods (P=0.052, 0.012 and 0.012, respectively). The sensitivity, specificity and area under the curve (AUC) of our assay were higher compared to those of the SeptiFast® assay (77.4%, 86.1% and 0.8 vs. 67.7%, 82.3% and 0.73, respectively). Combined use of blood culture and the multiplex RT-PCR assay showed superior diagnostic performance, as the sensitivity, specificity, and AUC reached 83.3%, 100%, and 0.95, respectively. The concordance between blood culture and the multiplex RT-PCR assay was highest for Klebsiella pneumoniae (100%), followed by Streptococcus spp. (77.8%), Escherichia coli (66.7%), Staphylococcus spp. (50%) and Salmonella spp. (50%). In addition, the use of the newly established multiplex RT-PCR assay increased the spectrum of identifiable agents (Acinetobacter baumannii, 1/32; Proteus mirabilis, 1/32). The combination of culture and the multiplex RT-PCR assay provided excellent diagnostic performance and significantly supported the identification of causative pathogens in clinical samples obtained from septic patients. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Comot, Pierre
The aeronautical industry is seeking to study the possibility of using brazed joints structurally, with a view to reducing weight and cost. The development of a fast, reliable and inexpensive evaluation method for assessing the structural integrity of such joints therefore appears indispensable. The mechanical strength of a brazed joint depends mainly on the amount of brittle phase in its microstructure. Ultrasonic guided waves can detect this type of phase when coupled with a spatio-temporal measurement. Moreover, the nature of this type of wave allows the inspection of joints with complex shapes. This thesis therefore focuses on the development of a technique based on ultrasonic guided waves for the inspection of Inconel 625 lap brazed joints with BNi-2 as the filler metal. First, a finite element model of the joint was used to simulate ultrasound propagation and to optimise the inspection parameters; the simulations also demonstrated the feasibility of the technique for detecting the amount of brittle phase in this type of joint. The optimised parameters were the shape of the excitation signal, its centre frequency and the excitation direction. The simulations showed that the energy of the ultrasonic wave transmitted through the joint, as well as the reflected energy, both extracted from the dispersion curves, were proportional to the amount of brittle phase present in the joint; the method therefore makes it possible to identify the presence or absence of a brittle phase in this type of joint. Experiments were then carried out on three typical samples with different amounts of brittle phase in the joint; to obtain such samples, different brazing times were used (1, 60 and 180 min). For this purpose, an automated test bench was developed to perform an analysis similar to that used in the simulations, with the experimental parameters chosen in accordance with the optimisation performed in the simulations and after an initial optimisation of the experimental procedure. Finally, the experimental results confirm the simulation results and demonstrate the potential of the developed method.
Optimisation of SOA-REAMs for hybrid DWDM-TDMA PON applications.
Naughton, Alan; Antony, Cleitus; Ossieur, Peter; Porto, Stefano; Talli, Giuseppe; Townsend, Paul D
2011-12-12
We demonstrate how loss-optimised, gain-saturated SOA-REAM based reflective modulators can reduce the burst-to-burst power variations due to differential access loss in the upstream path of carrier-distributed passive optical networks by 18 dB compared to fixed linear gain modulators. We also show that the loss-optimised device has a high tolerance to input power variations and can operate in deep saturation with minimal patterning penalties. Finally, we demonstrate that an optimised device can operate across the C-band and over a transmission distance of 80 km. © 2011 Optical Society of America
Wyllie, Aileen; DiGiacomo, Michelle; Jackson, Debra; Davidson, Patricia; Phillips, Jane
2016-10-01
To optimise career development in early career academic nurses by providing an overview of the attributes necessary for success. Evidence of early prospective career planning is necessary to optimise success in the tertiary sector. This is particularly important for nurse academics given the profession's later entry into academia, the ageing nursing workforce and the continuing global shortage of nurses. A qualitative systematic review. Academic Search Complete, CINAHL, Medline, ERIC, Professional Development Collection and Google Scholar databases were searched, resulting in the inclusion of nine qualitative nurse-only focussed studies published between 2004 and 2014. The studies were critically appraised and the data thematically analysed. Three abilities were identified as important to the early career academic nurse: a willingness to adapt to change, an intention to pursue support and embodying resilience. These abilities give rise to attributes that are recommended as key to successful academic career development for those employed on a continuing academic basis. The capacity to rely on one's own capabilities is increasingly seen as important. Recognition of these attributes, together with their skilful application and monitoring as outlined in the review, is recommended for a successful career in academia. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
'Basically, it's sorcery for your vagina': unpacking Western representations of vaginal steaming.
Vandenburg, Tycho; Braun, Virginia
2017-04-01
Vaginal steaming made global headlines in 2015 after its promotion by celebrity Gwyneth Paltrow. One of many female genital modification practices currently on offer in Anglo-Western nations - practices both heavily promoted and critiqued - vaginal steaming is claimed to offer benefits for fertility and overall reproductive, sexual or even general health and wellbeing. We analysed a selection of online accounts of vaginal steaming to determine the sociocultural assumptions and logics within such discourse, including ideas about women, women's bodies and women's engagement with such 'modificatory' practices. Ninety items were carefully selected from the main types of website discussing vaginal steaming: news/magazines; health/lifestyle; spa/service providers; and personal blogs. Data were analysed using thematic analysis, within a constructionist framework that saw us focus on the constructions and rationalities that underpin the explicit content of the texts. Within an overarching theme of 'the self-improving woman' we identified four themes: (1) the naturally deteriorating, dirty female body; (2) contemporary life as harmful; (3) physical optimisation and the enhancement of health; and (4) vaginal steaming for life optimisation. Online accounts of vaginal steaming appear both to fit within historico-contemporary constructions of women's bodies as deficient and disgusting, and contemporary neoliberal and healthist discourse around the constantly improving subject.
Ceberio, Josu; Calvo, Borja; Mendiburu, Alexander; Lozano, Jose A
2018-02-15
In the last decade, many works in combinatorial optimisation have shown that, due to the advances in multi-objective optimisation, the algorithms from this field could be used for solving single-objective problems as well. In this sense, a number of papers have proposed multi-objectivising single-objective problems in order to use multi-objective algorithms in their optimisation. In this article, we follow up on this idea by presenting a methodology for multi-objectivising combinatorial optimisation problems based on elementary landscape decompositions of their objective function. Under this framework, each of the elementary landscapes obtained from the decomposition is considered as an independent objective function to optimise. In order to illustrate this general methodology, we consider four problems from different domains: the quadratic assignment problem and the linear ordering problem (permutation domain), the 0-1 unconstrained quadratic optimisation problem (binary domain), and the frequency assignment problem (integer domain). We implemented two widely known multi-objective algorithms, NSGA-II and SPEA2, and compared their performance with that of a single-objective GA. The experiments conducted on a large benchmark of instances of the four problems show that the multi-objective algorithms clearly outperform the single-objective approaches. Furthermore, a discussion of the results suggests that the multi-objective space generated by this decomposition enhances the exploration ability, thus permitting NSGA-II and SPEA2 to obtain better results in the majority of the tested instances.
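A minimal sketch of the core idea, under the assumption that an objective decomposes additively into components that are then treated as separate objectives: candidate solutions are compared by Pareto dominance over the components rather than by their sum. The random quadratic decomposition below is an illustrative stand-in, not one of the paper's elementary landscape decompositions.

```python
import numpy as np

# Toy decomposition: a binary quadratic objective split into two components
# f1, f2 with f = f1 + f2 (assumed here purely for illustration).
rng = np.random.default_rng(2)
Q1, Q2 = rng.normal(size=(2, 12, 12))

def objectives(x):
    return np.array([x @ Q1 @ x, x @ Q2 @ x])

def dominates(a, b):
    # a Pareto-dominates b if it is no worse everywhere and better somewhere
    return np.all(a <= b) and np.any(a < b)

# Keep the non-dominated set of a random population: the multi-objective view
# of the single-objective problem min f1 + f2.
pop = rng.integers(0, 2, size=(200, 12))
vals = np.array([objectives(x) for x in pop])
front = [i for i in range(len(pop))
         if not any(dominates(vals[j], vals[i]) for j in range(len(pop)) if j != i)]
best = min(front, key=lambda i: vals[i].sum())   # best on the original objective
print(len(front), vals[best].sum())
```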
Close packing in curved space by simulated annealing
NASA Astrophysics Data System (ADS)
Wille, L. T.
1987-12-01
The problem of packing spheres of a maximum radius on the surface of a four-dimensional hypersphere is considered. It is shown how near-optimal solutions can be obtained by packing soft spheres, modelled as classical particles interacting under an inverse power potential, followed by a subsequent hardening of the interaction. In order to avoid trapping in high-lying local minima, the simulated annealing method is used to optimise the soft-sphere packing. Several improvements over other work (based on local optimisation of random initial configurations of hard spheres) have been found. The freezing behaviour of this system is discussed as a function of particle number, softness of the potential and cooling rate. Apart from their geometric interest, these results are useful in the study of topological frustration, metallic glasses and quasicrystals.
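A compact sketch of the approach described: soft spheres on the unit hypersphere in four dimensions interact through an inverse-power potential and are annealed with Metropolis acceptance under a slowly decreasing temperature; "hardening" then reads the packing radius off the minimum pairwise distance. The particle count, potential exponent and cooling schedule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
N, n = 24, 4                          # particles, embedding dimension (3-sphere)

def normalise(x):                     # project points back onto the unit hypersphere
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def energy(X, p=12):                  # inverse-power "soft sphere" potential, O(N^2)
    d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    iu = np.triu_indices(N, 1)
    return np.sum(d[iu] ** -p)

X = normalise(rng.normal(size=(N, n)))
E, T = energy(X), 1.0
for step in range(20000):
    T *= 0.9997                       # slow cooling to avoid high-lying local minima
    i = rng.integers(N)
    Y = X.copy()
    Y[i] = normalise(Y[i] + 0.05 * rng.normal(size=n))   # perturb one particle
    dE = energy(Y) - E
    if dE < 0 or rng.random() < np.exp(-dE / T):         # Metropolis acceptance
        X, E = Y, E + dE

# Hardening: the minimum pairwise distance approximates twice the packing radius.
d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
print(d[np.triu_indices(N, 1)].min() / 2)
```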
Performance benchmark of LHCb code on state-of-the-art x86 architectures
NASA Astrophysics Data System (ADS)
Campora Perez, D. H.; Neufeld, N.; Schwemmer, R.
2015-12-01
For Run 2 of the LHC, LHCb is replacing a significant part of its event filter farm with new compute nodes. For the evaluation of the best performing solution, we have developed a method to convert our high level trigger application into a stand-alone, bootable benchmark image. With additional instrumentation we turned it into a self-optimising benchmark which explores techniques such as late forking, NUMA balancing and optimal number of threads, i.e. it automatically optimises box-level performance. We have run this procedure on a wide range of Haswell-E CPUs and numerous other architectures from both Intel and AMD, including also the latest Intel micro-blade servers. We present results in terms of performance, power consumption, overheads and relative cost.
A simple model clarifies the complicated relationships of complex networks
Zheng, Bojin; Wu, Hongrun; Kuang, Li; Qin, Jun; Du, Wenhua; Wang, Jianmin; Li, Deyi
2014-01-01
Real-world networks such as the Internet and the WWW share many common traits. Hundreds of models have been proposed to characterise these traits and aid understanding of the networks. Because different models use very different mechanisms, it is widely believed that these traits originate from different causes. However, we find that a simple model based on optimisation can produce many traits, including scale-free, small-world, ultra small-world, Delta-distribution, compact, fractal, regular and random networks. Moreover, by revising the proposed model, community-structure networks are generated. With this model and its revised versions, the complicated relationships of complex networks are illustrated. The model brings a new universal perspective to the understanding of complex networks and provides a universal method for modelling complex networks from the viewpoint of optimisation. PMID:25160506
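The abstract does not state the model's exact rule, but optimisation-driven growth models of this general flavour (for instance, the heuristically optimised trade-offs model of Fabrikant, Koutsoupias and Papadimitriou) attach each new node to the existing node that minimises a weighted sum of a geometric cost and a centrality cost. The sketch below implements such a generic rule, with all constants assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
alpha = 4.0                       # trade-off between geometry and centrality
pos = [rng.random(2)]             # node 0 is the root
parent, hops = [None], [0]        # tree structure and hop distance to the root

for t in range(1, 200):
    p = rng.random(2)
    # The new node connects to the existing node minimising a weighted sum of
    # geometric distance and hop distance to the root.
    costs = [alpha * np.linalg.norm(p - pos[j]) + hops[j] for j in range(t)]
    j = int(np.argmin(costs))
    pos.append(p); parent.append(j); hops.append(hops[j] + 1)

children = np.bincount(parent[1:], minlength=len(pos))  # out-degree of each node
print(children.max())  # for suitable alpha, a few hubs accumulate many children
```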
Optimised cross-layer synchronisation schemes for wireless sensor networks
NASA Astrophysics Data System (ADS)
Nasri, Nejah; Ben Fradj, Awatef; Kachouri, Abdennaceur
2017-07-01
This paper addresses synchronisation between sensor nodes. In the context of wireless sensor networks, the energy cost induced by synchronisation must be taken into account, since it can represent the majority of the energy consumed. A recognised difficulty in communication is designing a fine-grained synchronisation protocol that is sufficiently robust to the intermittent energy available in the sensors. Hence, this paper works on aspects of performance and energy saving, in particular the optimisation of the synchronisation protocol using a cross-layer design method with synchronisation between layers. Our approach balances the energy consumption between the sensors and chooses as cluster head the node with the highest residual energy, in order to guarantee the reliability, integrity and continuity of communication (i.e. maximising the network lifetime).
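The cluster-head rule is simple to state in code. The sketch below is a minimal, assumed data model (the paper gives no API): per cluster, elect the node with the highest residual energy, so the relaying and synchronisation load falls on the best-provisioned node.

```python
from dataclasses import dataclass

@dataclass
class Node:
    node_id: int
    residual_energy: float   # joules left in the battery (illustrative units)
    cluster: int

def elect_cluster_heads(nodes):
    """Pick, per cluster, the node with the highest residual energy."""
    heads = {}
    for n in nodes:
        best = heads.get(n.cluster)
        if best is None or n.residual_energy > best.residual_energy:
            heads[n.cluster] = n
    return heads

nodes = [Node(0, 2.1, 0), Node(1, 3.4, 0), Node(2, 1.2, 1), Node(3, 2.8, 1)]
print({c: h.node_id for c, h in elect_cluster_heads(nodes).items()})  # {0: 1, 1: 3}
```

Re-running the election periodically rebalances consumption as batteries drain, which is the load-balancing behaviour the abstract describes.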
ERIC Educational Resources Information Center
Mooij, Ton
2004-01-01
Specific combinations of educational and ICT conditions including computer use may optimise learning processes, particularly for learners at risk. This position paper asks which curricular, instructional, and ICT characteristics can be expected to optimise learning processes and outcomes, and how best to achieve this optimisation. A theoretical…
Illias, Hazlee Azil; Chai, Xin Rui; Abu Bakar, Ab Halim; Mokhlis, Hazlie
2015-01-01
It is important to predict incipient faults in transformer oil accurately so that maintenance of the transformer oil can be performed correctly, reducing the cost of maintenance and minimising errors. Dissolved gas analysis (DGA) has been widely used to predict incipient faults in power transformers. However, the existing DGA methods sometimes yield inaccurate predictions of the incipient fault in transformer oil because each method is only suitable for certain conditions. Many previous works have reported on the use of intelligence methods to predict transformer faults. However, it is believed that the accuracy of the previously proposed methods can still be improved. Since artificial neural network (ANN) and particle swarm optimisation (PSO) techniques have never been used in the previously reported work, this work proposes a combination of ANN and various PSO techniques to predict the transformer incipient fault. The advantages of PSO are simplicity and easy implementation. The effectiveness of the various PSO techniques in combination with ANN is validated by comparison with the results from the actual fault diagnosis, an existing diagnosis method and ANN alone. A comparison of the results from the proposed methods with the previously reported work was also performed to show the improvement of the proposed methods. It was found that the proposed ANN-Evolutionary PSO method yields a higher percentage of correct identification of the transformer fault type than the existing diagnosis method and previously reported works. PMID:26103634
On the performances of different IMRT Treatment Planning Systems for selected paediatric cases.
Fogliata, Antonella; Nicolini, Giorgia; Alber, Markus; Asell, Mats; Clivio, Alessandro; Dobler, Barbara; Larsson, Malin; Lohr, Frank; Lorenz, Friedlieb; Muzik, Jan; Polednik, Martin; Vanetti, Eugenio; Wolff, Dirk; Wyttenbach, Rolf; Cozzi, Luca
2007-02-15
To evaluate the performance of seven different TPS (Treatment Planning Systems: Corvus, Eclipse, Hyperion, KonRad, Oncentra Masterplan, Pinnacle and PrecisePLAN) when intensity modulated (IMRT) plans are designed for paediatric tumours. Datasets (CT images and volumes of interest) of four patients were used to design IMRT plans. The tumour types were: one extraosseous, intrathoracic Ewing sarcoma; one mediastinal rhabdomyosarcoma; one metastatic rhabdomyosarcoma of the anus; and one Wilms' tumour of the left kidney with multiple liver metastases. Prescribed doses ranged from 18 to 54.4 Gy. To minimise variability, the same beam geometry and clinical goals were imposed on all systems for every patient. Results were analysed in terms of dose distributions and dose volume histograms. For all patients, IMRT plans led to acceptable treatments in terms of conformal avoidance, since most of the dose objectives for Organs At Risk (OARs) were met, and the Conformity Index (averaged over all TPS and patients) ranged from 1.14 to 1.58 on primary target volumes and from 1.07 to 1.37 on boost volumes. Healthy tissue involvement was measured in terms of several parameters, and the average mean dose ranged from 4.6 to 13.7 Gy. A global scoring method was developed to evaluate plans according to their degree of success in meeting dose objectives (lower scores are better than higher ones). For OARs, the scores ranged from 0.75 +/- 0.15 (Eclipse) to 0.92 +/- 0.18 (Pinnacle³ with physical optimisation). For target volumes, the scores ranged from 0.05 +/- 0.05 (Pinnacle³ with physical optimisation) to 0.16 +/- 0.07 (Corvus). This set of complex paediatric cases presented a variety of individual treatment planning challenges. Despite the large spread of results, inverse planning systems offer promising results for IMRT delivery, hence widening the treatment strategies for this very sensitive class of patients.
van Wyk, J A; Reynecke, D P
2011-05-11
This article is the first of a series aimed at developing specific decision support software for on-farm optimisation of sustainable integrated management of haemonchosis. It contains a concept framework for such a system for use by farmers and/or their advisors but, as reported in the series, only the first steps have been taken on the road to achieving this goal. Anthelmintic resistance has reached such levels of prevalence and intensity that it recently evoked the comment that small ruminant farming is entering the final phase of resistance, with some farms left without effective chemotherapeutic agents with which to control worms at a level commensurate with profitable animal production. In addition, in the case of cattle, a recent survey in New Zealand showed 92% of worm populations to be resistant to at least one anthelmintic group. Ironically, new technology, such as the FAMACHA© system, which was devised for sustainable management of haemonchosis, is at present being adopted relatively slowly by the majority of farmers, and it is suggested that an important reason for this is the complexity of integrating the new methods with epidemiological factors. The alternatives to the simple drenching programmes of the past are not only more difficult to manage, but are also more labour-intensive. The problem is further complicated by a progressive global shortage of persons with the necessary experience to train farmers in the new methods. The opinion is advanced that only computerised, automated decision support software can optimise the integration of the range of factors (such as rainfall, temperature, host age and reproductive status, pasture type, history of host and pasture infection, and anthelmintic formulation) for more sustainable worm management than is obtainable with present methods. In contrast to the conventional method (in which prospective analysis of laboratory and other data is mainly used to suggest when strategic prophylactic drenching of all animals should be conducted during the relevant worm season to prevent excessive helminthosis), the computer model being proposed is based on targeted selective treatment, supported by progressive periodic retrospective analysis of clinical data of a given worm season. It is emphasised that, in order not to repeat the mistakes of the past, such an automated support system should be developed urgently in an attempt to engineer greater sustainability of any unrelated new anthelmintics that may reach the market. Copyright © 2009 Elsevier B.V. All rights reserved.
Bourne, Richard S; Shulman, Rob; Tomlin, Mark; Borthwick, Mark; Berry, Will; Mills, Gary H
2017-04-01
To identify between- and within-profession rater reliability of clinical impact grading for common critical care prescribing error and optimisation cases, and to identify representative clinical impact grades for each individual case. Electronic questionnaire. 5 UK NHS Trusts. 30 critical care healthcare professionals (doctors, pharmacists and nurses). Participants graded the severity of clinical impact (5-point categorical scale) of 50 error and 55 optimisation cases. Case between- and within-profession rater reliability and modal clinical impact grading. Between- and within-profession rater reliability analyses used a linear mixed model and intraclass correlation, respectively. The majority of error and optimisation cases (both 76%) had a modal clinical severity grade of moderate or higher. Error cases: doctors graded clinical impact significantly lower than pharmacists (-0.25; P < 0.001) and nurses (-0.53; P < 0.001), with nurses significantly higher than pharmacists (0.28; P < 0.001). Optimisation cases: doctors graded clinical impact significantly lower than nurses and pharmacists (-0.39 and -0.5; P < 0.001, respectively). Within-profession reliability grading was excellent for pharmacists (0.88 and 0.89; P < 0.001) and doctors (0.79 and 0.83; P < 0.001) but only fair to good for nurses (0.43 and 0.74; P < 0.001), for optimisation and error cases, respectively. Representative clinical impact grades for over 100 common prescribing error and optimisation cases are reported for potential clinical practice and research application. The between-profession variability highlights the importance of multidisciplinary perspectives in the assessment of medication error and optimisation cases in clinical practice and research. © The Author 2017. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
Restivo, Annalaura; Degano, Ilaria; Ribechini, Erika; Colombini, Maria Perla
2014-01-01
A method for the HPLC-MS/MS analysis of phenols, including phenolic acids and naphthoquinones, using an amide-embedded phase column was developed and compared to literature methods based on classical C18 stationary phase columns. RP-Amide is a recently developed polar-embedded stationary phase whose wetting properties mean that up to 100% water can be used as an eluent. The increased retention and selectivity for polar compounds and the possibility of working in 100% water conditions make this column particularly interesting for the HPLC analysis of phenolic acids and derivatives. In this study, the chromatographic separation was optimised on an HPLC-DAD and used to separate 13 standard phenolic acids and derivatives. The method was validated on an HPLC-ESI-Q-ToF. The acquisition was performed in negative polarity and MS/MS target mode. Ionisation conditions and acquisition parameters for the Q-ToF detector were investigated by working on collision energies and fragmentor potentials. The performance of the method was fully evaluated on standards. Moreover, several raw materials containing phenols were analysed: walnut, gall, wine, malbec grape, French oak, red henna and propolis. Our method allowed us to characterise the phenolic composition in a wide range of matrices and to highlight possible matrix effects. PMID:24551158
Optimisation of lateral car dynamics taking into account parameter uncertainties
NASA Astrophysics Data System (ADS)
Busch, Jochen; Bestle, Dieter
2014-02-01
Simulation studies on an active all-wheel-steering car show that disturbances of vehicle parameters have a high influence on lateral car dynamics. This motivates the need for a design that is robust against such parameter uncertainties. A specific parametrisation is established, combining deterministic, velocity-dependent steering control parameters with partly uncertain, velocity-independent vehicle parameters for simultaneous use in a numerical optimisation process. Model-based objectives are formulated and summarised in a multi-objective optimisation problem in which the lateral steady-state behaviour, in particular, is improved by an adaptation strategy based on measurable uncertainties. The normally distributed uncertainties are generated by optimal Latin hypercube sampling, and a response-surface-based strategy helps to cut down time-consuming model evaluations, which makes it possible to use a genetic optimisation algorithm. Optimisation results are discussed in different criterion spaces and the achieved improvements confirm the validity of the proposed procedure.
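As an aside on the sampling step, a minimal Latin hypercube sampler is easy to write: each axis is split into equal-probability strata, one sample is drawn per stratum, and strata are paired randomly across axes; mapping the unit-cube samples through inverse CDFs then yields normally distributed parameter uncertainties. The dimensions, means and scales below are placeholders, not the paper's vehicle parameters.

```python
import numpy as np
from scipy.stats import norm

def latin_hypercube(n_samples, n_dims, rng):
    """One stratified sample per equal-probability bin along each axis,
    with bins paired randomly across dimensions."""
    samples = np.empty((n_samples, n_dims))
    for j in range(n_dims):
        perm = rng.permutation(n_samples)                 # random bin assignment
        samples[:, j] = (perm + rng.random(n_samples)) / n_samples
    return samples                                        # points in [0, 1)^d

rng = np.random.default_rng(5)
unit = latin_hypercube(50, 3, rng)
# Map to normally distributed uncertainties via the inverse normal CDF:
uncertain_params = norm.ppf(unit, loc=[0.0, 1.0, 2.0], scale=[0.1, 0.2, 0.05])
```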
Distributed optimisation problem with communication delay and external disturbance
NASA Astrophysics Data System (ADS)
Tran, Ngoc-Tu; Xiao, Jiang-Wen; Wang, Yan-Wu; Yang, Wu
2017-12-01
This paper investigates the distributed optimisation problem for multi-agent systems (MASs) in the simultaneous presence of external disturbance and communication delay. To solve this problem, a two-step design scheme is introduced. In the first step, based on the internal model principle, an internal model term is constructed to compensate for the disturbance asymptotically. In the second step, a distributed optimisation algorithm is designed to solve the distributed optimisation problem for MASs under the simultaneous presence of disturbance and communication delay. In the proposed algorithm, each agent interacts with its neighbours through the connected topology, and delay occurs during the information exchange. By utilising a Lyapunov-Krasovskii functional, delay-dependent conditions are derived for both slowly and fast time-varying delays to ensure the convergence of the algorithm to the optimal solution of the optimisation problem. Several numerical simulation examples are provided to illustrate the effectiveness of the theoretical results.
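To fix ideas on the distributed setting (ignoring the paper's disturbance and delay terms, which its algorithm is specifically built to handle), the sketch below runs classical distributed gradient descent over a ring of four agents: each agent mixes its state with its neighbours through a doubly stochastic matrix and takes a diminishing local gradient step, so all agents approach the minimiser of the sum of the private costs. The quadratic costs and mixing weights are illustrative assumptions.

```python
import numpy as np

# Agent i privately holds f_i(x) = 0.5 * (x - a_i)^2; the team objective
# sum_i f_i is minimised at mean(a). Agents communicate over a ring graph.
a = np.array([1.0, 3.0, -2.0, 6.0])
W = np.array([[0.50, 0.25, 0.00, 0.25],   # doubly stochastic mixing matrix
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
x = np.zeros(4)                            # each agent's local estimate
for k in range(2000):
    step = 1.0 / (k + 10)                  # diminishing step for exact consensus
    x = W @ x - step * (x - a)             # consensus mixing + local gradient
print(x, a.mean())                         # all estimates approach the optimum
```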
Medicines optimisation: priorities and challenges.
Kaufman, Gerri
2016-03-23
Medicines optimisation is promoted in a guideline published in 2015 by the National Institute for Health and Care Excellence. Four guiding principles underpin medicines optimisation: aim to understand the patient's experience; ensure evidence-based choice of medicines; ensure medicines use is as safe as possible; and make medicines optimisation part of routine practice. Understanding the patient experience is important to improve adherence to medication regimens. This involves communication, shared decision making and respect for patient preferences. Evidence-based choice of medicines is important for clinical and cost effectiveness. Systems and processes for the reporting of medicines-related safety incidents have to be improved if medicines use is to be as safe as possible. Ensuring safe practice in medicines use when patients are transferred between organisations, and managing the complexities of polypharmacy are imperative. A medicines use review can help to ensure that medicines optimisation forms part of routine practice.
Optimising Web Site Designs for People with Learning Disabilities
ERIC Educational Resources Information Center
Williams, Peter; Hennig, Christian
2015-01-01
Much relevant internet-mediated information is inaccessible to people with learning disabilities because of difficulties in navigating the web. This paper reports on the methods undertaken to determine how information can be optimally presented for this cohort. Qualitative work is outlined where attributes relating to site layout affecting…
B and V photometry and analysis of the eclipsing binary RZ CAS
NASA Astrophysics Data System (ADS)
Riazi, N.; Bagheri, M. R.; Faghihi, F.
1994-01-01
Photoelectric light curves of the eclipsing binary RZ Cas are presented for B and V filters. The light curves are analyzed for light and geometrical elements, starting with a previously suggested preliminary method. The approximate results thus obtained are then optimised through the Wilson-Devinney computer programs.
Abd El Razak, Ahmed; Ward, Alan C; Glassey, Jarka
2014-02-01
Water samples from three different environments, including the Mid-Atlantic Ridge, the Red Sea and the Mediterranean Sea, were screened in order to isolate new bacterial producers of polyunsaturated fatty acids (PUFAs), especially eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA). Two hundred and fifty-one isolates were screened for PUFA production; the highest number of producers was isolated from the Mid-Atlantic Ridge, followed by the Red Sea, while no producers were found in the Mediterranean Sea samples. The screening strategy comprised a simple colourimetric method followed by confirmation via GC/MS. Among the tested producers, an isolate named 66 was found to be a potentially high PUFA producer, producing relatively high levels of EPA in particular. A Plackett-Burman statistical design of experiments was applied to screen a wide number of media components, identifying glycerol and whey as components of a production medium. This potential low-cost production medium was optimised by applying response surface methodology to obtain the highest productivity, converting industrial by-products into value-added products. The maximum achieved productivity of EPA was 20 mg/g (45 mg/l), representing 11% of the total fatty acids, which is approximately five times more than the amount produced prior to optimisation. The production medium composition was 10.79 g/l whey and 6.87 g/l glycerol. To our knowledge, this is the first investigation of potential bacterial PUFA producers from the Mediterranean and Red Seas, providing an evaluation of a colourimetric screening method as a means of rapid screening of a large number of isolates.
Li, Shasha; Liu, Xingang; Zhu, Yulong; Dong, Fengshou; Xu, Jun; Li, Minmin; Zheng, Yongquan
2014-09-05
An effective method for the quantification of fluxapyroxad and its three metabolites in soils, sediment and sludge was developed using ultrahigh performance liquid chromatography coupled with tandem mass spectrometry (UHPLC-MS/MS). Both the extraction and clean-up steps of the QuEChERS procedure were optimised using a chemometric tool, which was expected to facilitate rapid analysis with minimal procedures. Several operating parameters (MeCN/acetic acid ratio in the extraction solution (i.e., acetic acid percentage), water volume, extraction time, PSA amount, C18 amount, and GCB amount) were investigated using a Plackett-Burman (P-B) screening design. Afterwards, the significant factors obtained (acetic acid percentage, water volume, and PSA amount) were optimised using a central composite design (CCD) combined with the desirability function (DF) to determine the optimum experimental conditions. The optimised procedure provides high-level linearity for all studied compounds, with correlation coefficients ranging between 0.9972 and 0.9999. The detection limits were in the range of 0.1 to 1.0 μg/kg and the limits of quantitation (LOQs) were between 0.5 and 3.4 μg/kg, with relative standard deviations (RSD) between 2.3% and 9.6% (n=6). Therefore, the developed protocol can serve as a simple and sensitive tool for monitoring fluxapyroxad and its three metabolites in soil, sediment and sludge samples. Copyright © 2014 Elsevier B.V. All rights reserved.
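For readers unfamiliar with Plackett-Burman screening, the commonly tabulated 12-run design for up to 11 two-level factors is generated by cyclically shifting a signed generator row and appending an all-minus run; a factor's main effect is then the contrast between its high-level and low-level runs. This is a generic construction for illustration, not the authors' chemometric software.

```python
import numpy as np

# First row of the classical 12-run Plackett-Burman design (11 factors);
# remaining rows are cyclic shifts, plus a final all-minus row.
gen = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])

rows = [np.roll(gen, i) for i in range(11)]
design = np.vstack(rows + [-np.ones(11, dtype=int)])
print(design.shape)                  # (12, 11): 12 runs screen up to 11 factors

# Given a response vector y (one value per run), the main effect of factor j is
# the contrast between its high and low runs:
#   effect_j = y[design[:, j] == 1].mean() - y[design[:, j] == -1].mean()
```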
Medellín-Azuara, Josué; Harou, Julien J; Howitt, Richard E
2010-11-01
Given the high proportion of water used for agriculture in certain regions, the economic value of agricultural water can be an important tool for water management and policy development. This value is quantified using economic demand curves for irrigation water. Such demand functions show the incremental contribution of water to agricultural production. Water demand curves are estimated using econometric or optimisation techniques. Calibrated agricultural optimisation models allow the derivation of demand curves using smaller datasets than econometric models. This paper introduces these subject areas then explores the effect of spatial aggregation (upscaling) on the valuation of water for irrigated agriculture. A case study from the Rio Grande-Rio Bravo Basin in North Mexico investigates differences in valuation at farm and regional aggregated levels under four scenarios: technological change, warm-dry climate change, changes in agricultural commodity prices, and water costs for agriculture. The scenarios consider changes due to external shocks or new policies. Positive mathematical programming (PMP), a calibrated optimisation method, is the deductive valuation method used. An exponential cost function is compared to the quadratic cost functions typically used in PMP. Results indicate that the economic value of water at the farm level and the regionally aggregated level are similar, but that the variability and distributional effects of each scenario are affected by aggregation. Moderately aggregated agricultural production models are effective at capturing average-farm adaptation to policy changes and external shocks. Farm-level models best reveal the distribution of scenario impacts. Copyright © 2009 Elsevier B.V. All rights reserved.
Aveling, Emma-Louise; Martin, Graham; Jiménez García, Senai; Martin, Lisa; Herbert, Georgia; Armstrong, Natalie; Dixon-Woods, Mary; Woolhouse, Ian
2012-12-01
Peer review offers a promising way of promoting improvement in health systems, but the optimal model is not yet clear. We aimed to describe a specific peer review model, reciprocal peer-to-peer review (RP2PR), to identify the features that appeared to support optimal functioning. We conducted an ethnographic study involving observations, interviews and documentary analysis of the Improving Lung Cancer Outcomes Project, which involved 30 paired multidisciplinary lung cancer teams participating in facilitated reciprocal site visits. Analysis was based on the constant comparative method. Fundamental features of the model include multidisciplinary participation; a focus on discussion and observation of teams in action, rather than paperwork; facilitated reflection and discussion on data and observations; and support to develop focused improvement plans. Five key features were identified as important in optimising this model: peers and pairing methods; minimising logistic burden; structure of visits; independent facilitation; and credibility of the process. Facilitated RP2PR was generally a positive experience for participants, but implementing improvement plans was challenging and required substantial support. RP2PR appears to be optimised when it is well organised; a safe environment for learning is created; credibility is maximised; and implementation and impact are supported. RP2PR is seen as credible and legitimate by lung cancer teams and can act as a powerful stimulus to produce focused quality improvement plans and to support implementation. Our findings identify how RP2PR functioned and may be optimised to provide a constructive, open space for identifying opportunities for improvement and solutions.
Cheng, Yu-Huei
2014-12-01
Specific primers play an important role in polymerase chain reaction (PCR) experiments, and it is therefore essential to find specific primers of outstanding quality. Unfortunately, many PCR constraints must be inspected simultaneously, which makes specific primer selection difficult and time-consuming. This paper introduces a novel computational intelligence-based method, Teaching-Learning-Based Optimisation (TLBO), to select specific and feasible primers. Runs were performed for specified PCR product lengths of 150-300 bp and 500-800 bp with three melting temperature formulae: Wallace's formula, Bolton and McCarthy's formula and SantaLucia's formula. The authors calculate the optimal frequency to estimate the quality of primer selection based on a total of 500 runs for 50 random nucleotide sequences of 'Homo species' retrieved from the National Center for Biotechnology Information. The method was then fairly compared with the genetic algorithm (GA) and memetic algorithm (MA) for primer selection in the literature. The results show that the method easily found suitable primers corresponding to the set primer constraints and performed better than the GA and the MA. Furthermore, the method was also compared with the common tool Primer3 in terms of method type, primer presentation, parameter settings, speed and memory usage. In conclusion, it is an interesting primer selection method and a valuable tool for automatic high-throughput analysis. In future, the use of the primers in the wet lab needs to be validated carefully to increase the reliability of the method.
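For orientation, the core TLBO loop alternates a teacher phase (moving the population towards the current best and away from the scaled population mean) with a learner phase (pairwise learning between random peers). The continuous minimiser below is a generic sketch of that loop on an assumed test function; applying it to primer selection would require an encoding of primer constraints on top, which the paper provides and this sketch does not.

```python
import numpy as np

rng = np.random.default_rng(7)

def tlbo_minimise(f, lo, hi, n_pop=20, iters=100):
    """Core Teaching-Learning-Based Optimisation loop (continuous form)."""
    dim = len(lo)
    X = rng.uniform(lo, hi, (n_pop, dim))
    F = np.array([f(x) for x in X])
    for _ in range(iters):
        # Teacher phase: move towards the best, away from the scaled mean.
        teacher = X[F.argmin()]
        TF = rng.integers(1, 3)                       # teaching factor in {1, 2}
        Xn = np.clip(X + rng.random((n_pop, dim)) * (teacher - TF * X.mean(0)),
                     lo, hi)
        Fn = np.array([f(x) for x in Xn])
        imp = Fn < F
        X[imp], F[imp] = Xn[imp], Fn[imp]             # keep only improvements
        # Learner phase: each learner interacts with a random peer.
        for i in range(n_pop):
            j = rng.integers(n_pop)
            if j == i:
                continue
            d = (X[i] - X[j]) if F[i] < F[j] else (X[j] - X[i])
            xn = np.clip(X[i] + rng.random(dim) * d, lo, hi)
            fn = f(xn)
            if fn < F[i]:
                X[i], F[i] = xn, fn
    return X[F.argmin()], F.min()

best, val = tlbo_minimise(lambda x: np.sum(x ** 2), -5 * np.ones(4), 5 * np.ones(4))
print(best, val)
```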
Impact of optimised cooking on the antioxidant activity in edible mushrooms.
Ng, Zhi Xiang; Tan, Wan Chein
2017-11-01
This study aimed to investigate the effect of four cooking methods with different durations on the in vitro antioxidant activities of five edible mushrooms, namely Agaricus bisporus, Flammulina velutipes, Lentinula edodes, Pleurotus ostreatus and Pleurotus eryngii. Among the raw samples, A. bisporus showed the highest total antioxidant activity (reducing power and radical scavenging), total flavonoid, ascorbic acid and water soluble phenolic contents. Short-duration steam cooking (3 min) increased the total flavonoid and ascorbic acid contents, while prolonged pressure cooking (15 min) reduced the water soluble phenolic content in the mushrooms. The retention of antioxidant value in the mushrooms varied with the variety of mushroom after the cooking process. The cooking duration significantly affected the ascorbic acid in the mushrooms regardless of cooking method. To achieve the best antioxidant values, steam cooking was preferred for F. velutipes (1.5 min), P. ostreatus (4.5 min) and L. edodes (4.5 min), while microwave cooking for 1.5 min was a better choice for A. bisporus. Pressure-cooked P. eryngii showed the best overall antioxidant value among the cooked samples. An optimised cooking method, including pressure cooking, could increase the antioxidant values of edible mushrooms.
NASA Astrophysics Data System (ADS)
Kolyaie, S.; Yaghooti, M.; Majidi, G.
2011-12-01
This paper is part of ongoing research to examine the capability of geostatistical analysis for mobile network coverage prediction, simulation and tuning. Mobile network coverage predictions are used to find network coverage gaps and areas with poor serviceability. They are essential data for engineering and management in order to make better decisions regarding the rollout, planning and optimisation of mobile networks. The objective of this research is to evaluate different interpolation techniques for coverage prediction. In the method presented here, raw data collected from drive tests on a sample of roads in the study area are analysed and various continuous surfaces are created using different interpolation methods. Two general interpolation methods are used in this paper with different variables: first, Inverse Distance Weighting (IDW) with various powers and numbers of neighbours; and second, ordinary kriging with Gaussian, spherical, circular and exponential semivariogram models with different numbers of neighbours. For the comparison of results, we used check points coming from the same drive-test data. Prediction values for the check points are extracted from each surface and the differences from the actual values are computed. The output of this research helps in finding an optimised and accurate model for coverage prediction.
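As a concrete reference for the first family of methods, a plain IDW interpolator over drive-test points looks like the sketch below: each grid prediction is a weighted mean of the k nearest samples, with weights 1/d^power. The signal model and parameter values are illustrative assumptions, not the paper's measured data.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, k=8):
    """Inverse Distance Weighting: each prediction is a weighted mean of the
    k nearest samples, weighted by 1 / distance**power."""
    preds = np.empty(len(xy_query))
    for q, p in enumerate(xy_query):
        d = np.linalg.norm(xy_known - p, axis=1)
        nearest = np.argsort(d)[:k]
        d_n = np.maximum(d[nearest], 1e-12)      # avoid division by zero
        w = d_n ** -power
        preds[q] = np.sum(w * z_known[nearest]) / np.sum(w)
    return preds

rng = np.random.default_rng(8)
pts = rng.random((500, 2)) * 1000                # drive-test sample locations (m)
rssi = -70 + 10 * np.sin(pts[:, 0] / 200) + rng.normal(0, 2, 500)  # toy signal
grid = np.array([[x, y] for x in range(0, 1000, 50) for y in range(0, 1000, 50)])
coverage = idw(pts, rssi, grid, power=2.0, k=8)  # predicted surface on the grid
```

Varying `power` and `k` reproduces the parameter sweep the paper describes for IDW; the kriging variants would replace the weight rule with semivariogram-based weights.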
Vibration isolation design for periodically stiffened shells by the wave finite element method
NASA Astrophysics Data System (ADS)
Hong, Jie; He, Xueqing; Zhang, Dayi; Zhang, Bing; Ma, Yanhong
2018-04-01
Periodically stiffened shell structures are widely used due to their excellent specific strength, in particular for aeronautical and astronautical components. This paper presents an improved Wave Finite Element Method (Wave FEM) that can be employed to predict the band-gap characteristics of stiffened shell structures efficiently. An aero-engine casing, which is a typical periodically stiffened shell structure, was employed to verify the validity and efficiency of the Wave FEM. Good agreement has been found between the Wave FEM and the classical FEM for different boundary conditions. An effective wave selection method based on the Wave FEM has thus been put forward to filter the radial modes of a shell structure. Furthermore, an optimisation strategy combining the Wave FEM and a genetic algorithm is presented for periodically stiffened shell structures. The optimal out-of-plane band gap and the mass of the whole structure can be achieved by the optimisation strategy under an aerodynamic load. Results also indicate that the geometric parameters of the stiffeners can be selected so that the out-of-plane vibration attenuates significantly in the frequency band of interest. This study can provide valuable references for designing the band gaps of vibration isolation.
Optimisation of olive oil phenol extraction conditions using a high-power probe ultrasonication.
Jerman Klen, T; Mozetič Vodopivec, B
2012-10-15
A new method of ultrasound probe assisted liquid-liquid extraction (US-LLE), combined with a freeze-based fat precipitation clean-up and HPLC-DAD-FLD-MS detection, is described for extra virgin olive oil (EVOO) phenol analysis. Three extraction variables (solvent type: 100%, 80% or 50% methanol; sonication time: 5, 10 or 20 min; extraction steps: 1-5) and two clean-up methods (n-hexane washing vs. low-temperature fat precipitation) were studied and optimised with the aim of maximising the extracts' phenol recoveries. A three-step extraction of 10 min with pure methanol (5 mL) resulted in the highest phenol content of freeze-based defatted extracts (667 μg GAE g⁻¹) from 10 g of EVOO, providing much higher efficiency (up to 68%) and repeatability (up to 51%) than its non-sonicated counterpart (LLE-agitation) and n-hexane washing. In addition, the overall method provided high linearity (r² ≥ 0.97), precision (RSD: 0.4-9.3%) and sensitivity, with LODs/LOQs ranging from 0.03 to 0.16 μg g⁻¹ and 0.10 to 0.51 μg g⁻¹ of EVOO, respectively. Copyright © 2012 Elsevier Ltd. All rights reserved.
Optimising sulfuric acid hard coat anodising for an Al-Mg-Si wrought aluminium alloy
NASA Astrophysics Data System (ADS)
Bartolo, N.; Sinagra, E.; Mallia, B.
2014-06-01
This research evaluates the effects of sulfuric acid hard coat anodising parameters, such as acid concentration, electrolyte temperature, current density and time, on the hardness and thickness of the resultant anodised layers. A small-scale anodising facility was designed and set up to enable experimental investigation of the anodising parameters. An experimental design using the Taguchi method was performed to optimise the parameters within an established operating window. Qualitative and quantitative methods of characterisation of the resultant anodised layers were employed. The anodised layers' thickness and morphology were determined using a light optical microscope (LOM) and a field emission gun scanning electron microscope (FEG-SEM). Hardness measurements were carried out using a nano hardness tester. Correlations between the various anodising parameters and their effects on the hardness and thickness of the anodised layers were established. Careful evaluation of these effects enabled optimum parameters to be determined using the Taguchi method, which were verified experimentally. Anodised layers with hardness between 2.4 and 5.2 GPa and thickness between 20 and 80 μm were produced. The Taguchi method was shown to be applicable to anodising. This finding could facilitate ongoing and future research and development of anodising, which is attracting remarkable academic and industrial interest.
Naeemullah; Kazi, Tasneem Gul; Tuzen, Mustafa
2015-04-01
A new dispersive liquid-liquid microextraction method, magnetic stirrer induced dispersive ionic-liquid microextraction (MS-IL-DLLME), was developed to quantify trace levels of vanadium in real water and food samples by graphite furnace atomic absorption spectrometry (GFAAS). In this extraction method, a magnetic stirrer was applied to obtain a dispersive medium of 1-butyl-3-methylimidazolium hexafluorophosphate [C4MIM][PF6] in the aqueous solution (real water samples and digested food samples) to increase the phase-transfer ratio, which significantly enhances the recovery of the vanadium-4-(2-pyridylazo)resorcinol (PAR) chelate. Variables playing a vital role in the microextraction method were optimised to obtain the maximum recovery of the analyte. Under the optimised experimental variables, the enhancement factor (EF) and limit of detection (LOD) were found to be 125 and 18 ng L⁻¹, respectively. The validity and accuracy of the method were checked by analysis of certified reference materials (SLRS-4 Riverine water and NIST SRM 1515 Apple leaves). The relative standard deviation (RSD) for 10 replicate determinations at a vanadium level of 0.5 μg L⁻¹ was found to be <5.0%. The method was successfully applied to real water and acid-digested food samples. Copyright © 2014 Elsevier Ltd. All rights reserved.
Pure random search for ambient sensor distribution optimisation in a smart home environment.
Poland, Michael P; Nugent, Chris D; Wang, Hui; Chen, Liming
2011-01-01
Smart homes are living spaces facilitated with technology to allow individuals to remain in their own homes for longer, rather than be institutionalised. Sensors are the fundamental physical layer of any smart home, as the data they generate are used to inform decision support systems, facilitating appropriate actuator actions. The positioning of sensors is therefore a fundamental characteristic of a smart home. Contemporary smart home sensor distribution is aligned to either (a) a total coverage approach or (b) a human assessment approach. These methods of sensor arrangement are not data-driven strategies; they are unempirical and frequently irrational. This study hypothesised that sensor deployment directed by an optimisation method that uses inhabitants' spatial frequency data as the search space would produce more optimal sensor distributions than the current method of sensor deployment by engineers. Seven human engineers were tasked to create sensor distributions based on perceived utility for 9 deployment scenarios. A Pure Random Search (PRS) algorithm was then tasked to create matched sensor distributions. The PRS method produced superior distributions in 98.4% of test cases (n=64) against human engineer instructed deployments when the engineers had no access to the spatial frequency data, and in 92.0% of test cases (n=64) when engineers had full access to these data. These results thus confirmed the hypothesis.
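For illustration, a minimal Pure Random Search of this kind might look as follows; the grid size, sensor radius, coverage objective and spatial-frequency data are all hypothetical stand-ins for the study's scenario definitions, not its actual data.

```python
# Minimal sketch of Pure Random Search for sensor placement, assuming a
# hypothetical spatial-frequency grid (inhabitant dwell counts per cell).
# Frequency-weighted coverage within a fixed sensor radius stands in for
# the study's utility measure.
import numpy as np

rng = np.random.default_rng(42)
freq = rng.random((20, 20))          # hypothetical spatial-frequency data
n_sensors, radius, n_trials = 4, 3.0, 5000

cells = np.array([(r, c) for r in range(20) for c in range(20)])

def coverage(sensors):
    # A cell counts as covered if any sensor lies within `radius` of it.
    d = np.linalg.norm(cells[:, None, :] - sensors[None, :, :], axis=2)
    covered = d.min(axis=1) <= radius
    return freq.ravel()[covered].sum()

best_score, best_set = -np.inf, None
for _ in range(n_trials):
    candidate = rng.uniform(0, 20, size=(n_sensors, 2))  # uniform random draw
    score = coverage(candidate)
    if score > best_score:
        best_score, best_set = score, candidate

print(f"best weighted coverage: {best_score:.2f}")
print(np.round(best_set, 1))
```

PRS simply keeps the best of many independent uniform draws, which is what makes it a useful unbiased baseline against human-chosen layouts.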
A support vector machine for predicting defibrillation outcomes from waveform metrics.
Howe, Andrew; Escalona, Omar J; Di Maio, Rebecca; Massot, Bertrand; Cromie, Nick A; Darragh, Karen M; Adgey, Jennifer; McEneaney, David J
2014-03-01
Algorithms to predict shock success based on VF waveform metrics could significantly enhance resuscitation by optimising the timing of defibrillation. The aim was to investigate robust methods of predicting defibrillation success in VF cardiac arrest patients using a support vector machine (SVM) optimisation approach. Frequency-domain (AMSA, dominant frequency and median frequency) and time-domain (slope and RMS amplitude) VF waveform metrics were calculated in a 4.1 s window prior to defibrillation. Conventional prediction test validation of each waveform parameter was conducted, with AUC > 0.6 used as the criterion for inclusion as a corroborative attribute processed by the SVM classification model. The latter used a Gaussian radial basis function (RBF) kernel, and the error penalty factor C was fixed at 1. A two-fold cross-validation resampling technique was employed. A total of 41 patients had 115 defibrillation instances. The AMSA, slope and RMS waveform metrics passed test validation with AUC > 0.6 for predicting termination of VF and return to organised rhythm. The predictive accuracy of the optimised SVM design for termination of VF was 81.9% (±1.24 SD); positive and negative predictivity were 84.3% (±1.98 SD) and 77.4% (±1.24 SD), respectively; sensitivity and specificity were 87.6% (±2.69 SD) and 71.6% (±9.38 SD), respectively. AMSA, slope and RMS were the best VF waveform frequency-time predictors of termination of VF according to the test validity assessment. This a priori knowledge can be used for a simplified optimised SVM design that combines the predictive attributes of these VF waveform metrics for improved prediction accuracy and generalisation performance, without requiring the definition of any threshold value on the waveform metrics. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
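A minimal sketch of the classifier design described above (RBF kernel, C = 1, two-fold cross-validation) using scikit-learn; the features and labels are simulated stand-ins for the AMSA, slope and RMS metrics, not the patient data.

```python
# Illustrative sketch (not the authors' code): an RBF-kernel SVM with C = 1
# classifying shock outcomes from hypothetical VF waveform metrics (AMSA,
# slope, RMS), validated with two-fold cross-validation as in the paper.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 115                                   # defibrillation instances
X = rng.normal(size=(n, 3))               # hypothetical AMSA, slope, RMS
y = (X @ np.array([1.0, 0.8, 0.6]) + rng.normal(scale=0.8, size=n)) > 0

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
cv = StratifiedKFold(n_splits=2, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y.astype(int), cv=cv)
print(f"two-fold accuracy: {scores.mean():.3f} ± {scores.std():.3f}")
```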
A study of lateral fall-off (penumbra) optimisation for pencil beam scanning (PBS) proton therapy
NASA Astrophysics Data System (ADS)
Winterhalter, C.; Lomax, A.; Oxley, D.; Weber, D. C.; Safai, S.
2018-01-01
The lateral fall-off is crucial for sparing organs at risk in proton therapy. It is therefore of high importance to minimise the penumbra for pencil beam scanning (PBS). Three optimisation approaches are investigated: edge-collimated uniformly weighted spots (collimation), pencil beam optimisation of uncollimated pencil beams (edge-enhancement) and the optimisation of edge-collimated pencil beams (collimated edge-enhancement). To deliver energies below 70 MeV, these strategies are evaluated in combination with the following pre-absorber methods: field-specific, fixed-thickness pre-absorption (fixed); range-specific, fixed-thickness pre-absorption (automatic); and range-specific, variable-thickness pre-absorption (variable). All techniques are evaluated by Monte Carlo simulated square fields in a water tank. For a typical air gap of 10 cm and no pre-absorber, collimation reduces the penumbra only for water equivalent ranges between 4 and 11 cm, by up to 2.2 mm. The sharpest lateral fall-off is achieved through collimated edge-enhancement, which lowers the penumbra to 2.8 mm. When using a pre-absorber, the sharpest fall-offs are obtained by combining collimated edge-enhancement with a variable pre-absorber. For edge-enhancement and large air gaps, it is crucial to minimise the amount of material in the beam. For small air gaps, however, the superior phase space of higher-energy beams can be exploited by using more material. In conclusion, collimated edge-enhancement combined with the variable pre-absorber is the recommended setting to minimise the lateral penumbra for PBS. Without a collimator, it would be favourable to use a variable pre-absorber for large air gaps and an automatic pre-absorber for small air gaps.
Cheong, Vee San; Bull, Anthony M J
2015-12-16
The choice of coordinate system and alignment of the bone will affect the quantification of mechanical properties obtained during in-vitro biomechanical testing. Where these are used in predictive models, such as finite element analysis, a faithful description of these properties is paramount. Currently, in bending and torsional tests, bones are aligned on a pre-defined fixed span based on the reference system marked out. However, large inter-specimen differences have been reported. This suggests a need for the development of a specimen-specific alignment system for use in experimental work. Eleven ovine tibiae were used in this study, and three-dimensional surface meshes were constructed from micro-computed tomography scan images. A novel, semi-automated algorithm was developed and applied to the surface meshes to align the whole bone based on its calculated principal directions. Thereafter, the code isolates the optimised location and length of each bone for experimental testing. This resulted in a lowering of the second moment of area about the chosen bending axis in the central region. More importantly, the optimisation method decreases the irregularity of the shape of the cross-sectional slices, as the unbiased estimate of the population coefficient of variation of the second moment of area decreased from a range of (0.210-0.435) to (0.145-0.317) in the longitudinal direction, indicating a minimisation of the product moment, which causes eccentric loading. Thus, this methodology serves as an important pre-step to align bones for mechanical tests or simulation work; it is optimised for each specimen, ensures repeatability, and is general enough to be applied to any long bone. Copyright © 2015 Elsevier Ltd. All rights reserved.
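The core of the principal-directions step can be sketched as follows; this is not the authors' code, and the synthetic point cloud, the covariance eigen-decomposition and the right-handedness fix are generic choices assumed for illustration. The full specimen-specific algorithm in the paper additionally isolates the optimal test span.

```python
# Hedged sketch: compute a bone mesh's principal directions from the
# covariance of its vertices and rotate the point cloud so those axes
# coincide with x, y, z (longest axis first).
import numpy as np

def align_to_principal_axes(vertices: np.ndarray) -> np.ndarray:
    """vertices: (N, 3) surface-mesh points; returns an aligned copy."""
    centred = vertices - vertices.mean(axis=0)
    cov = np.cov(centred.T)                     # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)      # ascending eigenvalues
    R = eigvecs[:, ::-1]                        # longest axis first
    if np.linalg.det(R) < 0:                    # keep a right-handed frame
        R[:, -1] *= -1
    return centred @ R

# Hypothetical elongated point cloud standing in for a tibia mesh.
rng = np.random.default_rng(1)
cloud = rng.normal(size=(1000, 3)) * np.array([30.0, 4.0, 2.0])
cloud = cloud @ np.linalg.qr(rng.normal(size=(3, 3)))[0]  # random rotation
aligned = align_to_principal_axes(cloud)
print(np.round(np.var(aligned, axis=0), 1))    # variance now sorted by axis
```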
On the analytic and numeric optimisation of airplane trajectories under real atmospheric conditions
NASA Astrophysics Data System (ADS)
Gonzalo, J.; Domínguez, D.; López, D.
2014-12-01
From the beginning of the aviation era, economic constraints have forced operators to continuously improve flight planning. Revenue depends on the cost per flight and on airspace occupancy. Many methods, the first dating from the middle of the last century, have explored analytical, numerical and artificial-intelligence resources to reach optimal flight plans. In parallel, advances in meteorology and communications allow almost real-time knowledge of atmospheric conditions and a reliable, error-bounded forecast for the near future. Thus, apart from weather risks to be avoided, airplanes can dynamically adapt their trajectories to minimise their costs. International regulators are aware of these capabilities, so it is reasonable to envisage changes that allow this dynamic planning negotiation to become operational soon. Moreover, current unmanned airplanes, very popular and often small, suffer the impact of winds and other weather conditions in the form of dramatic changes in their performance. The present paper reviews analytic and numeric solutions for typical trajectory planning problems. Analytic methods are those that solve the problem using the Pontryagin principle, where influence parameters are added to the state variables to form a split-boundary-condition differential equation problem; the system can be solved numerically (indirect optimisation) or using parameterised functions (direct optimisation). Numerical methods, on the other hand, are based on Bellman's dynamic programming (or Dijkstra's algorithm), exploiting the fact that two optimal trajectories can be concatenated to form a new optimal one if the joint point is demonstrated to belong to the final optimal solution. There are no a priori conditions identifying the best method: traditionally, analytic methods have been employed for continuous problems and numeric methods for discrete ones. In the current problem, airplane behaviour is defined by continuous equations, while wind fields are given on a discrete grid at certain time intervals. The research demonstrates the advantages and disadvantages of each method, as well as performance figures of the solutions found for typical flight conditions under static and dynamic atmospheres. This provides significant parameters to be used in the selection of solvers for optimal trajectories.
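As a toy illustration of the numeric (dynamic-programming) family, the sketch below runs Dijkstra's algorithm over a hypothetical wind grid, where each edge cost is the traversal time given a constant airspeed plus the local wind component; the grid spacing, wind field and speed values are invented, not drawn from the paper.

```python
# Schematic sketch of minimum-time routing with Dijkstra's algorithm over a
# discrete grid where the cost of each move depends on a wind field.
import heapq
import numpy as np

rng = np.random.default_rng(3)
N = 40
wind_u = rng.normal(0, 5, size=(N, N))     # hypothetical eastward wind (m/s)
airspeed = 60.0                            # constant true airspeed (m/s)
cell = 1000.0                              # grid spacing (m)

def edge_time(x, y, dx, dy):
    # Ground speed = airspeed plus the wind component along the track.
    along = wind_u[x, y] * dx / max(abs(dx) + abs(dy), 1)
    gs = max(airspeed + along, 1.0)        # avoid non-positive ground speed
    return cell * (dx * dx + dy * dy) ** 0.5 / gs

start, goal = (0, 0), (N - 1, N - 1)
dist = {start: 0.0}
pq = [(0.0, start)]
while pq:
    d, (x, y) = heapq.heappop(pq)
    if (x, y) == goal:
        break
    if d > dist.get((x, y), float("inf")):
        continue                           # stale queue entry
    for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (1, -1)]:
        nx, ny = x + dx, y + dy
        if 0 <= nx < N and 0 <= ny < N:
            nd = d + edge_time(x, y, dx, dy)
            if nd < dist.get((nx, ny), float("inf")):
                dist[(nx, ny)] = nd
                heapq.heappush(pq, (nd, (nx, ny)))

print(f"minimum-time estimate: {dist[goal]:.0f} s")
```

The concatenation property mentioned above is exactly what makes this label-setting search valid: every prefix of a shortest path is itself shortest.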
NASA Astrophysics Data System (ADS)
Sridhar, R.; Jeevananthan, S.; Dash, S. S.; Vishnuram, Pradeep
2017-05-01
Maximum Power Point Trackers (MPPTs) are power electronic conditioners used in photovoltaic (PV) systems to ensure that PV structures feed maximum power for the given ambient temperature and solar irradiation. When the PV panels are partially shaded by environmental obstructions, conventional MPPT schemes may fail to track the appropriate peak power, because multiple power peaks arise. In this work, a shuffled frog leaping algorithm (SFLA) is proposed which successfully identifies the global maximum power point among the local maxima. The SFLA MPPT is compared with the well-established conventional perturb and observe (P&O) MPPT algorithm and with a global-search particle swarm optimisation (PSO) MPPT. The simulation results reveal that the proposed algorithm is clearly advantageous over P&O, as it tracks nearly 30% more power for a given shading pattern. The credibility of the proposed SFLA is further supported by its faster convergence than PSO MPPT. The whole system is realised in the MATLAB/Simulink environment.
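A hedged sketch of the SFLA mechanics on a toy P-V curve follows; the curve, memeplex count and step rules are illustrative simplifications, not the paper's PV model or parameter values.

```python
# Shuffled frog leaping algorithm (SFLA) sketch maximising a hypothetical
# multi-peak P-V curve of a partially shaded PV string.
import numpy as np

rng = np.random.default_rng(7)

def pv_power(v):
    # Toy P-V curve with local peaks and a global peak near v = 0.62.
    return (np.sin(3.1 * v) ** 2) * np.exp(-2 * (v - 0.62) ** 2)

n_frogs, n_memeplex, iters, v_max = 30, 3, 40, 1.0
frogs = rng.uniform(0, v_max, n_frogs)

for _ in range(iters):
    frogs = frogs[np.argsort(pv_power(frogs))[::-1]]   # sort, best first
    g_best = frogs[0]
    for m in range(n_memeplex):
        idx = np.arange(m, n_frogs, n_memeplex)        # shuffle into memeplexes
        worst, best = idx[-1], idx[0]
        step = rng.random() * (frogs[best] - frogs[worst])
        trial = np.clip(frogs[worst] + step, 0, v_max)
        if pv_power(trial) <= pv_power(frogs[worst]):
            # Try leaping toward the global best, else reset randomly.
            step = rng.random() * (g_best - frogs[worst])
            trial = np.clip(frogs[worst] + step, 0, v_max)
            if pv_power(trial) <= pv_power(frogs[worst]):
                trial = rng.uniform(0, v_max)
        frogs[worst] = trial

print(f"global MPP voltage ≈ {frogs[np.argmax(pv_power(frogs))]:.3f}")
```

The random reset of hopeless frogs is what lets SFLA escape the local peaks that trap hill-climbing trackers such as P&O.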
How the nursing profession can contribute to sustainable development goals.
Benton, David; Shaffer, Franklin
2016-11-01
As of 1 January 2016, the UN sustainable development goals (SDGs) became the focus of global efforts on a wide range of development agenda. The SDGs have subsumed the work of the UN millennium development goals (MDGs), so it is timely to reflect on the contribution made by nurses and midwives, so that we can optimise the profession's contribution to the 17 SDGs. This article reports the results of a scientometrics analysis of the published literature related to the MDGs and SDGs indexed in CINAHL, which identified the underlying themes addressed by nurses and midwives. It shows how analysis demonstrates that although nursing was slow to engage with the MDG agenda, it has made some progress in contributing to SDG scholarship. So far this contribution has been narrowly focused, but the profession could contribute to all 17 of the SDG goals. Routine updates of the analysis described here could help monitor progress, identify gaps in nursing's contributions to the goals, and provide further impetus to its engagement in this major global policy initiative.
NASA Astrophysics Data System (ADS)
Grosso, Juan M.; Ocampo-Martinez, Carlos; Puig, Vicenç
2017-10-01
This paper proposes a distributed model predictive control approach designed to work in a cooperative manner for controlling flow-based networks showing periodic behaviours. Under this distributed approach, local controllers cooperate to enhance the performance of the whole flow network while avoiding the use of a coordination layer. The controllers use both the monolithic model of the network and the given global cost function to optimise the control inputs of the local controllers, taking into account the effect of their decisions on the remaining subsystems composing the entire network. In this sense, a global (all-to-all) communication strategy is considered. Although Pareto optimality cannot be reached due to the existence of non-sparse coupling constraints, asymptotic convergence to a Nash equilibrium is guaranteed. The resultant strategy is tested, and its effectiveness is shown when applied to a large-scale complex flow-based network: the Barcelona drinking water supply system.
Rigby, M
2012-01-01
To define and assess 'Consumer Health Informatics' and related emergent issues in an era of new media and of personalisation of care, and from this to define what actions need to be taken to optimise benefits and address risks. Definition of key concepts; review of health personalisation, emergent health information and communication technologies and knowledge sources available to citizens and social media; and identification of unresolved issues threatening optimal use of each. A structured review supported by citations and examples. Several new aspects of consumer health informatics are emerging, including new knowledge sources, feedback on treatments and care providers, on-line videos, and a new generation of patient experience sites including those which are for profit and seek to influence treatment paradigms. Not just the information usage, but also the potential social challenges and malicious abuses, are global issues, and also transcend the traditional health community and thus should be addressed in partnership with other global agencies.
Maillot, Matthieu; Vieux, Florent; Delaere, Fabien; Lluch, Anne; Darmon, Nicole
2017-01-01
Objective To explore the dietary changes needed to achieve nutritional adequacy across income levels at constant energy and diet cost. Materials and methods Individual diet modelling was used to design iso-caloric, nutritionally adequate optimised diets for each observed diet in a sample of adult normo-reporters aged ≥20 years (n = 1,719) from the Individual and National Dietary Survey (INCA2), 2006–2007. Diet cost was estimated from mean national food prices (2006–2007). A first set of free-cost models explored the impact of optimisation on the variation of diet cost. A second set of iso-cost models explored the dietary changes induced by the optimisation with cost set equal to the observed one. Analyses of dietary changes were conducted by income quintiles, adjusting for energy intake, sociodemographic and socioeconomic variables, and smoking status. Results The cost of observed diets increased with increasing income quintiles. In free-cost models, the optimisation increased diet cost on average (+0.22 ± 1.03 euros/d) and within each income quintile, with no significant difference between quintiles, but with systematic increases for observed costs lower than 3.85 euros/d. In iso-cost models, it was possible to design nutritionally adequate diets whatever the initial observed cost. On average, the optimisation at iso-cost increased fruits and vegetables (+171 g/day), starchy foods (+121 g/d), water and beverages (+91 g/d), and dairy products (+20 g/d), and decreased the other food groups (e.g. mixed dishes and salted snacks), leading to increased total diet weight (+300 g/d). Those changes were mostly similar across income quintiles, but lower-income individuals needed to introduce significantly more fruit and vegetables than higher-income ones. Conclusions In France, the dietary changes needed to reach nutritional adequacy without increasing cost are similar regardless of income, but may be more difficult to implement when the budget for food is lower than 3.85 euros/d. PMID:28358837
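The kind of linear programme underlying such diet models can be sketched as follows; the foods, prices, nutrient contents and constraint levels are all invented, and the real models use far richer food and nutrient sets. The iso-cost variant would add an equality constraint pinning cost to the observed value.

```python
# Toy linear-programming analogue of a free-cost diet model: minimise diet
# cost subject to fixed energy (iso-caloric) and nutrient adequacy. All
# numbers are hypothetical illustrations.
import numpy as np
from scipy.optimize import linprog

foods = ["fruit&veg", "starches", "dairy", "mixed dishes"]
price = np.array([0.40, 0.15, 0.30, 0.80])   # euros per 100 g (hypothetical)
energy = np.array([40, 150, 70, 220])        # kcal per 100 g
fibre = np.array([2.5, 2.0, 0.0, 1.0])       # g per 100 g
calcium = np.array([30, 10, 120, 40])        # mg per 100 g

# Constraints: energy fixed at 2000 kcal, fibre >= 30 g, calcium >= 900 mg.
# linprog uses A_ub @ x <= b_ub, so the >= constraints are sign-flipped.
res = linprog(
    c=price,
    A_ub=-np.array([fibre, calcium]), b_ub=-np.array([30, 900]),
    A_eq=[energy], b_eq=[2000],
    bounds=[(0, None)] * 4,
)
print(f"minimum cost: {res.fun:.2f} euros/d")
print(dict(zip(foods, np.round(res.x, 1))), "(units of 100 g)")
```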
Better powder diffractometers. II—Optimal choice of U, V and W
NASA Astrophysics Data System (ADS)
Cussen, L. D.
2007-12-01
This article presents a technique for optimising constant wavelength (CW) neutron powder diffractometers (NPDs) using conventional nonlinear least squares methods. This is believed to be the first such design optimisation for a neutron spectrometer. The validity of this approach and discussion should extend beyond the Gaussian element approximation used and also to instruments using different radiation, such as X-rays. This approach could later be extended to include vertical and perhaps horizontal focusing monochromators and probably other types of instruments, such as three-axis spectrometers. It is hoped that this approach will help in comparisons of CW and time-of-flight (TOF) instruments. Recent work showed that many different beam element combinations can give identical resolution on CW NPDs and presented a procedure to find these combinations and also an "optimum" choice of detector collimation. Those results remove the previous redundancy in the description of instrument performance and permit a least squares optimisation of design. New inputs are needed and are identified as the sample plane spacing (dS) of interest in the measurement. The optimisation requires a "quality factor", QPD, chosen here to be minimising the worst Bragg peak separation ability over some measurement range (dS) while maintaining intensity. Any other QPD desired could be substituted. It is argued that high resolution and high intensity powder diffractometers (HRPDs and HIPDs) should have similar designs adjusted by a single scaling factor. Simulated comparisons are described, suggesting significant improvements in performance for CW HIPDs. Optimisation with unchanged wavelength suggests improvements by factors of about 2 for HRPDs and 25 for HIPDs. A recently quantified design trade-off between the maximum line intensity possible and the degree of variation of angular resolution over the scattering angle range leads to efficiency gains at short wavelengths. This in turn leads in practice to another trade-off between this efficiency gain and losses at short wavelength due to technical effects. The exact gains from varying wavelength depend on the details of the short wavelength technical losses. Simulations suggest that the total potential PD performance gains may be very significant: factors of about 3 for HRPDs and more than 90 for HIPDs.
Airfoil Shape Optimization based on Surrogate Model
NASA Astrophysics Data System (ADS)
Mukesh, R.; Lingadurai, K.; Selvakumar, U.
2018-02-01
Engineering design problems always require an enormous amount of real-time experiments and computational simulations in order to assess and ensure that the design objectives are met subject to various constraints. In most cases, the computational resources and time required per simulation are large. In applications such as sensitivity analysis and design optimisation, where thousands or millions of simulations must be carried out, this creates serious difficulties for designers. Nowadays approximation models, otherwise called surrogate models (SMs), are widely employed to reduce the computational resources and time needed to analyse engineering systems. Various approaches such as Kriging, neural networks, polynomials and Gaussian processes are used to construct the approximation models. The primary intention of this work is to employ the k-fold cross-validation approach to study and evaluate the influence of various theoretical variogram models on the accuracy of the surrogate model construction. Ordinary Kriging and design of experiments (DOE) approaches are used to construct the SMs by approximating the panel and viscous solution algorithms that are primarily used to solve the flow around airfoils and aircraft wings. The method of coupling the SMs with a suitable optimisation scheme to carry out an aerodynamic design optimisation process for airfoil shapes is also discussed.
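A small sketch of the surrogate-plus-cross-validation idea, using scikit-learn Gaussian process regression as an ordinary-Kriging stand-in and a toy 1-D function in place of the panel/viscous solver; the kernels here play the role of the theoretical variogram models compared in the paper.

```python
# Build Kriging-style surrogates of an "expensive" function from a DOE
# sample and score two covariance models with k-fold cross-validation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.uniform(0, 10, size=(40, 1))                 # DOE sample of a design variable
y = np.sin(X).ravel() + 0.05 * rng.normal(size=40)   # expensive-solver stand-in

for name, kernel in [("RBF", RBF(length_scale=1.0)),
                     ("Matern 5/2", Matern(length_scale=1.0, nu=2.5))]:
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    scores = cross_val_score(gp, X, y, cv=5, scoring="r2")  # 5-fold CV
    print(f"{name}: mean R^2 = {scores.mean():.3f}")
```

Once a covariance model validates well, the cheap surrogate replaces the solver inside the optimisation loop.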
NASA Astrophysics Data System (ADS)
Azadeh, A.; Foroozan, H.; Ashjari, B.; Motevali Haghighi, S.; Yazdanparast, R.; Saberi, M.; Torki Nejad, M.
2017-10-01
Information systems (ISs) and information technologies (ITs) play a critical role in large, complex gas corporations. Many factors, such as human, organisational and environmental factors, affect an IS in an organisation; investigating IS success is therefore a complex problem. Moreover, because of the competitive business environment and the high volume of information flow in organisations, new issues such as resilient ISs and successful customer relationship management (CRM) have emerged. A resilient IS provides sustainable delivery of information to internal and external customers. This paper presents an integrated approach to enhance and optimise the performance of each component of a large IS based on CRM and resilience engineering (RE) in a gas company. The performance enhancement can help ISs perform business tasks efficiently. The data are collected from standard questionnaires and then analysed by data envelopment analysis, selecting the optimal mathematical programming approach. The selected model is validated and verified by the principal component analysis method. Finally, CRM and RE factors are identified as influential factors through sensitivity analysis for this particular case study. To the best of our knowledge, this is the first study of performance assessment and optimisation of a large IS by combined RE and CRM.
A new paradigm in personal dosimetry using LiF:Mg,Cu,P.
Cassata, J R; Moscovitch, M; Rotunda, J E; Velbeck, K J
2002-01-01
The United States Navy has been monitoring personnel for occupational exposure to ionising radiation since 1947. Film was used exclusively until 1973, when thermoluminescence dosemeters were introduced; they remain in use to the present time. In 1994, a joint research project between the Naval Dosimetry Center, Georgetown University, and Saint-Gobain Crystals and Detectors (formerly Bicron RMP, formerly Harshaw TLD) began to develop a state-of-the-art thermoluminescent dosimetry system. The study was conducted from a large-scale dosimetry processor point of view, with emphasis on a systems approach. Significant improvements were achieved by replacing the LiF:Mg,Ti with LiF:Mg,Cu,P TL elements, owing to their significantly increased sensitivity, linearity, and negligible fading. Dosemeter filters were optimised for gamma and X-ray energy discrimination using Monte Carlo modelling (MCNP), resulting in significant improvements in accuracy and precision. Further improvements were achieved through the use of neural-network-based dose calculation algorithms. Both back-propagation and functional-link methods were implemented, and the data compared, with essentially the same results. Several operational aspects of the system are discussed, including (1) background subtraction using control dosemeters, (2) selection criteria for control dosemeters, (3) optimisation of the TLD readers, (4) calibration methodology, and (5) optimisation of the heating profile.
An improved PSO-SVM model for online recognition of defects in eddy current testing
NASA Astrophysics Data System (ADS)
Liu, Baoling; Hou, Dibo; Huang, Pingjie; Liu, Banteng; Tang, Huayi; Zhang, Wubo; Chen, Peihua; Zhang, Guangxin
2013-12-01
Accurate and rapid recognition of defects is essential for the structural integrity and health monitoring of in-service devices using eddy current (EC) non-destructive testing. This paper introduces a novel model-free method that includes three main modules: a signal pre-processing module, a classifier module and an optimisation module. In the signal pre-processing module, a two-stage differential structure is proposed to suppress the lift-off fluctuation that could contaminate the EC signal. In the classifier module, a multi-class support vector machine (SVM) based on the one-against-one strategy is utilised for its good accuracy. In the optimisation module, the optimal parameters of the classifier are obtained by an improved particle swarm optimisation (IPSO) algorithm. The proposed IPSO technique improves the convergence performance of the basic PSO through the following strategies: nonlinear processing of the inertia weight, and the introduction of a black-hole model and a simulated annealing model with extremum disturbance. The good generalisation ability of the IPSO-SVM model has been validated by adding additional specimens to the testing set. Experiments show that the proposed algorithm achieves higher recognition accuracy and efficiency than other well-known classifiers, and its superiority is more marked with smaller training sets, which suits it to online application.
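One ingredient of the IPSO named above, a nonlinearly decreasing inertia weight, can be sketched as follows; the black-hole and simulated-annealing refinements are omitted, and the objective is a toy stand-in for the SVM parameter-tuning error surface.

```python
# Minimal PSO sketch with a nonlinearly (quadratically) decaying inertia
# weight instead of the usual linear ramp. All parameters are generic.
import numpy as np

rng = np.random.default_rng(11)

def objective(p):
    # Hypothetical error surface over two classifier parameters.
    return (p[:, 0] - 1.5) ** 2 + (p[:, 1] + 0.5) ** 2 + np.sin(3 * p[:, 0]) ** 2

n, dim, iters, c1, c2 = 20, 2, 60, 2.0, 2.0
w_max, w_min = 0.9, 0.4
x = rng.uniform(-3, 3, (n, dim))
v = np.zeros((n, dim))
pbest, pbest_f = x.copy(), objective(x)
gbest = pbest[np.argmin(pbest_f)]

for t in range(iters):
    # Nonlinear inertia decay: stays exploratory early, contracts late.
    w = w_min + (w_max - w_min) * (1 - t / iters) ** 2
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = objective(x)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]

print(f"best point: {np.round(gbest, 3)}, value: {pbest_f.min():.4f}")
```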
Sabater, Carlos; Corzo, Nieves; Olano, Agustín; Montilla, Antonia
2018-06-15
The aim of this study was to optimise pectin extraction from artichoke by-products with Celluclast® 1.5 L using an experimental design analysed by response-surface methodology (RSM). The variables optimised were artichoke by-product powder concentration (2-7%, X1), enzyme dose (2.2-13.3 U g⁻¹, X2) and extraction time (6-24 h, X3). The responses studied were galacturonic acid (GalA) content (R² 93.9), pectic neutral sugars content (R² 92.8) and pectin yield (R² 88.6). In the optimum extraction conditions (X1 = 6.5%; X2 = 10.1 U g⁻¹; X3 = 27.2 h), the pectin yield was 176 mg g⁻¹ dry matter (DM). Considering 27.2 h of treatment as the +α value given by the design, the extraction time was increased up to 48 h, obtaining a yield of 221 mg g⁻¹ DM. The optimised enzymatic method allows artichoke pectin to be obtained with good yield, high GalA (720 mg g⁻¹ DM) and arabinose (127.6 mg g⁻¹ DM) contents and a degree of methylation of 19.5%. Copyright © 2018 Elsevier Ltd. All rights reserved.
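The response-surface step can be illustrated with a quadratic fit in the three coded variables; the design points and yields below are invented stand-ins, and the real study used a formal experimental design rather than random sampling.

```python
# Fit a second-order polynomial in three coded variables (X1 powder conc.,
# X2 enzyme dose, X3 time) to hypothetical yields, then locate the fitted
# optimum on a grid, mimicking the RSM workflow.
import itertools
import numpy as np

rng = np.random.default_rng(9)
X = rng.uniform(-1, 1, size=(15, 3))              # coded design points
yield_ = 150 + 20*X[:, 0] + 10*X[:, 2] - 8*X[:, 0]**2 + rng.normal(0, 2, 15)

def quad_features(X):
    # Columns: 1, x1, x2, x3, x1^2, x2^2, x3^2, x1x2, x1x3, x2x3
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1**2, x2**2, x3**2, x1*x2, x1*x3, x2*x3])

beta, *_ = np.linalg.lstsq(quad_features(X), yield_, rcond=None)

# Evaluate the fitted surface on a coarse grid to find the optimum region.
grid = np.array(list(itertools.product(np.linspace(-1, 1, 21), repeat=3)))
pred = quad_features(grid) @ beta
best = grid[np.argmax(pred)]
print(f"predicted optimum (coded): {np.round(best, 2)}, "
      f"yield ≈ {pred.max():.1f}")
```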
Oladejo, Ayobami Olayemi; Ma, Haile
2016-08-01
Sweet potato is a highly nutritious tuber crop rich in β-carotene. Osmotic dehydration is a pretreatment for the drying of fruit and vegetables. Recently, ultrasound technology has been applied in food processing because of its numerous advantages, which include time savings and little damage to food quality. There is therefore a need to investigate and optimise the process parameters [frequency (20-50 kHz), time (10-30 min) and sucrose concentration (20-60% w/v)] for ultrasound-assisted osmotic dehydration of sweet potato using response surface methodology. The optimised values obtained were a frequency of 33.93 kHz, a time of 30 min and a sucrose concentration of 35.69% (w/v), giving predicted values of 21.62%, 4.40% and 17.23% for water loss, solid gain and weight reduction, respectively. The water loss and weight reduction increased as the ultrasound frequency increased from 20 to 35 kHz and then decreased as the frequency increased from 35 to 50 kHz. The results from this work show that a low ultrasound frequency favours the osmotic dehydration of sweet potato and also reduces the amount of raw material (sucrose) needed. © 2015 Society of Chemical Industry.
Dong, Xu-Yan; Kong, Fan-Pi; Yuan, Gang-You; Wei, Fang; Jiang, Mu-Lan; Li, Guang-Ming; Wang, Zhan; Zhao, Yuan-Di; Chen, Hong
2012-01-01
Phytosterol liposomes were prepared using the thin-film method and used to encapsulate nattokinase (NK). In order to obtain a high encapsulation efficiency within the liposome, an orthogonal experiment (L9(3⁴)) was applied to optimise the preparation conditions. The molar ratio of lecithin to phytosterols, the NK activity and the mass ratio of mannite to lecithin were the main factors influencing the encapsulation efficiency of the liposomes; based on the results of a single-factor test, these three factors were chosen for this study. The optimum preparation conditions were determined to be as follows: a molar ratio of lecithin to phytosterol of 2:1, an NK activity of 2500 U mL⁻¹ and a mass ratio of mannite to lecithin of 3:1. Under these optimised conditions, an encapsulation efficiency of 65.25% was achieved, which agreed closely with the predicted result. Moreover, the zeta potential, size distribution and microstructure of the prepared liposomes were measured: the zeta potential was -51 ± 3 mV and the mean diameter was 194.1 nm. Scanning electron microscopy showed that the phytosterol liposomes were round and regular in shape and exhibited no aggregation.
Optimizing Polymer Infusion Process for Thin Ply Textile Composites with Novel Matrix System
Bhudolia, Somen K.; Perrotey, Pavel; Joshi, Sunil C.
2017-01-01
For the mass production of structural composites, the use of different textile patterns, custom preforming, room-temperature-cure high-performance polymers and simple manufacturing approaches is desired. Woven fabrics are widely used for infusion processes owing to their high permeability, but their localised mechanical performance suffers from the inherent associated crimps. The current investigation deals with manufacturing lightweight textile carbon non-crimp fabric (NCF) composites with a room-temperature-cure epoxy and a novel liquid methyl methacrylate (MMA) thermoplastic matrix, Elium®. The vacuum assisted resin infusion (VARI) process is chosen as a cost-effective manufacturing technique. Optimisation of the process parameters is required for thin NCFs because of the intrinsic resistance they offer to polymer flow. Cycles of repetitive manufacturing studies were carried out to optimise the NCF-thermoset (TS) and NCF-novel reactive thermoplastic (TP) composites. It was noticed that controlled and optimised use of the flow mesh, vacuum level and flow speed during resin infusion plays a significant part in deciding the final quality of the fabricated composites. The material selections, the challenges met during manufacturing and the methods to overcome them are discussed in this paper. An optimal three-stage vacuum technique developed to manufacture TP and TS composites with high fibre volume and low void content is established and presented. PMID:28772654
NASA Astrophysics Data System (ADS)
Hurford, Anthony; Harou, Julien
2015-04-01
Climate change has challenged conventional methods of planning water resources infrastructure investment, which rely on the stationarity of time-series data. It is not clear how best to use projections of future climatic conditions. Many-objective simulation-optimisation and trade-off analysis using evolutionary algorithms has been proposed as an approach to addressing complex planning problems with multiple conflicting objectives. The search for promising assets and policies can be carried out across a range of climate projections to identify the configurations of infrastructure investment shown by model simulation to be robust under diverse future conditions. Climate projections can be used in different ways within a simulation model to represent the range of possible future conditions and to understand how optimal investments vary with hydrological conditions. We compare two approaches: optimising over an ensemble of different 20-year flow and PET time-series projections, and optimising separately for individual future scenarios built synthetically from the original ensemble. Comparing the trade-off curves and surfaces generated by the two approaches helps to understand the limits and benefits of optimising under different sets of conditions. The comparison is made for the Tana Basin in Kenya, where climate change combined with multiple conflicting objectives of water management and infrastructure investment makes decision-making particularly challenging.
Peters, Sonja; Kaal, Erwin; Horsting, Iwan; Janssen, Hans-Gerd
2012-02-24
A new method is presented for the analysis of phenolic acids in plasma based on ion-pairing 'Micro-extraction in packed sorbent' (MEPS) coupled on-line to in-liner derivatisation-gas chromatography-mass spectrometry (GC-MS). The ion-pairing reagent served a dual purpose. It was used both to improve extraction yields of the more polar analytes and as the methyl donor in the automated in-liner derivatisation method. In this way, a fully automated procedure for the extraction, derivatisation and injection of a wide range of phenolic acids in plasma samples has been obtained. An extensive optimisation of the extraction and derivatisation procedure has been performed. The entire method showed excellent repeatabilities of under 10% and linearities of 0.99 or better for all phenolic acids. The limits of detection of the optimised method for the majority of phenolic acids were 10 ng/mL or lower, with three phenolic acids having less favourable detection limits of around 100 ng/mL. Finally, the newly developed method has been applied in a human intervention trial in which the bioavailability of polyphenols from wine and tea was studied. Forty plasma samples could be analysed within 24 h in a fully automated method including sample extraction, derivatisation and gas chromatographic analysis. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Xu, Yunjun; Remeikas, Charles; Pham, Khanh
2014-03-01
Cooperative trajectory planning is crucial for networked vehicles to respond rapidly in cluttered environments and has a significant impact on many applications such as air traffic or border security monitoring and assessment. One of the challenges in cooperative planning is to find a computationally efficient algorithm that can accommodate both the complexity of the environment and real hardware and configuration constraints of vehicles in the formation. Inspired by a local pursuit strategy observed in foraging ants, feasible and optimal trajectory planning algorithms are proposed in this paper for a class of nonlinear constrained cooperative vehicles in environments with densely populated obstacles. In an iterative hierarchical approach, the local behaviours, such as the formation stability, obstacle avoidance, and individual vehicle's constraints, are considered in each vehicle's (i.e. follower's) decentralised optimisation. The cooperative-level behaviours, such as the inter-vehicle collision avoidance, are considered in the virtual leader's centralised optimisation. Early termination conditions are derived to reduce the computational cost by not wasting time in the local-level optimisation if the virtual leader trajectory does not satisfy those conditions. The expected advantages of the proposed algorithms are (1) the formation can be globally asymptotically maintained in a decentralised manner; (2) each vehicle decides its local trajectory using only the virtual leader and its own information; (3) the formation convergence speed is controlled by one single parameter, which makes it attractive for many practical applications; (4) nonlinear dynamics and many realistic constraints, such as the speed limitation and obstacle avoidance, can be easily considered; (5) inter-vehicle collision avoidance can be guaranteed in both the formation transient stage and the formation steady stage; and (6) the computational cost in finding both the feasible and optimal solutions is low. In particular, the feasible solution can be computed in a very quick fashion. The minimum energy trajectory planning for a group of robots in an obstacle-laden environment is simulated to showcase the advantages of the proposed algorithms.
MO-DE-204-03: Radiology Dose Optimisation - An Australian Perspective
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schick, D.
2016-06-15
The main topic of the session is to show how dose optimization is being implemented in various regions of the world, including Europe, Australia, North America and other regions. A multi-national study conducted under the International Atomic Energy Agency (IAEA) across more than 50 less-resourced countries gave insight into patient radiation doses and safety practices in CT, mammography, radiography and interventional procedures, for both children and adults. An important outcome was capability development in dose assessment and management. An overview of recent European projects related to CT radiation dose and optimization, for both adults and children, will be presented. Existing data on DRLs, together with a proposed European methodology for establishing and using DRLs in paediatric radiodiagnostic imaging and interventional radiology practices, will be shown. Compared with much of Europe at least, many Australian imaging practices are relatively new to the task of diagnostic imaging dose optimisation. In 2008 the Australian Government prescribed a requirement to periodically compare patient radiation doses with diagnostic reference levels (DRLs), where DRLs have been established. Until recently, Australia had only established DRLs for computed tomography (CT). Regardless, both professional society and individual efforts to improve data collection and develop optimisation strategies across a range of modalities continue. Progress in this field, principally with respect to CT and interventional fluoroscopy, will be presented. In the US, dose reduction and optimization efforts for computed tomography have been promoted and mandated by several organizations and accrediting entities. This presentation will cover the general motivation, implementation, and implications of such efforts. Learning Objectives: Understand the importance of dose optimization in Diagnostic Radiology. See how this goal is achieved in different regions of the World. Learn about the global trend in dose optimization and future perspectives. Disclosure (M. Rehani): the work was part of the work of the IAEA, where I was an employee; the IAEA is a United Nations organization.
Rodríguez, Pablo Fernández; Marchante-Gayón, Juan Manuel; Sanz-Medel, Alfredo
2006-01-15
Ultrasonic slurry sampling electrothermal vaporisation inductively coupled plasma mass spectrometry (USS-ETV-ICP-MS) was applied to the elemental analysis of silicate based minerals, such as talc or quartz, without any pre-treatment except the grinding of the sample. The electrothermal vaporisation device consists of a tungsten coil connected to a home-made power supply. The voltage program, carrier gas flow rate and sonication time were optimised in order to obtain the best sensitivity for elements determined. The relationship between the amount of sample in the slurry and the signal intensity was also evaluated. Unfortunately, in all cases, quantification had to be carried out by the standard additions method owing to the strong matrix interferences. The global precision of the proposed method was always better than 12%. The limits of detection, calculated as three times the standard deviation of the blank value divided by the slope of the calibration curve, were between 0.5 ng/g for As and 3.5 ng/g for Ba. The method was validated by comparing the concentrations found for Cu, Mn, Cr, V, Li, Pb, Sn, Mg, U, Ba, Sr, Zn, Sb, Rb and Ce using the proposed methodology with those obtained by conventional nebulisation ICP-MS after acid digestion of the samples in a microwave oven. The concentration range in the solid samples was between 0.2 microg/g for Cr and 60 microg/g for Ba. All results were statistically in agreement with those found by conventional nebulisation.
From staff-mix to skill-mix and beyond: towards a systemic approach to health workforce management
2009-01-01
Throughout the world, countries are experiencing shortages of health care workers. Policy-makers and system managers have developed a range of methods and initiatives to optimise the available workforce and achieve the right number and mix of personnel needed to provide high-quality care. Our literature review found that such initiatives often focus more on staff types than on staff members' skills and the effective use of those skills. Our review describes evidence about the benefits and pitfalls of current approaches to human resources optimisation in health care. We conclude that in order to use human resources most effectively, health care organisations must consider a more systemic approach - one that accounts for factors beyond narrowly defined human resources management practices and includes organisational and institutional conditions. PMID:20021682
Principles of Experimental Design for Big Data Analysis.
Drovandi, Christopher C; Holmes, Christopher; McGree, James M; Mengersen, Kerrie; Richardson, Sylvia; Ryan, Elizabeth G
2017-08-01
Big Datasets are endemic, but are often notoriously difficult to analyse because of their size, heterogeneity and quality. The purpose of this paper is to open a discourse on the potential for modern decision theoretic optimal experimental design methods, which by their very nature have traditionally been applied prospectively, to improve the analysis of Big Data through retrospective designed sampling in order to answer particular questions of interest. By appealing to a range of examples, it is suggested that this perspective on Big Data modelling and analysis has the potential for wide generality and advantageous inferential and computational properties. We highlight current hurdles and open research questions surrounding efficient computational optimisation in using retrospective designs, and in part this paper is a call to the optimisation and experimental design communities to work together in the field of Big Data analysis.
Operational modes, health, and status monitoring
NASA Astrophysics Data System (ADS)
Taljaard, Corrie
2016-08-01
System Engineers must fully understand the system, its support system and its operational environment to optimise the design. Operations and Support Managers must also identify the correct metrics to measure performance and to manage the operations and support organisation. Reliability Engineering and Support Analysis provide methods to design a Support System and to optimise the Availability of a complex system. Availability modelling and Failure Analysis during the design are intended to influence the design and to develop an optimum maintenance plan for a system. The remote site locations of the SKA Telescopes place emphasis on availability, failure identification and fault isolation. This paper discusses the use of Failure Analysis and a Support Database to design a Support and Maintenance plan for the SKA Telescopes. It also describes the use of modelling to develop an availability dashboard and performance metrics.
Thermal buckling optimisation of composite plates using firefly algorithm
NASA Astrophysics Data System (ADS)
Kamarian, S.; Shakeri, M.; Yas, M. H.
2017-07-01
Composite plates play a very important role in engineering applications, especially in the aerospace industry. The thermal buckling of such components is of great importance and must be known to achieve an appropriate design. This paper deals with the stacking sequence optimisation of laminated composite plates for maximising the critical buckling temperature, using a powerful meta-heuristic called the firefly algorithm (FA), which is based on the flashing behaviour of fireflies. The main objective of the present work is to show the ability of FA in the optimisation of composite structures. The performance of FA is compared with the results reported in previously published works using other algorithms, which demonstrates the efficiency of FA in the stacking sequence optimisation of laminated composite structures.
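A compact firefly-algorithm sketch follows, on a continuous toy objective standing in for the critical buckling temperature; alpha, beta0 and gamma are generic textbook values, and the paper's actual design variables are discrete ply angles rather than the continuous 2-D point used here.

```python
# Firefly algorithm sketch: each firefly moves toward every brighter one,
# with attractiveness decaying with squared distance, plus a random walk.
import numpy as np

rng = np.random.default_rng(13)

def brightness(x):
    # Toy objective to maximise (peak at the origin).
    return -np.sum(x**2, axis=-1)

n, dim, iters = 25, 2, 100
alpha, beta0, gamma = 0.2, 1.0, 1.0
x = rng.uniform(-4, 4, (n, dim))

for _ in range(iters):
    f = brightness(x)
    for i in range(n):
        for j in range(n):
            if f[j] > f[i]:                         # move i toward brighter j
                r2 = np.sum((x[i] - x[j])**2)
                beta = beta0 * np.exp(-gamma * r2)  # attractiveness decays
                x[i] += beta * (x[j] - x[i]) + alpha * rng.normal(size=dim)
    alpha *= 0.97                                   # anneal the random walk

best = x[np.argmax(brightness(x))]
print(f"best design ≈ {np.round(best, 3)}")
```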
Distributed convex optimisation with event-triggered communication in networked systems
NASA Astrophysics Data System (ADS)
Liu, Jiayun; Chen, Weisheng
2016-12-01
This paper studies the distributed convex optimisation problem over directed networks. Motivated by practical considerations, we propose a novel distributed zero-gradient-sum optimisation algorithm with event-triggered communication, in which communication and control updates occur only at discrete instants when a predefined condition is satisfied. Compared with time-driven distributed optimisation algorithms, the proposed algorithm therefore has the advantages of lower energy consumption and lower communication cost. Based on Lyapunov approaches, we show that the proposed algorithm makes the system states converge to the solution of the problem exponentially fast, and that Zeno behaviour is excluded. Finally, a simulation example is given to illustrate the effectiveness of the proposed algorithm.
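The event-triggered idea can be illustrated for quadratic local costs on an undirected ring, where a zero-gradient-sum-style update started at the local minimisers preserves the gradient-sum invariant; the threshold, gain and topology are illustrative, and the paper's algorithm for directed networks is more general than this sketch.

```python
# Hedged sketch: agents minimise sum_i (x - a_i)^2 / 2 on a ring, updating
# from neighbours' *last broadcast* states and broadcasting only when their
# own state has drifted by more than a threshold (event-triggered).
import numpy as np

a = np.array([1.0, 4.0, -2.0, 3.0, 0.5])    # local minimisers; optimum = mean
n, steps, eta, thresh = len(a), 300, 0.1, 0.01
x = a.copy()                                 # zero-gradient-sum initialisation
x_hat = x.copy()                             # last broadcast states
broadcasts = 0

for _ in range(steps):
    for i in range(n):                       # event-triggered broadcasts
        if abs(x[i] - x_hat[i]) > thresh:
            x_hat[i] = x[i]
            broadcasts += 1
    # Symmetric Laplacian update on broadcast states preserves sum(x), i.e.
    # the zero-gradient-sum invariant for these quadratic costs, so the
    # consensus value is the global optimum (up to threshold error).
    x = x + eta * np.array([x_hat[(i - 1) % n] + x_hat[(i + 1) % n]
                            - 2 * x_hat[i] for i in range(n)])

print(f"states: {np.round(x, 3)}  (global optimum {a.mean():.3f})")
print(f"broadcasts: {broadcasts} of {steps * n} possible")
```

The broadcast count illustrates the claimed communication saving relative to a time-driven scheme, which would transmit at every step.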
A Bayesian Approach for Sensor Optimisation in Impact Identification
Mallardo, Vincenzo; Sharif Khodaei, Zahra; Aliabadi, Ferri M. H.
2016-01-01
This paper presents a Bayesian approach for optimising the position of sensors aimed at impact identification in composite structures under operational conditions. The uncertainty in the sensor data has been represented by statistical distributions of the recorded signals. An optimisation strategy based on the genetic algorithm is proposed to find the best sensor combination for locating impacts on composite structures. A Bayesian-based objective function is adopted in the optimisation procedure as an indicator of the performance of meta-models developed for different sensor combinations to locate various impact events. To represent a real structure under operational load and to increase the reliability of the Structural Health Monitoring (SHM) system, the probability of malfunctioning sensors is included in the optimisation. The reliability and robustness of the procedure are tested with experimental and numerical examples. Finally, the proposed optimisation algorithm is applied to a composite stiffened panel for both uniform and non-uniform probabilities of impact occurrence. PMID:28774064
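As a toy illustration of GA-based sensor selection (not the authors' implementation), the sketch below evolves k-sensor subsets of N candidate positions; the spread-based score is a made-up stand-in for the Bayesian objective, which could be extended with a malfunction penalty.

```python
# Genetic algorithm over k-subsets of candidate sensor positions.
import numpy as np

rng = np.random.default_rng(17)
N, k, pop_size, gens = 30, 6, 40, 80
coords = rng.uniform(0, 1, (N, 2))            # candidate sensor positions

def fitness(idx):
    # Stand-in score: spread sensors out (max-min pairwise distance) so
    # impacts anywhere on the panel are seen by a nearby sensor.
    pts = coords[list(idx)]
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    return d[np.triu_indices(k, 1)].min()

def crossover(p1, p2):
    # Child draws k distinct sensors from the union of both parents.
    return set(rng.choice(list(p1 | p2), size=k, replace=False))

def mutate(ind):
    if rng.random() < 0.3:                    # swap one sensor for a new one
        out = rng.choice(list(ind))
        pool = list(set(range(N)) - ind)
        ind = (ind - {out}) | {int(rng.choice(pool))}
    return ind

pop = [set(rng.choice(N, size=k, replace=False).tolist())
       for _ in range(pop_size)]
for _ in range(gens):
    elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
    children = [mutate(crossover(elite[rng.integers(len(elite))],
                                 elite[rng.integers(len(elite))]))
                for _ in range(pop_size - len(elite))]
    pop = elite + children                    # truncation selection

best = max(pop, key=fitness)
print(f"best sensor set: {sorted(best)}, min spacing: {fitness(best):.3f}")
```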
Optimisation of active suspension control inputs for improved vehicle handling performance
NASA Astrophysics Data System (ADS)
Čorić, Mirko; Deur, Joško; Kasać, Josip; Tseng, H. Eric; Hrovat, Davor
2016-11-01
Active suspension is commonly considered under the framework of vertical vehicle dynamics control aimed at improvements in ride comfort. This paper uses a collocation-type control variable optimisation tool to investigate to what extent the fully active suspension (FAS) application can be broadened to the task of vehicle handling/cornering control. The optimisation approach is first applied to FAS-only actuator configurations and three types of double lane-change manoeuvres. The optimisation results are used to gain insight into the different control mechanisms that FAS uses to improve the handling performance in terms of path-following error reduction. For the same manoeuvres, the FAS performance is compared with the performance of different active steering and active differential actuators. The optimisation study is finally extended to combined FAS and active front- and/or rear-steering configurations to investigate whether they can use their complementary control authorities (over the vertical and lateral vehicle dynamics, respectively) to further improve the handling performance.
Shape Optimisation of Holes in Loaded Plates by Minimisation of Multiple Stress Peaks
2015-04-01
Witold Waldman and Manfred...
A method is presented for minimising the peak tangential stresses on multiple segments around the boundary of a hole in a uniaxially-loaded or biaxially-loaded plate.
Design optimisation of a TOF-based collimated camera prototype for online hadrontherapy monitoring
NASA Astrophysics Data System (ADS)
Pinto, M.; Dauvergne, D.; Freud, N.; Krimmer, J.; Letang, J. M.; Ray, C.; Roellinghoff, F.; Testa, E.
2014-12-01
Hadrontherapy is an innovative radiation therapy modality whose key advantage is the target conformality allowed by the physical properties of ion species. However, in order to fully exploit its potential, online monitoring is required to assess the treatment quality, notably with monitoring devices relying on the detection of secondary radiation. Herein a method based on Monte Carlo simulations is presented to optimise a multi-slit collimated camera, employing time-of-flight selection of prompt gamma rays, for use in a clinical scenario. In addition, an analytical tool based on the Monte Carlo data is developed to predict the expected precision for a given geometrical configuration. Such a method follows the clinical workflow requirement for a solution that is simultaneously relatively accurate and fast. Two different camera designs are proposed, considering different endpoints based on the trade-off between camera detection efficiency and spatial resolution, to be used in a proton therapy treatment with active dose delivery and assuming a homogeneous target.
Optimisation of intradermal DNA electrotransfer for immunisation.
Vandermeulen, Gaëlle; Staes, Edith; Vanderhaeghen, Marie Lise; Bureau, Michel Francis; Scherman, Daniel; Préat, Véronique
2007-12-04
The development of DNA vaccines requires appropriate delivery technologies. Electrotransfer is one of the most efficient methods of non-viral gene transfer. In the present study, intradermal DNA electrotransfer was first optimised. Strong effects of the injection method and the dose of DNA on luciferase expression were demonstrated. Pre-treatments were evaluated to enhance DNA diffusion in the skin, but neither hyaluronidase injection nor iontophoresis improved the efficiency of intradermal DNA electrotransfer. Then, DNA immunisation with a weakly immunogenic model antigen, luciferase, was investigated. After intradermal injection of the plasmid encoding luciferase, electrotransfer (HV 700 V/cm, 100 μs; LV 200 V/cm, 400 ms) was required to induce an immune response. The response was Th1-shifted compared with immunisation with the recombinant luciferase protein. Finally, DNA electrotransfer into the skin, the muscle or the ear pinna was compared. Muscle DNA electrotransfer resulted in the highest luciferase expression and the best IgG response. Nevertheless, electrotransfer into the skin, the muscle and the ear pinna all resulted in IFN-gamma secretion by luciferase-stimulated splenocytes, suggesting that an efficient Th1 response was induced in all cases.
Kell, Douglas B
2012-01-01
A considerable number of areas of bioscience, including gene and drug discovery, metabolic engineering for the biotechnological improvement of organisms, and the processes of natural and directed evolution, are best viewed in terms of a ‘landscape’ representing a large search space of possible solutions or experiments populated by a considerably smaller number of actual solutions that then emerge. This is what makes these problems ‘hard’, but as such these are to be seen as combinatorial optimisation problems that are best attacked by heuristic methods known from that field. Such landscapes, which may also represent or include multiple objectives, are effectively modelled in silico, with modern active learning algorithms such as those based on Darwinian evolution providing guidance, using existing knowledge, as to what is the ‘best’ experiment to do next. An awareness, and the application, of these methods can thereby enhance the scientific discovery process considerably. This analysis fits comfortably with an emerging epistemology that sees scientific reasoning, the search for solutions, and scientific discovery as Bayesian processes. PMID:22252984
Ugulu, Rex Asibuodu; Allen, Stephen
2017-12-01
The data presented in this article are original data from "Investigating the role of onsite learning in the optimisation of craft gang's productivity in the construction industry". This article describes the constraints influencing craft gang's productivity and the influence of onsite learning on the blockwork craft gang's productivity. It also presents the method of data collection, which used semi-structured interviews and observation to collect data from construction organisations. We provide statistics on the most important constraints affecting the craft gang's productivity using 3-D bar charts. In addition, we computed correlation coefficients and a regression model of the influence of onsite learning on the craft gang's productivity, using man-hours as the dependent variable. The relationship between blockwork inputs and cycle numbers was determined at the 5% significance level. Finally, we present data on the application of learning curve theory, using the unit straight-line model equations, and compute the learning rate of the observed craft gang's repetitive blockwork.
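The unit straight-line model mentioned above is y_x = a·x^b, linear in log-log space, with learning rate r = 2^b (the factor by which unit man-hours scale each time cumulative output doubles). A minimal fit on hypothetical man-hour data might look as follows.

```python
# Fit the unit straight-line learning-curve model y_x = a * x^b by linear
# regression on logs; the learning rate is r = 2^b. Data are hypothetical.
import numpy as np

cycles = np.arange(1, 9)                       # repetitive blockwork cycles
man_hours = np.array([10.0, 8.7, 8.1, 7.6, 7.3, 7.0, 6.8, 6.7])

b, log_a = np.polyfit(np.log(cycles), np.log(man_hours), 1)
learning_rate = 2 ** b                         # hours scale by r per doubling
print(f"a = {np.exp(log_a):.2f} man-hours, b = {b:.3f}, "
      f"learning rate = {learning_rate:.1%}")
```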
An illustration of new methods in machine condition monitoring, Part I: stochastic resonance
NASA Astrophysics Data System (ADS)
Worden, K.; Antoniadou, I.; Marchesiello, S.; Mba, C.; Garibaldi, L.
2017-05-01
There have been many recent developments in the application of data-based methods to machine condition monitoring. A powerful methodology based on machine learning has emerged, where diagnostics are based on a two-step procedure: extraction of damage-sensitive features, followed by unsupervised learning (novelty detection) or supervised learning (classification). The objective of the current pair of papers is simply to illustrate one state-of-the-art procedure for each step, using synthetic data representative of reality in terms of size and complexity. The first paper in the pair will deal with feature extraction. Although some papers have appeared in the recent past considering stochastic resonance as a means of amplifying damage information in signals, they have largely relied on ad hoc specifications of the resonator used. In contrast, the current paper will adopt a principled optimisation-based approach to the resonator design. The paper will also show that a discrete dynamical system can provide all the benefits of a continuous system, but also provide a considerable speed-up in terms of simulation time in order to facilitate the optimisation approach.
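To illustrate the underlying stochastic resonance mechanism (not the paper's optimised discrete resonator), the sketch below integrates a bistable system driven by a weak sub-threshold sinusoid plus noise and reports a crude spectral SNR at the forcing frequency for several noise levels; all parameters are generic textbook choices.

```python
# Classic stochastic-resonance demonstration: at a well-chosen noise level
# the inter-well hopping of a bistable system locks onto a weak periodic
# signal, amplifying its spectral line.
import numpy as np

rng = np.random.default_rng(21)
dt, n = 0.01, 100_000
a, b = 1.0, 1.0                 # bistable potential U(x) = -a x^2/2 + b x^4/4
A, f0 = 0.25, 0.08              # weak (sub-threshold) periodic forcing
t = np.arange(n) * dt

def output_snr(D):
    x = np.empty(n)
    x[0] = 1.0
    noise = np.sqrt(2 * D * dt) * rng.normal(size=n)
    for k in range(n - 1):      # Euler-Maruyama integration
        drift = a * x[k] - b * x[k]**3 + A * np.sin(2 * np.pi * f0 * t[k])
        x[k + 1] = x[k] + drift * dt + noise[k]
    spec = np.abs(np.fft.rfft(x))**2
    freqs = np.fft.rfftfreq(n, dt)
    k0 = np.argmin(np.abs(freqs - f0))
    background = np.median(spec[max(k0 - 50, 1): k0 + 50])
    return spec[k0] / background  # crude SNR at the forcing frequency

for D in [0.05, 0.2, 0.6]:
    print(f"noise D = {D}: SNR at f0 ≈ {output_snr(D):.1f}")
```

The non-monotonic dependence of the SNR on the noise intensity D is the resonance signature; the paper's contribution is to choose the resonator by principled optimisation rather than ad hoc tuning.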
Optimising rigid motion compensation for small animal brain PET imaging
NASA Astrophysics Data System (ADS)
Spangler-Bickell, Matthew G.; Zhou, Lin; Kyme, Andre Z.; De Laat, Bart; Fulton, Roger R.; Nuyts, Johan
2016-10-01
Motion compensation (MC) in PET brain imaging of awake small animals is attracting increased attention in preclinical studies since it avoids the confounding effects of anaesthesia and enables behavioural tests during the scan. A popular MC technique is to use multiple external cameras to track the motion of the animal’s head, which is assumed to be represented by the motion of a marker attached to its forehead. In this study we have explored several methods to improve the experimental setup and the reconstruction procedures of this method: optimising the camera-marker separation; improving the temporal synchronisation between the motion tracker measurements and the list-mode stream; post-acquisition smoothing and interpolation of the motion data; and list-mode reconstruction with appropriately selected subsets. These techniques have been tested and verified on measurements of a moving resolution phantom and brain scans of an awake rat. The proposed techniques improved the reconstructed spatial resolution of the phantom by 27% and of the rat brain by 14%. We suggest a set of optimal parameter values to use for awake animal PET studies and discuss the relative significance of each parameter choice.
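As a rough illustration of the post-acquisition smoothing and interpolation step mentioned above, the following Python sketch smooths noisy tracker translations and interpolates them onto list-mode event times. Sampling rates, names and data are assumptions, and rotation handling (e.g. quaternion interpolation) is omitted for brevity.

import numpy as np

rng = np.random.default_rng(1)

def smooth_and_interpolate(track_t, track_xyz, event_t, window=5):
    """Moving-average smoothing of each translation component, then linear
    interpolation onto the list-mode event timestamps."""
    kernel = np.ones(window) / window
    smoothed = np.column_stack(
        [np.convolve(track_xyz[:, i], kernel, mode="same") for i in range(3)])
    return np.column_stack(
        [np.interp(event_t, track_t, smoothed[:, i]) for i in range(3)])

track_t = np.linspace(0.0, 60.0, 1200)                         # assumed 20 Hz tracker
track_xyz = np.cumsum(rng.normal(0, 0.05, (1200, 3)), axis=0)  # drifting head pose [mm]
event_t = np.sort(rng.uniform(0.0, 60.0, 10000))               # list-mode event times [s]
motion_at_events = smooth_and_interpolate(track_t, track_xyz, event_t)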
Holroyd, Kenneth A; Cottrell, Constance K; O'Donnell, Francis J; Cordingley, Gary E; Drew, Jana B; Carlson, Bruce W; Himawan, Lina
2010-09-29
To determine if the addition of preventive drug treatment (β blocker), brief behavioural migraine management, or their combination improves the outcome of optimised acute treatment in the management of frequent migraine. Randomised placebo controlled trial over 16 months from July 2001 to November 2005. Two outpatient sites in Ohio, USA. 232 adults (mean age 38 years; 79% female) with diagnosis of migraine with or without aura according to International Headache Society classification of headache disorders criteria, who recorded at least three migraines with disability per 30 days (mean 5.5 migraines/30 days) during an optimised run-in of acute treatment. Addition of one of four preventive treatments to optimised acute treatment: β blocker (n=53), matched placebo (n=55), behavioural migraine management plus placebo (n=55), or behavioural migraine management plus β blocker (n=69). The primary outcome was change in migraines/30 days; secondary outcomes included change in migraine days/30 days and change in migraine-specific quality of life scores. Mixed model analysis showed statistically significant (P≤0.05) differences in outcomes among the four added treatments for both the primary outcome (migraines/30 days) and the two secondary outcomes (change in migraine days/30 days and change in migraine-specific quality of life scores). The addition of combined β blocker and behavioural migraine management (-3.3 migraines/30 days, 95% confidence interval -3.2 to -3.5), but not the addition of β blocker alone (-2.1 migraines/30 days, -1.9 to -2.2) or behavioural migraine management alone (-2.2 migraines/30 days, -2.0 to -2.4), improved outcomes compared with optimised acute treatment alone (-2.1 migraines/30 days, -1.9 to -2.2). For a clinically significant reduction (≥50%) in migraines/30 days, the number needed to treat for optimised acute treatment plus combined β blocker and behavioural migraine management was 3.1 compared with optimised acute treatment alone, 2.6 compared with optimised acute treatment plus β blocker, and 3.1 compared with optimised acute treatment plus behavioural migraine management. Results were consistent for the two secondary outcomes, and at both month 10 (the primary endpoint) and month 16. The addition of combined β blocker plus behavioural migraine management, but not the addition of β blocker alone or behavioural migraine management alone, improved outcomes of optimised acute treatment. Combined β blocker treatment and behavioural migraine management may improve outcomes in the treatment of frequent migraine. Clinical trials NCT00910689.
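For context, a number needed to treat (NNT) of the kind quoted above is the reciprocal of the absolute difference in responder proportions. The minimal Python sketch below reproduces the arithmetic with assumed rates; the trial's actual responder proportions are not restated in the abstract.

# Assumed responder proportions, chosen only to reproduce the quoted figure.
p_combined = 0.77    # >=50% reduction on optimised acute treatment + beta blocker + BMM
p_acute_only = 0.45  # >=50% reduction on optimised acute treatment alone
nnt = 1.0 / (p_combined - p_acute_only)
print(round(nnt, 1))  # 1/0.32 = 3.1, matching the reported value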
Alvarez-Ros, Margarita Clara; Palafox, Mauricio Alcolea
2014-01-01
The five tautomers of the drug acyclovir (ACV) were determined and optimised at the MP2 and B3LYP quantum chemical levels of theory. The stability of the tautomers was correlated with different parameters. A comprehensive conformational analysis was carried out on the most stable tautomer, N1, and the full set of conformational parameters (R, β, Φ, φ1, φ2, φ3, φ4, φ5) was studied, as well as the NBO natural atomic charges. The calculations were carried out with full relaxation of all geometrical parameters. The search located at least 78 stable structures within an 8.5 kcal/mol electronic energy range of the global minimum, classified into two groups according to the positive or negative value of the torsional angle φ1. The nitrogen atoms and the O2' and O5' oxygen atoms of the most stable conformer show a higher reactivity than in the natural nucleoside deoxyguanosine. The solid state was simulated through dimer and tetramer forms, and the structural parameters were compared with the available X-ray crystal data. Several general conclusions were emphasised. PMID:24915059
Rushton, A; White, L; Heap, A; Heneghan, N; Goodwin, P
2016-01-01
Objectives To develop an optimised 1:1 physiotherapy intervention that reflects best practice, with flexibility to tailor management to individual patients, thereby ensuring patient-centred practice. Design Mixed methods combining evidence synthesis, expert review and focus groups. Setting Secondary care involving 5 UK specialist spinal centres. Participants A purposive panel of clinical experts from the 5 spinal centres, comprising spinal surgeons and inpatient and outpatient physiotherapists, provided expert review of the draft intervention. Purposive samples of patients (n=10) and physiotherapists (n=10) (inpatient/outpatient physiotherapists managing patients with lumbar discectomy) were invited to participate in the focus groups at 1 spinal centre. Methods A draft intervention, developed from 2 systematic reviews, a survey of current practice and research related to stratified care, was circulated to the panel of clinical experts. Lead physiotherapists collaborated with physiotherapy and surgeon colleagues to provide feedback that informed the intervention presented at 2 focus groups investigating acceptability to patients and physiotherapists. The focus groups were facilitated by an experienced facilitator and recorded in written and tape-recorded forms by an observer. Tape recordings were transcribed verbatim. Data analysis, conducted by 2 independent researchers, employed an iterative and constant comparative process of (1) initial descriptive coding to identify categories and subsequent themes, and (2) deeper, interpretive coding and thematic analysis enabling concepts to emerge and overarching pattern codes to be identified. Results The intervention reflected the best available evidence and provided flexibility to ensure patient-centred care. The intervention comprised up to 8 sessions of 1:1 physiotherapy over 8 weeks, starting 4 weeks post surgery. The intervention was acceptable to patients and physiotherapists. Conclusions A rigorous process informed an optimised 1:1 physiotherapy intervention post-lumbar discectomy that reflects best practice. The developed intervention was agreed on by the 5 spinal centres for implementation in a randomised controlled trial to evaluate its effectiveness. PMID:26916690
NASA Astrophysics Data System (ADS)
Zimmerling, Clemens; Dörr, Dominik; Henning, Frank; Kärger, Luise
2018-05-01
Due to their high mechanical performance, continuous fibre reinforced plastics (CoFRP) are becoming increasingly important for load-bearing structures. In many cases, manufacturing CoFRPs comprises a forming process of textiles. To predict and optimise the forming behaviour of a component, numerical simulations are applied. However, for maximum part quality, both the geometry and the process parameters must match in mutual regard, which in turn requires numerous numerically expensive optimisation iterations. In both textile and metal forming, much research has focused on determining optimum process parameters whilst regarding the geometry as invariable. In this work, a meta-model-based approach at component level is proposed that provides a rapid estimation of the formability for variable geometries based on pre-sampled, physics-based draping data. Initially, a geometry recognition algorithm scans the geometry and extracts a set of doubly-curved regions with relevant geometry parameters. If the relevant parameter space is not part of an underlying database, additional samples are drawn via Finite-Element draping simulations according to a suitable design table for computer experiments. Time-saving parallel runs of the physical simulations accelerate the data acquisition. Ultimately, a Gaussian regression meta-model is built from the database. The method is demonstrated on a box-shaped generic structure. The predicted results are in good agreement with physics-based draping simulations. Since evaluations of the established meta-model are numerically inexpensive, any further design exploration (e.g. robustness analysis or design optimisation) can be performed in a short time. It is expected that the proposed method also offers great potential for future applications along virtual process chains: for each process step along the chain, a meta-model can be set up to predict the impact of design variations on manufacturability and part performance. Thus, the method is considered to facilitate a lean and economic part and process design under consideration of manufacturing effects.
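A minimal sketch of such a surrogate follows, assuming each doubly-curved region is reduced to two geometry parameters and a scalar formability measure pre-sampled by draping simulations. All names and values are illustrative, and scikit-learn's Gaussian process regressor is used as a stand-in for the paper's meta-model.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Pre-sampled "draping data": geometry parameters -> formability measure.
X = np.array([[r, d] for r in (10.0, 20.0, 40.0) for d in (5.0, 15.0, 25.0)])  # radius, depth [mm]
y = 55.0 - 0.4 * X[:, 0] + 0.9 * X[:, 1]    # stand-in for simulated max shear angle [deg]

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[10.0, 10.0]),
                              normalize_y=True)
gp.fit(X, y)
mean, std = gp.predict(np.array([[25.0, 12.0]]), return_std=True)  # fast estimate + uncertainty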
Design and experimental validation of linear and nonlinear vehicle steering control strategies
NASA Astrophysics Data System (ADS)
Menhour, Lghani; Lechner, Daniel; Charara, Ali
2012-06-01
This paper proposes the design of three control laws dedicated to vehicle steering control, two based on robust linear control strategies and one based on a nonlinear control strategy, and presents a comparison between them. The two robust linear control laws (indirect and direct methods) are built around M linear bicycle models; each of these control laws is composed of 2M proportional integral derivative (PID) controllers: M PID controllers to control the lateral deviation and M PID controllers to control the vehicle yaw angle. The indirect control law is designed using an oscillation method and a nonlinear optimisation subject to an H∞ constraint. The direct control law is designed using linear matrix inequality optimisation in order to achieve H∞ performance. The nonlinear control method used for the correction of the lateral deviation is based on a continuous first-order sliding-mode controller. The different methods are designed using a linear bicycle vehicle model with varying parameters, but the aim is to simulate the nonlinear vehicle behaviour under high dynamic demands with a four-wheel vehicle model. These steering controls are validated experimentally using data acquired with a laboratory vehicle, a Peugeot 307, developed by the National Institute for Transport and Safety Research - Accident Mechanism Analysis Laboratory (INRETS-MA), and their performance results are compared. Moreover, an unknown-input sliding-mode observer is introduced to estimate the road bank angle.
Women's health: a new global agenda
Peters, Sanne A E; Woodward, Mark; Jha, Vivekanand; Kennedy, Stephen; Norton, Robyn
2016-01-01
Global efforts to improve the health of women largely focus on improving sexual and reproductive health. However, the global burden of disease has changed significantly over the past decades. Currently, the greatest burden of death and disability among women is attributable to non-communicable diseases (NCDs), most notably cardiovascular diseases, cancers, respiratory diseases, diabetes, dementia, depression and musculoskeletal disorders. Hence, to improve the health of women most efficiently, adequate resources need to be allocated to the prevention, management and treatment of NCDs in women. Such an approach could reduce the burden of NCDs among women and also has the potential to improve women's sexual and reproductive health, which commonly shares similar behavioural, biological, social and cultural risk factors. Historically, most medical research was conducted in men and the findings from such studies were assumed to be equally applicable to women. Sex differences and gender disparities in health and disease have therefore long been unknown and/or ignored. Since the number of women in studies is increasing, evidence for clinically meaningful differences between men and women across all areas of health and disease has emerged. Systematic evaluation of such differences between men and women could improve the understanding of diseases, as well as inform health practitioners and policymakers in optimising preventive strategies to reduce the global burden of disease more efficiently in women and men. PMID:28588958
NASA Astrophysics Data System (ADS)
Humpage, Neil; Bösch, Hartmut; Palmer, Paul I.; Parr-Burman, Phil M.; Vick, Andrew J. A.; Bezawada, Naidu N.; Black, Martin; Born, Andrew J.; Pearson, David; Strachan, Jonathan; Wells, Martyn
2014-10-01
The tropospheric distribution of greenhouse gases (GHGs) depends on surface flux variations, atmospheric chemistry and transport processes over a range of spatial and temporal scales. Accurate and precise atmospheric concentration observations of GHGs can be used to infer surface flux estimates, though their interpretation relies on unbiased atmospheric transport models. GHOST is a novel, compact shortwave infrared spectrometer which will observe tropospheric columns of CO2, CO, CH4 and H2O (along with the HDO/H2O ratio) during deployment on board the NASA Global Hawk unmanned aerial vehicle. The primary science objectives of GHOST are to: 1) test atmospheric transport models; 2) evaluate satellite observations of GHG column observations over oceans; and 3) complement in-situ tropopause transition layer observations from other Global Hawk instruments. GHOST comprises a target acquisition module (TAM), a fibre slicer and feed system, and a multiple order spectrograph. The TAM is programmed to direct solar radiation reflected by the ocean surface into a fibre optic bundle. Incoming light is then split into four spectral bands, selected to optimise remote observations of GHGs. The design uses a single grating and detector for all four spectral bands. We summarise the GHOST concept and its objectives, and describe the instrument design and proposed deployment aboard the Global Hawk platform.
The Aqua-Planet Experiment (APE): CONTROL SST Simulation
NASA Technical Reports Server (NTRS)
Blackburn, Michael; Williamson, David L.; Nakajima, Kensuke; Ohfuchi, Wataru; Takahashi, Yoshiyuki O.; Hayashi, Yoshi-Yuki; Nakamura, Hisashi; Ishiwatari, Masaki; Mcgregor, John L.; Borth, Hartmut;
2013-01-01
Climate simulations by 16 atmospheric general circulation models (AGCMs) are compared on an aqua-planet, a water-covered Earth with prescribed sea surface temperature varying only in latitude. The idealised configuration is designed to expose differences in the circulation simulated by different models. Basic features of the aqua-planet climate are characterised by comparison with Earth. The models display a wide range of behaviour. The balanced component of the tropospheric mean flow, and mid-latitude eddy covariances subject to budget constraints, vary relatively little among the models. In contrast, differences in damping in the dynamical core strongly influence transient eddy amplitudes. Historical uncertainty in modelled lower stratospheric temperatures persists in APE. Aspects of the circulation generated more directly by interactions between the resolved fluid dynamics and parameterized moist processes vary greatly. The tropical Hadley circulation forms either a single or double inter-tropical convergence zone (ITCZ) at the equator, with large variations in mean precipitation. The equatorial wave spectrum shows a wide range of precipitation intensity and propagation characteristics. Kelvin mode-like eastward propagation with remarkably constant phase speed dominates in most models. Westward propagation, less dispersive than the equatorial Rossby modes, dominates in a few models or occurs within an eastward propagating envelope in others. The mean structure of the ITCZ is related to precipitation variability, consistent with previous studies. The aqua-planet global energy balance is unknown but the models produce a surprisingly large range of top of atmosphere global net flux, dominated by differences in shortwave reflection by clouds. A number of newly developed models, not optimised for Earth climate, contribute to this. Possible reasons for differences in the optimised models are discussed. The aqua-planet configuration is intended as one component of an experimental hierarchy used to evaluate AGCMs. This comparison does suggest that the range of model behaviour could be better understood and reduced in conjunction with Earth climate simulations. Controlled experimentation is required to explore individual model behaviour and investigate convergence of the aqua-planet climate with increasing resolution.
NASA Astrophysics Data System (ADS)
Sánchez, D.; Muñoz de Escalona, J. M.; Monje, B.; Chacartegui, R.; Sánchez, T.
This article presents a novel proposal for complex hybrid systems comprising high temperature fuel cells and thermal engines. In this case, the system is composed of a molten carbonate fuel cell with a cascaded hot air turbine and Organic Rankine Cycle (ORC), a layout that is based on subsequent waste heat recovery for additional power production. The work shows that it is possible to achieve 60% efficiency even if the fuel cell operates at atmospheric pressure. The first part of the analysis focuses on selecting the working fluid of the Organic Rankine Cycle. After a thermodynamic optimisation, toluene turns out to be the most efficient fluid in terms of cycle performance. However, the performance of the heat recovery vapour generator (HRVG) proves equally important, which makes R245fa the most interesting fluid: its balanced thermal and HRVG efficiencies yield the highest global bottoming-cycle efficiency. When this fluid is employed in the compound system, conservative operating conditions permit 60% global system efficiency to be achieved, accomplishing the initial objective of the work. A simultaneous optimisation of the gas turbine (pressure ratio) and ORC (live vapour pressure) is then presented, to check whether the previous results are improved or whether the fluid of choice must be replaced. Ultimately, although system performance improves for some fluids, it is concluded that (i) R245fa is the most efficient fluid and (ii) the operating conditions considered in the previous analysis are still valid. The work concludes with an assessment of safety-related aspects of using hydrocarbons in the system. Flammability is studied, showing that R245fa is the most interesting fluid also in this regard due to its inert behaviour, as opposed to the other fluids under consideration, all of which are highly flammable.
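A back-of-envelope Python sketch of the cascaded waste-heat-recovery idea, under the idealised assumption that each stage's reject heat fully feeds the next stage and with invented stage efficiencies (not the paper's values), shows how roughly 60% overall efficiency can arise:

eta_fc, eta_gt, eta_orc = 0.47, 0.18, 0.12  # assumed stage efficiencies
eta_total = eta_fc + (1 - eta_fc) * eta_gt + (1 - eta_fc) * (1 - eta_gt) * eta_orc
print(round(eta_total, 3))                  # about 0.618, i.e. ~60% overall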
Ebels, Kelly; Faulx, Dunia; Gerth-Guyette, Emily; Murunga, Peninah; Mahapatro, Samarendra; Das, Manoja Kumar; Ginsburg, Amy Sarah
2016-01-01
Pneumonia is the leading cause of death from infection in children worldwide. Despite global treatment recommendations that call for children with pneumonia to receive amoxicillin dispersible tablets, only one-third of children with pneumonia receive any antibiotics and many do not complete the full course of treatment. Poor adherence to antibiotics may be driven in part by a lack of user-friendly treatment instructions. In order to optimise childhood pneumonia treatment adherence at the community level, we developed a user-friendly product presentation for caregivers and a job aid for healthcare providers (HCPs). This paper aims to document the development process and offers a model for future health communication tools. We employed an iterative design process that included document review, key stakeholder interviews, engagement with a graphic designer and pre-testing of design concepts among target users in India and Kenya. The consolidated criteria for reporting qualitative research were used in the description of results. Though resources for pneumonia treatment are available in some countries, their content is incomplete and inconsistent with global recommendations. Document review and stakeholder interviews identified the information that needed to be conveyed to caregivers and yielded recommendations for how to present it. Target users in India and Kenya confirmed the need to support better treatment adherence, recommended specific modifications to design concepts and suggested the development of a companion job aid. There was a consensus among caregivers and HCPs that these tools would be helpful and improve adherence behaviours. The development of user-friendly instructions for medications for use in low-resource settings is a critically important but time-intensive and resource-intensive process that should involve engagement with target audiences. Published by the BMJ Publishing Group Limited.
Optimization of microwave-assisted extraction of polyphenols from Myrtus communis L. leaves.
Dahmoune, Farid; Nayak, Balunkeswar; Moussi, Kamal; Remini, Hocine; Madani, Khodir
2015-01-01
Phytochemicals, such as phenolic compounds, are of great interest due to their health-benefitting antioxidant properties and possible protection against inflammation, cardiovascular diseases and certain types of cancer. Maximum retention of these phytochemicals during extraction requires optimised process parameter conditions. A microwave-assisted extraction (MAE) method was investigated for the extraction of total phenolics from Myrtus communis leaves. The total phenolic content (TPC) of leaf extracts at optimised MAE conditions was compared with ultrasound-assisted extraction (UAE) and conventional solvent extraction (CSE). The influence of extraction parameters, including ethanol concentration, microwave power, irradiation time and solvent-to-solid ratio, on the extraction of TPC was modelled using a second-order regression equation. The optimal MAE conditions were 42% ethanol concentration, 500 W microwave power, 62 s irradiation time and a 32 mL/g solvent-to-material ratio. Ethanol concentration and liquid-to-solid ratio were the significant parameters for the extraction process (p<0.01). Under the optimised MAE conditions, the recovery of TPC was 162.49 ± 16.95 mg gallic acid equivalent per g dry weight (GAE/g DW), approximating the predicted content (166.13 mg GAE/g DW). When bioactive phytochemicals extracted from Myrtus leaves using MAE were compared with those from UAE and CSE, it was also observed that tannins (32.65 ± 0.01 mg/g), total flavonoids (5.02 ± 0.05 mg QE/g) and antioxidant activities (38.20 ± 1.08 μg GAE/mL) were higher in the MAE extracts than in the other two. These findings further illustrate that extraction of bioactive phytochemicals from plant materials using the MAE method consumes less extraction solvent and saves time. Copyright © 2014 Elsevier Ltd. All rights reserved.
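The second-order regression model named above can be sketched in a few lines of Python; the factor ranges and TPC responses below are synthetic placeholders, not the paper's data, and serve only to show how a full quadratic response surface is fitted by least squares.

import numpy as np
from itertools import combinations_with_replacement

def quadratic_design(X):
    """Design matrix with columns 1, x_i, and x_i*x_j (i <= j): a full second-order model."""
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(X.shape[1])]
    cols += [X[:, i] * X[:, j]
             for i, j in combinations_with_replacement(range(X.shape[1]), 2)]
    return np.column_stack(cols)

rng = np.random.default_rng(3)
# factors: ethanol [%], power [W], time [s], solvent:solid ratio [mL/g]
X = rng.uniform([20, 300, 30, 10], [80, 700, 120, 50], (30, 4))
y = 160.0 - 0.02 * (X[:, 0] - 42.0) ** 2 + 0.05 * X[:, 3]   # synthetic TPC response
beta, *_ = np.linalg.lstsq(quadratic_design(X), y, rcond=None)  # fitted coefficients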
Metric optimisation for analogue forecasting by simulated annealing
NASA Astrophysics Data System (ADS)
Bliefernicht, J.; Bárdossy, A.
2009-04-01
It is well known that weather patterns tend to recur from time to time. This property of the atmosphere is used by analogue forecasting techniques. They have a long history in weather forecasting and there are many applications predicting hydrological variables at the local scale for different lead times. The basic idea of the technique is to identify past weather situations which are similar (analogue) to the predicted one and to take the local conditions of the analogues as the forecast. But the forecast performance of the analogue method depends on user-defined criteria such as the choice of the distance function and the size of the predictor domain. In this study we propose a new methodology of optimising both criteria by minimising the forecast error with simulated annealing. The performance of the methodology is demonstrated for the probability forecast of daily areal precipitation. It is compared with a traditional analogue forecasting algorithm, which is used operationally as an element of a hydrological forecasting system. The study is performed for several meso-scale catchments located in the Rhine basin in Germany. The methodology is validated by a jack-knife method in a perfect prognosis framework for a period of 48 years (1958-2005). The predictor variables are derived from the NCEP/NCAR reanalysis data set. The Brier skill score and the economic value are determined to evaluate the forecast skill and value of the technique. We will present the concept of the optimisation algorithm and the outcome of the comparison, and demonstrate how a decision-maker should apply a probability forecast to maximise its economic benefit.
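A conceptual Python sketch of the proposal follows: simulated annealing over the weights of a weighted Euclidean distance used to select analogue days, minimising a forecast error score. Data, scoring and neighbourhood choices are illustrative assumptions, not the authors' configuration.

import numpy as np

rng = np.random.default_rng(4)
fields = rng.normal(size=(600, 8))            # 8 predictor features per day
rain = np.abs(fields[:, 0] + 0.5 * fields[:, 3] + 0.3 * rng.normal(size=600))

def score(w):
    """MSE of nearest-analogue precipitation forecasts under distance weights w."""
    err = 0.0
    for day in range(500, 600):               # verification period
        d2 = ((fields[:500] - fields[day]) ** 2 * w).sum(axis=1)
        err += (rain[np.argmin(d2)] - rain[day]) ** 2
    return err / 100.0

w = np.ones(8)
cur, T = score(w), 1.0
for _ in range(300):
    cand = np.clip(w + rng.normal(0.0, 0.1, 8), 0.0, None)  # perturb weights
    e = score(cand)
    if e < cur or rng.random() < np.exp((cur - e) / T):     # Metropolis acceptance
        w, cur = cand, e
    T *= 0.99                                 # geometric cooling schedule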
Sadler, Euan; Fisher, Helen R.; Maher, John; Wolfe, Charles D. A.; McKevitt, Christopher
2016-01-01
Introduction Translational research is central to international health policy, research and funding initiatives. Despite increasing use of the term, the translation of basic science discoveries into clinical practice is not straightforward. This systematic search and narrative synthesis aimed to examine factors enabling or hindering translational research from the perspective of basic and clinician scientists, a key stakeholder group in translational research, and to draw policy-relevant implications for organisations seeking to optimise translational research opportunities. Methods and Results We searched SCOPUS and Web of Science from inception until April 2015 for papers reporting scientists' views of the factors they perceive as enabling or hindering the conduct of translational research. We screened 8,295 papers from electronic database searches and 20 papers from hand searches and citation tracking, identifying 26 studies of qualitative, quantitative or mixed method designs. We used a narrative synthesis approach and identified the following themes: 1) differing concepts of translational research; 2) research processes as a barrier to translational research; 3) perceived cultural divide between research and clinical care; 4) interdisciplinary collaboration as enabling translational research, but dependent on the quality of prior and current social relationships; 5) translational research as entrepreneurial science. Across all five themes, factors enabling or hindering translational research were largely shaped by wider social, organisational, and structural factors. Conclusion To optimise translational research, policy could consider refining translational research models to better reflect scientists' experiences, fostering greater collaboration and buy-in from all types of scientists. Organisations could foster cultural change, ensuring that organisational practices and systems keep pace with the change in knowledge production brought about by the translational research agenda. PMID:27490373
The development and optimisation of 3D black-blood R2* mapping of the carotid artery wall.
Yuan, Jianmin; Graves, Martin J; Patterson, Andrew J; Priest, Andrew N; Ruetten, Pascal P R; Usman, Ammara; Gillard, Jonathan H
2017-12-01
To develop and optimise a 3D black-blood R2* mapping sequence for imaging the carotid artery wall, using optimal blood suppression and k-space view ordering. Two different blood suppression preparation methods were used: Delay Alternating with Nutation for Tailored Excitation (DANTE) and improved Motion Sensitive Driven Equilibrium (iMSDE), each combined with a three-dimensional (3D) multi-echo Fast Spoiled GRadient echo (ME-FSPGR) readout. Three different k-space view-order designs were investigated: Radial Fan-beam Encoding Ordering (RFEO), Distance-Determined Encoding Ordering (DDEO) and Centric Phase Encoding Order (CPEO). The sequences were evaluated through Bloch simulation and in a cohort of twenty volunteers. The vessel wall Signal-to-Noise Ratio (SNR), Contrast-to-Noise Ratio (CNR) and R2*, and the sternocleidomastoid muscle R2*, were measured and compared. Different numbers of acquisitions-per-shot (APS) were evaluated to further optimise the effectiveness of blood suppression. All sequences resulted in R2* measurements in the sternocleidomastoid muscle comparable to a conventional, i.e. non-blood-suppressed, sequence. Both Bloch simulations and volunteer data showed that DANTE has a higher signal intensity and results in a higher image SNR than iMSDE. Blood suppression efficiency was not significantly different between the k-space view orders. Smaller APS achieved better blood suppression. The use of blood-suppression preparation methods does not affect the measurement of R2*. A DANTE-prepared ME-FSPGR sequence with a small number of acquisitions-per-shot can provide high-quality black-blood R2* measurements of the carotid vessel wall. Copyright © 2017 Elsevier Inc. All rights reserved.
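As background to such mapping, a voxel's multi-echo magnitude signal is commonly modelled as S(TE) = S0·exp(-R2*·TE), so R2* can be estimated as the negative slope of log(S) against echo time. A hedged Python sketch with assumed echo times (not the paper's protocol):

import numpy as np

TE = np.array([2.0, 5.0, 8.0, 11.0, 14.0]) / 1000.0   # assumed echo times [s]
rng = np.random.default_rng(5)
S = 800.0 * np.exp(-45.0 * TE) * (1.0 + 0.01 * rng.normal(size=TE.size))  # one voxel

slope, intercept = np.polyfit(TE, np.log(S), 1)       # log-linear fit
R2_star = -slope                                      # recovers ~45 s^-1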
Chisholm, Rebecca H; Lorenzi, Tommaso; Clairambault, Jean
2016-11-01
Drug-induced drug resistance in cancer has been attributed to diverse biological mechanisms at the individual cell or cell population scale, relying on stochastically or epigenetically varying expression of phenotypes at the single-cell level and on the adaptability of tumours at the cell population level. We focus on intra-tumour heterogeneity, namely between-cell variability within cancer cell populations, to account for drug resistance. To shed light on such heterogeneity, we review evolutionary mechanisms that encompass the great evolution that has designed multicellular organisms, as well as smaller windows of evolution on the time scale of human disease. We also present mathematical models used to predict drug resistance in cancer and optimal control methods that can circumvent it in combined therapeutic strategies. Plasticity in cancer cells, i.e., partial reversal to a stem-like status in individual cells and the resulting adaptability of cancer cell populations, may be viewed as backward evolution making cancer cell populations resistant to drug insult. This reversible plasticity is captured by mathematical models that incorporate between-cell heterogeneity through continuous phenotypic variables. Such models have the benefit of being compatible with optimal control methods for the design of optimised therapeutic protocols involving combinations of cytotoxic and cytostatic treatments with epigenetic drugs and immunotherapies. Gathering knowledge from cancer and evolutionary biology with physiologically based mathematical models of cell population dynamics should provide oncologists with a rationale to design optimised therapeutic strategies to circumvent drug resistance, which still remains a major pitfall of cancer therapeutics. This article is part of a Special Issue entitled "System Genetics", Guest Editors: Dr. Yudong Cai and Dr. Tao Huang. Copyright © 2016 Elsevier B.V. All rights reserved.
Boundy, Michael J; Selwood, Andrew I; Harwood, D Tim; McNabb, Paul S; Turner, Andrew D
2015-03-27
Routine regulatory monitoring of paralytic shellfish toxins (PST) commonly employs oxidative derivatisation and complex liquid chromatography fluorescence detection methods (LC-FL). The pre-column oxidation LC-FL method is currently implemented in New Zealand and the United Kingdom. When using this method, positive samples are fractionated and two different oxidations are required to confirm the identity and quantity of each PST analogue present. There is a need for alternative methods that are simpler, provide faster turnaround times and have improved detection limits. Hydrophilic interaction liquid chromatography (HILIC) HPLC-MS/MS analysis of PST has been used for research purposes, but high detection limits and substantial sample matrix issues have prevented it from becoming a viable alternative for routine monitoring. We have developed a HILIC UPLC-MS/MS method for paralytic shellfish toxins with an optimised desalting clean-up procedure on inexpensive carbon solid-phase extraction cartridges for the reduction of matrix interferences. This represents a major technical breakthrough and allows sensitive, selective and rapid analysis of paralytic shellfish toxins from a variety of sample types, including many commercially produced bivalve molluscan shellfish species. Additionally, this analytical approach avoids the need for complex calculations to determine sample toxicity since, unlike in other methods, each PST analogue is quantified as a single resolved peak. This article presents the method development and optimisation information. A thorough single-laboratory validation study has subsequently been performed and those data will be presented elsewhere. Copyright © 2015 Elsevier B.V. All rights reserved.
Van Dyk, Jacob; Zubizarreta, Eduardo; Lievens, Yolande
2017-11-01
With increasing recognition of growing cancer incidence globally, efficient means of expanding radiotherapy capacity are imperative, and understanding the factors impacting human and financial needs is valuable. A time-driven activity-based costing analysis was performed, using a base case of 2-machine departments with defined cost inputs and operating parameters. Four income groups were analysed, ranging from low to high income. Scenario analyses included department size, operating hours, fractionation, treatment complexity, efficiency, and centralised versus decentralised care. The base case cost/course is US$5,368 in high-income countries (HICs) and US$2,028 in low-income countries (LICs); the annual operating cost is US$4,595,000 and US$1,736,000, respectively. Economies of scale show the cost/course decreasing with increasing department size, mainly related to the equipment cost and most prominent up to 3 linacs. The cost in HICs is two or three times as high as in upper-middle-income countries (U-MICs) or LICs, respectively. Decreasing operating hours below 8 h/day has a dramatic impact on the cost/course. IMRT increases the cost/course by 22%. Centralising preparatory activities has a moderate impact on the costs. The results indicate trends that are useful for optimising local and regional circumstances. This methodology can provide input into a uniform and accepted approach to evaluating the cost of radiotherapy. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
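Time-driven activity-based costing reduces to simple arithmetic: divide each resource's annual cost by its practical capacity to get a cost rate, then weight a course's resource minutes by those rates. A Python sketch with invented figures (not the paper's inputs):

# All figures are invented; structure only.
annual_cost = {"linac": 300_000.0, "staff": 900_000.0, "building": 150_000.0}
capacity_min = {"linac": 2 * 8 * 60 * 250,     # 2 machines, 8 h/day, 250 days/yr
                "staff": 10 * 7.5 * 60 * 220,  # 10 FTE at 7.5 h/day, 220 days/yr
                "building": 2 * 8 * 60 * 250}
rate = {k: annual_cost[k] / capacity_min[k] for k in annual_cost}  # cost per minute

minutes_per_course = {"linac": 20 * 12, "staff": 30 * 12 + 120, "building": 20 * 12}
cost_per_course = sum(rate[k] * minutes_per_course[k] for k in rate)
print(round(cost_per_course, 2))               # ~886 with these inputs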
Carl, Christina; Poole, Andrew J.; Williams, Mike R.; de Nys, Rocky
2012-01-01
The global mussel aquaculture industry uses specialised spat-catching and nursery culture ropes made of multi-filament synthetic and natural fibres to optimise the settlement and retention of mussels for on-growing. However, the settlement ecology and preferences of mussels are poorly understood and only sparse information exists in a commercial context. This study quantified the settlement preferences of pediveligers and plantigrades of Mytilus galloprovincialis on increasingly complex surfaces, and settlement locations at a micro spatial scale on and within ropes, under commercial hatchery operating conditions using optical microscopy and X-ray micro-computed tomography (µCT). M. galloprovincialis has clear settlement preferences for more complex materials and high selectivity for settlement sites from the pediveliger through to the plantigrade stage. Pediveligers of M. galloprovincialis initially settle inside specialised culture ropes. Larger pediveligers were located close to the exterior of the ropes as they increased in size over time. In contrast, smaller individuals were located deeper inside the ropes over time. This study demonstrates that X-ray µCT is an excellent non-destructive technique for mapping the settlement and attachment sites of individuals as early as one day post settlement, and quantifies the number and location of settled individuals on and within ropes as a tool to understand and optimise settlement in complex multi-dimensional materials and environments. PMID:23251710
Person-centred medicines optimisation policy in England: an agenda for research on polypharmacy.
Heaton, Janet; Britten, Nicky; Krska, Janet; Reeve, Joanne
2017-01-01
Aim To examine how patient perspectives and person-centred care values have been represented in documents on medicines optimisation policy in England. There has been growing support in England for a policy of medicines optimisation as a response to the rise of problematic polypharmacy. Conceptually, medicines optimisation differs from the medicines management model of prescribing in being based around the patient rather than processes and systems. This critical examination of current official and independent policy documents questions how central the patient is in them and whether relevant evidence has been utilised in their development. A documentary analysis of reports on medicines optimisation published by the Royal Pharmaceutical Society (RPS), The King's Fund and the National Institute for Health and Care Excellence since 2013. The analysis draws on a non-systematic review of research on patient experiences of using medicines. Findings The reports varied in their inclusion of patient perspectives and person-centred care values, and in the extent to which they drew on evidence from research on patients' experiences of polypharmacy and medicines use. In the RPS report, medicines optimisation is represented as being a 'step change' from medicines management, in contrast to the other documents, which suggest that it is facilitated by the systems and processes that comprise the latter model. Only The King's Fund report considered evidence from qualitative studies of people's use of medicines. However, these studies are not without their limitations. We suggest five ways in which researchers could improve this evidence base and so inform the development of future policy: by facilitating reviews of existing research; conducting studies of patient experiences of polypharmacy and multimorbidity; evaluating medicines optimisation interventions; making better use of relevant theories, concepts and tools; and improving patient and public involvement in research and in guideline development.
Computational optimisation of targeted DNA sequencing for cancer detection
NASA Astrophysics Data System (ADS)
Martinez, Pierre; McGranahan, Nicholas; Birkbak, Nicolai Juul; Gerlinger, Marco; Swanton, Charles
2013-12-01
Despite recent progress thanks to next-generation sequencing technologies, personalised cancer medicine is still hampered by intra-tumour heterogeneity and drug resistance. As most patients with advanced metastatic disease face poor survival, there is a need to improve early diagnosis. Analysing circulating tumour DNA (ctDNA) might represent a non-invasive method to detect mutations in patients, facilitating early detection. In this article, we define reduced gene panels from publicly available datasets as a first step to assess and optimise the potential of targeted ctDNA scans for early tumour detection. Dividing 4,467 samples into one discovery and two independent validation cohorts, we show that up to 76% of 10 cancer types harbour at least one mutation in a panel of only 25 genes, with high sensitivity across most tumour types. Our analyses demonstrate that targeting "hotspot" regions would introduce biases towards in-frame mutations and would compromise the reproducibility of tumour detection.
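One simple way to build such a reduced panel is a greedy max-coverage heuristic, sketched below in Python on synthetic mutation data; the paper's exact selection procedure may differ.

import numpy as np

rng = np.random.default_rng(6)
mutated = rng.random((4467, 300)) < 0.02     # synthetic samples x candidate genes

panel, covered = [], np.zeros(mutated.shape[0], dtype=bool)
while len(panel) < 25:
    gains = (mutated & ~covered[:, None]).sum(axis=0)  # newly covered samples per gene
    g = int(np.argmax(gains))
    if gains[g] == 0:
        break
    panel.append(g)
    covered |= mutated[:, g]
print(len(panel), covered.mean())            # panel size, fraction of samples detectable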
Hasler, B; Delabouglise, A; Babo Martins, S
2017-04-01
The primary role of animal health economics is to inform decision-making by determining optimal investments for animal health. Animal health surveillance produces information to guide interventions. Consequently, investments in surveillance and intervention must be evaluated together. This article explores the different theoretical frameworks and methods developed to assess and optimise the spending of resources in surveillance and intervention and their technical interdependence. The authors present frameworks that define the relationship between health investment and losses due to disease, and the relationship between surveillance and intervention resources. Surveillance and intervention are usually considered as technical substitutes, since increased investments in surveillance reduce the level of intervention resources required to reach the same benefit. The authors also discuss approaches used to quantify externalities and non-monetary impacts. Finally, they describe common economic evaluation types, including optimisation, acceptability and least-cost studies.
McLean, Ben; Eveleens, Clothilde A; Mitchell, Izaac; Webber, Grant B; Page, Alister J
2017-10-11
Low-dimensional carbon and boron nitride nanomaterials - hexagonal boron nitride, graphene, boron nitride nanotubes and carbon nanotubes - remain at the forefront of advanced materials research. Catalytic chemical vapour deposition (CVD) has become an invaluable technique for reliably and cost-effectively synthesising these materials. In this review, we will emphasise how a synergy between experimental and theoretical methods has enhanced the understanding and optimisation of this synthetic technique. The review examines recent advances in the application of CVD to synthesising boron nitride and carbon nanomaterials and highlights where, in many cases, molecular simulations and quantum chemistry have provided key insights complementary to experimental investigation. This synergy is particularly prominent in the field of carbon nanotube and graphene CVD synthesis, and we propose here that it will be the key to future advances in the optimisation of CVD synthesis of boron nitride nanomaterials, boron nitride-carbon composite materials, and other nanomaterials generally.
Suppressing unsteady flow in arterio-venous fistulae
NASA Astrophysics Data System (ADS)
Grechy, L.; Iori, F.; Corbett, R. W.; Shurey, S.; Gedroyc, W.; Duncan, N.; Caro, C. G.; Vincent, P. E.
2017-10-01
Arterio-Venous Fistulae (AVF) are regarded as the "gold standard" method of vascular access for patients with end-stage renal disease who require haemodialysis. However, a large proportion of AVF do not mature, and hence fail, as a result of various pathologies such as Intimal Hyperplasia (IH). Unphysiological flow patterns, including high-frequency flow unsteadiness, associated with the unnatural and often complex geometries of AVF are believed to be implicated in the development of IH. In the present study, we employ a Mesh Adaptive Direct Search optimisation framework, computational fluid dynamics simulations, and a new cost function to design a novel non-planar AVF configuration that can suppress high-frequency unsteady flow. A prototype device for holding an AVF in the optimal configuration is then fabricated, and proof-of-concept is demonstrated in a porcine model. Results constitute the first use of numerical optimisation to design a device for suppressing potentially pathological high-frequency flow unsteadiness in AVF.
Magnetic resonance imaging-guided surgical design: can we optimise the Fontan operation?
Haggerty, Christopher M; Yoganathan, Ajit P; Fogel, Mark A
2013-12-01
The Fontan procedure, although an imperfect solution for children born with a single functional ventricle, is at present the only reconstruction short of transplantation. The haemodynamics associated with the total cavopulmonary connection, the modern approach to Fontan, are severely altered from the normal biventricular circulation and may contribute to the long-term complications that are frequently noted. Through recent technological advances, spearheaded by advances in medical imaging, it is now possible to virtually model these surgical procedures and evaluate the patient-specific haemodynamics as part of the pre-operative planning process. This is a novel paradigm with the potential to revolutionise the approach to Fontan surgery, help to optimise the haemodynamic results, and improve patient outcomes. This review provides a brief overview of these methods, presents preliminary results of their clinical usage, and offers insights into potential future directions.
Shape and energy consistent pseudopotentials for correlated electron systems
Needs, R. J.
2017-01-01
A method is developed for generating pseudopotentials for use in correlated-electron calculations. The paradigms of shape and energy consistency are combined and defined in terms of correlated-electron wave-functions. The resulting energy consistent correlated electron pseudopotentials (eCEPPs) are constructed for H, Li–F, Sc–Fe, and Cu. Their accuracy is quantified by comparing the relaxed molecular geometries and dissociation energies which they provide with all electron results, with all quantities evaluated using coupled cluster singles, doubles, and triples calculations. Errors inherent in the pseudopotentials are also compared with those arising from a number of approximations commonly used with pseudopotentials. The eCEPPs provide a significant improvement in optimised geometries and dissociation energies for small molecules, with errors for the latter being an order-of-magnitude smaller than for Hartree-Fock-based pseudopotentials available in the literature. Gaussian basis sets are optimised for use with these pseudopotentials. PMID:28571391
Thermal Performance Analysis of Solar Collectors Installed for Combisystem in the Apartment Building
NASA Astrophysics Data System (ADS)
Žandeckis, A.; Timma, L.; Blumberga, D.; Rochas, C.; Rošā, M.
2012-01-01
The paper focuses on the application of a wood pellet and solar combisystem for space heating and hot water preparation in apartment buildings under the climate of Northern Europe. A pilot project has been implemented in the city of Sigulda (N 57° 09.410 E 024° 52.194), Latvia. The system was designed and optimised using TRNSYS, a dynamic simulation tool, and the pilot project was continuously monitored. The heat transfer fluid flow rate and the influence of the inlet temperature on the performance of the solar collectors were analysed. The thermal performance of the solar collector loop was studied using a direct method. A multiple regression analysis was carried out using STATGRAPHICS Centurion 16.1.15 with the aim of identifying the operational and weather parameters that most strongly influence the collectors' performance. The parameters to be used for the system's optimisation have been evaluated.
Optimisation algorithms for ECG data compression.
Haugland, D; Heber, J G; Husøy, J H
1997-07-01
The use of exact optimisation algorithms for compressing digital electrocardiograms (ECGs) is demonstrated. As opposed to traditional time-domain methods, which use heuristics to select a small subset of representative signal samples, the problem of selecting the subset is formulated in rigorous mathematical terms. This approach makes it possible to derive algorithms guaranteeing the smallest possible reconstruction error when a bounded selection of signal samples is interpolated. The proposed model resembles well-known network models and is solved by a cubic dynamic programming algorithm. When applied to standard test problems, the algorithm produces a compressed representation for which the distortion is about one-half of that obtained by traditional time-domain compression techniques at reasonable compression ratios. This illustrates that, in terms of the accuracy of decoded signals, existing time-domain heuristics for ECG compression may be far from what is theoretically achievable. The paper is an attempt to bridge this gap.
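The flavour of the approach can be sketched in Python: choose m of n samples so that piecewise-linear interpolation through them minimises the squared reconstruction error, via dynamic programming over the previous kept sample (cubic time overall, in the spirit of the algorithm described; the details here are simplified assumptions, not the paper's exact formulation).

import numpy as np

def optimal_subset(x, m):
    """Pick m samples (keeping both endpoints) minimising the squared error of
    piecewise-linear reconstruction, by dynamic programming."""
    n = len(x)
    seg = np.zeros((n, n))        # seg[i, j]: error of interpolating between i and j
    for i in range(n):
        for j in range(i + 2, n):
            t = np.arange(i + 1, j)
            interp = x[i] + (x[j] - x[i]) * (t - i) / (j - i)
            seg[i, j] = ((x[t] - interp) ** 2).sum()
    E = np.full((n, m), np.inf)   # E[j, k]: best error keeping k+1 samples, last at j
    P = np.zeros((n, m), dtype=int)
    E[0, 0] = 0.0
    for k in range(1, m):
        for j in range(1, n):
            costs = E[:j, k - 1] + seg[:j, j]
            i = int(np.argmin(costs))
            E[j, k], P[j, k] = costs[i], i
    keep, j = [n - 1], n - 1
    for k in range(m - 1, 0, -1):
        j = P[j, k]
        keep.append(j)
    return keep[::-1], E[n - 1, m - 1]

x = np.cumsum(np.random.default_rng(7).normal(size=120))   # stand-in "ECG" signal
kept_indices, error = optimal_subset(x, 20)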
Zhang, Fan; Yang, Yi; Su, Ping; Guo, Zhenku
2009-01-01
Euonymus alatus (Thunb.) Sieb has been used as a traditional Chinese medicine for several thousand years. Conventional methods for the extraction of rutin and quercetin from E. alatus, including solvent extraction, Soxhlet extraction and heat-reflux extraction, are characterised by long extraction times and the consumption of large amounts of solvent. The aim was to develop a simple and rapid method for the extraction of rutin and quercetin from the stalks of E. alatus using a microwave-assisted extraction (MAE) technique. MAE experiments were performed with a multimode microwave extraction system. The experimental variables that affect the MAE process, such as the concentration of the ethanol solution, extractant volume, microwave power and extraction time, were optimised, and yields were determined by HPLC. The results were compared with those obtained by classical Soxhlet extraction and ultrasound-assisted extraction (UAE). The optimised MAE conditions for rutin and quercetin were a 50% (v/v) ethanol solution, an extractant volume of 40 mL, a microwave power of 170 W and an irradiation time of 6 min. Compared with Soxhlet and ultrasonic extraction, microwave extraction is a rapid method with a higher yield and lower solvent consumption. The results showed that MAE can be used as an efficient and rapid method for the extraction of active components from plants.
NASA Astrophysics Data System (ADS)
Arjunan, V.; Santhanam, R.; Marchewka, M. K.; Mohan, S.
2014-03-01
O-desmethyltramadol is one of the main metabolites of tramadol, which is widely used clinically, and has analgesic activity. The FTIR and FT-Raman spectra of O-desmethyltramadol hydrochloride were recorded in the solid phase in the regions 4000-400 cm⁻¹ and 4000-100 cm⁻¹, respectively. The observed fundamentals are assigned to different normal modes of vibration. Theoretical studies were performed on the hydrochloride salt. The structure of the compound was optimised with the B3LYP method using the 6-31G** and cc-pVDZ basis sets. The optimised bond lengths and bond angles are correlated with the X-ray data. The experimental wavenumbers were compared with the scaled vibrational frequencies determined by DFT methods. The IR and Raman intensities were determined with the B3LYP method using the cc-pVDZ and 6-31G(d,p) basis sets. The total electron density and molecular electrostatic potential surfaces of the molecule were constructed using the B3LYP/cc-pVDZ method to display the electrostatic potential (electron + nuclei) distribution. The HOMO and LUMO energies were determined. Natural bond orbital analysis of O-desmethyltramadol hydrochloride was performed, indicating the presence of intramolecular charge transfer. The 1H and 13C NMR chemical shifts of the molecule were analysed.
ERIC Educational Resources Information Center
Lucas, Rebecca; James, Alana I.
2018-01-01
Mentoring is often recommended to universities as a way of supporting students with Autism Spectrum Disorders (ASD) and/or mental health conditions (MHC), but there is little literature on optimising this support. We used mixed-methods to evaluate mentees' and mentors' experiences of a specialist mentoring programme. Mentees experienced academic,…
[Oral communication, for the spread and transfer of knowledge].
Beloni, Pascale; Berger, Valérie; Peoc'h, Nadia
2013-12-01
The continuous improvement of the quality of care in the personalised management of each patient requires practitioners to consciously use the best current data resulting from clinical research. The oral communication of the results of research work is one of the methods of optimising the scientific, pedagogical and social value of nursing and allied healthcare research.
ERIC Educational Resources Information Center
Baeten, Marlies; Struyven, Katrien; Dochy, Filip
2013-01-01
This paper investigates dynamics in approaches to learning within different learning environments. Two quasi-experimental studies were conducted with first-year student teachers (N[subscript Study 1] = 496, N[subscript Study 2] = 1098) studying a child development course. Data collection was carried out using a pre-test/post-test design by means…
Devikanniga, D; Joshua Samuel Raj, R
2018-04-01
Osteoporosis is a life-threatening disease which commonly affects women, mostly after menopause. It primarily causes mild bone fractures, which at an advanced stage can lead to death. Osteoporosis is diagnosed from bone mineral density (BMD) values obtained through various clinical methods applied at various skeletal regions. The main objective of the authors' work is to develop a hybrid classifier model that discriminates osteoporotic patients from healthy persons based on BMD values. In this Letter, the authors propose a monarch butterfly optimisation-based artificial neural network classifier which helps in the earlier diagnosis and prevention of osteoporosis. The experiments were conducted using the 10-fold cross-validation method for two datasets, lumbar spine and femoral neck. The results were compared with other similar hybrid approaches. The proposed method achieved accuracy, specificity and sensitivity of 97.9% ± 0.14, 98.33% ± 0.03 and 95.24% ± 0.08, respectively, for the lumbar spine dataset, and 99.3% ± 0.16, 99.2% ± 0.13 and 100%, respectively, for the femoral neck dataset. Further, its performance was compared using receiver operating characteristic analysis and the Wilcoxon signed-rank test. The results proved that the proposed classifier is efficient and outperformed the other approaches in all cases.
Low, Ying Wei Ivan; Blasco, Francesca; Vachaspati, Prakash
2016-09-20
Lipophilicity is one of the molecular properties assessed in early drug discovery. Direct measurement of the octanol-water distribution coefficient (logD) requires an analytical method with a large dynamic range or multistep dilutions, as the analyte's concentrations span several orders of magnitude. In addition, the water/buffer and octanol phases, which have very different polarities, can lead to matrix effects that affect the LC-MS response, leading to erroneous logD values. Most compound libraries use DMSO stocks as this greatly reduces the sample requirement, but the presence of DMSO has been shown to underestimate the lipophilicity of the analyte. The present work describes the development of an optimised shake-flask logD method using a deep-well 96-well plate that addresses the issues related to matrix effects, DMSO concentration and incubation conditions, and is also amenable to high throughput. Our results indicate that equilibrium can be achieved within 30 min by flipping the plate on its side, while even 0.5% DMSO is not tolerated in the assay. This study uses the matched-matrix concept to minimise the errors in analysing the two phases, namely buffer and octanol, in LC-MS. Copyright © 2016 Elsevier B.V. All rights reserved.
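The final calculation behind a shake-flask logD is a simple ratio of phase concentrations. A hedged Python sketch with invented matched-matrix LC-MS responses (all values are assumptions for illustration):

import math

area_octanol, area_buffer = 9.6e6, 3.1e4        # responses in matched matrices (invented)
dilution_octanol, dilution_buffer = 100.0, 1.0  # fold-dilutions before injection

conc_ratio = (area_octanol * dilution_octanol) / (area_buffer * dilution_buffer)
logD = math.log10(conc_ratio)
print(f"logD = {logD:.2f}")                     # here about 4.5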
Modelling the firing pattern of bullfrog vestibular neurons responding to naturalistic stimuli
NASA Technical Reports Server (NTRS)
Paulin, M. G.; Hoffman, L. F.
1999-01-01
We have developed a neural system identification method for fitting models to stimulus-response data, where the response is a spike train. The method involves using a general nonlinear optimisation procedure to fit models in the time domain. We have applied the method to model bullfrog semicircular canal afferent neuron responses during naturalistic, broad-band head rotations. These neurons respond in diverse ways, but a simple four parameter class of models elegantly accounts for the various types of responses observed. © 1999 Elsevier Science B.V. All rights reserved.
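A generic Python sketch of this kind of time-domain fitting follows, assuming a simple linear-filter rate model and a squared-error cost; the paper's four-parameter model and cost function are not reproduced here.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)
stim = rng.normal(size=2000)                        # broad-band stimulus, 10 ms bins
true_rate = np.clip(5 + 8 * np.convolve(stim, [0.5, 0.3, 0.2], "same"), 0, None)
spikes = rng.poisson(true_rate * 0.01)              # spike counts per bin

def cost(params):
    b0, gain, k1, k2, k3 = params
    rate = np.clip(b0 + gain * np.convolve(stim, [k1, k2, k3], "same"), 1e-6, None)
    return np.sum((spikes - rate * 0.01) ** 2)      # time-domain squared error

fit = minimize(cost, x0=[1.0, 1.0, 0.3, 0.3, 0.3], method="Nelder-Mead")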
Boughen, Melanie; Sutton, Jane; Fenn, Tess
2017-01-01
Background: Traditionally, pharmacy technicians have worked alongside pharmacists in community and hospital pharmacy. Changes within pharmacy provide opportunities for role expansion, and with no apparent career pathway, there is a need to define the current pharmacy technician role and its part in medicines optimisation. Aim: To capture the current roles of pharmacy technicians and identify how their future role will contribute to medicines optimisation. Methods: Following ethical approval and piloting, an online survey to ascertain pharmacy technicians' views about their roles was undertaken. Recruitment took place in collaboration with the Association of Pharmacy Technicians UK. Data were exported to SPSS, the data screened and descriptive statistics produced. Free-text responses were analysed and tasks collated into categories reflecting the type of work involved in each task. Results: A total of 393 responses were received (28%, n = 1380). Results were organised into five groups: hospital, community, primary care, General Practitioner (GP) practice and other (which included HM Prison Service). Thirty tasks were reported as commonly undertaken in three or more settings, and 206 (84.7%, n = 243) pharmacy technicians reported that they would like to expand their role. Conclusions: Tasks core to hospital and community pharmacy should be considered for inclusion in the initial education standards to reflect current practice. Post qualification, pharmacy technicians indicate a significant desire to expand clinically and managerially, allowing pharmacists more time in patient-facing/clinical roles. PMID:28970452
Mohamad, Nurhidayatul Asma; Mustafa, Shuhaimi; El Sheikha, Aly Farag; Khairil Mokhtar, Nur Fadhilah; Ismail, Amin; Ali, Md Eaqub
2016-05-01
Poor quality and quantity of DNA extracted from gelatin and gelatin capsules often cause failure in the determination of animal species using PCR. Gelatin, which is mainly derived from porcine and bovine sources, has been a matter of concern among customers wishing to fulfil religious obligations, and as a safety precaution against several transmissible infectious diseases associated with bovine species. Thus, optimised DNA extraction from gelatin is very important for successful real-time PCR detection of gelatin species. In this work, the DNA extraction method was optimised in terms of the lysis incubation period and the inclusion of a pre-treatment pH modification of samples. The yield of DNA extracted from porcine gelatin was significantly increased when the pH of the samples was adjusted to pH 8.5 prior to DNA precipitation with isopropanol. The optimal pH for DNA precipitation from bovine gelatin solution was the original pH range of the solution, pH 7.6 to 8. A DNA fragment of approximately 300 base pairs was available for PCR amplification. DNA extracted from gelatin and commercially available capsules was successfully utilised for species detection using a real-time PCR assay. However, significant adulteration of porcine and bovine in pure gelatin and capsules was detected, which requires further analytical techniques for validation. © 2015 Society of Chemical Industry.
Optimising hydrogen peroxide measurement in exhaled breath condensate.
Brooks, Wendy M; Lash, Heath; Kettle, Anthony J; Epton, Michael J
2006-01-01
Exhaled breath condensate (EBC) analysis has been proposed as a non-invasive method of assessing airway pathology. A number of substances, including hydrogen peroxide (H2O2), have been measured in EBC, without adequate published details of validation and optimisation. To explore factors that affect accurate quantitation of H2O2 in EBC. H2O2 was measured in EBC samples using fluorometry with 4-hydroxyphenylacetic acid. A number of factors that might alter quantitation were studied including pH and buffering conditions, reagent storage, and assay temperature. Standard curve slope was significantly altered by pH, leading to a potential difference in H2O2 quantification of up to 42%. These differences were resolved by increasing the buffering capacity of the reaction mix. H2O2 added to EBC remained stable for 1 h when stored on ice. The assay was unaffected by freezing assay reagents. The limit of detection for H2O2 ranged from 3.4 nM to 8.8 nM depending on the buffer used. The reagents required for this assay can be stored for several months allowing valuable consistency in longitudinal studies. The quantitation of H2O2 in EBC is pH-dependent but increasing assay buffering reduces this effect. Sensitive reproducible quantitation of H2O2 in EBC requires rigorous optimisation.
Lampe, Lisa
2015-08-01
To outline the problems around the overlap between social anxiety disorder (SAD) and avoidant personality disorder (AVPD) and provide guidelines that may assist clinicians to differentiate these conditions. A constellation of symptoms can be identified that may distinguish AVPD from SAD, with key features being a strong and pervasively negative self-concept, a view of rejection as equating to a global evaluation of the individual as being of little worth, and a sense of not fitting in socially that dates from early childhood. It is important to identify the presence of AVPD in order to anticipate potential problems with engagement and retention in therapy, to target treatment interventions and to optimise outcome. © The Royal Australian and New Zealand College of Psychiatrists 2015.
The Molecular Design of Active Sites in Nanoporous Materials for Sustainable Catalysis.
Chapman, Stephanie; Potter, Matthew E; Raja, Robert
2017-12-02
At the forefront of global development, the chemical industry is being confronted by a growing demand for products and services, but also the need to provide these in a manner that is sustainable in the long-term. In facing this challenge, the industry is being revolutionised by advances in catalysis that allow chemical transformations to be performed in a more efficient and economical manner. To this end, molecular design, facilitated by detailed theoretical and empirical studies, has played a pivotal role in creating highly-active and selective heterogeneous catalysts. In this review, the industrially-relevant Beckmann rearrangement is presented as an exemplar of how judicious characterisation and ab initio experiments can be used to understand and optimise nanoporous materials for sustainable catalysis.
NASA Astrophysics Data System (ADS)
Prieur, Anne; Bonnet, Jean-François; Combarnous, Michel
2004-11-01
The role of forest ecosystems in regulating the greenhouse effect at the global scale is examined here from two points of view that are sometimes considered as opposed: carbon storage and wood production for energy. A nomenclature is proposed to cover the various mechanisms involved in carbon storage. A comparison is made between the effects on carbon emissions of storage alone and of storage combined with wood fuel production. The use of wood energy is shown to be a 'bonus' that could optimise, in the middle and long terms, the use of fossil fuel reserves. To cite this article: A. Prieur et al., C. R. Geoscience 336 (2004).
H∞ control problem of linear periodic piecewise time-delay systems
NASA Astrophysics Data System (ADS)
Xie, Xiaochen; Lam, James; Li, Panshuo
2018-04-01
This paper investigates the H∞ control problem based on exponential stability and weighted L2-gain analyses for a class of continuous-time linear periodic piecewise systems with time delay. A periodic piecewise Lyapunov-Krasovskii functional is developed by integrating a discontinuous time-varying matrix function with two global terms. By applying the improved constraints to the stability and L2-gain analyses, sufficient delay-dependent exponential stability and weighted L2-gain criteria are proposed for the periodic piecewise time-delay system. Based on these analyses, an H∞ control scheme is designed under the considerations of periodic state feedback control input and iterative optimisation. Finally, numerical examples are presented to illustrate the effectiveness of our proposed conditions.
Li, Jinyan; Fong, Simon; Sung, Yunsick; Cho, Kyungeun; Wong, Raymond; Wong, Kelvin K L
2016-01-01
An imbalanced dataset is defined as a training dataset that has imbalanced proportions of data in the interesting and uninteresting classes. Often in biomedical applications, samples from the class of interest are rare in a population, such as medical anomalies, positive clinical tests and particular diseases. Although the target samples in the primitive dataset are small in number, inducing a classification model over such training data leads to poor prediction performance due to insufficient training on the minority class. In this paper, we use a novel class-balancing method named adaptive swarm cluster-based dynamic multi-objective synthetic minority oversampling technique (ASCB_DmSMOTE) to solve this imbalanced dataset problem, which is common in biomedical applications. The proposed method combines under-sampling and over-sampling within a swarm optimisation algorithm, adaptively selecting suitable parameters for the rebalancing algorithm to find the best solution. Compared with other versions of the SMOTE algorithm, significant improvements, including higher accuracy and credibility, are observed with ASCB_DmSMOTE. Our proposed method tactfully combines the two rebalancing techniques: it re-allocates the majority class into clustered subsets and dynamically optimises the two parameters of SMOTE to synthesise a reasonable number of minority-class samples for each clustered sub-imbalanced dataset. The proposed method ultimately outperforms other conventional methods and attains higher credibility with even greater accuracy of the classification model.
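For orientation, the core SMOTE step that ASCB_DmSMOTE builds on can be written in a few lines: each synthetic minority sample is a random interpolation between a minority point and one of its k nearest minority neighbours. The swarm-optimised clustering and parameter adaptation of the paper are omitted, and `k` and the data are placeholders.

```python
import numpy as np

def smote(X_min, n_new, k=5, rng=np.random.default_rng(0)):
    """Plain SMOTE: each synthetic point lies on the segment between a
    random minority sample and one of its k nearest minority neighbours."""
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nn = np.argsort(d)[1:k + 1]              # skip the point itself
        j = rng.choice(nn)
        out.append(X_min[i] + rng.random() * (X_min[j] - X_min[i]))
    return np.array(out)

minority = np.random.default_rng(1).normal(size=(20, 3))
print(smote(minority, 40).shape)                 # (40, 3)
```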
Is ICRP guidance on the use of reference levels consistent?
Hedemann-Jensen, Per; McEwan, Andrew C
2011-12-01
In ICRP 103, which has replaced ICRP 60, it is stated that no fundamental changes have been introduced compared with ICRP 60. This is true except that the application of reference levels in emergency and existing exposure situations seems to be applied inconsistently, and also in the related publications ICRP 109 and ICRP 111. ICRP 103 emphasises that focus should be on the residual doses after the implementation of protection strategies in emergency and existing exposure situations. If possible, the result of an optimised protection strategy should bring the residual dose below the reference level. Thus the reference level represents the maximum acceptable residual dose after an optimised protection strategy has been implemented. It is not an 'off-the-shelf item' that can be set free of the prevailing situation. It should be determined as part of the process of optimising the protection strategy. If not, protection would be sub-optimised. However, in ICRP 103 some inconsistent concepts have been introduced, e.g. in paragraph 279 which states: 'All exposures above or below the reference level should be subject to optimisation of protection, and particular attention should be given to exposures above the reference level'. If, in fact, all exposures above and below reference levels are subject to the process of optimisation, reference levels appear superfluous. It could be considered that if optimisation of protection below a fixed reference level is necessary, then the reference level has been set too high at the outset. Up until the last phase of the preparation of ICRP 103 the concept of a dose constraint was recommended to constrain the optimisation of protection in all types of exposure situations. In the final phase, the term 'dose constraint' was changed to 'reference level' for emergency and existing exposure situations. However, it seems as if in ICRP 103 it was not fully recognised that dose constraints and reference levels are conceptually different. The use of reference levels in radiological protection is reviewed. It is concluded that the recommendations in ICRP 103 and related ICRP publications seem to be inconsistent regarding the use of reference levels in existing and emergency exposure situations.
Estimating meme fitness in adaptive memetic algorithms for combinatorial problems.
Smith, J E
2012-01-01
Among the most promising and active research areas in heuristic optimisation is the field of adaptive memetic algorithms (AMAs). These gain much of their reported robustness by adapting the probability with which each of a set of local improvement operators is applied, according to an estimate of their current value to the search process. This paper addresses the issue of how the current value should be estimated. Assuming the estimate occurs over several applications of a meme, we consider whether the extreme or mean improvements should be used, and whether this aggregation should be global, or local to some part of the solution space. To investigate these issues, we use the well-established COMA framework that coevolves the specification of a population of memes (representing different local search algorithms) alongside a population of candidate solutions to the problem at hand. Two very different memetic algorithms are considered: the first using adaptive operator pursuit to adjust the probabilities of applying a fixed set of memes, and a second which applies genetic operators to dynamically adapt and create memes and their functional definitions. For the latter, especially on combinatorial problems, credit assignment mechanisms based on historical records, or on notions of landscape locality, will have limited application, and it is necessary to estimate the value of a meme via some form of sampling. The results on a set of binary encoded combinatorial problems show that both methods are very effective, and that for some problems it is necessary to use thousands of variables in order to tease apart the differences between different reward schemes. However, for both memetic algorithms, a significant pattern emerges that reward based on mean improvement is better than that based on extreme improvement. This contradicts recent findings from adapting the parameters of operators involved in global evolutionary search. The results also show that local reward schemes outperform global reward schemes in combinatorial spaces, unlike in continuous spaces. An analysis of evolving meme behaviour is used to explain these findings.
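The reward-aggregation question studied here (mean versus extreme improvement) can be made concrete with a small probability-matching operator-selection loop; the memes, problem and parameters below are toy stand-ins, not the COMA framework itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def adaptive_meme_choice(memes, solution, evaluate, steps=500, use_mean=True):
    """Probability-matching selection: each meme's selection probability
    tracks its estimated reward (mean or extreme improvement so far)."""
    rewards = [[1e-6] for _ in memes]            # optimistic non-zero start
    best = evaluate(solution)
    for _ in range(steps):
        est = np.array([np.mean(r) if use_mean else np.max(r) for r in rewards])
        k = rng.choice(len(memes), p=est / est.sum())
        cand = memes[k](solution)
        improvement = max(best - evaluate(cand), 0.0)
        rewards[k].append(improvement)
        if improvement > 0:
            solution, best = cand, evaluate(cand)
    return solution, best

# toy bit-flip memes on a OneMax-style problem (minimise the negative count)
evaluate = lambda s: -s.sum()
flip1 = lambda s: np.where(rng.random(len(s)) < 1 / len(s), 1 - s, s)
flip3 = lambda s: np.where(rng.random(len(s)) < 3 / len(s), 1 - s, s)
s0 = rng.integers(0, 2, 64)
print(adaptive_meme_choice([flip1, flip3], s0, evaluate)[1])
```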
Optimisation and establishment of diagnostic reference levels in paediatric plain radiography
NASA Astrophysics Data System (ADS)
Paulo, Graciano do Nascimento Nobre
Purpose: This study aimed to propose diagnostic reference levels (DRLs) in paediatric plain radiography and to optimise the most frequent paediatric plain radiography examinations in Portugal following an analysis and evaluation of current practice. Methods and materials: Anthropometric data (weight, patient height and thickness of the irradiated anatomy) were collected from 9,935 patients referred for a radiography procedure to one of the three dedicated paediatric hospitals in Portugal. National DRLs were calculated for the three most frequent X-ray procedures at the three hospitals: chest AP/PA projection, abdomen AP projection and pelvis AP projection. Exposure factors and patient dose were collected prospectively at the clinical sites. In order to analyse the relationship between exposure factors, the use of technical features and dose, experimental tests were made using two anthropomorphic phantoms: (a) CIRS™ ATOM model 705 (height 110 cm, weight 19 kg) and (b) Kyoto Kagaku™ model PBU-60 (height 165 cm, weight 50 kg). After phantom data collection, an objective image analysis was performed by analysing the variation of the mean value of the standard deviation, measured with OsiriX software (Pixmeo, Switzerland). After proposing new exposure criteria, a visual grading characteristic image quality evaluation was performed blindly by four paediatric radiologists, each with a minimum of 10 years of professional experience, using anatomical criteria scoring. Results: DRLs by patient weight group have been established for the first time. ESAK P75 DRLs for both patient age and weight groups were also obtained and are described in the thesis. Significant dose reduction was achieved through the implementation of an optimisation programme: average reductions of 41% and 18% in KAP P75 and ESAK P75, respectively, for chest plain radiography; 58% and 53% for abdomen plain radiography; and 47% and 48% for pelvis plain radiography. Conclusion: Portuguese DRLs were obtained for paediatric plain radiography (chest AP/PA, abdomen and pelvis). Experimental phantom tests identified adequate plain radiography exposure criteria, validated by objective and subjective image quality analysis. The new exposure criteria were put into practice in one of the paediatric hospitals by introducing an optimisation programme. The implementation of the optimisation programme allowed a significant dose reduction to paediatric patients without compromising image quality. (Abstract shortened by ProQuest.)
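Since a DRL is conventionally set at the 75th percentile of the observed dose distribution, the core computation is short; the kerma-area product values and weight groups below are purely illustrative.

```python
import numpy as np

# Hypothetical kerma-area product readings (cGy.cm2) grouped by patient weight.
kap_by_weight = {
    "<15 kg":   [2.1, 3.4, 1.8, 2.9, 4.0, 2.2],
    "15-30 kg": [4.5, 6.1, 5.0, 7.2, 5.8],
    ">30 kg":   [8.3, 9.9, 7.5, 11.2, 10.0],
}

# A DRL is conventionally the 75th percentile of the dose distribution.
for group, values in kap_by_weight.items():
    print(group, round(np.percentile(values, 75), 2))
```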
NASA Astrophysics Data System (ADS)
Reese, D. R.; Chaplin, W. J.; Davies, G. R.; Miglio, A.; Antia, H. M.; Ball, W. H.; Basu, S.; Buldgen, G.; Christensen-Dalsgaard, J.; Coelho, H. R.; Hekker, S.; Houdek, G.; Lebreton, Y.; Mazumdar, A.; Metcalfe, T. S.; Silva Aguirre, V.; Stello, D.; Verma, K.
2016-07-01
Context. Detailed oscillation spectra comprising individual frequencies for numerous solar-type stars and red giants are either currently available, e.g. courtesy of the CoRoT, Kepler, and K2 missions, or will become available with the upcoming NASA TESS and ESA PLATO 2.0 missions. The data can lead to a precise characterisation of these stars thereby improving our understanding of stellar evolution, exoplanetary systems, and the history of our galaxy. Aims: Our goal is to test and compare different methods for obtaining stellar properties from oscillation frequencies and spectroscopic constraints. Specifically, we would like to evaluate the accuracy of the results and reliability of the associated error bars, and to see where there is room for improvement. Methods: In the context of the SpaceInn network, we carried out a hare-and-hounds exercise in which one group, the hares, simulated observations of oscillation spectra for a set of ten artificial solar-type stars, and a number of hounds applied various methods for characterising these stars based on the data produced by the hares. Most of the hounds fell into two main groups. The first group used forward modelling (i.e. applied various search/optimisation algorithms in a stellar parameter space) whereas the second group relied on acoustic glitch signatures. Results: Results based on the forward modelling approach were accurate to 1.5% (radius), 3.9% (mass), 23% (age), 1.5% (surface gravity), and 1.8% (mean density), as based on the root mean square difference. Individual hounds reached different degrees of accuracy, some of which were substantially better than the above average values. For the two 1M⊙ stellar targets, the accuracy on the age is better than 10% thereby satisfying the requirements for the PLATO 2.0 mission. High stellar masses and atomic diffusion (which in our models does not include the effects of radiative accelerations) proved to be sources of difficulty. The average accuracies for the acoustic radii of the base of the convection zone, the He II ionisation, and the Γ1 peak located between the two He ionisation zones were 17%, 2.4%, and 1.9%, respectively. The results from the forward modelling were on average more accurate than those from the glitch fitting analysis as the latter seemed to be affected by aliasing problems for some of the targets. Conclusions: Our study indicates that forward modelling is the most accurate way of interpreting the pulsation spectra of solar-type stars. However, given its model-dependent nature, this method needs to be complemented by model-independent results from, e.g. glitch analysis. Furthermore, our results indicate that global rather than local optimisation algorithms should be used in order to obtain robust error bars.
Ferhi, Sabrina; Bourdat-Deschamps, Marjolaine; Daudin, Jean-Jacques; Houot, Sabine; Nélieu, Sylvie
2016-09-01
Pharmaceuticals can enter the environment when organic waste products are recycled on agricultural soils. The extraction of pharmaceuticals is a challenging step in their analysis, and the very different extraction conditions proposed in the literature make the choice of the right method for multi-residue analysis difficult. This study aimed at evaluating, with experimental design methodology, the influence of the nature, pH and composition of the extraction medium on the extraction recovery of 14 pharmaceuticals, including 8 antibiotics, from soil and sewage sludge. Preliminary experimental designs showed that acetonitrile and citrate-phosphate buffer were the best extractants. A response surface design then demonstrated that many cross-product and squared terms had significant effects, explaining the shapes of the response surfaces, and allowed the pharmaceutical recoveries in soil and sludge to be optimised. The optimal conditions were interpreted considering the ionisation states of the compounds, their solubility in the extraction medium and their interactions with the solid matrix. To perform the analysis, a compromise was made for each matrix. After a QuEChERS purification, the samples were analysed by online SPE-UHPLC-MS-MS. Both methods were simple and economical. They were validated with the accuracy-profile methodology for soil and sludge and characterised for another type of soil, digested sludge and composted sludge. Trueness globally ranged between 80 and 120% recovery, and inter- and intra-day precisions were globally below 20% relative standard deviation. Various pharmaceuticals were present in environmental samples, at concentration levels ranging from a few micrograms per kilogramme up to thousands of micrograms per kilogramme. Graphical abstract: influence of the extraction medium on the extraction recovery of 14 pharmaceuticals; influence of the ionisation state, solubility and solid-matrix interactions of the pharmaceuticals; analysis of different soils and organic waste products.
Wang, Candice; Huang, Chin-Chou; Lin, Shing-Jong; Chen, Jaw-Wen
2016-01-01
Objectives The goal of our study was to shed light on educational methods to strengthen medical students' cardiopulmonary resuscitation (CPR) leadership and team skills in order to optimise CPR understanding and success, using didactic videos and high-fidelity simulations. Design An observational study. Setting A tertiary medical centre in Northern Taiwan. Participants A total of 104 5th-7th year medical students, including 72 men and 32 women. Interventions We provided the medical students with a 2-hour training session on advanced CPR. During each class, we divided the students into 1-2 groups; each group consisted of 4-6 team members. Medical student teams were trained using either method A or B. Method A started with an instructional CPR video followed by a first CPR simulation. Method B started with a first CPR simulation followed by an instructional CPR video. All students then participated in a second CPR simulation. Outcome measures Student teams were assessed with checklist rating scores in leadership, teamwork and team member skills, global rating scores by an attending physician, and video-recording evaluation by two independent individuals. Results The 104 medical students were divided into 22 teams. We trained 11 teams using method A and 11 using method B. Total second CPR simulation scores were significantly higher than first CPR simulation scores in leadership (p<0.001), teamwork (p<0.001) and team member skills (p<0.001). For methods A and B, students' first CPR simulation scores were similar, but method A students' second CPR simulation scores were significantly higher than those of method B in leadership skills (p=0.034), specifically in the support subcategory (p=0.049). Conclusions Although both teaching strategies improved leadership, teamwork and team member performance, video exposure followed by CPR simulation further increased students' leadership skills compared with CPR simulation followed by video exposure. PMID:27678539
Optimisation of a Generic Ionic Model of Cardiac Myocyte Electrical Activity
Guo, Tianruo; Al Abed, Amr; Lovell, Nigel H.; Dokos, Socrates
2013-01-01
A generic cardiomyocyte ionic model, whose complexity lies between a simple phenomenological formulation and a biophysically detailed ionic membrane current description, is presented. The model provides a user-defined number of ionic currents, employing two-gate Hodgkin-Huxley type kinetics. Its generic nature allows accurate reconstruction of action potential waveforms recorded experimentally from a range of cardiac myocytes. Using a multiobjective optimisation approach, the generic ionic model was optimised to accurately reproduce multiple action potential waveforms recorded from central and peripheral sinoatrial nodes and right atrial and left atrial myocytes from rabbit cardiac tissue preparations, under different electrical stimulus protocols and pharmacological conditions. When fitted simultaneously to multiple datasets, the time course of several physiologically realistic ionic currents could be reconstructed. Model behaviours tend to be well identified when extra experimental information is incorporated into the optimisation. PMID:23710254
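A minimal sketch of the two-gate Hodgkin-Huxley kinetics underlying such a generic ionic model: an activation gate and an inactivation gate, each relaxing towards a voltage-dependent steady state with its own time constant. All rate parameters below are hypothetical, not the optimised values of the paper.

```python
import numpy as np

def simulate_gates(v_trace, dt, v_half=(-40.0, -60.0), k=(7.0, -5.0), tau=(1.0, 20.0)):
    """Two-gate Hodgkin-Huxley kinetics: activation m and inactivation h,
    each relaxing to a sigmoidal steady state with its own time constant (ms)."""
    m, h = 0.0, 1.0
    g = np.empty(len(v_trace))
    for i, v in enumerate(v_trace):
        m_inf = 1.0 / (1.0 + np.exp(-(v - v_half[0]) / k[0]))
        h_inf = 1.0 / (1.0 + np.exp(-(v - v_half[1]) / k[1]))
        m += dt * (m_inf - m) / tau[0]
        h += dt * (h_inf - h) / tau[1]
        g[i] = m * h                              # open probability of the channel
    return g

dt = 0.1                                          # ms
v = np.r_[np.full(500, -80.0), np.full(1500, -20.0)]  # voltage-step protocol
print(simulate_gates(v, dt).max())                # transient: activates, then inactivates
```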
Load-sensitive dynamic workflow re-orchestration and optimisation for faster patient healthcare.
Meli, Christopher L; Khalil, Ibrahim; Tari, Zahir
2014-01-01
Hospital waiting times are considerably long, with no signs of reducing any time soon. A number of factors, including population growth, the ageing population and a lack of new infrastructure, are expected to further exacerbate waiting times in the near future. In this work, we show how healthcare services can be modelled as queueing nodes within healthcare service workflows, such that these workflows can be optimised during execution in order to reduce patient waiting times. Services such as X-ray, computed tomography and magnetic resonance imaging often form queues; thus, by taking into account the waiting times of each service, the workflow can be re-orchestrated and optimised. Experimental results indicate that average waiting time reductions are achievable by optimising workflows using dynamic re-orchestration. Crown Copyright © 2013. Published by Elsevier Ireland Ltd. All rights reserved.
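To illustrate the queueing view taken above: if each imaging service is approximated as an M/M/1 node, its expected sojourn time is W = 1/(mu - lambda), and a workflow can be re-orchestrated by routing the patient to the equivalent service with the shortest expected wait. The service rates below are invented, and the paper's model is considerably richer.

```python
# Expected sojourn time in an M/M/1 queue: W = 1 / (mu - lam), valid for lam < mu.
services = {"X-ray": (6.0, 4.5), "CT": (3.0, 2.2), "MRI": (2.0, 1.7)}  # (mu, lam) per hour
wait = {name: 1.0 / (mu - lam) for name, (mu, lam) in services.items()}
print(wait)                                   # X-ray: 0.67 h, CT: 1.25 h, MRI: 3.33 h
# Re-orchestration: when services are clinically interchangeable, route the
# patient to the one with the shortest current expected sojourn time.
print("route to:", min(wait, key=wait.get))
```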
NASA Astrophysics Data System (ADS)
Grundmann, J.; Schütze, N.; Heck, V.
2014-09-01
Groundwater systems in arid coastal regions are particularly at risk due to limited potential for groundwater replenishment and increasing water demand, caused by a continuously growing population. For ensuring a sustainable management of those regions, we developed a new simulation-based integrated water management system. The management system unites process modelling with artificial intelligence tools and evolutionary optimisation techniques for managing both water quality and water quantity of a strongly coupled groundwater-agriculture system. Due to the large number of decision variables, a decomposition approach is applied to separate the original large optimisation problem into smaller, independent optimisation problems which finally allow for faster and more reliable solutions. It consists of an analytical inner optimisation loop to achieve a most profitable agricultural production for a given amount of water and an outer simulation-based optimisation loop to find the optimal groundwater abstraction pattern. Thereby, the behaviour of farms is described by crop-water-production functions and the aquifer response, including the seawater interface, is simulated by an artificial neural network. The methodology is applied exemplarily for the south Batinah region/Oman, which is affected by saltwater intrusion into a coastal aquifer system due to excessive groundwater withdrawal for irrigated agriculture. Due to contradicting objectives like profit-oriented agriculture vs aquifer sustainability, a multi-objective optimisation is performed which can provide sustainable solutions for water and agricultural management over long-term periods at farm and regional scales in respect of water resources, environment, and socio-economic development.
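The decomposition idea can be sketched as a nested loop: an inner analytical profit evaluation for a given water allocation, wrapped in an outer simulation-based search over abstraction patterns. Both the profit function and the aquifer penalty below are simple stand-ins for the paper's crop-water-production functions and neural-network aquifer surrogate.

```python
import numpy as np

rng = np.random.default_rng(0)

def farm_profit(water):
    """Inner, analytical step: best profit for a given water allocation,
    a stand-in for the crop-water-production functions."""
    return 12.0 * water - 0.8 * water ** 2        # concave yield response

def aquifer_penalty(pattern):
    """Outer, simulation step: a stand-in for the neural-network aquifer
    surrogate; penalises heavy, uneven abstraction (salinisation proxy)."""
    return 5.0 * np.var(pattern) + 2.0 * max(np.sum(pattern) - 30.0, 0.0) ** 2

best, best_val = None, -np.inf
for _ in range(5000):                             # outer random search
    pattern = rng.uniform(0, 5, size=12)          # monthly abstraction pattern
    val = sum(farm_profit(w) for w in pattern) - aquifer_penalty(pattern)
    if val > best_val:
        best, best_val = pattern, val
print(round(best_val, 1), best.round(2))
```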
Optimisation of Fabric Reinforced Polymer Composites Using a Variant of Genetic Algorithm
NASA Astrophysics Data System (ADS)
Axinte, Andrei; Taranu, Nicolae; Bejan, Liliana; Hudisteanu, Iuliana
2017-12-01
Fabric reinforced polymeric composites are high-performance materials with a rather complex fabric geometry. Modelling this type of material is therefore a cumbersome task, especially when efficient use is targeted. One of the most important issues in the design process is the optimisation of the individual laminae and of the laminated structure as a whole. To do so, a parametric model of the material has been defined, emphasising the many geometric variables that need to be correlated in the complex process of optimisation. The input parameters involved in this work include the widths and heights of the tows and the laminate stacking sequence, which are discrete variables, and the gaps between adjacent tows and the height of the neat matrix, which are continuous variables. This work is one of the first attempts to use a genetic algorithm (GA) to optimise the geometrical parameters of satin-reinforced multi-layer composites. Given the mixed type of the input parameters involved, an original software package called SOMGA (Satin Optimisation with a Modified Genetic Algorithm) has been conceived and utilised in this work. The main goal is to find the best possible solution to the problem of designing a composite material that is able to withstand a given set of external, in-plane loads. The optimisation process has been performed using a fitness function which can analyse and compare the mechanical behaviour of different fabric-reinforced composites, the results being correlated with the ultimate strains, which demonstrates the efficiency of the composite structure.
High-resolution global climate modelling: the UPSCALE project, a large-simulation campaign
NASA Astrophysics Data System (ADS)
Mizielinski, M. S.; Roberts, M. J.; Vidale, P. L.; Schiemann, R.; Demory, M.-E.; Strachan, J.; Edwards, T.; Stephens, A.; Lawrence, B. N.; Pritchard, M.; Chiu, P.; Iwi, A.; Churchill, J.; del Cano Novales, C.; Kettleborough, J.; Roseblade, W.; Selwood, P.; Foster, M.; Glover, M.; Malcolm, A.
2014-08-01
The UPSCALE (UK on PRACE: weather-resolving Simulations of Climate for globAL Environmental risk) project constructed and ran an ensemble of HadGEM3 (Hadley Centre Global Environment Model 3) atmosphere-only global climate simulations over the period 1985-2011, at resolutions of N512 (25 km), N216 (60 km) and N96 (130 km) as used in current global weather forecasting, seasonal prediction and climate modelling respectively. Alongside these present climate simulations a parallel ensemble looking at extremes of future climate was run, using a time-slice methodology to consider conditions at the end of this century. These simulations were primarily performed using a 144 million core hour, single year grant of computing time from PRACE (the Partnership for Advanced Computing in Europe) in 2012, with additional resources supplied by the Natural Environment Research Council (NERC) and the Met Office. Almost 400 terabytes of simulation data were generated on the HERMIT supercomputer at the High Performance Computing Center Stuttgart (HLRS), and transferred to the JASMIN super-data cluster provided by the Science and Technology Facilities Council Centre for Data Archival (STFC CEDA) for analysis and storage. In this paper we describe the implementation of the project, present the technical challenges in terms of optimisation, data output, transfer and storage that such a project involves and include details of the model configuration and the composition of the UPSCALE data set. This data set is available for scientific analysis to allow assessment of the value of model resolution in both present and potential future climate conditions.
De Tobel, J; Radesh, P; Vandermeulen, D; Thevissen, P W
2017-12-01
Automated methods to evaluate the growth of hand and wrist bones on radiographs and magnetic resonance imaging have been developed. They can be applied to estimate age in children and subadults. Automated methods require the software to (1) recognise the region of interest in the image(s), (2) evaluate the degree of development and (3) correlate this to the age of the subject based on a reference population. For age estimation based on third molars, an automated method for step (1) has been presented for 3D magnetic resonance imaging and is currently being optimised (Unterpirker et al. 2015). The aim of this study was to develop an automated method for step (2) based on lower third molars on panoramic radiographs. A modified Demirjian staging technique including ten developmental stages was developed. Twenty panoramic radiographs per stage per gender were retrospectively selected for FDI element 38. Two observers decided on the stages in consensus. When necessary, a third observer acted as a referee to establish the reference stage for the considered third molar. This set of radiographs was used as training data for machine-learning algorithms for automated staging. First, image contrast settings were optimised to evaluate the third molar of interest, and a rectangular bounding box was placed around it in a standardised way using Adobe Photoshop CC 2017 software. This bounding box indicated the region of interest for the next step. Second, several machine-learning algorithms available in MATLAB R2017a software were applied for automated stage recognition. Third, the classification performance was evaluated in a 5-fold cross-validation scenario, using different validation metrics (accuracy, rank-N recognition rate, mean absolute difference, linear kappa coefficient). Transfer learning, as a type of deep-learning convolutional neural network approach, outperformed all other tested approaches. Mean accuracy equalled 0.51, mean absolute difference was 0.6 stages and mean linearly weighted kappa was 0.82. The overall performance of the presented automated pilot technique for staging lower third molar development on panoramic radiographs was similar to staging by human observers. It will be further optimised in future research, since it represents a necessary step towards a fully automated dental age estimation method, which to date is not available.
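A minimal transfer-learning sketch of the kind described, assuming PyTorch with torchvision ≥ 0.13; the staging images and the ten-class head are placeholders. The idea is to freeze a pretrained backbone and train only a new classification head for the ten developmental stages.

```python
import torch
import torch.nn as nn
from torchvision import models

# Pretrained backbone; replace the classifier head with 10 stage outputs.
net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in net.parameters():
    p.requires_grad = False                      # freeze the ImageNet features
net.fc = nn.Linear(net.fc.in_features, 10)       # new trainable head (10 stages)

opt = torch.optim.Adam(net.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
x = torch.randn(4, 3, 224, 224)                  # stand-in for cropped tooth ROIs
t = torch.randint(0, 10, (4,))                   # stand-in stage labels
for _ in range(5):
    opt.zero_grad()
    loss = loss_fn(net(x), t)
    loss.backward()
    opt.step()
print(loss.item())
```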
Reservoir optimisation using El Niño information. Case study of Daule Peripa (Ecuador)
NASA Astrophysics Data System (ADS)
Gelati, Emiliano; Madsen, Henrik; Rosbjerg, Dan
2010-05-01
The optimisation of water resources systems requires the ability to produce runoff scenarios that are consistent with available climatic information. We approach stochastic runoff modelling with a Markov-modulated autoregressive model with exogenous input, which belongs to the class of Markov-switching models. The model assumes runoff parameterisation to be conditioned on a hidden climatic state following a Markov chain, whose state transition probabilities depend on climatic information. This approach allows stochastic modelling of non-stationary runoff, as runoff anomalies are described by a mixture of autoregressive models with exogenous input, each one corresponding to a climate state. We calibrate the model on the inflows of the Daule Peripa reservoir located in western Ecuador, where the occurrence of El Niño leads to anomalously heavy rainfall caused by positive sea surface temperature anomalies along the coast. El Niño-Southern Oscillation (ENSO) information is used to condition the runoff parameterisation. Inflow predictions are realistic, especially at the occurrence of El Niño events. The Daule Peripa reservoir serves a hydropower plant and a downstream water supply facility. Using historical ENSO records, synthetic monthly inflow scenarios are generated for the period 1950-2007. These scenarios are used as input to perform stochastic optimisation of the reservoir rule curves with a multi-objective genetic algorithm (MOGA). The optimised rule curves are assumed to be the reservoir base policy. ENSO standard indices are currently forecasted at a monthly time scale with nine-month lead time. These forecasts are used to perform stochastic optimisation of reservoir releases at each monthly time step according to the following procedure: (i) nine-month inflow forecast scenarios are generated using ENSO forecasts; (ii) a MOGA is set up to optimise the upcoming nine monthly releases; (iii) the optimisation is carried out by simulating the releases on the inflow forecasts, and by applying the base policy on a subsequent synthetic inflow scenario in order to account for long-term costs; (iv) the optimised release for the first month is implemented; (v) the state of the system is updated and (i)-(iv) are iterated for the following time step. The results highlight the advantages of using a climate-driven stochastic model to produce inflow scenarios and forecasts for reservoir optimisation, showing potential improvements with respect to the current management. Dynamic programming was used to find the best possible release time series given the inflow observations, in order to benchmark any possible operational improvement.
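The receding-horizon procedure (i)-(v) can be condensed into a loop like the following, with random search standing in for the MOGA and a gamma distribution standing in for the ENSO-conditioned inflow scenarios; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
storage, s_max = 50.0, 100.0                      # reservoir state and capacity

def simulate(s, releases, inflows):
    """Monthly mass balance; reward = hydropower proxy minus spill penalty."""
    reward = 0.0
    for r, q in zip(releases, inflows):
        r = min(r, s + q)                         # cannot release more than available
        spill = max(s + q - r - s_max, 0.0)
        s = min(s + q - r, s_max)
        reward += np.sqrt(r) - 0.5 * spill
    return reward

for month in range(12):                           # receding-horizon operation
    scenarios = rng.gamma(2.0, 4.0, size=(20, 9)) # (i) nine-month forecast scenarios
    best_plan, best_val = None, -np.inf
    for _ in range(200):                          # (ii)-(iii) random search as MOGA stand-in
        plan = rng.uniform(0, 15, size=9)
        val = np.mean([simulate(storage, plan, sc) for sc in scenarios])
        if val > best_val:
            best_plan, best_val = plan, val
    inflow = rng.gamma(2.0, 4.0)                  # "observed" inflow this month
    release = min(best_plan[0], storage + inflow) # (iv) implement the first release only
    storage = min(storage + inflow - release, s_max)  # (v) update state, then repeat
print(f"final storage: {storage:.1f}")
```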
Adjunctive minocycline treatment for major depressive disorder: A proof of concept trial.
Dean, Olivia M; Kanchanatawan, Buranee; Ashton, Melanie; Mohebbi, Mohammadreza; Ng, Chee Hong; Maes, Michael; Berk, Lesley; Sughondhabirom, Atapol; Tangwongchai, Sookjaroen; Singh, Ajeet B; McKenzie, Helen; Smith, Deidre J; Malhi, Gin S; Dowling, Nathan; Berk, Michael
2017-08-01
Conventional antidepressant treatments result in symptom remission in 30% of those treated for major depressive disorder, raising the need for effective adjunctive therapies. Inflammation has an established role in the pathophysiology of major depressive disorder, and minocycline has been shown to modify the immune-inflammatory processes and also reduce oxidative stress and promote neuronal growth. This double-blind, randomised, placebo-controlled trial examined adjunctive minocycline (200 mg/day, in addition to treatment as usual) for major depressive disorder. A total of 71 adults with major depressive disorder (Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition) were randomised to this 12-week trial. Outcome measures included the Montgomery-Asberg Depression Rating Scale (primary outcome), Clinical Global Impression-Improvement and Clinical Global Impression-Severity, Hamilton Anxiety Rating Scale, Quality of Life Enjoyment and Satisfaction Questionnaire, Social and Occupational Functioning Scale and the Range of Impaired Functioning Tool. The study was registered on the Australian and New Zealand Clinical Trials Register: www.anzctr.org.au, #ACTRN12612000283875. Based on mixed-methods repeated measures analysis of variance at week 12, there was no significant difference in Montgomery-Asberg Depression Rating Scale scores between groups. However, there were significant differences, favouring the minocycline group at week 12, for Clinical Global Impression-Improvement score - effect size (95% confidence interval) = -0.62 [-1.8, -0.3], p = 0.02; Quality of Life Enjoyment and Satisfaction Questionnaire score - effect size (confidence interval) = -0.12 [0.0, 0.2], p < 0.001; and Social and Occupational Functioning Scale and the Range of Impaired Functioning Tool score - 0.79 [-4.5, -1.4], p < 0.001. These effects remained at follow-up (week 16), and Patient Global Impression also became significant, effect size (confidence interval) = 0.57 [-1.7, -0.4], p = 0.017. While the primary outcome was not significant, the improvements in other comprehensive clinical measures suggest that minocycline may be a useful adjunct to improve global experience, functioning and quality of life in people with major depressive disorder. Further studies are warranted to confirm the potential of this accessible agent to optimise treatment outcomes.
Scaling and kinematics optimisation of the scapula and thorax in upper limb musculoskeletal models
Prinold, Joe A.I.; Bull, Anthony M.J.
2014-01-01
Accurate representation of individual scapula kinematics and subject geometries is vital in musculoskeletal models applied to upper limb pathology and performance. In applying individual kinematics to a model's cadaveric geometry, model constraints are commonly prescriptive. These rely on thorax scaling to effectively define the scapula's path but do not consider the area underneath the scapula in scaling, and assume a fixed conoid ligament length. These constraints may not allow continuous solutions or close agreement with directly measured kinematics. A novel method is presented to scale the thorax based on palpated scapula landmarks. The scapula and clavicle kinematics are optimised with the constraint that the scapula medial border does not penetrate the thorax. Conoid ligament length is not used as a constraint. This method is simulated in the UK National Shoulder Model and compared to four other methods, including the standard technique, during three pull-up techniques (n=11). These are high-performance activities covering a large range of motion. Model solutions without substantial jumps in the joint kinematics data were improved from 23% of trials with the standard method to 100% of trials with the new method. Agreement with measured kinematics was significantly improved (more than 10° closer at p<0.001) when compared to standard methods. The removal of the conoid ligament constraint and the novel thorax scaling correction factor were shown to be key. Separation of the medial border of the scapula from the thorax was large, although this may be physiologically correct due to the high loads and high arm elevation angles. PMID:25011621
Sheridan, Juliette; Coe, Carol Ann; Doran, Peter; Egan, Laurence; Cullen, Garret; Kevans, David; Leyden, Jan; Galligan, Marie; O’Toole, Aoibhlinn; McCarthy, Jane; Doherty, Glen
2018-01-01
Introduction Ulcerative colitis (UC) is a chronic inflammatory bowel disease (IBD) that often leads to an impaired quality of life in affected patients. Current treatment modalities include anti-tumour necrosis factor (anti-TNF) monoclonal antibodies (mABs), including infliximab, adalimumab and golimumab (GLM). Several recent retrospective and prospective studies have demonstrated that fixed dosing schedules of anti-TNF agents often fail to consistently achieve adequate circulating therapeutic drug levels (DL), with a consequent risk of immunogenicity, treatment failure and potential hospitalisation and colectomy in patients with UC. The design of the GLM dose Optimisation to Adequate Levels to Achieve Response in Colitis study aims to address the impact of dose escalation of GLM, immediately following induction and during the subsequent maintenance phase, in response to suboptimal DL or a persisting inflammatory burden as represented by raised faecal calprotectin (FCP). Aim The primary aim of the study is to ascertain whether monitoring of FCP and GLM DL to guide dose optimisation (during maintenance) improves rates of continuous clinical response and reduces disease activity in UC. Methods and analysis A randomised, multicentre two-arm trial studying the effect of dose optimisation of GLM based on FCP and DL versus treatment as per the SMPC. Eligible patients will be randomised in a 1:1 ratio to one of two treatment groups and will be treated over a period of 46 weeks. Ethics and dissemination The study protocol was approved by the Research Ethics Committee of St. Vincent's University Hospital. The results will be published in a peer-reviewed journal and shared with the worldwide medical community. Trial registration numbers EudraCT number: 2015-004724-62; Clinicaltrials.gov Identifier: NCT0268772; Pre-results. PMID:29379609
Albadr, Musatafa Abbas Abbood; Tiun, Sabrina; Al-Dhief, Fahad Taha; Sammour, Mahmoud A M
2018-01-01
Spoken Language Identification (LID) is the process of determining and classifying natural language from given content and datasets. Typically, data must be processed to extract useful features to perform LID. Feature extraction for LID is, according to the literature, a mature process: standard features have already been developed using Mel-Frequency Cepstral Coefficients (MFCC), Shifted Delta Cepstral (SDC) coefficients and the Gaussian Mixture Model (GMM), ending with the i-vector based framework. However, the process of learning based on the extracted features remains to be improved (i.e. optimised) to capture all the knowledge embedded in them. The Extreme Learning Machine (ELM) is an effective learning model used to perform classification and regression analysis and is extremely useful for training a single-hidden-layer neural network. Nevertheless, the learning process of this model is not entirely effective (i.e. optimised) due to the random selection of weights within the input hidden layer. In this study, the ELM is selected as the learning model for LID based on standard feature extraction. One of the optimisation approaches for the ELM, the Self-Adjusting Extreme Learning Machine (SA-ELM), is selected as the benchmark and improved by altering the selection phase of the optimisation process. The selection process is performed by incorporating both the Split-Ratio and K-Tournament methods, and the improved SA-ELM is named the Enhanced Self-Adjusting Extreme Learning Machine (ESA-ELM). The results are generated based on LID with datasets created from eight different languages. The results showed the clear superiority of the Enhanced Self-Adjusting Extreme Learning Machine LID (ESA-ELM LID) over the SA-ELM LID, with ESA-ELM LID achieving an accuracy of 96.25%, compared with 95.00% for SA-ELM LID.
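For reference, the ELM core that SA-ELM and ESA-ELM build on is only a few lines: hidden-layer weights are drawn at random and kept fixed, and the output weights are obtained in closed form via the Moore-Penrose pseudo-inverse. The evolutionary self-adjustment of the random layer is omitted here; the data and layer sizes are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, Y, n_hidden=64):
    """Basic ELM: random input weights stay fixed; only the output
    weights are solved, in closed form, via the pseudo-inverse."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.pinv(H) @ Y
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy multi-class problem standing in for i-vector language features.
X = rng.normal(size=(300, 20))
labels = rng.integers(0, 8, 300)                 # "eight languages"
Y = np.eye(8)[labels]                            # one-hot targets
model = elm_train(X, Y)
acc = np.mean(elm_predict(model, X).argmax(1) == labels)
print(f"training accuracy: {acc:.2f}")
```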
On the dynamic rounding-off in analogue and RF optimal circuit sizing
NASA Astrophysics Data System (ADS)
Kotti, Mouna; Fakhfakh, Mourad; Fino, Maria Helena
2014-04-01
Frequently used approaches to solving discrete multivariable optimisation problems consist of computing solutions using a continuous optimisation technique. Then, using heuristics, the variables are rounded off to their nearest available discrete values to obtain a discrete solution. Indeed, in many engineering problems, and particularly in analogue circuit design, component values, such as the geometric dimensions of the transistors, the number of fingers in an integrated capacitor or the number of turns in an integrated inductor, cannot be chosen arbitrarily, since they have to obey technology sizing constraints. However, rounding off the variable values a posteriori can lead to infeasible solutions (solutions that are located too close to the feasible-region frontier) or to degradation of the obtained results (expulsion from the neighbourhood of a 'sharp' optimum), depending on how the added perturbation affects the solution. Discrete optimisation techniques, such as the dynamic rounding-off (DRO) technique, are therefore needed to overcome the previously mentioned situation. In this paper, we deal with an improvement of the DRO technique. We propose a particle swarm optimisation (PSO)-based DRO technique and show, via some analogue and RF examples, the necessity of implementing such a routine within continuous optimisation algorithms.
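The failure mode of a-posteriori rounding, and the neighbourhood search that dynamic rounding-off performs instead, can be seen on a two-variable toy problem. The objective, constraint and grid below are invented, and the paper uses PSO rather than the exhaustive neighbourhood scan shown.

```python
import numpy as np
from itertools import product

# Toy sizing task: quadratic objective with a linear feasibility frontier.
f = lambda x: (x[0] - 2.5) ** 2 + (x[1] - 1.5) ** 2
feasible = lambda x: x[0] + x[1] <= 3.8
grid = np.arange(0.0, 5.01, 0.5)                 # technology-allowed values

x_cont = np.array([2.4, 1.4])                    # assumed continuous constrained optimum

# Naive a-posteriori rounding leaves the feasible region here:
naive = np.array([grid[np.abs(grid - v).argmin()] for v in x_cont])
print("naive  :", naive, "feasible:", feasible(naive))

# Dynamic rounding idea: explore the discrete neighbourhood of the
# continuous optimum and keep the best feasible combination.
best, best_val = None, np.inf
for cand in product(*(grid[np.abs(grid - v).argsort()[:3]] for v in x_cont)):
    cand = np.array(cand)
    if feasible(cand) and f(cand) < best_val:
        best, best_val = cand, f(cand)
print("dynamic:", best, "f =", round(best_val, 3))
```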
NASA Astrophysics Data System (ADS)
Luo, Bin; Lin, Lin; Zhong, ShiSheng
2018-02-01
In this research, we propose a preference-guided optimisation algorithm for multi-criteria decision-making (MCDM) problems with interval-valued fuzzy preferences. The interval-valued fuzzy preferences are first decomposed into a series of precise and evenly distributed preference vectors (reference directions) over the objectives to be optimised, on the basis of a uniform design strategy. The preference information is then further incorporated into the preference vectors based on the boundary intersection approach; meanwhile, the MCDM problem with interval-valued fuzzy preferences is reformulated as a series of single-objective optimisation sub-problems (each sub-problem corresponding to a decomposed preference vector). Finally, a preference-guided optimisation algorithm based on MOEA/D (multi-objective evolutionary algorithm based on decomposition) is proposed to solve the sub-problems in a single run. The proposed algorithm incorporates the preference vectors into the optimisation process to guide the search towards a more promising subset of the efficient solutions matching the interval-valued fuzzy preferences. A large set of test instances and an engineering application are employed to validate the performance of the proposed algorithm, and the results demonstrate its effectiveness and feasibility.
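A compact sketch of the decomposition step: spread weight vectors uniformly across the preference interval and minimise a Tchebycheff scalarisation for each. Random search replaces the cooperative MOEA/D update, and the ZDT1-style objectives are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Interval-valued preference on the weight of objective 1 (e.g. "between
# 0.3 and 0.6 important"): decompose into evenly spread weight vectors.
lo, hi, n_sub = 0.3, 0.6, 7
weights = np.array([[w, 1.0 - w] for w in np.linspace(lo, hi, n_sub)])

def objectives(x):
    """Classic bi-objective test shape (ZDT1, convex front on [0, 1])."""
    f1 = x[0]
    g = 1.0 + 9.0 * np.mean(x[1:])
    return np.array([f1, g * (1.0 - np.sqrt(f1 / g))])

def tchebycheff(f, w, z=np.zeros(2)):
    """Scalarise a vector of objectives against reference point z."""
    return np.max(w * np.abs(f - z))

# Solve each scalarised sub-problem with a crude random search
# (MOEA/D would instead evolve all sub-problems cooperatively).
for w in weights:
    best, best_val = None, np.inf
    for _ in range(2000):
        x = rng.random(5)
        val = tchebycheff(objectives(x), w)
        if val < best_val:
            best, best_val = x, val
    print(w.round(2), objectives(best).round(3))
```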
Optimisation of the Management of Higher Activity Waste in the UK - 13537
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walsh, Ciara; Buckley, Matthew
2013-07-01
The Upstream Optioneering project was created by the Nuclear Decommissioning Authority (UK) to support the development and implementation of significant opportunities to optimise activities across all phases of the Higher Activity Waste management life cycle (i.e. retrieval, characterisation, conditioning, packaging, storage, transport and disposal). The objective of the Upstream Optioneering project is to work in conjunction with other functions within the NDA and the waste producers to identify and deliver solutions to optimise the management of higher activity waste. Historically, optimisation may have occurred on individual aspects of the waste life cycle (considered here to include retrieval, conditioning, treatment, packaging, interim storage and transport to the final end state, which may be geological disposal). By considering the waste life cycle as a whole, critical analysis of assumed constraints may lead to cost savings for the UK tax payer. For example, it may be possible to challenge the requirements for packaging wastes for disposal to deliver an optimised waste life cycle. It is likely that the challenges faced in the UK are shared by other countries, and that the opportunities identified may therefore also apply elsewhere, with the potential for sharing information to enable value to be shared. (authors)
Design optimisation of powers-of-two FIR filter using self-organising random immigrants GA
NASA Astrophysics Data System (ADS)
Chandra, Abhijit; Chattopadhyay, Sudipta
2015-01-01
In this communication, we propose a novel design strategy for a multiplier-less low-pass finite impulse response (FIR) filter with the aid of a recent evolutionary optimisation technique known as the self-organising random immigrants genetic algorithm. The individual impulse response coefficients of the proposed filter have been encoded as sums of signed powers of two. During the formulation of the cost function for the optimisation algorithm, both the frequency response characteristic and the hardware cost of the discrete-coefficient FIR filter have been considered. The role of the crossover probability of the optimisation technique has been evaluated with respect to the overall performance of the proposed strategy. For this purpose, the convergence characteristic of the optimisation technique has been included in the simulation results. In our analysis, two design examples of different specifications have been taken into account. In order to substantiate the efficiency of our proposed structure, a number of state-of-the-art design strategies for multiplier-less FIR filters have also been included in this article for the purpose of comparison. Critical analysis of the results unambiguously establishes the usefulness of our proposed approach for the hardware-efficient design of digital filters.
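The signed powers-of-two encoding at the heart of such multiplier-less designs can be illustrated with a greedy quantiser; the prototype filter, term budget and exponent range below are arbitrary choices, and the paper's GA search is not reproduced.

```python
import numpy as np

def to_spt(c, max_terms=3, exp_range=range(0, 9)):
    """Greedily express a coefficient as a sum of signed powers of two
    (e.g. 0.40625 ~ 2^-2 + 2^-3 + 2^-5), as used in multiplier-less filters."""
    residual, terms = c, []
    pool = [s * 2.0 ** -e for e in exp_range for s in (1, -1)]
    for _ in range(max_terms):
        best = min(pool, key=lambda p: abs(residual - p))
        if abs(residual - best) >= abs(residual):
            break                                 # no term improves the fit
        terms.append(best)
        residual -= best
    return terms

# Quantise a windowed-sinc low-pass prototype and compare responses.
n = np.arange(-10, 11)
h = 0.25 * np.sinc(0.25 * n) * np.hamming(21)
h_spt = np.array([sum(to_spt(c)) for c in h])
err = np.max(np.abs(np.fft.rfft(h, 512) - np.fft.rfft(h_spt, 512)))
print(f"max frequency-response deviation: {err:.4f}")
```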
Optimisation of the supercritical extraction of toxic elements in fish oil.
Hajeb, P; Jinap, S; Shakibazadeh, Sh; Afsah-Hejri, L; Mohebbi, G H; Zaidul, I S M
2014-01-01
This study aims to optimise the operating conditions for the supercritical fluid extraction (SFE) of toxic elements from fish oil. The SFE operating parameters of pressure, temperature, CO2 flow rate and extraction time were optimised using a central composite design (CCD) of response surface methodology (RSM). High coefficients of determination (R², 0.897-0.988) for the predicted response surface models confirmed a satisfactory fit of the polynomial regression models to the operating conditions. The results showed that the linear and quadratic terms of pressure and temperature were the most significant (p < 0.05) variables affecting the overall responses. The optimum conditions for the simultaneous elimination of toxic elements comprised a pressure of 61 MPa, a temperature of 39.8 °C, a CO2 flow rate of 3.7 ml min⁻¹ and an extraction time of 4 h. These optimised SFE conditions were able to produce fish oil with the contents of lead, cadmium, arsenic and mercury reduced by up to 98.3%, 96.1%, 94.9% and 93.7%, respectively. The fish oil extracted under the optimised SFE operating conditions was of good quality in terms of its fatty acid constituents.
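The CCD/RSM workflow reduces to three steps: run a central composite design, fit a full quadratic model by least squares, and solve for the stationary point. The sketch below does this for two coded factors with an invented response surface; the design geometry and responses are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Central composite design for two coded factors (pressure, temperature):
# factorial points, star points at alpha = sqrt(2), and centre replicates.
a = np.sqrt(2.0)
design = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                   [-a, 0], [a, 0], [0, -a], [0, a],
                   [0, 0], [0, 0], [0, 0]])
true = lambda x: 95 - 3 * (x[0] - 0.4) ** 2 - 2 * (x[1] + 0.2) ** 2 + x[0] * x[1]
y = np.array([true(x) + rng.normal(0, 0.2) for x in design])  # noisy "removal %"

# Fit the full second-order response-surface model by least squares.
X = np.column_stack([np.ones(len(design)), design[:, 0], design[:, 1],
                     design[:, 0] ** 2, design[:, 1] ** 2,
                     design[:, 0] * design[:, 1]])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Locate the stationary point of the fitted quadratic analytically.
b = coef[1:3]
A = np.array([[2 * coef[3], coef[5]], [coef[5], 2 * coef[4]]])
print("optimum (coded units):", np.linalg.solve(A, -b).round(3))
```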
Four Scenarios for Determining the Size and Reusability of Learning Objects
ERIC Educational Resources Information Center
Schoonenboom, Judith
2012-01-01
The best method for determining the size of learning objects (LOs) so as to optimise their reusability has been a topic of debate for years now. Although there appears to be agreement on basic assumptions, developed guidelines and principles are often in conflict. This study shows that this confusion stems from the fact that in the literature,…
Kagkadis, K A; Rekkas, D M; Dallas, P P; Choulis, N H
1996-01-01
In this study, a complex of ibuprofen and β-hydroxypropylcyclodextrin was prepared employing a freeze-drying method. The production parameters and the final specifications of this product were optimised using response surface methodology. The results show that the freeze-dried complex meets the solubility requirements for consideration as a possible injectable form.