Lu, Jia-Yang; Cheung, Michael Lok-Man; Huang, Bao-Tian; Wu, Li-Li; Xie, Wen-Jia; Chen, Zhi-Jian; Li, De-Rui; Xie, Liang-Xi
2015-01-01
To assess the performance of a simple optimisation method for improving target coverage and organ-at-risk (OAR) sparing in intensity-modulated radiotherapy (IMRT) for cervical oesophageal cancer. For 20 selected patients, clinically acceptable original IMRT plans (Original plans) were created, and two optimisation methods were adopted to improve the plans: 1) a base dose function (BDF)-based method, in which the treatment plans were re-optimised based on the original plans, and 2) a dose-controlling structure (DCS)-based method, in which the original plans were re-optimised by assigning additional constraints for hot and cold spots. The Original, BDF-based and DCS-based plans were compared with regard to target dose homogeneity, conformity, OAR sparing, planning time and monitor units (MUs). Dosimetric verifications were performed and delivery times were recorded for the BDF-based and DCS-based plans. The BDF-based plans provided significantly superior dose homogeneity and conformity compared with both the DCS-based and Original plans. The BDF-based method further reduced the doses delivered to the OARs by approximately 1-3%. The re-optimisation time was reduced by approximately 28%, but the MUs and delivery time were slightly increased. All verification tests were passed and no significant differences were found. The BDF-based method for the optimisation of IMRT for cervical oesophageal cancer can achieve significantly better dose distributions with better planning efficiency at the expense of slightly more MUs.
NASA Astrophysics Data System (ADS)
Sheikhan, Mansour; Abbasnezhad Arabi, Mahdi; Gharavian, Davood
2015-10-01
Artificial neural networks are efficient models in pattern recognition applications, but their performance depends on employing a suitable structure and connection weights. This study used a hybrid method for obtaining the optimal weight set and architecture of a recurrent neural emotion classifier, based on the gravitational search algorithm (GSA) and its binary version (BGSA), respectively. By considering features of the speech signal related to prosody, voice quality, and spectrum, a rich feature set was constructed. To select more efficient features, a fast feature selection method was employed. The performance of the proposed hybrid GSA-BGSA method was compared with similar hybrid optimisation methods based on the particle swarm optimisation (PSO) algorithm and its binary version, PSO and the discrete firefly algorithm, and a hybrid of error back-propagation and the genetic algorithm. Experimental tests on the Berlin emotional database demonstrated the superior performance of the proposed method using a lighter network structure.
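For context, the core of a basic (real-valued) gravitational search iteration can be written in a few lines: fitness values are mapped to masses, pairwise "gravitational" attractions are accumulated into accelerations, and a velocity update moves the agents. This is a generic sketch of GSA, not the authors' implementation; the population layout, constants and epsilon guards are illustrative assumptions.

```python
import numpy as np

def gsa_step(X, fitness, G, rng):
    """One iteration of a basic gravitational search over a population X (n x d)."""
    f = np.array([fitness(x) for x in X])
    best, worst = f.min(), f.max()                  # minimisation convention
    m = (f - worst) / (best - worst + 1e-12)        # raw masses in [0, 1]
    M = m / (m.sum() + 1e-12)                       # normalised masses
    acc = np.zeros_like(X)
    for i in range(len(X)):
        for j in range(len(X)):
            if i != j:
                R = np.linalg.norm(X[i] - X[j])
                # randomly weighted attraction of agent i towards agent j
                acc[i] += rng.random() * G * M[j] * (X[j] - X[i]) / (R + 1e-12)
    return acc  # caller then updates: V = rng.random(X.shape) * V + acc; X = X + V
```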
Multi-Optimisation Consensus Clustering
NASA Astrophysics Data System (ADS)
Li, Jian; Swift, Stephen; Liu, Xiaohui
Ensemble Clustering has been developed to provide an alternative way of obtaining more stable and accurate clustering results. It aims to avoid the biases of individual clustering algorithms. However, it is still a challenge to develop an efficient and robust method for Ensemble Clustering. Based on an existing ensemble clustering method, Consensus Clustering (CC), this paper introduces an advanced Consensus Clustering algorithm called Multi-Optimisation Consensus Clustering (MOCC), which utilises an optimised Agreement Separation criterion and a Multi-Optimisation framework to improve the performance of CC. Fifteen different data sets are used for evaluating the performance of MOCC. The results reveal that MOCC can generate more accurate clustering results than the original CC algorithm.
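For readers unfamiliar with consensus clustering, the central object in CC-style methods is an agreement (co-association) matrix recording how often each pair of items is grouped together across the ensemble. The sketch below shows that construction only; MOCC's optimised Agreement Separation criterion and multi-optimisation framework are not reproduced here.

```python
import numpy as np

def coassociation_matrix(labelings):
    """labelings: list of length-n label arrays, one per base clustering."""
    n = len(labelings[0])
    A = np.zeros((n, n))
    for labels in labelings:
        labels = np.asarray(labels)
        A += (labels[:, None] == labels[None, :]).astype(float)
    return A / len(labelings)   # entry (i, j) = fraction of clusterings that agree

# Example: three base clusterings of five items
A = coassociation_matrix([[0, 0, 1, 1, 1], [0, 0, 0, 1, 1], [1, 1, 0, 0, 0]])
```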
An improved design method based on polyphase components for digital FIR filters
NASA Astrophysics Data System (ADS)
Kumar, A.; Kuldeep, B.; Singh, G. K.; Lee, Heung No
2017-11-01
This paper presents an efficient design of digital finite impulse response (FIR) filters based on polyphase components and swarm optimisation techniques (SOTs). For this purpose, the design problem is formulated as the mean square error between the actual response and the ideal response in the frequency domain, using the polyphase components of a prototype filter. To achieve a more precise frequency response at some specified frequencies, fractional derivative constraints (FDCs) have been applied, and optimal FDCs are computed using SOTs such as the cuckoo search and modified cuckoo search algorithms. A comparative study with well-proven swarm optimisation techniques, namely particle swarm optimisation and the artificial bee colony algorithm, is made. The excellence of the proposed method is evaluated using several important attributes of a filter, and the comparative study evidences its effectiveness for the design of FIR filters.
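The mean-square-error formulation described above can be prototyped directly: evaluate the filter's frequency response on a dense grid and compare it against an ideal brick-wall response. In the sketch below the cutoff, grid density and lowpass target are illustrative assumptions, and the error is computed from the full tap vector rather than the paper's polyphase decomposition.

```python
import numpy as np

def fir_mse(h, cutoff=0.4, n_grid=512):
    """MSE between actual and ideal lowpass magnitude response for FIR taps h."""
    w = np.linspace(0, np.pi, n_grid)                # frequency grid
    n = np.arange(len(h))
    H = np.exp(-1j * np.outer(w, n)) @ h             # H(e^jw) = sum_n h[n] e^{-jwn}
    H_ideal = (w <= cutoff * np.pi).astype(float)    # brick-wall magnitude target
    return np.mean((np.abs(H) - H_ideal) ** 2)
```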
NASA Astrophysics Data System (ADS)
Rayhana, N.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.; Sazli, M.; Yahya, Z. R.
2017-09-01
This study presents the application of optimisation methods to reduce the warpage of a side arm part. Autodesk Moldflow Insight software was used in this study to analyse the warpage. A design of experiments (DOE) for response surface methodology (RSM) was constructed, and, using the equation from RSM, particle swarm optimisation (PSO) was applied. The optimisation method yields optimised processing parameters with minimum warpage. Mould temperature, melt temperature, packing pressure, packing time and cooling time were selected as the variable parameters. Parameter selection was based on the most significant factors affecting warpage reported by previous researchers. The results show that warpage was improved by 28.16% for RSM and 28.17% for PSO. The improvement of PSO over RSM is only 0.01%; thus, optimisation using RSM is already sufficient to give the best combination of parameters and an optimum warpage value for the side arm part. The most significant parameter affecting warpage is packing pressure.
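The RSM-to-PSO hand-off described above is easy to prototype: fit a response-surface polynomial to the DOE runs, then let PSO minimise it. In this sketch the quadratic warpage model, the normalisation of the five parameters to [0, 1] and all PSO constants are placeholder assumptions, not the paper's fitted equation.

```python
import numpy as np

rng = np.random.default_rng(0)

def warpage(x):             # placeholder RSM polynomial, NOT the fitted model
    return 0.5 + ((x - 0.3) ** 2).sum() + 0.1 * x[0] * x[1]

n, d, iters = 30, 5, 100    # particles, process parameters, iterations
X = rng.random((n, d)); V = np.zeros((n, d))
P, Pf = X.copy(), np.array([warpage(x) for x in X])        # personal bests
g = P[Pf.argmin()]                                         # global best
for _ in range(iters):
    V = (0.7 * V + 1.5 * rng.random((n, d)) * (P - X)
               + 1.5 * rng.random((n, d)) * (g - X))
    X = np.clip(X + V, 0.0, 1.0)                           # stay inside parameter bounds
    f = np.array([warpage(x) for x in X])
    better = f < Pf
    P[better], Pf[better] = X[better], f[better]
    g = P[Pf.argmin()]
print("optimised parameters:", g, "predicted warpage:", warpage(g))
```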
On the analysis of using 3-coil wireless power transfer system in retinal prosthesis.
Bai, Shun; Skafidas, Stan
2014-01-01
Designing a wireless power transmission system (WPTS) using inductive coupling has been investigated extensively in the last decade. Depending on the configuration of the coupling system, there have been various design methods to optimise the power transmission efficiency based on the tuning circuitry, quality factor optimisation and geometrical configuration. Recently, a 3-coil WPTS was introduced in retinal prosthesis to overcome the low power transfer efficiency due to a low coupling coefficient. Here we present a method to analyse this 3-coil WPTS using the S-parameters to directly obtain the maximum achievable power transfer efficiency. Through electromagnetic simulation, we raise a question regarding the conditions under which a 3-coil WPTS offers an improvement in powering retinal prostheses.
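For a two-port characterised by its S-parameters, the maximum achievable power transfer efficiency corresponds to the textbook maximum available gain, defined when the network is unconditionally stable (Rollett factor K > 1). The snippet below applies that standard formula; it is a generic two-port illustration, not the authors' specific treatment of the 3-coil link.

```python
import numpy as np

def max_power_gain(S11, S12, S21, S22):
    """Maximum available gain of a two-port from its S-parameters (requires K > 1)."""
    delta = S11 * S22 - S12 * S21
    K = (1 - abs(S11)**2 - abs(S22)**2 + abs(delta)**2) / (2 * abs(S12 * S21))
    if K <= 1:
        raise ValueError("two-port not unconditionally stable; G_max undefined")
    return abs(S21 / S12) * (K - np.sqrt(K**2 - 1))
```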
McEvoy, Eamon; Donegan, Sheila; Power, Joe; Altria, Kevin
2007-05-09
A rapid and efficient oil-in-water microemulsion liquid chromatographic method has been optimised and validated for the analysis of paracetamol in a suppository formulation. Excellent linearity, accuracy, precision and assay results were obtained. Lengthy sample pre-treatment/extraction procedures were eliminated due to the solubilising power of the microemulsion, and rapid analysis times were achieved. The method was optimised to achieve rapid analysis times and relatively high peak efficiencies. A standard microemulsion composition of 33 g SDS, 66 g butan-1-ol and 8 g n-octane in 1 L of 0.05% TFA, modified with acetonitrile, has been shown to be suitable for the rapid analysis of paracetamol in highly hydrophobic preparations under isocratic conditions. Validated assay results and the overall analysis time of the optimised method were compared to British Pharmacopoeia reference methods. Sample preparation and analysis times for the MELC analysis of paracetamol in a suppository were extremely rapid compared to the reference method, and similar assay results were achieved. A gradient MELC method using the same microemulsion has been optimised for the resolution of paracetamol and five of its related substances in approximately 7 min.
Asselineau, Charles-Alexis; Zapata, Jose; Pye, John
2015-06-01
A stochastic optimisation method adapted to illumination and radiative heat transfer problems involving Monte-Carlo ray-tracing is presented. A solar receiver shape optimisation case study illustrates the advantages of the method and its potential: efficient receivers are identified using a moderate computational cost.
Efficient methods for enol phosphate synthesis using carbon-centred magnesium bases.
Kerr, William J; Lindsay, David M; Patel, Vipulkumar K; Rajamanickam, Muralikrishnan
2015-10-28
Efficient conversion of ketones into kinetic enol phosphates under mild and accessible conditions has been realised using the developed methods with di-tert-butylmagnesium and bismesitylmagnesium. Optimisation of the quench protocol resulted in high yields of enol phosphates from a range of cyclohexanones and aryl methyl ketones, with tolerance of a range of additional functional units.
Multiobjective optimisation of bogie suspension to boost speed on curves
NASA Astrophysics Data System (ADS)
Milad Mousavi-Bideleh, Seyed; Berbyuk, Viktor
2016-01-01
To improve safety and maximum admissible speed in different operational scenarios, multiobjective optimisation of the bogie suspension components of a one-car railway vehicle model is considered. The vehicle model has 50 degrees of freedom and is developed in the multibody dynamics software SIMPACK. Track shift force, running stability, and risk of derailment are selected as safety objective functions. The improved maximum admissible speeds of the vehicle on curves are determined based on track plane accelerations up to 1.5 m/s². To reduce the number of design parameters for optimisation and improve the computational efficiency, a global sensitivity analysis is accomplished using the multiplicative dimensional reduction method (M-DRM). A multistep optimisation routine based on a genetic algorithm (GA) and MATLAB/SIMPACK co-simulation is executed at three levels. The bogie conventional secondary and primary suspension components are chosen as the design parameters in the first two steps, respectively. In the last step, semi-active suspension is the focus: the input electrical current to magnetorheological yaw dampers is optimised to guarantee an appropriate safety level. Semi-active controllers are also applied and the respective effects on bogie dynamics are explored. The safety Pareto optimised results are compared with those associated with in-service values. The global sensitivity analysis and the multistep approach significantly reduced the number of design parameters and improved the computational efficiency of the optimisation. Furthermore, using the optimised values of the design parameters makes it possible to run the vehicle up to 13% faster on curves while a satisfactory safety level is guaranteed. The results obtained can be used in Pareto optimisation and active bogie suspension design problems.
Metaheuristic optimisation methods for approximate solving of singular boundary value problems
NASA Astrophysics Data System (ADS)
Sadollah, Ali; Yadav, Neha; Gao, Kaizhou; Su, Rong
2017-07-01
This paper presents a novel approximation technique based on metaheuristics and weighted residual function (WRF) for tackling singular boundary value problems (BVPs) arising in engineering and science. With the aid of certain fundamental concepts of mathematics, Fourier series expansion, and metaheuristic optimisation algorithms, singular BVPs can be approximated as an optimisation problem with boundary conditions as constraints. The target is to minimise the WRF (i.e. error function) constructed in approximation of BVPs. The scheme involves generational distance metric for quality evaluation of the approximate solutions against exact solutions (i.e. error evaluator metric). Four test problems including two linear and two non-linear singular BVPs are considered in this paper to check the efficiency and accuracy of the proposed algorithm. The optimisation task is performed using three different optimisers including the particle swarm optimisation, the water cycle algorithm, and the harmony search algorithm. Optimisation results obtained show that the suggested technique can be successfully applied for approximate solving of singular BVPs.
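To make the construction concrete, the sketch below builds a sine-series trial solution (which satisfies homogeneous boundary conditions term by term), forms the weighted residual at collocation points, and hands it to a stochastic optimiser. The singular BVP shown is a hypothetical example, and differential_evolution stands in for the particle swarm, water cycle and harmony search optimisers used in the paper.

```python
import numpy as np
from scipy.optimize import differential_evolution   # stand-in for PSO/WCA/HS

# Hypothetical singular BVP for illustration: y'' + (1/x) y' = f(x), y(0) = y(1) = 0.
def f(x):
    return -4.0 * np.ones_like(x)

x = np.linspace(0.01, 0.99, 60)                      # collocation points (avoid x = 0)

def wrf(c):
    """Weighted residual of a sine-series trial solution y = sum_k c_k sin(k pi x)."""
    k = np.arange(1, len(c) + 1)[:, None] * np.pi
    y2 = -(k**2 * np.sin(k * x) * c[:, None]).sum(0)   # y''
    y1 = (k * np.cos(k * x) * c[:, None]).sum(0)       # y'
    r = y2 + y1 / x - f(x)                             # pointwise residual
    return np.sum(r**2)

res = differential_evolution(wrf, bounds=[(-2, 2)] * 5, seed=1)
print("residual norm:", res.fun, "Fourier coefficients:", res.x)
```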
Implementation study of wearable sensors for activity recognition systems.
Rezaie, Hamed; Ghassemian, Mona
2015-08-01
This Letter investigates and reports on a number of activity recognition methods for a wearable sensor system. The authors apply three methods for data transmission, namely 'stream-based', 'feature-based' and 'threshold-based' scenarios to study the accuracy against energy efficiency of transmission and processing power that affects the mote's battery lifetime. They also report on the impact of variation of sampling frequency and data transmission rate on energy consumption of motes for each method. This study leads us to propose a cross-layer optimisation of an activity recognition system for provisioning acceptable levels of accuracy and energy efficiency.
NASA Astrophysics Data System (ADS)
Xiao, Long; Liu, Xinggao; Ma, Liang; Zhang, Zeyin
2018-03-01
Dynamic optimisation problems with characteristic times, which arise widely in many areas, are at the frontier of dynamic optimisation research. This paper considers a class of dynamic optimisation problems with constraints that depend on interior points, either fixed or variable, and presents a novel direct pseudospectral method using Legendre-Gauss (LG) collocation points for solving these problems. The formula for the state at the terminal time of each subdomain is derived, which results in a linear combination of the states at the LG points in the subdomain, thereby avoiding a complex nonlinear integral. The sensitivities of the states at the collocation points with respect to the variable characteristic times are derived to improve the efficiency of the method. Three well-known characteristic-time dynamic optimisation problems are solved and compared in detail with the methods reported in the literature. The research results show the effectiveness of the proposed method.
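The "linear combination" mentioned above is, in the standard LG pseudospectral setting, a Gauss-quadrature relation: since Legendre-Gauss points are interior to the subdomain, the terminal state is recovered from the dynamics at the collocation points without any further nonlinear integration. A sketch of that standard relation (not necessarily the paper's exact formula) is:

```latex
x(t_f) \approx x(t_0) + \frac{t_f - t_0}{2} \sum_{i=1}^{N} w_i \, f\bigl(x(\tau_i), \tau_i\bigr),
```

where τ_i are the LG points mapped to [t_0, t_f] and w_i the corresponding quadrature weights.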
Topology Optimisation of Wideband Coaxial-to-Waveguide Transitions
NASA Astrophysics Data System (ADS)
Hassan, Emadeldeen; Noreland, Daniel; Wadbro, Eddie; Berggren, Martin
2017-03-01
To maximize the matching between a coaxial cable and rectangular waveguides, we present a computational topology optimisation approach that decides for each point in a given domain whether to hold a good conductor or a good dielectric. The conductivity is determined by a gradient-based optimisation method that relies on finite-difference time-domain solutions to the 3D Maxwell’s equations. Unlike previously reported results in the literature for this kind of problems, our design algorithm can efficiently handle tens of thousands of design variables that can allow novel conceptual waveguide designs. We demonstrate the effectiveness of the approach by presenting optimised transitions with reflection coefficients lower than -15 dB over more than a 60% bandwidth, both for right-angle and end-launcher configurations. The performance of the proposed transitions is cross-verified with a commercial software, and one design case is validated experimentally.
Optimal design and operation of a photovoltaic-electrolyser system using particle swarm optimisation
NASA Astrophysics Data System (ADS)
Sayedin, Farid; Maroufmashat, Azadeh; Roshandel, Ramin; Khavas, Sourena Sattari
2016-07-01
In this study, hydrogen generation is maximised by optimising the size and the operating conditions of an electrolyser (EL) directly connected to a photovoltaic (PV) module at different irradiance. Due to the variations of maximum power points of the PV module during a year and the complexity of the system, a nonlinear approach is considered. A mathematical model has been developed to determine the performance of the PV/EL system. The optimisation methodology presented here is based on the particle swarm optimisation algorithm. By this method, for the given number of PV modules, the optimal size and operating conditions of a PV/EL system are achieved. The approach can be applied for different sizes of PV systems, various ambient temperatures and different locations with various climatic conditions. The results show that for the given location and the PV system, the energy transfer efficiency of the PV/EL system can reach up to 97.83%.
Sterckx, Femke L; Saison, Daan; Delvaux, Freddy R
2010-08-31
Monophenols are widely spread compounds contributing to the flavour of many foods and beverages. They are most likely present in beer, but so far, little is known about their influence on beer flavour. To quantify these monophenols in beer, we optimised a headspace solid-phase microextraction method coupled to gas chromatography-mass spectrometry. To improve their isolation from the beer matrix and their chromatographic properties, the monophenols were acetylated using acetic anhydride and KHCO(3) as derivatising agent and base catalyst, respectively. Derivatisation conditions were optimised with attention to the pH of the reaction medium. Additionally, different parameters affecting extraction efficiency were optimised, including fibre coating, extraction time and temperature, and salt addition. Afterwards, we calibrated and validated the method successfully and applied it to the analysis of monophenols in beer samples.
Power law-based local search in spider monkey optimisation for lower order system modelling
NASA Astrophysics Data System (ADS)
Sharma, Ajay; Sharma, Harish; Bhargava, Annapurna; Sharma, Nirmala
2017-01-01
The nature-inspired algorithms (NIAs) have shown efficiency in solving many complex real-world optimisation problems. The efficiency of NIAs is measured by their ability to find adequate results within a reasonable amount of time, rather than an ability to guarantee the optimal solution. This paper presents a solution for lower order system modelling using the spider monkey optimisation (SMO) algorithm to obtain a better approximation of lower order systems that reflects almost all of the original higher order system's characteristics. Further, a local search strategy, namely power law-based local search, is incorporated into SMO. The proposed strategy is named power law-based local search in SMO (PLSMO). The efficiency, accuracy and reliability of the proposed algorithm are tested on 20 well-known benchmark functions. Then, the PLSMO algorithm is applied to solve the lower order system modelling problem.
NASA Astrophysics Data System (ADS)
Wang, Qianren; Chen, Xing; Yin, Yuehong; Lu, Jian
2017-08-01
With the increasing complexity of mechatronic products, traditional empirical or step-by-step design methods are facing great challenges, with various factors and different stages having become inevitably coupled during the design process. Management of massive information, or big data, as well as the efficient operation of information flow, is deeply involved in the process of coupled design. Designers have to address increasingly sophisticated situations when coupled optimisation is also engaged. Aiming at overcoming the difficulties involved in the design of the spindle box system of an ultra-precision optical grinding machine, this paper proposes a coupled optimisation design method based on state-space analysis, with the design knowledge represented by ontologies and their semantic networks. An electromechanical coupled model integrating the mechanical structure, control system and motor driving system is established, mainly concerning the stiffness matrices of the hydrostatic bearings, ball screw nut and rolling guide sliders. The effectiveness and precision of the method are validated by simulation results for the natural frequency and deformation of the spindle box when an impact force is applied to the grinding wheel.
Robustness analysis of bogie suspension components Pareto optimised values
NASA Astrophysics Data System (ADS)
Mousavi Bideleh, Seyed Milad
2017-08-01
The bogie suspension system of high speed trains can significantly affect vehicle performance. Multiobjective optimisation problems are often formulated and solved to find the Pareto optimised values of the suspension components and improve cost efficiency in railway operations from different perspectives. Uncertainties in the design parameters of the suspension system can negatively influence the dynamic behaviour of railway vehicles. In this regard, robustness analysis of the bogie dynamic response with respect to uncertainties in the suspension design parameters is considered. A one-car railway vehicle model with 50 degrees of freedom and wear/comfort Pareto optimised values of the bogie suspension components is chosen for the analysis. Longitudinal and lateral primary stiffnesses, longitudinal and vertical secondary stiffnesses, as well as yaw damping are considered as the five design parameters. The effects of parameter uncertainties on wear, ride comfort, track shift force, stability, and risk of derailment are studied by varying the design parameters around their respective Pareto optimised values according to a lognormal distribution with different coefficients of variation (COVs). The robustness analysis is carried out based on the maximum entropy concept. The multiplicative dimensional reduction method is utilised to simplify the calculation of fractional moments and improve the computational efficiency. The results showed that the dynamic response of the vehicle with wear/comfort Pareto optimised values of the bogie suspension is robust against uncertainties in the design parameters, and the probability of failure is small for parameter uncertainties with COV up to 0.1.
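Sampling a design parameter "around its Pareto optimised value according to a lognormal distribution with a given COV" requires converting the nominal value and COV into the parameters of the underlying normal distribution. The conversion below is the standard one; the nominal yaw-damping value and COV shown are placeholders.

```python
import numpy as np

def lognormal_around(nominal, cov, size, rng):
    """Samples with mean `nominal` and coefficient of variation `cov`."""
    sigma2 = np.log(1.0 + cov**2)                  # variance of underlying normal
    mu = np.log(nominal) - 0.5 * sigma2
    return rng.lognormal(mean=mu, sigma=np.sqrt(sigma2), size=size)

rng = np.random.default_rng(42)
k_yaw = lognormal_around(nominal=1.0e5, cov=0.1, size=10_000, rng=rng)  # e.g. yaw damping
print(k_yaw.mean(), k_yaw.std() / k_yaw.mean())    # ~1e5 and ~0.1
```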
SLA-based optimisation of virtualised resource for multi-tier web applications in cloud data centres
NASA Astrophysics Data System (ADS)
Bi, Jing; Yuan, Haitao; Tie, Ming; Tan, Wei
2015-10-01
Dynamic virtualised resource allocation is the key to quality of service assurance for multi-tier web application services in cloud data centres. In this paper, we develop a self-management architecture of cloud data centres with a virtualisation mechanism for multi-tier web application services. Based on this architecture, we establish a flexible hybrid queueing model to determine the number of virtual machines for each tier of the virtualised application service environment. Besides, we propose a non-linear constrained optimisation problem with restrictions defined in the service level agreement. Furthermore, we develop a heuristic mixed optimisation algorithm to maximise the profit of cloud infrastructure providers and to meet the performance requirements of different clients as well. Finally, we compare the effectiveness of our dynamic allocation strategy with two other allocation strategies. The simulation results show that the proposed resource allocation method is efficient in improving the overall performance and reducing the resource energy cost.
A New Computational Technique for the Generation of Optimised Aircraft Trajectories
NASA Astrophysics Data System (ADS)
Chircop, Kenneth; Gardi, Alessandro; Zammit-Mangion, David; Sabatini, Roberto
2017-12-01
A new computational technique based on Pseudospectral Discretisation (PSD) and adaptive bisection ɛ-constraint methods is proposed to solve multi-objective aircraft trajectory optimisation problems formulated as nonlinear optimal control problems. This technique is applicable to a variety of next-generation avionics and Air Traffic Management (ATM) Decision Support Systems (DSS) for strategic and tactical replanning operations. These include the future Flight Management Systems (FMS) and the 4-Dimensional Trajectory (4DT) planning and intent negotiation/validation tools envisaged by SESAR and NextGen for a global implementation. In particular, after describing the PSD method, the adaptive bisection ɛ-constraint method is presented to allow an efficient solution of problems in which two or more performance indices are to be minimised simultaneously. Initial simulation case studies were performed adopting suitable aircraft dynamics models and addressing a classical vertical trajectory optimisation problem with two simultaneous objectives. Subsequently, a more advanced 4DT simulation case study is presented with a focus on representative ATM optimisation objectives in the Terminal Manoeuvring Area (TMA). The simulation results are analysed in-depth and corroborated by flight performance analysis, supporting the validity of the proposed computational techniques.
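The ε-constraint mechanics underlying the method are compact: keep one objective as the cost, turn the other into an inequality bound ε, and sweep ε to trace the Pareto front (the paper adapts the sweep by bisection). The sketch below uses two placeholder objectives and a fixed ε sweep; the pseudospectral transcription of the trajectory problem itself is beyond this snippet.

```python
import numpy as np
from scipy.optimize import minimize

f1 = lambda x: (x[0] - 1) ** 2 + x[1] ** 2     # e.g. fuel-type objective (placeholder)
f2 = lambda x: x[0] ** 2 + (x[1] - 2) ** 2     # e.g. time-type objective (placeholder)

def pareto_point(eps, x0=np.zeros(2)):
    """Minimise f1 subject to f2(x) <= eps (one epsilon-constraint subproblem)."""
    con = {"type": "ineq", "fun": lambda x: eps - f2(x)}
    return minimize(f1, x0, constraints=[con], method="SLSQP")

for eps in np.linspace(0.5, 4.0, 8):           # fixed sweep instead of adaptive bisection
    r = pareto_point(eps)
    print(f"eps={eps:4.2f}  f1={r.fun:6.3f}  f2={f2(r.x):6.3f}")
```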
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rozhdestvensky, Yu V
The possibility is studied of obtaining intense cold atomic beams by using the Renyi entropy to optimise the laser cooling process. It is shown that, in the case of a Gaussian velocity distribution of atoms, the Renyi entropy coincides with the density of particles in phase space. The optimisation procedure for cooling atoms by resonance optical radiation is described, which is based on the thermodynamic law of the increase of the Renyi entropy in time. Our method is compared with known methods for increasing the laser cooling efficiency, such as the tuning of the laser frequency in time and a change of the atomic transition frequency in an inhomogeneous transverse field of a magnetic solenoid.
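For reference, the Renyi entropy of order α of a velocity distribution p(v), and its closed form for a one-dimensional Gaussian of width σ, are (standard results quoted here for context, not taken from the paper):

```latex
H_\alpha[p] = \frac{1}{1-\alpha} \ln\!\int p(v)^{\alpha}\, dv,
\qquad
H_\alpha\bigl[\mathcal{N}(0,\sigma^{2})\bigr]
  = \ln\!\left(\sigma\sqrt{2\pi}\right) + \frac{\ln\alpha}{2(\alpha-1)},
```

so narrowing the velocity spread σ lowers H_α in step with the phase-space density.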
Optimisation of sensing time and transmission time in cognitive radio-based smart grid networks
NASA Astrophysics Data System (ADS)
Yang, Chao; Fu, Yuli; Yang, Junjie
2016-07-01
Cognitive radio (CR)-based smart grid (SG) networks have been widely recognised as emerging communication paradigms in power grids. However, sufficient spectrum resources and reliability are two major challenges for real-time applications in CR-based SG networks. In this article, we study the traffic data collection problem. Based on the two-stage power pricing model, the power price is associated with the efficient received traffic data in a meter data management system (MDMS). In order to minimise the system power price, a wideband hybrid access strategy is proposed and analysed to share the spectrum between the SG nodes and CR networks. The sensing time and transmission time are jointly optimised, while both the interference to primary users and the spectrum opportunity loss of secondary users are considered. Two algorithms are proposed to solve the joint optimisation problem. Simulation results show that the proposed joint optimisation algorithms outperform the fixed-parameter (sensing time and transmission time) algorithms, and the power cost is reduced efficiently.
Nevado, Juan José Berzas; Robledo, Virginia Rodríguez; Callado, Carolina Sánchez-Carnerero
2012-07-15
The enrichment of virgin olive oil (VOO) with natural antioxidants contained in various herbs (rosemary, thyme and oregano) was studied. Three different enrichment procedures were used for the solid-liquid extraction of the antioxidants present in the herbs into VOO. One involved simply bringing the herbs into contact with the VOO for 190 days; another, keeping the herb-VOO mixture under stirring at room temperature (25°C) for 11 days; and the third, stirring at temperatures above room level (35-40°C). The efficiency of each procedure was assessed by using a reproducible, efficient, reliable analytical capillary zone electrophoresis (CZE) method to separate and determine selected phenolic compounds (rosmarinic and caffeic acid) in the oil. Prior to electrophoretic separation, the studied antioxidants were isolated from the VOO matrix by using an optimised preconcentration procedure based on solid phase extraction (SPE). The CZE method was optimised and validated.
Sybil – efficient constraint-based modelling in R.
Gelius-Dietrich, Gabriel; Desouki, Abdelmoneim Amer; Fritzemeier, Claus Jonathan; Lercher, Martin J
2013-11-13
Constraint-based analyses of metabolic networks are widely used to simulate the properties of genome-scale metabolic networks. Publicly available implementations tend to be slow, impeding large scale analyses such as the genome-wide computation of pairwise gene knock-outs, or the automated search for model improvements. Furthermore, available implementations cannot easily be extended or adapted by users. Here, we present sybil, an open source software library for constraint-based analyses in R; R is a free, platform-independent environment for statistical computing and graphics that is widely used in bioinformatics. Among other functions, sybil currently provides efficient methods for flux-balance analysis (FBA), MOMA, and ROOM that are about ten times faster than previous implementations when calculating the effect of whole-genome single gene deletions in silico on a complete E. coli metabolic model. Due to the object-oriented architecture of sybil, users can easily build analysis pipelines in R or even implement their own constraint-based algorithms. Based on its highly efficient communication with different mathematical optimisation programs, sybil facilitates the exploration of high-dimensional optimisation problems on small time scales. Sybil and all its dependencies are open source. Sybil and its documentation are available for download from the comprehensive R archive network (CRAN).
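Flux-balance analysis, the workhorse computation behind sybil, is a linear programme: maximise a biomass objective c·v subject to the steady-state mass balance S v = 0 and flux bounds. Sybil itself is an R library; the sketch below is an illustration in Python on a toy network whose stoichiometry is invented for the example.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: metabolites A, B; reactions: uptake -> A, A -> B, B -> biomass, A -> export
S = np.array([[ 1, -1,  0, -1],     # mass balance for A
              [ 0,  1, -1,  0]])    # mass balance for B
c = np.array([0, 0, 1, 0])          # maximise the biomass flux v3
bounds = [(0, 10)] * 4              # flux bounds for all four reactions

res = linprog(-c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)   # linprog minimises
print("optimal biomass flux:", -res.fun, "flux vector:", res.x)
```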
A supportive architecture for CFD-based design optimisation
NASA Astrophysics Data System (ADS)
Li, Ni; Su, Zeya; Bi, Zhuming; Tian, Chao; Ren, Zhiming; Gong, Guanghong
2014-03-01
Multi-disciplinary design optimisation (MDO) is one of the critical methodologies for the implementation of enterprise systems (ES). MDO requiring the analysis of fluid dynamics raises a special challenge due to its extremely intensive computation. The rapid development of computational fluid dynamics (CFD) techniques has led to a rise in their applications in various fields. Especially for the exterior design of vehicles, CFD has become one of the three main design tools, comparable to analytical approaches and wind tunnel experiments. CFD-based design optimisation is an effective way to achieve the desired performance under given constraints. However, due to the complexity of CFD, integrating CFD analysis into an intelligent optimisation algorithm is not straightforward. It is a challenge to solve a CFD-based design problem, which usually has high dimensionality and multiple objectives and constraints. It is therefore desirable to have an integrated architecture for CFD-based design optimisation. However, our review of existing works has found that very few researchers have studied assistive tools to facilitate CFD-based design optimisation. In this paper, a multi-layer architecture and a general procedure are proposed to integrate different CFD toolsets with intelligent optimisation algorithms, parallel computing techniques and other techniques for efficient computation. In the proposed architecture, the integration is performed either at the code level or the data level to fully utilise the capabilities of different assistive tools. Two intelligent algorithms are developed and embedded with parallel computing. These algorithms, together with the supportive architecture, lay a solid foundation for various applications of CFD-based design optimisation. To illustrate the effectiveness of the proposed architecture and algorithms, case studies on the aerodynamic shape design of a hypersonic cruising vehicle are provided, and the results show that the proposed architecture and developed algorithms perform successfully and efficiently in dealing with design optimisation involving over 200 design variables.
Modulation aware cluster size optimisation in wireless sensor networks
NASA Astrophysics Data System (ADS)
Sriram Naik, M.; Kumar, Vinay
2017-07-01
Wireless sensor networks (WSNs) play a great role because of their numerous advantages to mankind. The main challenge with WSNs is energy efficiency. In this paper, we focus on energy minimisation with the help of cluster size optimisation, along with consideration of modulation effects when the nodes are not able to communicate using the baseband communication technique. Cluster size optimisation is an important technique to improve the performance of WSNs: it provides improvement in energy efficiency, network scalability, network lifetime and latency. We propose an analytical expression for cluster size optimisation using the traditional sensing model of nodes for a square sensing field, with consideration of modulation effects. Energy minimisation can be achieved by changing the modulation scheme (such as BPSK, QPSK, 16-QAM, 64-QAM, etc.), so we consider the effect of different modulation techniques on cluster formation. The nodes in the sensing field are randomly and uniformly deployed. It is also observed that placing the base station at the centre of the sensing field allows only a small number of modulation schemes to work in an energy-efficient manner, whereas placing the base station at the corner of the sensing field enables a large number of modulation schemes to work in an energy-efficient manner.
Morton, Katherine; Band, Rebecca; van Woezik, Anne; Grist, Rebecca; McManus, Richard J.; Little, Paul; Yardley, Lucy
2018-01-01
Background: For behaviour-change interventions to be successful, they must be acceptable to users and overcome barriers to behaviour change. The Person-Based Approach can help to optimise interventions to maximise acceptability and engagement. This article presents a novel, efficient and systematic method that can be used as part of the Person-Based Approach to rapidly analyse data from development studies to inform intervention modifications. We describe how we used this approach to optimise a digital intervention for patients with hypertension (HOME BP), which aims to implement medication and lifestyle changes to optimise blood pressure control.

Methods: In study 1, hypertensive patients (N = 12) each participated in three think-aloud interviews, providing feedback on a prototype of HOME BP. In study 2, patients (N = 11) used HOME BP for three weeks and were then interviewed about their experiences. Studies 1 and 2 were used to identify detailed changes to the intervention content and potential barriers to engagement with HOME BP. In study 3 (N = 7), we interviewed hypertensive patients who were not interested in using an intervention like HOME BP to identify potential barriers to uptake, which informed modifications to our recruitment materials. Analysis in all three studies involved detailed tabulation of patient data and comparison to our modification criteria.

Results: Studies 1 and 2 indicated that the HOME BP procedures were generally viewed as acceptable and feasible, but also highlighted concerns about monitoring blood pressure correctly at home and making medication changes remotely. Patients in study 3 had additional concerns about the safety and security of the intervention. Modifications improved the acceptability of the intervention and recruitment materials.

Conclusions: This paper provides a detailed illustration of how to use the Person-Based Approach to refine a digital intervention for hypertension. The novel, efficient approach to analysis and criteria for deciding when to implement intervention modifications described here may be useful to others developing interventions.
Coil optimisation for transcranial magnetic stimulation in realistic head geometry.
Koponen, Lari M; Nieminen, Jaakko O; Mutanen, Tuomas P; Stenroos, Matti; Ilmoniemi, Risto J
Transcranial magnetic stimulation (TMS) allows focal, non-invasive stimulation of the cortex. A TMS pulse is inherently weakly coupled to the cortex; thus, magnetic stimulation requires both high current and high voltage to reach sufficient intensity. These requirements limit, for example, the maximum repetition rate and the maximum number of consecutive pulses with the same coil due to the rise of its temperature. Our objective was to develop methods to optimise, design, and manufacture energy-efficient TMS coils in realistic head geometry with an arbitrary overall coil shape. We derive a semi-analytical integration scheme for computing the magnetic field energy of an arbitrary surface current distribution, compute the electric field induced by this distribution with a boundary element method, and optimise a TMS coil for focal stimulation. Additionally, we introduce a method for manufacturing such a coil by using Litz wire and a coil former machined from polyvinyl chloride. We designed, manufactured, and validated an optimised TMS coil and applied it to brain stimulation. Our simulations indicate that this coil requires less than half the power of a commercial figure-of-eight coil, with a 41% reduction due to the optimised winding geometry and a partial contribution due to our thinner coil former and reduced conductor height. With the optimised coil, the resting motor threshold of abductor pollicis brevis was reached with the capacitor voltage below 600 V and peak current below 3000 A. The described method allows designing practical TMS coils that have considerably higher efficiency than conventional figure-of-eight coils.
Path integrals with higher order actions: Application to realistic chemical systems
NASA Astrophysics Data System (ADS)
Lindoy, Lachlan P.; Huang, Gavin S.; Jordan, Meredith J. T.
2018-02-01
Quantum thermodynamic parameters can be determined using path integral Monte Carlo (PIMC) simulations. These simulations, however, become computationally demanding as the quantum nature of the system increases, although their efficiency can be improved by using higher order approximations to the thermal density matrix, specifically the action. Here we compare the standard, primitive approximation to the action (PA) and three higher order approximations, the Takahashi-Imada action (TIA), the Suzuki-Chin action (SCA) and the Chin action (CA). The resulting PIMC methods are applied to two realistic potential energy surfaces, for H2O and HCN-HNC, both of which are spectroscopically accurate and contain three-body interactions. We further numerically optimise, for each potential, the SCA parameter and the two free parameters in the CA, obtaining more significant improvements in efficiency than seen previously in the literature. For both H2O and HCN-HNC, accounting for all required potential and force evaluations, the optimised CA formalism is approximately twice as efficient as the TIA formalism and approximately an order of magnitude more efficient than the PA. The optimised SCA formalism shows similar efficiency gains to the CA for HCN-HNC but has similar efficiency to the TIA for H2O at low temperature. In H2O and HCN-HNC systems, the optimal value of the a1 CA parameter is approximately 1/3, corresponding to an equal weighting of all force terms in the thermal density matrix, and, similar to previous studies, the optimal α parameter in the SCA was ≈0.31. Importantly, poor choice of parameter significantly degrades the performance of the SCA and CA methods. In particular, for the CA, setting a1 = 0 is not efficient: the reduction in convergence efficiency is not offset by the lower number of force evaluations. We also find that the harmonic approximation to the CA parameters, whilst providing a fourth order approximation to the action, is not optimal for these realistic potentials: numerical optimisation leads to better approximate cancellation of the fifth order terms, with deviation between the harmonic and numerically optimised parameters more marked in the more quantum H2O system. This suggests that numerically optimising the CA or SCA parameters, which can be done at high temperature, will be important in fully realising the efficiency gains of these formalisms for realistic potentials.
NASA Astrophysics Data System (ADS)
Ghasemy Yaghin, R.; Fatemi Ghomi, S. M. T.; Torabi, S. A.
2015-10-01
In most markets, price differentiation mechanisms enable manufacturers to offer different prices for their products or services in different customer segments; however, perfect price discrimination is usually impossible for manufacturers. The importance of accounting for uncertainty in such environments spurs an interest in developing appropriate decision-making tools to deal with uncertain and ill-defined parameters in joint pricing and lot-sizing problems. This paper proposes a hybrid bi-objective credibility-based fuzzy optimisation model, including both quantitative and qualitative objectives, to cope with these issues. Taking marketing and lot-sizing decisions into account simultaneously, the model aims to maximise the total profit of the manufacturer and to improve the service aspects of retailing, setting different prices with arbitrage considerations. After applying appropriate strategies to defuzzify the original model, the resulting non-linear multi-objective crisp model is solved by a fuzzy goal programming method. An efficient stochastic search procedure using particle swarm optimisation is also proposed to solve the non-linear crisp model.
Optimisation by hierarchical search
NASA Astrophysics Data System (ADS)
Zintchenko, Ilia; Hastings, Matthew; Troyer, Matthias
2015-03-01
Finding optimal values for a set of variables relative to a cost function gives rise to some of the hardest problems in physics, computer science and applied mathematics. Although often very simple in their formulation, these problems have a complex cost function landscape which prevents currently known algorithms from efficiently finding the global optimum. Countless techniques have been proposed to partially circumvent this problem, but an efficient method is yet to be found. We present a heuristic, general purpose approach to potentially improve the performance of conventional algorithms or special purpose hardware devices by optimising groups of variables in a hierarchical way. We apply this approach to problems in combinatorial optimisation, machine learning and other fields.
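The group-wise idea reads like block-coordinate descent: freeze all variables except one group, improve that group, and cycle (possibly recursing on groups of groups). The sketch below is the flat single-level version with a placeholder cost function and a crude random-search inner optimiser; the paper's hierarchical scheme is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)
cost = lambda x: np.sum((x - np.arange(len(x))) ** 2) + abs(x[0] * x[-1])  # placeholder

def optimise_group(x, idx, trials=200, scale=0.5):
    """Improve only the variables in `idx`, holding the rest fixed."""
    best = x.copy()
    for _ in range(trials):
        cand = best.copy()
        cand[idx] += scale * rng.standard_normal(len(idx))
        if cost(cand) < cost(best):
            best = cand
    return best

x = rng.standard_normal(8)
groups = [np.arange(0, 4), np.arange(4, 8)]     # two variable groups
for sweep in range(10):                          # cycle over the groups
    for idx in groups:
        x = optimise_group(x, idx)
print("final cost:", cost(x))
```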
Dong, Xu-Yan; Kong, Fan-Pi; Yuan, Gang-You; Wei, Fang; Jiang, Mu-Lan; Li, Guang-Ming; Wang, Zhan; Zhao, Yuan-Di; Chen, Hong
2012-01-01
Phytosterol liposomes were prepared using the thin film method and used to encapsulate nattokinase (NK). In order to obtain a high encapsulation efficiency within the liposome, an orthogonal experiment (L9(3⁴)) was applied to optimise the preparation conditions. The molar ratio of lecithin to phytosterols, NK activity and mass ratio of mannite to lecithin were the main factors that influenced the encapsulation efficiency of the liposomes. Based on the results of a single-factor test, these three factors were chosen for this study. We determined the optimum extraction conditions to be as follows: a molar ratio of lecithin to phytosterol of 2 : 1, NK activity of 2500 U mL⁻¹ and a mass ratio of mannite to lecithin of 3 : 1. Under these optimised conditions, an encapsulation efficiency of 65.25% was achieved, which agreed closely with the predicted result. Moreover, the zeta potential, size distribution and microstructure of the liposomes prepared were measured, and we found that the zeta potential was -51 ± 3 mV and the mean diameter was 194.1 nm. From the results of the scanning electron microscopy, we observed that the phytosterol liposomes were round and regular in shape and showed no aggregation.
NASA Astrophysics Data System (ADS)
Kaliszewski, M.; Mazuro, P.
2016-09-01
The Simulated Annealing method of optimisation is tested on the sealing piston ring geometry. The aim of the optimisation is to develop a ring geometry that exerts the demanded pressure on a cylinder simply by being bent to fit the cylinder. A method of FEM analysis of an arbitrary piston ring geometry is implemented in ANSYS software. The demanded pressure function (based on formulae presented by A. Iskra) as well as the objective function are introduced. A geometry definition constructed from polynomials in a radial coordinate system is presented and discussed. A possible application of the Simulated Annealing method to a piston ring optimisation task is proposed and visualised. Difficulties leading to a possible lack of convergence of the optimisation are presented, and an example of an unsuccessful optimisation performed in APDL is discussed. A possible direction for further improvement of the optimisation is proposed.
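For reference, a bare-bones simulated annealing loop has the shape below; the placeholder objective stands in for the paper's FEM-evaluated misfit between the exerted and demanded ring pressure, and the perturbation scale, cooling rate and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
objective = lambda g: np.sum((g - 0.5) ** 2)       # placeholder for the FEM pressure misfit

g = rng.random(6)                                  # e.g. polynomial coefficients of the ring
f = objective(g)
g_best, f_best = g, f
T = 1.0
for step in range(5000):
    cand = g + 0.1 * rng.standard_normal(g.size)   # perturb the geometry parameters
    fc = objective(cand)
    # accept better moves always, worse moves with Boltzmann probability exp(-df/T)
    if fc < f or rng.random() < np.exp((f - fc) / T):
        g, f = cand, fc
        if f < f_best:
            g_best, f_best = g, f
    T *= 0.999                                     # geometric cooling schedule
print("best misfit:", f_best)
```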
Echtermeyer, Alexander; Amar, Yehia; Zakrzewski, Jacek; Lapkin, Alexei
2017-01-01
A recently described C(sp³)–H activation reaction to synthesise aziridines was used as a model reaction to demonstrate the methodology of developing a process model using model-based design of experiments (MBDoE) and self-optimisation approaches in flow. The two approaches are compared in terms of experimental efficiency. The self-optimisation approach required the smallest number of experiments to reach the specified objectives of cost and product yield, whereas the MBDoE approach enabled rapid generation of a process model.
An Optimised System for Generating Multi-Resolution DTMs Using NASA MRO Datasets
NASA Astrophysics Data System (ADS)
Tao, Y.; Muller, J.-P.; Sidiropoulos, P.; Veitch-Michaelis, J.; Yershov, V.
2016-06-01
Within the EU FP-7 iMars project, a fully automated multi-resolution DTM processing chain, called Co-registration ASP-Gotcha Optimised (CASP-GO) has been developed, based on the open source NASA Ames Stereo Pipeline (ASP). CASP-GO includes tiepoint based multi-resolution image co-registration and an adaptive least squares correlation-based sub-pixel refinement method called Gotcha. The implemented system guarantees global geo-referencing compliance with respect to HRSC (and thence to MOLA), provides refined stereo matching completeness and accuracy based on the ASP normalised cross-correlation. We summarise issues discovered from experimenting with the use of the open-source ASP DTM processing chain and introduce our new working solutions. These issues include global co-registration accuracy, de-noising, dealing with failure in matching, matching confidence estimation, outlier definition and rejection scheme, various DTM artefacts, uncertainty estimation, and quality-efficiency trade-offs.
NASA Astrophysics Data System (ADS)
Fritzsche, Matthias; Kittel, Konstantin; Blankenburg, Alexander; Vajna, Sándor
2012-08-01
The focus of this paper is to present a method of multidisciplinary design optimisation based on the autogenetic design theory (ADT), which provides methods that are partially implemented in the optimisation software described here. The main thesis of the ADT is that biological evolution and the process of developing products are substantially similar, i.e. procedures from biological evolution can be transferred into product development. In order to fulfil requirements and boundary conditions of any kind (which may change at any time), both biological evolution and product development look for appropriate solution possibilities in a certain area, and try to optimise those that are actually promising by varying parameters and combinations of these solutions. As the time necessary for multidisciplinary design optimisations is a critical aspect in product development, distributing the optimisation process to make effective use of otherwise idle computing capacity can reduce the optimisation time drastically. Finally, a practical example shows how ADT methods and distributed optimising are applied to improve a product.
Multi-terminal pipe routing by Steiner minimal tree and particle swarm optimisation
NASA Astrophysics Data System (ADS)
Liu, Qiang; Wang, Chengen
2012-08-01
Computer-aided design of pipe routing is of fundamental importance for the development of complex equipment. In this article, non-rectilinear branch pipe routing with multiple terminals, which can be formulated as a Euclidean Steiner Minimal Tree with Obstacles (ESMTO) problem, is studied in the context of aeroengine integrated design engineering. Unlike the traditional methods that connect pipe terminals sequentially, this article presents a new branch pipe routing algorithm based on the Steiner tree theory. The article begins with a new algorithm for solving the ESMTO problem by using particle swarm optimisation (PSO), and then extends the method to surface cases by using geodesics to meet the requirements of routing non-rectilinear pipes on the surfaces of aeroengines. Subsequently, the adaptive region strategy and the basic visibility graph method are adopted to increase the computation efficiency. Numerical computations show that the proposed routing algorithm can find satisfactory routing layouts while running in polynomial time.
A novel swarm intelligence algorithm for finding DNA motifs.
Lei, Chengwei; Ruan, Jianhua
2009-01-01
Discovering DNA motifs from co-expressed or co-regulated genes is an important step towards deciphering complex gene regulatory networks and understanding gene functions. Despite significant improvement in the last decade, it still remains one of the most challenging problems in computational molecular biology. In this work, we propose a novel motif finding algorithm that finds consensus patterns using a population-based stochastic optimisation technique called Particle Swarm Optimisation (PSO), which has been shown to be effective in optimising difficult multidimensional problems in continuous domains. We propose to use a word dissimilarity graph to remap the neighborhood structure of the solution space of DNA motifs, and propose a modification of the naive PSO algorithm to accommodate discrete variables. In order to improve efficiency, we also propose several strategies for escaping from local optima and for automatically determining the termination criteria. Experimental results on simulated challenge problems show that our method is both more efficient and more accurate than several existing algorithms. Applications to several sets of real promoter sequences also show that our approach is able to detect known transcription factor binding sites, and outperforms two of the most popular existing algorithms.
Sun, Jingcan; Yu, Bin; Curran, Philip; Liu, Shao-Quan
2012-12-15
Coconut cream and fusel oil, two low-cost natural substances, were used as starting materials for the biosynthesis of flavour-active octanoic acid esters (ethyl-, butyl-, isobutyl- and (iso)amyl octanoate) using lipase Palatase as the biocatalyst. The Taguchi design method was used for the first time to optimize the biosynthesis of esters by a lipase in an aqueous system of coconut cream and fusel oil. Temperature, time and enzyme amount were found to be statistically significant factors, and the optimal conditions were determined to be as follows: temperature 30°C, fusel oil concentration 9% (v/w), reaction time 24 h, pH 6.2 and enzyme amount 0.26 g. Under the optimised conditions, a yield of 14.25 mg/g (based on cream weight) and a signal-to-noise (S/N) ratio of 23.07 dB were obtained. The results indicate that the Taguchi design method was an efficient and systematic approach to the optimisation of lipase-catalysed biological processes.
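The reported S/N ratio is consistent with the larger-the-better Taguchi statistic evaluated at the optimised yield. Assuming a single representative response y = 14.25 mg/g (the per-replicate yields are not given), the standard formula reduces to:

```latex
\mathrm{S/N} = -10\,\log_{10}\!\left(\frac{1}{n}\sum_{i=1}^{n}\frac{1}{y_i^{2}}\right)
\;\;\xrightarrow{\;n=1,\; y=14.25\;}\;\;
20\log_{10}(14.25) \approx 23.08\ \mathrm{dB},
```

in agreement with the reported 23.07 dB to rounding.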
Gladman, John; Buckell, John; Young, John; Smith, Andrew; Hulme, Clare; Saggu, Satti; Godfrey, Mary; Enderby, Pam; Teale, Elizabeth; Longo, Roberto; Gannon, Brenda; Holditch, Claire; Eardley, Heather; Tucker, Helen
2017-01-01
Introduction: To understand the variation in performance between community hospitals, our objectives are: to measure the relative performance (cost efficiency) of rehabilitation services in community hospitals; to identify the characteristics of community hospital rehabilitation that optimise performance; to investigate the current impact of community hospital inpatient rehabilitation for older people on secondary care and the potential impact if community hospital rehabilitation was optimised to best practice nationally; to examine the relationship between the configuration of intermediate care and secondary care bed use; and to develop toolkits for commissioners and community hospital providers to optimise performance.

Methods and analysis: Four linked studies will be performed. Study 1: cost efficiency modelling will apply econometric techniques to data sets from the National Health Service (NHS) Benchmarking Network surveys of community hospital and intermediate care. This will identify community hospitals' performance and estimate the gap between high and low performers. Analyses will determine the potential impact if the performance of all community hospitals nationally was optimised to best performance, and examine the association between community hospital configuration and secondary care bed use. Study 2: a national community hospital survey gathering detailed cost data and efficiency variables will be performed. Study 3: in-depth case studies of three community hospitals, two high and one low performing, will be undertaken. Case studies will gather routine hospital and local health economy data. Ward culture will be surveyed. Content and delivery of treatment will be observed. Patients and staff will be interviewed. Study 4: co-designed web-based quality improvement toolkits for commissioners and providers will be developed, including indicators of performance and the gap between local and best community hospital performance.

Ethics and dissemination: Publications will be in peer-reviewed journals, and reports will be distributed through stakeholder organisations. Ethical approval was obtained from the Bradford Research Ethics Committee (reference: 15/YH/0062).
Design optimisation of a TOF-based collimated camera prototype for online hadrontherapy monitoring
NASA Astrophysics Data System (ADS)
Pinto, M.; Dauvergne, D.; Freud, N.; Krimmer, J.; Letang, J. M.; Ray, C.; Roellinghoff, F.; Testa, E.
2014-12-01
Hadrontherapy is an innovative radiation therapy modality whose key advantage is the target conformality allowed by the physical properties of ion species. However, in order to maximise the exploitation of its potential, online monitoring is required to assess the treatment quality, namely via monitoring devices relying on the detection of secondary radiation. Herein a method based on Monte Carlo simulations is presented to optimise a multi-slit collimated camera employing time-of-flight selection of prompt-gamma rays, to be used in a clinical scenario. In addition, an analytical tool is developed based on the Monte Carlo data to predict the expected precision for a given geometrical configuration. Such a method follows the clinical workflow requirements to simultaneously have a solution that is relatively accurate and fast. Two different camera designs are proposed, considering different endpoints based on the trade-off between camera detection efficiency and spatial resolution, to be used in a proton therapy treatment with active dose delivery and assuming a homogeneous target.
Vibration isolation design for periodically stiffened shells by the wave finite element method
NASA Astrophysics Data System (ADS)
Hong, Jie; He, Xueqing; Zhang, Dayi; Zhang, Bing; Ma, Yanhong
2018-04-01
Periodically stiffened shell structures are widely used due to their excellent specific strength, in particular for aeronautical and astronautical components. This paper presents an improved Wave Finite Element Method (Wave FEM) that can efficiently predict the band-gap characteristics of stiffened shell structures. An aero-engine casing, a typical periodically stiffened shell structure, was employed to verify the validity and efficiency of the Wave FEM, and good agreement was found between the Wave FEM and the classical FEM for different boundary conditions. An effective wave selection method based on the Wave FEM is then put forward to filter the radial modes of a shell structure. Furthermore, an optimisation strategy combining the Wave FEM with a genetic algorithm is presented for periodically stiffened shell structures, with which the optimal out-of-plane band gap and the mass of the whole structure can be achieved under an aerodynamic load. Results also indicate that the geometric parameters of the stiffeners can be selected so that the out-of-plane vibration attenuates significantly in the frequency band of interest. This study provides valuable references for designing the band gaps of vibration isolation.
NASA Astrophysics Data System (ADS)
Chu, Xiaoyu; Zhang, Jingrui; Lu, Shan; Zhang, Yao; Sun, Yue
2016-11-01
This paper presents a trajectory planning algorithm that optimises collision avoidance for a chasing spacecraft operating in ultra-close proximity to a failed satellite. The complex configuration and tumbling motion of the failed satellite are considered. The two-spacecraft rendezvous dynamics are formulated in the target body frame, and the collision avoidance constraints are detailed, particularly concerning the uncertainties. An optimised solution of the approach problem is generated using the Gauss pseudospectral method, and a closed-loop control is used to track the optimised trajectory. Numerical results demonstrate the effectiveness of the proposed algorithms.
UAV path planning using artificial potential field method updated by optimal control theory
NASA Astrophysics Data System (ADS)
Chen, Yong-bo; Luo, Guan-chen; Mei, Yue-song; Yu, Jian-qiao; Su, Xiao-long
2016-04-01
Path planning is an important task in unmanned aerial vehicle (UAV) mission planning. Starting from the artificial potential field (APF) path planning method, the problem is reconstructed as a constrained optimisation problem by introducing an additional control force. The constrained problem is then translated into an unconstrained one with the help of slack variables, and the functional optimisation method reformulates it as an optimal control problem. The whole transformation process is deduced in detail, based on a discrete UAV dynamic model, and the path planning problem is solved with the optimal control method. A path following process based on a six-degrees-of-freedom simulation model of a quadrotor helicopter is used to verify the practicability of the method. The simulation results show that the improved method is more effective for path planning: the calculated path is shorter and smoother than that of the traditional APF method, and the improved method solves the dead point problem effectively.
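The baseline that the improved method builds on is the standard APF construction, in which the vehicle descends a potential made of an attractive term toward the goal and repulsive terms around obstacles. A minimal sketch of that baseline follows (not the paper's augmented optimal-control formulation); the gains, obstacle layout, step size and stopping radius are illustrative assumptions.

```python
import numpy as np

def apf_force(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=5.0):
    """Net APF force: attractive pull toward the goal plus repulsive
    push away from each obstacle inside the influence radius d0."""
    force = k_att * (goal - pos)                       # attractive term
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 0 < d < d0:                                 # only nearby obstacles repel
            force += k_rep * (1.0 / d - 1.0 / d0) / d**3 * diff
    return force

# Follow the force field with a simple unit-step explicit integration.
pos = np.array([0.0, 0.0])
goal = np.array([50.0, 40.0])
obstacles = [np.array([20.0, 18.0]), np.array([35.0, 30.0])]
path = [pos.copy()]
for _ in range(2000):
    f = apf_force(pos, goal, obstacles)
    pos = pos + 0.05 * f / (np.linalg.norm(f) + 1e-9)
    path.append(pos.copy())
    if np.linalg.norm(goal - pos) < 0.5:
        break
```

The dead point problem mentioned in the abstract arises exactly when the attractive and repulsive terms above cancel; the paper's additional control force is designed to escape such equilibria.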
The use of surrogates for an optimal management of coupled groundwater-agriculture hydrosystems
NASA Astrophysics Data System (ADS)
Grundmann, J.; Schütze, N.; Brettschneider, M.; Schmitz, G. H.; Lennartz, F.
2012-04-01
To ensure optimal sustainable water resources management in arid coastal environments, we develop a new simulation-based integrated water management system. It aims at achieving the best possible solutions for groundwater withdrawals for agricultural and municipal water use, including saline water management, together with a substantial increase in water use efficiency in irrigated agriculture. To achieve robust and fast operation of the management system regarding water quality and quantity, we develop appropriate surrogate models by combining physically based process modelling with methods of artificial intelligence. We use an artificial neural network, trained on a scenario database generated by a numerical density-dependent groundwater flow model, to model the aquifer response including the seawater interface. To simulate the behaviour of highly productive agricultural farms, crop water production functions are generated by means of soil-vegetation-atmosphere-transport (SVAT) models adapted to the regional climate conditions, together with a novel evolutionary optimisation algorithm for optimal irrigation scheduling and control. We apply both surrogates within a simulation-based optimisation environment using the characteristics of the south Batinah region in the Sultanate of Oman, which is affected by saltwater intrusion into the coastal aquifer due to excessive groundwater withdrawal for irrigated agriculture. We demonstrate the effectiveness of our methodology for the evaluation and optimisation of different irrigation practices, cropping patterns and resulting abstraction scenarios. Because of contradicting objectives, such as profit-oriented agriculture vs. aquifer sustainability, a multi-criteria optimisation is performed.
Very high frame rate volumetric integration of depth images on mobile devices.
Kähler, Olaf; Prisacariu, Victor Adrian; Ren, Carl Yuheng; Sun, Xin; Torr, Philip; Murray, David
2015-11-01
Volumetric methods provide efficient, flexible and simple ways of integrating multiple depth images into a full 3D model. They provide dense and photorealistic 3D reconstructions, and parallelised implementations on GPUs achieve real-time performance on modern graphics hardware. Running such methods on mobile devices, to give users freedom of movement and instantaneous reconstruction feedback, nevertheless remains challenging. In this paper we present a range of modifications to existing volumetric integration methods based on voxel block hashing, considerably improving their performance and making them applicable to tablet computer applications. We present (i) optimisations of the basic data structure and of its allocation and integration; (ii) a highly optimised raycasting pipeline; and (iii) extensions to the camera tracker to incorporate IMU data. In total, our system achieves frame rates of up to 47 Hz on a Nvidia Shield Tablet and 910 Hz on a Nvidia GTX Titan X GPU, or even beyond 1.1 kHz without visualisation.
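For readers unfamiliar with voxel block hashing, the underlying data structure can be sketched compactly: space is divided into small voxel blocks, only blocks near observed surfaces are ever allocated, and blocks are located through a spatial hash. A minimal sketch follows, assuming the commonly used three-prime hash function; the block size, table size and TSDF storage layout are illustrative, not the paper's optimised implementation.

```python
BLOCK_SIDE = 8            # voxels per block edge
TABLE_SIZE = 2 ** 20      # number of hash buckets

def block_hash(bx, by, bz):
    """Hash of integer block coordinates (three large primes, XOR-folded)."""
    return ((bx * 73856093) ^ (by * 19349669) ^ (bz * 83492791)) % TABLE_SIZE

hash_table = {}           # bucket index -> list of (block coords, voxel storage)

def allocate_block(x, y, z, voxel_size=0.005):
    """Ensure the block containing world point (x, y, z) is allocated."""
    bx, by, bz = (int(c // (BLOCK_SIDE * voxel_size)) for c in (x, y, z))
    bucket = hash_table.setdefault(block_hash(bx, by, bz), [])
    if all(entry[0] != (bx, by, bz) for entry in bucket):    # resolve collisions
        bucket.append(((bx, by, bz), [0.0] * BLOCK_SIDE ** 3))  # TSDF values
```

Because only surface-adjacent blocks exist in the table, both memory footprint and per-frame integration work stay small, which is what makes the approach viable on tablets.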
Thermal buckling optimisation of composite plates using firefly algorithm
NASA Astrophysics Data System (ADS)
Kamarian, S.; Shakeri, M.; Yas, M. H.
2017-07-01
Composite plates play a very important role in engineering applications, especially in the aerospace industry. The thermal buckling of such components is of great importance and must be known to achieve an appropriate design. This paper deals with stacking sequence optimisation of laminated composite plates to maximise the critical buckling temperature, using a powerful meta-heuristic called the firefly algorithm (FA), which is based on the flashing behaviour of fireflies. The main objective of the present work is to show the ability of FA in the optimisation of composite structures. The performance of FA is compared with results reported in previously published work using other algorithms, which shows the efficiency of FA in stacking sequence optimisation of laminated composite structures.
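The FA move rule is compact enough to sketch: each firefly is attracted to every brighter one, with an attractiveness that decays with distance, plus a small random step. The continuous-variable sketch below (the paper works with discrete stacking sequences) uses typical default parameters and a stand-in objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                       # stand-in objective (minimisation)
    return np.sum(x ** 2)

n, dim, beta0, gamma, alpha = 15, 4, 1.0, 1.0, 0.2
X = rng.uniform(-5, 5, (n, dim))
f = np.array([sphere(x) for x in X])

for _ in range(100):
    for i in range(n):
        for j in range(n):
            if f[j] < f[i]:          # firefly i moves toward brighter firefly j
                r2 = np.sum((X[i] - X[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)   # distance-decayed attractiveness
                X[i] += beta * (X[j] - X[i]) + alpha * rng.uniform(-0.5, 0.5, dim)
                f[i] = sphere(X[i])

print("best objective:", f.min())
```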
Reward-based spatial crowdsourcing with differential privacy preservation
NASA Astrophysics Data System (ADS)
Xiong, Ping; Zhang, Lefeng; Zhu, Tianqing
2017-11-01
In recent years, the popularity of mobile devices has transformed spatial crowdsourcing (SC) into a novel mode for performing complicated projects. Workers can perform tasks at specified locations in return for rewards offered by employers. Existing methods ensure the efficiency of their systems by submitting the workers' exact locations to a centralised server for task assignment, which can lead to privacy violations. Implementing crowdsourcing applications while preserving the privacy of workers' locations is therefore a key issue. We propose a reward-based SC method that achieves acceptable utility, measured by task assignment success rates, while efficiently preserving privacy. A differential privacy model ensures a rigorous privacy guarantee, and Laplace noise is introduced to protect workers' exact locations. We then present a reward allocation mechanism that adjusts each piece of the reward for a task using the distribution of the workers' locations. Experimental results demonstrate that this optimised-reward method is efficient for SC applications.
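The Laplace perturbation step can be sketched in a few lines: each reported coordinate receives independent Laplace noise with scale sensitivity/epsilon, which satisfies epsilon-differential privacy for an L1 sensitivity equal to the maximum coordinate displacement. The epsilon, sensitivity and coordinates below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(42)

def laplace_perturb(location, epsilon, sensitivity=1.0):
    """Noisy location: each coordinate gets Laplace(0, sensitivity/epsilon)."""
    scale = sensitivity / epsilon
    return location + rng.laplace(loc=0.0, scale=scale, size=location.shape)

true_loc = np.array([31.2304, 121.4737])     # hypothetical worker coordinates
noisy_loc = laplace_perturb(true_loc, epsilon=0.5)
# The server performs task assignment (and reward splitting) on noisy_loc only.
```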
NASA Astrophysics Data System (ADS)
Jian, Le; Cao, Wang; Jintao, Yang; Yinge, Wang
2018-04-01
This paper describes the design of a dynamic voltage restorer (DVR) that can simultaneously protect several sensitive loads from voltage sags in a region of an MV distribution network. A novel reference voltage calculation method based on zero-sequence voltage optimisation is proposed for this DVR, to optimise the cost-effectiveness of compensating voltage sags with different characteristics in an ungrounded neutral system. Based on a detailed analysis of the characteristics of voltage sags caused by different types of faults, and of the effect of the transformer wiring mode on these characteristics, the optimisation target of the reference voltage calculation is presented together with several constraints. The reference voltages under all types of voltage sags are calculated by optimising the zero-sequence component, which reduces the swell in the phase-to-ground voltage after compensation as far as possible and improves the symmetry of the DVR output voltages, thereby effectively increasing the compensation ability. The validity and effectiveness of the proposed method are verified by simulation and experimental results.
Computer-aided diagnosis of melanoma using border and wavelet-based texture analysis.
Garnavi, Rahil; Aldeen, Mohammad; Bailey, James
2012-11-01
This paper presents a novel computer-aided diagnosis system for melanoma. The novelty lies in the optimised selection and integration of features derived from textural, border-based and geometrical properties of the melanoma lesion. The texture features are derived using wavelet decomposition, the border features are derived by constructing a boundary-series model of the lesion border and analysing it in the spatial and frequency domains, and the geometry features are derived from shape indexes. The optimised selection of features is achieved using the Gain-Ratio method, which is shown to be computationally efficient for the melanoma diagnosis application. Classification uses four classifiers: Support Vector Machine, Random Forest, Logistic Model Tree and Hidden Naive Bayes. The proposed diagnostic system is applied to a set of 289 dermoscopy images (114 malignant, 175 benign) partitioned into train, validation and test image sets. The system achieves an accuracy of 91.26% and an AUC of 0.937 when 23 features are used. Other important findings include (i) the clear advantage gained in complementing texture with border and geometry features, compared with using texture information only, and (ii) the higher contribution of texture features than border-based features in the optimised feature set.
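The Gain-Ratio criterion used for feature selection is the information gain of a feature normalised by the feature's own split entropy, which penalises many-valued features. A minimal sketch follows, assuming discretised feature values and binary class labels; the toy data are illustrative.

```python
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def gain_ratio(feature, labels):
    h_before = entropy(labels)
    h_after, split_info = 0.0, 0.0
    for v in np.unique(feature):
        mask = feature == v
        w = mask.mean()                      # fraction of samples with value v
        h_after += w * entropy(labels[mask])
        split_info -= w * np.log2(w)
    gain = h_before - h_after                # information gain of the split
    return gain / split_info if split_info > 0 else 0.0

# Toy example: a feature that separates the classes well scores higher.
labels  = np.array([1, 1, 1, 0, 0, 0])
feature = np.array([0, 0, 1, 1, 2, 2])
print(gain_ratio(feature, labels))
```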
Statistical methods for convergence detection of multi-objective evolutionary algorithms.
Trautmann, H; Wagner, T; Naujoks, B; Preuss, M; Mehnen, J
2009-01-01
In this paper, two approaches for estimating the generation in which a multi-objective evolutionary algorithm (MOEA) shows statistically significant signs of convergence are introduced. A set-based perspective is taken, where convergence is measured by performance indicators. The proposed techniques fulfil the requirements of proper statistical assessment on the one hand and efficient optimisation for real-world problems on the other. The first approach accounts for the stochastic nature of the MOEA by repeating the optimisation runs for increasing generation numbers and analysing the performance indicators using statistical tools; this results in a very robust offline procedure. An online convergence detection method is introduced as well, which automatically stops the MOEA when either the variance of the performance indicators falls below a specified threshold or a stagnation of their overall trend is detected. Both methods are analysed and compared for two MOEAs on different classes of benchmark functions. It is shown that the methods successfully operate on all stated problems, needing fewer function evaluations while preserving good approximation quality.
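The online stopping rule described above can be sketched directly: track an indicator per generation and stop on low windowed variance or a stagnating trend. The indicator stream, window length and thresholds below are illustrative placeholders, not the authors' settings.

```python
import numpy as np

def should_stop(indicator_history, window=10, var_tol=1e-6, trend_tol=1e-4):
    if len(indicator_history) < window:
        return False
    recent = np.asarray(indicator_history[-window:])
    if recent.var() < var_tol:                       # variance criterion
        return True
    slope = np.polyfit(np.arange(window), recent, 1)[0]
    return abs(slope) < trend_tol                    # stagnating overall trend

history = []
for gen in range(1000):
    hv = 1.0 - np.exp(-0.05 * gen)                   # fake converging indicator
    history.append(hv)
    if should_stop(history):
        print("stop at generation", gen)
        break
```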
Optimisation in radiotherapy. III: Stochastic optimisation algorithms and conclusions.
Ebert, M
1997-12-01
This is the final article in a three part examination of optimisation in radiotherapy. Previous articles have established the bases and form of the radiotherapy optimisation problem, and examined certain types of optimisation algorithm, namely, those which perform some form of ordered search of the solution space (mathematical programming), and those which attempt to find the closest feasible solution to the inverse planning problem (deterministic inversion). The current paper examines algorithms which search the space of possible irradiation strategies by stochastic methods. The resulting iterative search methods move about the solution space by sampling random variates, which gradually become more constricted as the algorithm converges upon the optimal solution. This paper also discusses the implementation of optimisation in radiotherapy practice.
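Simulated annealing is the textbook member of the stochastic search family examined here: random moves are accepted with a probability that tightens as a temperature parameter falls, matching the gradually constricted sampling described above. A generic minimal sketch follows, with a stand-in objective rather than a radiotherapy cost function.

```python
import numpy as np

rng = np.random.default_rng(1)

def anneal(cost, x0, t0=1.0, cooling=0.995, steps=5000, sigma=0.5):
    x, fx = x0, cost(x0)
    best, fbest, t = x, fx, t0
    for _ in range(steps):
        cand = x + rng.normal(0.0, sigma, size=x.shape)   # random variate move
        fc = cost(cand)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if fc < fx or rng.random() < np.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fc < fbest:
                best, fbest = cand, fc
        t *= cooling                                      # constrict the search
    return best, fbest

x, f = anneal(lambda v: np.sum(v ** 2), rng.uniform(-5, 5, 3))
```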
Devikanniga, D; Joshua Samuel Raj, R
2018-04-01
Osteoporosis is a life-threatening disease that mostly affects women after menopause. It primarily causes mild bone fractures, which at an advanced stage can lead to death. Osteoporosis is diagnosed from bone mineral density (BMD) values obtained by various clinical methods at various skeletal regions. The main objective of the authors' work is to develop a hybrid classifier model that discriminates osteoporotic patients from healthy individuals based on BMD values. In this Letter, the authors propose a monarch butterfly optimisation-based artificial neural network classifier that helps in the earlier diagnosis and prevention of osteoporosis. The experiments were conducted using 10-fold cross-validation on two datasets, lumbar spine and femoral neck, and the results were compared with other similar hybrid approaches. The proposed method achieved an accuracy, specificity and sensitivity of 97.9% ± 0.14, 98.33% ± 0.03 and 95.24% ± 0.08, respectively, for the lumbar spine dataset, and 99.3% ± 0.16, 99.2% ± 0.13 and 100%, respectively, for the femoral neck dataset. Its performance was further compared using receiver operating characteristic analysis and the Wilcoxon signed-rank test. The results proved that the proposed classifier is efficient and outperformed the other approaches in all cases.
An imperialist competitive algorithm for virtual machine placement in cloud computing
NASA Astrophysics Data System (ADS)
Jamali, Shahram; Malektaji, Sepideh; Analoui, Morteza
2017-05-01
Cloud computing, the recently emerged revolution in the IT industry, is empowered by virtualisation technology. In this paradigm, the user's applications run on virtual machines (VMs). The process of selecting proper physical machines to host these virtual machines is called virtual machine placement, and it plays an important role in the resource utilisation and power efficiency of cloud computing environments. In this paper, we propose an imperialist competitive-based algorithm for the virtual machine placement problem, called ICA-VMPLC. ICA is chosen as the base optimisation algorithm because of its ease of neighbourhood movement, good convergence rate and suitable terminology. The proposed algorithm investigates the search space in a unique manner to efficiently obtain an optimal placement solution that simultaneously minimises power consumption and total resource wastage. Its final solution is compared with several existing methods, such as grouping genetic and ant colony-based algorithms, as well as a bin packing heuristic. The simulation results show that the proposed method is superior to the other tested algorithms in terms of power consumption, resource wastage, CPU usage efficiency and memory usage efficiency.
Suwannarangsee, Surisa; Bunterngsook, Benjarat; Arnthong, Jantima; Paemanee, Atchara; Thamchaipenet, Arinthip; Eurwilaichitr, Lily; Laosiripojana, Navadol; Champreda, Verawat
2012-09-01
A synergistic enzyme system for the hydrolysis of alkali-pretreated rice straw was optimised based on the synergy of crude fungal enzyme extracts with a commercial cellulase (Celluclast™). Among 13 enzyme extracts, the preparation from Aspergillus aculeatus BCC 199 exhibited the highest level of synergy with Celluclast™, based on the complementary cellulolytic and hemicellulolytic activities of the BCC 199 extract. A mixture design was used to optimise a ternary enzyme complex combining the synergistic enzyme mixture with Bacillus subtilis expansin. Using the full cubic model, the optimal formulation of the enzyme mixture was predicted to be Celluclast™:BCC 199:expansin = 41.4:37.0:21.6 (%), which produced 769 mg reducing sugar/g biomass using 2.82 FPU/g of enzymes. This work demonstrated a systematic approach for the design and optimisation of a synergistic mixture of fungal enzymes and expansin for lignocellulose degradation.
Efficient exploration of chemical space by fragment-based screening.
Hall, Richard J; Mortenson, Paul N; Murray, Christopher W
2014-01-01
Screening methods seek to sample a vast chemical space in order to identify starting points for further chemical optimisation. Fragment-based drug discovery exploits the superior sampling of chemical space that can be achieved when the molecular weight is restricted. Here we show that commercially available fragment space is still relatively poorly sampled and argue for highly sensitive screening methods to allow the detection of smaller fragments. We analyse the properties of our fragment library versus those of the X-ray hits derived from it, particularly considering properties related to the degree of planarity of the fragments.
NASA Astrophysics Data System (ADS)
Yadav, Naresh Kumar; Kumar, Mukesh; Gupta, S. K.
2017-03-01
The general strategic bidding procedure has been formulated in the literature as a bi-level search problem, in which the offer curve tends to minimise the market clearing function and maximise the profit. This is computationally complex, and researchers have therefore adopted Karush-Kuhn-Tucker (KKT) optimality conditions to transform the model into a single-level maximisation problem. However, the profit maximisation problem with KKT optimality conditions poses a great challenge to classical optimisation algorithms, and the problem becomes more complex once transmission constraints are included. This paper simplifies the profit maximisation problem to a minimisation function in which the transmission constraints, the operating limits and the ISO market clearing functions are considered without KKT optimality conditions. The derived function is solved using the group search optimiser (GSO), a robust population-based optimisation algorithm. Experimental investigation is carried out on IEEE 14-bus and IEEE 30-bus systems, and the performance is compared against differential evolution-based, genetic algorithm-based and particle swarm optimisation-based strategic bidding methods. The simulation results demonstrate that the profit obtained through GSO-based bidding strategies is higher than with the other three methods.
Optimisation of olive oil phenol extraction conditions using a high-power probe ultrasonication.
Jerman Klen, T; Mozetič Vodopivec, B
2012-10-15
A new method of ultrasound probe-assisted liquid-liquid extraction (US-LLE), combined with a freeze-based fat precipitation clean-up and HPLC-DAD-FLD-MS detection, is described for extra virgin olive oil (EVOO) phenol analysis. Three extraction variables (solvent type: 100%, 80% and 50% methanol; sonication time: 5, 10 and 20 min; extraction steps: 1-5) and two clean-up methods (n-hexane washing vs. low-temperature fat precipitation) were studied and optimised with the aim of maximising the extracts' phenol recoveries. A three-step extraction of 10 min with pure methanol (5 mL) resulted in the highest phenol content of freeze-based defatted extracts (667 μg GAE g(-1)) from 10 g of EVOO, providing much higher efficiency (up to 68%) and repeatability (up to 51%) than its non-sonicated counterpart (LLE-agitation) and n-hexane washing. In addition, the overall method provided high linearity (r(2)≥0.97), precision (RSD: 0.4-9.3%) and sensitivity, with LODs/LOQs ranging from 0.03 to 0.16 μg g(-1) and from 0.10 to 0.51 μg g(-1) of EVOO, respectively.
NASA Astrophysics Data System (ADS)
Luo, Bin; Lin, Lin; Zhong, ShiSheng
2018-02-01
In this research, we propose a preference-guided optimisation algorithm for multi-criteria decision-making (MCDM) problems with interval-valued fuzzy preferences. The interval-valued fuzzy preferences are first decomposed into a series of precise and evenly distributed preference vectors (reference directions) over the objectives to be optimised, on the basis of a uniform design strategy. The preference information is then incorporated into the preference vectors using the boundary intersection approach, and the MCDM problem is reformulated as a series of single-objective optimisation sub-problems, each corresponding to a decomposed preference vector. Finally, a preference-guided optimisation algorithm based on MOEA/D (multi-objective evolutionary algorithm based on decomposition) is proposed to solve the sub-problems in a single run. The algorithm incorporates the preference vectors into the optimisation process, guiding the search towards a more promising subset of the efficient solutions matching the interval-valued fuzzy preferences. Numerous test instances and an engineering application are employed to validate the performance of the proposed algorithm, and the results demonstrate its effectiveness and feasibility.
Floating-to-Fixed-Point Conversion for Digital Signal Processors
NASA Astrophysics Data System (ADS)
Menard, Daniel; Chillet, Daniel; Sentieys, Olivier
2006-12-01
Digital signal processing applications are specified with floating-point data types, but they are usually implemented in embedded systems with fixed-point arithmetic to minimise cost and power consumption. Methodologies which automatically establish the fixed-point specification are therefore required to reduce the application time-to-market. In this paper, a new methodology for floating-to-fixed-point conversion is proposed for software implementations. The aim of our approach is to determine the fixed-point specification which minimises the code execution time for a given accuracy constraint. Compared with previous methodologies, our approach takes the DSP architecture into account to optimise the fixed-point formats, and the conversion process is coupled with the code generation process. The fixed-point data types and the positions of the scaling operations are optimised to reduce the code execution time. To evaluate the fixed-point computation accuracy, an analytical approach is used, reducing the optimisation time compared with existing simulation-based methods. The methodology stages are described, and several experimental results are presented to underline the efficiency of this approach.
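The core float-to-fixed step can be illustrated compactly: choose an integer word length from the signal's dynamic range, quantise to the remaining fractional bits, and measure the resulting accuracy. The sketch below shows only this kernel, with illustrative word lengths and signals; the methodology described above additionally propagates formats through the data-flow graph and optimises the placement of scaling operations.

```python
import numpy as np

def fixed_point_format(signal, total_bits=16):
    """Return (integer_bits, fractional_bits) covering the signal's range."""
    max_abs = np.max(np.abs(signal))
    int_bits = int(np.floor(np.log2(max_abs))) + 1   # bits for the magnitude
    frac_bits = total_bits - 1 - int_bits            # 1 bit reserved for sign
    return int_bits, frac_bits

def quantise(signal, frac_bits):
    """Round to the nearest representable fixed-point value."""
    step = 2.0 ** -frac_bits
    return np.round(signal / step) * step

x = np.sin(np.linspace(0, 10, 100)) * 3.7            # floating-point reference
_, frac = fixed_point_format(x)
xq = quantise(x, frac)
sqnr_db = 10 * np.log10(np.sum(x**2) / np.sum((x - xq)**2))  # accuracy metric
```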
Guillaume, Y C; Peyrin, E
2000-03-06
A chemometric methodology is proposed to study the separation of seven p-hydroxybenzoic esters in reversed-phase liquid chromatography (RPLC). Fifteen experiments were found to be necessary to fit a mathematical model linking a novel chromatographic response function (CRF) with the column temperature, the water fraction in the mobile phase and its flow rate. The CRF optimum was determined using a new algorithm based on Glover's taboo search (TS). A flow rate of 0.9 ml min(-1), a water fraction of 0.64 in the ACN-water mixture and a column temperature of 10 °C gave the most efficient separation conditions. The usefulness of TS was compared with pure random search (PRS) and simplex search (SS). As demonstrated by the calculations, the algorithm avoids entrapment in local minima and continues the search to give a near-optimal final solution. Unlike other methods of global optimisation, this procedure is generally applicable, easy to implement, derivative-free and conceptually simple, and could be used in the future for much more complex optimisation problems.
Escalated convergent artificial bee colony
NASA Astrophysics Data System (ADS)
Jadon, Shimpi Singh; Bansal, Jagdish Chand; Tiwari, Ritu
2016-03-01
The artificial bee colony (ABC) optimisation algorithm is a recent, fast and easy-to-implement population-based metaheuristic. ABC has proved competitive with popular swarm intelligence-based algorithms such as particle swarm optimisation, the firefly algorithm and ant colony optimisation. The solution search equation of ABC is influenced by a random quantity which aids exploration at the cost of exploitation. To obtain faster convergence while maintaining exploitation capability, this paper modifies the basic ABC in two ways. First, to improve exploitation, two local search strategies, classical unidimensional local search and levy flight random walk-based local search, are incorporated into ABC. Furthermore, a new solution search strategy, stochastic diffusion scout search, is proposed and incorporated into the scout bee phase to give abandoned solutions more chance to improve. The efficiency of the proposed algorithm is tested on 20 benchmark functions of different complexities and characteristics. The results are very promising and show it to be a competitive algorithm in the field of swarm intelligence-based algorithms.
A novel global Harmony Search method based on Ant Colony Optimisation algorithm
NASA Astrophysics Data System (ADS)
Fouad, Allouani; Boukhetala, Djamel; Boudjema, Fares; Zenger, Kai; Gao, Xiao-Zhi
2016-03-01
The Global-best Harmony Search (GHS) is a recently developed stochastic optimisation algorithm which hybridises the Harmony Search (HS) method with the swarm intelligence concept of particle swarm optimisation (PSO) to enhance its performance. In this article, a new optimisation algorithm called GHSACO is developed by incorporating the Ant Colony Optimisation algorithm (ACO) into GHS. Our method introduces a novel improvisation process, which differs from that of GHS in the following aspects: (i) a modified harmony memory (HM) representation and conception; (ii) a global random switching mechanism to monitor the choice between ACO and GHS; and (iii) an additional memory consideration selection rule using the ACO random proportional transition rule with a pheromone trail update mechanism. The proposed GHSACO algorithm has been applied to various benchmark functions and constrained optimisation problems. Simulation results demonstrate that it can find significantly better solutions than the original HS and some of its variants.
NASA Astrophysics Data System (ADS)
Böing, F.; Murmann, A.; Pellinger, C.; Bruckmeier, A.; Kern, T.; Mongin, T.
2018-02-01
The expansion of capacities in the German transmission grid is a necessity for the further integration of renewable energy sources into the electricity sector. In this paper, the grid optimisation measures ‘Overhead Line Monitoring’, ‘Power-to-Heat’ and ‘Demand Response in the Industry’ are evaluated and compared against conventional grid expansion for the year 2030. The methodical approach of the simulation model is presented first, with detailed descriptions of the grid model and the grid data used, which partly originate from open-source platforms. The paper then explains how curtailment and redispatch can be reduced by implementing grid optimisation measures, and how the depreciation of economic costs can be determined when construction costs are taken into account. The simulations show that conventional grid expansion is more efficient and provides greater grid relief than the evaluated grid optimisation measures.
VLSI Technology for Cognitive Radio
NASA Astrophysics Data System (ADS)
VIJAYALAKSHMI, B.; SIDDAIAH, P.
2017-08-01
One of the most challenging tasks in cognitive radio is achieving an efficient spectrum sensing scheme to overcome the spectrum scarcity problem. The most popular and widely used spectrum sensing technique is energy detection, as it is very simple and requires no prior information about the signal. We propose one such approach: an optimised spectrum sensing scheme with a reduced filter structure, where the optimisation targets the area and power performance of the design. The VLSI structure of the optimised flexible spectrum sensing scheme is simulated in Verilog using the Xilinx ISE software. Our method achieves a 13% reduction in area and a 66% reduction in power consumption compared with the flexible spectrum sensing scheme. All results are tabulated and compared. Our model thus opens up a new scheme for optimised and effective spectrum sensing.
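Energy detection itself reduces to comparing the summed signal energy against a noise-calibrated threshold. A minimal sketch follows, using the standard Gaussian approximation of the chi-square test statistic to set the threshold; the sample count, noise power and false-alarm target are illustrative, and the paper's reduced-filter VLSI realisation is not modelled here.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
N, noise_var, pfa = 1024, 1.0, 0.01

# Threshold from the Gaussian approximation of the energy statistic under H0:
# mean N*sigma^2, standard deviation sqrt(2N)*sigma^2.
threshold = noise_var * (N + norm.ppf(1 - pfa) * np.sqrt(2 * N))

def energy_detect(samples):
    return np.sum(np.abs(samples) ** 2) > threshold   # True -> band occupied

noise_only = rng.normal(0, np.sqrt(noise_var), N)
with_signal = noise_only + np.sin(2 * np.pi * 0.1 * np.arange(N))
print(energy_detect(noise_only), energy_detect(with_signal))
```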
NASA Astrophysics Data System (ADS)
Eriksen, Janus J.
2017-09-01
It is demonstrated how the non-proprietary OpenACC standard of compiler directives may be used to compactly and efficiently accelerate the rate-determining steps of two of the most routinely applied many-body methods of electronic structure theory, namely the second-order Møller-Plesset (MP2) model in its resolution-of-the-identity approximated form and the (T) triples correction to the coupled cluster singles and doubles model (CCSD(T)). By means of compute directives and optimised device math libraries, the operations involved in the energy kernels have been ported to graphics processing unit (GPU) accelerators, and the associated data transfers correspondingly optimised to such a degree that the final implementations (using double- and/or single-precision arithmetic) are capable of scaling to systems as large as allowed by the capacity of the host central processing unit (CPU) main memory. The performance of the hybrid CPU/GPU implementations is assessed through calculations on test systems of alanine amino acid chains using one-electron basis sets of increasing size (ranging from double- to pentuple-ζ quality). For all but the smallest problem sizes, the optimised accelerated codes (using a single multi-core CPU host node in conjunction with six GPUs) are found to reduce the total time-to-solution by at least an order of magnitude over optimised, OpenMP-threaded CPU-only reference implementations.
Formulation and optimisation of raft-forming chewable tablets containing H2 antagonist
Prajapati, Shailesh T; Mehta, Anant P; Modhia, Ishan P; Patel, Chhagan N
2012-01-01
Purpose: The purpose of this research work was to formulate raft-forming chewable tablets of an H2 antagonist (famotidine) using a raft-forming agent along with antacid- and gas-generating agents. Materials and Methods: Tablets were prepared by wet granulation and evaluated for raft strength, acid neutralisation capacity, weight variation, % drug content, thickness, hardness, friability and in vitro drug release. Various raft-forming agents were used in preliminary screening. A 2³ full-factorial design was used for optimisation, with the amounts of sodium alginate, calcium carbonate and sodium bicarbonate as independent variables and raft strength, acid neutralisation capacity and drug release at 30 min as responses. Results: Tablets containing sodium alginate had the highest raft strength compared with the other raft-forming agents. The acid neutralisation capacity and in vitro drug release of all factorial batches were satisfactory. The F5 batch was optimised based on maximum raft strength and good acid neutralisation capacity. A drug-excipient compatibility study showed no interaction between the drug and excipients, and a stability study of the optimised formulation showed that the tablets were stable under accelerated environmental conditions. Conclusion: Raft-forming chewable tablets prepared using optimum amounts of sodium alginate, calcium carbonate and sodium bicarbonate could be an efficient dosage form in the treatment of gastro-oesophageal reflux disease. PMID:23580933
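A 2³ full-factorial design simply enumerates every combination of low/high levels of the three factors, giving eight formulation runs. A minimal sketch follows; the level values are hypothetical, chosen only to illustrate the design matrix.

```python
from itertools import product

# Hypothetical low/high levels for the three independent variables.
levels = {
    "sodium_alginate_mg":    (100, 200),
    "calcium_carbonate_mg":  (250, 500),
    "sodium_bicarbonate_mg": (100, 200),
}

# All 2^3 = 8 combinations form the factorial batches F1..F8.
runs = [dict(zip(levels, combo)) for combo in product(*levels.values())]
for i, run in enumerate(runs, 1):
    print(f"F{i}: {run}")
# Each run is then evaluated for raft strength, acid neutralisation capacity
# and 30-min drug release, and a regression on the coded factors follows.
```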
Optimisation of logistics processes of energy grass collection
NASA Astrophysics Data System (ADS)
Bányai, Tamás
2010-05-01
The collection of energy grass is a logistics-intensive process [1]. The optimal design and control of transportation and collection subprocesses is a critical point of the supply chain. To avoid irresponsible decisions made purely on experience and intuition, the optimisation and analysis of collection processes based on mathematical models and methods is the scientifically defensible way. Within the frame of this work, the author focuses on the optimisation possibilities of the collection processes, especially from the point of view of transportation and related warehousing operations. Although the optimisation methods developed in the literature [2] take into account harvesting processes, county-specific yields, transportation distances, erosion constraints, machinery specifications and other key variables, the possibility of multiple collection points and multi-level collection has not been taken into consideration. The possible uses of energy grass are very wide (energetic use, biogas and bio-alcohol production, paper and textile industry, industrial fibre material, foddering purposes, biological soil protection [3], etc.), so not only a single-level but also a multi-level collection system, with several collection and production facilities, has to be taken into consideration. The input parameters of the optimisation problem are the following: the total amount of energy grass to be harvested in each region; the specific facility costs of collection, warehousing and production units; the specific costs of transportation resources; the pre-scheduling of the harvesting process; the specific transportation and warehousing costs; and the pre-scheduling of the processing of energy grass at each facility (exclusive warehousing). The model makes the following assumptions: (1) cooperative relations exist among processing and production facilities; (2) capacity constraints are taken into account; (3) the cost function of transportation is non-linear; and (4) the drivers' conditions are ignored. The objective function of the optimisation is the maximisation of profit, i.e. the maximisation of the difference between revenue and cost; it trades off the income of the assigned transportation demands against the logistics costs. The constraints are the following: (1) the free capacity of the assigned transportation resource is no less than the requested capacity of the transportation demand; (2) the calculated arrival time of the transportation resource at the harvesting place is no later than its requested arrival time; (3) the calculated arrival time of the transportation demand at the processing and production facility is no later than the requested arrival time; and (4) one transportation demand is assigned to one transportation resource and one resource is assigned to one transportation demand. The decision variables of the optimisation problem are the set of scheduling variables and the assignment of resources to transportation demands. The evaluation parameters of the optimised system are the following: the total cost of the collection process; the utilisation of transportation resources and warehouses; and the efficiency of production and/or processing facilities. The multidimensional heuristic optimisation method is based on a genetic algorithm, while the routing sequence is optimised by an ant colony algorithm.
The optimal routes are calculated by the ant colony algorithm as a subroutine of the global optimisation method, and the optimal assignment is given by the genetic algorithm. An important part of the mathematical method is the sensitivity analysis of the objective function, which shows the influence of the different input parameters. Acknowledgements: This research was implemented within the frame of the project entitled "Development and operation of the Technology and Knowledge Transfer Centre of the University of Miskolc", with support from the European Union and co-funding from the European Social Fund. References: [1] P. R. Daniel: The Economics of Harvesting and Transporting Corn Stover for Conversion to Fuel Ethanol: A Case Study for Minnesota. University of Minnesota, Department of Applied Economics, 2006. http://ideas.repec.org/p/ags/umaesp/14213.html [2] T. G. Douglas, J. Brendan, D. Erin & V.-D. Becca: Energy and Chemicals from Native Grasses: Production, Transportation and Processing Technologies Considered in the Northern Great Plains. University of Minnesota, Department of Applied Economics, 2006. http://ideas.repec.org/p/ags/umaesp/13838.html [3] Homepage of energy grass: www.energiafu.hu
Application of the adjoint optimisation of shock control bump for ONERA-M6 wing
NASA Astrophysics Data System (ADS)
Nejati, A.; Mazaheri, K.
2017-11-01
This article is devoted to the numerical investigation of the shock wave/boundary layer interaction (SWBLI) as the main factor influencing the aerodynamic performance of transonic bumped airfoils and wings. The numerical analysis is conducted for the ONERA-M6 wing through a shock control bump (SCB) shape optimisation process using the adjoint optimisation method. SWBLI is analysed for both clean and bumped airfoils and wings, and it is shown how the modified wave structure originating upstream of the SCB reduces the wave drag by improving the boundary layer velocity profile downstream of the shock wave. The numerical simulation of the turbulent viscous flow and a gradient-based adjoint algorithm are used to find the optimum location and shape of the SCB for the ONERA-M6 airfoil and wing. Two different geometrical models are introduced for the 3D SCB, one with linear variations and another with periodic variations. Both configurations result in drag reduction and improvement in the aerodynamic efficiency, but the periodic model is more effective. Although the three-dimensional flow structure involves many more complexities, the overall results are shown to be similar to the two-dimensional case.
Ogungbenro, Kayode; Aarons, Leon
2011-08-01
In recent years, interest in the application of experimental design theory to population pharmacokinetic (PK) and pharmacodynamic (PD) experiments has increased. The aim is to improve the efficiency and precision with which parameters are estimated during data analysis and, in some cases, to increase the power and reduce the sample size required for hypothesis testing. The population Fisher information matrix (PFIM) has been described for uniresponse and multiresponse population PK experiments for design evaluation and optimisation. Despite these developments and the availability of tools for the optimal design of population PK and PD experiments, much of the effort has focused on repeated continuous variable measurements, with less work on repeated discrete-type measurements. Discrete data arise mainly in PD studies, e.g. ordinal, nominal, dichotomous or count measurements. This paper implements expressions for the PFIM for repeated ordinal, dichotomous and count measurements based on analysis by a mixed-effects modelling technique. Three simulation studies were used to investigate the performance of the expressions: Example 1 is based on repeated dichotomous measurements, Example 2 on repeated count measurements and Example 3 on repeated ordinal measurements. Data simulated in MATLAB were analysed using NONMEM (Laplace method) and the glmmML package in R (Laplace and adaptive Gauss-Hermite quadrature methods). The results for Examples 1 and 2 showed good agreement between the relative standard errors obtained using the PFIM and the simulations, while the results for Example 3 showed the importance of sampling at the most informative time points. Implementation of these expressions will provide the opportunity for the efficient design of population PD experiments involving discrete-type data, through design evaluation and optimisation.
Genetic algorithm-based improved DOA estimation using fourth-order cumulants
NASA Astrophysics Data System (ADS)
Ahmed, Ammar; Tufail, Muhammad
2017-05-01
Genetic algorithm (GA)-based direction of arrival (DOA) estimation is proposed using fourth-order cumulants (FOC) and the ESPRIT principle, resulting in a Multiple Invariance Cumulant ESPRIT algorithm. In existing FOC ESPRIT formulations, only one invariance is utilised to estimate DOAs; the unused multiple invariances (MIs) should be exploited simultaneously to improve estimation accuracy. In this paper, a fitness function based on a carefully designed cumulant matrix is developed which incorporates the MIs present in the sensor array, and better DOA estimation is achieved by minimising this fitness function. The effectiveness of Newton's method as well as GA for this optimisation problem is illustrated. Simulation results show that the proposed algorithm provides improved estimation accuracy compared with existing algorithms, especially for low SNR, few snapshots, closely spaced sources and high signal and noise correlation. It is also observed that optimisation using Newton's method is more likely to converge to false local optima, yielding erroneous results, whereas GA-based optimisation is attractive due to its global optimisation capability.
Optimisation of reconstruction-reprojection-based motion correction for cardiac SPECT.
Kangasmaa, Tuija S; Sohlberg, Antti O
2014-07-01
Cardiac motion is a challenging cause of image artefacts in myocardial perfusion SPECT. A wide range of motion correction methods have been developed over the years, and so far automatic algorithms based on the reconstruction-reprojection principle have proved the most effective. However, these methods have not been fully optimised in terms of their free parameters and implementation details. Two slightly different implementations of reconstruction-reprojection-based motion correction were optimised for effective, good-quality motion correction and then compared with each other. The first (Method 1) was the traditional reconstruction-reprojection motion correction algorithm, in which the correction is done in projection space, whereas the second (Method 2) performed the correction in reconstruction space. The optimised parameters included the type of cost function used to compare measured and reprojected projections (squared difference, normalised cross-correlation and mutual information) and the number of iterations needed. The methods were tested with motion-corrupted projection datasets generated by adding three different types of motion (lateral shift, vertical shift and vertical creep) to motion-free cardiac perfusion SPECT studies. Method 2 performed slightly better overall than Method 1, but the difference between the two implementations was small, and the execution time for Method 2 was much longer, which limits its clinical usefulness. The mutual information cost function gave clearly the best results for all three motion sets and both correction methods, and three iterations were sufficient for a good-quality correction using Method 1. The traditional reconstruction-reprojection-based method with three update iterations and a mutual information cost function is therefore a good option for motion correction in clinical myocardial perfusion SPECT.
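The mutual information cost that performed best here can be computed from a joint histogram of the measured and reprojected projections. A minimal sketch follows, with an illustrative bin count and a toy shift search standing in for the motion estimation step.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0                                     # avoid log(0)
    return np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))

# A motion search shifts the measured projection and keeps the offset that
# maximises MI against the reprojection:
rng = np.random.default_rng(3)
reference = rng.random((64, 64))
shifted = np.roll(reference, 2, axis=0) + 0.05 * rng.random((64, 64))
scores = {dy: mutual_information(np.roll(shifted, -dy, axis=0), reference)
          for dy in range(-4, 5)}
print(max(scores, key=scores.get))                   # expected offset: 2
```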
A shrinking hypersphere PSO for engineering optimisation problems
NASA Astrophysics Data System (ADS)
Yadav, Anupam; Deep, Kusum
2016-03-01
Many real-world and engineering design problems can be formulated as constrained optimisation problems (COPs), and swarm intelligence techniques are a good approach to solving them. In this paper an efficient shrinking hypersphere-based particle swarm optimisation (SHPSO) algorithm is proposed for constrained optimisation. SHPSO is designed so that the movement of each particle is influenced by shrinking hyperspheres, and a parameter-free approach is used to handle the constraints. The performance of SHPSO is compared against state-of-the-art algorithms on a set of 24 benchmark problems, with an exhaustive statistical and graphical comparison of the results. Moreover, three engineering design problems, namely welded beam design, compression spring design and pressure vessel design, are solved using SHPSO and the results are compared with the state-of-the-art algorithms.
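One plausible reading of the shrinking-hypersphere idea is a standard PSO update whose moves are clipped to a contracting ball around each personal best. The sketch below implements that reading with typical PSO coefficients; it is an illustrative interpretation, not the authors' exact SHPSO update or constraint-handling scheme.

```python
import numpy as np

rng = np.random.default_rng(5)
n, dim, iters = 20, 5, 200
w, c1, c2 = 0.7, 1.5, 1.5

def cost(x):                                  # stand-in unconstrained objective
    return np.sum(x ** 2)

X = rng.uniform(-10, 10, (n, dim)); V = np.zeros((n, dim))
P, pf = X.copy(), np.apply_along_axis(cost, 1, X)
g = P[pf.argmin()].copy()

for t in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)   # canonical update
    X = X + V
    radius = 10.0 * (1 - t / iters)                     # shrinking hypersphere
    d = np.maximum(np.linalg.norm(X - P, axis=1, keepdims=True), 1e-12)
    X = np.where(d > radius, P + (X - P) * radius / d, X)  # clip to the ball
    f = np.apply_along_axis(cost, 1, X)
    better = f < pf
    P[better], pf[better] = X[better], f[better]
    g = P[pf.argmin()].copy()
```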
Crystal structure optimisation using an auxiliary equation of state
NASA Astrophysics Data System (ADS)
Jackson, Adam J.; Skelton, Jonathan M.; Hendon, Christopher H.; Butler, Keith T.; Walsh, Aron
2015-11-01
Standard procedures for local crystal-structure optimisation involve numerous energy and force calculations. It is common to calculate an energy-volume curve, fitting an equation of state around the equilibrium cell volume. This is a computationally intensive process, in particular, for low-symmetry crystal structures where each isochoric optimisation involves energy minimisation over many degrees of freedom. Such procedures can be prohibitive for non-local exchange-correlation functionals or other "beyond" density functional theory electronic structure techniques, particularly where analytical gradients are not available. We present a simple approach for efficient optimisation of crystal structures based on a known equation of state. The equilibrium volume can be predicted from one single-point calculation and refined with successive calculations if required. The approach is validated for PbS, PbTe, ZnS, and ZnTe using nine density functionals and applied to the quaternary semiconductor Cu2ZnSnS4 and the magnetic metal-organic framework HKUST-1.
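The approach can be illustrated with the 3rd-order Birch-Murnaghan form, a common choice of equation of state: once a handful of energy-volume points are available, fitting the EOS pins down the equilibrium volume without an isochoric relaxation at every volume. A minimal sketch follows; the energies below are synthetic stand-ins for electronic-structure results.

```python
import numpy as np
from scipy.optimize import curve_fit

def birch_murnaghan(V, E0, V0, B0, Bp):
    """3rd-order Birch-Murnaghan energy-volume relation."""
    eta = (V0 / V) ** (2.0 / 3.0)
    return E0 + 9 * V0 * B0 / 16 * ((eta - 1) ** 3 * Bp
                                    + (eta - 1) ** 2 * (6 - 4 * eta))

# Synthetic "calculated" energies around a true minimum at V0 = 40.
V = np.linspace(34, 46, 7)
E = birch_murnaghan(V, -10.0, 40.0, 0.6, 4.5)
E += 1e-4 * np.random.default_rng(0).normal(size=V.size)   # numerical noise

popt, _ = curve_fit(birch_murnaghan, V, E, p0=(E.min(), V.mean(), 1.0, 4.0))
print("predicted equilibrium volume:", popt[1])
```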
NASA Astrophysics Data System (ADS)
Dubey, M.; Chandra, H.; Kumar, Anil
2016-02-01
A thermal model for the performance evaluation of a gas turbine cogeneration system with reheat is presented in this paper. The Joule-Brayton cogeneration reheat cycle has been optimised based on the total useful energy rate (TUER), and the efficiency at maximum TUER is determined. The variation of the maximum dimensionless TUER, and of the efficiency at maximum TUER, with cycle temperature ratio has also been analysed. The results show that the dimensionless maximum TUER and the corresponding thermal efficiency decrease as the power-to-heat ratio increases, and that the inclusion of reheat significantly improves the overall performance of the cycle. From a thermodynamic performance point of view, this methodology may be quite useful in the selection and comparison of combined energy production systems.
Optimisation of intradermal DNA electrotransfer for immunisation.
Vandermeulen, Gaëlle; Staes, Edith; Vanderhaeghen, Marie Lise; Bureau, Michel Francis; Scherman, Daniel; Préat, Véronique
2007-12-04
The development of DNA vaccines requires appropriate delivery technologies, and electrotransfer is one of the most efficient methods of non-viral gene transfer. In the present study, intradermal DNA electrotransfer was first optimised: strong effects of the injection method and the DNA dose on luciferase expression were demonstrated. Pre-treatments were evaluated to enhance DNA diffusion in the skin, but neither hyaluronidase injection nor iontophoresis improved the efficiency of intradermal DNA electrotransfer. DNA immunisation with a weakly immunogenic model antigen, luciferase, was then investigated. After intradermal injection of the plasmid encoding luciferase, electrotransfer (HV: 700 V/cm, 100 μs; LV: 200 V/cm, 400 ms) was required to induce an immune response, which was Th1-shifted compared with immunisation with the recombinant luciferase protein. Finally, DNA electrotransfer into the skin, the muscle or the ear pinna was compared. Muscle DNA electrotransfer resulted in the highest luciferase expression and the best IgG response. Nevertheless, electrotransfer into the skin, the muscle and the ear pinna all resulted in IFN-gamma secretion by luciferase-stimulated splenocytes, suggesting that an efficient Th1 response was induced in all cases.
Development and Analysis of New Integrated Energy Systems for Sustainable Buildings
NASA Astrophysics Data System (ADS)
Khalid, Farrukh
Excessive consumption of fossil fuels in the residential sector, and the associated negative environmental impacts, pose a significant challenge to engineers in research and industry worldwide to develop more environmentally benign methods of meeting the energy needs of this sector. This thesis addresses potential solutions to fossil fuel consumption in residential buildings. Three novel renewable energy based multigeneration systems are proposed for different types of residential buildings, and a comprehensive assessment of energetic and exergetic performance is given on the basis of total occupancy, energy load and climate conditions. System 1 is a multigeneration system based on two renewable energy sources, biomass and solar, delivering electricity, space heating, cooling and hot water. Its energy and exergy efficiencies are 91.0% and 34.9%, respectively, and the optimisation analysis gives a net present cost of 2,700,496 and a levelised cost of electricity of 0.117/kWh. System 2 integrates three renewable energy based subsystems: a wind turbine, a concentrated solar collector, and an Organic Rankine Cycle supplied by a ground source heat exchanger, delivering electricity, hot water, heating and cooling. The optimisation analysis gives a net present cost of 35,502 and a levelised cost of electricity of 0.186/kWh, with energy and exergy efficiencies of 34.6% and 16.2%, respectively. System 3 comprises two renewable energy subsystems, geothermal and solar, supplying power, cooling, heating and hot water. The optimisation analysis gives a net present cost of 598,474 and a levelised cost of electricity of 0.111/kWh; its energy and exergy efficiencies are 20.2% and 19.2%, respectively. A performance assessment under identical conditions indicates that System 3 offers the best performance, with a minimum net present cost of 26,001 and a levelised cost of electricity of 0.136/kWh.
NASA Astrophysics Data System (ADS)
Kuz'mina, M. S.; Khazanov, E. A.
2015-05-01
We consider methods for enhancing the temporal contrast of super-high-power laser pulses based on the conversion of radiation polarisation in a medium with cubic nonlinearity. For a medium with weak birefringence and isotropic nonlinearity, we propose a new scheme to enhance the temporal contrast. For a medium with anisotropic nonlinearity, the efficiency of the temporal contrast optimisation is shown to depend not only on the spatial orientation of the crystal and the B-integral, but also on the type of crystal lattice symmetry.
A waste characterisation procedure for ADM1 implementation based on degradation kinetics.
Girault, R; Bridoux, G; Nauleau, F; Poullain, C; Buffet, J; Steyer, J-P; Sadowski, A G; Béline, F
2012-09-01
In this study, a procedure accounting for degradation kinetics was developed to split the total COD of a substrate into the input state variables required for Anaerobic Digestion Model No. 1 (ADM1). The procedure combines batch experimental degradation tests ("anaerobic respirometry") with numerical interpretation of the results (optimisation of the ADM1 input state variable set). The effects of the main operating parameters, such as the substrate-to-inoculum ratio in the batch experiments and the origin of the inoculum, were investigated. Combined with biochemical fractionation of the total COD of the substrates, the method enabled determination of an ADM1-consistent input state variable set for each substrate with affordable identifiability. The substrate-to-inoculum ratio and the origin of the inoculum influenced the input state variables; however, based on results modelled for a CSTR fed with the substrate concerned, these effects were not significant. If the optimal ranges of these operational parameters are respected, uncertainty in COD fractionation is mainly limited to the temporal variability of the substrate properties. As the method is based on kinetics and is easy to implement for a wide range of substrates, it is a very promising way to numerically predict the effect of design parameters on the efficiency of an anaerobic CSTR, and it thus promotes the use of modelling for the design and optimisation of anaerobic processes.
NASA Astrophysics Data System (ADS)
Munk, David J.; Kipouros, Timoleon; Vio, Gareth A.; Steven, Grant P.; Parks, Geoffrey T.
2017-11-01
Recently, the study of microfluidic devices has gained much interest in fields from biology to engineering. In the constant development cycle, the need to optimise the topology of the interior of these devices, where there are two or more optimality criteria, is always present. In this work, twin physical situations are considered, whereby optimal fluid mixing, in the form of vorticity maximisation, is accompanied by the requirement that the casing in which the mixing takes place has the best structural performance in terms of the greatest specific stiffness. In the steady state of mixing this also means that the stresses in the casing are as uniform as possible, giving the desired operating life with minimum weight. The ultimate aim of this research is to couple two key disciplines, fluids and structures, into a topology optimisation framework which shows fast convergence for multidisciplinary optimisation problems. This is achieved by developing a bi-directional evolutionary structural optimisation algorithm directly coupled to the Lattice Boltzmann method, used for simulating the flow in the microfluidic device, for the objectives of minimum compliance and maximum vorticity. The need to explore larger design spaces and produce innovative designs makes meta-heuristic algorithms, such as genetic algorithms, particle swarms and Tabu Searches, less efficient for this task. The multidisciplinary topology optimisation framework presented in this article is shown to increase the stiffness of the structure from the datum case and to produce physically acceptable designs. Furthermore, the topology optimisation method outperforms a Tabu Search algorithm in designing the baffle to maximise the mixing of the two fluids.
Biomass supply chain optimisation for Organosolv-based biorefineries.
Giarola, Sara; Patel, Mayank; Shah, Nilay
2014-05-01
This work aims at providing a Mixed Integer Linear Programming modelling framework to help define planning strategies for the development of sustainable biorefineries. The up-scaling of an Organosolv biorefinery was addressed via optimisation of the whole system economics. Three real world case studies were addressed to show the high-level flexibility and wide applicability of the tool to model different biomass typologies (i.e. forest fellings, cereal residues and energy crops) and supply strategies. Model outcomes have revealed how supply chain optimisation techniques could help shed light on the development of sustainable biorefineries. Feedstock quality, quantity, temporal and geographical availability are crucial to determine biorefinery location and the cost-efficient way to supply the feedstock to the plant. Storage costs are relevant for biorefineries based on cereal stubble, while wood supply chains present dominant pretreatment operations costs. Copyright © 2014 Elsevier Ltd. All rights reserved.
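The core of such planning models is a mixed integer linear programme. Below is a heavily simplified sketch with invented data (PuLP is assumed as the solver interface; the real model also covers storage, pretreatment, time periods and feedstock quality): choose which plant to open and how much biomass each site ships to it.

    import pulp

    sites, plants = ["A", "B", "C"], ["P1", "P2"]
    supply = {"A": 200, "B": 150, "C": 300}                     # kt/y available per site
    fixed = {"P1": 5e6, "P2": 4e6}                              # annualised capital cost
    tcost = {("A", "P1"): 12, ("A", "P2"): 20, ("B", "P1"): 15,
             ("B", "P2"): 9, ("C", "P1"): 22, ("C", "P2"): 14}  # transport cost per kt
    demand = 250                                                # kt/y needed by the open plant

    m = pulp.LpProblem("biorefinery_location", pulp.LpMinimize)
    open_p = pulp.LpVariable.dicts("open", plants, cat="Binary")
    ship = pulp.LpVariable.dicts("ship", list(tcost), lowBound=0)
    m += pulp.lpSum(fixed[p] * open_p[p] for p in plants) + \
         pulp.lpSum(tcost[k] * ship[k] for k in tcost)
    m += pulp.lpSum(open_p[p] for p in plants) == 1             # open exactly one plant
    for p in plants:
        m += pulp.lpSum(ship[(s, p)] for s in sites) == demand * open_p[p]
    for s in sites:
        m += pulp.lpSum(ship[(s, p)] for p in plants) <= supply[s]
    m.solve()
    print({p: open_p[p].value() for p in plants}, pulp.value(m.objective))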
Optimisation of GaN LEDs and the reduction of efficiency droop using active machine learning
Rouet-Leduc, Bertrand; Barros, Kipton Marcos; Lookman, Turab; ...
2016-04-26
A fundamental challenge in the design of LEDs is to maximise electro-luminescence efficiency at high current densities. We simulate GaN-based LED structures that delay the onset of efficiency droop by spreading carrier concentrations evenly across the active region. Statistical analysis and machine learning effectively guide the selection of the next LED structure to be examined based upon its expected efficiency as well as model uncertainty. This active learning strategy rapidly constructs a model that predicts Poisson-Schrödinger simulations of devices, and that simultaneously produces structures with higher simulated efficiencies.
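The loop described above follows the familiar Bayesian-optimisation pattern. A hedged sketch, with a toy quadratic "simulator" standing in for the Poisson-Schrodinger device simulation and an upper-confidence-bound rule standing in for the paper's acquisition criterion:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def simulate(x):                           # toy efficiency landscape (assumption)
        return -np.sum((x - 0.3) ** 2, axis=-1)

    rng = np.random.default_rng(0)
    X = rng.random((5, 3))                     # 5 initial structures, 3 design parameters
    y = simulate(X)
    candidates = rng.random((500, 3))
    for _ in range(20):
        gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
        mu, sd = gp.predict(candidates, return_std=True)
        pick = int(np.argmax(mu + 2.0 * sd))   # favour high prediction and high uncertainty
        X = np.vstack([X, candidates[pick]])
        y = np.append(y, simulate(candidates[pick]))
        candidates = np.delete(candidates, pick, axis=0)
    print("best simulated efficiency found:", y.max().round(4))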
NASA Astrophysics Data System (ADS)
El Houda Thabet, Rihab; Combastel, Christophe; Raïssi, Tarek; Zolghadri, Ali
2015-09-01
The paper develops a set membership detection methodology which is applied to the detection of abnormal positions of aircraft control surfaces. Robust and early detection of such abnormal positions is an important issue for early system reconfiguration and overall optimisation of aircraft design. In order to improve fault sensitivity while ensuring a high level of robustness, the method combines a data-driven characterisation of noise and a model-driven approach based on interval prediction. The efficiency of the proposed methodology is illustrated through simulation results obtained based on data recorded in several flight scenarios of a highly representative aircraft benchmark.
Bonmati, Ester; Hu, Yipeng; Gibson, Eli; Uribarri, Laura; Keane, Geri; Gurusami, Kurinchi; Davidson, Brian; Pereira, Stephen P; Clarkson, Matthew J; Barratt, Dean C
2018-06-01
Navigation of endoscopic ultrasound (EUS)-guided procedures of the upper gastrointestinal (GI) system can be technically challenging due to the small fields-of-view of ultrasound and optical devices, as well as the anatomical variability and limited number of orienting landmarks during navigation. Co-registration of an EUS device and a pre-procedure 3D image can enhance the ability to navigate. However, the fidelity of this contextual information depends on the accuracy of registration. The purpose of this study was to develop and test the feasibility of a simulation-based planning method for pre-selecting patient-specific EUS-visible anatomical landmark locations to maximise the accuracy and robustness of a feature-based multimodality registration method. A registration approach was adopted in which landmarks are registered to anatomical structures segmented from the pre-procedure volume. The predicted target registration errors (TREs) of EUS-CT registration were estimated using simulated visible anatomical landmarks and a Monte Carlo simulation of landmark localisation error. The optimal planes were selected based on the 90th percentile of TREs, which provide a robust and more accurate EUS-CT registration initialisation. The method was evaluated by comparing the accuracy and robustness of registrations initialised using optimised planes versus non-optimised planes using manually segmented CT images and simulated ([Formula: see text]) or retrospective clinical ([Formula: see text]) EUS landmarks. The results show a lower 90th percentile TRE when registration is initialised using the optimised planes compared with a non-optimised initialisation approach (p value [Formula: see text]). The proposed simulation-based method to find optimised EUS planes and landmarks for EUS-guided procedures may have the potential to improve registration accuracy. Further work will investigate applying the technique in a clinical setting.
Semantic distance as a critical factor in icon design for in-car infotainment systems.
Silvennoinen, Johanna M; Kujala, Tuomo; Jokinen, Jussi P P
2017-11-01
In-car infotainment systems require icons that enable fluent cognitive information processing and safe interaction while driving. An important issue is how to find an optimised set of icons for different functions in terms of semantic distance. In an optimised icon set, every icon needs to be semantically as close as possible to the function it visually represents and semantically as far as possible from the other functions represented concurrently. In three experiments (N = 21 each), semantic distances of 19 icons to four menu functions were studied with preference rankings, verbal protocols, and the primed product comparisons method. The results show that the primed product comparisons method can be efficiently utilised for finding an optimised set of icons for time-critical applications out of a larger set of icons. The findings indicate the benefits of the novel methodological perspective into the icon design for safety-critical contexts in general. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Sundaramoorthy, Kumaravel
2017-02-01
Electricity generation based on hybrid energy systems (HESs) has become an attractive solution for rural electrification. Economically feasible and technically reliable HESs are solidly based on an optimisation stage. This article discusses an optimal unit-sizing model whose objective function minimises the total cost of the HES. Three typical rural sites from the southern part of India were selected for the application of the developed optimisation methodology. Feasibility studies and sensitivity analysis on the optimal HES are discussed elaborately in this article. A comparison has been carried out with the Hybrid Optimization Model for Electric Renewable (HOMER) optimisation model for the three sites. The optimal HES is found with a lower total net present cost and cost of energy compared with the existing method.
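In its simplest form, unit sizing of this kind is a search over component counts. The sketch below is a deliberately crude, brute-force stand-in for the article's model (all yields, costs and the autonomy rule are invented; the real objective is a net present cost over the project lifetime):

    import itertools

    load_kwh = 1200.0                               # daily site load (assumption)
    pv_kwh, wt_kwh, bat_kwh = 4.0, 30.0, 10.0       # daily energy / usable capacity per unit
    pv_cost, wt_cost, bat_cost = 900, 6000, 400     # capital cost per unit (assumption)

    best = None
    for n_pv, n_wt, n_bat in itertools.product(range(0, 301, 10), range(11), range(51)):
        energy = n_pv * pv_kwh + n_wt * wt_kwh
        storage = n_bat * bat_kwh
        if energy >= load_kwh and storage >= 0.3 * load_kwh:    # crude feasibility rule
            cost = n_pv * pv_cost + n_wt * wt_cost + n_bat * bat_cost
            if best is None or cost < best[0]:
                best = (cost, n_pv, n_wt, n_bat)
    print("cheapest feasible HES (cost, PV, WT, batteries):", best)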
Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets.
Scharfe, Michael; Pielot, Rainer; Schreiber, Falk
2010-01-11
Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. We evaluate the CBE-driven PlayStation 3 as a high performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch reducing and loop unrolling techniques with special attention to 32-bit multiplies and limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. The results demonstrate that the CBE processor in a PlayStation 3 accelerates computational intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3 as a low cost CBE-based platform offers an efficient option to conventional hardware to solve computational problems in image processing and bioinformatics.
A domain specific language for performance portable molecular dynamics algorithms
NASA Astrophysics Data System (ADS)
Saunders, William Robert; Grant, James; Müller, Eike Hermann
2018-03-01
Developers of Molecular Dynamics (MD) codes face significant challenges when adapting existing simulation packages to new hardware. In a continuously diversifying hardware landscape it becomes increasingly difficult for scientists to be experts both in their own domain (physics/chemistry/biology) and specialists in the low level parallelisation and optimisation of their codes. To address this challenge, we describe a "Separation of Concerns" approach for the development of parallel and optimised MD codes: the science specialist writes code at a high abstraction level in a domain specific language (DSL), which is then translated into efficient computer code by a scientific programmer. In a related context, an abstraction for the solution of partial differential equations with grid based methods has recently been implemented in the (Py)OP2 library. Inspired by this approach, we develop a Python code generation system for molecular dynamics simulations on different parallel architectures, including massively parallel distributed memory systems and GPUs. We demonstrate the efficiency of the auto-generated code by studying its performance and scalability on different hardware and compare it to other state-of-the-art simulation packages. With growing data volumes the extraction of physically meaningful information from the simulation becomes increasingly challenging and requires equally efficient implementations. A particular advantage of our approach is the easy expression of such analysis algorithms. We consider two popular methods for deducing the crystalline structure of a material from the local environment of each atom, show how they can be expressed in our abstraction and implement them in the code generation framework.
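The "separation of concerns" idea can be illustrated without any of the actual framework. In the toy sketch below (not the authors' API), the domain scientist supplies only a per-pair kernel; the framework owns the pair loop, which in the real system is what gets translated into optimised parallel code:

    import numpy as np

    def pair_loop(positions, kernel, cutoff):
        """Framework side: apply `kernel` to every pair within `cutoff`."""
        n = len(positions)
        forces = np.zeros_like(positions)
        for i in range(n):
            for j in range(i + 1, n):
                r = positions[j] - positions[i]
                if np.dot(r, r) < cutoff ** 2:
                    f = kernel(r)
                    forces[i] -= f
                    forces[j] += f
        return forces

    def lj_kernel(r, eps=1.0, sigma=1.0):
        """Science side: Lennard-Jones pair force, written with no parallel concerns."""
        s6 = (sigma ** 2 / np.dot(r, r)) ** 3
        return 24.0 * eps * (2.0 * s6 ** 2 - s6) / np.dot(r, r) * r

    pos = np.random.default_rng(0).random((50, 3)) * 5.0
    print(pair_loop(pos, lj_kernel, cutoff=2.5).sum(axis=0))   # total force ~ 0 by symmetry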
A target recognition method for maritime surveillance radars based on hybrid ensemble selection
NASA Astrophysics Data System (ADS)
Fan, Xueman; Hu, Shengliang; He, Jingbo
2017-11-01
In order to improve the generalisation ability of the maritime surveillance radar, a novel ensemble selection technique, termed Optimisation and Dynamic Selection (ODS), is proposed. During the optimisation phase, the non-dominated sorting genetic algorithm II for multi-objective optimisation is used to find the Pareto front, i.e. a set of ensembles of classifiers representing different tradeoffs between the classification error and diversity. During the dynamic selection phase, the meta-learning method is used to predict whether a candidate ensemble is competent enough to classify a query instance based on three different aspects, namely, feature space, decision space and the extent of consensus. The classification performance and time complexity of ODS are compared against nine other ensemble methods using a self-built full polarimetric high resolution range profile data-set. The experimental results clearly show the effectiveness of ODS. In addition, the influence of the selection of diversity measures is studied concurrently.
Boundary element based multiresolution shape optimisation in electrostatics
NASA Astrophysics Data System (ADS)
Bandara, Kosala; Cirak, Fehmi; Of, Günther; Steinbach, Olaf; Zapletal, Jan
2015-09-01
We consider the shape optimisation of high-voltage devices subject to electrostatic field equations by combining fast boundary elements with multiresolution subdivision surfaces. The geometry of the domain is described with subdivision surfaces and different resolutions of the same geometry are used for optimisation and analysis. The primal and adjoint problems are discretised with the boundary element method using a sufficiently fine control mesh. For shape optimisation the geometry is updated starting from the coarsest control mesh with increasingly finer control meshes. The multiresolution approach effectively prevents the appearance of non-physical geometry oscillations in the optimised shapes. Moreover, there is no need for mesh regeneration or smoothing during the optimisation due to the absence of a volume mesh. We present several numerical experiments and one industrial application to demonstrate the robustness and versatility of the developed approach.
Fuss, Franz Konstantin
2013-01-01
Standard methods for computing the fractal dimensions of time series are usually tested with continuous nowhere differentiable functions, but not benchmarked with actual signals. Therefore they can produce opposite results in extreme signals. These methods also use different scaling methods, that is, different amplitude multipliers, which makes it difficult to compare fractal dimensions obtained from different methods. The purpose of this research was to develop an optimisation method that computes the fractal dimension of a normalised (dimensionless) and modified time series signal with a robust algorithm and a running average method, and that maximises the difference between two fractal dimensions, for example, a minimum and a maximum one. The signal is modified by transforming its amplitude by a multiplier, which has a non-linear effect on the signal's time derivative. The optimisation method identifies the optimal multiplier of the normalised amplitude for targeted decision making based on fractal dimensions. The optimisation method provides an additional filter effect and makes the fractal dimensions less noisy. The method is exemplified by, and explained with, different signals, such as human movement, EEG, and acoustic signals.
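The amplitude-multiplier idea only matters for estimators that are not scale-invariant. The sketch below uses Katz's estimator, which treats the signal as a planar curve and therefore responds to amplitude scaling (an assumption for illustration; the paper defines its own robust algorithm and running-average step), and scans for the multiplier that maximises the difference between two signals' dimensions:

    import numpy as np

    def katz_fd(x):
        """Katz fractal dimension of a signal viewed as a curve in the time-amplitude plane."""
        n = len(x) - 1                                  # number of steps
        t = np.linspace(0.0, 1.0, len(x))               # dimensionless time axis
        L = np.hypot(np.diff(t), np.diff(x)).sum()      # curve length
        d = np.hypot(t - t[0], x - x[0]).max()          # planar extent
        return np.log(n) / (np.log(n) + np.log(d / L))

    rng = np.random.default_rng(0)
    sig_a = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 1000))   # smooth signal
    sig_b = rng.standard_normal(1000).cumsum()                # rough signal
    norm = lambda s: (s - s.mean()) / s.std()                 # normalised, dimensionless
    multipliers = np.logspace(-2, 2, 41)
    gap = [abs(katz_fd(m * norm(sig_b)) - katz_fd(m * norm(sig_a))) for m in multipliers]
    print("multiplier maximising the FD difference:", multipliers[int(np.argmax(gap))])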
NASA Astrophysics Data System (ADS)
Azadeh, A.; Foroozan, H.; Ashjari, B.; Motevali Haghighi, S.; Yazdanparast, R.; Saberi, M.; Torki Nejad, M.
2017-10-01
Information systems (ISs) and information technologies (ITs) play a critical role in large, complex gas corporations. Many factors, such as human, organisational and environmental factors, affect an IS in an organisation; therefore, investigating IS success is a complex problem. Also, because of the competitive business environment and the high amount of information flow in organisations, new issues like resilient ISs and successful customer relationship management (CRM) have emerged. A resilient IS will provide sustainable delivery of information to internal and external customers. This paper presents an integrated approach to enhance and optimise the performance of each component of a large IS based on CRM and resilience engineering (RE) in a gas company. The enhancement of the performance can help ISs to perform business tasks efficiently. The data are collected from standard questionnaires and then analysed by data envelopment analysis, selecting the optimal mathematical programming approach. The selected model is validated and verified by the principal component analysis method. Finally, CRM and RE factors are identified as influential factors through sensitivity analysis for this particular case study. To the best of our knowledge, this is the first study of performance assessment and optimisation of a large IS combining RE and CRM.
An improved PSO-SVM model for online recognition of defects in eddy current testing
NASA Astrophysics Data System (ADS)
Liu, Baoling; Hou, Dibo; Huang, Pingjie; Liu, Banteng; Tang, Huayi; Zhang, Wubo; Chen, Peihua; Zhang, Guangxin
2013-12-01
Accurate and rapid recognition of defects is essential for structural integrity and health monitoring of in-service devices using eddy current (EC) non-destructive testing. This paper introduces a novel model-free method that includes three main modules: a signal pre-processing module, a classifier module and an optimisation module. In the signal pre-processing module, a two-stage differential structure is proposed to suppress the lift-off fluctuation that could contaminate the EC signal. In the classifier module, a multi-class support vector machine (SVM) based on the one-against-one strategy is utilised for its good accuracy. In the optimisation module, the optimal parameters of the classifier are obtained by an improved particle swarm optimisation (IPSO) algorithm. The proposed IPSO technique improves the convergence performance of the primary PSO through the following strategies: nonlinear processing of the inertia weight, and the introduction of black hole and simulated annealing models with extremum disturbance. The good generalisation ability of the IPSO-SVM model has been validated by adding additional specimens to the testing set. Experiments show that the proposed algorithm can achieve higher recognition accuracy and efficiency than other well-known classifiers, and the superiority is more obvious with a smaller training set, which contributes to online application.
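One of the listed IPSO ingredients is easy to show in isolation. Below is a basic PSO loop with a nonlinearly decreasing inertia weight (the exponential schedule is an illustrative assumption; the black hole and simulated annealing extensions are omitted). The sphere function stands in for the SVM cross-validation error actually minimised when tuning the classifier:

    import numpy as np

    def pso(f, dim=2, n=30, iters=100, w0=0.9, w1=0.4, c1=2.0, c2=2.0):
        rng = np.random.default_rng(0)
        x = rng.uniform(-5, 5, (n, dim))
        v = np.zeros((n, dim))
        pbest, pval = x.copy(), f(x)
        g = pbest[np.argmin(pval)]
        for t in range(iters):
            w = w1 + (w0 - w1) * np.exp(-4.0 * t / iters)   # nonlinear inertia weight
            r1, r2 = rng.random((2, n, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = x + v
            val = f(x)
            better = val < pval
            pbest[better], pval[better] = x[better], val[better]
            g = pbest[np.argmin(pval)]
        return g, pval.min()

    print(pso(lambda x: (x ** 2).sum(axis=1)))   # converges towards the origin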
NASA Astrophysics Data System (ADS)
Wang, Congsi; Wang, Yan; Wang, Zhihai; Wang, Meng; Yuan, Shuai; Wang, Weifeng
2018-04-01
It is well known that calculating and reducing the radar cross section (RCS) of an active phased array antenna (APAA) is both difficult and complicated, and balancing radiating and scattering performance while reducing the RCS remains an open problem. Therefore, this paper develops a coupled structure-scattering array factor model of the APAA based on the phase errors of the radiating elements generated by structural distortion and installation error of the array. To obtain optimal radiating and scattering performance, an integrated optimisation model is built to optimise the installation height of all the radiating elements in the normal direction of the array, in which the particle swarm optimisation method is adopted and the gain loss and scattering array factor are selected as the fitness function. The simulation indicates that the proposed coupling model and integrated optimisation method can effectively decrease the RCS while guaranteeing the necessary radiating performance, which demonstrates important application value in the engineering design and structural evaluation of APAAs.
Andrighetto, Luke M; Stevenson, Paul G; Pearson, James R; Henderson, Luke C; Conlan, Xavier A
2014-11-01
In-silico optimised two-dimensional high performance liquid chromatographic (2D-HPLC) separations of a model methamphetamine seizure sample are described, where an excellent match between simulated and real separations was observed. Targeted separation of model compounds was completed with significantly reduced method development time. This separation was completed in the heart-cutting mode of 2D-HPLC, where C18 columns were used in both dimensions, taking advantage of the selectivity difference between methanol and acetonitrile as the mobile phases. This method development protocol is most significant when optimising the separation of chemically similar compounds, as it eliminates potentially hours of trial-and-error injections to identify the optimised experimental conditions. After only four screening injections, the gradient profile for both 2D-HPLC dimensions could be optimised via simulations, ensuring the baseline resolution of the diastereomers (ephedrine and pseudoephedrine) in 9.7 min. Depending on which diastereomer is present, the potential synthetic pathway can be categorised.
NASA Astrophysics Data System (ADS)
Bell, L. R.; Dowling, J. A.; Pogson, E. M.; Metcalfe, P.; Holloway, L.
2017-01-01
Accurate, efficient auto-segmentation methods are essential for the clinical efficacy of adaptive radiotherapy delivered with highly conformal techniques. Current atlas-based auto-segmentation techniques are adequate in this respect; however, they fail to account for inter-observer variation. An atlas-based segmentation method that incorporates inter-observer variation is proposed. This method is validated on a whole breast radiotherapy cohort containing 28 CT datasets with CTVs delineated by eight observers. To optimise atlas accuracy, the cohort was divided into categories by mean body mass index and laterality, with atlases generated for each in a leave-one-out approach. Observer CTVs were merged and thresholded to generate an auto-segmentation model representing both inter-observer and inter-patient differences. For each category, the atlas was registered to the left-out dataset to enable propagation of the auto-segmentation from atlas space. Auto-segmentation time was recorded. The segmentation was compared to the gold-standard contour using the dice similarity coefficient (DSC) and mean absolute surface distance (MASD). Comparison with the smallest and largest CTV was also made. This atlas-based auto-segmentation method incorporating inter-observer variation was shown to be efficient (<4 min) and accurate for whole breast radiotherapy, with good agreement (DSC > 0.7, MASD < 9.3 mm) between the auto-segmented contours and CTV volumes.
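The two reported metrics have compact standard forms (definitions vary slightly between papers; these are the common ones, not necessarily the exact implementation used here):

    import numpy as np
    from scipy.spatial import cKDTree

    def dsc(a, b):
        """Dice similarity coefficient of two boolean masks."""
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    def masd(pts_a, pts_b):
        """Mean absolute surface distance between two (N, d) surface point sets."""
        d_ab = cKDTree(pts_b).query(pts_a)[0]
        d_ba = cKDTree(pts_a).query(pts_b)[0]
        return (d_ab.mean() + d_ba.mean()) / 2.0

    a = np.zeros((50, 50), bool); a[10:30, 10:30] = True
    b = np.zeros((50, 50), bool); b[12:32, 12:32] = True
    pa, pb = np.argwhere(a).astype(float), np.argwhere(b).astype(float)
    print("DSC:", round(dsc(a, b), 3), "MASD:", round(masd(pa, pb), 3))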
Aungkulanon, Pasura; Luangpaiboon, Pongchanun
2016-01-01
Response surface methods via first- or second-order models are important in manufacturing processes. This study, however, proposes differently structured mechanisms of vertical transportation systems (VTS) embedded in a shuffled frog leaping-based approach. Three VTS scenarios are considered: a motion reaching a normal operating velocity, and motions both reaching and not reaching transitional motion. These variants were used to simultaneously inspect multiple responses affected by machining parameters in multi-pass turning processes. The numerical results of two machining optimisation problems demonstrated the high performance of the proposed methods compared with other optimisation algorithms for an actual deep cut design.
NASA Astrophysics Data System (ADS)
Hadia, Sarman K.; Thakker, R. A.; Bhatt, Kirit R.
2016-05-01
The study proposes an application of evolutionary algorithms, specifically an artificial bee colony (ABC), a variant ABC and particle swarm optimisation (PSO), to extract the parameters of a metal-oxide-semiconductor field-effect transistor (MOSFET) model. These algorithms are applied to the MOSFET parameter extraction problem using a Pennsylvania surface potential model. MOSFET parameter extraction procedures involve reducing the error between measured and modelled data. This study shows that the ABC algorithm optimises the parameter values based on the intelligent activities of honey bee swarms. Some modifications have also been applied to the basic ABC algorithm. Particle swarm optimisation is a population-based stochastic optimisation method based on bird flocking activities. The performances of these algorithms are compared with respect to the quality of the solutions. The simulation results of this study show that the PSO algorithm performs better than the variant ABC and basic ABC algorithms for the parameter extraction of the MOSFET model; the implementation of the ABC algorithm is, however, simpler than that of the PSO algorithm.
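The basic ABC mechanics referred to above fit in a short sketch (a generic textbook ABC, not the paper's variant; a toy exponential model stands in for the surface-potential MOSFET model, with the sum of squared errors against "measured" data as the cost):

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 50)
    data = 2.0 * np.exp(-3.0 * t)                       # toy "measured" characteristic
    cost = lambda p: ((p[0] * np.exp(-p[1] * t) - data) ** 2).sum()

    n, dim, limit, lo, hi = 20, 2, 30, 0.0, 10.0
    x = rng.uniform(lo, hi, (n, dim))
    fit = np.array([cost(p) for p in x])
    trial = np.zeros(n)
    for _ in range(200):
        for phase in ("employed", "onlooker"):
            if phase == "employed":
                order = np.arange(n)
            else:                                       # onlookers revisit good sources
                prob = 1.0 / (1.0 + fit)
                order = rng.choice(n, size=n, p=prob / prob.sum())
            for i in order:
                k = (i + 1 + rng.integers(n - 1)) % n   # a neighbour distinct from i
                d = rng.integers(dim)
                v = x[i].copy()
                v[d] = np.clip(v[d] + rng.uniform(-1, 1) * (v[d] - x[k, d]), lo, hi)
                fv = cost(v)
                if fv < fit[i]:
                    x[i], fit[i], trial[i] = v, fv, 0
                else:
                    trial[i] += 1
        scouts = trial > limit                          # abandoned sources are re-seeded
        x[scouts] = rng.uniform(lo, hi, (int(scouts.sum()), dim))
        fit[scouts] = [cost(p) for p in x[scouts]]
        trial[scouts] = 0
    print("extracted parameters:", x[np.argmin(fit)].round(3))   # approaches (2, 3)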
Breuer, Christian; Lucas, Martin; Schütze, Frank-Walter; Claus, Peter
2007-01-01
A multi-criteria optimisation procedure based on genetic algorithms is carried out in search of advanced heterogeneous catalysts for total oxidation. Simple but flexible software routines have been created to be applied within a search space of more than 150,000 individuals. The general catalyst design includes mono-, bi- and trimetallic compositions assembled out of 49 different metals and deposited on an Al2O3 support in up to nine amount levels. As an efficient tool for high-throughput screening, perfectly matched to the requirements of heterogeneous gas-phase catalysis - especially for applications technically run in honeycomb structures - the multi-channel monolith reactor is implemented to evaluate the catalyst performances. From a multi-component feed gas, the conversion rates of carbon monoxide (CO) and a model hydrocarbon (HC) are monitored in parallel. In combination with further restrictions on preparation and pre-treatment, a primary screening can be conducted, promising to provide results close to technically applied catalysts. Presented are the resulting performances of the optimisation process for the first catalyst generations and the prospect of its auto-adaptation to specified optimisation goals.
NASA Astrophysics Data System (ADS)
Li, Guiqiang; Zhao, Xudong; Jin, Yi; Chen, Xiao; Ji, Jie; Shittu, Samson
2018-06-01
Geometrical optimisation is a valuable way to improve the efficiency of a thermoelectric (TE) element. In a hybrid photovoltaic-thermoelectric (PV-TE) system, the photovoltaic (PV) and thermoelectric (TE) components have a relatively complex relationship; their individual effects mean that geometrical optimisation of the TE element alone may not be sufficient to optimise the entire PV-TE hybrid system. In this paper, we introduce a parametric optimisation of the geometry of the thermoelectric element footprint for a PV-TE system. A uni-couple TE model was built for the PV-TE using the finite element method and temperature-dependent thermoelectric material properties. Two types of PV cells were investigated, and the performance of the PV-TE with different TE element lengths and footprint areas was analysed. The outcome showed that regardless of the TE element's length and footprint area, the maximum power output occurs when the ratio of the n-type to p-type footprint areas satisfies A_n/A_p = 1. This finding is useful, as it provides a reference whenever PV-TE optimisation is investigated.
Schutyser, M A I; Straatsma, J; Keijzer, P M; Verschueren, M; De Jong, P
2008-11-30
In the framework of a cooperative EU research project (MILQ-QC-TOOL), a web-based modelling tool (WebSim-MILQ) was developed for the optimisation of thermal treatments in the dairy industry. The web-based tool enables optimisation of thermal treatments with respect to product safety, quality and costs. It can be applied to existing products and processes, but also to reduce time to market for new products. Important aspects of the tool are its user-friendliness and its specifications customised to the needs of small dairy companies. To challenge the web-based tool, it was applied to the optimisation of thermal treatments in 16 dairy companies producing yoghurt, fresh cream, chocolate milk and cheese. Optimisation with WebSim-MILQ resulted in concrete improvements with respect to risk of microbial contamination, cheese yield, fouling and production costs. In this paper we illustrate the use of WebSim-MILQ for the optimisation of a cheese milk pasteurisation process, where we could increase the cheese yield (1 extra cheese for every 100 cheeses produced from the same amount of milk) and reduce the risk of contamination of pasteurised cheese milk with thermoresistant streptococci from critical to negligible. In another case we demonstrate the advantage of changing from an indirect to a direct heating method for a UHT process, resulting in 80% less fouling while improving product quality and maintaining product safety.
Xiao, Fuyuan; Aritsugi, Masayoshi; Wang, Qing; Zhang, Rong
2016-09-01
For efficient and sophisticated analysis of complex event patterns that appear in streams of big data from health care information systems, and to support decision-making, a triaxial hierarchical model is proposed in this paper. Our triaxial hierarchical model is developed by focusing on hierarchies among nested event pattern queries with an event concept hierarchy, thereby allowing us to identify the relationships among the expressions and sub-expressions of the queries extensively. We devise a cost-based heuristic by means of the triaxial hierarchical model to find an optimised query execution plan in terms of the costs of both the operators and the communications between them. According to the triaxial hierarchical model, we can also calculate how to reuse the results of the common sub-expressions in multiple queries. By integrating the optimised query execution plan with the reuse schemes, a multi-query optimisation strategy is developed to accomplish efficient processing of multiple nested event pattern queries. We present empirical studies in which the performance of the multi-query optimisation strategy was examined under various stream input rates and workloads. Specifically, the workloads of pattern queries can be used to support monitoring of patients' conditions. Experiments with varying stream input rates can correspond to changes in the number of patients that a system should manage, whereas burst input rates can correspond to sudden rushes of patients to be taken care of. The experimental results show that our proposal improves throughput by factors of about 4 and 2 over the two related works in Workload 1, by factors of about 3 and 2 in Workload 2, and by a factor of about 6 over the related work in Workload 3. These results demonstrate that our proposal can process complex queries efficiently, which can support health information systems and further decision-making. Copyright © 2016 Elsevier B.V. All rights reserved.
Adham, Manal T; Bentley, Peter J
2016-08-01
This paper proposes and evaluates a solution to the truck redistribution problem prominent in London's Santander Cycle scheme. Due to the complexity of this NP-hard combinatorial optimisation problem, no efficient optimisation techniques are known to solve the problem exactly. This motivates our use of the heuristic Artificial Ecosystem Algorithm (AEA) to find good solutions in a reasonable amount of time. The AEA is designed to take advantage of highly distributed computer architectures and adapt to changing problems. In the AEA a problem is first decomposed into its relative sub-components; they then evolve solution building blocks that fit together to form a single optimal solution. Three variants of the AEA centred on evaluating clustering methods are presented: the baseline AEA, the community-based AEA which groups stations according to journey flows, and the Adaptive AEA which actively modifies clusters to cater for changes in demand. We applied these AEA variants to the redistribution problem prominent in bike share schemes (BSS). The AEA variants are empirically evaluated using historical data from Santander Cycles to validate the proposed approach and prove its potential effectiveness. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Single tube genotyping of sickle cell anaemia using PCR-based SNP analysis
Waterfall, Christy M.; Cobb, Benjamin D.
2001-01-01
Allele-specific amplification (ASA) is a generally applicable technique for the detection of known single nucleotide polymorphisms (SNPs), deletions, insertions and other sequence variations. Conventionally, two reactions are required to determine the zygosity of DNA in a two-allele system, along with significant upstream optimisation to define the specific test conditions. Here, we combine single tube bi-directional ASA with a ‘matrix-based’ optimisation strategy, speeding up the whole process in a reduced reaction set. We use sickle cell anaemia as our model SNP system, a genetic disease that is currently screened using ASA methods. Discriminatory conditions were rapidly optimised enabling the unambiguous identification of DNA from homozygous sickle cell patients (HbS/S), heterozygous carriers (HbA/S) or normal DNA in a single tube. Simple downstream mathematical analyses based on product yield across the optimisation set allow an insight into the important aspects of priming competition and component interactions in this competitive PCR. This strategy can be applied to any polymorphism, defining specific conditions using a multifactorial approach. The inherent simplicity and low cost of this PCR-based method validates bi-directional ASA as an effective tool in future clinical screening and pharmacogenomic research where more expensive fluorescence-based approaches may not be desirable. PMID:11726702
Giri, Anupam; Zelinkova, Zuzana; Wenzl, Thomas
2017-12-01
For the implementation of Regulation (EC) No 2065/2003 related to smoke flavourings used or intended for use in or on foods a method based on solid-phase micro extraction (SPME) GC/MS was developed for the characterisation of liquid smoke products. A statistically based experimental design (DoE) was used for method optimisation. The best general conditions to quantitatively analyse the liquid smoke compounds were obtained with a polydimethylsiloxane/divinylbenzene (PDMS/DVB) fibre, 60°C extraction temperature, 30 min extraction time, 250°C desorption temperature, 180 s desorption time, 15 s agitation time, and 250 rpm agitation speed. Under the optimised conditions, 119 wood pyrolysis products including furan/pyran derivatives, phenols, guaiacol, syringol, benzenediol, and their derivatives, cyclic ketones, and several other heterocyclic compounds were identified. The proposed method was repeatable (RSD% <5) and the calibration functions were linear for all compounds under study. Nine isotopically labelled internal standards were used for improving quantification of analytes by compensating matrix effects that might affect headspace equilibrium and extractability of compounds. The optimised isotope dilution SPME-GC/MS based analytical method proved to be fit for purpose, allowing the rapid identification and quantification of volatile compounds in liquid smoke flavourings.
NASA Astrophysics Data System (ADS)
van Haveren, Rens; Ogryczak, Włodzimierz; Verduijn, Gerda M.; Keijzer, Marleen; Heijmen, Ben J. M.; Breedveld, Sebastiaan
2017-06-01
Previously, we have proposed Erasmus-iCycle, an algorithm for fully automated IMRT plan generation based on prioritised (lexicographic) multi-objective optimisation with the 2-phase ɛ-constraint (2pɛc) method. For each patient, the output of Erasmus-iCycle is a clinically favourable, Pareto optimal plan. The 2pɛc method uses a list of objective functions that are consecutively optimised, following a strict, user-defined prioritisation. The novel lexicographic reference point method (LRPM) is capable of solving multi-objective problems in a single optimisation, using a fuzzy prioritisation of the objectives. Trade-offs are made globally, aiming for large favourable gains for lower prioritised objectives at the cost of only slight degradations for higher prioritised objectives, or vice versa. In this study, the LRPM is validated for 15 head and neck cancer patients receiving bilateral neck irradiation. The generated plans using the LRPM are compared with the plans resulting from the 2pɛc method. Both methods were capable of automatically generating clinically relevant treatment plans for all patients. For some patients, the LRPM allowed large favourable gains in some treatment plan objectives at the cost of only small degradations for the others. Moreover, because of the applied single optimisation instead of multiple optimisations, the LRPM reduced the average computation time from 209.2 to 9.5 min, a speed-up factor of 22 relative to the 2pɛc method.
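The prioritised scheme can be mimicked in a few lines of generic optimisation code. A hedged sketch of the 2pɛc spirit (toy objectives, not dose functions; a small slack turns each achieved optimum into a constraint for the lower-priority objectives):

    import numpy as np
    from scipy.optimize import minimize

    objectives = [lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2,   # priority 1
                  lambda x: (x[1] - 2.0) ** 2]               # priority 2

    x0, cons = np.array([0.0, 0.0]), []
    for f in objectives:
        res = minimize(f, x0, method="SLSQP", constraints=cons)
        bound = res.fun + 1e-3                               # lock the achieved value, with slack
        cons = cons + [{"type": "ineq", "fun": lambda x, f=f, b=bound: b - f(x)}]
        x0 = res.x
    print("lexicographic solution:", x0.round(3))   # x1 moves only as far as the slack allows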
Honeybee economics: optimisation of foraging in a variable world.
Stabentheiner, Anton; Kovac, Helmut
2016-06-20
In honeybees, fast and efficient exploitation of nectar and pollen sources is achieved by persistent endothermy throughout the foraging cycle, which entails extremely high energy costs. The need for food promotes maximisation of the intake rate, and the high costs call for energetic optimisation. Experiments on how honeybees resolve this conflict have to consider that foraging takes place in a variable environment with respect to microclimate and food quality and availability. Here we report, from simultaneous measurements of energy costs, gains, intake rate and efficiency, how honeybee foragers manage this challenge in their highly variable environment. If possible, during unlimited sucrose flow, they follow an 'investment-guided' ('time is honey') economic strategy promising increased returns. They maximise the net intake rate by investing both their own heat production and solar heat to increase body temperature to a level which guarantees a high suction velocity. They switch to an 'economising' ('save the honey') optimisation of energetic efficiency if the intake rate is restricted by the food source, when an increased body temperature would not guarantee a high intake rate. With this flexible and graded change between economic strategies, honeybees can both maximise colony intake rate and optimise foraging efficiency in reaction to environmental variation.
Saikia, Sangeeta; Mahnot, Nikhil Kumar; Mahanta, Charu Lata
2015-03-15
Optimisation of the extraction of polyphenols from star fruit (Averrhoa carambola) pomace using response surface methodology was carried out. Two variables, temperature (°C) and ethanol concentration (%), with five levels (-1.414, -1, 0, +1 and +1.414) were used to design the optimisation model using a central composite rotatable design, where ±1.414 are the axial points, ±1 the factorial points and 0 the centre point of the design. A temperature of 40°C and an ethanol concentration of 65% were the optimised conditions for the response variables of total phenolic content, ferric reducing antioxidant capacity and 2,2-diphenyl-1-picrylhydrazyl scavenging activity. The reverse phase-high pressure liquid chromatography chromatogram of the polyphenol extract showed eight phenolic acids and ascorbic acid. The extract was then encapsulated with maltodextrin (⩽ DE 20) by spray and freeze drying methods at three different concentrations. The highest encapsulation efficiency was obtained for freeze-dried encapsulates (78-97%). The optimised model could be used for polyphenol extraction from star fruit pomace, and the microencapsulates can be incorporated into different food systems to enhance their antioxidant properties. Copyright © 2014 Elsevier Ltd. All rights reserved.
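For concreteness, the two-factor central composite rotatable design described above can be written out directly (the coded-to-actual step sizes of 10 °C and 15 % are illustrative assumptions; the 40 °C / 65 % centre point is the one reported):

    import numpy as np

    coded = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],               # factorial points
                      [-1.414, 0], [1.414, 0], [0, -1.414], [0, 1.414], # axial points
                      [0, 0], [0, 0], [0, 0]])                          # centre replicates
    centre = np.array([40.0, 65.0])     # temperature (°C), ethanol (%)
    step = np.array([10.0, 15.0])       # assumed change per coded unit
    for T, E in centre + coded * step:
        print(f"run: {T:5.1f} °C, {E:5.1f} % ethanol")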
Intelligent inversion method for pre-stack seismic big data based on MapReduce
NASA Astrophysics Data System (ADS)
Yan, Xuesong; Zhu, Zhixin; Wu, Qinghua
2018-01-01
Seismic exploration is a method of oil exploration that uses seismic information; that is, by inversion of seismic information, useful information about reservoir parameters can be obtained to carry out exploration effectively. Pre-stack data are characterised by a large volume of data and abundant information, and their inversion yields rich information on the reservoir parameters. Owing to the large amount of pre-stack seismic data, existing single-machine environments cannot meet the computational needs of such data volumes; thus, a method with the efficiency and speed to solve the inversion problem of pre-stack seismic data is urgently needed. Optimisation of the elastic parameters using a genetic algorithm easily falls into local optima, which degrades the inversion results, especially for density. Therefore, an intelligent optimisation algorithm is proposed in this paper and used for the elastic parameter inversion of pre-stack seismic data. This algorithm improves the population initialisation strategy by using the Gardner formula, as well as the genetic operators of the algorithm, and the improved algorithm obtains better inversion results in a model test with logging data. The elastic parameters obtained by inversion fit the logging curves of the theoretical model well, which effectively improves the inversion precision of the density. This algorithm was implemented with a MapReduce model to solve the seismic big data inversion problem. The experimental results show that the parallel model can effectively reduce the running time of the algorithm.
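The Gardner-based initialisation lends itself to a short sketch (a hedged reading of the strategy, using the standard Gardner relation rho = 0.31 * Vp^0.25 with Vp in m/s and rho in g/cm3; the ranges, the Vp/Vs rule and the jitter are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    pop_size, n_layers = 50, 20
    vp = rng.uniform(2000.0, 4500.0, (pop_size, n_layers))       # P-velocity (m/s)
    vs = vp / rng.uniform(1.6, 2.0, (pop_size, n_layers))        # crude Vp/Vs assumption
    rho = 0.31 * vp ** 0.25 * rng.normal(1.0, 0.02, vp.shape)    # Gardner relation + small jitter
    population = np.stack([vp, vs, rho], axis=-1)                # GA individuals
    print(population.shape)   # (50, 20, 3): pop_size x layers x (Vp, Vs, rho)

This couples the initial density guesses to physically plausible velocity-density pairs instead of sampling density independently, which is the stated motivation for using the Gardner formula.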
NASA Astrophysics Data System (ADS)
Hoell, Simon; Omenzetter, Piotr
2018-02-01
To advance the concept of smart structures in large systems, such as wind turbines (WTs), it is desirable to be able to detect structural damage early while using minimal instrumentation. Data-driven vibration-based damage detection methods can be competitive in that respect because global vibrational responses encompass the entire structure. Multivariate damage sensitive features (DSFs) extracted from acceleration responses enable the detection of changes in a structure via statistical methods. However, even though such DSFs contain information about the structural state, they may not be optimised for the damage detection task. This paper addresses this shortcoming by exploring a DSF projection technique specialised for statistical structural damage detection. High-dimensional initial DSFs are projected onto a low-dimensional space for improved damage detection performance and a simultaneous reduction of the computational burden. The technique is based on sequential projection pursuit, where the projection vectors are optimised one by one using an advanced evolutionary strategy. The approach is applied to laboratory experiments with a small-scale WT blade under wind-like excitations. Autocorrelation function coefficients calculated from acceleration signals are employed as DSFs. The optimal numbers of projection vectors are identified with the help of a fast forward selection procedure. To benchmark the proposed method, selections of original DSFs as well as principal component analysis scores from these features are additionally investigated. The optimised DSFs are tested for damage detection on previously unseen data from the healthy state and a wide range of damage scenarios. It is demonstrated that using selected subsets of the initial and transformed DSFs improves damage detectability compared to the full set of features. Furthermore, superior results can be achieved by projecting autocorrelation coefficients onto just a single optimised projection vector.
Achieving optimal SERS through enhanced experimental design.
Fisk, Heidi; Westley, Chloe; Turner, Nicholas J; Goodacre, Royston
2016-01-01
One of the current limitations surrounding surface-enhanced Raman scattering (SERS) is the perceived lack of reproducibility. SERS is indeed challenging, and for analyte detection, it is vital that the analyte interacts with the metal surface. However, as this is analyte dependent, there is not a single set of SERS conditions that are universal. This means that experimental optimisation for optimum SERS response is vital. Most researchers optimise one factor at a time, where a single parameter is altered first before going onto optimise the next. This is a very inefficient way of searching the experimental landscape. In this review, we explore the use of more powerful multivariate approaches to SERS experimental optimisation based on design of experiments and evolutionary computational methods. We particularly focus on colloidal-based SERS rather than thin film preparations as a result of their popularity. © 2015 The Authors. Journal of Raman Spectroscopy published by John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Milic, Vladimir; Kasac, Josip; Novakovic, Branko
2015-10-01
This paper is concerned with L2-gain optimisation of input-affine nonlinear systems controlled by an analytic fuzzy logic system. Unlike conventional fuzzy-based strategies, the non-conventional analytic fuzzy control method does not require an explicit fuzzy rule base. As the first contribution of this paper, we prove, by using the Stone-Weierstrass theorem, that the proposed fuzzy system without a rule base is a universal approximator. The second contribution is an algorithm for solving a finite-horizon minimax problem for L2-gain optimisation. The proposed algorithm consists of a recursive chain rule for first- and second-order derivatives, Newton's method, the multi-step Adams method and automatic differentiation. Finally, the results of this paper are evaluated on a second-order nonlinear system.
Optimising Service Delivery of AAC AT Devices and Compensating AT for Dyslexia.
Roentgen, Uta R; Hagedoren, Edith A V; Horions, Katrien D L; Dalemans, Ruth J P
2017-01-01
To promote successful use of Assistive Technology (AT) supporting Augmentative and Alternative Communication (AAC) and compensating for dyslexia, the last steps of their provision (delivery and instruction, use, maintenance and evaluation) were optimised. In co-creation with all stakeholders, an integral method and tools were developed based on a list of requirements.
Implementation study of wearable sensors for activity recognition systems
Ghassemian, Mona
2015-01-01
This Letter investigates and reports on a number of activity recognition methods for a wearable sensor system. The authors apply three methods for data transmission, namely ‘stream-based’, ‘feature-based’ and ‘threshold-based’ scenarios to study the accuracy against energy efficiency of transmission and processing power that affects the mote's battery lifetime. They also report on the impact of variation of sampling frequency and data transmission rate on energy consumption of motes for each method. This study leads us to propose a cross-layer optimisation of an activity recognition system for provisioning acceptable levels of accuracy and energy efficiency. PMID:26609413
NASA Astrophysics Data System (ADS)
Dittmar, N.; Haberstroh, Ch.; Hesse, U.; Krzyzowski, M.
2016-04-01
The transfer of liquid helium (LHe) into mobile dewars or transport vessels is a common and unavoidable process at LHe decant stations. During this transfer considerable amounts of LHe evaporate owing to heat leak and pressure drop. The helium gas thus generated needs to be collected and reliquefied, which requires a large amount of electrical energy. Therefore, the design of transfer lines used at LHe decant stations has been optimised to establish a LHe transfer with minor evaporation losses, which increases the overall efficiency and capacity of LHe decant stations. This paper presents the experimental results achieved during the thermohydraulic optimisation of a flexible LHe transfer line. An extensive measurement campaign with a set of dedicated transfer lines equipped with pressure and temperature sensors yielded unique experimental data on this specific transfer process. The experimental results cover the heat leak, the pressure drop, the transfer rate, the outlet quality, and the cool-down and warm-up behaviour of the examined transfer lines. Based on the obtained results, the design of the considered flexible transfer line has been optimised, featuring reduced heat leak and pressure drop.
Using Optimisation Techniques to Granulise Rough Set Partitions
NASA Astrophysics Data System (ADS)
Crossingham, Bodie; Marwala, Tshilidzi
2007-11-01
This paper presents an approach to optimising rough set partition sizes using various optimisation techniques. Three optimisation techniques are implemented to perform the granularisation process, namely genetic algorithm (GA), hill climbing (HC) and simulated annealing (SA). These optimisation methods maximise the classification accuracy of the rough sets. The proposed rough set partition method is tested on a set of demographic properties of individuals obtained from the South African antenatal survey. The three techniques are compared in terms of their computational time, accuracy and number of rules produced when applied to the Human Immunodeficiency Virus (HIV) data set. The results of the optimised methods are compared to a well-known non-optimised discretisation method, equal-width-bin partitioning (EWB). The accuracies achieved after optimising the partitions using GA, HC and SA are 66.89%, 65.84% and 65.48% respectively, compared to an accuracy of 59.86% for EWB. In addition to providing the plausibilities of the estimated HIV status, the rough sets also provide the linguistic rules describing how the demographic parameters drive the risk of HIV.
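The hill-climbing variant is the simplest to sketch: perturb one cut point of the partition and keep the move if classification accuracy improves. The evaluator below is a placeholder (majority vote per bin on toy data), standing in for the rough-set classifier built on the partitioned survey data:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=500)
    labels = (x > 0.2).astype(int)                       # toy attribute and outcome

    def accuracy(cuts):
        bins = np.digitize(x, np.sort(cuts))
        preds = np.zeros_like(labels)
        for b in np.unique(bins):
            m = bins == b
            preds[m] = int(round(labels[m].mean()))      # majority label in the bin
        return (preds == labels).mean()

    cuts = np.quantile(x, [0.25, 0.5, 0.75])             # start from equal-frequency bins
    best = accuracy(cuts)
    for _ in range(1000):
        cand = cuts.copy()
        cand[rng.integers(len(cand))] += rng.normal(0, 0.1)
        acc = accuracy(cand)
        if acc >= best:
            cuts, best = cand, acc
    print("optimised cuts:", np.sort(cuts).round(3), "accuracy:", round(best, 3))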
He, Lei; Fan, Tao; Hu, Jianguo; Zhang, Lijin
2015-01-01
In this study, a green solvent, polyethylene glycol (PEG), was employed for the ultrasound-assisted extraction (UAE) of magnolol and honokiol from Cortex Magnoliae Officinalis. The effects of PEG molecular weight, PEG concentration, sample size, pH, ultrasonic power and extraction time on the extraction of magnolol and honokiol were investigated to optimise the extraction conditions. Under the optimal extraction conditions, the PEG-based UAE gave higher extraction efficiencies of magnolol and honokiol than the ethanol-based UAE and traditional ethanol-reflux extraction. Furthermore, the correlation coefficient (R²), repeatability (relative standard deviation, n = 6) and recovery confirmed the validity of the proposed extraction method, which were 0.9993-0.9996, 3.1-4.6% and 92.3-106.8%, respectively.
On the optimisation of the use of 3He in radiation portal monitors
NASA Astrophysics Data System (ADS)
Tomanin, Alice; Peerani, Paolo; Janssens-Maenhout, Greet
2013-02-01
Radiation Portal Monitors (RPMs) are used to detect illicit trafficking of nuclear or other radioactive material concealed in vehicles, cargo containers or on people at strategic check points, such as borders, seaports and airports. Most of them include neutron detectors for the interception of potential plutonium smuggling. The most common technology used for neutron detection in RPMs is based on 3He proportional counters. The recent severe shortage of this rare and expensive gas has made it difficult for manufacturers to provide enough detectors to satisfy market demand. In this paper we analyse the design of typical commercial RPMs and optimise the detector parameters either to maximise the efficiency using the same amount of 3He, or to minimise the amount of gas needed to reach the same detection performance by reducing the volume or gas pressure in an optimised design.
A management and optimisation model for water supply planning in water deficit areas
NASA Astrophysics Data System (ADS)
Molinos-Senante, María; Hernández-Sancho, Francesc; Mocholí-Arce, Manuel; Sala-Garrido, Ramón
2014-07-01
The integrated water resources management approach has proven to be a suitable option for efficient, equitable and sustainable water management. In water-poor regions experiencing acute and/or chronic shortages, optimisation techniques are a useful tool for supporting the decision process of water allocation. In order to maximise the value of water use, an optimisation model was developed which involves multiple supply sources (conventional and non-conventional) and multiple users. Penalties, representing monetary losses in the event of an unfulfilled water demand, have been incorporated into the objective function. This model represents a novel approach which considers water distribution efficiency and the physical connections between water supply and demand points. Subsequent empirical testing using data from a Spanish Mediterranean river basin demonstrated the usefulness of the global optimisation model to solve existing water imbalances at the river basin level.
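Stripped to its essentials, such a model is a linear programme over deliveries with penalty terms for unmet demand. A hedged, invented-data sketch (two sources, two users, per-route distribution efficiencies):

    import numpy as np
    from scipy.optimize import linprog

    # x = [d11, d12, d21, d22, u1, u2]: delivery source i -> user j, unmet demand u_j
    cost = np.array([1.0, 2.5, 2.0, 1.2, 50.0, 80.0])    # transfer costs, then penalties
    supply = [100.0, 60.0]                               # source capacities
    demand = [90.0, 50.0]                                # user demands
    eff = [0.9, 0.8, 0.85, 0.95]                         # route efficiencies (losses)
    A_ub = [[1, 1, 0, 0, 0, 0],                          # source 1 capacity
            [0, 0, 1, 1, 0, 0]]                          # source 2 capacity
    A_eq = [[eff[0], 0, eff[2], 0, 1, 0],                # delivered + unmet = demand, user 1
            [0, eff[1], 0, eff[3], 0, 1]]                # delivered + unmet = demand, user 2
    res = linprog(cost, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand, method="highs")
    print("allocations and unmet demand:", res.x.round(2), "cost:", round(res.fun, 1))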
Gürkan, Ramazan; Altunay, Nail
2016-07-01
A simple and efficient ultrasonic-assisted extraction (UAE) procedure has been proposed for the pre-concentration of (2S)-2-[(4-{[(2-amino-4-hydroxypteridin-yl)methyl]amino}phenyl)formamido]pentanedioic acid (folic acid) in vegetables, pharmaceuticals and foods prior to determination at 540 nm using UV-Vis spectrophotometry. The method is based on hydrophobic ternary complex formation of folic acid with silver ions in the presence of cetyltrimethylammonium bromide (CTAB) as a sensitivity-enhancing counter ion at pH 7.0, followed by extraction into a micellar phase of polyethylene glycol monoalkyl ether (Genapol X-080). The impact of analytical parameters, such as sample pH, silver concentration, surfactant concentrations, extraction time, ultrasonic time and sample volume, on the extraction efficiency and complex formation was investigated and optimised in detail. The matrix effect on the pre-concentration and determination of folic acid was investigated, and it was observed that the proposed method was highly selective against possible matrix co-extractives. Under optimised conditions, a good linear relationship between the analytical signal and folic acid concentration was obtained in the range of 0.6-180 μg l(-1), with a detection limit of 0.19 μg l(-1) and a quantification limit of 0.63 μg l(-1). The applicability was evaluated using samples fortified at different concentration levels, and recoveries higher than 94.1% were obtained. The precision, expressed as the percent relative standard deviation (RSD%), was in the range of 2.5-3.8% (10 and 40 μg l(-1), n = 5). The proposed method was validated by analysis of two standard reference materials (SRMs) and various real samples, and satisfactory results were obtained.
Optimisation of wire-cut EDM process parameter by Grey-based response surface methodology
NASA Astrophysics Data System (ADS)
Kumar, Amit; Soota, Tarun; Kumar, Jitendra
2018-03-01
Wire electric discharge machining (WEDM) is one of the advanced machining processes. Response surface methodology coupled with Grey relational analysis has been proposed and used to optimise the machining parameters of WEDM. A face-centred cubic design is used for conducting experiments on a high speed steel (HSS) M2 grade workpiece material. A regression model of significant factors, namely pulse-on time, pulse-off time, peak current and wire feed, is considered for optimising the response variables: material removal rate (MRR), surface roughness and kerf width. The optimal machining conditions were obtained using the Grey relational grade. ANOVA is applied to determine the significance of the input parameters in optimising the Grey relational grade.
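For readers unfamiliar with the Grey relational grade used above, the following Python sketch computes it for a small set of hypothetical WEDM runs: responses are normalised (larger-the-better for MRR, smaller-the-better for roughness and kerf), deviations from the ideal sequence are converted to Grey relational coefficients with the usual distinguishing coefficient ζ = 0.5, and the grade is their average. The response values are invented.

```python
import numpy as np

# Hypothetical responses for five WEDM runs: [MRR, surface roughness, kerf].
runs = np.array([[2.1, 2.8, 0.32],
                 [2.6, 3.1, 0.30],
                 [1.8, 2.5, 0.35],
                 [3.0, 3.4, 0.33],
                 [2.4, 2.7, 0.31]])

larger_better = [True, False, False]  # maximise MRR, minimise Ra and kerf

def grey_relational_grade(data, larger_better, zeta=0.5):
    norm = np.empty_like(data, dtype=float)
    for j, lb in enumerate(larger_better):
        col = data[:, j]
        rng = col.max() - col.min()
        norm[:, j] = (col - col.min()) / rng if lb else (col.max() - col) / rng
    delta = 1.0 - norm                      # deviation from the ideal sequence
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coeff.mean(axis=1)               # grade = average coefficient

grades = grey_relational_grade(runs, larger_better)
print("best run:", int(np.argmax(grades)), grades.round(3))
```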
Gómez-Romano, Fernando; Villanueva, Beatriz; Fernández, Jesús; Woolliams, John A; Pong-Wong, Ricardo
2016-01-13
Optimal contribution methods have proved to be very efficient for controlling the rates at which coancestry and inbreeding increase and therefore, for maintaining genetic diversity. These methods have usually relied on pedigree information for estimating genetic relationships between animals. However, with the large amount of genomic information now available such as high-density single nucleotide polymorphism (SNP) chips that contain thousands of SNPs, it becomes possible to calculate more accurate estimates of relationships and to target specific regions in the genome where there is a particular interest in maximising genetic diversity. The objective of this study was to investigate the effectiveness of using genomic coancestry matrices for: (1) minimising the loss of genetic variability at specific genomic regions while restricting the overall loss in the rest of the genome; or (2) maximising the overall genetic diversity while restricting the loss of diversity at specific genomic regions. Our study shows that the use of genomic coancestry was very successful at minimising the loss of diversity and outperformed the use of pedigree-based coancestry (genetic diversity even increased in some scenarios). The results also show that genomic information allows a targeted optimisation to maintain diversity at specific genomic regions, whether they are linked or not. The level of variability maintained increased when the targeted regions were closely linked. However, such targeted management leads to an important loss of diversity in the rest of the genome and, thus, it is necessary to take further actions to constrain this loss. Optimal contribution methods also proved to be effective at restricting the loss of diversity in the rest of the genome, although the resulting rate of coancestry was higher than the constraint imposed. The use of genomic matrices when optimising contributions permits the control of genetic diversity and inbreeding at specific regions of the genome through the minimisation of partial genomic coancestry matrices. The formula used to predict coancestry in the next generation produces biased results and therefore it is necessary to refine the theory of genetic contributions when genomic matrices are used to optimise contributions.
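The core computation in optimal contribution methods can be sketched compactly: choose contributions c >= 0 summing to one that minimise the group coancestry c'Ac for a (genomic or pedigree-based) coancestry matrix A. The following Python fragment shows this under assumed toy values; the paper's partial genomic matrices and region-targeted constraints are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

# Toy genomic coancestry matrix for five candidate parents (assumed values).
A = np.array([[0.50, 0.10, 0.05, 0.02, 0.01],
              [0.10, 0.50, 0.12, 0.03, 0.02],
              [0.05, 0.12, 0.50, 0.08, 0.04],
              [0.02, 0.03, 0.08, 0.50, 0.15],
              [0.01, 0.02, 0.04, 0.15, 0.50]])

def group_coancestry(c):
    return c @ A @ c  # expected average coancestry of the next generation

n = A.shape[0]
res = minimize(group_coancestry, np.full(n, 1.0 / n),
               constraints=[{"type": "eq", "fun": lambda c: c.sum() - 1.0}],
               bounds=[(0.0, 1.0)] * n, method="SLSQP")
print(res.x.round(3), res.fun)
```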
NASA Astrophysics Data System (ADS)
Jin, Chenxia; Li, Fachao; Tsang, Eric C. C.; Bulysheva, Larissa; Kataev, Mikhail Yu
2017-01-01
In many real industrial applications, the integration of raw data with a methodology can support economically sound decision-making. Furthermore, most of these tasks involve complex optimisation problems, so seeking better solutions is critical. As an intelligent search optimisation algorithm, the genetic algorithm (GA) is an important technique for complex system optimisation, but it has internal drawbacks such as low computational efficiency and premature convergence. Improving the performance of GA is therefore a vital topic in academic and applied research. In this paper, a new real-coded crossover operator, called the compound arithmetic crossover operator (CAC), is proposed. CAC is used in conjunction with a uniform mutation operator to define a new genetic algorithm, CAC10-GA. This GA is compared with an existing genetic algorithm (AC10-GA) that comprises an arithmetic crossover operator and a uniform mutation operator. To judge the performance of CAC10-GA, two kinds of analysis are performed: first, the convergence of CAC10-GA is analysed using Markov chain theory; second, a pair-wise comparison is carried out between CAC10-GA and AC10-GA on two test problems from the global optimisation literature. The overall comparative study shows that CAC performs well and that CAC10-GA outperforms AC10-GA.
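The abstract does not give the internal details of the compound arithmetic crossover, so the sketch below shows the standard real-coded arithmetic crossover (the operator used in the baseline AC10-GA) together with uniform mutation, in Python; the convex-combination weight and mutation probability are assumptions.

```python
import random

def arithmetic_crossover(p1, p2):
    """Standard real-coded arithmetic crossover: convex combinations of parents."""
    lam = random.random()
    c1 = [lam * a + (1 - lam) * b for a, b in zip(p1, p2)]
    c2 = [(1 - lam) * a + lam * b for a, b in zip(p1, p2)]
    return c1, c2

def uniform_mutation(ind, bounds, pm=0.1):
    """Replace each gene with a uniform draw from its bounds with probability pm."""
    return [random.uniform(lo, hi) if random.random() < pm else g
            for g, (lo, hi) in zip(ind, bounds)]

bounds = [(-5.0, 5.0)] * 3
p1 = [random.uniform(lo, hi) for lo, hi in bounds]
p2 = [random.uniform(lo, hi) for lo, hi in bounds]
c1, c2 = arithmetic_crossover(p1, p2)
print(uniform_mutation(c1, bounds), uniform_mutation(c2, bounds))
```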
NASA Astrophysics Data System (ADS)
Parasyris, Antonios E.; Spanoudaki, Katerina; Kampanis, Nikolaos A.
2016-04-01
Groundwater level monitoring networks provide essential information for water resources management, especially in areas with significant groundwater exploitation for agricultural and domestic use. Given the high maintenance costs of these networks, the development of tools that regulators can use for efficient network design is essential. In this work, a monitoring network optimisation tool is presented. The tool couples geostatistical modelling based on the Spartan family variogram with a genetic algorithm method and is applied to the Mires basin in Crete, Greece, an area of high socioeconomic and agricultural interest which suffers from groundwater overexploitation leading to a dramatic decrease of groundwater levels. The purpose of the optimisation tool is to determine which wells to exclude from the monitoring network because they add little or no beneficial information to groundwater level mapping of the area. Unlike previous relevant investigations, the tool presented here uses Ordinary Kriging with the recently established non-differentiable Spartan variogram for groundwater level mapping, which, based on a previous geostatistical study in the area, leads to optimal groundwater level mapping. Seventy boreholes operate in the area for groundwater abstraction and water level monitoring. The Spartan variogram gives overall the most accurate groundwater level estimates, followed closely by the power-law model.

The geostatistical model is coupled to an integer genetic algorithm programmed in MATLAB 2015a. The algorithm is used to find the set of wells whose removal leads to the minimum error between the original water level mapping using all available wells and the mapping using the reduced network (the error is defined as the 2-norm of the difference between the original mapping matrix with 70 wells and the mapping matrix of the reduced well network). The solution to the optimisation problem (the best wells to retain in the monitoring network) depends on the total number of wells removed; this number is a management decision. The water level monitoring network of the Mires basin has been optimised six times, by removing 5, 8, 12, 15, 20 and 25 wells from the original network. In order to achieve the optimum solution in the minimum possible computational time, a stall-generations criterion was set for each optimisation scenario. An improvement made to the classic genetic algorithm was to vary the mutation and crossover fractions with respect to the change in the mean fitness value: reproduction becomes more random when the solution is converging, to avoid local minima, and more educated (a higher crossover ratio) when the mean fitness value is changing more strongly. Because the integer genetic algorithm in MATLAB 2015a restricts the use of custom selection and crossover-mutation functions, custom population and crossover-mutation-selection functions were created to set the initial population type to custom and to allow the mutation and crossover probabilities to vary with the convergence of the genetic algorithm, thus achieving higher accuracy.

The application of the network optimisation tool to the Mires basin indicates that 25 wells can be removed with a relatively small deterioration of the groundwater level map. The results indicate the robustness of the network optimisation tool: wells were removed from high well-density areas while the spatial pattern of the original groundwater level map was preserved.
Reference: Varouchakis, E. A. and Hristopulos, D. T. (2013). Improvement of groundwater level prediction in sparsely gauged basins using physical laws and local geographic features as auxiliary variables. Advances in Water Resources 52: 34-49.
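The adaptive reproduction scheme described above can be sketched in a few lines: the crossover and mutation fractions are shifted according to the change in mean fitness between generations. The thresholds and step sizes below are assumptions, and the function is a stand-in for the custom MATLAB functions mentioned in the abstract.

```python
def adapt_rates(mean_fitness_history, p_cross, p_mut,
                stall_tol=1e-3, step=0.05):
    """Shift the crossover/mutation balance from the mean-fitness trend.

    If the mean fitness has stalled, raise mutation (more randomness) to
    escape local minima; if it is still improving, favour crossover
    (more 'educated' reproduction), as described for the network tool.
    """
    if len(mean_fitness_history) < 2:
        return p_cross, p_mut
    change = abs(mean_fitness_history[-1] - mean_fitness_history[-2])
    if change < stall_tol:
        p_mut = min(0.5, p_mut + step)      # converging: inject randomness
        p_cross = 1.0 - p_mut
    else:
        p_cross = min(0.95, p_cross + step) # improving: exploit via crossover
        p_mut = 1.0 - p_cross
    return p_cross, p_mut

print(adapt_rates([0.3100, 0.3102], p_cross=0.8, p_mut=0.2))
```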
Design of distributed PID-type dynamic matrix controller for fractional-order systems
NASA Astrophysics Data System (ADS)
Wang, Dawei; Zhang, Ridong
2018-01-01
With the continuously increasing requirements for product quality and safe operation in industrial production, it is difficult to describe complex large-scale processes with integer-order differential equations; fractional differential equations, however, may represent the intrinsic characteristics of such systems more precisely. In this paper, a distributed PID-type dynamic matrix control method for fractional-order systems is proposed. First, a high-order integer-order approximation of the fractional-order model is obtained using the Oustaloup method. Then, the step response model vectors of the plant are obtained on the basis of the high-order model, and the online optimisation for multivariable processes is transformed into the optimisation of each small-scale subsystem, each regarded as a sub-plant controlled within the distributed framework. Furthermore, the PID operator is introduced into the performance index of each subsystem and the fractional-order PID-type dynamic matrix controller is designed based on a Nash optimisation strategy. The information exchange among the subsystems is realised through the distributed control structure so as to complete the optimisation task of the whole large-scale system. Finally, the control performance of the designed controller is verified by an example.
Hardware Design of the Energy Efficient Fall Detection Device
NASA Astrophysics Data System (ADS)
Skorodumovs, A.; Avots, E.; Hofmanis, J.; Korāts, G.
2016-04-01
Health issues in elderly people may lead to injuries sustained during simple activities of daily living. Potentially the most dangerous are unintentional falls, which may be critical or even lethal for some patients due to the high risk of injury. In the project "Wireless Sensor Systems in Telecare Application for Elderly People", we have developed a robust fall detection algorithm for a wearable wireless sensor. To optimise the algorithm for hardware performance and test it in the field, we have designed an accelerometer-based wireless fall detector. Our main considerations were: a) functionality, so that the algorithm can be applied to the chosen hardware; and b) power efficiency, so that it can run for a very long time. We selected and tested the components, built a prototype, optimised the firmware for the lowest consumption, tested the performance and measured the consumption parameters. In this paper, we discuss our design choices and present the results of our work.
NASA Astrophysics Data System (ADS)
Jia, Zhao-hong; Pei, Ming-li; Leung, Joseph Y.-T.
2017-12-01
In this paper, we investigate the batch-scheduling problem with rejection on parallel machines with non-identical job sizes and arbitrary job-rejection weights. If a job is rejected, the corresponding penalty has to be paid. Our objective is to minimise the makespan of the processed jobs and the total rejection cost of the rejected jobs. Based on the selected multi-objective optimisation approaches, two problems, P1 and P2, are considered. In P1, the two objectives are linearly combined into a single objective. In P2, the two objectives are simultaneously minimised and the Pareto non-dominated solution set is to be found. Based on ant colony optimisation (ACO), two algorithms, LACO and PACO, are proposed to address the two problems, respectively. Two different objective-oriented pheromone matrices and heuristic information are designed. Additionally, a local optimisation algorithm is adopted to improve solution quality. Finally, simulation experiments are conducted, and the comparative results verify the effectiveness and efficiency of the proposed algorithms, especially on large-scale instances.
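As a reference point for the pheromone mechanics underlying LACO and PACO, here is a standard ACO pheromone update in Python (evaporation plus cost-weighted deposit); the paper's two objective-oriented matrices would each be updated with their own cost measure, and the exact rule used there may differ.

```python
import numpy as np

def update_pheromone(tau, ants, costs, rho=0.1, q=1.0):
    """Standard ACO pheromone update: evaporate all trails, then deposit
    pheromone along each ant's path in inverse proportion to its cost
    (sketch only; not the paper's exact rule)."""
    tau *= (1.0 - rho)                      # evaporation
    for path, cost in zip(ants, costs):
        for i, j in zip(path, path[1:]):    # deposit along each ant's tour
            tau[i, j] += q / cost
    return tau

tau = np.ones((4, 4))                       # pheromone matrix for 4 nodes
ants = [[0, 2, 1, 3], [0, 1, 3, 2]]         # two ant tours (assumed)
costs = [12.0, 9.5]                         # corresponding objective values
print(update_pheromone(tau, ants, costs).round(3))
```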
NASA Astrophysics Data System (ADS)
Jiménez-Redondo, Noemi; Calle-Cordón, Alvaro; Kandler, Ute; Simroth, Axel; Morales, Francisco J.; Reyes, Antonio; Odelius, Johan; Thaduri, Aditya; Morgado, Joao; Duarte, Emmanuele
2017-09-01
The ongoing H2020 project INFRALERT aims to increase rail and road infrastructure capacity within the current framework of increased transportation demand by developing and deploying solutions to optimise maintenance intervention planning. It includes two real-world pilots for road and railway infrastructure. INFRALERT develops an ICT platform (the expert-based Infrastructure Management System, eIMS) which follows a modular approach including several expert-based toolkits. This paper presents the methodologies and preliminary results of the toolkits for i) nowcasting and forecasting of asset condition, ii) alert generation, iii) RAMS & LCC analysis and iv) decision support. The results of these toolkits for a meshed road network in Portugal under the jurisdiction of Infraestruturas de Portugal (IP) are presented, showing the capabilities of the approaches.
Multi-objective optimisation of aircraft flight trajectories in the ATM and avionics context
NASA Astrophysics Data System (ADS)
Gardi, Alessandro; Sabatini, Roberto; Ramasamy, Subramanian
2016-05-01
The continuous increase of air transport demand worldwide and the push for a more economically viable and environmentally sustainable aviation are driving significant evolutions of aircraft, airspace and airport systems design and operations. Although extensive research has been performed on the optimisation of aircraft trajectories and very efficient algorithms have been widely adopted for the optimisation of vertical flight profiles, it is only in the last few years that higher levels of automation have been proposed for integrated flight planning and re-routing functionalities of innovative Communication Navigation and Surveillance/Air Traffic Management (CNS/ATM) and Avionics (CNS+A) systems. In this context, the implementation of additional environmental targets and of multiple operational constraints introduces the need to deal efficiently with multiple objectives as part of the trajectory optimisation algorithm. This article provides a comprehensive review of Multi-Objective Trajectory Optimisation (MOTO) techniques for transport aircraft flight operations, with a special focus on the recent advances introduced in the CNS+A research context. In the first section, a brief introduction is given, together with an overview of the main international research initiatives where this topic has been studied, and the problem statement is provided. The second section introduces the mathematical formulation and the third section reviews the numerical solution techniques, including discretisation and optimisation methods for the specific problem formulated. The fourth section summarises the strategies to articulate the preferences and to select optimal trajectories when multiple conflicting objectives are introduced. The fifth section introduces a number of models defining the optimality criteria and constraints typically adopted in MOTO studies, including fuel consumption, air pollutant and noise emissions, operational costs, condensation trails, airspace and airport operations. A brief overview of atmospheric and weather modelling is also included. Key equations describing the optimality criteria are presented, with a focus on the latest advancements in the respective application areas. In the sixth section, a number of MOTO implementations in the CNS+A systems context are mentioned, with relevant simulation case studies addressing different operational tasks. The final section draws some conclusions and outlines guidelines for future research on MOTO and associated CNS+A system implementations.
Van Geit, Werner; Gevaert, Michael; Chindemi, Giuseppe; Rössert, Christian; Courcol, Jean-Denis; Muller, Eilif B; Schürmann, Felix; Segev, Idan; Markram, Henry
2016-01-01
At many scales in neuroscience, appropriate mathematical models take the form of complex dynamical systems. Parameterizing such models to conform to the multitude of available experimental constraints is a global non-linear optimisation problem with a complex fitness landscape, requiring numerical techniques to find suitable approximate solutions. Stochastic optimisation approaches, such as evolutionary algorithms, have been shown to be effective, but often the setting up of such optimisations and the choice of a specific search algorithm and its parameters is non-trivial, requiring domain-specific expertise. Here we describe BluePyOpt, a Python package targeted at the broad neuroscience community to simplify this task. BluePyOpt is an extensible framework for data-driven model parameter optimisation that wraps and standardizes several existing open-source tools. It simplifies the task of creating and sharing these optimisations, and the associated techniques and knowledge. This is achieved by abstracting the optimisation and evaluation tasks into various reusable and flexible discrete elements according to established best-practices. Further, BluePyOpt provides methods for setting up both small- and large-scale optimisations on a variety of platforms, ranging from laptops to Linux clusters and cloud-based compute infrastructures. The versatility of the BluePyOpt framework is demonstrated by working through three representative neuroscience specific use cases.
Use of a genetic algorithm to improve the rail profile on Stockholm underground
NASA Astrophysics Data System (ADS)
Persson, Ingemar; Nilsson, Rickard; Bik, Ulf; Lundgren, Magnus; Iwnicki, Simon
2010-12-01
In this paper, a genetic algorithm optimisation method has been used to develop an improved rail profile for Stockholm underground. An inverted penalty index based on a number of key performance parameters was generated as a fitness function and vehicle dynamics simulations were carried out with the multibody simulation package Gensys. The effectiveness of each profile produced by the genetic algorithm was assessed using the roulette wheel method. The method has been applied to the rail profile on the Stockholm underground, where problems with rolling contact fatigue on wheels and rails are currently managed by grinding. From a starting point of the original BV50 and the UIC60 rail profiles, an optimised rail profile with some shoulder relief has been produced. The optimised profile seems similar to measured rail profiles on the Stockholm underground network and although initial grinding is required, maintenance of the profile will probably not require further grinding.
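Roulette-wheel selection, as used above to pick profiles for reproduction, is fitness-proportional sampling; a minimal Python version follows, with the profile names and inverted penalty values invented for illustration.

```python
import random

def roulette_wheel_select(population, fitnesses):
    """Fitness-proportional selection: the probability of picking an
    individual is its share of the total fitness (fitnesses must be
    positive, e.g. an inverted penalty index as in the study above)."""
    total = sum(fitnesses)
    pick = random.uniform(0.0, total)
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit
        if running >= pick:
            return individual
    return population[-1]

profiles = ["BV50", "UIC60", "candidate_A", "candidate_B"]
inverted_penalty = [0.8, 1.1, 1.9, 1.4]   # higher = better (assumed values)
print(roulette_wheel_select(profiles, inverted_penalty))
```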
Multifamily determination of pesticide residues in soya-based nutraceutical products by GC/MS-MS.
Páleníková, Agneša; Martínez-Domínguez, Gerardo; Arrebola, Francisco Javier; Romero-González, Roberto; Hrouzková, Svetlana; Frenich, Antonia Garrido
2015-04-15
An analytical method based on a modified QuEChERS extraction coupled with gas chromatography-tandem mass spectrometry (GC-MS/MS) was evaluated for the determination of 177 pesticides in soya-based nutraceutical products. The QuEChERS method was optimised and different extraction solvents and clean-up approaches were tested, obtaining the most efficient conditions with a mixture of sorbents (PSA, C18, GBC and Zr-Sep(+)). Recoveries were evaluated at 10, 50 and 100 μg/kg and ranged between 70% and 120%. Precision was expressed as relative standard deviation (RSD), and it was evaluated for more than 160 pesticides as intra- and inter-day precision, with values always below 20% and 25%, respectively. Limits of detection (LODs) ranged from 0.1 to 10 μg/kg, whereas limits of quantification (LOQs) ranged from 0.5 to 20 μg/kg. The applicability of the method was proved by analysing soya-based nutraceuticals. Two pesticides, malathion and pyriproxyfen, were found in these samples, at 11.1 and 1.5 μg/kg respectively.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goffin, Mark A., E-mail: mark.a.goffin@gmail.com; Buchan, Andrew G.; Dargaville, Steven
2015-01-15
A method for applying goal-based adaptive methods to the angular resolution of the neutral particle transport equation is presented. The methods are applied to an octahedral wavelet discretisation of the spherical angular domain which allows for anisotropic resolution. The angular resolution is adapted across both the spatial and energy dimensions. The spatial domain is discretised using an inner-element sub-grid scale finite element method. The goal-based adaptive methods optimise the angular discretisation to minimise the error in a specific functional of the solution. The goal-based error estimators require the solution of an adjoint system to determine the importance to the specified functional. The error estimators and the novel methods to calculate them are described. Several examples are presented to demonstrate the effectiveness of the methods. It is shown that the methods can significantly reduce the number of unknowns and computational time required to obtain a given error. The novelty of the work is the use of goal-based adaptive methods to obtain anisotropic resolution in the angular domain for solving the transport equation.
Highlights:
• Wavelet angular discretisation used to solve transport equation.
• Adaptive method developed for the wavelet discretisation.
• Anisotropic angular resolution demonstrated through the adaptive method.
• Adaptive method provides improvements in computational efficiency.
Impact of the calibration period on the conceptual rainfall-runoff model parameter estimates
NASA Astrophysics Data System (ADS)
Todorovic, Andrijana; Plavsic, Jasna
2015-04-01
A conceptual rainfall-runoff model is defined by its structure and parameters, which are commonly inferred through model calibration. Parameter estimates depend on the objective function(s), the optimisation method and the calibration period. Model calibration over different periods may result in dissimilar parameter estimates, while model efficiency decreases outside the calibration period. The problem of model (parameter) transferability, which conditions the reliability of hydrologic simulations, has been investigated for decades. In this paper, the dependence of the parameter estimates and model performance on the calibration period is analysed. The main question addressed is: are there any changes in optimised parameters and model efficiency that can be linked to changes in hydrologic or meteorological variables (flow, precipitation and temperature)?

The conceptual, semi-distributed HBV-light model is calibrated over five-year periods shifted by one year (sliding time windows). The length of the calibration periods is selected to enable identification of all parameters. One water year of model warm-up precedes every simulation, which starts at the beginning of a water year. The model is calibrated using the built-in GAP optimisation algorithm. The objective function used for calibration is composed of the Nash-Sutcliffe coefficient for flows, the Nash-Sutcliffe coefficient for logarithms of flows, and the volumetric error, all of which participate in the composite objective function with approximately equal weights. The same prior parameter ranges are used in all simulations. The model is calibrated against flows observed at the Slovac stream gauge on the Kolubara River in Serbia (records from 1954 to 2013). There are no trends in precipitation or flows; however, there is a statistically significant increasing trend in temperature in this catchment. Parameter variability across the calibration periods is quantified in terms of standard deviations of normalised parameters, enabling detection of the most variable parameters. Correlation coefficients between optimised model parameters and total precipitation P, mean temperature T and mean flow Q are calculated to give an insight into parameter dependence on the hydrometeorological drivers.

The results reveal high sensitivity of almost all model parameters to the calibration period. The highest variability is displayed by the refreezing coefficient, the water holding capacity and the temperature gradient. The only statistically significant (decreasing) trend is detected in the evapotranspiration reduction threshold. Statistically significant correlations are detected between the precipitation gradient and precipitation depth, and between the time-area histogram base and flows. All other correlations are not statistically significant, implying that changes in optimised parameters cannot generally be linked to changes in P, T or Q. As for model performance, the model reproduces the observed runoff satisfactorily, though runoff is slightly overestimated in wet periods. The Nash-Sutcliffe efficiency coefficient (NSE) ranges from 0.44 to 0.79. Higher NSE values are obtained over wetter periods, which is supported by a statistically significant correlation between NSE and flows. Overall, no systematic variations in parameters or in model performance are detected. Parameter variability may therefore rather be attributed to errors in the data or inadequacies in the model structure. Further research is required to examine the impact of the calibration strategy or the model structure on the temporal variability of optimised parameters.
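A compact version of the composite objective described above (equal-weight NSE on flows, NSE on log flows, and volumetric error) might look as follows in Python; the exact weighting and the volumetric-error convention used in the study may differ, and the flow series here are invented.

```python
import numpy as np

def composite_objective(q_obs, q_sim, eps=1e-6):
    """Equal-weight combination of NSE on flows, NSE on log-flows and
    volumetric error, mirroring the composite function described above
    (the weights and the log-offset eps are assumptions)."""
    def nse(o, s):
        return 1.0 - np.sum((o - s) ** 2) / np.sum((o - o.mean()) ** 2)
    nse_q = nse(q_obs, q_sim)
    nse_log = nse(np.log(q_obs + eps), np.log(q_sim + eps))
    vol_err = 1.0 - abs(np.sum(q_sim) - np.sum(q_obs)) / np.sum(q_obs)
    return (nse_q + nse_log + vol_err) / 3.0

q_obs = np.array([1.2, 3.4, 2.8, 5.1, 4.0])   # observed flows (invented)
q_sim = np.array([1.0, 3.6, 2.5, 5.4, 3.8])   # simulated flows (invented)
print(round(composite_objective(q_obs, q_sim), 3))
```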
Optimisation of substrate blends in anaerobic co-digestion using adaptive linear programming.
García-Gen, Santiago; Rodríguez, Jorge; Lema, Juan M
2014-12-01
Anaerobic co-digestion of multiple substrates has the potential to enhance biogas productivity by making use of the complementary characteristics of different substrates. A blending strategy based on a linear programming optimisation method is proposed, aiming at maximising COD conversion into methane while simultaneously maintaining digestate and biogas quality. The method incorporates experimental and heuristic information to define the objective function and the linear restrictions. The active constraints are continuously adapted (by relaxing the restriction boundaries) such that further gains in methane productivity can be achieved. The feasibility of the blends calculated with this methodology was first tested and accurately predicted with an ADM1-based co-digestion model, and then validated in a continuously operated pilot plant treating, over several months, different mixtures of glycerine, gelatine and pig manure at organic loading rates from 1.50 to 4.93 gCOD/(L d) and hydraulic retention times between 32 and 40 days under mesophilic conditions.
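The adaptive element of the strategy, re-solving the linear programme after relaxing the binding restriction boundaries, can be sketched as below; substrate yields, constraint values and the 5% relaxation step are all invented for illustration and are not the paper's calibrated values.

```python
import numpy as np
from scipy.optimize import linprog

# Maximise methane production from a glycerine/gelatine/manure blend
# (yield coefficients and limits are illustrative, not the paper's values).
methane_yield = np.array([0.40, 0.30, 0.20])   # CH4 per unit of each substrate
A_ub = [[1.0, 1.0, 1.0],                       # total organic loading rate cap
        [0.0, 1.0, 0.0]]                       # nitrogen-rich gelatine cap
b_ub = np.array([4.0, 1.5])

for it in range(5):
    # linprog minimises, so negate the yields to maximise methane.
    res = linprog(-methane_yield, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * 3)
    print(f"iter {it}: blend={res.x.round(2)}, CH4={-res.fun:.2f}")
    # Adaptive step: relax the boundaries of the active (binding)
    # constraints slightly, as in the proposed strategy, and re-optimise.
    slack = b_ub - np.asarray(A_ub) @ res.x
    b_ub = np.where(slack < 1e-6, b_ub * 1.05, b_ub)
```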
Satellite Vibration Testing: Angle optimisation method to Reduce Overtesting
NASA Astrophysics Data System (ADS)
Knight, Charly; Remedia, Marcello; Aglietti, Guglielmo S.; Richardson, Guy
2018-06-01
Spacecraft overtesting is a long-running problem, and the main focus of most attempts to reduce it has been to adjust the base vibration input (i.e. notching). Instead, this paper examines testing alternatives for secondary structures (equipment) coupled to the main structure (satellite) when they are tested separately. Even if the vibration source is applied along one of the orthogonal axes at the base of the coupled system (satellite plus equipment), the dynamics of the system, and potentially the interface configuration, mean that the vibration at the interface may not occur all along one axis, much less along the corresponding orthogonal axis of the base excitation. This paper proposes an alternative testing methodology in which a piece of equipment is tested at an offset angle. This Angle Optimisation method may involve multiple tests, each with an altered input direction, allowing the best match between all specified equipment system responses and the coupled system tests. An optimisation process compares the calculated equipment RMS values for a range of inputs with the maximum coupled-system RMS values, and is used to find the optimal testing configuration for the given parameters. A case study was performed to find the best testing angles to match the acceleration responses of the centre of mass and the sum of interface forces for all three axes, as well as the von Mises stress for an element at a fastening point. The Angle Optimisation method resulted in RMS values and PSD responses that were much closer to those of the coupled system than traditional testing. The optimum testing configuration resulted in an overall average error significantly smaller than that of the traditional method. Crucially, this case study shows that the optimum test campaign could be a single equipment-level test, as opposed to the traditional three orthogonal-direction tests.
Design optimisation of powers-of-two FIR filter using self-organising random immigrants GA
NASA Astrophysics Data System (ADS)
Chandra, Abhijit; Chattopadhyay, Sudipta
2015-01-01
In this communication, we propose a novel design strategy for multiplier-less low-pass finite impulse response (FIR) filters with the aid of a recent evolutionary optimisation technique known as the self-organising random immigrants genetic algorithm. The individual impulse response coefficients of the proposed filter are encoded as sums of signed powers of two. In formulating the cost function for the optimisation algorithm, both the frequency response characteristic and the hardware cost of the discrete-coefficient FIR filter have been considered. The role of the crossover probability of the optimisation technique has been evaluated with respect to the overall performance of the proposed strategy; for this purpose, the convergence characteristic of the optimisation technique has been included in the simulation results. In our analysis, two design examples with different specifications have been taken into account. In order to substantiate the efficiency of our proposed structure, a number of state-of-the-art design strategies for multiplier-less FIR filters have also been included in this article for comparison. Critical analysis of the results unambiguously establishes the usefulness of our proposed approach for the hardware-efficient design of digital filters.
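For illustration, a coefficient can be encoded as a sum of signed powers of two with a simple greedy routine like the Python sketch below; the paper's genetic encoding and hardware-cost term are not reproduced, and the number of terms and exponent range are assumptions.

```python
def to_signed_powers_of_two(x, n_terms=3, k_min=-8, k_max=0):
    """Greedy approximation of a filter coefficient as a sum of signed
    powers of two (a common multiplier-less encoding; the paper's exact
    allocation scheme may differ)."""
    terms, residual = [], x
    for _ in range(n_terms):
        # Pick the signed power of two closest to the current residual.
        best = min((s * 2.0 ** k for k in range(k_min, k_max + 1)
                    for s in (-1, 1)),
                   key=lambda t: abs(residual - t))
        terms.append(best)
        residual -= best
    return terms, x - residual  # the terms and the value they represent

terms, approx = to_signed_powers_of_two(0.7071)
print(terms, round(approx, 5))   # e.g. [0.5, 0.25, -0.03125] -> 0.71875
```

In hardware, each such term becomes a shift-and-add (or subtract) instead of a full multiplier, which is the cost saving the strategy above exploits.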
Gielecińska, Iwona; Mojska, Hanna
2013-01-01
Numerous studies have demonstrated acrylamide to be both neurotoxic and carcinogenic. At present it is widely recognised that acrylamide is mainly formed through the Maillard reaction from free asparagine and reducing sugars. The major sources of dietary acrylamide are potato products, processed cereals and coffee. The aim was to optimise and validate an analytical method for determining acrylamide in coffee by liquid chromatography-tandem mass spectrometry (LC-MS/MS) using SPE clean-up. Analytical separation of acrylamide from roasted coffee was performed by liquid chromatography using a Hypercarb column followed by LC-MS/MS analysis, with 2,3,3-d3 acrylamide as an internal standard. The method was based on two purification steps: the first with hexane and Carrez solutions, to remove fat and precipitate proteins respectively, and the second with a solid-phase extraction (SPE) column, which proved efficient in eliminating the main chromatographic interferences. The limit of quantification (LOQ) for measuring acrylamide in coffee was 50 μg/kg. The described method demonstrates satisfactory precision (RSD = 2.5%), repeatability (RSD = 9.2%) and accuracy (mean recovery 97.4%). Our results confirm that LC-MS/MS with SPE clean-up is selective and suitable for the determination of acrylamide in coffee. Indeed, this method meets the criteria of the EU Commission Recommendations (No. 2007/331/EC and No. 2010/307/EU) on the monitoring of acrylamide levels in food.
NASA Astrophysics Data System (ADS)
Rayhana, N.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.
2017-09-01
This paper presents a systematic methodology for analysing the warpage of a side arm part using Autodesk Moldflow Insight software. Response Surface Methodology (RSM) was proposed to optimise the processing parameters and efficiently minimise the warpage of the side arm part. The variable parameters considered in this study were based on the parameters most significantly affecting warpage as reported by previous researchers, namely melt temperature, mould temperature and packing pressure, with packing time and cooling time added as commonly used parameters. The results show that warpage was improved by 10.15% and that the most significant parameter affecting warpage is packing pressure.
Di Paolo Emilio, M; Festuccia, R; Palladino, L
2015-09-01
In this work, the X-ray emission generated from a plasma produced by focusing a Nd:YAG laser beam on Mylar and yttrium targets will be characterised. The goal is to reach the conditions that optimise the X-ray conversion efficiency at 500 eV (pre-edge of the oxygen K-shell), a spectral region strongly absorbed by carbon-based structures. The characteristics of the microbeam optical system, the software/hardware control and the preliminary measurements of the X-ray fluence will be presented.
NASA Astrophysics Data System (ADS)
Vasquez Padilla, Ricardo; Soo Too, Yen Chean; Benito, Regano; McNaughton, Robbie; Stein, Wes
2018-01-01
In this paper, optimisation of supercritical CO2 Brayton cycles integrated with a solar receiver, which provides heat input to the cycle, was performed. Four S-CO2 Brayton cycle configurations were analysed and optimum operating conditions were obtained using multi-objective thermodynamic optimisation. Four different sets, each including two objective parameters, were considered individually. The individual multi-objective optimisations were performed using the Non-dominated Sorting Genetic Algorithm. The effect of reheating, solar receiver pressure drop and cycle parameters on the overall exergy and cycle thermal efficiency was analysed. The results showed that, for all configurations, the overall exergy efficiency of the solarised systems reached a maximum between 700°C and 750°C, and that the optimum value is adversely affected by the solar receiver pressure drop. In addition, the optimum cycle high pressure was in the range of 24.2-25.9 MPa, depending on the configuration and reheat condition.
Energy and wear optimisation of train longitudinal dynamics and of traction and braking systems
NASA Astrophysics Data System (ADS)
Conti, R.; Galardi, E.; Meli, E.; Nocciolini, D.; Pugi, L.; Rindi, A.
2015-05-01
Traction and braking systems deeply affect longitudinal train dynamics, especially when an extensive blending phase among different pneumatic, electric and magnetic devices is required. The energy and wear optimisation of longitudinal vehicle dynamics has a crucial economic impact and involves several engineering problems such as wear of braking friction components, energy efficiency, thermal load on components, and the level of safety under degraded adhesion conditions (often constrained by the regulations in force on signalling or other safety-related subsystems). In fact, the application of energy storage systems can lead to an efficiency improvement of at least 10%, while, as regards wear reduction, the improvement due to distributed traction systems and optimised traction devices can be quantified at about 50%. In this work, an innovative integrated procedure is proposed by the authors to optimise longitudinal train dynamics and traction and braking manoeuvres in terms of both energy and wear. The new approach has been applied to existing test cases and validated with experimental data provided by Breda; for some components and their homologation process, the experimental results derive from cooperation with relevant industrial partners such as Trenitalia and Italcertifer. In particular, the simulation results refer to tests performed on a high-speed train (Ansaldo Breda EMU V250) and on a tram (Ansaldo Breda Sirio Tram). The proposed approach is based on a modular simulation platform in which the sub-models corresponding to different subsystems can be easily customised, depending on the considered application, the availability of technical data and the homologation process of different components.
NASA Astrophysics Data System (ADS)
Zhang, Ding; Zhang, Yingjie
2017-09-01
A framework for the reliability and maintenance analysis of job shop manufacturing systems is proposed in this paper. An efficient preventive maintenance (PM) policy in terms of failure effects analysis (FEA) is proposed. Subsequently, reliability evaluation and component importance measures based on FEA are performed under the PM policy. A job shop manufacturing system is used to validate the reliability evaluation and the dynamic maintenance policy. The obtained results are compared with existing methods and the effectiveness is validated. Issues that are often only vaguely understood, such as network modelling, vulnerability identification, evaluation criteria for repairable systems, and the PM policy in manufacturing system reliability analysis, are elaborated. This framework can help with reliability optimisation and the rational allocation of maintenance resources in job shop manufacturing systems.
Automation of Silica Bead-based Nucleic Acid Extraction on a Centrifugal Lab-on-a-Disc Platform
NASA Astrophysics Data System (ADS)
Kinahan, David J.; Mangwanya, Faith; Garvey, Robert; Chung, Danielle WY; Lipinski, Artur; Julius, Lourdes AN; King, Damien; Mohammadi, Mehdi; Mishra, Rohit; Al-Ofi, May; Miyazaki, Celina; Ducrée, Jens
2016-10-01
We describe a centrifugal microfluidic ‘Lab-on-a-Disc’ (LoaD) technology for DNA purification towards eventual integration into a Sample-to-Answer platform for detection of the pathogen Escherichia coli O157:H7 in food samples. For this application, we use a novel microfluidic architecture which combines ‘event-triggered’ dissolvable film (DF) valves with a reaction chamber gated by a centrifugo-pneumatic siphon valve (CPSV). This architecture permits comprehensive flow control through simple changes in the speed of the platform's innate spindle motor. Even before method optimisation, characterisation by DNA fluorescence reveals an extraction efficiency of 58%, which is close to that of commercial spin columns.
Koo, B K; O'Connell, P E
2006-04-01
The site-specific land use optimisation methodology, suggested by the authors in the first part of this two-part paper, has been applied to the River Kennet catchment at Marlborough, Wiltshire, UK, for a case study. The Marlborough catchment (143 km(2)) is an agriculture-dominated rural area over a deep chalk aquifer that is vulnerable to nitrate pollution from agricultural diffuse sources. For evaluation purposes, the catchment was discretised into a network of 1 km × 1 km grid cells. For each of the arable-land grid cells, seven land use alternatives (four arable-land alternatives and three grassland alternatives) were evaluated for their environmental and economic potential. For environmental evaluation, nitrate leaching rates of land use alternatives were estimated using SHETRAN simulations and groundwater pollution potential was evaluated using the DRASTIC index. For economic evaluation, economic gross margins were estimated using a simple agronomic model based on nitrogen response functions and agricultural land classification grades. In order to see whether the site-specific optimisation is efficient at the catchment scale, land use optimisation was carried out for four optimisation schemes (i.e. using four sets of criterion weights). Consequently, four land use scenarios were generated, and the site-specifically optimised land use scenario was evaluated as the best compromise between long-term nitrate pollution and agronomy at the catchment scale.
Symmetric digit sets for elliptic curve scalar multiplication without precomputation
Heuberger, Clemens; Mazzoli, Michela
2014-01-01
We describe a method to perform scalar multiplication on two classes of ordinary elliptic curves, namely E: y^2 = x^3 + Ax in prime characteristic p ≡ 1 (mod 4), and E: y^2 = x^3 + B in prime characteristic p ≡ 1 (mod 3). On these curves, the 4th and 6th roots of unity act as (computationally efficient) endomorphisms. In order to optimise the scalar multiplication, we consider a width-w NAF (non-adjacent form) digit expansion of positive integers to the complex base τ, where τ is a zero of the characteristic polynomial x^2 − tx + p of the Frobenius endomorphism associated to the curve. We provide a precomputationless algorithm by means of a convenient factorisation of the unit group of residue classes modulo τ in the endomorphism ring, whereby we construct a digit set consisting of powers of subgroup generators, which are chosen as efficient endomorphisms of the curve. PMID:25190900
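The classical width-w NAF recursion for rational integers, which the paper generalises to the complex base τ, can be written as follows (a Python sketch of the integer analogue only):

```python
def width_w_naf(n, w=4):
    """Width-w non-adjacent form of a positive integer over the rational
    integers: every nonzero digit is odd, lies in (-2^(w-1), 2^(w-1)),
    and any w consecutive digits contain at most one nonzero digit."""
    digits = []
    while n > 0:
        if n % 2 == 1:
            # Reduce n modulo 2^w into the symmetric residue range.
            d = n % (1 << w)
            if d >= (1 << (w - 1)):
                d -= 1 << w
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits  # least significant digit first

digits = width_w_naf(239, w=4)
print(digits, sum(d << i for i, d in enumerate(digits)))  # reconstructs 239
```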
Ławryńczuk, Maciej
2017-03-01
This paper details the development of a Model Predictive Control (MPC) algorithm for a boiler-turbine unit, which is a nonlinear multiple-input multiple-output process. The control objective is to follow set-point changes imposed on two state (output) variables and to satisfy constraints imposed on three inputs and one output. In order to obtain a computationally efficient control scheme, the state-space model is successively linearised on-line for the current operating point and used for prediction. In consequence, the future control policy is easily calculated from a quadratic optimisation problem. For state estimation, the extended Kalman filter is used. It is demonstrated that an MPC strategy based on constant linear models does not work satisfactorily for the boiler-turbine unit, whereas the discussed algorithm with on-line successive model linearisation gives practically the same trajectories as a truly nonlinear MPC controller with nonlinear optimisation repeated at each sampling instant.
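A toy illustration of MPC with successive on-line linearisation: at each sampling instant the nonlinear model is linearised at the current operating point, a quadratic cost is minimised over the input sequence subject to input bounds, and only the first move is applied. The scalar plant, horizon, weights and bounds below are invented; the boiler-turbine model itself is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

# Toy nonlinear single-input single-state plant (a stand-in for the
# boiler-turbine model, which the abstract does not specify in full).
def f(x, u):
    return x + 0.1 * (-x**3 + u)           # one Euler step of dx/dt = -x^3 + u

def linearise(x0, u0):
    a = 1 + 0.1 * (-3 * x0**2)             # df/dx at the operating point
    b = 0.1                                # df/du at the operating point
    c = f(x0, u0) - a * x0 - b * u0        # affine remainder
    return a, b, c

def mpc_step(x, u_prev, x_ref, horizon=10):
    a, b, c = linearise(x, u_prev)         # model re-linearised each instant
    def cost(u_seq):
        xi, j = x, 0.0
        for u in u_seq:                    # predict with the local linear model
            xi = a * xi + b * u + c
            j += (xi - x_ref) ** 2 + 0.01 * u ** 2
        return j
    res = minimize(cost, np.full(horizon, u_prev),
                   bounds=[(-2.0, 2.0)] * horizon)   # input constraints
    return res.x[0]                        # apply only the first move

x, u = 0.0, 0.0
for _ in range(30):
    u = mpc_step(x, u, x_ref=1.0)
    x = f(x, u)                            # the plant evolves nonlinearly
print(round(x, 3), round(u, 3))
```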
On damage diagnosis for a wind turbine blade using pattern recognition
NASA Astrophysics Data System (ADS)
Dervilis, N.; Choi, M.; Taylor, S. G.; Barthorpe, R. J.; Park, G.; Farrar, C. R.; Worden, K.
2014-03-01
With the increased interest in implementation of wind turbine power plants in remote areas, structural health monitoring (SHM) will be one of the keys to the efficient establishment of wind turbines in the energy arena. Detection of blade damage at an early stage is a critical problem, as blade failure can lead to a catastrophic outcome for the entire wind turbine system. Experimental measurements from vibration analysis were extracted from a 9 m CX-100 blade by researchers at Los Alamos National Laboratory (LANL) throughout a full-scale fatigue test conducted at the National Renewable Energy Laboratory (NREL) and National Wind Technology Center (NWTC). The blade was harmonically excited at its first natural frequency using a Universal Resonant EXcitation (UREX) system. In the current study, machine learning algorithms based on Artificial Neural Networks (ANNs) are used, including an Auto-Associative Neural Network (AANN) based on a standard ANN form and a novel approach to auto-association with Radial Basis Function (RBF) networks, optimised for fast and efficient runs. This paper introduces such pattern recognition methods into the wind energy field and attempts to address their effectiveness by combining vibration response data with novelty detection techniques.
Santonastaso, Giovanni Francesco; Bortone, Immacolata; Chianese, Simeone; Di Nardo, Armando; Di Natale, Michele; Erto, Alessandro; Karatza, Despina; Musmarra, Dino
2017-09-19
This paper presents a method to optimise a discontinuous permeable adsorptive barrier (PAB-D). The method is based on the comparison of different PAB-D configurations obtained by changing some of the main PAB-D design parameters. In particular, the well diameters, the distance between two consecutive passive wells and the distance between two consecutive well lines were varied, and a cost analysis for each configuration was carried out in order to define the best-performing and most cost-effective PAB-D configuration. As a case study, a benzene-contaminated aquifer located in an urban area in the north of Naples (Italy) was considered. The PAB-D configuration with a well diameter of 0.8 m proved to be the best layout in terms of performance and cost-effectiveness. Moreover, in order to identify the best configuration for the remediation of the aquifer studied, a comparison with a continuous permeable adsorptive barrier (PAB-C) was added; this showed a 40% reduction in total remediation costs when using the optimised PAB-D.
Cottenet, Geoffrey; Blancpain, Carine; Sonnard, Véronique; Chuah, Poh Fong
2013-08-01
Considering the increase in the total cultivated land area dedicated to genetically modified organisms (GMOs), consumers' perception of GMOs and the need to comply with various local GMO legislation, efficient and accurate analytical methods are needed for GMO detection and identification. Considered the gold standard for GMO analysis, real-time polymerase chain reaction (RTi-PCR) technology was optimised to produce a high-throughput GMO screening method. Based on 24 simultaneous multiplex RTi-PCR reactions run on a ready-to-use 384-well plate, this new procedure allows the detection and identification of 47 targets in seven samples in duplicate. To comply with GMO analytical quality requirements, a negative and a positive control were analysed in parallel. In addition, an internal positive control was included in each reaction well for the detection of potential PCR inhibition. Tested on non-GM materials, on different GM events and on proficiency test samples, the method offered high specificity and sensitivity, with an absolute limit of detection between 1 and 16 copies depending on the target. Easy to use, fast and cost-efficient, this multiplex approach fits the purpose of GMO testing laboratories.
Muniroh, M S; Sariah, M; Zainal Abidin, M A; Lima, N; Paterson, R R M
2014-05-01
Detection of basal stem rot (BSR) of oil palms caused by Ganoderma has been based on foliar symptoms and the production of basidiomata. Enzyme-linked immunosorbent assays with polyclonal antibodies (ELISA-PAB) and PCR have been proposed as early detection methods for the disease, but these techniques are complex, time-consuming and have accuracy limitations. An ergosterol method was developed which correlated well with the degree of infection in oil palms, including samples growing in plantations; however, the method was open to further optimisation. The current study was designed to develop a simpler, more rapid and more efficient ergosterol method with utility in the field, involving the use of microwave extraction. The optimised procedure involved suspending a small amount of Ganoderma, or Ganoderma-infected oil palm, in a low volume of solvent, followed by irradiation in a conventional microwave oven at 70°C and medium-high power for 30 s, resulting in simultaneous extraction and saponification. Ergosterol was detected by thin layer chromatography (TLC) and quantified using high performance liquid chromatography with diode array detection. The TLC method was novel and provided a simple, inexpensive method with utility in the field. The new method was particularly effective at extracting high yields of ergosterol from infected oil palm and enables rapid on-site analysis of field samples, allowing infected oil palms to be treated or culled very rapidly. Some limitations of the method are discussed herein. The procedures lend themselves to controlling the disease more effectively and allowing more effective use of the land currently employed to grow oil palms, thereby reducing pressure to develop new plantations.
Vehicle trajectory linearisation to enable efficient optimisation of the constant speed racing line
NASA Astrophysics Data System (ADS)
Timings, Julian P.; Cole, David J.
2012-06-01
A driver model is presented capable of optimising the trajectory of a simple dynamic nonlinear vehicle, at constant forward speed, so that progression along a predefined track is maximised as a function of time. In doing so, the model is able to continually operate a vehicle at its lateral-handling limit, maximising vehicle performance. The technique forms part of the solution to the motor racing objective of minimising lap time. A new approach to formulating the minimum lap time problem is motivated by the need for a more computationally efficient and robust tool-set for understanding on-the-limit driving behaviour. This has been achieved through set-point-dependent linearisation of the vehicle model and coupling of the vehicle-track system using an intrinsic coordinate description. Through this, the geometric vehicle trajectory is linearised relative to the track reference, leading to a new path optimisation algorithm that can be formulated as a computationally efficient convex quadratic programming problem.
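Once the trajectory is linearised relative to the track, the constant-speed path problem reduces to a convex quadratic programme; a toy analogue in Python is a bound-constrained least-squares fit of lateral offsets that cancels centreline curvature while staying within the track width. The geometry below is invented and far simpler than the paper's vehicle-track formulation.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Choose lateral offsets from the track centreline that minimise path
# curvature (second differences), subject to staying within the track
# half-width: a linearised, fixed-speed analogue of the racing line.
n = 50
half_width = 4.0
centre_curv = 0.05 * np.sin(np.linspace(0, 2 * np.pi, n))  # centreline curvature

# Second-difference operator: D @ offsets approximates the added curvature.
D = np.zeros((n - 2, n))
for i in range(n - 2):
    D[i, i:i + 3] = [1.0, -2.0, 1.0]

# Bound-constrained least squares: cancel the centreline curvature where
# possible while keeping every offset inside the track.
res = lsq_linear(D, -centre_curv[1:-1],
                 bounds=(-half_width, half_width))
offsets = res.x
print(offsets.round(2))
```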
Modelling the protocol stack in NCS with deterministic and stochastic petri net
NASA Astrophysics Data System (ADS)
Hui, Chen; Chunjie, Zhou; Weifeng, Zhu
2011-06-01
The protocol stack is the basis of networked control systems (NCS). Full or partial reconfiguration of the protocol stack offers both optimised communication services and improved system performance. Field testing is currently unrealistic for determining the performance of a reconfigurable protocol stack, and the Petri net formal description technique offers the best combination of intuitive representation, tool support and analytical capability. Traditionally, separation between the different layers of the OSI model has been common practice. Nevertheless, such a layered modelling and analysis framework for the protocol stack leads to a lack of global optimisation for protocol reconfiguration. In this article, we propose a general modelling and analysis framework for NCS based on the cross-layer concept, which establishes an efficient system scheduling model by abstracting the time constraints, task interrelations, and the processor and bus sub-models from the upper and lower layers (application, data link and physical layers). Cross-layer design can help to overcome the inadequacy of global optimisation through information sharing between protocol layers. To illustrate the framework, we take the controller area network (CAN) as a case study. The simulation results of the deterministic and stochastic Petri net (DSPN) model can help us adjust the message scheduling scheme and obtain better system performance.
NASA Astrophysics Data System (ADS)
Shaw-Stewart, James; Mattle, Thomas; Lippert, Thomas; Nagel, Matthias; Nüesch, Frank; Wokaun, Alexander
2013-08-01
Laser-induced forward transfer (LIFT) has already been used to fabricate various types of organic light-emitting diodes (OLEDs), and the process itself has been optimised and refined considerably since OLED pixels were first demonstrated. In particular, a dynamic release layer (DRL) of triazene polymer has been used, the environmental pressure has been reduced to a medium vacuum, and the donor-receiver gap has been controlled with the use of spacers. Insight into the effect of the LIFT process upon OLED pixel performance is presented here, obtained through optimisation of three-colour polyfluorene-based OLEDs. A marked dependence of pixel morphology quality on the cathode metal is observed, and the dependence on laser transfer fluence is also analysed. The pixel device performances are compared with those of conventionally fabricated devices, and cathode effects are examined in detail. The silver cathode pixels show more heterogeneous pixel morphologies and correspondingly poorer efficiency characteristics. The aluminium cathode pixels show greater green electroluminescent emission than both the silver cathode pixels and the conventionally fabricated aluminium devices, and the green emission has a fluence dependence for silver cathode pixels.
NASA Astrophysics Data System (ADS)
Nickless, A.; Rayner, P. J.; Erni, B.; Scholes, R. J.
2018-05-01
The design of an optimal network of atmospheric monitoring stations for the observation of carbon dioxide (CO2) concentrations can be obtained by applying an optimisation algorithm to a cost function based on minimising the posterior uncertainty in the CO2 fluxes obtained from a Bayesian inverse modelling solution. Two candidate optimisation methods were assessed: an evolutionary algorithm, the genetic algorithm (GA), and a deterministic algorithm, the incremental optimisation (IO) routine. This paper assessed the ability of the IO routine, in comparison with the more computationally demanding GA routine, to optimise the placement of a five-member network of CO2 monitoring sites in South Africa. The comparison considered the reduction in uncertainty of the overall flux estimate, the spatial similarity of solutions, and computational requirements. Although the IO routine failed to find the solution with the global maximum uncertainty reduction, the resulting solution had only fractionally lower uncertainty reduction than the GA solution, at only a quarter of the computational cost of the smallest specified GA configuration. The GA solution set showed more inconsistency if the number of iterations or the population size was small, and more so for a complex prior flux covariance matrix. When the GA completed with a sub-optimal solution, these solutions were similar in fitness to the best available solution. Two additional scenarios were considered, with the objective of creating circumstances where the GA might outperform the IO. The first scenario considered an established network, where the optimisation was required to add five stations to an existing five-member network. In the second scenario, the optimisation was based only on the uncertainty reduction within a subregion of the domain. The GA was able to find a better solution than the IO under both scenarios, but with only a marginal improvement in the uncertainty reduction. These results suggest that the best use of resources for the network design problem would be to improve the prior estimates of the flux uncertainties rather than to invest these resources in running a complex evolutionary optimisation algorithm. The authors recommend that, if time and computational resources allow, multiple optimisation techniques be used as part of a comprehensive suite of sensitivity tests when performing such an optimisation exercise. This will provide a selection of best solutions which can be ranked based on their utility and practicality.
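The IO routine is essentially a greedy build-up of the network; a minimal Python sketch follows, with a placeholder scoring function standing in for the Bayesian posterior-uncertainty reduction used in the study.

```python
import numpy as np

def incremental_design(candidates, k, uncertainty_reduction):
    """Greedy (incremental) network design: repeatedly add the station
    giving the largest marginal score increase, as in the IO routine
    described above (the scoring function here is a placeholder)."""
    network = []
    for _ in range(k):
        remaining = [c for c in candidates if c not in network]
        best = max(remaining,
                   key=lambda c: uncertainty_reduction(network + [c]))
        network.append(best)
    return network

# Toy score: diminishing-returns coverage of ten flux regions (invented).
rng = np.random.default_rng(0)
coverage = rng.random((20, 10))            # candidate-by-region sensitivities

def uncertainty_reduction(network):
    return np.sum(1.0 - np.prod(1.0 - coverage[network], axis=0))

print(incremental_design(list(range(20)), 5, uncertainty_reduction))
```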
NASA Astrophysics Data System (ADS)
Miza, A. T. N. A.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.
2017-09-01
In this study, computer-aided engineering was used to simulate injection moulding. A design of experiments (DOE) based on a Latin square orthogonal array was applied, and the relationships between the injection moulding parameters and warpage were identified from the experimental data. Response surface methodology (RSM) was used to validate model accuracy. The RSM and GA methods were then combined to determine the optimum injection moulding process parameters. The optimisation of injection moulding is thereby substantially improved, and the results show increased accuracy and reliability. The proposed method, combining RSM and a GA, also contributes to minimising warpage.
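A minimal Python sketch of the RSM-plus-GA idea described above: a quadratic response surface is fitted to hypothetical DOE results (two process parameters, warpage as the response), and a small genetic algorithm then searches the fitted surface for the parameter set minimising warpage. All data values and parameter ranges are illustrative, not the paper's.

```python
import numpy as np

# Hypothetical DOE results: columns = (packing pressure, melt temperature).
X = np.array([[80, 220], [80, 240], [100, 220], [100, 240], [90, 230]])
y = np.array([0.42, 0.35, 0.38, 0.30, 0.31])  # warpage (mm), illustrative

def quad_features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1*x2, x1**2, x2**2])

beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)  # RSM fit
warpage = lambda p: quad_features(np.atleast_2d(p)) @ beta   # surrogate

# Simple GA search over the fitted response surface
rng = np.random.default_rng(0)
pop = rng.uniform([80, 220], [100, 240], size=(40, 2))
for _ in range(100):
    fit = np.array([warpage(p)[0] for p in pop])
    parents = pop[np.argsort(fit)[:20]]                 # selection
    a = rng.random((20, 1))
    children = a*parents + (1 - a)*parents[::-1]        # arithmetic crossover
    children += rng.normal(0, 0.5, children.shape)      # mutation
    pop = np.vstack([parents, np.clip(children, [80, 220], [100, 240])])
best = pop[np.argmin([warpage(p)[0] for p in pop])]
```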
Optimising the efficiency of pulsed diode pumped Yb:YAG laser amplifiers for ns pulse generation.
Ertel, K; Banerjee, S; Mason, P D; Phillips, P J; Siebold, M; Hernandez-Gomez, C; Collier, J C
2011-12-19
We present a numerical model of a pulsed, diode-pumped Yb:YAG laser amplifier for the generation of high energy ns-pulses. This model is used to explore how optical-to-optical efficiency depends on factors such as pump duration, pump spectrum, pump intensity, doping concentration, and operating temperature. We put special emphasis on finding ways to achieve high efficiency within the practical limitations imposed by real-world laser systems, such as limited pump brightness and limited damage fluence. We show that a particularly advantageous way of improving efficiency within those constraints is operation at cryogenic temperature. Based on the numerical findings we present a concept for a scalable amplifier based on an end-pumped, cryogenic, gas-cooled multi-slab architecture.
Optimisation of Fabric Reinforced Polymer Composites Using a Variant of Genetic Algorithm
NASA Astrophysics Data System (ADS)
Axinte, Andrei; Taranu, Nicolae; Bejan, Liliana; Hudisteanu, Iuliana
2017-12-01
Fabric reinforced polymeric composites are high performance materials with a rather complex fabric geometry. Modelling this type of material is therefore a cumbersome task, especially when efficient use is targeted. One of the most important issues in its design process is the optimisation of the individual laminae and of the laminated structure as a whole. To this end, a parametric model of the material has been defined, emphasising the many geometric variables that need to be correlated in the complex process of optimisation. The input parameters involved in this work include the widths and heights of the tows and the laminate stacking sequence, which are discrete variables, while the gaps between adjacent tows and the height of the neat matrix are continuous variables. This work is one of the first attempts to use a genetic algorithm (GA) to optimise the geometrical parameters of satin reinforced multi-layer composites. Given the mixed type of the input parameters involved, an original software tool called SOMGA (Satin Optimisation with a Modified Genetic Algorithm) has been conceived and utilised in this work. The main goal is to find the best possible solution to the problem of designing a composite material that is able to withstand a given set of external, in-plane loads. The optimisation has been performed using a fitness function that can analyse and compare the mechanical behaviour of different fabric reinforced composites; the results are correlated with the ultimate strains, which demonstrates the efficiency of the composite structure.
Optimised design for a 1 kJ diode-pumped solid-state laser system
NASA Astrophysics Data System (ADS)
Mason, Paul D.; Ertel, Klaus; Banerjee, Saumyabrata; Phillips, P. Jonathan; Hernandez-Gomez, Cristina; Collier, John L.
2011-06-01
A conceptual design for a kJ-class diode-pumped solid-state laser (DPSSL) system based on cryogenic gas-cooled multislab ceramic Yb:YAG amplifier technology has been developed at the STFC as a building block towards a MJ-class source for inertial fusion energy (IFE) projects such as HiPER. In this paper, we present an overview of an amplifier design optimised for efficient generation of 1 kJ nanosecond pulses at 10 Hz repetition rate. In order to confirm the viability of this technology, a prototype version of this amplifier scaled to deliver 10 J at 10 Hz, DiPOLE, is under development at the Central Laser Facility. A progress update on the status of this system is also presented.
Energy efficiency in membrane bioreactors.
Barillon, B; Martin Ruel, S; Langlais, C; Lazarova, V
2013-01-01
Energy consumption remains the key factor for the optimisation of the performance of membrane bioreactors (MBRs). This paper presents the results of the detailed energy audits of six full-scale MBRs operated by Suez Environnement in France, Spain and the USA based on on-site energy measurement and analysis of plant operation parameters and treatment performance. Specific energy consumption is compared for two different MBR configurations (flat sheet and hollow fibre membranes) and for plants with different design, loads and operation parameters. The aim of this project was to understand how the energy is consumed in MBR facilities and under which operating conditions, in order to finally provide guidelines and recommended practices for optimisation of MBR operation and design to reduce energy consumption and environmental impacts.
NASA Astrophysics Data System (ADS)
Wang, Hui; Chen, Huansheng; Wu, Qizhong; Lin, Junmin; Chen, Xueshun; Xie, Xinwei; Wang, Rongrong; Tang, Xiao; Wang, Zifa
2017-08-01
The Global Nested Air Quality Prediction Modeling System (GNAQPMS) is the global version of the Nested Air Quality Prediction Modeling System (NAQPMS), which is a multi-scale chemical transport model used for air quality forecast and atmospheric environmental research. In this study, we present the porting and optimisation of GNAQPMS on a second-generation Intel Xeon Phi processor, codenamed Knights Landing (KNL). Compared with the first-generation Xeon Phi coprocessor (codenamed Knights Corner, KNC), KNL has many new hardware features such as a bootable processor, high-performance in-package memory and ISA compatibility with Intel Xeon processors. In particular, we describe the five optimisations we applied to the key modules of GNAQPMS, including the CBM-Z gas-phase chemistry, advection, convection and wet deposition modules. These optimisations work well on both the KNL 7250 processor and the Intel Xeon E5-2697 V4 processor. They include (1) updating the pure Message Passing Interface (MPI) parallel mode to the hybrid parallel mode with MPI and OpenMP in the emission, advection, convection and gas-phase chemistry modules; (2) fully employing the 512 bit wide vector processing units (VPUs) on the KNL platform; (3) reducing unnecessary memory access to improve cache efficiency; (4) reducing the thread local storage (TLS) in the CBM-Z gas-phase chemistry module to improve its OpenMP performance; and (5) changing the global communication from writing/reading interface files to MPI functions to improve the performance and the parallel scalability. These optimisations greatly improved the GNAQPMS performance. The same optimisations also work well for the Intel Xeon Broadwell processor, specifically E5-2697 v4. Compared with the baseline version of GNAQPMS, the optimised version was 3.51 × faster on KNL and 2.77 × faster on the CPU. Moreover, the optimised version ran at 26 % lower average power on KNL than on the CPU. With the combined performance and energy improvement, the KNL platform was 37.5 % more efficient on power consumption compared with the CPU platform. The optimisations also enabled much further parallel scalability on both the CPU cluster and the KNL cluster scaled to 40 CPU nodes and 30 KNL nodes, with a parallel efficiency of 70.4 and 42.2 %, respectively.
Multi-objective optimisation and decision-making of space station logistics strategies
NASA Astrophysics Data System (ADS)
Zhu, Yue-he; Luo, Ya-zhong
2016-10-01
Space station logistics strategy optimisation is a complex engineering problem with multiple objectives. Finding a decision-maker-preferred compromise solution becomes more significant when solving such a problem. However, the decision-maker-preferred solution is not easy to determine using traditional methods. Thus, a hybrid approach that combines a multi-objective evolutionary algorithm, physical programming, and the differential evolution (DE) algorithm is proposed to deal with the optimisation and decision-making of space station logistics strategies. A multi-objective evolutionary algorithm is used to acquire a Pareto frontier and help determine the range parameters of the physical programming. Physical programming is employed to convert the four-objective problem into a single-objective problem, and a DE algorithm is applied to solve the resulting physical-programming-based optimisation problem. Five kinds of objective preference are simulated and compared. The simulation results indicate that the proposed approach can produce good compromise solutions corresponding to different decision-makers' preferences.
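The conversion of several objectives into a single preference-based objective, followed by a DE search, might be sketched as follows. The four surrogate objectives and the crude stand-in for a physical-programming class function are assumptions for illustration only; the search itself uses SciPy's differential_evolution.

```python
import numpy as np
from scipy.optimize import differential_evolution

def objectives(x):
    # Hypothetical surrogates for the four logistics objectives
    # (e.g. launch mass, cost, risk, schedule); x = strategy parameters.
    f1 = (x[0] - 1)**2
    f2 = (x[1] - 2)**2
    f3 = abs(x[0] * x[1] - 1)
    f4 = x[0]**2 + x[1]**2
    return np.array([f1, f2, f3, f4])

def class_function(f, ranges):
    # Crude stand-in for a physical-programming class function: maps an
    # objective value onto [0, 1] within decision-maker preference ranges.
    lo, hi = ranges
    return np.clip((f - lo) / (hi - lo), 0.0, 1.0)**2

PREFS = [(0, 4), (0, 4), (0, 2), (0, 8)]  # ranges informed by a Pareto front

def aggregate(x):
    # Single-objective surrogate obtained from the four preferences
    return sum(class_function(f, r) for f, r in zip(objectives(x), PREFS))

result = differential_evolution(aggregate, bounds=[(-3, 3), (-3, 3)], seed=1)
```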
Collaborative development for setup, execution, sharing and analytics of complex NMR experiments.
Irvine, Alistair G; Slynko, Vadim; Nikolaev, Yaroslav; Senthamarai, Russell R P; Pervushin, Konstantin
2014-02-01
Factory settings of NMR pulse sequences are rarely ideal for every scenario in which they are utilised. The optimisation of NMR experiments has for many years been performed locally, with implementations often specific to an individual spectrometer. Furthermore, these optimised experiments are normally retained solely for the use of an individual laboratory, spectrometer or even single user. Here we introduce a web-based service that provides a database for the deposition, annotation and optimisation of NMR experiments. The application uses a Wiki environment to enable the collaborative development of pulse sequences. It also provides a flexible mechanism to automatically generate NMR experiments from deposited sequences. Multidimensional NMR experiments of proteins and other macromolecules consume significant resources, in terms of both spectrometer time and effort required to analyse the results. Systematic analysis of simulated experiments can enable optimal allocation of NMR resources for structural analysis of proteins. Our web-based application (http://nmrplus.org) provides all the necessary information, includes the auxiliaries (waveforms, decoupling sequences etc.), for analysis of experiments by accurate numerical simulation of multidimensional NMR experiments. The online database of the NMR experiments, together with a systematic evaluation of their sensitivity, provides a framework for selection of the most efficient pulse sequences. The development of such a framework provides a basis for the collaborative optimisation of pulse sequences by the NMR community, with the benefits of this collective effort being available to the whole community. Copyright © 2013 Elsevier Inc. All rights reserved.
Dwell time-based stabilisation of switched delay systems using free-weighting matrices
NASA Astrophysics Data System (ADS)
Koru, Ahmet Taha; Delibaşı, Akın; Özbay, Hitay
2018-01-01
In this paper, we present a quasi-convex optimisation method to minimise an upper bound on the dwell time for stability of switched delay systems. Piecewise Lyapunov-Krasovskii functionals are introduced, and the upper bound on the derivative of the Lyapunov functionals is estimated by the free-weighting matrices method to investigate the non-switching stability of each candidate subsystem. A sufficient condition for the dwell time is then derived to guarantee the asymptotic stability of the switched delay system. Once these conditions are represented by a set of linear matrix inequalities (LMIs), the dwell time optimisation problem can be formulated as a standard quasi-convex optimisation problem. Numerical examples are given to illustrate the improvements over previously obtained dwell time bounds. Using the results obtained in the stability case, we present a nonlinear minimisation algorithm to synthesise controllers that minimise the dwell time. The algorithm solves the problem by successive linearisation of the nonlinear conditions.
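The bisection-over-LMIs structure of such a dwell time computation can be sketched with CVXPY as below. Note that this is a delay-free simplification (standard Lyapunov inequalities plus the classical dwell-time coupling condition), not the paper's Lyapunov-Krasovskii/free-weighting-matrix conditions; the subsystem matrices are illustrative.

```python
import numpy as np
import cvxpy as cp
from scipy.linalg import expm

A = [np.array([[0., 1.], [-2., -1.]]),
     np.array([[0., 1.], [-1., -3.]])]   # two stable subsystems, delay-free

def dwell_time_feasible(tau, eps=1e-6):
    n = A[0].shape[0]
    P = [cp.Variable((n, n), symmetric=True) for _ in A]
    cons = []
    for i, Ai in enumerate(A):
        cons += [P[i] >> eps*np.eye(n),
                 Ai.T @ P[i] + P[i] @ Ai << -eps*np.eye(n)]
        E = expm(Ai*tau)                  # state transition over one dwell
        for j in range(len(A)):
            if j != i:                    # coupling condition at switches
                cons += [E.T @ P[j] @ E - P[i] << -eps*np.eye(n)]
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve(solver=cp.SCS)
    return prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)

lo, hi = 0.0, 10.0
for _ in range(30):                       # bisection on the dwell time bound
    mid = 0.5*(lo + hi)
    lo, hi = (lo, mid) if dwell_time_feasible(mid) else (mid, hi)
```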
3D Reconstruction of human bones based on dictionary learning.
Zhang, Binkai; Wang, Xiang; Liang, Xiao; Zheng, Jinjin
2017-11-01
An effective method for reconstructing a 3D model of human bones from computed tomography (CT) image data based on dictionary learning is proposed. In this study, the dictionary comprises the vertices of triangular meshes, and the sparse coefficient matrix indicates the connectivity information. For better reconstruction performance, we propose a balance coefficient between the approximation and regularisation terms and a method for its optimisation. Moreover, we apply a local updating strategy and a mesh-optimisation method to update the dictionary and the sparse matrix, respectively. The two updating steps are iterated alternately until the objective function converges, yielding a reconstructed mesh with high accuracy and regularity. The experimental results show that the proposed method has the potential to obtain high-precision, high-quality triangular meshes for rapid prototyping, medical diagnosis, and tissue engineering. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
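The alternating update scheme, stripped of the mesh-specific details, can be sketched in generic form: a sparse-coding step (ridge solve plus soft thresholding) alternates with a dictionary update until convergence. The regularisation weights and update rules here are generic stand-ins, not the paper's mesh-optimisation steps.

```python
import numpy as np

def dictionary_learning(V, n_atoms, lam=0.1, n_iter=50, seed=0):
    """Alternating minimisation of ||V - D @ C||_F^2 + lam*||C||_1.
    In the paper's setting V would hold mesh vertex data, D the learned
    dictionary and C the sparse connectivity coefficients; here the
    updates are generic least-squares / soft-threshold steps.
    """
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((V.shape[0], n_atoms))
    for _ in range(n_iter):
        # Sparse-coding step: ridge solve followed by soft thresholding
        C = np.linalg.solve(D.T @ D + 1e-8*np.eye(n_atoms), D.T @ V)
        C = np.sign(C) * np.maximum(np.abs(C) - lam, 0.0)
        # Dictionary step: least squares fit with normalised atoms
        D = V @ np.linalg.pinv(C)
        D /= np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)
    return D, C
```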
Robust imaging and gene delivery to study human lymphoblastoid cell lines.
Jolly, Lachlan A; Sun, Ying; Carroll, Renée; Homan, Claire C; Gecz, Jozef
2018-06-20
Lymphoblastoid cell lines (LCLs) have been by far the most prevalent cell type used to study the genetics underlying normal and disease-relevant human phenotypic variation, across personal to epidemiological scales. In contrast, only few studies have explored the use of LCLs in functional genomics and mechanistic studies. Two major reasons are technical, as (1) interrogating the sub-cellular spatial information of LCLs is challenged by their non-adherent nature, and (2) LCLs are refractory to gene transfection. Methodological details relating to techniques that overcome these limitations are scarce, largely inadequate (without additional knowledge and expertise), and optimisation has never been described. Here we compare, optimise, and convey such methods in-depth. We provide a robust method to adhere LCLs to coverslips, which maintained cellular integrity, morphology, and permitted visualisation of sub-cellular structures and protein localisation. Next, we developed the use of lentiviral-based gene delivery to LCLs. Through empirical and combinatorial testing of multiple transduction conditions, we improved transduction efficiency from 3% up to 48%. Furthermore, we established strategies to purify transduced cells, to achieve sustainable cultures containing >85% transduced cells. Collectively, our methodologies provide a vital resource that enables the use of LCLs in functional cell and molecular biology experiments. Potential applications include the characterisation of genetic variants of unknown significance, the interrogation of cellular disease pathways and mechanisms, and high-throughput discovery of genetic modifiers of disease states among others.
Design and optimisation of a composite skin for a morphing wing
NASA Astrophysics Data System (ADS)
Michaud, Francois
Economic and environmental concerns are major drivers for the development of new technologies in aeronautics. The MDO-505 project, entitled Morphing Architectures and Related Technologies for Wing Efficiency Improvement, was born in this context. The objective of this project is to design an active morphing wing that improves laminarity and thereby reduces aircraft fuel consumption and emissions. The research carried out led to the design and optimisation of an adaptive composite skin that improves laminarity while maintaining structural integrity. First, a three-step optimisation method was developed with the objective of minimising the mass of the composite skin while ensuring that, through active control of the morphing surface, it conforms to the desired aerodynamic profiles. The optimisation process also included strength, stability and stiffness constraints on the composite skin. After optimisation, the optimised skin was simplified to ease manufacturing and to comply with Bombardier Aerospace design rules. This optimisation process produced a composite skin whose shape deviations from the optimised aerodynamic profiles were greatly reduced. Aerodynamic analyses based on these shapes predicted good improvements in laminarity. Subsequently, a series of analytical validations was carried out to verify the structural integrity of the composite skin following the methods generally used by Bombardier Aerospace. First, a comparative finite-element analysis validated that the stiffness of the morphing wing was equivalent to that of the original wing section. The finite-element model was then coupled with calculation spreadsheets to validate the stability and strength of the composite skin under real aerodynamic load cases. Finally, a bolted-joint analysis was performed using an in-house tool named LJ 85 BJSFM GO.v9 developed by Bombardier Aerospace. These analyses numerically validated the structural integrity of the composite skin for typical aeronautical loadings and material allowables.
Zarb, Francis; McEntee, Mark F; Rainford, Louise
2015-06-01
To evaluate visual grading characteristics (VGC) and ordinal regression analysis during head CT optimisation as a potential alternative to visual grading assessment (VGA), traditionally employed to score anatomical visualisation. Patient images (n = 66) were obtained using current and optimised imaging protocols from two CT suites: a 16-slice scanner at the national Maltese centre for trauma and a 64-slice scanner in a private centre. Local resident radiologists (n = 6) performed VGA, followed by VGC and ordinal regression analysis. VGC alone indicated that the optimised protocols had image quality similar to the current protocols. Ordinal logistic regression analysis provided an in-depth, criterion-by-criterion evaluation, allowing the selective implementation of the protocols. The local radiology review panel supported the implementation of optimised protocols for brain CT examinations (including trauma) in one centre, achieving radiation dose reductions ranging from 24% to 36%. In the second centre a 29% reduction in radiation dose was achieved for follow-up cases. The combined use of VGC and ordinal logistic regression analysis led to clinical decisions being taken on the implementation of the optimised protocols. This improved method of image quality analysis provided the evidence to support imaging protocol optimisation, resulting in significant radiation dose savings. • There is a need for scientifically based image quality evaluation during CT optimisation. • VGC and ordinal regression analysis in combination led to better informed clinical decisions. • VGC and ordinal regression analysis led to dose reductions without compromising diagnostic efficacy.
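For readers wanting to reproduce the ordinal-regression part of such an analysis, a minimal sketch using statsmodels' OrderedModel is given below; the scores and the single protocol predictor are invented for illustration.

```python
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical VGA data: each row is one image-criterion score (1-4 ordinal
# scale) with the protocol (0 = current, 1 = optimised) as the predictor.
scores = pd.DataFrame({
    "score": pd.Categorical([3, 4, 2, 3, 4, 3, 2, 4, 3, 3],
                            categories=[1, 2, 3, 4], ordered=True),
    "optimised": [0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
})

model = OrderedModel(scores["score"], scores[["optimised"]], distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())  # the 'optimised' coefficient tests the protocol effect
```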
Evolving aerodynamic airfoils for wind turbines through a genetic algorithm
NASA Astrophysics Data System (ADS)
Hernández, J. J.; Gómez, E.; Grageda, J. I.; Couder, C.; Solís, A.; Hanotel, C. L.; Ledesma, JI
2017-01-01
Genetic algorithms stand out for airfoil optimisation because of the virtues of their mutation and crossover techniques. In this work we propose a genetic algorithm with arithmetic crossover rules. The optimisation criteria are the maximisation of both aerodynamic efficiency and lift coefficient, and the minimisation of the drag coefficient. The algorithm substantially reduces computational cost and shows high performance, obtaining in a few iterations airfoils optimised for Mexico City's specific wind conditions from generic wind turbine airfoils designed for higher Reynolds numbers.
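The arithmetic crossover rule itself is compact; a sketch is shown below, where the parent vectors are assumed to encode airfoil shape parameters (the fitness evaluation combining lift and drag coefficients is not shown).

```python
import numpy as np

def arithmetic_crossover(parent_a, parent_b, rng):
    """Blend two airfoil parameter vectors (e.g. control-point ordinates).
    The random weight keeps children inside the convex hull of the parents,
    which helps keep geometries smooth between generations."""
    w = rng.random()
    return w*parent_a + (1 - w)*parent_b, (1 - w)*parent_a + w*parent_b

rng = np.random.default_rng(42)
child1, child2 = arithmetic_crossover(np.array([0.02, 0.05, 0.01]),
                                      np.array([0.03, 0.04, 0.02]), rng)
```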
Griffanti, Ludovica; Zamboni, Giovanna; Khan, Aamira; Li, Linxin; Bonifacio, Guendalina; Sundaresan, Vaanathi; Schulz, Ursula G; Kuker, Wilhelm; Battaglini, Marco; Rothwell, Peter M; Jenkinson, Mark
2016-11-01
Reliable quantification of white matter hyperintensities of presumed vascular origin (WMHs) is increasingly needed, given the presence of these MRI findings in patients with several neurological and vascular disorders, as well as in elderly healthy subjects. We present BIANCA (Brain Intensity AbNormality Classification Algorithm), a fully automated, supervised method for WMH detection, based on the k-nearest neighbour (k-NN) algorithm. Relative to previous k-NN based segmentation methods, BIANCA offers different options for weighting the spatial information, local spatial intensity averaging, and different options for the choice of the number and location of the training points. BIANCA is multimodal and highly flexible so that the user can adapt the tool to their protocol and specific needs. We optimised and validated BIANCA on two datasets with different MRI protocols and patient populations (a "predominantly neurodegenerative" and a "predominantly vascular" cohort). BIANCA was first optimised on a subset of images for each dataset in terms of overlap and volumetric agreement with a manually segmented WMH mask. The correlation between the volumes extracted with BIANCA (using the optimised set of options), the volumes extracted from the manual masks and visual ratings showed that BIANCA is a valid alternative to manual segmentation. The optimised set of options was then applied to the whole cohorts and the resulting WMH volume estimates showed good correlations with visual ratings and with age. Finally, we performed a reproducibility test, to evaluate the robustness of BIANCA, and compared BIANCA performance against existing methods. Our findings suggest that BIANCA, which will be freely available as part of the FSL package, is a reliable method for automated WMH segmentation in large cross-sectional cohort studies. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
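At its core, this kind of k-NN voxel classification can be sketched with scikit-learn as follows; the feature values, the number of neighbours and the single spatial-weighting scalar are illustrative stand-ins for BIANCA's options.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical training data: one row per voxel, with multimodal intensity
# features (e.g. FLAIR, T1) followed by spatial coordinates, down-weighted
# to control the influence of location on the classification.
features = np.array([[1.9, 0.7, 0.31, 0.62, 0.40],
                     [0.8, 1.1, 0.55, 0.43, 0.52],
                     [2.1, 0.6, 0.33, 0.60, 0.38],
                     [0.7, 1.0, 0.58, 0.41, 0.50]])
labels = np.array([1, 0, 1, 0])         # 1 = WMH, 0 = normal tissue

spatial_weight = 0.5                     # down-weight the 3 spatial columns
features[:, 2:] *= spatial_weight

clf = KNeighborsClassifier(n_neighbors=3).fit(features, labels)
query = np.array([[2.0, 0.65, 0.32*spatial_weight, 0.61*spatial_weight,
                   0.39*spatial_weight]])
prob_wmh = clf.predict_proba(query)[0, 1]   # lesion probability per voxel
```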
Storms, S M; Feltus, A; Barker, A R; Joly, M-A; Girard, M
2009-03-01
Measurement of somatropin charged variants by isoelectric focusing was replaced with capillary zone electrophoresis in the January 2006 European Pharmacopoeia Supplement 5.3, based on results from an interlaboratory collaborative study. Due to incompatibilities and method-robustness issues encountered prior to verification, a number of method parameters required optimisation. As the use of a diode array detector at 195 nm or 200 nm led to a loss of resolution, a variable wavelength detector using a 200 nm filter was employed. Improved injection repeatability was obtained by increasing the injection time and pressure, and changing the sample diluent from water to running buffer. Finally, definition of capillary pre-treatment and rinse procedures resulted in more consistent separations over time. Method verification data are presented demonstrating linearity, specificity, repeatability, intermediate precision, limit of quantitation, sample stability, solution stability, and robustness. Based on these experiments, several modifications to the current method have been recommended and incorporated into the European Pharmacopoeia to help improve method performance across laboratories globally.
NASA Astrophysics Data System (ADS)
Han, Ke-Zhen; Feng, Jian; Cui, Xiaohong
2017-10-01
This paper considers the fault-tolerant optimised tracking control (FTOTC) problem for an unknown discrete-time linear system. A research scheme is proposed on the basis of data-based parity space identification, reinforcement learning and residual compensation techniques. The main characteristic of this scheme lies in parity-space-identification-based simultaneous tracking control and residual compensation. The specific technical line consists of four main steps: apply a subspace-aided method to design an observer-based residual generator; use a reinforcement Q-learning approach to solve for the optimised tracking control policy; rely on robust H∞ theory to achieve noise attenuation; and adopt fault estimation triggered by the residual generator to perform fault compensation. To clarify the design and implementation procedures, an integrated algorithm is further constructed to link these four functional units. Detailed analysis and proofs are subsequently given to establish the guaranteed FTOTC performance of the proposed approach. Finally, a case simulation is provided to verify its effectiveness.
NASA Astrophysics Data System (ADS)
Fouladi, Ehsan; Mojallali, Hamed
2018-01-01
In this paper, an adaptive backstepping controller is tuned to synchronise two chaotic Colpitts oscillators in a master-slave configuration. The parameters of the controller are determined using the shark smell optimisation (SSO) algorithm. Numerical results are presented and compared with those of the particle swarm optimisation (PSO) algorithm. Simulation results show better performance in terms of accuracy and convergence for the proposed method compared with the PSO-optimised controller and a non-optimised backstepping controller.
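A generic PSO gain-tuning loop of the kind used for such comparisons might look like the sketch below. The cost function standing in for the simulated synchronisation error, the parameter bounds and the PSO coefficients are all assumptions for illustration.

```python
import numpy as np

def tracking_error(gains):
    # Hypothetical stand-in for simulating the master-slave Colpitts
    # synchronisation and returning an integrated synchronisation error.
    return np.sum((gains - np.array([2.0, 0.5, 1.5]))**2)

rng = np.random.default_rng(7)
n, dim, w, c1, c2 = 30, 3, 0.7, 1.5, 1.5
x = rng.uniform(0, 5, (n, dim))
v = np.zeros((n, dim))
pbest = x.copy()
pcost = np.array([tracking_error(p) for p in x])
gbest = pbest[np.argmin(pcost)]
for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)   # velocity update
    x = np.clip(x + v, 0, 5)                          # keep gains in bounds
    cost = np.array([tracking_error(p) for p in x])
    better = cost < pcost
    pbest[better], pcost[better] = x[better], cost[better]
    gbest = pbest[np.argmin(pcost)]                   # best controller gains
```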
Optimisation of novel method for the extraction of steviosides from Stevia rebaudiana leaves.
Puri, Munish; Sharma, Deepika; Barrow, Colin J; Tiwary, A K
2012-06-01
Stevioside, a diterpene glycoside, is well known for its intense sweetness and is used as a non-caloric sweetener. Its potential widespread use requires an easy and effective extraction method. Enzymatic extraction of stevioside from Stevia rebaudiana leaves with cellulase, pectinase and hemicellulase, using various parameters, such as concentration of enzyme, incubation time and temperature, was optimised. Hemicellulase was observed to give the highest stevioside yield (369.23±0.11 μg) in 1 h in comparison to cellulase (359±0.30 μg) and pectinases (333±0.55 μg). Extraction from leaves under optimised conditions showed a remarkable increase in the yield (35 times) compared with a control experiment. The extraction conditions were further optimised using response surface methodology (RSM). A central composite design (CCD) was used for experimental design and analysis of the results to obtain optimal extraction conditions. Based on RSM analysis, temperature of 51-54 °C, time of 36-45 min and the cocktail of pectinase, cellulase and hemicellulase, set at 2% each, gave the best results. Under the optimised conditions, the experimental values were in close agreement with the prediction model and resulted in a three times yield enhancement of stevioside. The isolated stevioside was characterised through 1H-NMR spectroscopy, by comparison with a stevioside standard. Copyright © 2011 Elsevier Ltd. All rights reserved.
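The CCD itself is easy to generate in coded units; the sketch below builds the factorial, axial and centre points for three factors (temperature, time, enzyme dose), which would then be scaled to the real ranges, such as the 51-54 °C and 36-45 min windows above, before running the experiments.

```python
import numpy as np
from itertools import product

def central_composite(k, alpha=None):
    """Coded design points of a rotatable central composite design:
    2^k factorial corners, 2k axial (star) points and a centre point."""
    alpha = alpha or (2**k)**0.25          # rotatability criterion
    corners = np.array(list(product([-1, 1], repeat=k)), dtype=float)
    axial = np.vstack([alpha*np.eye(k), -alpha*np.eye(k)])
    centre = np.zeros((1, k))
    return np.vstack([corners, axial, centre])

# Three factors: temperature, time, enzyme dose (coded units)
design = central_composite(3)
```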
Better powder diffractometers. II—Optimal choice of U, V and W
NASA Astrophysics Data System (ADS)
Cussen, L. D.
2007-12-01
This article presents a technique for optimising constant wavelength (CW) neutron powder diffractometers (NPDs) using conventional nonlinear least squares methods, believed to be the first such design optimisation for a neutron spectrometer. The validity of this approach and discussion should extend beyond the Gaussian element approximation used, and also to instruments using different radiation, such as X-rays. The approach could later be extended to include vertical and perhaps horizontal focusing monochromators, and probably to other types of instruments such as three-axis spectrometers. It is hoped that this approach will help in comparisons of CW and time-of-flight (TOF) instruments. Recent work showed that many different beam element combinations can give identical resolution on CW NPDs and presented a procedure to find these combinations and an "optimum" choice of detector collimation. Those results remove the previous redundancy in the description of instrument performance and permit a least squares optimisation of the design. The new inputs needed are identified as the sample plane spacings (dS) of interest in the measurement. The optimisation requires a "quality factor", QPD, chosen here as minimising the worst Bragg peak separation ability over some measurement range (dS) while maintaining intensity; any other QPD desired could be substituted. It is argued that high resolution and high intensity powder diffractometers (HRPDs and HIPDs) should have similar designs adjusted by a single scaling factor. Simulated comparisons are described, suggesting significant improvements in performance for CW HIPDs. Optimisation with unchanged wavelength suggests improvements by factors of about 2 for HRPDs and 25 for HIPDs. A recently quantified design trade-off between the maximum line intensity possible and the degree of variation of angular resolution over the scattering angle range leads to efficiency gains at short wavelengths. This in turn leads in practice to another trade-off between this efficiency gain and losses at short wavelength due to technical effects; the exact gains from varying wavelength depend on the details of these losses. Simulations suggest that the total potential performance gains may be very significant: factors of about 3 for HRPDs and more than 90 for HIPDs.
Arcos-García, Álvaro; Álvarez-García, Juan A; Soria-Morillo, Luis M
2018-03-01
This paper presents a Deep Learning approach for traffic sign recognition systems. Several classification experiments are conducted over publicly available traffic sign datasets from Germany and Belgium using a Deep Neural Network which comprises Convolutional layers and Spatial Transformer Networks. These trials are designed to measure the impact of diverse factors, with the end goal of designing a Convolutional Neural Network that can improve the state of the art in the traffic sign classification task. First, different adaptive and non-adaptive stochastic gradient descent optimisation algorithms such as SGD, SGD-Nesterov, RMSprop and Adam are evaluated. Subsequently, multiple combinations of Spatial Transformer Networks placed at distinct positions within the main neural network are analysed. The recognition rate of the proposed Convolutional Neural Network reports an accuracy of 99.71% in the German Traffic Sign Recognition Benchmark, outperforming previous state-of-the-art methods and also being more efficient in terms of memory requirements. Copyright © 2018 Elsevier Ltd. All rights reserved.
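The optimiser comparison reduces to swapping the torch.optim instance while keeping the model and data fixed; a minimal PyTorch sketch follows, with an intentionally tiny stand-in network and illustrative hyperparameters rather than the paper's architecture or settings.

```python
import torch
import torch.nn as nn

def make_optimiser(name, params):
    # The four optimiser families compared in the paper; the learning
    # rates here are illustrative defaults, not the paper's settings.
    return {
        "sgd": lambda: torch.optim.SGD(params, lr=0.01),
        "sgd_nesterov": lambda: torch.optim.SGD(params, lr=0.01,
                                                momentum=0.9, nesterov=True),
        "rmsprop": lambda: torch.optim.RMSprop(params, lr=0.001),
        "adam": lambda: torch.optim.Adam(params, lr=0.001),
    }[name]()

model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.Flatten(), nn.Linear(16*32*32, 43))  # 43 GTSRB classes
criterion = nn.CrossEntropyLoss()
optimiser = make_optimiser("adam", model.parameters())

images = torch.randn(8, 3, 32, 32)        # stand-in for a training batch
labels = torch.randint(0, 43, (8,))
loss = criterion(model(images), labels)
optimiser.zero_grad()
loss.backward()
optimiser.step()
```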
An efficient fermentation method for the degradation of cyanogenic glycosides in flaxseed.
Wu, C-F; Xu, X-M; Huang, S-H; Deng, M-C; Feng, A-J; Peng, J; Yuan, J-P; Wang, J-H
2012-01-01
Recently, flaxseed has become increasingly popular in the health food market because it contains a considerable amount of specific beneficial nutrients such as lignans and omega-3 fatty acids. However, the presence of cyanogenic glycosides (CGs) in flaxseed severely limits the exploitation of its health benefits and nutritive value. We therefore developed an effective fermentation method, optimised by response surface methodology (RSM), for degrading CGs with an enzymatic preparation that includes 12.5% β-glucosidase and 8.9% cyanide hydratase. The optimised conditions resulted in a maximum CG degradation level of 99.3%, reducing the concentration of cyanide in the flaxseed powder from 1.156 to 0.015 mg g(-1) after 48 h of fermentation. The avoidance of steam heat to evaporate hydrocyanic acid (HCN) results in lower energy consumption and no environmental pollution. In addition, the detoxified flaxseed retained the beneficial nutrients, lignans and fatty acids at the same level as untreated flaxseed, and this method could provide a new means of removing CGs from other edible plants, such as cassava, almond and sorghum, by simultaneously expressing cyanide hydratase and β-glucosidase.
NASA Astrophysics Data System (ADS)
Wesemann, Johannes; Burgholzer, Reinhard; Herrnegger, Mathew; Schulz, Karsten
2017-04-01
In recent years, considerable research in hydrological modelling has been invested in improving the automatic calibration of rainfall-runoff models. This includes, for example, (1) the implementation of new optimisation methods, (2) the incorporation of new and different objective criteria and signatures in the optimisation and (3) the usage of auxiliary data sets apart from runoff. Nevertheless, in many applications manual calibration is still justifiable and frequently applied. The hydrologist performing the manual calibration can, with expert knowledge, judge the hydrographs simultaneously concerning details but also in a holistic view. This integrated eye-ball verification procedure can be difficult to formulate in objective criteria, even when using a multi-criteria approach. Comparing the results of automatic and manual calibration is not straightforward. Automatic calibration often solely involves objective criteria such as the Nash-Sutcliffe efficiency (NSE) or the Kling-Gupta efficiency (KGE) as a benchmark during the calibration. Consequently, a comparison based on such measures is intrinsically biased towards automatic calibration. Additionally, objective criteria do not cover all aspects of a hydrograph, leaving questions concerning the quality of a simulation open. This contribution therefore seeks to examine the quality of manually and automatically calibrated hydrographs by interactively involving expert knowledge in the evaluation. Simulations have been performed for the Mur catchment in Austria with the rainfall-runoff model COSERO using two parameter sets, one from a manual and one from an automatic calibration. A subset of the resulting hydrographs for observation and simulation, representing the typical flow conditions and events, is evaluated in this study. In an interactive crowdsourcing approach, experts attending the session can vote for their preferred simulated hydrograph without having information on the calibration method that produced it. The result of the poll can therefore be seen as an additional quality criterion for the comparison of the two approaches and can help in the evaluation of the automatic calibration method.
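The two benchmark criteria mentioned above are short enough to state in code; the following Python functions implement the standard NSE and KGE definitions.

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model
    performs no better than the mean of the observations."""
    sim, obs = np.asarray(sim), np.asarray(obs)
    return 1 - np.sum((sim - obs)**2) / np.sum((obs - obs.mean())**2)

def kge(sim, obs):
    """Kling-Gupta efficiency: combines correlation, a variability ratio
    and a bias ratio into one score (1 is perfect)."""
    sim, obs = np.asarray(sim), np.asarray(obs)
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = sim.std() / obs.std()     # variability ratio
    beta = sim.mean() / obs.mean()    # bias ratio
    return 1 - np.sqrt((r - 1)**2 + (alpha - 1)**2 + (beta - 1)**2)
```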
Van Dyk, Jacob; Zubizarreta, Eduardo; Lievens, Yolande
2017-11-01
With increasing recognition of growing cancer incidence globally, efficient means of expanding radiotherapy capacity are imperative, and understanding the factors impacting human and financial needs is valuable. A time-driven activity-based costing analysis was performed, using a base case of a 2-machine department with defined cost inputs and operating parameters. Four income groups were analysed, ranging from low-income countries (LICs) to high-income countries (HICs). Scenario analyses included department size, operating hours, fractionation, treatment complexity, efficiency, and centralised versus decentralised care. The base case cost/course is US$5,368 in HICs and US$2,028 in LICs; the annual operating cost is US$4,595,000 and US$1,736,000, respectively. Economies of scale show the cost/course decreasing with increasing department size, mainly related to the equipment cost and most prominent up to 3 linacs. The cost in HICs is two or three times as high as in upper-middle-income countries or LICs, respectively. Decreasing operating hours below 8 h/day has a dramatic impact on the cost/course. IMRT increases the cost/course by 22%. Centralising preparatory activities has a moderate impact on the costs. The results indicate trends that are useful for optimising local and regional circumstances. This methodology can provide input into a uniform and accepted approach to evaluating the cost of radiotherapy. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
A New Multiconstraint Method for Determining the Optimal Cable Stresses in Cable-Stayed Bridges
Asgari, B.; Osman, S. A.; Adnan, A.
2014-01-01
Cable-stayed bridges are one of the most popular types of long-span bridges. The structural behaviour of cable-stayed bridges is sensitive to the load distribution between the girder, pylons, and cables. The determination of pretensioning cable stresses is critical in the cable-stayed bridge design procedure. By finding the optimum stresses in cables, the load and moment distribution of the bridge can be improved. In recent years, different research works have studied iterative and modern methods to find optimum stresses of cables. However, most of the proposed methods have limitations in optimising the structural performance of cable-stayed bridges. This paper presents a multiconstraint optimisation method to specify the optimum cable forces in cable-stayed bridges. The proposed optimisation method produces less bending moments and stresses in the bridge members and requires shorter simulation time than other proposed methods. The results of comparative study show that the proposed method is more successful in restricting the deck and pylon displacements and providing uniform deck moment distribution than unit load method (ULM). The final design of cable-stayed bridges can be optimised considerably through proposed multiconstraint optimisation method. PMID:25050400
Principles of Experimental Design for Big Data Analysis.
Drovandi, Christopher C; Holmes, Christopher; McGree, James M; Mengersen, Kerrie; Richardson, Sylvia; Ryan, Elizabeth G
2017-08-01
Big Datasets are endemic, but are often notoriously difficult to analyse because of their size, heterogeneity and quality. The purpose of this paper is to open a discourse on the potential for modern decision theoretic optimal experimental design methods, which by their very nature have traditionally been applied prospectively, to improve the analysis of Big Data through retrospective designed sampling in order to answer particular questions of interest. By appealing to a range of examples, it is suggested that this perspective on Big Data modelling and analysis has the potential for wide generality and advantageous inferential and computational properties. We highlight current hurdles and open research questions surrounding efficient computational optimisation in using retrospective designs, and in part this paper is a call to the optimisation and experimental design communities to work together in the field of Big Data analysis.
High power, 1060-nm diode laser with an asymmetric hetero-waveguide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, T; Zhang, Yu; Hao, E
2015-07-31
By introducing an asymmetric hetero-waveguide into the epitaxial structure of a diode laser, a 6.21-W output is achieved at a wavelength of 1060 nm. A different design in p- and n-confinement, based on optimisation of energy bands, is used to reduce voltage loss and meet the requirement of high power and high wall-plug efficiency. A 1060-nm diode laser with a single quantum well and asymmetric hetero-structure waveguide is fabricated and analysed. Measurement results show that the asymmetric hetero-structure waveguide can be efficiently used for reducing voltage loss and improving the confinement of injection carriers and wall-plug efficiency.
Kanwal, Jasmeen; Smith, Kenny; Culbertson, Jennifer; Kirby, Simon
2017-08-01
The linguist George Kingsley Zipf made a now classic observation about the relationship between a word's length and its frequency: the more frequent a word is, the shorter it tends to be. He claimed that this "Law of Abbreviation" is a universal structural property of language. The Law of Abbreviation has since been documented in a wide range of human languages, and extended to animal communication systems and even computer programming languages. Zipf hypothesised that this universal design feature arises as a result of individuals optimising form-meaning mappings under competing pressures to communicate accurately but also efficiently: his famous Principle of Least Effort. In this study, we use a miniature artificial language learning paradigm to provide direct experimental evidence for this explanatory hypothesis. We show that language users optimise form-meaning mappings only when pressures for accuracy and efficiency both operate during a communicative task, supporting Zipf's conjecture that the Principle of Least Effort can explain this universal feature of word length distributions. Copyright © 2017 Elsevier B.V. All rights reserved.
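The Law of Abbreviation is easy to test on any corpus: count word frequencies, measure word lengths, and check that the correlation is negative. A sketch follows; the corpus file name is a placeholder.

```python
import numpy as np
from collections import Counter
from scipy.stats import spearmanr

text = open("corpus.txt").read().lower().split()   # any tokenised corpus
counts = Counter(text)
freqs = np.array([counts[w] for w in counts])
lengths = np.array([len(w) for w in counts])

# The Law of Abbreviation predicts a negative correlation between a
# word's frequency and its length.
rho, p = spearmanr(freqs, lengths)
print(f"Spearman rho = {rho:.3f} (p = {p:.2g})")
```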
Advanced treatment planning using direct 4D optimisation for pencil-beam scanned particle therapy
NASA Astrophysics Data System (ADS)
Bernatowicz, Kinga; Zhang, Ye; Perrin, Rosalind; Weber, Damien C.; Lomax, Antony J.
2017-08-01
We report on the development of a new four-dimensional (4D) optimisation approach for scanned proton beams, which incorporates both irregular motion patterns and the delivery dynamics of the treatment machine into the plan optimiser. Furthermore, we assess the effectiveness of this technique in reducing dose to critical structures in proximity to moving targets, while maintaining effective target dose homogeneity and coverage. The proposed approach has been tested using both a simulated phantom and a clinical liver cancer case, and allows for realistic 4D calculations and optimisation using irregular breathing patterns extracted from e.g. 4DCT-MRI (4D computed tomography-magnetic resonance imaging). 4D dose distributions resulting from our 4D optimisation can achieve almost the same quality as static plans, independent of the studied geometry/anatomy or selected motion (regular and irregular). Additionally, the current implementation of the 4D optimisation approach requires less than 3 min to find the solution for a single field planned on the 4DCT of a liver cancer patient. Although 4D optimisation allows for realistic calculations using irregular breathing patterns, it is very sensitive to deviations from the planned motion. Based on a sensitivity analysis, target dose homogeneity comparable to static plans (D5-D95 <5%) was found only for differences in amplitude of up to 1 mm, for changes in respiratory phase of <200 ms and for changes in the breathing period of <20 ms relative to the motions used during optimisation. As such, methods to robustly deliver 4D optimised plans employing 4D intensity-modulated delivery are discussed.
Yamamoto, Keiichi; Sumi, Eriko; Yamazaki, Toru; Asai, Keita; Yamori, Masashi; Teramukai, Satoshi; Bessho, Kazuhisa; Yokode, Masayuki; Fukushima, Masanori
2012-01-01
Objective: The use of electronic medical record (EMR) data is necessary to improve clinical research efficiency. However, it is not easy to identify patients who meet research eligibility criteria and collect the necessary information from EMRs, because the data collection process must integrate various techniques, including the development of a data warehouse and the translation of eligibility criteria into computable criteria. This research aimed to demonstrate an electronic medical records retrieval system (ERS) and an example of a hospital-based cohort study that identified both patients and exposure with an ERS. We also evaluated the feasibility and usefulness of the method. Design: The system was developed and evaluated. Participants: In total, 800 000 cases of clinical information stored in EMRs at our hospital were used. Primary and secondary outcome measures: The feasibility and usefulness of the ERS, the method to convert text from eligibility criteria to computable criteria, and a confirmation method to increase research data accuracy. Results: To comprehensively and efficiently collect information from patients participating in clinical research, we developed an ERS. To create the ERS database, we designed a multidimensional data model optimised for patient identification. We also devised practical methods to translate narrative eligibility criteria into computable parameters. We applied the system to an actual hospital-based cohort study performed at our hospital and converted the test results into computable criteria. Based on this information, we identified eligible patients and extracted the data necessary for confirmation by our investigators and for statistical analyses with our ERS. Conclusions: We propose a pragmatic methodology to identify patients from EMRs who meet clinical research eligibility criteria. Our ERS allowed for the efficient collection of information on the eligibility of a given patient, reduced the labour required from the investigators and improved the reliability of the results. PMID:23117567
O'Brien, Rosaleen; Fitzpatrick, Bridie; Higgins, Maria; Guthrie, Bruce; Watt, Graham; Wyke, Sally
2016-01-01
Objectives: To develop and optimise a primary care-based complex intervention (CARE Plus) to enhance the quality of life of patients with multimorbidity in deprived areas. Methods: Six co-design discussion groups involving 32 participants were held separately with multimorbid patients from deprived areas, voluntary organisations, general practitioners and practice nurses working in deprived areas. This was followed by piloting in two practices and further optimisation based on interviews with 11 general practitioners, 2 practice nurses and 6 participating multimorbid patients. Results: Participants endorsed the need for longer consultations, relational continuity and a holistic approach. All felt that training and support of the health care staff was important. Most participants welcomed the idea of additional self-management support, though some practitioners were dubious about whether patients would use it. The pilot study led to changes including a revised care plan, the inclusion of mindfulness-based stress reduction techniques in the support of practitioners and patients, and the streamlining of the written self-management support material for patients. Discussion: We have co-designed and optimised an augmented primary care intervention involving a whole-system approach to enhance quality of life in multimorbid patients living in deprived areas. CARE Plus will next be tested in a phase 2 cluster randomised controlled trial. PMID:27068113
Piryaei, Marzieh; Abolghasemi, Mir Mahdi; Nazemiyeh, Hossein
2015-01-01
In this paper, for the first time, an inorganic-organic hybrid material based on ZnO nanoparticles anchored to a composite made from polythiophene and hexagonally ordered silica (ZnO/PT/SBA-15) is used in solid-phase microextraction (SPME) of medicinal plants. A homemade SPME apparatus was used for the extraction of volatile components of Ziziphora tenuior L. A simplex method was used for the optimisation of five different parameters affecting the efficiency of the extraction. The main constituents extracted by the ZnO/PT/SBA-15 and PDMS fibres and by hydrodistillation (HD), respectively, included pulegone (51.25%, 53.64% and 56.68%), limonene (6.73%, 6.58% and 8.3%), caryophyllene oxide (5.33%, 4.31% and 4.53%) and 1,8-cineole (4.21%, 3.31% and 3.18%). In comparison with the HD method, the proposed technique could monitor almost all the components of the sample equally well, more easily, in a shorter time and with a much smaller amount of sample.
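The simplex search over the five extraction parameters can be sketched with SciPy's Nelder-Mead implementation as below; the objective standing in for an actual extraction-and-measurement run, and the parameter names and values, are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def negative_yield(params):
    # Hypothetical stand-in for running one SPME extraction at the given
    # conditions (e.g. temperature, extraction time, stirring rate, salt
    # content, desorption time) and returning minus the total GC peak area.
    t, time_, stir, salt, desorb = params
    return -(100 - (t - 60)**2/50 - (time_ - 30)**2/40
             - (stir - 600)**2/5e4 - (salt - 15)**2/30 - (desorb - 3)**2)

x0 = np.array([50.0, 20.0, 400.0, 10.0, 2.0])   # initial simplex seed
result = minimize(negative_yield, x0, method="Nelder-Mead")
best_conditions = result.x
```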
Study on optimal design of 210kW traction IPMSM considering thermal demagnetization characteristics
NASA Astrophysics Data System (ADS)
Kim, Young Hyun; Lee, Seong Soo; Cheon, Byung Chul; Lee, Jung Ho
2018-04-01
This study analyses the permanent magnet (PM) in the rotor of an interior permanent magnet synchronous motor (IPMSM) for driving an electric railway vehicle (ERV), in the context of controllable shape, temperature, and external magnetic field. The positioning of the inserted magnets is a degree of freedom in the design of such machines. This paper describes a preliminary analysis using a parametric finite-element method performed with the aim of achieving an effective design. Next, experimental design methods, such as the central composite design, Box-Behnken and Taguchi methods, are explored to optimise the shape for high power density. The results are used to produce an optimal design for IPMSMs, with design errors minimised using the commercial program Maxwell 2D. Furthermore, the demagnetization process is analysed in computer simulation based on the magnetization and demagnetization theory for PM materials; the analysis yields the magnetization and demagnetization behaviour according to the input B-H curve. This paper presents the conditions for demagnetization by the external magnetic field in the driving and stopped states, and proposes a simulation method that can analyse demagnetization phenomena under each condition and design an IPMSM that maximises efficiency and torque characteristics. Finally, operational characteristics are analysed in terms of the operation patterns of railway vehicles, and control conditions are deduced to achieve maximum efficiency in all sections. These results were experimentally verified.
Integration of PGD-virtual charts into an engineering design process
NASA Astrophysics Data System (ADS)
Courard, Amaury; Néron, David; Ladevèze, Pierre; Ballere, Ludovic
2016-04-01
This article deals with the efficient construction of approximations of fields and quantities of interest used in the geometric optimisation of complex shapes encountered in engineering structures. The strategy developed herein is based on the construction of virtual charts which, once computed offline, allow the structure to be optimised at negligible online CPU cost. These virtual charts can be used as a powerful numerical decision support tool during the design of industrial structures. They are built using the proper generalized decomposition (PGD), which offers a very convenient framework for solving parametrised problems. In this paper, particular attention has been paid to the integration of the procedure into a genuine engineering design process. In particular, a dedicated methodology is proposed to interface the PGD approach with commercial software.
The Evaluation of Feasibility of Thermal Energy Storage System at Riga TPP-2
NASA Astrophysics Data System (ADS)
Ivanova, P.; Linkevics, O.; Cers, A.
2015-12-01
The installation of a thermal energy storage (TES) system enables optimisation of the energy source, security of energy supply, power plant operation and energy production flexibility. The aim of the present research is to evaluate the feasibility of installing a thermal energy storage system at Riga TPP-2. Six modes were investigated: four for non-heating periods and two for heating periods. Different research methods were used: statistical data processing, data analysis, analogy, forecasting, a financial method, and correlation and regression analysis. Finally, the best mode was selected: increasing cogeneration unit efficiency during the summer.
In situ click chemistry: a powerful means for lead discovery.
Sharpless, K Barry; Manetsch, Roman
2006-11-01
Combinatorial chemistry and parallel synthesis are important and regularly applied tools for lead identification and optimisation, although they are often accompanied by challenges related to the efficiency of library synthesis and the purity of the compound library. In the last decade, novel means of lead discovery approaches have been investigated where the biological target is actively involved in the synthesis of its own inhibitory compound. These fragment-based approaches, also termed target-guided synthesis (TGS), show great promise in lead discovery applications by combining the synthesis and screening of libraries of low molecular weight compounds in a single step. Of all the TGS methods, the kinetically controlled variant is the least well known, but it has the potential to emerge as a reliable lead discovery method. The kinetically controlled TGS approach, termed in situ click chemistry, is discussed in this article.
Alejo, L; Corredoira, E; Sánchez-Muñoz, F; Huerga, C; Aza, Z; Plaza-Núñez, R; Serrada, A; Bret-Zurita, M; Parrón, M; Prieto-Areyano, C; Garzón-Moll, G; Madero, R; Guibelalde, E
2018-04-09
Objective: The new 2013/59 EURATOM Directive (ED) demands dosimetric optimisation procedures without undue delay. The aim of this study was to optimise paediatric conventional radiology examinations applying the ED without compromising the clinical diagnosis. Automatic dose management software (ADMS) was used to analyse 2678 studies of children from birth to 5 years of age, obtaining local diagnostic reference levels (DRLs) in terms of entrance surface air kerma. Because the local DRL for chest examinations in infants exceeded the European Commission (EC) DRL, optimisation was performed by decreasing the kVp and applying automatic exposure control. To assess image quality, an analysis of high-contrast spatial resolution (HCSR), signal-to-noise ratio (SNR) and figure of merit (FOM) was performed, as well as a blind test based on the generalised estimating equations method. For newborns and chest examinations, the local DRL exceeded the EC DRL by 113%. After the optimisation, a reduction of 54% was obtained. No significant differences were found in the image quality blind test. A decrease in SNR (-37%) and HCSR (-68%), and an increase in FOM (42%), was observed. ADMS allows the fast calculation of local DRLs and the performance of optimisation procedures in babies without delay. However, physical and clinical analyses of image quality are still needed to ensure diagnostic integrity after the optimisation process. Advances in knowledge: ADMS is useful for detecting radiation protection problems and for performing optimisation procedures in paediatric conventional imaging without undue delay, as the ED requires.
The multiple roles of computational chemistry in fragment-based drug design
NASA Astrophysics Data System (ADS)
Law, Richard; Barker, Oliver; Barker, John J.; Hesterkamp, Thomas; Godemann, Robert; Andersen, Ole; Fryatt, Tara; Courtney, Steve; Hallett, Dave; Whittaker, Mark
2009-08-01
Fragment-based drug discovery (FBDD) represents a change in strategy from the screening of molecules with higher molecular weights and physical properties more akin to fully drug-like compounds, to the screening of smaller, less complex molecules. This is because it has been recognised that fragment hit molecules can be efficiently grown and optimised into leads, particularly once the binding mode to the target protein has first been determined by 3D structural elucidation, e.g. by NMR or X-ray crystallography. Several studies have shown that medicinal chemistry optimisation of an already drug-like hit or lead compound can result in a final compound with excessively high molecular weight and lipophilicity. The evolution of a lower molecular weight fragment hit therefore represents an attractive alternative approach to optimisation, as it allows better control of compound properties. Computational chemistry can play an important role both prior to a fragment screen, in producing a target-focussed fragment library, and post-screening, in the evolution of a drug-like molecule from a fragment hit, both with and without an available fragment-target co-complex structure. We will review many of the current developments in the area and illustrate with some recent examples from successful FBDD discovery projects that we have conducted.
NASA Astrophysics Data System (ADS)
Huang, Guoqin; Zhang, Meiqin; Huang, Hui; Guo, Hua; Xu, Xipeng
2018-04-01
Circular sawing is an important method for the processing of natural stone. The ability to predict sawing power is important for the optimisation, monitoring and control of the sawing process. In this paper, a predictive model of sawing power (PFD), based on the tangential force distribution at the sawing contact zone, was proposed, experimentally validated and modified. By accounting for the influence of sawing speed on the tangential force distribution, the modified PFD (MPFD) achieved high predictive accuracy across a wide range of sawing parameters, including sawing speed: the mean maximum absolute error rate was within 6.78%, and the maximum absolute error rate was within 11.7%. The practicability of predicting sawing power with the MPFD from few initial experimental samples was proved in case studies; provided the samples are measured accurately, only two samples are required for a fixed sawing speed. The feasibility of applying the MPFD to optimise sawing parameters and lower the energy consumption of the sawing system was also validated: in a case study, optimising the sawing parameters reduced energy use by 28%. The MPFD model can be used to predict sawing power, optimise sawing parameters and control energy consumption.
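The paper's MPFD itself is not reproduced in the abstract, but the underlying relation, sawing power as the integral of the tangential force distribution over the contact zone times the blade's rotational speed, can be sketched as below. The linear force distribution, blade radius and spindle speed are illustrative assumptions only, not values from the study.

```python
import numpy as np

def sawing_power(f_t, phi_in, phi_out, radius, rpm, n=200):
    """Approximate sawing power by integrating an assumed tangential
    force distribution f_t(phi) [N/rad] over the sawing contact zone."""
    omega = 2.0 * np.pi * rpm / 60.0           # blade angular speed [rad/s]
    phi = np.linspace(phi_in, phi_out, n)      # contact-zone angles [rad]
    dphi = (phi_out - phi_in) / (n - 1)
    torque = np.sum(f_t(phi)) * dphi * radius  # total torque [N m]
    return torque * omega                      # power [W]

# Toy distribution: tangential force rising linearly through the zone.
p = sawing_power(lambda phi: 120.0 * phi, 0.0, 0.35, radius=0.175, rpm=2400)
print(f"predicted sawing power ~ {p:.0f} W")
```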
H2/H∞ control for grid-feeding converter considering system uncertainty
NASA Astrophysics Data System (ADS)
Li, Zhongwen; Zang, Chuanzhi; Zeng, Peng; Yu, Haibin; Li, Shuhui; Fu, Xingang
2017-05-01
Three-phase grid-feeding converters (GFCs) are key components for integrating distributed generation and renewable power sources into the power utility. Conventionally, proportional-integral (PI) and proportional-resonant based control strategies are applied to control the output power or current of a GFC, but those strategies have poor transient performance and are not robust against uncertainties and volatility in the system. This paper proposes an H2/H∞-based control strategy that mitigates these restrictions. The uncertainty and disturbance are included in the formulation of the GFC state-space model, making it reflect practical system conditions more accurately. The design is posed as a convex optimisation problem and, instead of a guess-and-check method, particle swarm optimisation is used to search for an H2/H∞ optimal controller. Several case studies, implemented in both simulation and experiment, verify the superiority of the proposed control strategy over traditional PI control methods, especially under dynamic and variable system conditions.
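The guess-and-check alternative the authors avoid can be contrasted with a minimal particle swarm optimiser like the sketch below. The three-element parameter vector and the quadratic placeholder cost are assumptions standing in for whatever closed-loop H2/H∞ norm evaluation the real controller design would use.

```python
import numpy as np

def pso(cost, dim, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimisation for a scalar cost function."""
    lo, hi = bounds
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pcost = x.copy(), np.array([cost(p) for p in x])
    g = pbest[pcost.argmin()].copy()           # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        better = c < pcost                     # update personal bests
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[pcost.argmin()].copy()
    return g, pcost.min()

# Placeholder cost standing in for a mixed H2/H-infinity norm evaluation.
gains, j = pso(lambda k: np.sum((k - np.array([2.0, 0.5, 1.2])) ** 2),
               dim=3, bounds=(0.0, 5.0))
```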
Mokhtarzadeh, Hossein; Perraton, Luke; Fok, Laurence; Muñoz, Mario A; Clark, Ross; Pivonka, Peter; Bryant, Adam L
2014-09-22
The aim of this paper was to compare the effect of different optimisation methods and different knee joint degrees of freedom (DOF) on muscle force predictions during a single-legged hop. Nineteen subjects performed single-legged hopping manoeuvres, and subject-specific musculoskeletal models were developed to predict muscle forces during the movement. Muscle forces were predicted using static optimisation (SO) and computed muscle control (CMC) methods using either 1 or 3 DOF knee joint models. All sagittal and transverse plane joint angles calculated using inverse kinematics or CMC in a 1 DOF or 3 DOF knee were well matched (RMS error < 3°). Biarticular muscles (hamstrings, rectus femoris and gastrocnemius) showed greater differences in muscle force profiles across the prediction approaches, with larger time delays in many of the comparisons. The muscle force magnitudes of vasti, gluteus maximus and gluteus medius were not greatly influenced by the choice of muscle force prediction method, with low normalised root mean squared errors (<48%) observed in most comparisons. We conclude that SO and CMC can be used to predict lower-limb muscle co-contraction during hopping movements. However, care must be taken in interpreting the magnitude of force predicted in the biarticular muscles and the soleus, especially when using a 1 DOF knee. Despite this limitation, given that SO is a more robust and computationally efficient method for predicting muscle forces than CMC, we suggest that SO can be used in conjunction with musculoskeletal models that have a 1 or 3 DOF knee joint to study the relative differences and the role of muscles during hopping activities in future studies.
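As a rough illustration of what static optimisation does at each time frame, minimising a muscle effort criterion subject to the joint moment from inverse dynamics, consider the single-joint sketch below; the moment arms, maximal isometric forces and required moment are invented numbers, not values from the study.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical single-joint frame: moment arms r [m], max isometric forces [N].
r = np.array([0.04, 0.05, 0.03])
f_max = np.array([3000.0, 1500.0, 2500.0])
tau_required = 120.0  # joint moment from inverse dynamics [N m]

res = minimize(
    lambda a: np.sum(a ** 2),                   # classic SO effort criterion
    x0=np.full(3, 0.2),
    bounds=[(0.0, 1.0)] * 3,                    # activations bounded in [0, 1]
    constraints={"type": "eq",                  # moment equilibrium constraint
                 "fun": lambda a: r @ (a * f_max) - tau_required},
    method="SLSQP",
)
muscle_forces = res.x * f_max                   # predicted muscle forces [N]
```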
Yin, Wenjing; Xu, Zhengliang; Sheng, Jiagen; Xie, Xuetao; Zhang, Changqing
2017-09-01
Erythrocyte sedimentation rate (ESR), which reflects the sedimentation rate of platelets, leukocytes and erythrocytes in response to centrifugal force, may influence the cellular composition of platelet-rich plasma (PRP) obtained via centrifugation methods; however, no relevant studies have substantiated this. In the present study, blood was collected from 40 healthy volunteers and used to prepare PRP with two plasma-based preparation systems [YinPRP and Plasma Rich in Growth Factor (PRGF) systems] and two buffy-coat-based systems (RegenPRP and WEGOPRP systems) in a single-donor model. The volumes of PRP and platelet-poor plasma (PPP) removed in the preparation process were recorded. Analyses of ESR, haematocrit, C-reactive protein, coagulation, serum glucose and serum lipids in the whole blood used for PRP preparation were performed to evaluate ESR levels and the factors known to influence them. Whole blood analysis was performed to evaluate the cellular composition of the PRP. The results demonstrated marked positive correlations between the ESR of the whole blood used for PRP preparation and the PPP removal efficiencies, platelet concentrations, platelet capture efficiencies and platelet enrichment factors of PRP obtained from the plasma-based systems, as well as the PRP yield efficiency of RegenPRP and the PPP removal efficiency of WEGOPRP. Furthermore, there were marked negative correlations between ESR and the concentrations and enrichment factors of platelets, leukocytes and erythrocytes in RegenPRP. The fibrinogen concentration of the whole blood, which had a marked positive correlation with ESR, also influenced the cellular composition of PRP. These findings may increase the understanding of PRP preparation and provide substantial evidence for the individualised optimisation of PRP preparation systems used in clinical practice.
Gorjanc, Gregor; Hickey, John M
2018-05-02
AlphaMate is a flexible program that optimises selection, maintenance of genetic diversity, and mate allocation in breeding programs. It can be used in animal populations and in cross- and self-pollinating plant populations, whether subject to selective breeding or conservation management. The problem is formulated as a multi-objective optimisation of a valid mating plan that is solved with an evolutionary algorithm. A valid mating plan is defined by a combination of mating constraints (the number of matings, the maximal number of parents, the minimal/equal/maximal number of contributions per parent, or allowance for selfing) that are gender specific or generic. The optimisation can maximize genetic gain, minimize group coancestry, minimize inbreeding of individual matings, or maximize genetic gain for a given increase in group coancestry or inbreeding. Users provide a list of candidate individuals with associated gender and selection criteria information (if applicable) and a coancestry matrix. Selection criteria and the coancestry matrix can be based on pedigree or genome-wide markers. Additional individual- or mating-specific information can be included to enrich the optimisation objectives. An example of rapid recurrent genomic selection in wheat demonstrates how AlphaMate can double the efficiency of converting genetic diversity into genetic gain compared to truncation selection. Another example demonstrates the use of genome editing to expand the gain-diversity frontier. Executable versions of AlphaMate for Windows, Mac, and Linux platforms are available at http://www.AlphaGenes.roslin.ed.ac.uk/AlphaMate. gregor.gorjanc@roslin.ed.ac.uk.
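A minimal sketch of the trade-off AlphaMate navigates: score a vector of parental contributions by expected genetic gain penalised by group coancestry. The penalty form, the toy kinship matrix and the lam weight are illustrative assumptions; the actual program optimises a full mating plan under the constraints listed above with an evolutionary algorithm.

```python
import numpy as np

def objective(x, ebv, A, lam):
    """Score a vector of parental contributions x (summing to 1):
    expected gain x'g minus a penalty on group coancestry x'Ax/2."""
    gain = x @ ebv
    coancestry = 0.5 * x @ A @ x
    return gain - lam * coancestry

rng = np.random.default_rng(1)
n = 20
ebv = rng.normal(size=n)                    # selection criteria (e.g. EBVs)
M = rng.normal(size=(n, n))
A = 0.1 * (M @ M.T) / n + 0.5 * np.eye(n)   # toy positive-definite kinship
x = np.full(n, 1.0 / n)                     # equal-contribution baseline
print(objective(x, ebv, A, lam=2.0))
```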
A Method for Decentralised Optimisation in Networks
NASA Astrophysics Data System (ADS)
Saramäki, Jari
2005-06-01
We outline a method for distributed Monte Carlo optimisation of computational problems in networks of agents, such as peer-to-peer networks of computers. The optimisation and messaging procedures are inspired by gossip protocols and epidemic data dissemination, and are decentralised, i.e. no central overseer is required. In the outlined method, each agent follows simple local rules and searches for better solutions to the optimisation problem by Monte Carlo trials, as well as by querying other agents in its local neighbourhood. With proper network topology, good solutions spread rapidly through the network for further improvement. Furthermore, the system retains its functionality even in realistic settings where agents are randomly switched on and off.
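A minimal sketch of the scheme under stated assumptions: each agent keeps a candidate solution, improves it with Monte Carlo moves, and copies a neighbour's solution whenever gossip reveals a better one. The ring topology and one-dimensional cost are toy choices.

```python
import random

def gossip_optimise(cost, neighbours, init, mutate, steps=1000):
    """Each agent holds a candidate solution, improves it by Monte Carlo
    trials, and adopts a neighbour's solution when it scores better."""
    state = dict(init)
    for _ in range(steps):
        a = random.choice(list(state))          # wake a random agent
        trial = mutate(state[a])                # local Monte Carlo move
        if cost(trial) < cost(state[a]):
            state[a] = trial
        b = random.choice(neighbours[a])        # gossip with a neighbour
        if cost(state[b]) < cost(state[a]):
            state[a] = state[b]
    return min(state.values(), key=cost)

# Toy 1-D problem on a ring of 10 agents.
ring = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
best = gossip_optimise(lambda x: (x - 3.7) ** 2, ring,
                       {i: float(i) for i in range(10)},
                       mutate=lambda x: x + random.uniform(-0.5, 0.5))
```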
Efficient embedding of complex networks to hyperbolic space via their Laplacian
Alanis-Lobato, Gregorio; Mier, Pablo; Andrade-Navarro, Miguel A.
2016-01-01
The different factors involved in the growth process of complex networks imprint valuable information in their observable topologies. How to exploit this information to accurately predict structural network changes is the subject of active research. A recent model of network growth posits that the emergence of properties common to most complex systems results from certain trade-offs between node birth time and similarity. This model has a geometric interpretation in hyperbolic space, where distances between nodes abstract this optimisation process. Current methods for network hyperbolic embedding search for node coordinates that maximise the likelihood that the network was produced by the aforementioned model. Here, a different strategy is followed in the form of the Laplacian-based Network Embedding, a simple yet accurate, efficient and data-driven manifold learning approach, which allows for the quick geometric analysis of big networks. Comparisons against existing embedding and prediction techniques highlight its applicability to network evolution and link prediction. PMID:27445157
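A bare-bones Laplacian-eigenmap sketch of the core idea follows; the published method additionally converts such coordinates into hyperbolic ones (for example, deriving radial coordinates from node degrees), which is omitted here.

```python
import numpy as np

def laplacian_embedding(adj, dim=2):
    """Embed a graph using the leading non-trivial eigenvectors of its
    symmetric normalised Laplacian (a minimal Laplacian-eigenmap sketch)."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    lap = np.eye(len(adj)) - d_inv_sqrt @ adj @ d_inv_sqrt
    vals, vecs = np.linalg.eigh(lap)            # eigenvalues in ascending order
    return vecs[:, 1:dim + 1]                   # skip the trivial eigenvector

# Toy graph: a 5-node path.
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0
coords = laplacian_embedding(A)
angles = np.arctan2(coords[:, 1], coords[:, 0])  # angular coordinates
```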
Two-machine flow shop scheduling integrated with preventive maintenance planning
NASA Astrophysics Data System (ADS)
Wang, Shijin; Liu, Ming
2016-02-01
This paper investigates an integrated optimisation problem of production scheduling and preventive maintenance (PM) in a two-machine flow shop, with the time to failure of each machine subject to a Weibull probability distribution. The objective is to find the optimal job sequence and the optimal PM decisions before each job such that the expected makespan is minimised. To investigate the value of the integrated scheduling solution, computational experiments on small-scale problems with different configurations are conducted with a total enumeration method, and the results are compared with those of scheduling without maintenance but with machine degradation, and of individual job scheduling combined with independent PM planning. For large-scale problems, four genetic algorithm (GA) based heuristics are proposed. The numerical results for several large problem sizes and different configurations indicate the potential benefits of the integrated scheduling solution, and also show that the proposed GA-based heuristics are efficient for the integrated problem.
Illias, Hazlee Azil; Zhao Liang, Wee
2018-01-01
Early detection of power transformer faults is important because it can reduce the maintenance cost of the transformer and help ensure continuous electricity supply in power systems. The Dissolved Gas Analysis (DGA) technique is commonly used to identify the fault type of oil-filled power transformers, and combining artificial intelligence methods with optimisation methods has shown convincing results. In this work, a hybrid support vector machine (SVM) with a modified evolutionary particle swarm optimisation (EPSO) algorithm was proposed to determine the transformer fault type. The superiority of the modified PSO technique with SVM was evaluated by comparing the results with the actual fault diagnosis, unoptimised SVM and previously reported works. Data reduction via stepwise regression was also applied prior to SVM training to reduce the training time. The proposed hybrid SVM-Modified EPSO (MEPSO)-Time Varying Acceleration Coefficient (TVAC) technique achieved the highest percentage of correctly identified transformer faults among the compared PSO algorithms. Thus, the proposed technique is a potential solution for identifying the transformer fault type from DGA data on site. PMID:29370230
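A hedged sketch of such a diagnosis pipeline: data reduction followed by an SVM classifier. Univariate selection stands in for the paper's stepwise regression, the synthetic data stands in for DGA records, and the fixed C and gamma are the hyperparameters that the modified EPSO would tune.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Toy stand-in for DGA records: 300 samples, 10 gas-derived features,
# 3 fault classes.
X, y = make_classification(n_samples=300, n_features=10, n_informative=5,
                           n_classes=3, random_state=0)

# Data reduction (univariate selection as a simple stand-in for stepwise
# regression) followed by an SVM whose C and gamma a PSO variant would tune.
clf = make_pipeline(SelectKBest(f_classif, k=5), SVC(C=10.0, gamma=0.1))
print(cross_val_score(clf, X, y, cv=5).mean())
```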
Assessing the lipophilicity of fragments and early hits
NASA Astrophysics Data System (ADS)
Mortenson, Paul N.; Murray, Christopher W.
2011-07-01
A key challenge in many drug discovery programs is to accurately assess the potential value of screening hits. This is particularly true in fragment-based drug design (FBDD), where the hits often bind relatively weakly but are correspondingly small. Ligand efficiency (LE) considers both the potency and the size of the molecule, and enables us to estimate whether or not an initial hit is likely to be optimisable into a potent, drug-like lead. While size is a key property that needs to be controlled in a small molecule drug, there are a number of additional properties that should also be considered. Lipophilicity is amongst the most important of these, and here we present a new efficiency index (LLEAT) that combines lipophilicity, size and potency. The index is intuitively defined, and has been designed to have the same target value and dynamic range as LE, making it easily interpretable by medicinal chemists. Monitoring both LE and LLEAT should help both in the selection of more promising fragment hits, and in controlling molecular weight and lipophilicity during optimisation.
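In commonly quoted form (the constants below are assumptions drawn from the wider FBDD literature, not from the abstract), LE normalises potency by heavy atom count and LLEAT additionally charges for lipophilicity, with an offset chosen so that, like LE, values near 0.3 flag promising fragments:

```python
def ligand_efficiency(pic50, heavy_atoms):
    """LE ~ 1.37 * pIC50 / HA (kcal/mol per heavy atom at ~300 K)."""
    return 1.37 * pic50 / heavy_atoms

def lle_at(pic50, clogp, heavy_atoms):
    """LLEAT as commonly quoted: an offset plus a size-normalised
    lipophilic ligand efficiency, scaled to share LE's target value."""
    return 0.111 + 1.37 * (pic50 - clogp) / heavy_atoms

# A 1 mM fragment hit (pIC50 = 3.0) with 12 heavy atoms and cLogP 1.5:
print(ligand_efficiency(3.0, 12), lle_at(3.0, 1.5, 12))
```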
Advanced data management for optimising the operation of a full-scale WWTP.
Beltrán, Sergio; Maiza, Mikel; de la Sota, Alejandro; Villanueva, José María; Ayesa, Eduardo
2012-01-01
The lack of appropriate data management tools is presently a limiting factor for a broader implementation and a more efficient use of sensors and analysers, monitoring systems and process controllers in wastewater treatment plants (WWTPs). This paper presents a technical solution for advanced data management of a full-scale WWTP. The solution is based on an efficient and intelligent use of the plant data by a standard centralisation of the heterogeneous data acquired from different sources, effective data processing to extract adequate information, and a straightforward connection to other emerging tools focused on the operational optimisation of the plant such as advanced monitoring and control or dynamic simulators. A pilot study of the advanced data manager tool was designed and implemented in the Galindo-Bilbao WWTP. The results of the pilot study showed its potential for agile and intelligent plant data management by generating new enriched information combining data from different plant sources, facilitating the connection of operational support systems, and developing automatic plots and trends of simulated results and actual data for plant performance and diagnosis.
Infrastructure optimisation via MBR retrofit: a design guide.
Bagg, W K
2009-01-01
Wastewater management is continually evolving with the development and implementation of new, more efficient technologies. One of these is the Membrane Bioreactor (MBR). Although a relatively new technology in Australia, MBR wastewater treatment has been widely used elsewhere for over 20 years, with thousands of MBRs now in operation worldwide. Over the past 5 years, MBR technology has been enthusiastically embraced in Australia as a potential treatment upgrade option, and via retrofit typically offers two major benefits: (1) more capacity using mostly existing facilities, and (2) very high quality treated effluent. However, infrastructure optimisation via MBR retrofit is not a simple or low-cost solution and there are many factors which should be carefully evaluated before deciding on this method of plant upgrade. The paper reviews a range of design parameters which should be carefully evaluated when considering an MBR retrofit solution. Several actual and conceptual case studies are considered to demonstrate both advantages and disadvantages. Whilst optimising existing facilities and production of high quality water for reuse are powerful drivers, it is suggested that MBRs are perhaps not always the most sustainable Whole-of-Life solution for a wastewater treatment plant upgrade, especially by way of a retrofit.
NASA Astrophysics Data System (ADS)
Hadade, Ioan; di Mare, Luca
2016-08-01
Modern multicore and manycore processors exhibit multiple levels of parallelism through a wide range of architectural features, such as SIMD for data-parallel execution or threads for core parallelism. The exploitation of multi-level parallelism is therefore crucial for achieving superior performance on current and future processors. This paper presents the performance tuning of a multiblock CFD solver on Intel SandyBridge and Haswell multicore CPUs and the Intel Xeon Phi Knights Corner coprocessor. Code optimisations have been applied to two computational kernels exhibiting different computational patterns: the update of flow variables and the evaluation of the Roe numerical fluxes. We discuss at length the code transformations required for achieving efficient SIMD computations for both kernels across the selected devices, including SIMD shuffles and transpositions for flux stencil computations and global memory transformations. Core parallelism is expressed through threading based on a number of domain decomposition techniques, together with optimisations for alleviating NUMA effects found in multi-socket compute nodes. Results are correlated with the Roofline performance model in order to assess their efficiency for each distinct architecture. We report significant speedups for single-thread execution across both kernels: 2-5X on the multicore CPUs and 14-23X on the Xeon Phi coprocessor. Computations at full node and chip concurrency deliver a factor of three speedup on the multicore processors and up to 24X on the Xeon Phi manycore coprocessor.
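The Roofline correlation mentioned above reduces to a single comparison: attainable performance is capped either by peak compute or by memory traffic. A sketch with hypothetical machine numbers:

```python
def roofline(peak_gflops, bw_gbs, arithmetic_intensity):
    """Attainable performance under the Roofline model:
    min(peak compute, memory bandwidth * arithmetic intensity)."""
    return min(peak_gflops, bw_gbs * arithmetic_intensity)

# Hypothetical numbers for a flux kernel: 0.5 flop/byte on a node with
# 500 GFLOP/s peak compute and 60 GB/s sustained memory bandwidth.
print(roofline(500.0, 60.0, 0.5), "GFLOP/s attainable")
```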
Haering, Diane; Huchez, Aurore; Barbier, Franck; Holvoët, Patrice; Begon, Mickaël
2017-01-01
Introduction: Teaching acrobatic skills with a minimal amount of repetition is a major challenge for coaches. Biomechanical, statistical or computer simulation tools can help them identify the most determinant factors of performance. Release parameters, change in moment of inertia and segmental momentum transfers have been identified as predictors of success in acrobatics. The purpose of the present study was to evaluate the relative contribution of these parameters to performance through expertise- or optimisation-based improvements. The counter movement forward in flight (CMFIF) was chosen for its intrinsic dichotomy between the accessibility of its attempt and the complexity of its mastery. Methods: Three repetitions of the CMFIF performed by eight novice and eight advanced female gymnasts were recorded using a motion capture system. Optimal aerial techniques that maximise rotation potential at regrasp were also computed. A 14-segment multibody model defined through the Rigid Body Dynamics Library was used to compute recorded and optimal kinematics and biomechanical parameters. A stepwise multiple linear regression was used to determine the relative contribution of these parameters in novice recorded, novice optimised, advanced recorded and advanced optimised trials. Finally, fixed effects of expertise and optimisation were tested through a mixed-effects analysis. Results and discussion: Variation in release state contributed to performance only in novice recorded trials. The contribution of moment of inertia to performance increased from novice recorded, to novice optimised, advanced recorded, and advanced optimised trials. The contribution of momentum transfer to the trunk during flight prevailed in all recorded trials. Although optimisation decreased the transfer contribution, momentum transfer to the arms appeared. Conclusion: The findings suggest that novices should be coached on both contact and aerial technique, whereas mainly improved aerial technique helped advanced gymnasts increase their performance. For both, reduction of the moment of inertia should be a focus. The method proposed in this article could be generalised to any aerial skill learning investigation. PMID:28422954
Wang, Xiaojun; Wu, Long; Cao, Jiaqi; Hong, Xincheng; Ye, Rui; Chen, Weiji; Yuan, Ting
2016-07-01
A novel, simple and rapid method based on magnetic effervescent tablet-assisted ionic liquid dispersive liquid-liquid microextraction (MEA-IL-DLLME) followed by graphite furnace atomic absorption spectrometry (GFAAS) determination was established for the speciation of selenium in various food and beverage samples. In the procedure, a special magnetic effervescent tablet containing CO2 sources (sodium carbonate and sodium dihydrogenphosphate), ionic liquids and Fe3O4 magnetic nanoparticles (MNPs) was used to combine extractant dispersion and magnetic recovery into a single step. The parameters influencing the microextraction efficiency, such as pH of the sample solution, volume of ionic liquid, amount of MNPs, concentration of the chelating agent, salt effect and matrix effect, were investigated and optimised. Under the optimised conditions, the limit of detection (LOD) for Se(IV) was 0.021 μg l⁻¹ and the linear dynamic range was 0.05-5.0 μg l⁻¹. The relative standard deviation for seven replicate measurements of 1.0 μg l⁻¹ of Se(IV) was 2.9%. The accuracy of the developed method was evaluated by analysis of standard reference materials (GBW10016 tea, GBW10017 milk powder, GBW10043 Liaoning rice, GBW10046 Henan wheat, GBW10048 celery). The proposed method was successfully applied to food and beverage samples including black tea, milk powder, mushroom, soybean, bamboo shoots, energy drink, bottled water, carbonated drink and mineral water for the speciation of Se(IV) and Se(VI), with satisfactory relative recoveries (92.0-108.1%).
LCA and emergy accounting of aquaculture systems: towards ecological intensification.
Wilfart, Aurélie; Prudhomme, Jehane; Blancheton, Jean-Paul; Aubin, Joël
2013-05-30
An integrated approach is required to optimise fish farming systems by maximising output while minimising negative environmental impacts. We developed a holistic approach to assess environmental performance by combining two methods based on energetic and physical flow analysis. Life Cycle Assessment (LCA) is a normalised method that estimates resource use and potential impacts throughout a product's life cycle. Emergy Accounting (EA) quantifies the amount of energy directly or indirectly required to provide a product or a service. The combination of these two methods was used to evaluate the environmental impacts of three contrasting fish-farming systems: a farm producing salmon in a recirculating system (RSF), a semi-extensive polyculture pond (PF1) and an extensive polyculture pond (PF2). The RSF system, with a low feed-conversion ratio (FCR = 0.95), had lower environmental impacts per tonne of live fish produced than the two pond farms when the effects on climate change, acidification, total cumulative energy demand, land competition and water dependence were considered. However, the RSF was clearly disconnected from the surrounding environment and depended highly on external resources (e.g. nutrients, energy). Ponds adequately incorporated renewable natural resources but had higher environmental impacts due to incomplete use of external inputs. This study highlighted key factors for the successful ecological intensification of fish farming: minimise external inputs, lower the FCR, and increase the use of renewable resources from the surrounding environment. The combination of LCA and EA seems to be a practical approach to address the complexity of optimising biophysical efficiency in aquaculture systems.
Optimisation of active suspension control inputs for improved performance of active safety systems
NASA Astrophysics Data System (ADS)
Čorić, Mirko; Deur, Joško; Xu, Li; Tseng, H. Eric; Hrovat, Davor
2018-01-01
A collocation-type control variable optimisation method is used to investigate the extent to which the fully active suspension (FAS) can be applied to improve the vehicle electronic stability control (ESC) performance and reduce the braking distance. First, the optimisation approach is applied to the scenario of vehicle stabilisation during the sine-with-dwell manoeuvre. The results are used to provide insights into different FAS control mechanisms for vehicle performance improvements related to responsiveness and yaw rate error reduction indices. The FAS control performance is compared to performances of the standard ESC system, optimal active brake system and combined FAS and ESC configuration. Second, the optimisation approach is employed to the task of FAS-based braking distance reduction for straight-line vehicle motion. Here, the scenarios of uniform and longitudinally or laterally non-uniform tyre-road friction coefficient are considered. The influences of limited anti-lock braking system (ABS) actuator bandwidth and limit-cycle ABS behaviour are also analysed. The optimisation results indicate that the FAS can provide competitive stabilisation performance and improved agility when compared to the ESC system, and that it can reduce the braking distance by up to 5% for distinctively non-uniform friction conditions.
Le, Van So; Do, Zoe Phuc-Hien; Le, Minh Khoi; Le, Vicki; Le, Natalie Nha-Truc
2014-06-10
Methods for increasing the performance of the radionuclide generators used in nuclear medicine radiotherapy and SPECT/PET imaging were developed and detailed, with the 99Mo/99mTc and 68Ge/68Ga generators as cases. Optimisation methods relating daughter nuclide build-up to stand-by time and/or specific activity, using mean progress functions, were developed to increase generator performance. As a result of this optimisation, the separation of the daughter nuclide from its parent should be performed at a defined optimal time, to avoid deterioration in the specific activity of the daughter nuclide and wasted generator stand-by time, while keeping the daughter nuclide yield reasonably high. A new characteristic parameter of the formation-decay kinetics of the parent/daughter nuclide system was found and used effectively in the practice of generator production and utilisation. A method of "early elution schedule" was also developed for increasing the daughter nuclide production yield and specific radioactivity, thus saving generator cost and improving the quality of the daughter radionuclide solution. These newly developed optimisation methods, combined with a recently developed integrated elution-purification-concentration system for radionuclide generators, offer the most suitable way to operate the generator economically while achieving purpose-suitable quality and specific activity of the produced daughter radionuclides. All these features benefit the economic use of the generator, the quality of labelling/scans, and the cost of nuclear medicine procedures. In addition, a new method for setting up a quality control (QC) protocol for post-delivery testing of radionuclidic purity was developed, based on the relationship between the gamma-ray spectrometric detection limit, the required limit of impure radionuclide activity, and its measurement certainty, with respect to optimising the decay/measurement time and the product sample activity used for QC. This optimisation ensures certainty in the measurement of the specific impure radionuclide and avoids wasting valuable purified/concentrated daughter nuclide product, which is important for the spectrometric measurement of very low activities of impure radionuclide contamination in radioisotope products of much higher activity used in medical imaging and targeted radiotherapy.
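For the 99Mo/99mTc case, the build-up versus stand-by time trade-off follows from the Bateman equation, and the elution time that maximises daughter activity has a closed form. The sketch below uses the standard half-lives and, as noted in the comment, ignores the roughly 0.876 branching fraction of 99Mo decays feeding 99mTc.

```python
import numpy as np

T_MO, T_TC = 65.94, 6.01                 # half-lives [h] of 99Mo and 99mTc
lp, ld = np.log(2) / T_MO, np.log(2) / T_TC

def daughter_activity(t, a_parent0=1.0):
    """Bateman build-up of 99mTc from 99Mo, normalised to the initial
    parent activity (the ~0.876 branching fraction is ignored here)."""
    return a_parent0 * ld / (ld - lp) * (np.exp(-lp * t) - np.exp(-ld * t))

# Elution time that maximises the available 99mTc activity (~23 h).
t_max = np.log(ld / lp) / (ld - lp)
print(f"optimal elution ~ {t_max:.1f} h, yield {daughter_activity(t_max):.2f}")
```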
Formulation and optimisation of raft-forming chewable tablets containing H2 antagonist.
Prajapati, Shailesh T; Mehta, Anant P; Modhia, Ishan P; Patel, Chhagan N
2012-10-01
The purpose of this research work was to formulate raft-forming chewable tablets of an H2 antagonist (famotidine) using a raft-forming agent along with antacid- and gas-generating agents. Tablets were prepared by wet granulation and evaluated for raft strength, acid neutralisation capacity, weight variation, % drug content, thickness, hardness, friability and in vitro drug release. Various raft-forming agents were used in preliminary screening. A 2³ full-factorial design was used for optimisation. The amounts of sodium alginate, calcium carbonate and sodium bicarbonate were selected as independent variables; raft strength, acid neutralisation capacity and drug release at 30 min were selected as responses. Tablets containing sodium alginate had the highest raft strength compared with the other raft-forming agents. The acid neutralisation capacity and in vitro drug release of all factorial batches were found to be satisfactory. The F5 batch was selected as optimal based on maximum raft strength and good acid neutralisation capacity. A drug-excipient compatibility study showed no interaction between the drug and the excipients. A stability study of the optimised formulation showed that the tablets were stable under accelerated environmental conditions. It was concluded that raft-forming chewable tablets prepared using optimum amounts of sodium alginate, calcium carbonate and sodium bicarbonate could be an efficient dosage form in the treatment of gastro-oesophageal reflux disease.
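A 2³ full-factorial design simply enumerates all eight low/high combinations of the three factors; the level values below are hypothetical, chosen only to show the layout of the factorial batches.

```python
from itertools import product

# Hypothetical low/high levels (mg per tablet) for the three factors.
levels = {
    "sodium alginate": (200, 400),
    "calcium carbonate": (100, 200),
    "sodium bicarbonate": (50, 100),
}

# The eight runs of a 2^3 full-factorial design.
runs = [dict(zip(levels, combo)) for combo in product(*levels.values())]
for i, run in enumerate(runs, 1):
    print(f"F{i}", run)
```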
Optimisation of micro-perforated cylindrical silencers in linear and nonlinear regimes
NASA Astrophysics Data System (ADS)
Bravo, Teresa; Maury, Cédric; Pinhède, Cédric
2016-02-01
This paper describes analytical and experimental studies conducted to understand the potential of lightweight non-fibrous alternatives to dissipative mufflers for in-duct noise control problems, especially under high sound pressure levels (SPLs) and in the low frequency domain. The cost-efficient multi-modal propagation method has been extended to predict nonlinear effects in the dissipation and the transmission loss (TL) of micro-perforated cylindrical liners with sub-millimetric hole diameters. A validation experiment was performed in a standing wave tube to measure the power dissipated and transmitted by a nonlocally reacting liner under moderate and high SPLs. Although nonlinear effects significantly reduce the dissipation and TL around the liner's maximum damping frequency, these power quantities may be enhanced below the half-bandwidth resonance. An optimal value of the in-hole peak particle velocity has been found that maximises the TL of locally reacting liners at low frequencies. Optimisation studies based on dissipation or TL maximisation showed the sensitivity of the liner's constituent parameters to variations in the design target range, such as the centre frequency, the level of acoustic excitation and the nature of the surface impedance (locally or nonlocally reacting). An analysis is proposed of the deviation observed at low frequencies between the optimum impedance of the locally reacting liner under moderate SPLs and Cremer's optimum impedance.
Low-temperature high-Z gamma-detectors with very high energy resolution
NASA Astrophysics Data System (ADS)
Pobes, Carlos; Brofferio, Chiara; Bucci, Carlo; Cremonesi, Oliviero; Fiorini, Ettore; Giuliani, Andrea; Nucciotti, Angelo; Pavan, Maura; Pedretti, Marisa; Pessina, Gianluigi; Pirro, Stefano; Previtali, Ezio; Sisti, Monica; Vanzini, Marco; Zanotti, Luigi
2001-12-01
High-Z low-temperature calorimeters are being developed by an Italian collaboration (Milano-Como-Gran Sasso Underground Laboratories) in order to search for rare nuclear events and massive Dark Matter candidates. They exhibit an excellent energy resolution, close to that of Ge diodes, but a much higher efficiency. Different high-Z materials were initially employed. Many years of optimisation work on tellurium oxide (TeO2) led to impressive results: devices with total masses around 750 g present FWHM energy resolutions on gamma-ray peaks ranging from 1 keV (close to the 5 keV energy threshold) to 2.6 keV at 2615 keV (208Tl gamma line). A 3.2 keV FWHM energy resolution was obtained at 5.4 MeV (210Po alpha line), which is by far the best ever achieved with any alpha detector. These devices, operated at about 10 mK, consist of a TeO2 single crystal thermally coupled to a 50 mg Neutron Transmutation Doped (NTD) Ge crystal working as a temperature sensor. Special care was devoted to methods for response linearisation and temporal stabilisation. Devices based on the same principle and specifically optimised could find applications in several fields such as gamma-ray astrophysics, nuclear physics searches, environmental monitoring and radiation metrology.
ATLAS software configuration and build tool optimisation
NASA Astrophysics Data System (ADS)
Rybkin, Grigory; Atlas Collaboration
2014-06-01
The ATLAS software code base is over 6 million lines, organised in about 2000 packages. It makes use of some 100 external software packages, is developed by more than 400 developers, and is used by more than 2500 physicists from over 200 universities and laboratories on 6 continents. To meet the challenge of configuring and building this software, the Configuration Management Tool (CMT) is used. CMT expects each package to describe its build targets, build and environment setup parameters, and dependencies on other packages in a text file called requirements, and each project (group of packages) to describe its policies and dependencies on other projects in a text project file. Based on the effective set of configuration parameters read from the requirements files of dependent packages and the project files, CMT commands build the packages, generate the environment for their use, or query the packages. The main focus was on build-time performance, which was optimised through several approaches: reduction of the number of reads of requirements files, which are now read once per package by a CMT build command that generates cached requirements files for subsequent CMT build commands; introduction of more fine-grained build parallelism at the package task level, i.e., dependent applications and libraries are compiled in parallel; code optimisation of the CMT commands used for builds; and introduction of package-level build parallelism, i.e., parallelising the build of independent packages. By default, CMT launches NUMBER-OF-PROCESSORS build commands in parallel. The other focus was on optimisation of CMT commands in general, which made them approximately 2 times faster. CMT can generate a cached requirements file for the environment setup command, which is especially useful for deployment on distributed file systems like AFS or CERN VMFS. The use of parallelism, caching and code optimisation reduced software build time by several times, reduced environment setup time, increased the efficiency of multi-core computing resource utilisation, and considerably improved the software developer and user experience.
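Package-level build parallelism of the kind described reduces to scheduling a dependency DAG wave by wave. A minimal sketch using the Python standard library, with an invented five-package graph rather than the real ATLAS package tree:

```python
from concurrent.futures import ThreadPoolExecutor
from graphlib import TopologicalSorter

# Hypothetical dependency graph: package -> packages it depends on.
deps = {"Core": set(), "Event": {"Core"}, "Reco": {"Event"},
        "Sim": {"Event"}, "Analysis": {"Reco", "Sim"}}

def build(pkg):
    print(f"building {pkg}")      # stand-in for compiling one package

ts = TopologicalSorter(deps)
ts.prepare()
with ThreadPoolExecutor() as pool:
    while ts.is_active():
        ready = list(ts.get_ready())   # all packages whose deps are built
        list(pool.map(build, ready))   # build this wave in parallel
        ts.done(*ready)                # unlock the next wave
```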
Structural optimisation of cage induction motors using finite element analysis
NASA Astrophysics Data System (ADS)
Palko, S.
The current trend in motor design is towards highly efficient, low noise, low cost, and modular motors with a high power factor. High torque motors are useful in applications such as servo motors, lifts, cranes, and rolling mills. This report contains a detailed review of different optimization methods applicable to various design problems. Special attention is given to the performance of different methods when they are used with finite element analysis (FEA) as an objective function, and to the accuracy problems arising from the numerical simulations. An effective method for designing high starting torque, high efficiency motors is also presented. The method described in this work combines FEA with algorithms for the optimization of the slot geometry. The optimization algorithm modifies the position of the nodal points in the element mesh. The number of independent variables ranges from 14 to 140 in this work.
Higton, D M
2001-01-01
An improvement to the procedure for the rapid optimisation of mass spectrometry (PROMS), for the development of multiple reaction monitoring (MRM) methods for quantitative bioanalytical liquid chromatography/tandem mass spectrometry (LC/MS/MS), is presented. PROMS is an automated protocol that uses flow-injection analysis (FIA) and AppleScripts to create methods and acquire the data for optimisation. The protocol determines the optimum orifice potential and the MRM conditions for each compound, and finally creates the MRM methods needed for sample analysis. The sensitivities of the MRM methods created by PROMS approach those created manually. MRM method development using PROMS currently takes less than three minutes per compound, compared with at least fifteen minutes manually. To further enhance throughput, approaches to MRM optimisation using one injection per compound, two injections per pool of five compounds, and one injection per pool of five compounds have been investigated. No significant differences in the optimised instrumental parameters of the MRM methods were found between the original PROMS approach and these new methods, which are up to ten times faster. The time taken for an AppleScript to determine the optimum conditions and build the MRM methods is the same with all approaches.
Clayden, Jonathan D; Storkey, Amos J; Muñoz Maniega, Susana; Bastin, Mark E
2009-04-01
This work describes a reproducibility analysis of scalar water diffusion parameters measured within white matter tracts segmented using a probabilistic shape modelling method. In common with previously reported neighbourhood tractography (NT) work, the technique optimises seed point placement for fibre tracking by matching the tracts generated from a number of candidate points against a reference tract, which in the present study is derived from a white matter atlas. No direct constraints are applied to the fibre tracking results. An Expectation-Maximisation algorithm is used to fully automate the procedure and to make dramatically more efficient use of the data than earlier NT methods. Within-subject and between-subject variances for fractional anisotropy and mean diffusivity within the tracts are then separated using a random effects model. We find test-retest coefficients of variation (CVs) similar to those reported in another study using landmark-guided single seed points, and subject-to-subject CVs similar to a constraint-based multiple-ROI method. We conclude that our approach is at least as effective as other methods for tract segmentation using tractography, whilst also having some additional benefits, such as its provision of a goodness-of-match measure for each segmentation.
Selecting a climate model subset to optimise key ensemble properties
NASA Astrophysics Data System (ADS)
Herger, Nadja; Abramowitz, Gab; Knutti, Reto; Angélil, Oliver; Lehmann, Karsten; Sanderson, Benjamin M.
2018-02-01
End users studying impacts and risks caused by human-induced climate change are often presented with large multi-model ensembles of climate projections whose composition and size are arbitrarily determined. An efficient and versatile method that finds a subset which maintains certain key properties from the full ensemble is needed, but very little work has been done in this area. Therefore, users typically make their own somewhat subjective subset choices and commonly use the equally weighted model mean as a best estimate. However, different climate model simulations cannot necessarily be regarded as independent estimates due to the presence of duplicated code and shared development history. Here, we present an efficient and flexible tool that makes better use of the ensemble as a whole by finding a subset with improved mean performance compared to the multi-model mean while at the same time maintaining the spread and addressing the problem of model interdependence. Out-of-sample skill and reliability are demonstrated using model-as-truth experiments. This approach is illustrated with one set of optimisation criteria but we also highlight the flexibility of cost functions, depending on the focus of different users. The technique is useful for a range of applications that, for example, minimise present-day bias to obtain an accurate ensemble mean, reduce dependence in ensemble spread, maximise future spread, ensure good performance of individual models in an ensemble, reduce the ensemble size while maintaining important ensemble characteristics, or optimise several of these at the same time. As in any calibration exercise, the final ensemble is sensitive to the metric, observational product, and pre-processing steps used.
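As a toy illustration of such a cost function, the snippet below exhaustively scores five-member subsets of a twelve-member ensemble by ensemble-mean bias plus a penalty for deviating from the full ensemble's spread. The data, subset size and weighting are invented; the actual tool supports many more criteria and a proper optimiser.

```python
from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)
truth = 0.0
models = rng.normal(0.3, 1.0, size=12)   # toy per-model climatological biases

def cost(idx, alpha=0.5):
    sub = models[list(idx)]
    bias = abs(sub.mean() - truth)                   # ensemble-mean error
    spread_penalty = abs(sub.std() - models.std())   # keep full-ensemble spread
    return bias + alpha * spread_penalty

# Exhaustive search is feasible here: C(12, 5) = 792 subsets.
best = min(combinations(range(12), 5), key=cost)
print(best, cost(best))
```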
Protocol for a national blood transfusion data warehouse from donor to recipient
van Hoeven, Loan R; Hooftman, Babette H; Janssen, Mart P; de Bruijne, Martine C; de Vooght, Karen M K; Kemper, Peter; Koopman, Maria M W
2016-01-01
Introduction Blood transfusion has health-related, economical and safety implications. In order to optimise the transfusion chain, comprehensive research data are needed. The Dutch Transfusion Data warehouse (DTD) project aims to establish a data warehouse where data from donors and transfusion recipients are linked. This paper describes the design of the data warehouse, challenges and illustrative applications. Study design and methods Quantitative data on blood donors (eg, age, blood group, antibodies) and products (type of product, processing, storage time) are obtained from the national blood bank. These are linked to data on the transfusion recipients (eg, transfusions administered, patient diagnosis, surgical procedures, laboratory parameters), which are extracted from hospital electronic health records. Applications Expected scientific contributions are illustrated for 4 applications: determine risk factors, predict blood use, benchmark blood use and optimise process efficiency. For each application, examples of research questions are given and analyses planned. Conclusions The DTD project aims to build a national, continuously updated transfusion data warehouse. These data have a wide range of applications, on the donor/production side, recipient studies on blood usage and benchmarking and donor–recipient studies, which ultimately can contribute to the efficiency and safety of blood transfusion. PMID:27491665
Gordon, G T; McCann, B P
2015-01-01
This paper describes the basis of a stakeholder-based sustainable optimisation indicator (SOI) system to be developed for small-to-medium sized activated sludge (AS) wastewater treatment plants (WwTPs) in the Republic of Ireland (ROI). Key technical publications relating to best practice plant operation, performance audits and optimisation, and indicator and benchmarking systems for wastewater services are identified. Optimisation studies were developed at a number of Irish AS WwTPs and key findings are presented. A national AS WwTP manager/operator survey was carried out to verify the applied operational findings and identify the key operator stakeholder requirements for this proposed SOI system. It was found that most plants require more consistent operational data-based decision-making, monitoring and communication structures to facilitate optimised, sustainable and continuous performance improvement. The applied optimisation and stakeholder consultation phases form the basis of the proposed stakeholder-based SOI system. This system will allow for continuous monitoring and rating of plant performance, facilitate optimised operation and encourage the prioritisation of performance improvement through tracking key operational metrics. Plant optimisation has become a major focus due to the transfer of all ROI water services to a national water utility from individual local authorities and the implementation of the EU Water Framework Directive.
Haworth, Annette; Mears, Christopher; Betts, John M; Reynolds, Hayley M; Tack, Guido; Leo, Kevin; Williams, Scott; Ebert, Martin A
2016-01-07
Treatment plans for ten patients, initially treated with a conventional approach to low dose-rate brachytherapy (LDR, 145 Gy to entire prostate), were compared with plans for the same patients created with an inverse-optimisation planning process utilising a biologically-based objective. The 'biological optimisation' considered a non-uniform distribution of tumour cell density through the prostate based on known and expected locations of the tumour. Using dose planning-objectives derived from our previous biological-model validation study, the volume of the urethra receiving 125% of the conventional prescription (145 Gy) was reduced from a median value of 64% to less than 8% whilst maintaining high values of TCP. On average, the number of planned seeds was reduced from 85 to less than 75. The robustness of plans to random seed displacements needs to be carefully considered when using contemporary seed placement techniques. We conclude that an inverse planning approach to LDR treatments, based on a biological objective, has the potential to maintain high rates of tumour control whilst minimising dose to healthy tissue. In future, the radiobiological model will be informed using multi-parametric MRI to provide a personalised medicine approach.
Optimising operational amplifiers by evolutionary algorithms and gm/Id method
NASA Astrophysics Data System (ADS)
Tlelo-Cuautle, E.; Sanabria-Borbon, A. C.
2016-10-01
The evolutionary algorithm called the non-dominated sorting genetic algorithm (NSGA-II) is applied herein to the optimisation of operational transconductance amplifiers. NSGA-II is accelerated by applying the gm/Id method to estimate reduced search spaces associated with the widths (W) and lengths (L) of the metal-oxide-semiconductor field-effect transistors (MOSFETs), and to guarantee appropriate bias conditions. In addition, we introduce an integer encoding for the W/L sizes of the MOSFETs to avoid a post-processing step that rounds off their values to multiples of the integrated circuit fabrication technology grid. Finally, from the feasible solutions generated by NSGA-II, we introduce a second optimisation stage to guarantee that the final feasible W/L sizes support process, voltage and temperature (PVT) variations. The optimisation results lead us to conclude that the gm/Id method and integer encoding are quite useful for accelerating the convergence of the evolutionary algorithm NSGA-II, while the second optimisation stage guarantees robustness of the feasible solutions to PVT variations.
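The integer encoding idea can be sketched in a few lines: the genome holds integers and decoding multiplies by the technology grid, so every candidate W/L is fabrication-legal by construction. The 0.18 µm grid and the four-transistor genome are assumptions for illustration.

```python
import numpy as np

GRID = 0.18e-6   # assumed technology grid: all sizes are integer multiples

def decode(genome):
    """Map an integer genome to MOSFET (W, L) sizes in metres, so no
    post-processing round-off to the fabrication grid is needed."""
    return {f"M{i}": (w * GRID, l * GRID)
            for i, (w, l) in enumerate(zip(genome[::2], genome[1::2]), 1)}

rng = np.random.default_rng(0)
genome = rng.integers(1, 200, size=8)   # 4 transistors, W and L for each
print(decode(genome))
```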
Cheng, Yu-Huei
2014-12-01
Specific primers play an important role in polymerase chain reaction (PCR) experiments, so it is essential to find specific primers of outstanding quality. Unfortunately, many PCR constraints must be inspected simultaneously, which makes specific primer selection difficult and time-consuming. This paper introduces a novel computational intelligence-based method, Teaching-Learning-Based Optimisation (TLBO), to select specific and feasible primers. Runs with specified PCR product lengths of 150-300 bp and 500-800 bp and three melting temperature formulae (Wallace's formula, Bolton and McCarthy's formula and SantaLucia's formula) were performed. The authors calculate the optimal frequency to estimate the quality of primer selection, based on a total of 500 runs for 50 random nucleotide sequences of 'Homo species' retrieved from the National Center for Biotechnology Information. The method was then compared fairly with the genetic algorithm (GA) and memetic algorithm (MA) for primer selection from the literature. The results show that the method easily found suitable primers satisfying the set primer constraints and performed better than the GA and the MA. Furthermore, the method was compared with the widely used program Primer3 in terms of method type, primer presentation, parameter settings, speed and memory usage. In conclusion, it is an interesting primer selection method and a valuable tool for automated high-throughput analysis. In future, the primers should be validated carefully in the wet lab to increase the reliability of the method.
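A compressed sketch of TLBO for minimisation follows (teacher and learner moves are merged into a single accept/reject step, a simplification of the canonical two-phase algorithm); the sphere function stands in for the primer-quality objective built from PCR constraints.

```python
import numpy as np

def tlbo(cost, dim, bounds, n=20, iters=100, seed=0):
    """Minimal Teaching-Learning-Based Optimisation for minimisation."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n, dim))
    f = np.array([cost(x) for x in X])
    for _ in range(iters):
        teacher = X[f.argmin()]                   # best learner teaches
        for i in range(n):
            tf = rng.integers(1, 3)               # teaching factor, 1 or 2
            new = X[i] + rng.random(dim) * (teacher - tf * X.mean(axis=0))
            j = rng.integers(n)                   # learner-phase partner
            step = (X[j] - X[i]) if f[j] < f[i] else (X[i] - X[j])
            new = np.clip(new + rng.random(dim) * step, lo, hi)
            fn = cost(new)
            if fn < f[i]:                         # greedy accept/reject
                X[i], f[i] = new, fn
    return X[f.argmin()], f.min()

best, val = tlbo(lambda x: np.sum(x ** 2), dim=4, bounds=(-5.0, 5.0))
```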
Picard-Meyer, Evelyne; Peytavin de Garam, Carine; Schereffer, Jean Luc; Marchal, Clotilde; Robardet, Emmanuelle; Cliquet, Florence
2015-01-01
This study evaluates the performance of five two-step SYBR Green RT-qPCR kits and five one-step SYBR Green qRT-PCR kits using real-time PCR assays. Two real-time thermocyclers with different throughput capacities were used. The performance evaluation criteria analysed included the generation of the standard curve, reaction efficiency, analytical sensitivity, intra- and interassay repeatability, the costs and practicability of the kits, and thermocycling times. We found that the optimised one-step PCR assays had a higher detection sensitivity than the optimised two-step assays regardless of the machine used, while no difference was detected in reaction efficiency, R² values, or intra- and inter-assay reproducibility between the two methods. The limit of detection at the 95% confidence level varied from 15 to 981 copies/µL for the one-step kits and from 41 to 171 copies/µL for the two-step kits. Of the ten kits tested, the most efficient was the Quantitect SYBR Green qRT-PCR kit, with a limit of detection at the 95% confidence level of 20 and 22 copies/µL on the Rotor gene Q MDx and MX3005P thermocyclers, respectively. The study demonstrated the pivotal influence of the thermocycler on PCR performance for the detection of rabies RNA, as well as that of the master mixes. PMID:25785274
Mutual information-based LPI optimisation for radar network
NASA Astrophysics Data System (ADS)
Shi, Chenguang; Zhou, Jianjiang; Wang, Fei; Chen, Jun
2015-07-01
Radar networks can offer significant performance improvements for target detection and information extraction by employing spatial diversity. For a fixed number of radars, the achievable mutual information (MI) for estimating the target parameters may exceed a predefined threshold under full power transmission. In this paper, an effective low probability of intercept (LPI) optimisation algorithm is presented to improve the LPI performance of a radar network. Based on the radar network system model, we first adopt the Schleher intercept factor of the network as the optimisation metric for LPI performance. Then, a novel LPI optimisation algorithm is presented in which, for a predefined MI threshold, the Schleher intercept factor is minimised by optimising the transmission power allocation among the radars in the network, so that enhanced LPI performance is achieved. A genetic algorithm based on nonlinear programming (GA-NP) is employed to solve the resulting nonconvex and nonlinear optimisation problem. Simulations demonstrate that the proposed algorithm is valuable and effective for improving the LPI performance of radar networks.
NASA Astrophysics Data System (ADS)
Fu, Shihua; Li, Haitao; Zhao, Guodong
2018-05-01
This paper investigates the evolutionary dynamics and strategy optimisation of a class of networked evolutionary games whose strategy-updating rules incorporate a 'bankruptcy' mechanism, where a player goes bankrupt after a run of consecutive low profits from the game. First, using the semi-tensor product of matrices, the evolutionary dynamics of this class of games are expressed as a higher-order logical dynamic system and then converted into an algebraic form, on the basis of which the dynamics of the given games can be analysed. Second, the strategy optimisation problem is investigated, and free-type control sequences are designed to maximise the total payoff of the whole game. Finally, an illustrative example shows that the new results are effective.
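The algebraic machinery here rests on the left semi-tensor product (STP) of matrices, which generalises ordinary matrix multiplication to non-conformable dimensions. A minimal NumPy sketch of the standard STP definition (variable names are our own; requires Python 3.9+ for math.lcm):

```python
# Minimal sketch of the left semi-tensor product A |x| B.
import numpy as np
from math import lcm

def stp(A, B):
    """Left semi-tensor product for matrices of arbitrary dimensions."""
    n = A.shape[1]            # columns of A
    p = B.shape[0]            # rows of B
    t = lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

# For conformable matrices (n == p) the STP reduces to the usual product.
A = np.array([[1.0, 2.0]])        # 1x2
B = np.array([[1.0], [3.0]])      # 2x1
print(stp(A, B))                  # equals A @ B here
```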
NASA Astrophysics Data System (ADS)
Sánchez, D.; Muñoz de Escalona, J. M.; Monje, B.; Chacartegui, R.; Sánchez, T.
This article presents a novel proposal for complex hybrid systems comprising high-temperature fuel cells and thermal engines. In this case, the system is composed of a molten carbonate fuel cell with a cascaded hot air turbine and an Organic Rankine Cycle (ORC), a layout based on successive waste-heat recovery for additional power production. The work shows that it is possible to achieve 60% efficiency even if the fuel cell operates at atmospheric pressure. The first part of the analysis focuses on selecting the working fluid of the ORC. After a thermodynamic optimisation, toluene turns out to be the most efficient fluid in terms of cycle performance. However, the performance of the heat recovery vapour generator (HRVG) is found to be equally important, which makes R245fa the most interesting fluid: its balanced thermal and HRVG efficiencies yield the highest global bottoming-cycle efficiency. When this fluid is employed in the compound system, conservative operating conditions permit a 60% global system efficiency, thereby accomplishing the initial objective of the work. A simultaneous optimisation of the gas turbine (pressure ratio) and ORC (live vapour pressure) is then presented to check whether the previous results can be improved or whether the fluid of choice must be replaced. Ultimately, even though system performance improves for some fluids, it is concluded that (i) R245fa is the most efficient fluid and (ii) the operating conditions considered in the previous analysis remain valid. The work concludes with an assessment of safety-related aspects of using hydrocarbons in the system. Flammability is studied, showing that R245fa is the most interesting fluid in this regard as well, owing to its inert behaviour, as opposed to the other fluids under consideration, all of which are highly flammable.
Calibration of phoswich-based lung counting system using realistic chest phantom.
Manohari, M; Mathiyarasu, R; Rajagopal, V; Meenakshisundaram, V; Indira, R
2011-03-01
A phoswich detector, housed inside a low-background steel room and coupled with state-of-the-art pulse shape discrimination (PSD) electronics, was recently established at the Radiological Safety Division of IGCAR for in vivo monitoring of actinides. The various parameters of the PSD electronics were optimised to achieve efficient background reduction in the low-energy regions. The PSD with optimised parameters reduced the steel-room background from 9.5 to 0.28 cps in the 17 keV region and from 5.8 to 0.3 cps in the 60 keV region. The figure of merit for the timing spectrum of the system is 3.0. The true signal loss due to PSD was found to be less than 2%. The phoswich system was calibrated with the Lawrence Livermore National Laboratory realistic chest phantom loaded with a (241)Am-tagged lung set. Calibration factors for varying chest wall composition and chest wall thickness, expressed in terms of muscle-equivalent chest wall thickness, were established. The (241)Am activity in the JAERI phantom, received as part of an IAEA inter-comparison exercise, was estimated. This paper presents the optimisation of the PSD electronics and the salient results of the calibration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knight, Stephen P, E-mail: stephen.knight@health.qld.gov.au
The aim of this review was to develop a radiographic optimisation strategy to make use of digital radiography (DR) and needle phosphor computerised radiography (CR) detectors, in order to lower radiation dose and improve image quality for paediatrics. The review was grounded in evidence-based practice, one component of which was a review of the relevant literature. The resulting exposure chart was developed with two distinct groups of exposure optimisation strategies: body exposures (head, trunk, humerus, femur) and distal extremity exposures (elbow to finger, knee to toe). Exposure variables manipulated included kilovoltage peak (kVp), target detector exposure and milliampere-seconds (mAs), automatic exposure control (AEC), additional beam filtration, and use of an antiscatter grid. Mean dose area product (DAP) reductions of up to 83% for anterior–posterior (AP)/posterior–anterior (PA) abdomen projections were recorded post-optimisation, owing to the manipulation of multiple exposure variables. For body exposures, the target exposure index (EI) and detector exposure, and thus the required mAs, were typically 20% lower post-optimisation. Image quality for some distal extremity exposures was improved by lowering kVp and increasing mAs around a constant entrance skin dose. For sites performing paediatric imaging, it is highly important to purchase digital X-ray equipment with high detective quantum efficiency detectors and then optimise the exposure chart for use with these detectors. Multiple exposure variables may need to be manipulated to achieve optimal outcomes.
Analysis and optimisation of a mixed fluid cascade (MFC) process
NASA Astrophysics Data System (ADS)
Ding, He; Sun, Heng; Sun, Shoujun; Chen, Cheng
2017-04-01
A mixed fluid cascade (MFC) process comprising three refrigeration cycles has great capacity for large-scale LNG production, which consumes a great amount of energy; any performance enhancement of the liquefaction process will therefore significantly reduce energy consumption. The MFC process is simulated and analysed using the proprietary software Aspen HYSYS. The effects of feed gas pressure, LNG storage pressure, water-cooler outlet temperature, different pre-cooling regimes, and liquefaction and sub-cooling refrigerant compositions on MFC performance are investigated and presented. The excellent numerical capabilities and user-friendly interface of MATLAB™ are combined with the powerful thermo-physical property package of Aspen HYSYS, and a genetic algorithm is then invoked to optimise the MFC process globally. After optimisation, the unit power consumption can be reduced to 4.655 kW h/kmol or 4.366 kW h/kmol, assuming a compressor adiabatic efficiency of 80% or 85%, respectively. Additionally, to further improve the thermodynamic efficiency of the process, configuration optimisation is conducted for the MFC process and several configurations are established. By analysing heat transfer and thermodynamic performance, the configuration entailing a pre-cooling cycle with three pressure levels, liquefaction, and a sub-cooling cycle with one pressure level is identified as the most efficient and thus optimal: its unit power consumption is 4.205 kW h/kmol. The weak performance of the suggested liquefaction cycle configuration is attributed to the unbalanced distribution of cold energy across the liquefaction temperature range.
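The GA step treats the flowsheet as a black box: decision variables go in, unit power consumption comes out. A hedged sketch of that loop, with a fabricated smooth surrogate standing in for the HYSYS model and SciPy's differential evolution standing in for the MATLAB GA; the decision variables are illustrative:

```python
# Hedged sketch of the GA-style global search over process variables.
# simulate_mfc is a made-up surrogate, not the Aspen HYSYS flowsheet.
import numpy as np
from scipy.optimize import differential_evolution

def simulate_mfc(x):
    """Stand-in for the HYSYS model: returns unit power, kWh/kmol."""
    # x = [precool_T, liquefaction_p, subcool_p] in normalised units.
    target = np.array([0.3, 0.6, 0.5])
    return 4.2 + 2.5 * np.sum((x - target) ** 2)   # fabricated surrogate

bounds = [(0.0, 1.0)] * 3
res = differential_evolution(simulate_mfc, bounds, seed=1, tol=1e-8)
print(f"min unit power ~ {res.fun:.3f} kWh/kmol at x = {res.x}")
```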
Optimisation study of a vehicle bumper subsystem with fuzzy parameters
NASA Astrophysics Data System (ADS)
Farkas, L.; Moens, D.; Donders, S.; Vandepitte, D.
2012-10-01
This paper deals with the design and optimisation for crashworthiness of a vehicle bumper subsystem, a key scenario in vehicle component design. Automotive manufacturers and suppliers have to find optimal design solutions for such subsystems that comply with the conflicting requirements of regulatory bodies regarding functional performance (safety and repairability) and environmental impact (mass). For the bumper design challenge, an integrated methodology for multi-attribute design engineering of mechanical structures is set up. The integrated process captures the various tasks that are usually performed manually, thereby facilitating automated design iterations for optimisation. Subsequently, an optimisation process is applied that takes the effect of parametric uncertainties into account, such that the system-level failure possibility remains acceptable. This process, referred to as possibility-based design optimisation, integrates the fuzzy FE analysis applied for uncertainty treatment in crash simulations. It is the counterpart of reliability-based design optimisation, which is used in a probabilistic context with statistically defined parameters (variabilities).
Ghosh, Ranadhir; Yearwood, John; Ghosh, Moumita; Bagirov, Adil
2006-06-01
In this paper we investigate a hybrid model based on the Discrete Gradient method and an evolutionary strategy for determining the weights in a feed-forward artificial neural network, and we discuss different variants of such hybrid models. The Discrete Gradient method has the advantage of being able to jump over many local minima and find very deep local minima. However, earlier research has shown that a good starting point for the Discrete Gradient method can improve the quality of the solution point. Evolutionary algorithms are well suited to global optimisation problems, but they suffer from long training times, often making them unsuitable for real-world applications. For problems such as weight optimisation for ANNs in real-world applications, the dimensions are large and time complexity is critical; hence a hybrid model is a suitable option. We propose different fusion strategies that combine the evolutionary strategy with the Discrete Gradient method to obtain an optimal solution much more quickly. Three fusion strategies are discussed: a linear hybrid model, an iterative hybrid model and a restricted local search hybrid model. Comparative results on a range of standard datasets are provided for the different hybrid models.
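The linear hybrid idea is the easiest to sketch: a short evolutionary search supplies the starting point, and a local optimiser refines it. A toy version below, with an invented tiny network and data, and SciPy's BFGS standing in for the Discrete Gradient method (which has no SciPy implementation):

```python
# Sketch of the linear hybrid strategy: evolutionary start, local finish.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)          # toy target

def loss(w):
    """MSE of a 2-2-1 feed-forward net with tanh hidden units."""
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    h = np.tanh(X @ W1 + b1)
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))
    return np.mean((out - y) ** 2)

# Phase 1: (1+1) evolutionary strategy to find a good starting point.
best = rng.normal(size=9)
for _ in range(200):
    cand = best + 0.3 * rng.normal(size=9)
    if loss(cand) < loss(best):
        best = cand

# Phase 2: local refinement from the evolved starting point.
res = minimize(loss, best, method="BFGS")
print(loss(best), "->", res.fun)
```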
Craig, Hugh; Berretta, Regina; Moscato, Pablo
2016-01-01
In this study we propose a novel, unsupervised clustering methodology for analyzing large datasets. This new, efficient methodology converts the general clustering problem into a community detection problem in graphs by using the Jensen-Shannon distance, a dissimilarity measure originating in Information Theory. Moreover, we use graph-theoretic concepts for the generation and analysis of proximity graphs. Our methodology is based on a newly proposed memetic algorithm (iMA-Net) for discovering clusters of data elements by maximizing the modularity function in proximity graphs of literary works. To test the effectiveness of this general methodology, we apply it to a text corpus dataset containing the frequencies of approximately 55,114 unique words across all 168 plays written in the Shakespearean era (16th and 17th centuries), in order to analyze and detect clusters of similar plays. Experimental results and comparison with state-of-the-art clustering methods demonstrate the remarkable performance of our new method in identifying high-quality clusters that reflect commonalities in the literary style of the plays. PMID:27571416
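The dissimilarity measure named above is straightforward to compute between word-frequency profiles. A minimal sketch, with invented count vectors standing in for two plays:

```python
# Jensen-Shannon distance between two word-frequency distributions.
import numpy as np
from scipy.spatial.distance import jensenshannon

counts_a = np.array([12, 0, 5, 3, 30], dtype=float)   # word counts, play A
counts_b = np.array([10, 2, 4, 0, 25], dtype=float)   # word counts, play B

p = counts_a / counts_a.sum()   # normalise counts to probabilities
q = counts_b / counts_b.sum()

# jensenshannon returns the distance, i.e. the square root of the
# Jensen-Shannon divergence; base=2 bounds it in [0, 1].
print(jensenshannon(p, q, base=2))
```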
NASA Astrophysics Data System (ADS)
Bagheri, H.; Sadjadi, S. Y.; Sadeghian, S.
2013-09-01
One of the most significant tools for many engineering projects is three-dimensional modelling of the Earth, which has many applications in Geospatial Information Systems (GIS), e.g. the creation of Digital Terrain Models (DTM). The DTM has numerous applications in science, engineering, design and project administration. One of the most significant steps in the DTM technique is the interpolation of elevations to create a continuous surface; several interpolation methods exist, whose results vary with environmental conditions and input data. In this study, the usual interpolation methods, consisting of polynomials and the Inverse Distance Weighting (IDW) method, are optimised with Genetic Algorithms (GA). Artificial Intelligence (AI) techniques such as GA and Neural Networks (NN) are applied to the samples to optimise the interpolation methods and the production of Digital Elevation Models (DEM), with the aim of evaluating the accuracy of the interpolation methods. Interpolation over entire neighbouring regions can also be extended to larger regions by dividing them into smaller ones. The results obtained by applying GA and ANN individually are compared with the typical interpolation methods for the creation of elevations. They show that AI methods have high potential in the interpolation of elevations: using neural network algorithms for interpolation, and optimising the IDW method with GA, yielded highly precise elevation estimates.
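The IDW baseline tuned by the GA is compact enough to show in full. A minimal sketch with invented sample points; the power parameter p is the kind of quantity such a GA would typically optimise:

```python
# Minimal inverse distance weighting (IDW) elevation interpolator.
import numpy as np

def idw(xy_known, z_known, xy_query, p=2.0, eps=1e-12):
    """Interpolate elevations at xy_query from scattered known points."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** p          # eps avoids division by zero
    return (w @ z_known) / w.sum(axis=1)

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
elev = np.array([100.0, 110.0, 105.0, 120.0])     # invented elevations (m)
query = np.array([[0.5, 0.5], [0.9, 0.1]])
print(idw(pts, elev, query))
```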
Thompson, Jennifer A; Fielding, Katherine; Hargreaves, James; Copas, Andrew
2017-12-01
Background/Aims: We sought to optimise the design of stepped wedge trials with an equal allocation of clusters to sequences and explored sample size comparisons with alternative trial designs. Methods: We developed a new expression for the design effect for a stepped wedge trial, assuming that observations are equally correlated within clusters and that each period between sequence switches contains an equal number of observations. We minimised the design effect with respect to (1) the fraction of observations before the first and after the final sequence switches (the periods with all clusters in the control or intervention condition, respectively) and (2) the number of sequences. We compared the design effect of this optimised stepped wedge trial to the design effects of a parallel cluster-randomised trial, a cluster-randomised trial with baseline observations, and a hybrid trial design (a mixture of cluster-randomised trial and stepped wedge trial) with the same total cluster size for all designs. Results: We found that a stepped wedge trial with an equal allocation to sequences is optimised by obtaining all observations after the first sequence switches and before the final sequence switches to the intervention; this means that the first sequence remains in the control condition and the last sequence remains in the intervention condition for the duration of the trial. With this design, the optimal number of sequences is [Formula: see text], where [Formula: see text] is the cluster-mean correlation, [Formula: see text] is the intracluster correlation coefficient, and m is the total cluster size. The optimal number of sequences is small when the intracluster correlation coefficient and cluster size are small, and large when the intracluster correlation coefficient or cluster size is large. A cluster-randomised trial remains more efficient than the optimised stepped wedge trial when the intracluster correlation coefficient or cluster size is small. A cluster-randomised trial with baseline observations always requires a larger sample size than the optimised stepped wedge trial. The hybrid design can always give an equally or more efficient design, but will be at most 5% more efficient. We provide a strategy for selecting a design if the optimal number of sequences is unfeasible. For a non-optimal number of sequences, the sample size may be reduced by allowing a proportion of observations before the first or after the final sequence has switched. Conclusion: The standard stepped wedge trial is inefficient. To reduce sample sizes when a hybrid design is unfeasible, stepped wedge trial designs should have no observations before the first sequence switches or after the final sequence switches.
Zhao, Xiao-Hui; Han, Fa; Li, Yu-Lin; Yue, Hui-Lan
2013-02-01
Stilbene glycosides are the primary constituents of Rheum tanguticum Maxim. ex Balf., to which various bioactivities have been attributed, including anti-HIV, anti-oxidant, anti-tumour, anti-malarial and anti-allergy activity. However, effective methods for the isolation and purification of stilbene glycosides, such as trans-rhapontin, cis-rhapontin and trans-desoxyrhaponticin, from this herb are not currently available. The objective was to develop an efficient method for the preparative isolation and purification of three stilbene glycosides from Rheum tanguticum Maxim. ex Balf. via high-speed counter-current chromatography (HSCCC). A solvent system composed of chloroform:n-butanol:methanol:water (4:1:3:2, v/v/v/v) was developed for the separation. The upper phase was used as the stationary phase, and the lower phase was used as the mobile phase. The flow rate was 1.8 mL/min. The apparatus was operated at 800 rpm and 25 °C, and the effluent was monitored at 280 nm. Chemical constituents were analysed by high-performance liquid chromatography (HPLC), and their structures were identified by ¹H- and ¹³C-NMR. Under the optimised conditions, 25.5 mg of trans-rhapontin, 16.0 mg of cis-rhapontin and 20.5 mg of trans-desoxyrhaponticin were separated from 80 mg of crude sample; the isolates had purities of 99.6, 97.2 and 99.2%, respectively. A simple and efficient HSCCC method has thus been optimised for the preparative separation of stilbene glycosides from Rheum tanguticum Maxim. ex Balf. Copyright © 2012 John Wiley & Sons, Ltd.
Design Optimisation of a Magnetic Field Based Soft Tactile Sensor
Raske, Nicholas; Kow, Junwai; Alazmani, Ali; Ghajari, Mazdak; Culmer, Peter; Hewson, Robert
2017-01-01
This paper investigates the design optimisation of a magnetic field based soft tactile sensor comprising a magnet and a Hall effect module separated by an elastomer. The aim was to minimise the sensitivity of the output force with respect to the input magnetic field; this was achieved by varying the geometry and material properties. Finite element simulations determined the magnetic field and the structural behaviour under load. Genetic programming produced phenomenological expressions describing these responses. Optimisation studies constrained by a measurable force and stable loading conditions were conducted; these produced Pareto sets of designs from which the optimal sensor characteristics were selected. The optimisation demonstrated a compromise between sensitivity and the measurable force, and a fabricated version of the optimised sensor validated the improvements made using this methodology. The approach presented can be applied in general to optimising soft tactile sensor designs over a range of applications and sensing modes. PMID:29099787
Bock, I; Raveh-Amit, H; Losonczi, E; Carstea, A C; Feher, A; Mashayekhi, K; Matyas, S; Dinnyes, A; Pribenszky, C
2016-04-01
The efficiency of various assisted reproductive techniques can be improved by preconditioning the gametes and embryos with sublethal hydrostatic pressure treatment. However, the underlying molecular mechanism responsible for this protective effect remains unknown and requires further investigation. Here, we studied the effect of optimised hydrostatic pressure treatment on the global gene expression of mouse oocytes after embryonic genome activation. Based on a gene expression microarray analysis, a significant effect of treatment was observed in 4-cell embryos derived from treated oocytes, revealing a transcriptional footprint of hydrostatic pressure-affected genes. Functional analysis identified numerous genes involved in protein synthesis that were downregulated in 4-cell embryos in response to hydrostatic pressure treatment, suggesting that regulation of translation has a major role in optimised hydrostatic pressure-induced stress tolerance. We present a comprehensive microarray analysis and further delineate a potential mechanism responsible for the protective effect of hydrostatic pressure treatment.
NASA Astrophysics Data System (ADS)
Aungkulanon, P.; Luangpaiboon, P.
2010-10-01
Nowadays, engineering problems are large and complicated. Effective solution procedures for such problems can be categorised into exact optimisation and meta-heuristic algorithms. Although meta-heuristics cannot guarantee the best decision-variable levels among the available alternatives, they are experience-based techniques that rapidly assist in problem solving, learning and discovery, in the hope of obtaining a more efficient or more robust procedure. All meta-heuristics provide auxiliary procedures in the form of their own toolbox functions, and it has been shown that their effectiveness depends almost exclusively on these auxiliary functions; in fact, the auxiliary procedure from one can be implemented in other meta-heuristics. The well-known meta-heuristics of the harmony search algorithm (HSA) and the shuffled frog-leaping algorithm (SFLA) are compared with their hybridisations. HSA produces a near-optimal solution by mimicking the improvisation process through which musicians seek a perfect state of harmony. The population-based SFLA is a cooperative search metaphor inspired by natural memetics, including elements of local search and global information exchange. This study presents solution procedures for constrained and unconstrained problems with single- and multi-peak surfaces, including a curved ridge surface. Both meta-heuristics are modified via the variable neighbourhood search method (VNSM) philosophy, including a modified simplex method (MSM). The basic idea is the change of neighbourhoods while searching for a better solution: the hybridisations proceed by a descent method to a local minimum, then explore, systematically or at random, increasingly distant neighbourhoods of this local solution. The results show that the variant of HSA with VNSM and MSM performs better in terms of the mean and variance of the design points and the yields achieved.
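The improvisation step at the heart of HSA is short enough to show directly. A bare-bones sketch on a single-peak test surface; the parameter values (HMCR, PAR, bandwidth) are typical defaults, not those of the study:

```python
# Bare-bones harmony search on a toy objective (minimum at x = 1.5).
import numpy as np

rng = np.random.default_rng(42)
f = lambda x: np.sum((x - 1.5) ** 2)

dim, hms, hmcr, par, bw, iters = 2, 10, 0.9, 0.3, 0.05, 2000
lo, hi = -5.0, 5.0
memory = rng.uniform(lo, hi, size=(hms, dim))       # harmony memory
scores = np.array([f(x) for x in memory])

for _ in range(iters):
    new = np.empty(dim)
    for j in range(dim):
        if rng.random() < hmcr:                     # pick from memory...
            new[j] = memory[rng.integers(hms), j]
            if rng.random() < par:                  # ...with pitch adjustment
                new[j] += bw * rng.uniform(-1, 1)
        else:                                       # or improvise randomly
            new[j] = rng.uniform(lo, hi)
    worst = scores.argmax()
    if f(new) < scores[worst]:                      # replace worst harmony
        memory[worst], scores[worst] = new, f(new)

print(memory[scores.argmin()], scores.min())
```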
In-situ XRD and EDS method study on the oxidation behaviour of Ni-Cu sulphide ore.
Li, Guangshi; Cheng, Hongwei; Xiong, Xiaolu; Lu, Xionggang; Xu, Cong; Lu, Changyuan; Zou, Xingli; Xu, Qian
2017-06-12
The oxidation mechanism of sulphides is the key issue in the sulphide-metallurgy process. In this study, the phase transformation and element migration were clearly demonstrated by in-situ laboratory-based X-ray diffraction (XRD) and energy-dispersive X-ray spectroscopy (EDS), respectively. The reaction sequence and a four-step oxidation mechanism were proposed and identified. The elemental distribution showed that at low temperature the Fe atoms diffused outward while the Ni/Cu atoms migrated toward the inner core, whereas the opposite diffusion processes were observed at higher temperature. Importantly, the unique visual presentation of the oxidation behaviour provided by the combination of in-situ XRD and EDS may be useful for optimising process parameters to improve the Ni/Cu extraction efficiency in Ni-Cu sulphide metallurgy.
Nanostructured gold and platinum electrodes on silicon structures for biosensing
NASA Astrophysics Data System (ADS)
Ogurtsov, V. I.; Sheehan, M. M.
2005-01-01
Gold and platinum metal electrodes on Si/SiO2 that has undergone an anisotropic potassium hydroxide (KOH) etch are considered. The etch proceeds at different rates in different crystallographic directions, creating numerous pyramid-shaped pits in the silicon substrate; this surface is used to make metal electrodes with increased electrode efficiency. The electrodes can serve as sensors or as sensor substrates (for surface polymer modification), and because both gold and platinum are inert they have applications in food-safety biosensing. Wine, an economically significant food product, was chosen as the matrix, and electrochemical impedance spectroscopy (EIS) was selected as the method for investigating electrode behaviour. Based on the EIS results, equivalent circuits of varying complexity were determined by least-squares fitting of the sensors' complex impedance measurements.
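Equivalent-circuit fitting of this kind is conventionally done by least squares on the complex impedance. A hedged sketch using a simple Randles model (solution resistance plus charge-transfer resistance in parallel with a double-layer capacitance), with synthetic data standing in for the wine spectra; the circuit choice and values are assumptions:

```python
# Least-squares fit of a Randles equivalent circuit to impedance data.
import numpy as np
from scipy.optimize import least_squares

def z_randles(params, omega):
    rs, rct, cdl = params
    return rs + rct / (1 + 1j * omega * rct * cdl)

omega = np.logspace(0, 5, 40)                    # angular frequency, rad/s
true = (50.0, 500.0, 1e-6)                       # invented Rs, Rct, Cdl
noise = 1 + 0.01 * np.random.default_rng(0).normal(size=omega.size)
z_meas = z_randles(true, omega) * noise          # synthetic "measurement"

def residuals(params):
    diff = z_randles(params, omega) - z_meas
    return np.concatenate([diff.real, diff.imag])   # fit real+imag jointly

fit = least_squares(residuals, x0=(10.0, 100.0, 1e-7),
                    bounds=([0, 0, 0], [np.inf] * 3))
print(fit.x)    # estimates of Rs, Rct, Cdl
```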
A 0.4-2.3 GHz broadband power amplifier extended continuous class-F design technology
NASA Astrophysics Data System (ADS)
Chen, Peng; He, Songbai
2015-08-01
A 0.4-2.3 GHz broadband power amplifier (PA) design technology based on extended continuous class-F operation is proposed in this paper. A traditional continuous class-F PA maintains high efficiency over only one octave of bandwidth, whereas the ongoing development of wireless communication demands PAs that cover the mainstream communication standards' working frequencies from 0.4 GHz to 2.2 GHz. To achieve this objective, the bandwidths of class-F and continuous class-F PAs are analysed and discussed using Fourier series. Two criteria that reduce the implementation complexity of the continuous class-F PA are also presented and explained, based on an investigation of the overlapping area of the transistor's current and voltage waveforms. The proposed design technology builds on the continuous class-F method and divides the bandwidth into two parts: from 1.3 GHz to 2.3 GHz, where the impedances are designed by the continuous class-F method, and from 0.4 GHz to 1.3 GHz, where the impedance that keeps the PA highly efficient over this bandwidth is selected and controlled. An improved particle swarm optimisation is employed to realise the multiple impedance targets of the output and input networks. A PA based on a commercial 10 W GaN high electron mobility transistor was designed and fabricated to verify the proposed method. Simulation and measurement results show that the proposed PA delivers 40-76% power-added efficiency and more than 11 dB power gain with more than 40 dBm output power over the 0.4-2.3 GHz bandwidth.
Pizarro, C; Pérez-del-Notario, N; González-Sáiz, J M
2010-09-24
A simple, accurate and sensitive method based on headspace solid-phase microextraction (HS-SPME) coupled to gas chromatography-tandem mass spectrometry (GC-MS/MS) was developed for the analysis of 4-ethylguaiacol, 4-ethylphenol, 4-vinylguaiacol and 4-vinylphenol in beer. The effect of the presence of CO2 in the sample on the extraction of analytes was examined, and the influence of different fibre coatings, salt addition and stirring on extraction efficiency was also evaluated. Divinylbenzene/carboxen/polydimethylsiloxane was selected as the extraction fibre and was used to evaluate the influence of exposure time, extraction temperature and sample volume/total volume ratio (Vs/Vt) by means of a central composite design (CCD). The optimal conditions identified were an extraction temperature of 80 °C, an extraction time of 55 min and a 6 mL beer sample (Vs/Vt 0.30). Under these conditions, the proposed method showed satisfactory linearity (correlation coefficients between 0.993 and 0.999), precision (between 6.3% and 9.7%) and detection limits (lower than those previously reported for volatile phenols in beers). The method was applied successfully to the analysis of beer samples. To our knowledge, this is the first time that a HS-SPME based method has been developed to determine these four volatile phenols in beers simultaneously. Copyright 2010 Elsevier B.V. All rights reserved.
Radiation exposure in X-ray-based imaging techniques used in osteoporosis
Adams, Judith E.; Guglielmi, Giuseppe; Link, Thomas M.
2010-01-01
Recent advances in medical X-ray imaging have enabled the development of new techniques capable of assessing not only bone quantity but also structure. This article provides (a) a brief review of the current X-ray methods used for quantitative assessment of the skeleton, (b) data on the levels of radiation exposure associated with these methods and (c) information about radiation safety issues. Radiation doses associated with dual-energy X-ray absorptiometry are very low. However, as with any X-ray imaging technique, each particular examination must always be clinically justified. When an examination is justified, the emphasis must be on dose optimisation of imaging protocols. Dose optimisation is more important for paediatric examinations because children are more vulnerable to radiation than adults. Methods based on multi-detector CT (MDCT) are associated with higher radiation doses. New 3D volumetric hip and spine quantitative computed tomography (QCT) techniques and high-resolution MDCT for evaluation of bone structure deliver doses to patients from 1 to 3 mSv. Low-dose protocols are needed to reduce radiation exposure from these methods and minimise associated health risks. PMID:20559834
NASA Astrophysics Data System (ADS)
Hill, Ian; White, Toby; Owen, Sarah
2014-05-01
Extraction and processing of rock materials to produce aggregates is carried out at some 20,000 quarries across the EU. All stages of the processing and transport of hard, dense materials inevitably consume high levels of energy and carry consequently significant carbon footprints. The FP7 project "the Energy Efficient Quarry" (EE-Quarry) has been addressing this problem and has devised strategies, supported by modelling software, to assist the quarrying industry in assessing and optimising its energy use and minimising its carbon footprint. Aggregate quarries across Europe vary enormously in the scale of their operations, the nature of the worked mineral, and the processing applied to produce a final market product. Nevertheless, most quarries involve most or all of a series of essential stages: deposit assessment, drilling and blasting, loading and hauling, and crushing and screening. Determining the energy efficiency of each stage is complex but broadly understood in principle, and numerous sources of information and guidance are available in the literature and online. More complex still are the interactions between these stages. For example, using a little more energy in blasting to increase fragmentation may save much greater energy in later crushing and screening, but may also generate more fines, which are discarded as waste so that the energy embedded in this material is lost. The calculation of the embedded energy in the waste material thus becomes an input to the determination of the blasting strategy, and such feedback loops abound in overall quarry optimisation. The project has involved research and demonstration operations at a number of quarries distributed across Europe, carried out by all partners in the EE-Quarry project in collaboration with many of the major quarrying companies operating in the EU. The project is developing a sophisticated modelling tool, the "EE-Quarry Model", available to the quarrying industry on a web-based platform. This tool guides quarry managers and operators through the complex, multi-layered, iterative process of assessing the energy efficiency of their own quarry operation. They are able to evaluate the optimisation of the energy efficiency of the overall quarry by examining both the individual processing stages and the interactions between them. The project is also developing on-line distance-learning modules designed for Continuous Professional Development (CPD) activities for staff across the quarrying industry in the EU and beyond. The presentation will describe the development of the model, and the format and scope of the resulting software tool and the user support available to the quarrying industry.
Optimisation of lateral car dynamics taking into account parameter uncertainties
NASA Astrophysics Data System (ADS)
Busch, Jochen; Bestle, Dieter
2014-02-01
Simulation studies on an active all-wheel-steering car show that disturbances of vehicle parameters have a strong influence on lateral car dynamics, which motivates the need for a design that is robust against such parameter uncertainties. A specific parametrisation is established that combines deterministic, velocity-dependent steering control parameters with partly uncertain, velocity-independent vehicle parameters for simultaneous use in a numerical optimisation process. Model-based objectives are formulated and summarised in a multi-objective optimisation problem, in which the lateral steady-state behaviour in particular is improved by an adaptation strategy based on measurable uncertainties. The normally distributed uncertainties are generated by optimal Latin hypercube sampling, and a response-surface-based strategy cuts down on time-consuming model evaluations, which makes it possible to use a genetic optimisation algorithm. Optimisation results are discussed in different criterion spaces, and the achieved improvements confirm the validity of the proposed procedure.
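The uncertainty-sampling step is easy to sketch: draw a Latin hypercube design in the unit cube, then map it through the inverse CDF of the assumed normal distributions. The two parameters, their means and spreads below are illustrative stand-ins for the paper's vehicle parameters:

```python
# Latin hypercube sampling of normally distributed vehicle parameters.
import numpy as np
from scipy.stats import norm, qmc

sampler = qmc.LatinHypercube(d=2, seed=7)
u = sampler.random(n=50)                  # 50 samples in the unit square

# Map uniform samples to assumed normal distributions via the ppf.
mass = norm(loc=1500.0, scale=50.0).ppf(u[:, 0])        # vehicle mass, kg
c_tyre = norm(loc=1.2e5, scale=5e3).ppf(u[:, 1])        # tyre stiffness

for m, c in zip(mass[:3], c_tyre[:3]):
    print(f"simulation run: mass={m:.0f} kg, stiffness={c:.0f}")
```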
NASA Astrophysics Data System (ADS)
Li, Ni; Huai, Wenqing; Wang, Shaodan
2017-08-01
C2 (command and control) is understood to be a critical military component in meeting the increasing demand for rapid information gathering and real-time decision-making in a dynamically changing battlefield environment. In this article, to improve a C2 behaviour model's reusability and interoperability, a behaviour modelling framework is proposed that specifies a C2 model's internal modules and a set of interoperability interfaces based on the C-BML (coalition battle management language). WTA (weapon target assignment) is a typical C2 autonomous decision-making behaviour modelling problem. Unlike most WTA problem formulations, sensors are considered here as available detection resources, and the relationship constraints between weapons and sensors are also taken into account, which brings the model much closer to actual application. A modified differential evolution (MDE) algorithm was developed to solve this high-dimensional optimisation problem and obtained an optimal assignment plan with high efficiency. In a case study, we built a simulation system to validate the proposed C2 modelling framework and interoperability interface specification, and the new optimisation solution solved the WTA problem efficiently and successfully.
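For intuition, the classical one-weapon-per-target WTA without sensor constraints can be solved exactly as an assignment problem; the paper's sensor-constrained, high-dimensional variant is what requires the evolutionary approach. A simplified baseline with an invented kill-probability matrix:

```python
# Simplified WTA baseline: one weapon per target, Hungarian method.
import numpy as np
from scipy.optimize import linear_sum_assignment

# p[i, j] = kill probability of weapon i against target j (assumed).
p = np.array([[0.7, 0.2, 0.5],
              [0.4, 0.8, 0.3],
              [0.6, 0.5, 0.9]])
values = np.array([1.0, 2.0, 1.5])       # target values (assumed)

# Maximise total expected damage <=> minimise its negative.
cost = -(values[None, :] * p)
rows, cols = linear_sum_assignment(cost)
for w, t in zip(rows, cols):
    print(f"weapon {w} -> target {t} (kill prob {p[w, t]:.1f})")
```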
Systemic solutions for multi-benefit water and environmental management.
Everard, Mark; McInnes, Robert
2013-09-01
The environmental and financial costs of inputs to, and unintended consequences arising from narrow consideration of outputs from, water and environmental management technologies highlight the need for low-input solutions that optimise outcomes across multiple ecosystem services. Case studies examining the inputs and outputs associated with several ecosystem-based water and environmental management technologies reveal a spectrum, ranging from those that differ little from conventional electro-mechanical engineering techniques to methods, such as integrated constructed wetlands (ICWs), designed explicitly as low-input systems optimising ecosystem service outcomes. All techniques present opportunities for further optimisation of outputs, and hence for greater cumulative public value. We define 'systemic solutions' as "…low-input technologies using natural processes to optimise benefits across the spectrum of ecosystem services and their beneficiaries". They contribute to sustainable development by averting unintended negative impacts and optimising benefits to all ecosystem service beneficiaries, increasing net economic value. Legacy legislation addressing issues in a fragmented way, associated 'ring-fenced' budgets and established management assumptions represent obstacles to implementing 'systemic solutions'. However, flexible implementation of legacy regulations that recognises their primary purpose, rather than slavish adherence to detailed sub-clauses, may achieve greater overall public benefit through optimisation of outcomes across ecosystem services. Systemic solutions are not a panacea if applied merely as 'downstream' fixes, but are part of, and a means to accelerate, broader culture change towards more sustainable practice. This necessarily entails connecting a wider network of interests in the formulation and design of mutually beneficial systemic solutions, including for example spatial planners, engineers, regulators, managers, farming and other businesses, and researchers working on ways to quantify and optimise the delivery of ecosystem services. Copyright © 2013 Elsevier B.V. All rights reserved.
Payload Instrument Design Rules for Safe and Efficient Flight Operations
NASA Astrophysics Data System (ADS)
Montagnon, E.; Ferri, P.
2004-04-01
Payload operations are often neglected in favour of optimising the scientific performance of the instrument design. This has major drawbacks in terms of cost, safety, efficiency of operations and, ultimately, science return. By taking operational aspects into account in the early phases of instrument design, with a minimal additional effort that is more cultural than financial or technological, many problems can be avoided or minimised, with significant benefits to be gained in the mission execution phases. This paper presents possible improvements based on the use of the telemetry and telecommand packet standard, proper sharing of autonomy functions between instrument and platform, and enhanced interface documents.
NASA Astrophysics Data System (ADS)
Zimmerling, Clemens; Dörr, Dominik; Henning, Frank; Kärger, Luise
2018-05-01
Due to their high mechanical performance, continuous fibre reinforced plastics (CoFRP) are becoming increasingly important for load-bearing structures. In many cases, manufacturing CoFRPs comprises a forming process of textiles. To predict and optimise the forming behaviour of a component, numerical simulations are applied. However, for maximum part quality, the geometry and the process parameters must be matched to each other, which in turn requires numerous, numerically expensive optimisation iterations. In both textile and metal forming, much research has focused on determining optimum process parameters while regarding the geometry as invariable. In this work, a meta-model based approach on the component level is proposed that provides a rapid estimation of formability for variable geometries based on pre-sampled, physics-based draping data. Initially, a geometry recognition algorithm scans the geometry and extracts a set of doubly-curved regions with the relevant geometry parameters. If the relevant parameter space is not covered by the underlying database, additional samples are drawn via finite element draping simulations according to a suitable design table for computer experiments; time-saving parallel runs of the physical simulations accelerate the data acquisition. Ultimately, a Gaussian process regression meta-model is built from the database. The method is demonstrated on a box-shaped generic structure, and the predicted results are in good agreement with physics-based draping simulations. Since evaluations of the established meta-model are numerically inexpensive, any further design exploration (e.g. robustness analysis or design optimisation) can be performed in a short time. The proposed method is also expected to offer great potential for future applications along virtual process chains: for each process step along the chain, a meta-model can be set up to predict the impact of design variations on manufacturability and part performance. The method is thus considered to facilitate a lean and economic part and process design under consideration of manufacturing effects.
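The meta-model step itself is standard regression on the pre-sampled data. A hedged sketch with scikit-learn, where two invented geometry parameters and a fabricated scalar formability measure stand in for the FE draping database:

```python
# Gaussian process regression meta-model over pre-sampled draping data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(30, 2))          # e.g. corner radius, draw depth
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2       # fabricated "max shear angle"

gpr = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), alpha=1e-6)
gpr.fit(X, y)

# Cheap evaluations enable rapid design exploration with uncertainty.
x_new = np.array([[0.4, 0.7]])
mean, std = gpr.predict(x_new, return_std=True)
print(f"predicted formability {mean[0]:.3f} +/- {std[0]:.3f}")
```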
Hybrid real-code ant colony optimisation for constrained mechanical design
NASA Astrophysics Data System (ADS)
Pholdee, Nantiwat; Bureerat, Sujin
2016-01-01
This paper proposes a hybrid meta-heuristic based on integrating a local search simplex downhill (SDH) method into the search procedure of real-code ant colony optimisation (ACOR). This hybridisation leads to five hybrid algorithms where a Monte Carlo technique, a Latin hypercube sampling technique (LHS) and a translational propagation Latin hypercube design (TPLHD) algorithm are used to generate an initial population. Also, two numerical schemes for selecting an initial simplex are investigated. The original ACOR and its hybrid versions along with a variety of established meta-heuristics are implemented to solve 17 constrained test problems where a fuzzy set theory penalty function technique is used to handle design constraints. The comparative results show that the hybrid algorithms are the top performers. Using the TPLHD technique gives better results than the other sampling techniques. The hybrid optimisers are a powerful design tool for constrained mechanical design problems.
Zhang, Lipei; Xing, Xing; Zheng, Lingling; Chen, Zhijian; Xiao, Lixin; Qu, Bo; Gong, Qihuang
2014-01-01
Vertical phase separation of the donor and the acceptor in organic bulk heterojunction solar cells is crucial to improve the exciton dissociation and charge transport efficiencies. This is because whilst the exciton diffusion length is limited, the organic film must be thick enough to absorb sufficient light. However, it is still a challenge to control the phase separation of a binary blend in a bulk heterojunction device architecture. Here we report the realization of vertical phase separation induced by in situ photo-polymerization of the acrylate-based fulleride. The power conversion efficiency of the devices with vertical phase separation increased by 20%. By optimising the device architecture, the power conversion efficiency of the single junction device reached 8.47%. We believe that in situ photo-polymerization of acrylate-based fulleride is a universal and controllable way to realise vertical phase separation in organic blends. PMID:24861168
Scully, N M; Ashu-Arrah, B A; Nagle, A P; Omamogho, J O; O'Sullivan, G P; Friebolin, V; Dietrich, B; Albert, K; Glennon, J D
2011-04-15
Investigations into the preparation of a silica hydride intermediate in supercritical carbon dioxide (sc-CO(2)), avoiding the use of organic solvents such as toluene or dioxane, are described. The effects of reaction temperature, pressure and time on the surface coverage of the supercritical-fluid-generated silica hydride intermediate were studied. Under optimised supercritical conditions of 120°C, 483 bar and a 3 h reaction time, silica hydride (Si-H) conversion efficiencies of ca. 40% were achieved for the hydride intermediate prepared from a monofunctional silane reagent (dimethylmethoxysilane). Si-H conversion efficiencies (as determined from (29)Si CP-MAS NMR spectral analysis) for the hydride intermediate prepared from triethoxysilane (TES) in sc-CO(2) were found to be comparable to those obtained using a TES silanisation approach in an organic solvent. (13)C and (29)Si CP-MAS-NMR spectroscopy was employed to provide a complete structural assignment of the silica hydride intermediates. Furthermore, supercritical CO(2) was subsequently employed as a reaction medium for the heterogeneous hydrosilation of silica hydride with octadecene and with styrene in the presence of a free radical initiator. These supercritical-fluid-generated reversed-phase materials were prepared in a substantially reduced reaction time (3 h) compared with organic-solvent-based methods (100 h reaction time). Silica functionalisation in sc-CO(2) presents an efficient and clean alternative to organic-solvent-based methods for the preparation of the important silica hydride intermediate and of silica-bonded stationary phases via a hydrosilation approach. Copyright © 2010 Elsevier B.V. All rights reserved.
Turner, Andrew D; Waack, Julia; Lewis, Adam; Edwards, Christine; Lawton, Linda
2018-02-01
A simple, rapid UHPLC-MS/MS method has been developed and optimised for the quantitation of microcystins and nodularin in a wide variety of sample matrices. The microcystin analogues targeted were MC-LR, MC-RR, MC-LA, MC-LY, MC-LF, MC-LW, MC-YR, MC-WR, [Asp3] MC-LR, [Dha7] MC-LR, MC-HilR and MC-HtyR. Optimisation studies were conducted to develop a simple, quick and efficient extraction protocol without the need for complex pre-analysis concentration procedures, together with a rapid, sub-5 min chromatographic separation of toxins in shellfish and algal supplement tablet powders, as well as water and cyanobacterial bloom samples. Validation studies covering the full method performance characteristics were undertaken on each matrix-analyte combination following international guidelines. The method was found to be specific and linear over the full calibration range. Method sensitivity, in terms of limits of detection, quantitation and reporting, was significantly improved compared with LC-UV methods and applicable to the analysis of each of the four matrices. Overall, acceptable recoveries were determined for each of the matrices studied, with associated precision and within-laboratory reproducibility well within expected guidance limits. Results from the formalised ruggedness analysis of all available cyanotoxins showed that the method was robust for all parameters investigated. The results presented here show that the optimised LC-MS/MS method for cyanotoxins is fit for the purpose of detecting and quantitating a range of microcystins and nodularin in shellfish, algal supplement tablet powder, water and cyanobacteria. The method provides a valuable early-warning tool for the rapid, routine extraction and analysis of natural waters, cyanobacterial blooms, algal powders, food supplements and shellfish tissues, enabling monitoring laboratories to supplement traditional microscopy techniques and report toxicity results within a short timeframe of sample receipt. The new method, now accredited to the ISO 17025 standard, is simple, quick, applicable to multiple matrices and highly suitable for use as a routine, high-throughput, fast-turnaround regulatory monitoring tool. Copyright © 2017 Elsevier B.V. All rights reserved.
An illustration of new methods in machine condition monitoring, Part I: stochastic resonance
NASA Astrophysics Data System (ADS)
Worden, K.; Antoniadou, I.; Marchesiello, S.; Mba, C.; Garibaldi, L.
2017-05-01
There have been many recent developments in the application of data-based methods to machine condition monitoring. A powerful methodology based on machine learning has emerged, in which diagnostics are based on a two-step procedure: extraction of damage-sensitive features, followed by unsupervised learning (novelty detection) or supervised learning (classification). The objective of the current pair of papers is simply to illustrate one state-of-the-art procedure for each step, using synthetic data representative of reality in terms of size and complexity. The first paper in the pair deals with feature extraction. Although some papers have appeared in the recent past considering stochastic resonance as a means of amplifying damage information in signals, they have largely relied on ad hoc specifications of the resonator used. In contrast, the current paper adopts a principled optimisation-based approach to the resonator design. The paper also shows that a discrete dynamical system can provide all the benefits of a continuous system while offering a considerable speed-up in simulation time, which facilitates the optimisation approach.
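To make the stochastic resonance idea concrete, consider the canonical bistable system driven by a weak periodic signal plus noise; at the right noise level, inter-well hopping locks to the forcing and amplifies it. An illustrative simulation under invented parameters (not the paper's optimised resonator):

```python
# Illustrative bistable stochastic resonance: x' = a x - b x^3 + forcing + noise.
import numpy as np

rng = np.random.default_rng(0)
dt, n = 1e-3, 200_000
a, b = 1.0, 1.0            # double-well potential U(x) = -a x^2/2 + b x^4/4
amp, freq = 0.3, 0.05      # weak, sub-threshold periodic forcing (Hz)
noise = 0.8                # noise strength tuned near the resonance

t = np.arange(n) * dt
x = np.zeros(n)
for k in range(n - 1):     # Euler-Maruyama integration
    drift = a * x[k] - b * x[k] ** 3 + amp * np.sin(2 * np.pi * freq * t[k])
    x[k + 1] = x[k] + drift * dt + noise * np.sqrt(dt) * rng.normal()

# Near resonance the spectrum develops a peak close to the forcing frequency.
spectrum = np.abs(np.fft.rfft(x - x.mean())) ** 2
freqs = np.fft.rfftfreq(n, dt)
band = (freqs > 0.01) & (freqs < 1.0)
print("spectral peak at ~%.3f Hz" % freqs[band][spectrum[band].argmax()])
```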
Alasonati, Enrica; Fabbri, Barbara; Fettig, Ina; Yardin, Catherine; Del Castillo Busto, Maria Estela; Richter, Janine; Philipp, Rosemarie; Fisicaro, Paola
2015-03-01
In Europe the maximum allowable concentration of tributyltin (TBT) compounds in surface water is regulated by the water framework directive (WFD) and its daughter directive, which impose a limit of 0.2 ng L(-1) in whole water (as tributyltin cation). Despite the large number of methodologies for the quantification of organotin species developed over the last two decades, standardised analytical methods at the required concentration level do not exist. TBT quantification at the picogram level requires efficient and accurate sample preparation and preconcentration, and maximum care to avoid blank contamination. To meet the WFD requirement, a method for the quantification of TBT in mineral water at the environmental quality standard (EQS) level, based on solid phase extraction (SPE), was developed and optimised. The quantification was done using species-specific isotope dilution (SSID) followed by gas chromatography (GC) coupled to inductively coupled plasma mass spectrometry (ICP-MS). The analytical process was optimised using a design of experiments (DOE) based on a fractional factorial plan. The DOE allowed the evaluation of 3 qualitative factors (type of stationary phase and eluent; phase mass and eluent volume; pH and analyte ethylation procedure), for a total of 13 levels studied, and a sample volume in the range of 250-1000 mL. Four different models fitting the results were defined and evaluated with statistical tools; one of them was selected and optimised to find the best procedural conditions. The C18 phase was found to be the best stationary phase for the SPE experiments. The 4 solvents tested with C18, the pH and ethylation conditions, the mass of the phases, the volume of the eluents and the sample volume can all be optimal, depending on their respective combination. For that reason, the equation of the model conceived in this work is a useful decision tool for the planning of experiments, because it can be applied to predict the TBT mass fraction recovery for a given set of experimental conditions. This work shows that SPE is a convenient and robust technique for TBT pre-concentration at pico-trace levels: (i) a number of different experimental conditions led to satisfactory results, and (ii) the participation of two institutes in the experimental work did not affect the developed model. Copyright © 2014 Elsevier B.V. All rights reserved.
Devos, Olivier; Downey, Gerard; Duponchel, Ludovic
2014-04-01
Classification is an important task in chemometrics. For several years now, support vector machines (SVMs) have proven to be powerful for infrared spectral data classification. However, such methods require the optimisation of parameters in order to control the risk of overfitting and the complexity of the decision boundary. Furthermore, it is established that the prediction ability of classification models can be improved by pre-processing that removes unwanted variance from the spectra. In this paper we propose a new methodology based on a genetic algorithm (GA) for the simultaneous optimisation of SVM parameters and pre-processing (GENOPT-SVM). The method was tested on the discrimination of the geographical origin of Italian olive oil (Ligurian and non-Ligurian) on the basis of near infrared (NIR) or mid infrared (FTIR) spectra. Different classification models (PLS-DA, SVM with mean-centred data, GENOPT-SVM) were tested and statistically compared using McNemar's test. For both datasets, SVM with optimised pre-processing gave models with higher accuracy than those obtained with PLS-DA on pre-processed data. For the NIR dataset, most of this accuracy improvement (86.3% compared with 82.8% for PLS-DA) occurred using only a single pre-processing step. For the FTIR dataset, three optimised pre-processing steps were required to obtain an SVM model with a significant accuracy improvement (82.2%) compared with PLS-DA (78.6%). Furthermore, this study demonstrates that even SVM models have to be developed on the basis of well-corrected spectral data in order to reach higher classification rates. Copyright © 2013 Elsevier Ltd. All rights reserved.
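A toy version of the GA idea, reduced to tuning just the SVM hyperparameters by cross-validated accuracy; the pre-processing genes are omitted for brevity, and scikit-learn's built-in wine dataset stands in for the olive-oil spectra:

```python
# Small GA tuning SVM hyperparameters (log10 C, log10 gamma) by CV accuracy.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_wine(return_X_y=True)
rng = np.random.default_rng(1)

def fitness(gene):
    C, gamma = 10.0 ** gene[0], 10.0 ** gene[1]
    model = make_pipeline(StandardScaler(), SVC(C=C, gamma=gamma))
    return cross_val_score(model, X, y, cv=5).mean()

pop = rng.uniform([-2, -5], [3, 0], size=(12, 2))     # log10 search ranges
for _ in range(15):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-6:]]            # truncation selection
    kids = parents[rng.integers(6, size=6)] + 0.2 * rng.normal(size=(6, 2))
    pop = np.vstack([parents, kids])                  # elitism + mutation

best = pop[np.argmax([fitness(g) for g in pop])]
print("best log10(C), log10(gamma):", best)
```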
Targeted flock/herd and individual ruminant treatment approaches.
Kenyon, F; Jackson, F
2012-05-04
In Europe, most nematodoses are subclinical, involving morbidity rather than mortality, and control is largely achieved using anthelmintics. In cattle, the genera most associated with sub-optimal performance are Ostertagia and Cooperia, whereas in sheep and goats subclinical losses are most often caused by Teladorsagia and Trichostrongylus. In some regions, at certain times, other species such as Nematodirus and Haemonchus also cause disease in sheep and goats. Unfortunately, anthelmintic resistance has now become an issue for European small-ruminant producers. One of the key aims of the EU-funded PARASOL project was to identify low-input and sustainable approaches to controlling nematode parasites in ruminants using refugia-based strategies. Two approaches to optimising anthelmintic treatments in sheep and cattle were studied: targeted treatments (TT), whole-group treatments optimised on the basis of a marker of infection, e.g. faecal egg count (FEC), and targeted selective treatment (TST), treatments given to identified individuals to provide epidemiological and/or production benefits. A number of indicators for TT and TST were assessed to define the parasitological and production-system-specific indicators for treatment best suited to the regions where the PARASOL studies were conducted. These included liveweight gain, production efficiency, FEC, body condition score and diarrhoea score in small ruminants, and pepsinogen levels and the Ostertagia bulk milk tank ELISA in cattle. The PARASOL studies confirmed the value of monitoring FEC as a means of targeting whole-flock treatments in small ruminants. In cattle, bulk milk tank ELISA and serum pepsinogen assays could be used retrospectively to determine levels of exposure and hence to optimise anthelmintic usage in the following season. TST approaches in sheep and goats examined production efficiency and liveweight gain as indicators for treatment and confirmed the value of this approach in maintaining performance and anthelmintic susceptibility in the predominant gastrointestinal nematodes. There is good evidence that the TST approach selected less heavily for the development of resistance than routine monthly treatments. Further research is required to optimise markers for TT and TST, but it is also crucial to encourage producers and advisors to adopt these refugia-based strategies to maintain drug-susceptible parasites and thereby provide sustainable control. Copyright © 2011 Elsevier B.V. All rights reserved.
Efficiency improvement of technological preparation of power equipment manufacturing
NASA Astrophysics Data System (ADS)
Milukov, I. A.; Rogalev, A. N.; Sokolov, V. P.; Shevchenko, I. V.
2017-11-01
The competitiveness of power equipment depends primarily on speeding up the development and mastering of new equipment and technologies, and on enhancing the organisation and management of design, manufacturing and operation. Current political, technological and economic conditions create an acute need to change the strategy and tactics of process planning. At the same time, the issues of maintaining equipment while improving its efficiency and its compatibility with domestically produced components are considered. To solve these problems, the use of computer-aided process planning systems for process design at all stages of the power equipment life cycle is economically viable. Computer-aided process planning improves process planning through mathematical methods and the optimisation of design and management processes on the basis of CALS technologies, allowing simultaneous process design, planning, organisation and management based on the mathematical and physical modelling of interrelated design objects and the production system. The integration of computer-aided systems that link information and material processes at all stages of the product life cycle is proposed as an effective solution to the challenges of new equipment design and process planning.
Application-specific coarse-grained reconfigurable array: architecture and design methodology
NASA Astrophysics Data System (ADS)
Zhou, Li; Liu, Dongpei; Zhang, Jianfeng; Liu, Hengzhu
2015-06-01
Coarse-grained reconfigurable arrays (CGRAs) have shown potential for application in embedded systems in recent years. The numerous reconfigurable processing elements (PEs) in CGRAs provide flexibility while maintaining high performance by exploiting different levels of parallelism. However, a gap remains between the CGRA and the application-specific integrated circuit (ASIC), and some application domains, such as software-defined radios (SDRs), require flexibility together with increasing performance demands; more effective CGRA architectures are therefore expected. Customising a CGRA to its application can improve performance and efficiency. This study proposes an application-specific CGRA architecture template composed of generic PEs (GPEs) and special PEs (SPEs), where the hardware of the SPE can be customised to accelerate specific computational patterns. An automatic design methodology that includes pattern identification and application-specific function unit generation is also presented, along with a mapping algorithm based on ant colony optimisation. Experimental results on the SDR target domain show that, compared with other general-purpose and application-specific reconfigurable architectures, the CGRA generated by the proposed method performs given applications more efficiently.
Poli, G; Dall'Ara, P; Binda, S; Santus, G; Poli, A; Cocilovo, A; Ponti, W
2001-01-01
Recurrent herpes simplex labialis remains difficult to treat, despite the many established antiviral drugs that have been used in clinical research for over 30 years. Although differences between the human disease and that obtained in experimental animals suggest caution in predicting an effective clinical response from experimental results, some animal models seem useful for optimising the topical formulation of single antiviral drugs. In the present work the dorsal cutaneous guinea pig model was used to compare five different topical antiviral formulations with clinical promise (active molecule: 5% w/w micronized aciclovir, CAS 59277-89-3), using both roll-on and lipstick application systems. The aim was to evaluate which vehicle (water, oil, low-melting or high-melting fatty base) and application system (roll-on, lipstick) enhances the skin penetration and the antiviral activity of the drug after an experimental intradermal infection with herpes simplex virus type 1 (HSV-1). A commercial formulation (5% aciclovir ointment) was used as reference. The cumulative results of this study showed that formulation A, containing 5% aciclovir in an aqueous base in a roll-on application system, had the best antiviral efficacy in reducing the severity of cutaneous lesions and the viral titre; among the lipstick preparations, formulation D, containing 5% aciclovir in a low-melting fatty base, demonstrated very strong antiviral activity, though slightly less than formulation A. This experimental work confirms the validity of the dorsal cutaneous guinea pig model as a rapid and efficient method to compare the antiviral efficacy of new formulations with clinical promise, in order to optimise the topical formulation of active antiviral drugs.
Akhtar, Anam; Wang, Scarlet Xiaoyan; Ghali, Lucy; Bell, Celia; Wen, Xuesong
2018-04-04
Despite the success of arsenic trioxide (ATO) in treating haematological malignancies, its potential to treat solid tumours has not been fully exploited, owing to its dose-limiting toxicity and poor pharmacokinetics. In order to overcome this hurdle, liposomal formulations of the drug with different surface charges (neutral, negative and positive) and sizes (100, 200 and 400 nm) were synthesised and tested on human papilloma virus (HPV)-positive HeLa and HPV-negative HT-3 cervical cancer cell lines. Two epithelial cell lines, human keratinocytes (HK) and human colon cells (CRL-1790), were used as controls. The synthesised liposomes were tested for their physico-chemical characteristics, drug loading efficiency, and toxicity on the studied cell lines. Neutral liposomes of 100 nm in size were the chosen formulation for delivering ATO into the studied cells, as they showed the least intrinsic cytotoxicity and the highest loading efficiency. The findings demonstrated that the optimised formulation of liposomes was an effective drug delivery method for HPV-infected cervical cancer cells. Furthermore, the toxicity vs. uptake ratio was highest for HeLa cells, while a reduced or minimal toxic effect was observed for non-HPV-infected cervical cancer cells and control cells. These findings may provide a promising therapeutic strategy for effectively managing cervical cancers.
Conjugate gradient minimisation approach to generating holographic traps for ultracold atoms.
Harte, Tiffany; Bruce, Graham D; Keeling, Jonathan; Cassettari, Donatella
2014-11-03
Direct minimisation of a cost function can in principle provide a versatile and highly controllable route to computational hologram generation. Here we show that the careful design of cost functions, combined with numerically efficient conjugate gradient minimisation, establishes a practical method for the generation of holograms for a wide range of target light distributions. This results in a guided optimisation process, with a crucial advantage illustrated by the ability to circumvent optical vortex formation during hologram calculation. We demonstrate the implementation of the conjugate gradient method for both discrete and continuous intensity distributions and discuss its applicability to optical trapping of ultracold atoms.
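As an illustration of the cost-function approach described above, the following minimal Python sketch optimises a small phase-only hologram with SciPy's conjugate-gradient minimiser; the 8x8 grid, quadratic intensity-mismatch cost and two-spot target are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: conjugate-gradient hologram design (assumed toy setup).
import numpy as np
from scipy.optimize import minimize

N = 8
target = np.zeros((N, N))
target[2, 2] = target[5, 5] = 1.0          # two target "trap" spots
target /= target.sum()

def cost(phi_flat):
    phi = phi_flat.reshape(N, N)
    field = np.fft.fft2(np.exp(1j * phi)) / N    # replay field of the phase mask
    inten = np.abs(field) ** 2
    inten /= inten.sum()
    return np.sum((inten - target) ** 2)         # quadratic intensity mismatch

x0 = np.random.default_rng(0).uniform(0, 2 * np.pi, N * N)
res = minimize(cost, x0, method="CG", options={"maxiter": 200})
print("final cost:", res.fun)
```

In practice an analytic gradient (rather than the finite differences used here) and a carefully shaped cost are what make the guided optimisation efficient and vortex-free.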
Study on optimal configuration of the grid-connected wind-solar-battery hybrid power system
NASA Astrophysics Data System (ADS)
Ma, Gang; Xu, Guchao; Ju, Rong; Wu, Tiantian
2017-08-01
The capacity allocation of each energy unit in the grid-connected wind-solar-battery hybrid power system is a significant part of system design. In this paper, taking power grid dispatching into account, the research priorities are as follows: (1) we establish mathematical models of each energy unit in the hybrid power system; (2) based on dispatching of the power grid, energy surplus rate, system energy volatility and total cost, we establish an evaluation system for the wind-solar-battery power system, using the number of different devices as a constraint; (3) based on an improved genetic algorithm, we put forward a multi-objective optimisation algorithm to solve the optimal configuration problem in the hybrid power system, so as to achieve high efficiency and economy of the grid-connected hybrid power system. The simulation results show that the grid-connected wind-solar-battery hybrid power system has a higher comprehensive performance and that the method of optimal configuration in this paper is useful and reasonable.
Optimised Iteration in Coupled Monte Carlo - Thermal-Hydraulics Calculations
NASA Astrophysics Data System (ADS)
Hoogenboom, J. Eduard; Dufek, Jan
2014-06-01
This paper describes an optimised iteration scheme for the number of neutron histories and the relaxation factor in successive iterations of coupled Monte Carlo and thermal-hydraulic reactor calculations based on the stochastic iteration method. The scheme results in an increasing number of neutron histories for the Monte Carlo calculation in successive iteration steps and a decreasing relaxation factor for the spatial power distribution to be used as input to the thermal-hydraulics calculation. The theoretical basis is discussed in detail and practical consequences of the scheme are shown, among them a nearly linear increase per iteration in the number of cycles in the Monte Carlo calculation. The scheme is demonstrated for a full PWR type fuel assembly. Results are shown for the axial power distribution during several iteration steps. A few alternative iteration methods are also tested, and it is concluded that the presented iteration method is near optimal.
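A minimal Python sketch of this kind of coupled iteration, assuming a stochastic-approximation style update in which the history count N_n grows each step and the relaxation factor is alpha_n = N_n / sum(N_i); the growth rate and the stubbed Monte Carlo solver are invented for illustration, whereas the paper derives the actual schedule.

```python
# Minimal sketch: growing histories, shrinking relaxation factor (assumed).
import numpy as np

rng = np.random.default_rng(1)
true_power = np.sin(np.linspace(0.1, np.pi - 0.1, 20))  # toy axial profile
true_power /= true_power.sum()

def mc_power_estimate(n_histories):
    # stub for the Monte Carlo solve: noise shrinks as 1/sqrt(N)
    noise = rng.normal(0.0, 1.0 / np.sqrt(n_histories), true_power.size)
    est = np.clip(true_power + noise, 0, None)
    return est / est.sum()

n = 1000
power = mc_power_estimate(n)
total_hist = n
for step in range(10):
    n = int(1.5 * n)              # grow histories per iteration (assumed rate)
    total_hist += n
    alpha = n / total_hist        # decreasing relaxation factor
    power = power + alpha * (mc_power_estimate(n) - power)
    # 'power' would feed the thermal-hydraulics solve in a real coupling
print("relative error:", np.linalg.norm(power - true_power))
```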
Adaptive neuro fuzzy inference system-based power estimation method for CMOS VLSI circuits
NASA Astrophysics Data System (ADS)
Vellingiri, Govindaraj; Jayabalan, Ramesh
2018-03-01
Recent advancements in very large scale integration (VLSI) technologies have made it feasible to integrate millions of transistors on a single chip. This greatly increases circuit complexity, and hence there is a growing need for less tedious and low-cost power estimation techniques. The proposed work employs a Back-Propagation Neural Network (BPNN) and an Adaptive Neuro Fuzzy Inference System (ANFIS), which are capable of estimating power precisely for complementary metal oxide semiconductor (CMOS) VLSI circuits without requiring any knowledge of circuit structure and interconnections. The application of ANFIS to power estimation is relatively new. Power estimation using ANFIS is carried out by creating initial FIS models using hybrid optimisation and back-propagation (BP) techniques employing constant and linear methods. ANFIS with the hybrid optimisation technique employing the linear method produces better results than BPNN, with a testing error that varies from 0% to 0.86%, as it takes the initial fuzzy model and tunes it by means of a hybrid technique combining gradient descent BP and mean least-squares optimisation algorithms. ANFIS is thus well suited to the power estimation application, with a low RMSE of 0.0002075 and a high coefficient of determination (R) of 0.99961.
hydroPSO: A Versatile Particle Swarm Optimisation R Package for Calibration of Environmental Models
NASA Astrophysics Data System (ADS)
Zambrano-Bigiarini, M.; Rojas, R.
2012-04-01
Particle Swarm Optimisation (PSO) is a recent and powerful population-based stochastic optimisation technique inspired by the social behaviour of bird flocking, which shares similarities with other evolutionary techniques such as Genetic Algorithms (GA). In PSO, however, each individual of the population, known as a particle in PSO terminology, adjusts its flying trajectory on the multi-dimensional search space according to its own experience (best-known personal position) and that of its neighbours in the swarm (best-known local position). PSO has recently received a surge of attention given its flexibility, ease of programming, low memory and CPU requirements, and efficiency. Despite these advantages, PSO may still get trapped in sub-optimal solutions, or suffer from swarm explosion or premature convergence. Thus, the development of enhancements to the "canonical" PSO is an active area of research. To date, several modifications to the canonical PSO have been proposed in the literature, resulting in a large and dispersed collection of codes and algorithms which might well be used for similar if not identical purposes. In this work we present hydroPSO, a platform-independent R package implementing several enhancements to the canonical PSO that we consider of utmost importance to bring this technique to the attention of a broader community of scientists and practitioners. hydroPSO is model-independent, allowing the user to interface any model code with the calibration engine without having to invest considerable effort in customizing PSO to a new calibration problem. Some of the controlling options to fine-tune hydroPSO are: four alternative topologies, several types of inertia weight, time-variant acceleration coefficients, time-variant maximum velocity, regrouping of particles when premature convergence is detected, different types of boundary conditions and many others. Additionally, hydroPSO implements recent PSO variants such as: Improved Particle Swarm Optimisation (IPSO), Fully Informed Particle Swarm (FIPS), and weighted FIPS (wFIPS). Finally, an advanced sensitivity analysis using the Latin Hypercube One-At-a-Time (LH-OAT) method and user-friendly plotting summaries facilitate the interpretation and assessment of the calibration/optimisation results. We validate hydroPSO against the standard PSO algorithm (SPSO-2007) employing five test functions commonly used to assess the performance of optimisation algorithms. Additionally, we illustrate how the performance of the optimisation/calibration engine is boosted by using several of the fine-tuning options included in hydroPSO. Finally, we show how to interface SWAT-2005 with hydroPSO to calibrate a semi-distributed hydrological model for the Ega River basin in Spain, and how to interface MODFLOW-2000 and hydroPSO to calibrate a groundwater flow model for the regional aquifer of the Pampa del Tamarugal in Chile. We limit the applications of hydroPSO to case studies dealing with surface water and groundwater models, as these two are the authors' areas of expertise. However, given the flexibility of hydroPSO, we believe this package can be applied to any model code requiring some form of parameter estimation.
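For readers unfamiliar with the canonical algorithm that hydroPSO extends, the following Python sketch shows the basic velocity and position updates; the inertia weight, acceleration coefficients and sphere test function are common defaults assumed here, not hydroPSO's code (which is an R package).

```python
# Minimal sketch: canonical global-best PSO on the sphere function (assumed).
import numpy as np

rng = np.random.default_rng(0)

def f(x):                          # objective to minimise (sphere test function)
    return np.sum(x**2, axis=1)

n, d, w, c1, c2 = 30, 5, 0.72, 1.49, 1.49
x = rng.uniform(-5, 5, (n, d))
v = np.zeros((n, d))
pbest, pval = x.copy(), f(x)
g = pbest[pval.argmin()].copy()    # global best

for _ in range(200):
    r1, r2 = rng.random((n, d)), rng.random((n, d))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update
    x = x + v                                               # position update
    val = f(x)
    improved = val < pval
    pbest[improved], pval[improved] = x[improved], val[improved]
    g = pbest[pval.argmin()].copy()
print("best value:", pval.min())
```

The enhancements listed above (topologies, time-variant coefficients, regrouping) all modify this basic loop.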
McCoull, William; Addie, Matthew S; Birch, Alan M; Birtles, Susan; Buckett, Linda K; Butlin, Roger J; Bowker, Suzanne S; Boyd, Scott; Chapman, Stephen; Davies, Robert D M; Donald, Craig S; Green, Clive P; Jenner, Chloe; Kemmitt, Paul D; Leach, Andrew G; Moody, Graeme C; Gutierrez, Pablo Morentin; Newcombe, Nicholas J; Nowak, Thorsten; Packer, Martin J; Plowright, Alleyn T; Revill, John; Schofield, Paul; Sheldon, Chris; Stokes, Steve; Turnbull, Andrew V; Wang, Steven J Y; Whalley, David P; Wood, J Matthew
2012-06-15
A novel series of DGAT-1 inhibitors was discovered from an oxadiazole amide high throughput screening (HTS) hit. Optimisation of potency and ligand lipophilicity efficiency (LLE) resulted in a carboxylic acid containing clinical candidate 53 (AZD3988), which demonstrated excellent DGAT-1 potency (0.6 nM), good pharmacokinetics and pre-clinical in vivo efficacy that could be rationalised through a PK/PD relationship. Copyright © 2012 Elsevier Ltd. All rights reserved.
Chatzistergos, Panagiotis E; Naemi, Roozbeh; Healy, Aoife; Gerth, Peter; Chockalingam, Nachiappan
2017-08-01
Current selection of cushioning materials for therapeutic footwear and orthoses is based on empirical and anecdotal evidence. The aim of this investigation is to assess the biomechanical properties of carefully selected cushioning materials and to establish the basis for patient-specific material optimisation. For this purpose, bespoke cushioning materials with qualitatively similar mechanical behaviour but different stiffness were produced. Healthy volunteers were asked to stand and walk on materials with varying stiffness and their capacity for pressure reduction was assessed. Mechanical testing using a surrogate heel model was employed to investigate the effect of loading on optimum stiffness. Results indicated that optimising the stiffness of cushioning materials improved pressure reduction during standing and walking by at least 16 and 19% respectively. Moreover, the optimum stiffness was strongly correlated to body mass (BM) and body mass index (BMI), with stiffer materials needed in the case of people with higher BM or BMI. Mechanical testing confirmed that optimum stiffness increases with the magnitude of compressive loading. For the first time, this study provides quantitative data to support the importance of stiffness optimisation in cushioning materials and sets the basis for methods to inform optimum material selection in the clinic.
NASA Astrophysics Data System (ADS)
Astley, R. J.; Sugimoto, R.; Mustafi, P.
2011-08-01
Novel techniques are presented to reduce noise from turbofan aircraft engines by optimising the acoustic treatment in engine ducts. The application of Computational Aero-Acoustics (CAA) to predict acoustic propagation and absorption in turbofan ducts is reviewed and a critical assessment of performance indicates that validated and accurate techniques are now available for realistic engine predictions. A procedure for integrating CAA methods with state of the art optimisation techniques is proposed in the remainder of the article. This is achieved by embedding advanced computational methods for noise prediction within automated and semi-automated optimisation schemes. Two different strategies are described and applied to realistic nacelle geometries and fan sources to demonstrate the feasibility of this approach for industry scale problems.
Topology optimisation for natural convection problems
NASA Astrophysics Data System (ADS)
Alexandersen, Joe; Aage, Niels; Andreasen, Casper Schousboe; Sigmund, Ole
2014-12-01
This paper demonstrates the application of the density-based topology optimisation approach for the design of heat sinks and micropumps based on natural convection effects. The problems are modelled under the assumptions of steady-state laminar flow using the incompressible Navier-Stokes equations coupled to the convection-diffusion equation through the Boussinesq approximation. In order to facilitate topology optimisation, the Brinkman approach is taken to penalise velocities inside the solid domain and the effective thermal conductivity is interpolated in order to accommodate differences in thermal conductivity of the solid and fluid phases. The governing equations are discretised using stabilised finite elements and topology optimisation is performed for two different problems using discrete adjoint sensitivity analysis. The study shows that topology optimisation is a viable approach for designing heat sink geometries cooled by natural convection and micropumps powered by natural convection.
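A hedged sketch of the penalised governing equations implied by the abstract, in notation assumed here rather than taken from the paper: a design field gamma in [0,1] controls a Brinkman friction term (large in solid, vanishing in fluid) and a RAMP-style interpolation of the effective conductivity, with buoyancy entering through the Boussinesq term.

```latex
% Assumed notation: \gamma = 1 fluid, \gamma = 0 solid; q is a convexity factor.
\begin{align}
\rho\,(\mathbf{u}\cdot\nabla)\mathbf{u} &= -\nabla p + \mu\,\nabla^{2}\mathbf{u}
  - \alpha(\gamma)\,\mathbf{u} + \rho\,\beta\,(T - T_{0})\,\mathbf{g},
  & \nabla\cdot\mathbf{u} &= 0,\\
\rho c_{p}\,\mathbf{u}\cdot\nabla T &= \nabla\cdot\big(k(\gamma)\,\nabla T\big),
  & \alpha(\gamma) &= \alpha_{\max}\,\frac{1-\gamma}{1+q\,\gamma},
  \quad k(\gamma) = k_{f} + (k_{s}-k_{f})\,\frac{1-\gamma}{1+q\,\gamma}.
\end{align}
```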
NASA Astrophysics Data System (ADS)
Lewis, N. J.; Anderson, P. I.; Gao, Y.; Robinson, F.
2018-04-01
This paper reports the development of a measurement probe which couples local flux density measurements obtained using the needle probe method with the local magnetising field attained via a Hall effect sensor. This determines the variation in magnetic properties, including power loss and permeability, at increasing distances from the punched edge of 2.4% and 3.2% Si non-oriented electrical steel samples. Improvements in the characterisation of the magnetic properties of electrical steels would aid in optimising efficiency in the design of electric machines.
Optimisation of the hybrid renewable energy system by HOMER, PSO and CPSO for the study area
NASA Astrophysics Data System (ADS)
Khare, Vikas; Nema, Savita; Baredar, Prashant
2017-04-01
This study is based on simulation and optimisation of the renewable energy system of the police control room at Sagar in central India. To analyse this hybrid system, the meteorological data of solar insolation and hourly wind speeds of Sagar in central India (longitude 78°45′ and latitude 23°50′) have been considered. The pattern of load consumption is studied and suitably modelled for optimisation of the hybrid energy system using the HOMER software. The results are compared with those of the particle swarm optimisation and the chaotic particle swarm optimisation algorithms. The use of these two algorithms to optimise the hybrid system leads to higher quality results with faster convergence. Based on the optimisation results, it has been found that replacing conventional energy sources by the solar-wind hybrid renewable energy system is a feasible solution for the distribution of electric power as a stand-alone application at the police control room. This system is more environmentally friendly than the conventional diesel generator, and fuel costs are reduced by approximately 70-80% compared with the conventional diesel generator.
Hernández-Sancho, F; Molinos-Senante, M; Sala-Garrido, R
2011-12-01
Efficiency and productivity are important measures for identifying best practice in businesses and optimising resource-use. This study analyses how these two measures change across the period 2003-2008 for 196 wastewater treatment plants (WWTPs) in Spain, by using the benchmarking methods of Data Envelopment Analysis and the Malmquist Productivity Index. To identify which variables contribute to the sustainability of the WWTPs, differences in efficiency scores and productivity indices for external factors are also investigated. Our results indicate that both efficiency and productivity decreased over the five years. We verify that the productivity drop is primarily explained by technical change. Furthermore, certain external variables affected WWTP efficiency, including plant size, treatment technology and energy consumption. However, plants with low energy consumption are the only ones which improve their productivity. Finally, the benchmarking analyses proved to be useful as management tools in the wastewater sector, by providing vital information for improving the sustainability of plants.
A Hybrid Genetic Programming Algorithm for Automated Design of Dispatching Rules.
Nguyen, Su; Mei, Yi; Xue, Bing; Zhang, Mengjie
2018-06-04
Designing effective dispatching rules for production systems is a difficult and time-consuming task if done manually. In the last decade, the growth of computing power, advanced machine learning and optimisation techniques has made the automated design of dispatching rules possible, and automatically discovered rules are competitive with, or outperform, existing rules developed by researchers. Genetic programming is one of the most popular approaches to discovering dispatching rules in the literature, especially for complex production systems. However, the large heuristic search space may prevent genetic programming from finding near-optimal dispatching rules. This paper develops a new hybrid genetic programming algorithm for dynamic job shop scheduling based on a new representation, a new local search heuristic and efficient fitness evaluators. Experiments show that the new method is effective regarding the quality of evolved rules. Moreover, the evolved rules are also significantly smaller and contain more relevant attributes.
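To make the idea of a dispatching rule concrete, the sketch below applies a hand-written priority expression (standing in for an evolved GP tree) to a queue of jobs on a single machine; the job data and the rule itself are invented for illustration and are not from the paper.

```python
# Minimal sketch: applying a dispatching rule at each machine-idle event.
jobs = [  # (processing_time, due_date, work_remaining) - invented data
    (4, 10, 9), (2, 6, 2), (6, 20, 14), (3, 8, 5),
]

def priority(pt, dd, wr, now):
    # stand-in for an evolved rule; GP would evolve this expression tree
    return 2 * pt + max(dd - now, 0) - 0.5 * wr

now, tardiness = 0, 0
queue = list(jobs)
while queue:
    queue.sort(key=lambda j: priority(*j, now))  # lowest score dispatched first
    pt, dd, wr = queue.pop(0)
    now += pt
    tardiness += max(0, now - dd)
print("total tardiness:", tardiness)
```

Fitness evaluation in GP-based design amounts to running many such simulations, which is why efficient evaluators matter.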
Optimisation Of Cutting Parameters Of Composite Material Laser Cutting Process By Taguchi Method
NASA Astrophysics Data System (ADS)
Lokesh, S.; Niresh, J.; Neelakrishnan, S.; Rahul, S. P. Deepak
2018-03-01
The aim of this work is to develop a laser cutting process model that can predict the relationship between the process input parameters and the resultant surface roughness and kerf width characteristics. The research is based on Design of Experiments (DOE) analysis, and Response Surface Methodology (RSM) is used; it is one of the most practical and effective techniques for developing a process model. Although RSM has been used for the optimisation of laser processes, this research investigates laser cutting of materials such as composite wood (veneer) to determine the best laser cutting conditions using RSM. The input parameters evaluated are focal length, power supply and cutting speed; the output responses are kerf width, surface roughness and temperature. To efficiently optimise and customise the kerf width and surface roughness characteristics, a laser cutting process model using a Taguchi L9 orthogonal array methodology is proposed.
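A minimal Python sketch of the Taguchi L9 analysis mentioned above: the standard L9 assignment of three three-level factors is used, but the response values and the smaller-the-better signal-to-noise choice (appropriate for kerf width or roughness) are illustrative assumptions, not the paper's data.

```python
# Minimal sketch: Taguchi L9 main-effects analysis on invented responses.
import numpy as np

L9 = np.array([  # levels (0,1,2) for focal length, power, cutting speed
    [0, 0, 0], [0, 1, 1], [0, 2, 2],
    [1, 0, 1], [1, 1, 2], [1, 2, 0],
    [2, 0, 2], [2, 1, 0], [2, 2, 1],
])
y = np.array([0.42, 0.38, 0.35, 0.40, 0.33, 0.45,
              0.37, 0.44, 0.36])        # e.g. kerf width in mm (invented)

sn = -10 * np.log10(y**2)               # smaller-the-better S/N ratio
for factor in range(3):
    means = [sn[L9[:, factor] == lvl].mean() for lvl in range(3)]
    print(f"factor {factor}: mean S/N per level = {np.round(means, 2)}")
# the best level for each factor is the one with the highest mean S/N
```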
NASA Astrophysics Data System (ADS)
Li, Haifeng; Zhu, Qing; Yang, Xiaoxia; Xu, Linrong
2012-10-01
Typical characteristics of remote sensing applications are concurrent tasks, such as those found in disaster rapid response. The existing composition approach for geographical information processing service chains searches for an optimal solution in what can be deemed a "selfish" way. This leads to conflict amongst concurrent tasks and decreases the performance of all service chains. In this study, a non-cooperative game-based mathematical model for analysing the competitive relationships between tasks is proposed. A best response function is used to ensure that each task maintains utility optimisation by considering the composition strategies of other tasks and quantifying the conflicts between tasks. Based on this, an iterative algorithm that converges to a Nash equilibrium is presented, the aim being to provide good convergence and to maximise the utility of all tasks under concurrent task conditions. Theoretical analyses and experiments showed that the newly proposed method, compared with existing service composition methods, has better practical utility in all tasks.
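The iterative best-response idea can be sketched in a few lines of Python; the quadratic cost model for two tasks sharing services below is invented for illustration and is not the paper's utility function.

```python
# Minimal sketch: iterated best response converging to a Nash equilibrium.
import numpy as np

xs = np.linspace(0, 1, 201)   # candidate allocations on the shared services

def best_response(other_load, congestion_weight):
    # each task minimises its own cost: completion cost plus congestion
    # with the load the other task places on the shared services (assumed)
    cost = (1 - xs)**2 + congestion_weight * xs * other_load
    return xs[np.argmin(cost)]

x1, x2 = 0.5, 0.5
for it in range(100):
    x1_new = best_response(x2, 2.0)   # tasks weight congestion differently
    x2_new = best_response(x1, 1.0)
    if abs(x1_new - x1) + abs(x2_new - x2) < 1e-6:
        break
    x1, x2 = x1_new, x2_new
print(f"equilibrium after {it} iterations: x1={x1:.2f}, x2={x2:.2f}")
```

At the fixed point neither task can improve its cost unilaterally, which is precisely the Nash condition the paper's algorithm targets.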
Zhang, Fan; Yang, Yi; Su, Ping; Guo, Zhenku
2009-01-01
Euonymus alatus (Thunb.) has been used as a traditional Chinese medicine for several thousand years. Conventional methods for the extraction of rutin and quercetin from E. alatus, including solvent extraction, Soxhlet extraction and heat reflux extraction, are characterised by long extraction times and the consumption of large amounts of solvent. The objective was to develop a simple and rapid method for the extraction of rutin and quercetin from the stalks of Euonymus alatus (Thunb.) Sieb using a microwave-assisted extraction (MAE) technique. MAE experiments were performed with a multimode microwave extraction system. The experimental variables that affect the MAE process, such as the concentration of the ethanol solution, extractant volume, microwave power and extraction time, were optimised. Yields were determined by HPLC, and the results were compared with those obtained by classical Soxhlet and ultrasonic-assisted extraction (UAE). The optimised conditions for MAE of rutin and quercetin were: 50% (v/v) ethanol solution as solvent, an extractant volume of 40 mL, a microwave power of 170 W and an irradiation time of 6 min. Compared with Soxhlet extraction and ultrasonic extraction, microwave extraction is a rapid method with a higher yield and lower solvent consumption. The results showed that MAE can be used as an efficient and rapid method for the extraction of active components from plants.
An intelligent optimisation algorithm for large-scale structural design
NASA Astrophysics Data System (ADS)
Dominique, Stephane
The implementation of an automated decision support system in the field of design and structural optimisation can give a significant advantage to any industry working on mechanical designs. By providing solution ideas to a designer, or by upgrading existing design solutions while the designer is not at work, such a system may reduce the project cycle time or allow more time to produce a better design. This thesis presents a new approach to automating a design process based on Case-Based Reasoning (CBR), in combination with a new genetic algorithm named the Genetic Algorithm with Territorial core Evolution (GATE). This approach was developed in order to reduce the operating cost of the process. However, as the system implementation cost is quite high, the approach is better suited to large-scale design problems, and particularly to design problems that the designer plans to solve for many different specification sets. First, the CBR process uses a databank filled with every known solution to similar design problems, and the solutions closest to the current problem in terms of specifications are selected. Then, during the adaptation phase, an artificial neural network (ANN) interpolates amongst the known solutions to produce an additional solution to the current problem, using the current specifications as inputs. Each solution produced and selected by the CBR is then used to initialise the population of an island of the genetic algorithm, which optimises the solutions further during the refinement phase. Using progressive refinement, the algorithm starts with only the most important variables for the problem; as the optimisation progresses, the remaining variables are gradually introduced, layer by layer. The genetic algorithm used is a new algorithm created during this thesis specifically to solve optimisation problems from the field of mechanical device structural design. The algorithm, named GATE, is essentially a real-number genetic algorithm that prevents new individuals from being born too close to previously evaluated solutions. The restricted area becomes smaller or larger during the optimisation to allow global or local search when necessary. A new search operator named the Substitution Operator is also incorporated in GATE; this operator allows an ANN surrogate model to guide the algorithm toward the most promising areas of the design space. The suggested CBR approach and GATE were tested on several simple test problems, as well as on the industrial problem of designing a gas turbine engine rotor disc. The results were compared with those obtained for the same problems by many other popular optimisation algorithms, such as (depending on the problem) gradient algorithms, a binary genetic algorithm, a real-number genetic algorithm, a genetic algorithm using multiple-parent crossovers, a differential evolution genetic algorithm, the Hooke & Jeeves generalised pattern search method and POINTER from the software I-SIGHT 3.5. The results show that GATE is quite competitive, giving the best results for 5 of the 6 constrained optimisation problems. GATE also provided the best results of all on problems produced by a Maximum Set Gaussian landscape generator. Finally, GATE provided a disc 4.3% lighter than the best other tested algorithm (POINTER) for the gas turbine engine rotor disc problem. One drawback of GATE is its lower efficiency on highly multimodal unconstrained problems, for which it gave quite poor results relative to its implementation cost.
To conclude, according to the preliminary results obtained during this thesis, the suggested CBR process combined with GATE seems to be a very good candidate for automating and accelerating the structural design of mechanical devices, potentially reducing the cost of industrial preliminary design processes significantly.
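A minimal Python sketch of GATE's territorial idea as described above: a real-coded GA that rejects offspring born within an exclusion radius of previously evaluated solutions. The radius schedule, blend crossover and toy objective are assumed for illustration; the actual algorithm also includes the island model and the ANN-guided Substitution Operator.

```python
# Minimal sketch: real-coded GA with a territorial exclusion radius (assumed).
import numpy as np

rng = np.random.default_rng(0)

def f(x):                                   # toy objective (minimise)
    return float(np.sum(x**2))

POP, DIM = 20, 4
pop = [rng.uniform(-5, 5, DIM) for _ in range(POP)]
archive = []                                # all previously evaluated points
radius = 1.0

for gen in range(60):
    pop.sort(key=f)
    archive.extend(pop)
    radius *= 0.95                          # shrinking territory: global -> local
    children, attempts = [], 0
    while len(children) < POP and attempts < 2000:
        attempts += 1
        i, j = rng.integers(0, POP // 2, size=2)   # parents from the best half
        child = 0.5 * (pop[i] + pop[j]) + rng.normal(0, 0.3, DIM)
        # territorial core: reject births too close to evaluated solutions
        if all(np.linalg.norm(child - s) > radius for s in archive[-300:]):
            children.append(child)
    pop = children if len(children) == POP else pop

print("best objective found:", min(f(s) for s in archive))
```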
Savareear, Benjamin; Lizak, Radoslaw; Brokl, Michał; Wright, Chris; Liu, Chuan; Focant, Jean-Francois
2017-10-20
A method involving headspace solid-phase microextraction (HS-SPME) and comprehensive two-dimensional gas chromatography coupled to time-of-flight mass spectrometry (GC×GC-TOFMS) was developed and optimised to elucidate the volatile composition of the particulate phase fraction of aerosol produced by tobacco heating products (THPs). Three SPME fiber types were studied in terms of extraction capacity and precision; divinylbenzene polydimethylsiloxane appeared to be the most efficient coating for these measurements. A central composite design of experiments was utilised for the optimisation of the extraction conditions. Qualitative and semi-quantitative analysis of the headspace above THP aerosol condensate was carried out using the optimised extraction conditions. Semi-quantitative analyses of detected constituents were performed by assuming that their relative response factors to the closest-eluting internal standard were equal to 1. Using deconvoluted mass spectral data (library similarity and reverse match >750) and linear retention indices (match window of ±15 index units), 205 peaks were assigned to individual compounds, 82 of which (including 43 substances previously reported to be present in tobacco) have not been reported previously in tobacco aerosol. The major volatile fraction of the headspace contained ketones, alcohols, aldehydes, alicyclic hydrocarbons, alkenes and alkanes. The method was further applied to compare the volatiles from the particulate phase of THP aerosol with those of reference cigarette smoke, and showed that the THP produced a less complex chemical mixture. This new method showed good efficiency and precision for the peak areas and peak numbers from the volatile fraction of the aerosol particulate phase for both THP and reference cigarettes. Copyright © 2017 Elsevier B.V. All rights reserved.
Choosing the appropriate forecasting model for predictive parameter control.
Aleti, Aldeida; Moser, Irene; Meedeniya, Indika; Grunske, Lars
2014-01-01
All commonly used stochastic optimisation algorithms have to be parameterised to perform effectively. Adaptive parameter control (APC) is an effective method used for this purpose. APC repeatedly adjusts parameter values during the optimisation process for optimal algorithm performance; the assignment of parameter values for a given iteration is based on previously measured performance. In recent research, time series prediction has been proposed as a method of projecting the probabilities to use for parameter value selection. In this work, we examine the suitability of a variety of prediction methods for projecting future parameter performance based on previous data. Each of the considered prediction methods makes assumptions that the time series data must conform to if the method is to provide accurate projections. Looking specifically at parameters of evolutionary algorithms (EAs), we find that all standard EA parameters, with the exception of population size, conform largely to the assumptions made by the considered prediction methods. Evaluating the performance of these prediction methods, we find that linear regression provides the best results by a very small and statistically insignificant margin. Regardless of the prediction method, predictive parameter control outperforms state-of-the-art parameter control methods when the performance data adheres to the assumptions made by the prediction method. When a parameter's performance data does not adhere to the assumptions made by the forecasting method, the use of prediction does not have a notable adverse impact on the algorithm's performance.
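A minimal Python sketch of predictive parameter control with linear regression, the best-performing forecaster reported above: each operator's recent rewards are extrapolated one step ahead and selection probabilities follow the predictions. The two-operator setup and the drifting reward model are invented for illustration.

```python
# Minimal sketch: linear-regression forecasts drive operator selection.
import numpy as np

rng = np.random.default_rng(3)
history = {op: [] for op in ("mut_low", "mut_high")}

def observed_reward(op, t):
    # invented non-stationary reward: which mutation rate helps drifts over time
    base = 0.8 - 0.01 * t if op == "mut_low" else 0.2 + 0.01 * t
    return max(base, 0.0) + rng.normal(0, 0.05)

probs = {op: 0.5 for op in history}
for t in range(60):
    op = rng.choice(list(probs), p=list(probs.values()))
    history[op].append((t, observed_reward(op, t)))
    preds = {}
    for o, h in history.items():
        if len(h) >= 3:
            ts, rs = map(np.array, zip(*h[-10:]))
            slope, icept = np.polyfit(ts, rs, 1)     # linear regression forecast
            preds[o] = max(slope * (t + 1) + icept, 1e-3)
        else:
            preds[o] = 0.5                           # no data yet: neutral guess
    z = sum(preds.values())
    probs = {o: v / z for o, v in preds.items()}
print("final selection probabilities:", probs)
```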
Strategies for efficient resolution analysis in full-waveform inversion
NASA Astrophysics Data System (ADS)
Fichtner, A.; van Leeuwen, T.; Trampert, J.
2016-12-01
Full-waveform inversion is developing into a standard method in the seismological toolbox. It combines numerical wave propagation for heterogeneous media with adjoint techniques in order to improve tomographic resolution. However, resolution becomes increasingly difficult to quantify because of the enormous computational requirements. Here we present two families of methods that can be used for efficient resolution analysis in full-waveform inversion. They are based on the targeted extraction of resolution proxies from the Hessian matrix, which is too large to store and to compute explicitly. Fourier methods rest on the application of the Hessian to Earth models with harmonic oscillations. This yields the Fourier spectrum of the Hessian for few selected wave numbers, from which we can extract properties of the tomographic point-spread function for any point in space. Random probing methods use uncorrelated, random test models instead of harmonic oscillations. Auto-correlating the Hessian-model applications for sufficiently many test models also characterises the point-spread function. Both Fourier and random probing methods provide a rich collection of resolution proxies. These include position- and direction-dependent resolution lengths, and the volume of point-spread functions as indicator of amplitude recovery and inter-parameter trade-offs. The computational requirements of these methods are equivalent to approximately 7 conjugate-gradient iterations in full-waveform inversion. This is significantly less than the optimisation itself, which may require tens to hundreds of iterations to reach convergence. In addition to the theoretical foundations of the Fourier and random probing methods, we show various illustrative examples from real-data full-waveform inversion for crustal and mantle structure.
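The random probing family can be sketched compactly: if the only available operation is a Hessian-vector product, correlating v with Hv over random test models estimates the diagonal of the Hessian, one simple point-spread proxy. The dense toy Hessian below stands in for the matrix-free wave-equation Hessian of real FWI; sizes and the probe count are illustrative.

```python
# Minimal sketch: Hutchinson-style diagonal estimation by random probing.
import numpy as np

rng = np.random.default_rng(0)
n = 200
A = rng.normal(size=(n, n)) / np.sqrt(n)
H = A.T @ A                               # SPD stand-in for the FWI Hessian
hvp = lambda v: H @ v                     # all we assume: H can be applied

K = 30                                    # a handful of extra "solves"
est = np.zeros(n)
for _ in range(K):
    v = rng.choice([-1.0, 1.0], size=n)   # Rademacher random test model
    est += v * hvp(v)                     # E[v_i (Hv)_i] = H_ii
est /= K

print("mean relative error of diagonal estimate:",
      np.mean(np.abs(est - np.diag(H)) / np.diag(H)))
```

Each probe costs one Hessian application, which is why the paper can bound the whole analysis at roughly the cost of a few conjugate-gradient iterations.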
Dual ant colony operational modal analysis parameter estimation method
NASA Astrophysics Data System (ADS)
Sitarz, Piotr; Powałka, Bartosz
2018-01-01
Operational Modal Analysis (OMA) is a common technique used to examine the dynamic properties of a system. Contrary to experimental modal analysis, the input signal is generated in the object's ambient environment. Operational modal analysis mainly aims at determining the number of pole pairs and at estimating the modal parameters. Many methods are used for parameter identification; some operate in the time domain, others in the frequency domain. The former use correlation functions, the latter use spectral density functions. While some methods require the user to select poles from a stabilisation diagram, others try to automate the selection process. The dual ant colony operational modal analysis parameter estimation method (DAC-OMA) presents a new approach to the problem, avoiding the issues involved in the stabilisation diagram. The presented algorithm is fully automated. It uses deterministic methods to define the intervals of the estimated parameters, thus reducing the problem to an optimisation task which is conducted with dedicated software based on an ant colony optimisation algorithm. The combination of deterministic methods restricting the parameter intervals and artificial intelligence yields very good results, also for closely spaced modes and significantly varied mode shapes within one measurement point.
Static beam tomotherapy as an optimisation method in whole-breast radiation therapy (WBRT).
Squires, Matthew; Hu, Yunfei; Byrne, Mikel; Archibald-Heeren, Ben; Cheers, Sonja; Bosco, Bruno; Teh, Amy; Fong, Andrew
2017-12-01
TomoTherapy (Accuray, Sunnyvale, CA) has recently introduced a static form of tomotherapy: TomoDirect™ (TD). This study aimed to evaluate TD against a contemporary intensity modulated radiation therapy (IMRT) alternative through comparison of target and organ at risk (OAR) doses in breast cancer cases. A secondary objective was to evaluate planning efficiency by measuring optimisation times. Treatment plans of 27 whole-breast radiation therapy (WBRT) patients optimised with a tangential hybrid IMRT technique were replanned using TD. Parameters included a dynamic field width of 2.5 cm, a pitch of 0.251 and a modulation factor of 2.000; 50 Gy in 25 fractions was prescribed and planning time recorded. The planning metrics used in analysis were ICRU based, with the mean PTV minimum (D99) used as the point of comparison. Both modalities met ICRU50 target heterogeneity objectives (TD D99 = 48.0 Gy vs. IMRT = 48.1 Gy, P = 0.26; TD D1 = 53.5 Gy vs. IMRT = 53.0 Gy, P = 0.02; homogeneity index TD = 0.11 vs. IMRT = 0.10, P = 0.03), with TD plans generating higher median doses (TD D50 = 51.1 Gy vs. IMRT = 50.9 Gy, P = 0.03). No significant difference was found in prescription dose coverage (TD V50 = 85.5% vs. IMRT = 82.0%, P = 0.09). TD plans produced a statistically significant reduction in ipsilateral lung V5 (TD V5 = 23.2% vs. IMRT = 27.2%, P = 0.04), while other queried OARs remained comparable (TD ipsilateral lung V20 = 13.2% vs. IMRT = 14.6%, P = 0.30; TD heart V5 = 2.7% vs. IMRT = 2.8%, P = 0.47; TD heart V10 = 1.7% vs. IMRT = 1.8%, P = 0.44). TD reduced planning time considerably (TD = 9.8 min vs. IMRT = 27.6 min, P < 0.01), saving an average planning time of 17.8 min per patient. TD represents a suitable WBRT treatment approach both in terms of plan quality metrics and planning efficiency. © 2017 The Authors. Journal of Medical Radiation Sciences published by John Wiley & Sons Australia, Ltd on behalf of Australian Society of Medical Imaging and Radiation Therapy and New Zealand Institute of Medical Radiation Technology.
A novel Laser Ion Mobility Spectrometer
NASA Astrophysics Data System (ADS)
Göbel, J.; Kessler, M.; Langmeier, A.
2009-05-01
IMS is a well-known technology within the range of security-based applications. Its main advantages lie in the simplicity of measurement, along with a fast and sensitive detection method. Contemporary technology often fails due to interfering substances, in conjunction with saturation effects and a low dynamic detection range. High-throughput facilities, such as airports, require the analysis of many samples at low detection limits within a very short timeframe, and high detection reliability is a requirement for safe and secure operation. In the present work we developed a laser-based ion mobility sensor which shows several advantages over known IMS sensor technology. The goal of our research was to increase the sensitivity beyond the range of 63Ni-based instruments. This was achieved with an optimised geometric drift tube design and a pulsed UV laser system at an efficient intensity. In this intensity range multi-photon ionisation is possible, which leads to higher selectivity in the ion-formation process itself. After high-speed capturing of detection samples, a custom-designed pattern recognition software toolbox provides reliable auto-detection capability with a learning algorithm and a graphical user interface.
Fabrication of Organic Radar Absorbing Materials: A Report on the TIF Project
2005-05-01
thickness, permittivity and permeability. The ability to measure the permittivity and permeability is an essential requirement for designing an optimised ... absorber. Good optimisation codes are also required in order to achieve the best possible absorber designs. In this report, the results from a ... through measurement of their conductivity and permittivity at microwave frequencies. Methods were then developed for optimising the design of
Sanna, Aimaro; Steel, Luc; Maroto-Valer, M Mercedes
2017-03-15
The use of NaHSO4 to leach Mg from lizardite-rich serpentinite (in the form of MgSO4) and the carbonation of CO2 (captured in the form of Na2CO3 using NaOH) to form MgCO3 and Na2SO4 was investigated. Unlike ammonium sulphate, sodium sulphate can be separated via precipitation during the recycling step, avoiding the energy-intensive evaporation process required in NH4-based processes. To determine the effectiveness of the NaHSO4/NaOH process when applied to lizardite, the dissolution and carbonation steps were optimised using a UK lizardite-rich serpentine. Temperature, solid/liquid ratio, particle size, concentration and molar ratio were evaluated. An optimal dissolution efficiency of 69.6% was achieved over 3 h at 100 °C using 1.4 M sodium bisulphate and 50 g/l serpentine with particle size 75-150 μm. An optimal carbonation efficiency of 95.4% was achieved over 30 min at 90 °C and a 1:1 magnesium:sodium carbonate molar ratio using non-synthesised solution. The CO2 sequestration capacity was 223.6 g carbon dioxide/kg serpentine (66.4% in terms of Mg bonded to hydromagnesite), which is comparable with those obtained using ammonium-based processes. Therefore, lizardite-rich serpentinites represent a valuable resource for the NaHSO4/NaOH-based pH swing mineralisation process. Copyright © 2016 Elsevier Ltd. All rights reserved.
Dudek-Godeau, Dorota; Kieszkowska-Grudny, Anna; Kwiatkowska, Katarzyna; Bogusz, Joanna; Wysocki, Mirosław J; Bielska-Lasota, Magdalena
The transformation period in Poland was associated with a set of factors seen as 'socio-economic stress', which unfavourably influenced cancer treatment and slowed the progress of Polish cancer care in the 1990s; these outcomes may still be felt in many aspects of cancer care today. The results of the international EUROCARE and CONCORD studies, based on European data, provide evidence that there is substantial potential for improving the low 5-year survival rates in Poland. Since high survival rates are related to a notably efficient health care system, improving organisation and treatment methods seems to be one of the most important directions of change in the Polish health care system. To this day, cancer care in Poland is based on a network outlined by Professor Koszarowski in the middle of the last century, which is a solid foundation for the contemporary project of the Comprehensive Cancer Care Network (CCCN) proposed in the frame of the CanCon Project. This study analyses the structure of the health care system and the changes introduced within the oncology network in Poland since the beginning of the post-communist socio-economic transformation in 1989. It was conducted based on the CanCon methods, which aim at reviewing the specialist literature and collecting the meaningful experiences of European countries in cancer care, including the main legal regulations. The analysis provided evidence that the political situation and the economic crisis of the transformation period disintegrated cancer care and resulted in low 5-year survival rates. A step towards increasing the efficiency of cancer treatment was the proposal of 'Quick Oncological Therapy', together with one more attempt to organise a CCCN. With this paper the authors contribute to the CanCon Project by exploring, analysing and discussing the cancer network in Poland as an example of existing network-like structures in Europe, and by preparing guidelines for constructing a contemporary CCCN. (1) 'Socio-economic stress' adversely affected the efficiency of oncological treatment, both by reducing safety and by slowing the development of modern oncology. (2) Changing the current system into the contemporary form, the CCCN, could be an important step towards optimising oncological health care in Poland. (3) Introducing mandatory monitoring of organisational changes, using standardised health indicators, could allow assessment of the effectiveness of the implemented solutions and their impact on better prognoses for cancer patients. (4) Optimising the organisation of the health care system is possible only by implementing the necessary legislative corrections.
Distributed optimisation problem with communication delay and external disturbance
NASA Astrophysics Data System (ADS)
Tran, Ngoc-Tu; Xiao, Jiang-Wen; Wang, Yan-Wu; Yang, Wu
2017-12-01
This paper investigates the distributed optimisation problem for multi-agent systems (MASs) with the simultaneous presence of an external disturbance and a communication delay. To solve this problem, a two-step design scheme is introduced. In the first step, based on the internal model principle, an internal model term is constructed to compensate for the disturbance asymptotically. In the second step, a distributed optimisation algorithm is designed to solve the problem for MASs in which the disturbance and the communication delay are present simultaneously. In the proposed algorithm, each agent interacts with its neighbours through the connected topology, and the delay occurs during the information exchange. By utilising a Lyapunov-Krasovskii functional, delay-dependent conditions are derived for both slowly and fast time-varying delays to ensure the convergence of the algorithm to the optimal solution of the optimisation problem. Several numerical simulation examples are provided to illustrate the effectiveness of the theoretical results.
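A minimal Python sketch of consensus-based distributed optimisation (omitting the paper's delay compensation and internal-model disturbance rejection): each agent descends its private quadratic cost while a consensus term pulls neighbours' states together. The ring topology, cost functions and step sizes are assumptions for illustration.

```python
# Minimal sketch: distributed gradient descent with a consensus term.
import numpy as np

targets = np.array([1.0, 3.0, 5.0, 7.0])   # agent i's private cost: (x - t_i)^2
A = np.array([[0, 1, 0, 1],                # ring topology adjacency matrix
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

x = np.zeros(4)                            # each agent's local estimate
alpha, beta = 0.05, 0.2                    # gradient and consensus step sizes
for _ in range(500):
    grad = 2 * (x - targets)               # local gradients (no coordination)
    consensus = A @ x - A.sum(axis=1) * x  # sum_j a_ij (x_j - x_i)
    x = x - alpha * grad + beta * consensus

# with constant steps the agents only approximately agree near the global
# optimum (mean of targets); exact convergence needs diminishing steps or
# gradient-tracking corrections, which the paper's analysis provides
print("agent states:", x.round(3), "| global optimum:", targets.mean())
```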
Medicines optimisation: priorities and challenges.
Kaufman, Gerri
2016-03-23
Medicines optimisation is promoted in a guideline published in 2015 by the National Institute for Health and Care Excellence. Four guiding principles underpin medicines optimisation: aim to understand the patient's experience; ensure evidence-based choice of medicines; ensure medicines use is as safe as possible; and make medicines optimisation part of routine practice. Understanding the patient experience is important to improve adherence to medication regimens. This involves communication, shared decision making and respect for patient preferences. Evidence-based choice of medicines is important for clinical and cost effectiveness. Systems and processes for the reporting of medicines-related safety incidents have to be improved if medicines use is to be as safe as possible. Ensuring safe practice in medicines use when patients are transferred between organisations, and managing the complexities of polypharmacy are imperative. A medicines use review can help to ensure that medicines optimisation forms part of routine practice.
Neffling, Milla-Riina; Lance, Emilie; Meriluoto, Jussi
2010-03-01
Microcystins are cyanobacterial hepatotoxins capable of accumulation into animal tissues. The toxins act by inhibiting specific protein phosphatases and both non-covalent and covalent interactions occur. The 2-methyl-3-methoxy-4-phenylbutyric acid (MMPB) method determines the total, i.e. the sum of free and protein-bound microcystin in tissues. The aim of the method development in this paper was to tackle the problems with the MMPB methodology: the rather laborious workflow and the loss of material during different steps of the method. In the optimised workflow the oxidation recovery was of acceptable level (29-40%), the extraction efficiency good (62-97%), but the signal suppression effect from the matrix remained severe in our system (16-37% signal left). The extraction efficiency for the determination of the free, extractable microcystins, was found to be good, 52-100%, depending on the sample and the toxin variant and concentration. Copyright (c) 2009 Elsevier Ltd. All rights reserved.
The 5C Concept and 5S Principles in Inflammatory Bowel Disease Management
Hibi, Toshifumi; Panaccione, Remo; Katafuchi, Miiko; Yokoyama, Kaoru; Watanabe, Kenji; Matsui, Toshiyuki; Matsumoto, Takayuki; Travis, Simon; Suzuki, Yasuo
2017-01-01
Background and Aims: The international Inflammatory Bowel Disease [IBD] Expert Alliance initiative [2012–2015] served as a platform to define and support areas of best practice in IBD management to help improve outcomes for all patients with IBD. Methods: During the programme, IBD specialists from around the world established by consensus two best practice charters: the 5S Principles and the 5C Concept. Results: The 5S Principles were conceived to provide health care providers with key guidance for improving clinical practice based on best management approaches. They comprise the following categories: Stage the disease; Stratify patients; Set treatment goals; Select appropriate treatment; and Supervise therapy. Optimised management of patients with IBD based on the 5S Principles can be achieved most effectively within an optimised clinical care environment. Guidance on optimising the clinical care setting in IBD management is provided through the 5C Concept, which encompasses: Comprehensive IBD care; Collaboration; Communication; Clinical nurse specialists; and Care pathways. Together, the 5C Concept and 5S Principles provide structured recommendations on organising the clinical care setting and developing best-practice approaches in IBD management. Conclusions: Consideration and application of these two dimensions could help health care providers optimise their IBD centres and collaborate more effectively with their multidisciplinary team colleagues and patients, to provide improved IBD care in daily clinical practice. Ultimately, this could lead to improved outcomes for patients with IBD. PMID:28981622
Coscollà, Clara; Navarro-Olivares, Santiago; Martí, Pedro; Yusà, Vicent
2014-02-01
When attempting to discover the important factors and then optimise a response by tuning these factors, experimental design (design of experiments, DoE) provides a powerful suite of statistical methodology; in method development, DoE identifies significant factors and then optimises a response with respect to them. In this work, a headspace solid-phase micro-extraction (HS-SPME) method combined with gas chromatography tandem mass spectrometry (GC-MS/MS) for the simultaneous determination of six important organotin compounds, namely monobutyltin (MBT), dibutyltin (DBT), tributyltin (TBT), monophenyltin (MPhT), diphenyltin (DPhT) and triphenyltin (TPhT), has been optimised using a statistical design of experiments. The analytical method is based on ethylation with NaBEt4 and simultaneous headspace solid-phase micro-extraction of the derivatised compounds, followed by GC-MS/MS analysis. The main experimental parameters influencing the extraction efficiency selected for optimisation were pre-incubation time, incubation temperature, agitator speed, extraction time, desorption temperature, buffer (pH, concentration and volume), headspace volume, sample salinity, preparation of standards, ultrasonic time and desorption time in the injector. The main factors (excitation voltage, excitation time, ion source temperature, isolation time and electron energy) affecting the GC-IT-MS/MS response were also optimised using the same statistical design of experiments. The proposed method presented good linearity (coefficient of determination R2 > 0.99) and repeatability (1-25%) for all the compounds under study. The accuracy of the method, measured as the average percentage recovery of the compounds in spiked surface and marine waters, was higher than 70% for all compounds studied. Finally, the optimised methodology was applied to real aqueous samples, enabling the simultaneous determination of all compounds under study in surface and marine water samples obtained from the Valencia region (Spain). © 2013 Elsevier B.V. All rights reserved.
Albadr, Musatafa Abbas Abbood; Tiun, Sabrina; Al-Dhief, Fahad Taha; Sammour, Mahmoud A M
2018-01-01
Spoken Language Identification (LID) is the process of determining and classifying natural language from a given content and dataset. Typically, data must be processed to extract useful features to perform LID. Feature extraction for LID is, according to the literature, a mature process: standard features have already been developed using Mel-Frequency Cepstral Coefficients (MFCC), Shifted Delta Cepstral (SDC) coefficients, the Gaussian Mixture Model (GMM) and, most recently, the i-vector based framework. However, the process of learning based on the extracted features remains to be improved (i.e. optimised) to capture all the knowledge embedded in them. The Extreme Learning Machine (ELM) is an effective learning model used to perform classification and regression analysis and is extremely useful for training a single hidden layer neural network. Nevertheless, the learning process of this model is not entirely effective (i.e. optimised) owing to the random selection of weights within the input hidden layer. In this study, the ELM is selected as a learning model for LID based on standard feature extraction. One of the optimisation approaches for the ELM, the Self-Adjusting Extreme Learning Machine (SA-ELM), is selected as the benchmark and improved by altering the selection phase of the optimisation process. The selection process is performed incorporating both the Split-Ratio and K-Tournament methods; the improved SA-ELM is named the Enhanced Self-Adjusting Extreme Learning Machine (ESA-ELM). The results are generated based on LID with datasets created from eight different languages, and show the superior performance of ESA-ELM LID compared with SA-ELM LID, with accuracies of 96.25% and 95.00% respectively.
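For context, a basic ELM, the model that SA-ELM and ESA-ELM optimise, fits in a few lines: random, untrained input-to-hidden weights and a pseudo-inverse solve for the output weights. The synthetic two-class data below are illustrative; the variants discussed above additionally search over candidate hidden layers, e.g. with split-ratio and k-tournament selection, keeping the best network.

```python
# Minimal sketch: a basic Extreme Learning Machine classifier (assumed data).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))                      # synthetic feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(float)          # toy class labels
T = np.stack([1 - y, y], axis=1)                   # one-hot targets

L = 50                                             # hidden neurons
W = rng.normal(size=(8, L))                        # random input weights,
b = rng.normal(size=L)                             # never trained in an ELM
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))             # hidden layer output
beta = np.linalg.pinv(H) @ T                       # least-squares output weights

pred = (H @ beta).argmax(axis=1)
print("training accuracy:", (pred == y).mean())
```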
Application of Three Existing Stope Boundary Optimisation Methods in an Operating Underground Mine
NASA Astrophysics Data System (ADS)
Erdogan, Gamze; Yavuz, Mahmut
2017-12-01
The underground mine planning and design optimisation process has received little attention because of the complexity and variability of problems in underground mines. Although a number of optimisation studies and software tools are available, and some have been implemented effectively to determine ultimate pit limits in open pit mines, there is still a lack of studies on the optimisation of ultimate stope boundaries in underground mines. The proposed approaches for this purpose aim at maximising economic profit by selecting the best possible layout under operational, technical and physical constraints. In this paper, three existing heuristic techniques, the Floating Stope Algorithm, the Maximum Value Algorithm and the Mineable Shape Optimiser (MSO), are examined for the optimisation of stope layout in a case study. Each technique is assessed in terms of applicability, algorithm capabilities and limitations, considering the underground mine planning challenges. Finally, the results are evaluated and compared.
Tu, Xijuan; Ma, Shuangqin; Gao, Zhaosheng; Wang, Jing; Huang, Shaokang; Chen, Wenbin
2017-11-01
Flavonoids are frequently found as glycosylated derivatives in plant materials. To determine the contents of flavonoid aglycones in these matrices, procedures for the extraction and hydrolysis of flavonoid glycosides are required, and the current sample preparation method is both labour- and time-consuming. The objective was therefore to develop a modified matrix solid-phase dispersion (MSPD) procedure as an alternative methodology for the one-step extraction and hydrolysis of flavonoid glycosides. HPLC-DAD was applied to demonstrate the one-step extraction and hydrolysis of flavonoids in rape bee pollen, and the contents of the flavonoid aglycones obtained (quercetin, kaempferol, isorhamnetin) were used for the optimisation and validation of the method. The extraction and hydrolysis were accomplished in one step. The procedure completes in 2 h with silica gel as dispersant, a 1:2 ratio of sample to dispersant, and 60% aqueous ethanol with 0.3 M hydrochloric acid as the extraction solution. The relative standard deviations (RSDs) of repeatability were less than 5%, and the recoveries at two fortified levels were between 88.3 and 104.8%. The proposed methodology is simple and highly efficient, with good repeatability and recovery. Compared with currently available methods, the present work has the advantages of requiring less time and labour, higher extraction efficiency, and lower consumption of the acid catalyst. This method may have applications for the one-step extraction and hydrolysis of bioactive compounds from plant materials. Copyright © 2017 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Wei, Xiaojun; Živanović, Stana
2018-05-01
The aim of this paper is to propose a novel theoretical framework for dynamic identification in a structure occupied by a single human. The framework enables (i) the prediction of the dynamics of the human-structure system from the known properties of the individual system components, (ii) the identification of human body dynamics from the known dynamics of the empty structure and the human-structure system, and (iii) the identification of the properties of the structure from the known dynamics of the human and the human-structure system. The novelty of the proposed framework is the provision of closed-form solutions in terms of frequency response functions obtained by curve fitting measured data. The advantages of the framework over existing methods are that it requires neither nonlinear optimisation nor spatial/modal models of the empty structure and the human-structure system. In addition, the second-order perturbation method is employed to quantify the effect of uncertainties in human body dynamics on the dynamic identification of the empty structure and the human-structure system. The explicit formulation makes the method computationally efficient and straightforward to use. A series of numerical examples and experiments are provided to illustrate the working of the method.
Optimisation of nano-silica modified self-compacting high-volume fly ash mortar
NASA Astrophysics Data System (ADS)
Achara, Bitrus Emmanuel; Mohammed, Bashar S.; Fadhil Nuruddin, Muhd
2017-05-01
The effects of nano-silica content and superplasticizer (SP) dosage on the compressive strength, porosity and slump flow of high-volume fly ash self-consolidating mortar were investigated. A multiobjective optimisation technique using Design-Expert software was applied to obtain a solution based on a desirability function that simultaneously optimises the variables and the responses. A desirability value of 0.811 gave the optimised solution. The experimental and predicted results showed minimal errors in all the measured responses.
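The desirability approach used above has a standard (Derringer-Suich) form: each response is mapped onto [0, 1] and the overall desirability is the geometric mean of the individual values. A minimal sketch, with all response values illustrative rather than taken from the study:

import numpy as np

def desirability_larger_is_better(y, y_min, y_max, weight=1.0):
    # Map a response onto [0, 1]: 0 at/below y_min, 1 at/above y_max.
    d = np.clip((y - y_min) / (y_max - y_min), 0.0, 1.0)
    return d ** weight

def overall_desirability(ds):
    # Geometric mean of the individual desirabilities.
    ds = np.asarray(ds, dtype=float)
    return float(ds.prod() ** (1.0 / len(ds)))

# Three hypothetical responses (e.g. strength, porosity, slump flow) already mapped onto [0, 1]:
print(overall_desirability([0.90, 0.75, 0.79]))  # ~0.81

An optimiser then searches the factor space (here nano-silica content and SP dosage) for the settings that maximise this overall value.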
Optimisation of solar synoptic observations
NASA Astrophysics Data System (ADS)
Klvaña, Miroslav; Sobotka, Michal; Švanda, Michal
2012-09-01
The development of instrumental and computer technologies is connected with steadily increasing needs for archiving large data volumes. The current trend to meet this requirement includes data compression and the growth of storage capacities. This approach, however, has technical and practical limits. A further reduction of the archived data volume can be achieved by means of an optimisation of the archiving that consists of data selection without losing the useful information. We describe a method of optimised archiving of solar images, based on the selection of images that contain new information. The new information content is evaluated by analysing the changes detected in the images. We present the characteristics of different kinds of image changes and divide them into fictitious changes, which have a disturbing effect, and real changes, which provide new information. In block diagrams describing the selection and archiving, we demonstrate the influence of clouds, the recording of images during an active event on the Sun (including a period before the event onset), and the archiving of the long-term history of solar activity. The described optimisation technique is not suitable for helioseismology, because it does not conserve a uniform time step in the archived sequence and removes the information about solar oscillations. In the case of long-term synoptic observations, optimised archiving can save a large amount of storage capacity. The actual saving will depend on the setting of the change-detection sensitivity and on the capability to exclude the fictitious changes.
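The selection idea can be sketched minimally as follows, assuming a simple relative-difference change metric and threshold (both are illustrative assumptions; the paper's detector additionally distinguishes fictitious from real changes, which this sketch does not).

import numpy as np

def select_frames(frames, threshold=0.05):
    # Keep a frame only if it differs enough from the last archived frame.
    archived = [frames[0]]                                          # always archive the first image
    for frame in frames[1:]:
        ref = archived[-1]
        change = np.abs(frame - ref).mean() / (ref.mean() + 1e-12)  # relative mean absolute change
        if change > threshold:                                      # treat as new information
            archived.append(frame)
    return archived

Note that such a rule also drops the uniform time step, which is why the authors exclude helioseismology as a use case.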
Lu, Chunxia; Wang, Hongxin; Lv, Wenping; Ma, Chaoyang; Lou, Zaixiang; Xie, Jun; Liu, Bo
2012-01-01
Ionic liquids were used as extraction solvents for the extraction of tannins from Galla chinensis by the simultaneous ultrasonic- and microwave-assisted extraction (UMAE) technique. Several parameters of UMAE were optimised, and the results were compared with those of conventional extraction techniques. Under optimal conditions, the content of tannins was 630.2 ± 12.1 mg g⁻¹. Compared with conventional heat-reflux extraction, maceration extraction, and regular ultrasound- and microwave-assisted extraction, the proposed approach exhibited higher efficiency (enhanced by 11.7-22.0%) and a much shorter extraction time (from 6 h down to 1 min). The tannins were then identified by ultraperformance liquid chromatography tandem mass spectrometry. This study suggests that ionic liquid-based UMAE is an efficient, rapid, simple and green sample preparation technique.
Methling, Torsten; Armbrust, Nina; Haitz, Thilo; Speidel, Michael; Poboss, Norman; Braun-Unkhoff, Marina; Dieter, Heiko; Kempter-Regel, Brigitte; Kraaij, Gerard; Schliessmann, Ursula; Sterr, Yasemin; Wörner, Antje; Hirth, Thomas; Riedel, Uwe; Scheffknecht, Günter
2014-10-01
A new concept is proposed for the combined fermentation (two-stage high-load fermenter) and gasification (two-stage fluidised bed gasifier with CO2 separation) of sewage sludge and wood, and the subsequent utilisation of the biogenic gases in a hybrid power plant consisting of a solid oxide fuel cell and a gas turbine. The development and optimisation of the important processes of the new concept (fermentation, gasification, utilisation) are reported in detail. For the gas production, process parameters were investigated experimentally and numerically to achieve high conversion rates of biomass. For the product gas utilisation, important combustion properties (laminar flame speed, ignition delay time) were analysed numerically to evaluate machinery operation (reliability, emissions). Furthermore, the coupling of the processes was numerically analysed and optimised by means of the integration of heat and mass flows. The high simulated electrical efficiency of 42%, including the conversion of raw biomass, is promising for future power generation from biomass. Copyright © 2014 Elsevier Ltd. All rights reserved.
Temperature effects on tunable cw Alexandrite lasers under diode end-pumping.
Kerridge-Johns, William R; Damzen, Michael J
2018-03-19
Diode-pumped Alexandrite is a promising route to high-power, efficient and inexpensive lasers with a broad (701 nm to 858 nm) gain bandwidth; however, there are challenges with its complex laser dynamics. We present an analytical model applied to experimental red-diode end-pumped Alexandrite lasers, which enabled a record 54% slope efficiency with an output power of 1.2 W. A record lowest lasing wavelength (714 nm) and a record tuning range (104 nm) were obtained by optimising the crystal temperature between 8 °C and 105 °C in the vibronic mode. The properties of Alexandrite and the analytical model were examined to understand and give general rules for optimising Alexandrite lasers, along with their fundamental efficiency limits. It was found that the laser wavelength with the lowest threshold was not necessarily the most efficient, and that higher and lower temperatures were optimal for longer and shorter laser wavelengths, respectively. The ratio of pump excited-state to ground-state absorption was measured to decrease from 0.8 to 0.7 on changing the crystal temperature from 10 °C to 90 °C.
Bryant, Maria; Burton, Wendy; Cundill, Bonnie; Farrin, Amanda J; Nixon, Jane; Stevens, June; Roberts, Kim; Foy, Robbie; Rutter, Harry; Hartley, Suzanne; Tubeuf, Sandy; Collinson, Michelle; Brown, Julia
2017-01-24
Family-based interventions to prevent childhood obesity depend upon parents taking action to improve diet and other lifestyle behaviours in their families. Programmes that attract and retain high numbers of parents provide an enhanced opportunity to improve public health and are also likely to be more cost-effective than those that do not. We have developed a theory-informed optimisation intervention to promote parent engagement within an existing childhood obesity prevention group programme, HENRY (Health Exercise Nutrition for the Really Young). Here, we describe a proposal to evaluate the effectiveness and cost-effectiveness of this optimisation intervention with regard to the engagement of parents. The Optimising Family Engagement in HENRY (OFTEN) trial is a cluster randomised controlled trial being conducted across 24 local authorities (approximately 144 children's centres) which currently deliver HENRY programmes. The primary outcome will be parental enrolment and attendance at the HENRY programme, assessed using routinely collected process data. Cost-effectiveness will be presented in terms of primary outcomes using acceptability curves and through eliciting the willingness of HENRY commissioners to pay for the optimisation. Secondary outcomes include the longitudinal impact of the optimisation, parent-reported infant intake of fruits and vegetables (as a proxy for compliance) and other parent-reported family habits and lifestyle. This innovative trial will provide evidence on the implementation of a theory-informed optimisation intervention to promote parent engagement in HENRY, a community-based childhood obesity prevention programme. The findings will be generalisable to other interventions delivered to parents in other community-based environments. This research meets the expressed needs of commissioners, children's centres and parents to optimise the potential impact that HENRY has on obesity prevention. A subsequent cluster randomised controlled pilot trial is planned to determine the practicality of undertaking a definitive trial to robustly evaluate the effectiveness and cost-effectiveness of the optimised intervention on childhood obesity prevention. ClinicalTrials.gov identifier: NCT02675699. Registered on 4 February 2016.
Usage of humic materials for formulation of stable microbial inoculants
NASA Astrophysics Data System (ADS)
Kydralieva, K. A.; Khudaibergenova, B. M.; Elchin, A. A.; Gorbunova, N. V.; Muratov, V. S.; Jorobekova, Sh. J.
2009-04-01
Some microbes have been domesticated for environmental service in a variety of novel applications, including efforts to reduce environmental problems. For instance, antagonistic organisms can be used as biological control agents to reduce the use of chemical pesticides, and efficient degraders can be applied as bioprophylactics to minimise the spread of chemical pollutants. Microorganisms can also be used for the biological clean-up of polluted soil or as plant growth-promoting bacteria that stimulate nutrient uptake. Many microbial applications require large-scale cultivation of the organisms. The biomass production must then be followed by formulation steps to ensure long-term stability and convenient use. However, there remains a need to further develop knowledge on how to optimise the fermentation of "non-conventional microorganisms" for environmental applications involving intact living cells. The goal of the present study is to develop fermentation and formulation techniques for thermolabile rhizobacteria isolates - Pseudomonas spp. with major biotechnical potential. Developing efficient and cost-effective media and process parameters that give high cell yields is an important priority. This also involves establishing fermentation parameters that yield cells well adapted to subsequent formulation procedures. Collectively, these strategies will deliver a high proportion of viable cells with good long-term survival. Our main efforts were focused on the development of more efficient drying techniques for microorganisms, particularly spray drying and fluidised-bed drying. The advantages of dry formulations are that storage and delivery costs are much lower than for liquid formulations, and that long-term survival can be very high if the initial packaging is carefully optimised. In order to improve and optimise the formulations, various kinds of humics-based excipients with beneficial effects on the viability of the organisms and the storage stability of the product have been added. It is known that humic substances can increase the resistance of live organisms to stress loads, in particular chemical stress and low and high temperatures. Spray and fluidised-bed drying and the addition of humate-based drying protectants were evaluated for the development of dry formulations of biocontrol and plant growth-promoting rhizobacteria. The drying protectants - humic acids and sodium humate - gave the highest initial survival rates and the most stable formulations, without significant losses of viability after storage for 1 month at 30 °C. As a result, the specific plant growth-promoting effect is retained. Thus, humic materials have an unfulfilled potential for biotechnology industries based on such applications. Acknowledgement: this research was supported by ISTC grant KR-993.2.
Design of a prototype flow microreactor for synthetic biology in vitro.
Boehm, Christian R; Freemont, Paul S; Ces, Oscar
2013-09-07
As a reference platform for in vitro synthetic biology, we have developed a prototype flow microreactor for enzymatic biosynthesis. We report the design, implementation, and computer-aided optimisation of a three-step model pathway within a microfluidic reactor. After experimental evaluation of several approaches, a packed-bed format was shown to be optimal for enzyme compartmentalisation. The specific substrate conversion efficiency could be significantly improved by an optimised parameter set obtained through computational modelling. Our microreactor design provides a platform to explore new in vitro synthetic biology solutions for industrial biosynthesis.
Optimised analytical models of the dielectric properties of biological tissue.
Salahuddin, Saqib; Porter, Emily; Krewer, Finn; O'Halloran, Martin
2017-05-01
The interaction of electromagnetic fields with the human body is quantified by the dielectric properties of biological tissues. These properties are incorporated into complex numerical simulations using parametric models such as Debye and Cole-Cole, for the computational investigation of electromagnetic wave propagation within the body. These parameters can be acquired through a variety of optimisation algorithms to achieve an accurate fit to measured data sets. A number of different optimisation techniques have been proposed, but these are often limited by the requirement for initial value estimations or by the large overall error (often up to several percentage points). In this work, a novel two-stage genetic algorithm proposed by the authors is applied to optimise the multi-pole Debye parameters for 54 types of human tissues. The performance of the two-stage genetic algorithm has been examined through a comparison with five other existing algorithms. The experimental results demonstrate that the two-stage genetic algorithm produces an accurate fit to a range of experimental data and efficiently out-performs all other optimisation algorithms under consideration. Accurate values of the three-pole Debye models for 54 types of human tissues, over 500 MHz to 20 GHz, are also presented for reference. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
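For reference, the multi-pole Debye model being fitted above has a standard closed form; the sketch below evaluates it together with a relative squared-error cost of the kind a genetic algorithm could minimise. The parameter names and example values are illustrative assumptions, not the paper's fitted values.

import numpy as np

EPS0 = 8.854187817e-12  # vacuum permittivity (F/m)

def debye(omega, eps_inf, d_eps, tau, sigma_s):
    # Multi-pole Debye: eps(w) = eps_inf + sum_k d_eps_k / (1 + j*w*tau_k) + sigma_s / (j*w*EPS0)
    eps = eps_inf + sum(de / (1.0 + 1j * omega * t) for de, t in zip(d_eps, tau))
    return eps + sigma_s / (1j * omega * EPS0)

def relative_cost(model_eps, measured_eps):
    # Mean relative squared error between model and measurement.
    return float(np.mean(np.abs((model_eps - measured_eps) / measured_eps) ** 2))

# e.g. a three-pole model evaluated over 500 MHz to 20 GHz:
omega = 2 * np.pi * np.logspace(np.log10(0.5e9), np.log10(20e9), 100)
eps = debye(omega, eps_inf=4.0, d_eps=[45.0, 5.0, 1.0], tau=[8e-12, 1e-10, 1e-9], sigma_s=0.7)

A GA chromosome would simply concatenate eps_inf, the d_eps_k, the tau_k and sigma_s, with relative_cost evaluated against measured permittivity as the fitness.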
Advantages of Task-Specific Multi-Objective Optimisation in Evolutionary Robotics.
Trianni, Vito; López-Ibáñez, Manuel
2015-01-01
The application of multi-objective optimisation to evolutionary robotics is receiving increasing attention. A survey of the literature reveals the different possibilities it offers to improve the automatic design of efficient and adaptive robotic systems, and points to the successful demonstrations available for both task-specific and task-agnostic approaches (i.e., with or without reference to the specific design problem to be tackled). However, the advantages of multi-objective approaches over single-objective ones have not been clearly spelled out and experimentally demonstrated. This paper fills this gap for task-specific approaches: starting from well-known results in multi-objective optimisation, we discuss how to tackle commonly recognised problems in evolutionary robotics. In particular, we show that multi-objective optimisation (i) allows evolving a more varied set of behaviours by exploring multiple trade-offs of the objectives to optimise, (ii) supports the evolution of the desired behaviour through the introduction of objectives as proxies, (iii) avoids the premature convergence to local optima possibly introduced by multi-component fitness functions, and (iv) solves the bootstrap problem exploiting ancillary objectives to guide evolution in the early phases. We present an experimental demonstration of these benefits in three different case studies: maze navigation in a single robot domain, flocking in a swarm robotics context, and a strictly collaborative task in collective robotics.
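The trade-off arguments above all rest on Pareto dominance; as a minimal reference point (assuming all objectives are maximised, an arbitrary convention for this sketch):

def dominates(a, b):
    # True if objective vector a Pareto-dominates b (no worse everywhere, strictly better somewhere).
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    # Keep only the non-dominated objective vectors.
    return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]

print(pareto_front([(1.0, 2.0), (2.0, 1.0), (0.5, 0.5)]))  # the third point is dominated

A multi-objective evolutionary algorithm maintains such a front of behaviours rather than a single champion, which is what enables the varied behaviour sets and proxy objectives discussed above.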
The development and optimisation of 3D black-blood R2* mapping of the carotid artery wall.
Yuan, Jianmin; Graves, Martin J; Patterson, Andrew J; Priest, Andrew N; Ruetten, Pascal P R; Usman, Ammara; Gillard, Jonathan H
2017-12-01
To develop and optimise a 3D black-blood R2* mapping sequence for imaging the carotid artery wall, using optimal blood suppression and k-space view ordering. Two different blood suppression preparation methods were used: Delay Alternating with Nutation for Tailored Excitation (DANTE) and improved Motion Sensitive Driven Equilibrium (iMSDE), each combined with a three-dimensional (3D) multi-echo Fast Spoiled GRadient echo (ME-FSPGR) readout. Three different k-space view-order designs were investigated: Radial Fan-beam Encoding Ordering (RFEO), Distance-Determined Encoding Ordering (DDEO) and Centric Phase Encoding Order (CPEO). The sequences were evaluated through Bloch simulation and in a cohort of twenty volunteers. The vessel wall Signal-to-Noise Ratio (SNR), Contrast-to-Noise Ratio (CNR) and R2*, and the sternocleidomastoid muscle R2*, were measured and compared. Different numbers of acquisitions-per-shot (APS) were evaluated to further optimise the effectiveness of blood suppression. All sequences resulted in R2* measurements in the sternocleidomastoid muscle of the volunteers comparable to a conventional, i.e. non-blood-suppressed, sequence. Both Bloch simulations and volunteer data showed that DANTE has a higher signal intensity and results in a higher image SNR than iMSDE. Blood suppression efficiency was not significantly different between the k-space view orders. Smaller APS achieved better blood suppression. The use of blood-suppression preparation methods does not affect the measurement of R2*. A DANTE-prepared ME-FSPGR sequence with a small number of acquisitions-per-shot can provide high-quality black-blood R2* measurements of the carotid vessel wall. Copyright © 2017 Elsevier Inc. All rights reserved.
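For context, R2* mapping rests on fitting a mono-exponential decay S(TE) = S0 * exp(-TE * R2*) to the multi-echo magnitudes; a minimal log-linear fit is sketched below, with echo times and values purely illustrative.

import numpy as np

def fit_r2star(te, signal):
    # Fit S(TE) = S0 * exp(-TE * R2*) by linear regression on log(signal).
    slope, intercept = np.polyfit(te, np.log(signal), 1)
    return -slope, np.exp(intercept)       # R2* (s^-1) and S0

te = np.array([2e-3, 5e-3, 8e-3, 11e-3])   # echo times in seconds (illustrative)
signal = 100.0 * np.exp(-te * 50.0)        # synthetic decay with R2* = 50 s^-1
r2star, s0 = fit_r2star(te, signal)
print(r2star)                              # -> 50.0

Applied voxel-wise, such a fit yields the R2* map; the sequence work above is about delivering blood-suppressed magnitudes that do not bias this estimate.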
The development of response surface pathway design to reduce animal numbers in toxicity studies
2014-01-01
Background: This study describes the development of the Response Surface Pathway (RSP) design, assesses its performance and effectiveness in estimating the LD50, and compares RSP with Up and Down Procedures (UDPs) and the Random Walk (RW) design. Methods: A basic 4-level RSP design was used on 36 male ICR mice given intraperitoneal doses of Yessotoxin. Simulations were performed to optimise the design. A k-adjustment factor was introduced to ensure coverage of the dose window and to calculate the dose steps. Instead of using equal numbers of mice on all levels, the number of mice was increased at each design level. Additionally, the binomial outcome variable was changed to multinomial. The performance of the RSP designs and a comparison with the UDPs and RW were assessed by simulations. The optimised 4-level RSP design was used on 24 female NMRI mice given Azaspiracid-1 intraperitoneally. Results: The in vivo experiment with the basic 4-level RSP design estimated the LD50 of Yessotoxin to be 463 μg/kg BW (95% CI: 383–535). By inclusion of the k-adjustment factor with equal or increasing numbers of mice on increasing dose levels, the estimate changed to 481 μg/kg BW (95% CI: 362–566) and 447 μg/kg BW (95% CI: 378–504), respectively. The optimised 4-level RSP estimated the LD50 to be 473 μg/kg BW (95% CI: 442–517). A similar increase in power was demonstrated using the optimised RSP design on real Azaspiracid-1 data. The simulations showed that the inclusion of the k-adjustment factor, the reduction in sample size by increasing the number of mice on higher design levels and the incorporation of a multinomial outcome gave estimates of the LD50 that were as good as those from the basic RSP design. Furthermore, the optimised RSP design performed on just three levels reduced the number of animals from 36 to 15 without loss of information, when compared with the 4-level designs. Simulated comparison of the RSP design with the UDPs and RW design demonstrated the superiority of RSP. Conclusion: The optimised RSP design reduces the number of animals needed. The design converges rapidly on the area of interest and is at least as efficient as both the UDPs and the RW design. PMID:24661560
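The RSP estimator itself is specific to the design above; as a generic reminder of how an LD50 is read off a fitted dose-response curve, here is a standard logistic fit on invented data (all numbers are illustrative, not the Yessotoxin or Azaspiracid-1 results).

import numpy as np
from scipy.optimize import curve_fit

def logistic(log_dose, a, b):
    # Probability of response as a function of log dose.
    return 1.0 / (1.0 + np.exp(-(a + b * log_dose)))

dose = np.array([300.0, 400.0, 500.0, 600.0])   # doses, e.g. ug/kg BW (illustrative)
resp = np.array([0.10, 0.30, 0.70, 0.90])       # observed response fractions (illustrative)
(a, b), _ = curve_fit(logistic, np.log(dose), resp, p0=[-38.0, 6.3])
ld50 = np.exp(-a / b)                           # dose at which the fitted probability is 0.5
print(round(ld50))                              # ~440 for this toy data

Sequential designs such as RSP differ from this post-hoc fit in how they choose the next dose level, which is where the animal savings come from.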
On the design and optimisation of new fractal antenna using PSO
NASA Astrophysics Data System (ADS)
Rani, Shweta; Singh, A. P.
2013-10-01
An optimisation technique for a newly shaped fractal structure, using particle swarm optimisation with curve fitting, is presented in this article. The aim of the particle swarm optimisation is to find the geometry of the antenna for the required user-defined frequency. To assess the effectiveness of the presented method, a set of representative numerical simulations has been performed, and the results are compared with measurements from experimental prototypes built according to the design specifications coming from the optimisation procedure. The proposed fractal antenna resonates at the 5.8 GHz industrial, scientific and medical band, which is suitable for wireless telemedicine applications. The antenna characteristics have been studied using extensive numerical simulations and are experimentally verified. The antenna exhibits well-defined radiation patterns over the band.
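A minimal global-best PSO loop of the kind used above is sketched below; in the antenna setting the fitness call would wrap an EM simulation returning, say, the deviation of the resonant frequency from 5.8 GHz, but here it is a stand-in quadratic. The parameter values are conventional defaults, not the paper's settings.

import numpy as np

def pso(fitness, lo, hi, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    # Minimise fitness(x) over the box [lo, hi] with standard global-best PSO.
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([fitness(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # inertia + cognitive + social
        x = np.clip(x + v, lo, hi)
        f = np.array([fitness(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

best, err = pso(lambda p: float(((p - 0.3) ** 2).sum()), lo=np.zeros(2), hi=np.ones(2))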
Mghirbi, Oussama; LE Grusse, Philippe; Fabre, Jacques; Mandart, Elisabeth; Bord, Jean-Paul
2017-03-01
The health, environmental and socio-economic issues related to the massive use of plant protection products (PPPs) are a concern for all the stakeholders involved in the agricultural sector. These stakeholders, including farmers and territorial actors, have expressed a need for decision-support tools for the management of diffuse pollution related to plant protection practices and their impacts. To meet the needs expressed by the public authorities and the territorial actors for such decision-support tools, we have developed a technical-economic model, "OptiPhy", for risk mitigation based on indicators of pesticide toxicity risk to applicator health (IRSA) and to the environment (IRTE), under the constraint of suitable economic outcomes. This technical-economic optimisation model is based on linear programming techniques and offers various scenarios to help the different actors in choosing plant protection products, depending on their different levels of constraints and aspirations. The health and environmental risk indicators can be broken down into sub-indicators so that management can be tailored to the context. This model for the technical-economic optimisation and management of plant protection practices can analyse scenarios for the reduction of pesticide-related risks by proposing combinations of substitution PPPs, according to criteria of efficiency, economic performance and vulnerability of the natural environment. The results of the scenarios obtained on real crop management sequences (ITKs) in different cropping systems show that it is possible to reduce the PPP pressure (treatment frequency index, TFI) and to reduce the toxicity risks to applicator health (IRSA) and to the environment (IRTE) by up to approximately 50%.
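The linear-programming shape of such a model can be sketched with scipy; everything below (the strategies, risk scores, margins and the 100 ha farm) is hypothetical and only mirrors the structure of minimising a risk indicator under an economic constraint, not OptiPhy's actual formulation.

from scipy.optimize import linprog

# Hypothetical: split 100 ha among three plant-protection strategies.
risk = [4.0, 2.5, 1.0]          # toxicity-risk score per hectare (IRSA/IRTE-style, illustrative)
margin = [900.0, 700.0, 500.0]  # gross margin per hectare (illustrative)

res = linprog(
    c=risk,                                        # minimise total toxicity risk
    A_ub=[[-m for m in margin]], b_ub=[-60000.0],  # total margin >= 60000, written as <=
    A_eq=[[1.0, 1.0, 1.0]], b_eq=[100.0],          # total area fixed at 100 ha
    bounds=[(0.0, None)] * 3,
)
print(res.x, res.fun)                              # optimal area split and minimal risk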
Probabilistic Sizing and Verification of Space Ceramic Structures
NASA Astrophysics Data System (ADS)
Denaux, David; Ballhause, Dirk; Logut, Daniel; Lucarelli, Stefano; Coe, Graham; Laine, Benoit
2012-07-01
Sizing of ceramic parts is best optimised using a probabilistic approach which takes into account the preexisting flaw distribution in the ceramic part to compute a probability of failure of the part depending on the applied load, instead of a maximum allowable load as for a metallic part. This requires extensive knowledge of the material itself but also an accurate control of the manufacturing process. In the end, risk reduction approaches such as proof testing may be used to lower the final probability of failure of the part. Sizing and verification of ceramic space structures have been performed by Astrium for more than 15 years, both with Zerodur and SiC: Silex telescope structure, Seviri primary mirror, Herschel telescope, Formosat-2 instrument, and other ceramic structures flying today. Throughout this period of time, Astrium has investigated and developed experimental ceramic analysis tools based on the Weibull probabilistic approach. In the scope of the ESA/ESTEC study: “Mechanical Design and Verification Methodologies for Ceramic Structures”, which is to be concluded in the beginning of 2012, existing theories, technical state-of-the-art from international experts, and Astrium experience with probabilistic analysis tools have been synthesized into a comprehensive sizing and verification method for ceramics. Both classical deterministic and more optimised probabilistic methods are available, depending on the criticality of the item and on optimisation needs. The methodology, based on proven theory, has been successfully applied to demonstration cases and has shown its practical feasibility.
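For reference, the Weibull weakest-link law at the heart of such probabilistic sizing can be written down in a few lines; the material parameters below are invented for illustration only.

import numpy as np

def failure_probability(stress, sigma0, m, volume, v0=1.0):
    # Two-parameter Weibull weakest-link law: P_f = 1 - exp(-(V/V0) * (stress/sigma0)^m)
    return 1.0 - np.exp(-(volume / v0) * (stress / sigma0) ** m)

# e.g. 40 MPa uniform stress in 100 cm^3 of a ceramic with sigma0 = 100 MPa, m = 10, V0 = 1 cm^3:
print(failure_probability(40e6, 100e6, 10, volume=1e-4, v0=1e-6))  # ~1e-2

In a finite-element setting the exponent becomes an integral of (stress/sigma0)^m over the part, and proof testing truncates the flaw distribution, which is how the risk-reduction step mentioned above lowers the final probability of failure.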
Optimising predictor domains for spatially coherent precipitation downscaling
NASA Astrophysics Data System (ADS)
Radanovics, S.; Vidal, J.-P.; Sauquet, E.; Ben Daoud, A.; Bontron, G.
2013-10-01
Statistical downscaling is widely used to overcome the scale gap between predictors from numerical weather prediction models or global circulation models and predictands like local precipitation, required for example for medium-term operational forecasts or climate change impact studies. The predictors are considered over a given spatial domain which is rarely optimised with respect to the target predictand location. In this study, an extended version of the growing rectangular domain algorithm is proposed to provide an ensemble of near-optimum predictor domains for a statistical downscaling method. This algorithm is applied to find five-member ensembles of near-optimum geopotential predictor domains for an analogue downscaling method for 608 individual target zones covering France. Results first show that very similar downscaling performances based on the continuous ranked probability score (CRPS) can be achieved by different predictor domains for any specific target zone, demonstrating the need for considering alternative domains in this context of high equifinality. A second result is the large diversity of optimised predictor domains over the country that questions the commonly made hypothesis of a common predictor domain for large areas. The domain centres are mainly distributed following the geographical location of the target location, but there are apparent differences between the windward and the lee side of mountain ridges. Moreover, domains for target zones located in southeastern France are centred more east and south than the ones for target locations on the same longitude. The size of the optimised domains tends to be larger in the southeastern part of the country, while domains with a very small meridional extent can be found in an east-west band around 47° N. Sensitivity experiments finally show that results are rather insensitive to the starting point of the optimisation algorithm except for zones located in the transition area north of this east-west band. Results also appear generally robust with respect to the archive length considered for the analogue method, except for zones with high interannual variability like in the Cévennes area. This study paves the way for defining regions with homogeneous geopotential predictor domains for precipitation downscaling over France, and therefore de facto ensuring the spatial coherence required for hydrological applications.
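Domain skill above is scored with the CRPS; for an ensemble (analogue) forecast it has the standard estimator CRPS = E|X - y| - 0.5 * E|X - X'|, sketched below with invented numbers.

import numpy as np

def crps_ensemble(members, obs):
    # CRPS estimator for an ensemble forecast: E|X - y| - 0.5 * E|X - X'|.
    members = np.asarray(members, dtype=float)
    term1 = np.abs(members - obs).mean()
    term2 = np.abs(members[:, None] - members[None, :]).mean()
    return float(term1 - 0.5 * term2)

print(crps_ensemble([0.8, 1.2, 2.0, 0.5, 1.1], obs=1.0))  # lower is better

Averaging this score over many forecast days gives the criterion that the growing-rectangular-domain search optimises for each target zone.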
NASA Astrophysics Data System (ADS)
Li, Dewei; Li, Jiwei; Xi, Yugeng; Gao, Furong
2017-12-01
In practical applications, systems are always influenced by parameter uncertainties and external disturbances. Both the H2 performance and the H∞ performance are important in real applications. For a constrained system, previous designs of mixed H2/H∞ robust model predictive control (RMPC) optimise one performance with the other performance requirement as a constraint, so the two performances cannot be optimised at the same time. In this paper, an improved design of mixed H2/H∞ RMPC for polytopic uncertain systems with external disturbances is proposed to optimise them simultaneously. In the proposed design, the original uncertain system is decomposed into two subsystems by the additive character of linear systems. Two different Lyapunov functions are used to separately formulate the two performance indices for the two subsystems. Then, the proposed RMPC is designed to optimise both performances by the weighting method while satisfying the H∞ performance requirement. Meanwhile, to make the design more practical, a simplified design is also developed. The recursive feasibility conditions of the proposed RMPC are discussed and closed-loop input-to-state practical stability is proven. The numerical examples reflect the enlarged feasible region and the improved performance of the proposed design.
Lukan, Tjaša; Machens, Fabian; Coll, Anna; Baebler, Špela; Messerschmidt, Katrin; Gruden, Kristina
2018-01-01
Cloning multiple DNA fragments for the delivery of several genes of interest into the plant genome is one of the main technological challenges in plant synthetic biology. Despite several modular assembly methods having been developed in recent years, the plant biotechnology community has not widely adopted them yet, probably because of the lack of appropriate vectors and software tools. Here we present Plant X-tender, an extension of the highly efficient, scar-free and sequence-independent multigene assembly strategy AssemblX, based on overlap-dependent cloning methods and rare-cutting restriction enzymes. Plant X-tender consists of a set of plant expression vectors and the protocols for the most efficient cloning into the novel vector set needed for plant expression, and thus introduces the advantages of AssemblX into plant synthetic biology. The novel vector set covers different backbones and selection markers to allow full design flexibility. We have included ccdB counterselection, thereby allowing the transfer of multigene constructs into the novel vector set in a straightforward and highly efficient way. Vectors are available as empty backbones and are fully flexible regarding the orientation of the expression cassettes and the addition of linkers between them, if required. We optimised the assembly and subcloning protocol by testing different scar-less assembly approaches: the noncommercial SLiCE and TAR methods and the commercial Gibson assembly and NEBuilder HiFi DNA assembly kits. Plant X-tender was applicable even in combination with low-efficiency homemade chemically competent or electrocompetent Escherichia coli. We further validated the developed procedure for plant protein expression by cloning two cassettes into the newly developed vectors and subsequently transferring them to Nicotiana benthamiana in a transient expression setup. Thereby we show that multigene constructs can be delivered into plant cells in a streamlined and highly efficient way. Our results will support the faster introduction of synthetic biology into plant science.
Sonochemical cleaning efficiencies in dental instruments
NASA Astrophysics Data System (ADS)
Tiong, T. Joyce; Walmsley, A. Damien; Price, Gareth J.
2012-05-01
Ultrasound has been widely used for cleaning purposes in a variety of situations, including dental practice. Cleaning is achieved through a combination of acoustically driven streaming effects and sonochemical effects arising from the production of inertial cavitation in a liquid. In our work, various dental instruments used for endodontic (root canal) treatment have been evaluated for their efficiency in producing sonochemical effects in an in vitro cleaning environment. The areas where cavitation was produced were mapped by monitoring the chemiluminescence from luminol solutions, and this was correlated with cleaning efficiency - assessed by the ability to bleach a dye, to form an emulsion by mixing immiscible components and to remove ink from a glass surface. The results showed good correlation (Pearson's coefficient > 0.9) between cavitation and cleaning efficiency, suggesting that the former plays an important role in cleaning. The methods developed and the results obtained will be useful in endodontics research for optimising future root canal instruments and treatments.
Cultural-based particle swarm for dynamic optimisation problems
NASA Astrophysics Data System (ADS)
Daneshyari, Moayed; Yen, Gary G.
2012-07-01
Many practical optimisation problems involve uncertainties, and a significant number of these belong to the dynamic optimisation problem (DOP) category, in which the fitness function changes through time. In this study, we propose a cultural-based particle swarm optimisation (PSO) to solve DOPs. A cultural framework is adopted that incorporates the required information from the PSO into five sections of the belief space, namely situational, temporal, domain, normative and spatial knowledge. The stored information is used to detect changes in the environment, assists in responding to changes through a diversity-based repulsion among particles and migration among swarms in the population space, and also helps in selecting the leading particles at three different levels: personal, swarm and global. Comparison of the proposed heuristic over several difficult dynamic benchmark problems demonstrates better or equal performance with respect to most of the other selected state-of-the-art dynamic PSO heuristics.
Optimization of Phase Change Memory with Thin Metal Inserted Layer on Material Properties
NASA Astrophysics Data System (ADS)
Harnsoongnoen, Sanchai; Sa-Ngiamsak, Chiranut; Siritaratiwat, Apirat
This work reports, for the first time, a thorough study and optimisation of a Phase Change Memory (PCM) structure with a thin metal layer inserted in the chalcogenide, via the electrical resistivity (ρ), using finite element modelling. PCM is one of the best candidates for next-generation non-volatile memory. It has received much attention recently because of its fast write speed, non-destructive readout, superb scalability and great compatibility with current silicon-based mass fabrication. The setback of PCM is a high reset current, typically higher than 1 mA based on 180 nm lithography. To reduce the reset current and to solve the over-programming failure, a PCM structure with a thin metal layer inserted in the chalcogenide (bottom chalcogenide/metal insert/top chalcogenide) has been proposed. Nevertheless, reports on the optimisation of the electrical resistivity using the finite element method for this new PCM structure have never been published. This work aims to minimise the reset current of this PCM structure by optimising the electrical resistivity profile using the finite element approach. The results clearly show that the PCM characteristics are strongly affected by the electrical resistivity. The 2-D simulations reveal that the best heat transfer to, and self-Joule-heating at, the bottom chalcogenide layer are achieved under the condition ρ_bottom chalcogenide > ρ_metal insert > ρ_top chalcogenide. More specifically, the optimised electrical resistivity of the PCM with thin metal insert is attained with a ρ_top chalcogenide : ρ_metal insert : ρ_bottom chalcogenide ratio of 1:6:16 when ρ_top chalcogenide is 10⁻³ Ω m. In conclusion, high energy efficiency can be obtained, with a reset current as low as 0.3 mA and high-speed operation of less than 30 ns.
pyPcazip: A PCA-based toolkit for compression and analysis of molecular simulation data
NASA Astrophysics Data System (ADS)
Shkurti, Ardita; Goni, Ramon; Andrio, Pau; Breitmoser, Elena; Bethune, Iain; Orozco, Modesto; Laughton, Charles A.
The biomolecular simulation community is currently in need of novel and optimised software tools that can analyse and process, in reasonable timescales, the large amounts of molecular simulation data being generated. In light of this, we have developed and present here pyPcazip: a suite of software tools for the compression and analysis of molecular dynamics (MD) simulation data. The software is compatible with trajectory file formats generated by most contemporary MD engines, such as AMBER, CHARMM, GROMACS and NAMD, and is MPI-parallelised to permit the efficient processing of very large datasets. pyPcazip is Unix-based, open-source software (BSD licensed) written in Python.
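The compression idea behind a PCA-based tool like this can be sketched in plain numpy: project aligned frames onto the top eigenvectors of the coordinate covariance and store only the scores. This is a generic illustration, not pyPcazip's actual file format or API.

import numpy as np

def pca_compress(coords, n_components=10):
    # coords: (n_frames, 3 * n_atoms) array of aligned Cartesian coordinates.
    mean = coords.mean(axis=0)
    centred = coords - mean
    cov = np.cov(centred, rowvar=False)       # (3N, 3N); fine for modest system sizes
    evals, evecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    modes = evecs[:, -n_components:]          # keep the top-variance modes
    scores = centred @ modes                  # (n_frames, n_components): this is what gets stored
    return mean, modes, scores

def pca_decompress(mean, modes, scores):
    # Approximate reconstruction of the trajectory from the retained modes.
    return mean + scores @ modes.T

Storing n_components numbers per frame instead of 3N is where the compression comes from; the variance in the discarded modes bounds the reconstruction error.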
Design and experimental validation of linear and nonlinear vehicle steering control strategies
NASA Astrophysics Data System (ADS)
Menhour, Lghani; Lechner, Daniel; Charara, Ali
2012-06-01
This paper proposes the design of three control laws dedicated to vehicle steering control, two based on robust linear control strategies and one based on a nonlinear control strategy, and presents a comparison between them. The two robust linear control laws (indirect and direct methods) are built around M linear bicycle models; each of these control laws is composed of two sets of M proportional-integral-derivative (PID) controllers: one set to control the lateral deviation and the other to control the vehicle yaw angle. The indirect control law is designed using an oscillation method and a nonlinear optimisation subject to an H∞ constraint. The direct control law is designed using linear matrix inequality optimisation in order to achieve H∞ performance. The nonlinear control method used for the correction of the lateral deviation is based on a continuous first-order sliding-mode controller. The different methods are designed using a linear bicycle vehicle model with varying parameters, but the aim is to simulate the nonlinear vehicle behaviour under high dynamic demands with a four-wheel vehicle model. These vehicle steering controllers are validated experimentally using data acquired with a laboratory vehicle, a Peugeot 307, developed by the National Institute for Transport and Safety Research - Accident Mechanism Analysis Laboratory (INRETS-MA), and their performance results are compared. Moreover, an unknown-input sliding-mode observer is introduced to estimate the road bank angle.
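Each linear law above is built from PID terms; a minimal discrete PID for, say, the lateral-deviation loop looks as follows. The gains and sample time are illustrative, not the tuned values from the paper.

class PID:
    # Discrete PID: u = Kp*e + Ki*integral(e) + Kd*de/dt.
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# e.g. a steering command from the lateral deviation, sampled at 100 Hz:
lateral_pid = PID(kp=0.8, ki=0.05, kd=0.2, dt=0.01)
steer = lateral_pid.update(error=0.15)  # lateral deviation in metres

The indirect and direct designs in the paper are essentially two ways of choosing such gains so that the closed loop meets the H∞ requirement across the M bicycle models.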
Tomaz, Ivana; Maslov, Luna; Stupić, Domagoj; Preiner, Darko; Ašperger, Danijela; Karoglan Kontić, Jasminka
2016-01-01
For the characterisation of grape cultivars, the profile and content of flavonoids are important because these compounds affect grape and wine quality. To determine the correct profile and content of flavonoids, robust, sensitive and reliable methods are necessary. The objective of this research is to develop a new ultrasound-assisted extraction (UAE) method for the recovery of flavonoids from grape skins using response surface methodology. Optimisation of the UAE was performed in a complementary study combining a Box-Behnken experimental design with analysis by high-performance liquid chromatography. Optimal extraction conditions were obtained using an extraction solvent composed of acetonitrile:water:formic acid (26:73:1, v/v/v), an extraction temperature of 50 °C, an extraction time of 15 min in a single extraction step and a solid-to-solvent ratio of 1:80 g/mL. The calculated relative standard deviations for the optimal extraction method were very low, at less than 5%. This study demonstrates that numerous factors have strong effects on the extraction efficiency, including the type of organic modifier and its percentage in the extraction solvent, the number of extraction steps, the solid-to-solvent ratio, the extraction time and temperature and, finally, the particular nature of the analytes and their position within the grape skin cells. Copyright © 2015 John Wiley & Sons, Ltd.
Jakschitz, Thomas A E; Huck, Christian W; Lubbad, Said; Bonn, Günther K
2007-04-13
In this paper the synthesis, optimisation and application of a silane-based monolithic copolymer for the rapid separation of proteins and oligonucleotides is described. The monolith was prepared by thermally initiated in situ copolymerisation of trimethylsilyl-4-methylstyrene (TMSiMS) and bis(4-vinylbenzyl)dimethylsilane (BVBDMSi) in a silanised 200 μm I.D. fused silica column. Different ratios of monomer and crosslinker, as well as different ratios of micro- (toluene) and macro-porogen (2-propanol), were used to optimise the physical properties of the stationary phase with regard to separation efficiency. The prepared monolithic stationary phases were characterised by measurements of permeability with different solvents and by determination of the pore size distribution by mercury intrusion porosimetry (MIP). The morphology was studied by scanning electron microscopy (SEM). Under optimised conditions, a mixture of five standard proteins - ribonuclease A, cytochrome c, alpha-lactalbumin, myoglobin and ovalbumin - was separated within 1 min by ion-pair reversed-phase liquid chromatography (IP-RPLC), with half-height peak widths between 1.8 and 2.4 s. Baseline separation of the oligonucleotides d(pT)(12-18) was achieved within 1.8 min, with half-height peak widths between 3.6 and 5.4 s. The results demonstrate the high potential of this stationary phase for the fast separation of high-molecular-weight biomolecules such as oligonucleotides and proteins.
A Bayesian Approach for Sensor Optimisation in Impact Identification
Mallardo, Vincenzo; Sharif Khodaei, Zahra; Aliabadi, Ferri M. H.
2016-01-01
This paper presents a Bayesian approach for optimising the position of sensors aimed at impact identification in composite structures under operational conditions. The uncertainty in the sensor data has been represented by statistical distributions of the recorded signals. An optimisation strategy based on a genetic algorithm is proposed to find the best sensor combination for locating impacts on composite structures. A Bayesian-based objective function is adopted in the optimisation procedure as an indicator of the performance of meta-models developed for different sensor combinations to locate various impact events. To represent a real structure under operational load and to increase the reliability of the Structural Health Monitoring (SHM) system, the probability of malfunctioning sensors is included in the optimisation. The reliability and the robustness of the procedure are tested with experimental and numerical examples. Finally, the proposed optimisation algorithm is applied to a composite stiffened panel for both uniform and non-uniform probabilities of impact occurrence. PMID:28774064
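A toy version of the GA search over sensor combinations is sketched below: individuals are k-sensor subsets and score() stands in for the Bayesian meta-model indicator used above. The operators and parameters are generic GA choices, not the paper's.

import numpy as np

def ga_sensor_subset(score, n_candidates, k, pop=30, gens=50, seed=0):
    # Evolve k-element subsets of range(n_candidates) to maximise score(subset).
    rng = np.random.default_rng(seed)
    population = [rng.choice(n_candidates, k, replace=False) for _ in range(pop)]
    for _ in range(gens):
        fitness = np.array([score(ind) for ind in population])
        order = np.argsort(fitness)[::-1]
        parents = [population[i] for i in order[: pop // 2]]   # truncation selection
        children = []
        while len(children) < pop - len(parents):
            ia, ib = rng.choice(len(parents), 2, replace=False)
            pool = np.union1d(parents[ia], parents[ib])        # crossover: sample from the union
            child = rng.choice(pool, k, replace=False)
            if rng.random() < 0.2:                             # mutation: swap in a new sensor
                new = rng.integers(n_candidates)
                if new not in child:
                    child[rng.integers(k)] = new
            children.append(child)
        population = parents + children
    return max(population, key=score)

best = ga_sensor_subset(lambda s: float(s.sum()), n_candidates=40, k=8)  # toy objective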
Transforming fragments into candidates: small becomes big in medicinal chemistry.
de Kloe, Gerdien E; Bailey, David; Leurs, Rob; de Esch, Iwan J P
2009-07-01
Fragment-based drug discovery (FBDD) represents a logical and efficient approach to lead discovery and optimisation. It can draw on structural, biophysical and biochemical data, incorporating a wide range of inputs, from precise mode-of-binding information on specific fragments to wider-ranging pharmacophoric screening surveys using traditional HTS approaches. It is truly an enabling technology for the imaginative medicinal chemist. In this review, we analyse a representative set of 23 published FBDD studies that describe how low-molecular-weight fragments are being identified and efficiently transformed into higher-molecular-weight drug candidates. FBDD is now warmly endorsed by industry as well as academia, and its focus on small interacting molecules is making a big scientific impact.
A novel sleep optimisation programme to improve athletes' well-being and performance.
Van Ryswyk, Emer; Weeks, Richard; Bandick, Laura; O'Keefe, Michaela; Vakulin, Andrew; Catcheside, Peter; Barger, Laura; Potter, Andrew; Poulos, Nick; Wallace, Jarryd; Antic, Nick A
2017-03-01
To improve well-being and performance indicators in a group of Australian Football League (AFL) players via a six-week sleep optimisation programme. This was a prospective intervention study following observations suggestive of reduced sleep and excessive daytime sleepiness in an AFL group. Athletes from the Adelaide Football Club were invited to participate if they had played AFL senior-level football for 1-5 years, or if they had excessive daytime sleepiness (Epworth Sleepiness Scale [ESS] score >10). An initial education session explained normal sleep needs and how to achieve increased sleep duration and quality. Participants (n = 25) received ongoing feedback on their sleep, and a mid-programme education and feedback session. Sleep duration, quality and related outcomes were measured during week one and at the conclusion of the six-week intervention period using sleep diaries, actigraphy, the ESS, the Pittsburgh Sleep Quality Index, the Profile of Mood States, the Training Distress Scale, the Perceived Stress Scale and the Psychomotor Vigilance Task. Sleep diaries demonstrated an increase in total sleep time of approximately 20 min (498.8 ± 53.8 min to 518.7 ± 34.3 min; p < .05) and a 2% increase in sleep efficiency (p < .05). There was a corresponding increase in vigour (p < .001) and decrease in fatigue (p < .05). Improvements in measures of sleep efficiency, fatigue and vigour indicate that a sleep optimisation programme may improve athletes' well-being. More research is required into the effects of sleep optimisation on athletic performance.
Bahia, Daljit; Cheung, Robert; Buchs, Mirjam; Geisse, Sabine; Hunt, Ian
2005-01-01
This report describes a method to culture insect cells in 24 deep-well blocks for the routine small-scale optimisation of baculovirus-mediated protein expression experiments. Miniaturisation of this process provides the necessary reduction in resource allocation, reagents and labour to allow extensive and rapid optimisation of expression conditions, with a concomitant reduction in the lead time before commencement of large-scale bioreactor experiments. This greatly simplifies the optimisation process and allows the use of liquid-handling robotics in much of the initial optimisation stages, thereby greatly increasing the throughput of the laboratory. We present several examples of the use of deep-well block expression studies in the optimisation of therapeutically relevant protein targets. We also discuss how the enhanced throughput offered by this approach can be adapted to robotic handling systems and the implications this has for the capacity to conduct multi-parallel protein expression studies.
Ribera, Esteban; Martínez-Sesmero, José Manuel; Sánchez-Rubio, Javier; Rubio, Rafael; Pasquau, Juan; Poveda, José Luis; Pérez-Mitru, Alejandro; Roldán, Celia; Hernández-Novoa, Beatriz
2018-03-01
The objective of this study is to estimate the economic impact associated with the optimisation of triple antiretroviral treatment (ART) in patients with an undetectable viral load, according to the recommendations of the GeSIDA/PNS (2015) Consensus, and their applicability in Spanish clinical practice. A pharmacoeconomic model was developed based on data from a National Hospital Prescription Survey on ART (2014) and the A-I evidence recommendations for the optimisation of ART from the GeSIDA/PNS (2015) Consensus. The optimisation model took into account the willingness to optimise a particular regimen and other assumptions, and the results were validated by an expert panel in HIV infection (infectious disease specialists and hospital pharmacists). The analysis was conducted from the NHS perspective, considering the annual wholesale price and accounting for the deductions stated in RD-Law 8/2010 and VAT. The expert panel selected six optimisation strategies and estimated that 10,863 (13.4%) of the 80,859 patients in Spain currently on triple ART would be candidates to optimise their ART, leading to savings of €15.9M/year (2.4% of the total triple ART drug cost). The most feasible strategies (>40% of candidate patients, n=4,556) would be optimisations to ATV/r+3TC therapy. These would produce savings of between €653 and €4,797 per patient per year, depending on the baseline triple ART. Implementation of the main optimisation strategies recommended in the GeSIDA/PNS (2015) Consensus in Spanish clinical practice would lead to considerable savings, especially those based on dual therapy with ATV/r+3TC, thus contributing to the control of pharmaceutical expenditure and NHS sustainability. Copyright © 2016 Elsevier España, S.L.U. and Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica. All rights reserved.
Torres Padrón, M E; Sosa Ferrera, Z; Santana Rodríguez, J J
2006-09-01
A solid-phase microextraction (SPME) procedure using two commercial fibers coupled with high-performance liquid chromatography (HPLC) is presented for the extraction and determination of organochlorine pesticides in water samples. We have evaluated the extraction efficiency for this kind of compound using two different fibers: 60-μm polydimethylsiloxane-divinylbenzene (PDMS-DVB) and Carbowax/TPR-100 (CW/TPR). Parameters involved in the extraction and desorption procedures (e.g. extraction time, ionic strength, extraction temperature, desorption and soaking time) were studied and optimized to achieve maximum efficiency. The results indicate that both the PDMS-DVB and CW/TPR fibers are suitable for the extraction of this type of compound, and that a simple calibration curve method based on simple aqueous standards can be used. All the correlation coefficients were better than 0.9950, and the RSDs ranged from 7% to 13% for the 60-μm PDMS-DVB fiber and from 3% to 10% for the CW/TPR fiber. The optimized procedures were applied to the determination of a mixture of six organochlorine pesticides in environmental liquid samples (sea, sewage and ground waters), employing HPLC with a UV diode-array detector.
Factors affecting weld root morphology in laser keyhole welding
NASA Astrophysics Data System (ADS)
Frostevarg, Jan
2018-02-01
Welding production efficiency is usually optimised if full penetration can be achieved in a single pass. Techniques such as electron and laser beam welding offer deep, high-speed keyhole welding, especially since multi-kilowatt lasers became available. However, these techniques have limitations when weld imperfections such as weld cap undercuts, interior porosity or humps at the root are considered. The thickness of sheets that can be welded with full penetration is practically limited by these root humps. The mechanisms behind root morphology formation are not yet satisfactorily understood. In this paper, root humping is studied by reviewing previous studies and findings, and also by sample examination and process observation by high-speed imaging. Different process regimes governing root quality are presented, categorised and explained. Even though this study mainly covers laser beam and laser-arc hybrid welding, the presented findings can generally be applied to full-penetration welding of medium to thick sheets, especially the discussion of surface tension effects. As a final result of this analysis, a map of methods to optimise weld root topology is presented.
PyEvolve: a toolkit for statistical modelling of molecular evolution.
Butterfield, Andrew; Vedagiri, Vivek; Lang, Edward; Lawrence, Cath; Wakefield, Matthew J; Isaev, Alexander; Huttley, Gavin A
2004-01-05
Examining the distribution of variation has proven an extremely profitable technique in the effort to identify sequences of biological significance. Most approaches in the field, however, evaluate only the conserved portions of sequences - ignoring the biological significance of sequence differences. A suite of sophisticated likelihood-based statistical models from the field of molecular evolution provides the basis for extracting the information from the full distribution of sequence variation. The number of different problems to which phylogeny-based maximum likelihood calculations can be applied is extensive. Available software packages that can perform likelihood calculations suffer from a lack of flexibility and scalability, or employ error-prone approaches to model parameterisation. Here we describe the implementation of PyEvolve, a toolkit for the application of existing, and the development of new, statistical methods for molecular evolution. We present the object architecture and design schema of PyEvolve, which includes an adaptable multi-level parallelisation schema. The approach for defining new methods is illustrated by implementing a novel dinucleotide model of substitution that includes a parameter for the mutation of methylated CpGs, which required 8 lines of standard Python code to define. Benchmarking was performed using either a dinucleotide or codon substitution model applied to an alignment of BRCA1 sequences from 20 mammals, or a 10-species subset. Up to five-fold parallel performance gains over serial execution were recorded. Compared with leading alternative software, PyEvolve exhibited significantly better real-world performance for parameter-rich models with a large data set, reducing the time required for optimisation from approximately 10 days to approximately 6 hours. PyEvolve provides flexible functionality that can be used either for statistical modelling of molecular evolution or for the development of new methods in the field. The toolkit can be used interactively or by writing and executing scripts. The toolkit uses efficient processes for specifying the parameterisation of statistical models and implements numerous optimisations that make highly parameter-rich likelihood functions solvable within hours on multi-CPU hardware. PyEvolve can be readily adapted in response to changing computational demands and hardware configurations to maximise performance. PyEvolve is released under the GPL and can be downloaded from http://cbis.anu.edu.au/software.
NASA Astrophysics Data System (ADS)
Suja Priyadharsini, S.; Edward Rajan, S.; Femilin Sheniha, S.
2016-03-01
Electroencephalogram (EEG) is the recording of the electrical activity of the brain. It is contaminated by other biological signals, collectively called artefacts, such as the cardiac signal (electrocardiogram), signals generated by eye movements/eye blinks (electrooculogram) and muscular signals (electromyogram). Optimisation is an important tool for solving many real-world problems. In the proposed work, artefact removal based on the adaptive neuro-fuzzy inference system (ANFIS) is employed, with the parameters of ANFIS optimised by the Artificial Immune System (AIS) algorithm (ANFIS-AIS). Implementation results show that ANFIS-AIS is more effective than ANFIS alone in removing artefacts from the EEG signal. Furthermore, an improved AIS (IAIS) is developed by including suitable selection processes in the AIS algorithm. The performance of the proposed IAIS method is compared with AIS and with a genetic algorithm (GA). Measures such as signal-to-noise ratio, mean square error (MSE), correlation coefficient, power spectral density plot and convergence time are used to analyse the performance of the proposed method. From the results, it is found that the IAIS algorithm converges faster than AIS and performs better than both AIS and GA. Hence, IAIS-tuned ANFIS (ANFIS-IAIS) is effective in removing artefacts from EEG signals.
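For reference, the evaluation measures named above can be computed in a few lines. The sketch below assumes the standard definitions of SNR, MSE and correlation coefficient and uses synthetic stand-in signals rather than real EEG; the ANFIS/IAIS cleaning step itself is not shown.

    # Scoring a denoised signal against a reference with the metrics the
    # abstract names (MSE, SNR, correlation); signals are synthetic stand-ins.
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0, 2, 1000)
    clean = np.sin(2 * np.pi * 10 * t)                      # artefact-free reference
    denoised = clean + 0.05 * rng.standard_normal(t.size)   # stand-in for ANFIS output

    mse = np.mean((clean - denoised) ** 2)
    snr_db = 10 * np.log10(np.sum(clean ** 2) / np.sum((clean - denoised) ** 2))
    corr = np.corrcoef(clean, denoised)[0, 1]
    print(f"MSE={mse:.4g}  SNR={snr_db:.1f} dB  r={corr:.3f}")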
Using modified fruit fly optimisation algorithm to perform the function test and case studies
NASA Astrophysics Data System (ADS)
Pan, Wen-Tsao
2013-06-01
Evolutionary computation is a computing paradigm established by simulating natural evolutionary processes based on Darwinian theory, and it is a common research method; it has grown to include concepts from animal foraging and group behaviour. The main contribution of this paper is to strengthen the search for the optimal solution in the fruit fly optimization algorithm (FOA), in order to avoid convergence to local extrema. This study discussed three common evolutionary computation methods and compared them with the modified fruit fly optimization algorithm (MFOA). It investigated the algorithms' ability to compute the extreme values of three mathematical functions, their execution speed, and the forecast ability of forecasting models built using optimised general regression neural network (GRNN) parameters. The findings indicated no obvious difference between particle swarm optimization and the MFOA in the ability to compute extreme values; however, both were better than the artificial fish swarm algorithm and FOA. In addition, the MFOA executed faster than particle swarm optimization, and the forecast ability of the forecasting model built using the MFOA-optimised GRNN parameters was better than that of the other three forecasting models.
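The basic FOA loop is simple enough to sketch. The following minimal Python version implements the plain FOA as commonly described in the literature (not the paper's MFOA modifications), with a placeholder objective function; it shows the smell-concentration search that the MFOA reinforces.

    # Minimal sketch of the basic fruit fly optimisation algorithm (FOA) loop;
    # the MFOA modifications of the paper are not reproduced here.
    import numpy as np

    def fitness(s):                  # placeholder objective: minimise (s - 5)^2
        return (s - 5.0) ** 2

    rng = np.random.default_rng(1)
    x_axis, y_axis = rng.uniform(0, 1, 2)     # initial swarm location
    best_val = np.inf
    for _ in range(200):                      # iterations
        x = x_axis + rng.uniform(-1, 1, 30)   # 30 flies search randomly by smell
        y = y_axis + rng.uniform(-1, 1, 30)
        dist = np.sqrt(x ** 2 + y ** 2)
        s = 1.0 / dist                        # smell concentration judgement value
        vals = fitness(s)
        i = np.argmin(vals)
        if vals[i] < best_val:                # swarm flies towards the best smell
            best_val = vals[i]
            x_axis, y_axis = x[i], y[i]
    print(best_val, 1.0 / np.sqrt(x_axis ** 2 + y_axis ** 2))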
Soh, Josephine Lay Peng; Grachet, Maud; Whitlock, Mark; Lukas, Timothy
2013-02-01
This is a study to fully assess a commercially available co-processed mannitol for its usefulness as an off-the-shelf excipient for developing orally disintegrating tablets (ODTs) by direct compression on a pilot scale (up to 4 kg). This work encompassed material characterization, formulation optimisation and process robustness. Overall, this co-processed mannitol possessed favourable physical attributes including low hygroscopicity and compactibility. Two design-of-experiments (DoEs) were used to screen and optimise the placebo formulation. Xylitol and crospovidone concentrations were found to have the most significant impact on disintegration time (p < 0.05). Higher xylitol concentrations retarded disintegration. Avicel PH102 promoted faster disintegration than PH101, at higher levels of xylitol. Without xylitol, higher crospovidone concentrations yielded faster disintegration and reduced tablet friability. Lubrication sensitivity studies were later conducted at two fill loads, three levels for lubricant concentration and number of blend rotations. Even at 75% fill load, the design space plot showed that 1.5% lubricant and 300 blend revolutions were sufficient to manufacture ODTs with ≤ 0.1% friability and disintegrated within 15 s. This study also describes results using a modified disintegration method based on the texture analyzer as an alternative to the USP method.
Das, Anup Kumar; Mandal, Vivekananda; Mandal, Subhash C
2014-01-01
Extraction is the very first step in research on natural products for drug discovery, and a poorly optimised and planned extraction methodology can jeopardise the entire mission. The objective was to provide a vivid picture of different chemometric tools and planning for process optimisation and method development in the extraction of botanical material, with emphasis on microwave-assisted extraction (MAE). A review of studies involving the application of chemometric tools in combination with MAE of botanical materials was undertaken in order to identify the significant extraction factors. Experimental design, or statistical design of experiments (DoE), a core area of chemometrics, was then used for statistical analysis and interpretation, so that a response could be optimised by fine-tuning those factors. In this review, a brief explanation of the different aspects and methodologies related to MAE of botanical materials that were subjected to experimental design is presented, along with some general chemometric tools and the steps involved in the practice of MAE. A detailed study of the various factors and responses involved in the optimisation is also presented. This article will assist in obtaining better insight into the chemometric strategies of process optimisation and method development, which will in turn improve decision-making in selecting influential extraction parameters. Copyright © 2013 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Sivadas, Namitha Arackal; Mohammed, Sameer Saheerudeen
2017-02-01
In non-contiguous orthogonal frequency division multiplexing (NC-OFDM)-based interweave cognitive radio networks, the sidelobe power of secondary users (SUs) must be strictly controlled to avoid interference between the SUs and the primary users (PUs) of the adjacent bands. Similarly, the inherent issue of the high peak-to-average power ratio (PAPR) of the OFDM signal is another drawback of cognitive radio communication systems based on NC-OFDM technology. A few methods are available in the literature to solve either of these problems individually; in this paper, we propose a new method for the joint minimisation of sidelobe power and PAPR in NC-OFDM-based cognitive radio networks using Zadoff-Chu (ZC) sequences. In this method, the PUs benefit from the suppression of the SUs' sidelobe power, while the PAPR of the SUs is reduced. We model a new optimisation problem for minimising the sidelobe power with a constraint on the maximum tolerable PAPR and sidelobe power. The proper selection of the ZC sequence, which is crucial for mitigating both issues simultaneously, is achieved by solving the proposed optimisation problem. The proposed technique is shown to provide 7 dB and 20 dB reductions in PAPR and sidelobe power, respectively, without causing any signal distortion, along with an improvement in bit error rate (BER) performance.
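A ZC sequence has constant amplitude by construction, which is what makes it attractive for PAPR control. The sketch below uses the standard ZC definition for odd lengths; the root and length are illustrative choices, and the paper's joint sidelobe/PAPR optimisation is not reproduced, only the PAPR measurement.

    # Generate a Zadoff-Chu sequence and measure the PAPR of the resulting
    # oversampled OFDM time-domain symbol.
    import numpy as np

    def zadoff_chu(root, length):        # standard ZC definition for odd length
        n = np.arange(length)
        return np.exp(-1j * np.pi * root * n * (n + 1) / length)

    def papr_db(x):
        p = np.abs(x) ** 2
        return 10 * np.log10(p.max() / p.mean())

    N = 63                                # odd ZC length (assumed)
    zc = zadoff_chu(root=25, length=N)    # root coprime with N (assumed)
    time_symbol = np.fft.ifft(zc, 256)    # oversampled IFFT to capture true peaks
    print(f"PAPR of ZC-based symbol: {papr_db(time_symbol):.2f} dB")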
A Robust, Water-Based, Functional Binder Framework for High-Energy Lithium-Sulfur Batteries.
Lacey, Matthew J; Österlund, Viking; Bergfelt, Andreas; Jeschull, Fabian; Bowden, Tim; Brandell, Daniel
2017-07-10
We report here a water-based functional binder framework for the lithium-sulfur battery systems, based on the general combination of a polyether and an amide-containing polymer. These binders are applied to positive electrodes optimised towards high-energy electrochemical performance based only on commercially available materials. Electrodes with up to 4 mAh cm⁻² capacity and 97-98% coulombic efficiency are achievable in electrodes with a 65% total sulfur content and a poly(ethylene oxide):poly(vinylpyrrolidone) (PEO:PVP) binder system. Exchange of either binder component for a different polymer with similar functionality preserves the high capacity and coulombic efficiency. The improvement in coulombic efficiency from the inclusion of the coordinating amide group was also observed in electrodes where pyrrolidone moieties were covalently grafted to the carbon black, indicating the role of this functionality in facilitating polysulfide adsorption to the electrode surface. The mechanical properties of the electrodes appear not to significantly influence sulfur utilisation or coulombic efficiency in the short term but rather determine retention of these properties over extended cycling. These results demonstrate the robustness of this very straightforward approach, as well as the considerable scope for designing binder materials with targeted properties. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Altunay, Nail; Gürkan, Ramazan
2017-03-01
A new and simple ultrasonic-assisted extraction (UAE) procedure was developed for the determination of inorganic arsenic and selenium in foods by hydride-generation atomic absorption spectrometry (HG-AAS). The various analytical variables affecting complex formation and extraction efficiency were investigated and optimised. The method is based on selective complex formation of As(III) and Se(IV) in the presence of excess As(V) and Se(VI) with toluidine red in the presence of tartaric acid at pH 4.5, and then extraction of the resulting condensation products into the micellar phase of the non-ionic surfactant polyethylene glycol dodecyl ether, Brij 35. Under optimised conditions, good linear relationships were obtained in the ranges of 4-225 and 12-400 ng l⁻¹ with limits of detection of 1.1 and 3.5 ng l⁻¹ for As(III) and Se(IV), respectively. The repeatability was better than 3.9% for both analytes (n = 10, 25 ng l⁻¹), while reproducibility ranged from 4.2% to 4.8%. The recoveries of As(III) and Se(IV) spiked at 25-100 ng l⁻¹ were in the range of 94.2-104.8%. After pre-concentration of a 5.0 ml sample, the sensitivity enhancement factors for As(III) and Se(IV) were 185 and 140, respectively. Accuracy was assessed by analysis of two standard reference materials (SRMs) and spiked recovery experiments. The method was successfully applied to the accurate and reliable determination of total As and total Se by HG-AAS after pre-reduction with a mixture of L-cysteine and tartaric acid. Finally, the method was shown to be rapid and sensitive, with good results for extraction, pre-concentration and determination of total As and Se contents (as As(III) and Se(IV)) from food samples.
2016-10-31
[Fragment of a report abstract on stochastic level-set shape optimisation; only snippets survived extraction:] ... statistical physics. Sec. IV includes several examples of the application of the stochastic method, including matching of a shape to a fixed design ... an important part of any future application of this method. Second, re-initialization of the level set can lead to small but significant movements of ... of engineering design problems [6, 17]. However, many of the relevant applications involve non-convex optimisation problems with multiple locally ...
Bratkowska, D; Fontanals, N; Cormack, P A G; Borrull, F; Marcé, R M
2012-02-17
A monolithic, hydrophilic stir bar coating based upon a copolymer of methacrylic acid and divinylbenzene [poly(MAA-co-DVB)] was synthesised and evaluated as a new polymeric phase for the stir bar sorptive extraction (SBSE) of polar compounds from complex environmental water samples. The experimental conditions for the extraction and liquid desorption in SBSE were optimised. Liquid chromatography-triple quadrupole mass spectrometry (LC-MS/MS) was used for the determination of a group of polar pharmaceuticals in environmental water matrices. The extraction performance of the poly(MAA-co-DVB) stir bar was compared to the extraction performance of a commercially available polydimethylsiloxane stir bar; it was found that the former gave rise to significantly higher extraction efficiency of polar analytes (% recovery values near to 100% for most of the studied analytes) than the commercial product. The developed method was applied to determine the studied analytes at low ng L⁻¹ in different complex environmental water samples. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Sur, Chiranjib; Shukla, Anupam
2018-03-01
The Bacteria Foraging Optimisation algorithm is a collective-behaviour-based meta-heuristic search that depends on the social influence of the bacteria co-agents in the search space of the problem. The algorithm faces considerable hindrance in its application to discrete and graph-based problems owing to its biased mathematical modelling and dynamic structure. This motivated the revival and introduction of a discrete form, the Discrete Bacteria Foraging Optimisation (DBFO) algorithm, for discrete problems, which in real life outnumber the continuous-domain problems representable by mathematical and numerical equations. In this work, we simulate a graph-based multi-objective road optimisation problem and discuss the prospect of DBFO's utilisation in other similar optimisation and graph-based problems. The various solution representations that can be handled by DBFO are also discussed. The implications and dynamics of the various parameters used in DBFO are illustrated from the point of view of the problems, combining both exploration and exploitation. The results of DBFO are compared with the Ant Colony Optimisation and Intelligent Water Drops algorithms. An important feature of DBFO is that the bacteria agents do not depend on local heuristic information but estimate new exploration schemes from previous experience and covered-path analysis. This makes the algorithm better at generating combinations for graph-based problems and NP-hard problems.
NASA Astrophysics Data System (ADS)
Dal Bianco, N.; Lot, R.; Matthys, K.
2018-01-01
This work regards the design of an electric motorcycle for the annual Isle of Man TT Zero Challenge. Optimal control theory was used to perform lap time simulation and design optimisation. A bespoke model was developed, featuring 3D road topology, vehicle dynamics and an electric power train composed of a lithium battery pack, brushed DC motors and a motor controller. The model runs simulations over the entire length of the Snaefell Mountain Course. The work is validated using experimental data from the BX chassis of the Brunel Racing team, which ran during the 2009 to 2015 TT Zero races. Optimal control is used to improve drive train and power train configurations. Findings demonstrate computational efficiency, good lap time prediction and design optimisation potential, achieving a two-minute reduction of the reference lap time through changes in final drive gear ratio, battery pack size and motor configuration.
De Greef, J; Villani, K; Goethals, J; Van Belle, H; Van Caneghem, J; Vandecasteele, C
2013-11-01
Due to ongoing developments in the EU waste policy, Waste-to-Energy (WtE) plants are to be optimized beyond current acceptance levels. In this paper, a non-exhaustive overview of advanced technical improvements is presented and illustrated with facts and figures from state-of-the-art combustion plants for municipal solid waste (MSW). Some of the data included originate from regular WtE plant operation - before and after optimisation - as well as from defined plant-scale research. Aspects of energy efficiency and (re-)use of chemicals, resources and materials are discussed and support, in light of best available techniques (BAT), the idea that WtE plant performance still can be improved significantly, without direct need for expensive techniques, tools or re-design. In first instance, diagnostic skills and a thorough understanding of processes and operations allow for reclaiming the silent optimisation potential. Copyright © 2013 Elsevier Ltd. All rights reserved.
Anstey, Kaarin J; Bielak, Allison AM; Birrell, Carole L; Browning, Colette J; Burns, Richard A; Byles, Julie; Kiley, Kim M; Nepal, Binod; Ross, Lesley A; Steel, David; Windsor, Timothy D
2014-01-01
Aim: To describe the Dynamic Analyses to Optimise Ageing (DYNOPTA) project and illustrate its contributions to understanding ageing through innovative methodology and investigations on outcomes based on the project themes. DYNOPTA provides a platform and technical expertise that may be used to combine other national and international datasets. Method: The DYNOPTA project has pooled and harmonized data from nine Australian longitudinal studies to create the largest available longitudinal dataset (N = 50,652) on ageing in Australia. Results: A range of findings have resulted from the study to date, including methodological advances, prevalence rates of disease and disability, and mapping trajectories of ageing with and without increasing morbidity. DYNOPTA also forms the basis of a microsimulation model that will provide projections of future costs of disease and disability for the baby boomer cohort. Conclusion: DYNOPTA contributes significantly to the Australian evidence base on ageing to inform key social and health policy domains. PMID:22032767
On the performance of energy detection-based CR with SC diversity over IG channel
NASA Astrophysics Data System (ADS)
Verma, Pappu Kumar; Soni, Sanjay Kumar; Jain, Priyanka
2017-12-01
Cognitive radio (CR) is a viable 5G technology to address the scarcity of the spectrum. Energy detection-based sensing is known to be the simplest method as far as hardware complexity is concerned. In this paper, the performance of the spectrum-sensing energy detection technique in CR networks over an inverse Gaussian channel with the selection combining diversity technique is analysed. More specifically, accurate analytical expressions for the average detection probability under different detection scenarios, i.e. a single channel (no diversity) and diversity reception, are derived and evaluated. Further, the detection threshold parameter is optimised by minimising the probability of error over several diversity branches. The results clearly show a significant improvement in the probability of detection when the optimised threshold parameter is applied. The impact of shadowing parameters on the performance of the energy detector is studied in terms of the complementary receiver operating characteristic curve. To verify the correctness of our analysis, the derived analytical expressions are corroborated via exact results and Monte Carlo simulations.
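The core of energy detection is easy to reproduce numerically. The following Monte Carlo sketch considers AWGN only, omitting the paper's inverse Gaussian shadowing and SC diversity: it sets a threshold for a target false-alarm probability and then estimates the detection probability.

    # Monte Carlo energy detection: energies under H0 (noise) and H1
    # (signal + noise), threshold set for a target Pf, Pd estimated empirically.
    import numpy as np

    rng = np.random.default_rng(2)
    N, trials, snr = 100, 20000, 10 ** (0 / 10)   # samples, trials, 0 dB SNR
    noise = rng.standard_normal((trials, N))
    signal = np.sqrt(snr) * rng.standard_normal((trials, N))

    e_h0 = np.sum(noise ** 2, axis=1)             # test statistic under H0
    e_h1 = np.sum((signal + noise) ** 2, axis=1)  # test statistic under H1

    lam = np.quantile(e_h0, 0.99)                 # threshold for target Pf = 0.01
    print(f"Pf = {np.mean(e_h0 > lam):.3f}, Pd = {np.mean(e_h1 > lam):.3f}")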
Optimisation of SOA-REAMs for hybrid DWDM-TDMA PON applications.
Naughton, Alan; Antony, Cleitus; Ossieur, Peter; Porto, Stefano; Talli, Giuseppe; Townsend, Paul D
2011-12-12
We demonstrate how loss-optimised, gain-saturated SOA-REAM-based reflective modulators can reduce the burst-to-burst power variations due to differential access loss in the upstream path of carrier-distributed passive optical networks by 18 dB compared with fixed linear gain modulators. We also show that the loss-optimised device has a high tolerance to input power variations and can operate in deep saturation with minimal patterning penalties. Finally, we demonstrate that an optimised device can operate across the C-band and also over a transmission distance of 80 km. © 2011 Optical Society of America
NASA Astrophysics Data System (ADS)
Uyttenhove, W.; Sobolev, V.; Maschek, W.
2011-09-01
A potential option for neutralization of minor actinides (MA) accumulated in the spent nuclear fuel of light water reactors (LWRs) is their transmutation in dedicated accelerator-driven systems (ADS). A promising fuel candidate dedicated to MA transmutation is a CERMET composite with a Mo metal matrix and (Pu,Np,Am,Cm)O2-x fuel particles. Results of optimisation studies of the CERMET fuel aimed at increasing the MA transmutation efficiency of the EFIT (European Facility for Industrial Transmutation) core are presented. In the adopted strategy of MA burning, the plutonium (Pu) balance of the core is minimized, allowing a reduction in the reactivity swing and the peak power form-factor deviation and an extension of the cycle duration. The MA/Pu ratio is used as a variable for the fuel optimisation studies. The efficiency of MA transmutation is close to the foreseen theoretical value of 42 kg TW⁻¹ h⁻¹ when the level of Pu in the actinide mixture is about 40 wt.%. The obtained results are compared with the reference case of the EFIT core loaded with the composite CERCER fuel, where fuel particles are incorporated in a ceramic magnesia matrix. The results of this study offer additional information for the EFIT fuel selection.
Abdelbary, A.; El-gendy, N. A.; Hosny, A.
2012-01-01
Glipizide is an effective antidiabetic agent; however, it suffers from a relatively short biological half-life. To overcome this limitation, it is a prospective candidate for extended-release microcapsules. Microencapsulation of glipizide with a coat of alginate, alone or in combination with chitosan or carbomer 934P, was prepared employing an ionotropic gelation process. The prepared microcapsules were evaluated in vitro by microscopic examination and determination of particle size, yield and microencapsulation efficiency. The filled capsules were assessed for content uniformity and drug release characteristics. A stability study of the optimised formulas was carried out at three different temperatures over 12 weeks. An in vivo bioavailability study and the hypoglycemic activity of C9 microcapsules were assessed in albino rabbits. All formulas achieved high yield, microencapsulation efficiency and extended t1/2. C9 and C19 microcapsules attained the best results in all tests and complied with the dissolution requirements for extended-release dosage forms. These two formulas were selected for stability studies. C9 exhibited a longer shelf-life and was hence chosen for in vivo studies. C9 microcapsules showed an improvement in drug bioavailability and significant hypoglycemic activity compared with immediate-release tablets (Minidiab® 5 mg). The optimised microcapsule formulation was found to produce extended antidiabetic activity. PMID:23626387
NASA Astrophysics Data System (ADS)
Hazwan, M. H. M.; Shayfull, Z.; Sharif, S.; Nasir, S. M.; Zainal, N.
2017-09-01
In the injection moulding process, quality and productivity are notably important and must be controlled for each product type produced. Quality is measured as the extent of warpage of the moulded parts, while productivity is measured as the duration of the moulding cycle time. To control quality, many researchers have introduced various optimisation approaches which have been proven to enhance the quality of the moulded part. To improve the productivity of the injection moulding process, some researchers have proposed the application of conformal cooling channels, which have been proven to reduce the moulding cycle time. Therefore, this paper presents an alternative optimisation approach, Response Surface Methodology (RSM) with Glowworm Swarm Optimisation (GSO), applied to a moulded part with straight-drilled and conformal cooling channel moulds. This study examined the warpage of the moulded parts before and after the optimisation work for both cooling channel types. A front panel housing was selected as a specimen, and the performance of the proposed optimisation approach was analysed on conventional straight-drilled cooling channels compared with Milled Groove Square Shape (MGSS) conformal cooling channels by simulation analysis using Autodesk Moldflow Insight (AMI) 2013. Based on the results, melt temperature is the most significant factor contributing to warpage for the straight-drilled cooling channels, for which warpage was reduced by 39.1% after optimisation, while cooling time is the most significant factor for the MGSS conformal cooling channels, for which warpage was reduced by 38.7% after optimisation. In addition, the findings show that applying the optimisation work to the conformal cooling channels offers better quality and productivity of the moulded part.
Advantages of Task-Specific Multi-Objective Optimisation in Evolutionary Robotics
Trianni, Vito; López-Ibáñez, Manuel
2015-01-01
The application of multi-objective optimisation to evolutionary robotics is receiving increasing attention. A survey of the literature reveals the different possibilities it offers to improve the automatic design of efficient and adaptive robotic systems, and points to the successful demonstrations available for both task-specific and task-agnostic approaches (i.e., with or without reference to the specific design problem to be tackled). However, the advantages of multi-objective approaches over single-objective ones have not been clearly spelled out and experimentally demonstrated. This paper fills this gap for task-specific approaches: starting from well-known results in multi-objective optimisation, we discuss how to tackle commonly recognised problems in evolutionary robotics. In particular, we show that multi-objective optimisation (i) allows evolving a more varied set of behaviours by exploring multiple trade-offs of the objectives to optimise, (ii) supports the evolution of the desired behaviour through the introduction of objectives as proxies, (iii) avoids the premature convergence to local optima possibly introduced by multi-component fitness functions, and (iv) solves the bootstrap problem exploiting ancillary objectives to guide evolution in the early phases. We present an experimental demonstration of these benefits in three different case studies: maze navigation in a single robot domain, flocking in a swarm robotics context, and a strictly collaborative task in collective robotics. PMID:26295151
Surface similarity-based molecular query-retrieval
Singh, Rahul
2007-01-01
Background: Discerning the similarity between molecules is a challenging problem in drug discovery as well as in molecular biology. The importance of this problem is due to the fact that the biochemical characteristics of a molecule are closely related to its structure. Molecular similarity is therefore a key notion in investigations targeting exploration of molecular structural space, query-retrieval in molecular databases, and structure-activity modelling. Determining molecular similarity is related to the choice of molecular representation. Currently, representations with high descriptive power and physical relevance, such as 3D surface-based descriptors, are available. Information from such representations is both surface-based and volumetric. However, most techniques for determining molecular similarity tend to focus on idealised 2D graph-based descriptors due to the complexity that accompanies reasoning with more elaborate representations. Results: This paper addresses the problem of determining similarity when molecules are described using complex surface-based representations. It proposes an intrinsic, spherical representation that systematically maps points on a molecular surface to points on a standard coordinate system (a sphere). Molecular surface properties such as shape, field strengths, and effects due to field super-positioning can then be captured as distributions on the surface of the sphere. Surface-based molecular similarity is subsequently determined by computing the similarity of the surface-property distributions using a novel formulation of histogram intersection. The similarity formulation is not only sensitive to the 3D distribution of the surface properties, but is also highly efficient to compute. Conclusion: The proposed method obviates the computationally expensive step of molecular pose-optimisation, can incorporate conformational variations, and facilitates highly efficient determination of similarity by directly comparing molecular surfaces and surface-based properties. Retrieval performance, applications in structure-activity modelling of complex biological properties, and comparisons with existing research and commercial methods demonstrate the validity and effectiveness of the approach. PMID:17634096
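Histogram intersection itself is a one-line similarity measure. The sketch below shows the plain normalised form on hypothetical property histograms; the paper's formulation on spherical surface-property distributions is a richer variant of this primitive.

    # Normalised histogram intersection: overlap of two distributions in [0, 1].
    import numpy as np

    def histogram_intersection(h1, h2):
        h1 = h1 / h1.sum()
        h2 = h2 / h2.sum()
        return np.minimum(h1, h2).sum()

    a = np.array([4., 3., 2., 1.])   # hypothetical surface-property histograms
    b = np.array([3., 3., 3., 1.])
    print(histogram_intersection(a, b))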
NASA Astrophysics Data System (ADS)
Desnijder, Karel; Hanselaer, Peter; Meuret, Youri
2016-04-01
A key requirement for obtaining a uniform luminance from a side-lit LED backlight is an optimised spatial pattern of the structures on the light guide that extract the light. The generation of such a scatter pattern is usually performed with an iterative approach. In each iteration, the luminance distribution of the backlight with a particular scatter pattern is analysed. This is typically done with a brute-force ray-tracing algorithm, although this approach results in a time-consuming optimisation process. In this study, the Adding-Doubling method is explored as an alternative way of evaluating the luminance of a backlight. Owing to the similarities between light propagating in a backlight with extraction structures and light scattering in a cloud of scatterers, the Adding-Doubling method, which is used to model the latter, can also be used to model the light distribution in a backlight. The backlight problem is translated into a form to which the Adding-Doubling method is directly applicable. The luminance calculated with the Adding-Doubling method for a simple uniform extraction pattern matches the luminance generated by a commercial ray tracer very well. Although successful, the method realises no clear computational advantage over ray tracers. However, the description of light propagation in a light guide used in the Adding-Doubling method also allows the efficiency of brute-force ray-tracing algorithms to be enhanced. The performance of this enhanced ray-tracing approach for the simulation of backlights is evaluated against a typical brute-force ray-tracing approach.
Giovino, Concetta; Ayensu, Isaac; Tetteh, John; Boateng, Joshua S
2012-05-30
Mucoadhesive chitosan-based films incorporating insulin-loaded nanoparticles (NPs) made of poly(ethylene glycol) methyl ether-block-polylactide (PEG-b-PLA) have been developed and characterised. Blank NPs were prepared by a double emulsion solvent evaporation technique with varying concentrations of the copolymer (5 and 10% w/v). The optimised formulation was loaded with insulin (model protein) at initial loadings of 2, 5 and 10% with respect to copolymer weight. The developed NPs were analysed for size, size distribution, surface charge, morphology, encapsulation efficiency and drug release. NPs showing a negative zeta (ζ) potential (< -6 mV), an average diameter > 300 nm and a polydispersity index (P.I.) of ≈ 0.2 were achieved, irrespective of the formulation process. Insulin encapsulation efficiencies of 70% and 30% were obtained for NPs-Insulin-2 and NPs-Insulin-5, respectively. The in vitro release behaviour of both formulations showed a classic biphasic sustained release of protein over 5 weeks, which was influenced by the pH of the release medium. Optimised chitosan films embedded with 3 mg of insulin-loaded NPs were produced by solvent casting, with homogeneous distribution of NPs in the mucoadhesive matrix, and displayed excellent physico-mechanical properties. The drug delivery system has been designed as a novel platform for potential buccal delivery of macromolecules. Copyright © 2012 Elsevier B.V. All rights reserved.
Blake, S J; McNamara, A L; Vial, P; Holloway, L; Kuncic, Z
2014-11-21
A Monte Carlo model of a novel electronic portal imaging device (EPID) has been developed using Geant4 and its performance for imaging and dosimetry applications in radiotherapy has been characterised. The EPID geometry is based on a physical prototype under ongoing investigation and comprises an array of plastic scintillating fibres in place of the metal plate/phosphor screen in standard EPIDs. Geometrical and optical transport parameters were varied to investigate their impact on imaging and dosimetry performance. Detection efficiency was most sensitive to variations in fibre length, achieving a peak value of 36% at 50 mm using 400 keV x-rays for the lengths considered. Increases in efficiency for longer fibres were partially offset by reductions in sensitivity. Removing the extra-mural absorber surrounding individual fibres severely decreased the modulation transfer function (MTF), highlighting its importance in maximising spatial resolution. Field size response and relative dose profile simulations demonstrated a water-equivalent dose response and thus the prototype's suitability for dosimetry applications. Element-to-element mismatch between scintillating fibres and underlying photodiode pixels resulted in a reduced MTF for high spatial frequencies and quasi-periodic variations in dose profile response. This effect is eliminated when fibres are precisely matched to underlying pixels. Simulations strongly suggest that with further optimisation, this prototype EPID may be capable of simultaneous imaging and dosimetry in radiotherapy.
Adjoint-Based Sensitivity Kernels for Glacial Isostatic Adjustment in a Laterally Varying Earth
NASA Astrophysics Data System (ADS)
Crawford, O.; Al-Attar, D.; Tromp, J.; Mitrovica, J. X.; Austermann, J.; Lau, H. C. P.
2017-12-01
We consider a new approach to both the forward and inverse problems in glacial isostatic adjustment. We present a method for forward modelling GIA in compressible and laterally heterogeneous earth models with a variety of linear and non-linear rheologies. Instead of using the so-called sea level equation, which must be solved iteratively, the forward theory we present consists of a number of coupled evolution equations that can be straightforwardly numerically integrated. We also apply the adjoint method to the inverse problem in order to calculate the derivatives of measurements of GIA with respect to the viscosity structure of the Earth. Such derivatives quantify the sensitivity of the measurements to the model. The adjoint method enables efficient calculation of continuous and laterally varying derivatives, allowing us to calculate the sensitivity of measurements of glacial isostatic adjustment to the Earth's three-dimensional viscosity structure. The derivatives have a number of applications within the inverse method. Firstly, they can be used within a gradient-based optimisation method to find a model which minimises some data misfit function. The derivatives can also be used to quantify the uncertainty in such a model and hence to provide understanding of which parts of the model are well constrained. Finally, they enable construction of measurements which provide sensitivity to a particular part of the model space. We illustrate both the forward and inverse aspects with numerical examples in a spherically symmetric earth model.
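The efficiency argument can be seen in a finite-dimensional toy problem: for A(m)u = f and misfit J = (1/2)||u - d||^2, a single adjoint solve yields dJ/dm, however the parameter enters A. The sketch below uses an illustrative 3x3 system (not a GIA model) and verifies the adjoint gradient against finite differences.

    # Adjoint gradient for A(m) u = f with J = 0.5 * ||u - d||^2:
    # solve A^T lambda = (u - d), then dJ/dm = -lambda^T (dA/dm) u.
    import numpy as np

    rng = np.random.default_rng(3)
    A0, dA = np.diag([2., 3., 4.]), rng.standard_normal((3, 3))
    f, d = rng.standard_normal(3), rng.standard_normal(3)

    def solve_u(m):
        return np.linalg.solve(A0 + m * dA, f)

    m = 0.1
    u = solve_u(m)
    lam = np.linalg.solve((A0 + m * dA).T, u - d)    # one adjoint solve
    grad_adj = -lam @ (dA @ u)                       # adjoint-based derivative

    J = lambda m: 0.5 * np.sum((solve_u(m) - d) ** 2)
    eps = 1e-6                                       # finite-difference check
    print(grad_adj, (J(m + eps) - J(m - eps)) / (2 * eps))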
Shape Optimisation of Holes in Loaded Plates by Minimisation of Multiple Stress Peaks
2015-04-01
[Report abstract fragment; repeated security markings removed:] Shape Optimisation of Holes in Loaded Plates by Minimisation of Multiple Stress Peaks, Witold Waldman and Manfred ... minimising the peak tangential stresses on multiple segments around the boundary of a hole in a uniaxially-loaded or biaxially-loaded plate. It is based ... Executive Summary: Aerospace ...
NASA Astrophysics Data System (ADS)
Jones, Adam; Utyuzhnikov, Sergey
2017-08-01
Turbulent flow in a ribbed channel is studied using an efficient near-wall domain decomposition (NDD) method. The NDD approach is formulated by splitting the computational domain into an inner and outer region, with an interface boundary between the two. The computational mesh covers the outer region, and the flow in this region is solved using the open-source CFD code Code_Saturne with special boundary conditions on the interface boundary, called interface boundary conditions (IBCs). The IBCs are of Robin type and incorporate the effect of the inner region on the flow in the outer region. IBCs are formulated in terms of the distance from the interface boundary to the wall in the inner region. It is demonstrated that up to 90% of the region between the ribs in the ribbed passage can be removed from the computational mesh with an error on the friction factor within 2.5%. In addition, computations with NDD are faster than computations based on low Reynolds number (LRN) models by a factor of five. Different rib heights can be studied with the same mesh in the outer region without affecting the accuracy of the friction factor. This is tested with six different rib heights in an example of a design optimisation study. It is found that the friction factors computed with NDD are almost identical to the fully-resolved results. When used for inverse problems, NDD is considerably more efficient than LRN computations because only one computation needs to be performed and only one mesh needs to be generated.
Unterseher, Martin; Schnittler, Martin
2009-05-01
Two cultivation-based isolation techniques, the incubation of leaf fragments (fragment plating) and dilution-to-extinction culturing on malt extract agar, were compared for the recovery of foliar endophytic fungi from Fagus sylvatica near Greifswald, north-east Germany. Morphological-anatomical characters of vegetative and sporulating cultures and ITS sequences were used to assign morphotypes and taxonomic information to the isolates. Data analysis included species-accumulation curves, richness estimators, multivariate statistics and null model testing. Fragment plating and extinction culturing were significantly complementary with regard to species composition, because around two-thirds of the 35 fungal taxa were isolated with only one of the two cultivation techniques. The difference in outcomes highlights the need for caution in assessing fungal biodiversity based on single isolation techniques. The efficiency of cultivation-based studies of fungal endophytes was significantly increased by combining the two isolation methods with estimations of species richness, as shown by comparison with a 20-year-old reference study that needed three times as many isolates from fragment plating alone to attain the same species richness. Intensified testing and optimisation of extinction culturing in endophyte research is advocated.
Shinozuka, Hiroshi; Cogan, Noel O I; Shinozuka, Maiko; Marshall, Alexis; Kay, Pippa; Lin, Yi-Han; Spangenberg, German C; Forster, John W
2015-04-11
Fragmentation at random nucleotide locations is an essential process for preparation of DNA libraries to be used on massively parallel short-read DNA sequencing platforms. Although instruments for physical shearing, such as the Covaris S2 focused-ultrasonicator system, and products for enzymatic shearing, such as the Nextera technology and NEBNext dsDNA Fragmentase kit, are commercially available, a simple and inexpensive method is desirable for high-throughput sequencing library preparation. MspJI is a recently characterised restriction enzyme which recognises the sequence motif CNNR (where R = G or A) when the first base is modified to 5-methylcytosine or 5-hydroxymethylcytosine. A semi-random enzymatic DNA amplicon fragmentation method was developed based on the unique cleavage properties of MspJI. In this method, random incorporation of 5-methyl-2'-deoxycytidine-5'-triphosphate is achieved through DNA amplification with DNA polymerase, followed by DNA digestion with MspJI. Due to the recognition sequence of the enzyme, DNA amplicons are fragmented in a relatively sequence-independent manner. The size range of the resulting fragments was capable of control through optimisation of 5-methyl-2'-deoxycytidine-5'-triphosphate concentration in the reaction mixture. A library suitable for sequencing using the Illumina MiSeq platform was prepared and processed using the proposed method. Alignment of generated short reads to a reference sequence demonstrated a relatively high level of random fragmentation. The proposed method may be performed with standard laboratory equipment. Although the uniformity of coverage was slightly inferior to the Covaris physical shearing procedure, due to efficiencies of cost and labour, the method may be more suitable than existing approaches for implementation in large-scale sequencing activities, such as bacterial artificial chromosome (BAC)-based genome sequence assembly, pan-genomic studies and locus-targeted genotyping-by-sequencing.
NASA Astrophysics Data System (ADS)
Lehtola, Susi; Parkhill, John; Head-Gordon, Martin
2018-03-01
We describe the implementation of orbital optimisation for the models in the perfect pairing hierarchy. Orbital optimisation, which is generally necessary to obtain reliable results, is pursued at perfect pairing (PP) and perfect quadruples (PQ) levels of theory for applications on linear polyacenes, which are believed to exhibit strong correlation in the π space. While local minima and σ-π symmetry breaking solutions were found for PP orbitals, no such problems were encountered for PQ orbitals. The PQ orbitals are used for single-point calculations at PP, PQ and perfect hextuples (PH) levels of theory, both only in the π subspace, as well as in the full σπ valence space. It is numerically demonstrated that the inclusion of single excitations is necessary also when optimised orbitals are used. PH is found to yield good agreement with previously published density matrix renormalisation group data in the π space, capturing over 95% of the correlation energy. Full-valence calculations made possible by our novel, efficient code reveal that strong correlations are weaker when larger basis sets or active spaces are employed than in previous calculations. The largest full-valence PH calculations presented correspond to a (192e,192o) problem.
Zuehlsdorff, T J; Hine, N D M; Payne, M C; Haynes, P D
2015-11-28
We present a solution of the full time-dependent density-functional theory (TDDFT) eigenvalue equation in the linear response formalism exhibiting a linear-scaling computational complexity with system size, without relying on the simplifying Tamm-Dancoff approximation (TDA). The implementation relies on representing the occupied and unoccupied subspaces with two different sets of in situ optimised localised functions, yielding a very compact and efficient representation of the transition density matrix of the excitation with the accuracy associated with a systematic basis set. The TDDFT eigenvalue equation is solved using a preconditioned conjugate gradient algorithm that is very memory-efficient. The algorithm is validated on a small test molecule and a good agreement with results obtained from standard quantum chemistry packages is found, with the preconditioner yielding a significant improvement in convergence rates. The method developed in this work is then used to reproduce experimental results of the absorption spectrum of bacteriochlorophyll in an organic solvent, where it is demonstrated that the TDA fails to reproduce the main features of the low energy spectrum, while the full TDDFT equation yields results in good qualitative agreement with experimental data. Furthermore, the need for explicitly including parts of the solvent into the TDDFT calculations is highlighted, making the treatment of large system sizes necessary that are well within reach of the capabilities of the algorithm introduced here. Finally, the linear-scaling properties of the algorithm are demonstrated by computing the lowest excitation energy of bacteriochlorophyll in solution. The largest systems considered in this work are of the same order of magnitude as a variety of widely studied pigment-protein complexes, opening up the possibility of studying their properties without having to resort to any semiclassical approximations to parts of the protein environment.
Yang, Lingjian; Ainali, Chrysanthi; Tsoka, Sophia; Papageorgiou, Lazaros G
2014-12-05
Applying machine learning methods on microarray gene expression profiles for disease classification problems is a popular method to derive biomarkers, i.e. sets of genes that can predict disease state or outcome. Traditional approaches where expression of genes were treated independently suffer from low prediction accuracy and difficulty of biological interpretation. Current research efforts focus on integrating information on protein interactions through biochemical pathway datasets with expression profiles to propose pathway-based classifiers that can enhance disease diagnosis and prognosis. As most of the pathway activity inference methods in literature are either unsupervised or applied on two-class datasets, there is good scope to address such limitations by proposing novel methodologies. A supervised multiclass pathway activity inference method using optimisation techniques is reported. For each pathway expression dataset, patterns of its constituent genes are summarised into one composite feature, termed pathway activity, and a novel mathematical programming model is proposed to infer this feature as a weighted linear summation of expression of its constituent genes. Gene weights are determined by the optimisation model, in a way that the resulting pathway activity has the optimal discriminative power with regards to disease phenotypes. Classification is then performed on the resulting low-dimensional pathway activity profile. The model was evaluated through a variety of published gene expression profiles that cover different types of disease. We show that not only does it improve classification accuracy, but it can also perform well in multiclass disease datasets, a limitation of other approaches from the literature. Desirable features of the model include the ability to control the maximum number of genes that may participate in determining pathway activity, which may be pre-specified by the user. Overall, this work highlights the potential of building pathway-based multi-phenotype classifiers for accurate disease diagnosis and prognosis problems.
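As a rough illustration of the idea, the sketch below infers a single pathway activity as a weighted sum of member-gene expression, choosing the weights to maximise a Fisher-style class-separation criterion on synthetic two-class data. This is a simplified stand-in for the paper's multiclass mathematical programming model and its gene-count constraints.

    # Toy pathway-activity inference: weights on member genes are optimised so
    # the weighted sum (the "activity") best separates two phenotype classes.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(4)
    n, g = 60, 5                                  # samples, genes in the pathway
    y = np.repeat([0, 1], n // 2)                 # two phenotypes
    X = rng.standard_normal((n, g)) + 0.8 * y[:, None]   # class 1 shifted upwards

    def neg_separation(w):
        a = X @ w                                 # pathway activity per sample
        m0, m1 = a[y == 0].mean(), a[y == 1].mean()
        s0, s1 = a[y == 0].var(), a[y == 1].var()
        return -((m1 - m0) ** 2) / (s0 + s1 + 1e-9)   # Fisher-style criterion

    w = minimize(neg_separation, x0=np.ones(g) / g, method="Nelder-Mead").x
    print("gene weights:", np.round(w / np.abs(w).sum(), 3))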
NASA Astrophysics Data System (ADS)
Maximov, Ivan I.; Vinding, Mads S.; Tse, Desmond H. Y.; Nielsen, Niels Chr.; Shah, N. Jon
2015-05-01
There is an increasing need for the development of advanced radio-frequency (RF) pulse techniques in modern magnetic resonance imaging (MRI) systems, driven by recent advancements in ultra-high magnetic field systems, new parallel transmit/receive coil designs, and accessible powerful computational facilities. 2D spatially selective RF pulses are an example of advanced pulses with many applications of clinical relevance, e.g. reduced field-of-view imaging and MR spectroscopy. 2D spatially selective RF pulses are mostly generated and optimised with numerical methods that can handle large numbers of controls and multiple constraints. With this study we aim to demonstrate that numerical optimal control (OC) algorithms are efficient for the design of 2D spatially selective MRI experiments when robustness towards, e.g., field inhomogeneity is in focus. We have chosen three popular OC algorithms: two gradient-based, concurrent methods using first- and second-order derivatives, respectively, and a third belonging to the sequential, monotonically convergent family. We used two experimental models: a water phantom and an in vivo human head. Taking the challenging experimental setup into consideration, our analysis suggests the use of the sequential, monotonic approach and the second-order gradient-based approach, as computational speed, experimental robustness and image quality are key. All algorithms used in this work were implemented in the MATLAB environment and are freely available to the MRI community.
Peters, Sonja; Kaal, Erwin; Horsting, Iwan; Janssen, Hans-Gerd
2012-02-24
A new method is presented for the analysis of phenolic acids in plasma based on ion-pairing 'Micro-extraction in packed sorbent' (MEPS) coupled on-line to in-liner derivatisation-gas chromatography-mass spectrometry (GC-MS). The ion-pairing reagent served a dual purpose. It was used both to improve extraction yields of the more polar analytes and as the methyl donor in the automated in-liner derivatisation method. In this way, a fully automated procedure for the extraction, derivatisation and injection of a wide range of phenolic acids in plasma samples has been obtained. An extensive optimisation of the extraction and derivatisation procedure has been performed. The entire method showed excellent repeatabilities of under 10% and linearities of 0.99 or better for all phenolic acids. The limits of detection of the optimised method for the majority of phenolic acids were 10 ng/mL or lower, with three phenolic acids having less-favourable detection limits of around 100 ng/mL. Finally, the newly developed method has been applied in a human intervention trial in which the bioavailability of polyphenols from wine and tea was studied. Forty plasma samples could be analysed within 24 h in a fully automated method including sample extraction, derivatisation and gas chromatographic analysis. Copyright © 2011 Elsevier B.V. All rights reserved.
Performance analysis of cross-layer design with average PER constraint over MIMO fading channels
NASA Astrophysics Data System (ADS)
Dang, Xiaoyu; Liu, Yan; Yu, Xiangbin
2015-12-01
In this article, a cross-layer design (CLD) scheme for a multiple-input multiple-output (MIMO) system with the dual constraints of imperfect feedback and average packet error rate (PER) is presented, based on the combination of adaptive modulation and automatic repeat request protocols. The design performance is evaluated over a wireless Rayleigh fading channel. Under the constraints of target PER and average PER, the optimum switching thresholds (STs) for attaining maximum spectral efficiency (SE) are developed. An effective iterative algorithm for finding the optimal STs is proposed via Lagrange multiplier optimisation. With the different thresholds available, analytical expressions for the average SE and PER are provided for performance evaluation. To avoid the performance loss caused by a conventional single estimate, a multiple outdated estimates (MOE) method, which utilises multiple previous channel estimates, is presented for the CLD to improve system performance. It is shown that numerical simulations for average PER and SE are consistent with the theoretical analysis, and that the developed CLD with average PER constraint can meet the target PER requirement and shows better performance in comparison with the conventional CLD with instantaneous PER constraint. In particular, the CLD based on the MOE method can significantly increase the system SE and greatly reduce the impact of feedback delay.
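Given a set of switching thresholds, the average SE follows directly from the SNR distribution: each mode's rate is weighted by the probability that the SNR falls in its interval. The sketch below assumes Rayleigh fading (exponentially distributed SNR) and uses illustrative thresholds and rates, not the Lagrange-optimised values from the paper.

    # Average spectral efficiency of threshold-based adaptive modulation over
    # Rayleigh fading: SE = sum_n rate_n * P(threshold_n <= SNR < threshold_{n+1}).
    import numpy as np

    gamma_bar = 10.0                               # average SNR (linear)
    thresholds = np.array([1.0, 3.0, 8.0, 20.0])   # switching thresholds (assumed)
    rates = np.array([1, 2, 4, 6])                 # bits/s/Hz per modulation mode

    cdf = lambda g: 1.0 - np.exp(-g / gamma_bar)   # exponential SNR CDF
    upper = np.append(thresholds[1:], np.inf)
    mode_prob = cdf(upper) - cdf(thresholds)       # P(mode n is selected)
    print(f"average SE = {np.sum(rates * mode_prob):.2f} bit/s/Hz")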
Design and optimisation of novel configurations of stormwater constructed wetlands
NASA Astrophysics Data System (ADS)
Kiiza, Christopher
2017-04-01
Constructed wetlands (CWs) are recognised as a cost-effective technology for wastewater treatment. CWs have been deployed and could be retrofitted into existing urban drainage systems to prevent surface water pollution, attenuate floods and act as sources of reusable water. However, numerous criteria exist for the design configuration and operation of CWs. The aim of the study was to examine the effects of design and operational variables on the performance of CWs. To achieve this, 8 novel designs of vertical flow CWs were continuously operated and monitored weekly for 2 years. Pollutant removal efficiency in each CW unit was evaluated from physico-chemical analyses of influent and effluent water samples. Hybrid optimised multi-layer perceptron artificial neural networks (MLP ANNs) were applied to simulate treatment efficiency in the CWs. Subsequently, predictive and analytical models were developed for each design unit. Results show that the models have sound generalisation abilities and that various design configurations and operational variables influence the performance of CWs. Although some design configurations attained faster and higher removal efficiencies than others, all 8 CW designs produced effluents permissible for discharge into watercourses with strict regulatory standards.
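A minimal version of the modelling step might look like the following sketch, which fits an MLP regressor mapping design/operational variables to removal efficiency. The variables and data are synthetic placeholders, not the thesis's monitoring records, and the hybrid optimisation of the network is omitted.

    # MLP regression of removal efficiency on design/operational variables,
    # with synthetic data standing in for the wetland monitoring records.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(5)
    # columns: hydraulic loading, bed depth, media size (all synthetic)
    X = rng.uniform(0, 1, (200, 3))
    y = 80 - 30 * X[:, 0] + 10 * X[:, 1] + rng.normal(0, 2, 200)  # removal %

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
    model.fit(X_tr, y_tr)
    print(f"held-out R^2 = {model.score(X_te, y_te):.2f}")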
Efficient photoassociation of ultracold cesium atoms with picosecond pulse laser
NASA Astrophysics Data System (ADS)
Hai, Yang; Hu, Xue-Jin; Li, Jing-Lun; Cong, Shu-Lin
2017-08-01
We investigate theoretically the formation of ultracold Cs2 molecules via photoassociation (PA) with three kinds of pulses (the Gaussian pulse, the asymmetric shaped laser pulse SL1 with a large rising time and a small falling time, and the asymmetric shaped laser pulse SL2 with a small rising time and a large falling time). For the three kinds of pulses, the final population on vibrational levels from v′ = 120 to 175 of the excited state displays a regular oscillation with pulse width and interaction strength, and a high PA efficiency can be achieved with optimised parameters. The PA efficiency in the excited state steered by the SL1-pulse (SL2-pulse) train with optimised parameters, which is composed of four SL1 (SL2) pulses, is 1.74 times as much as that of the single SL1 (SL2) pulse due to the population accumulation effect. Moreover, a dump laser is employed to transfer the excited molecules from the excited state to the vibrational level v″ = 12 of the ground state to obtain stable molecules.
Saint-Pierre, S
2012-01-01
Over the last few decades, the steady progress achieved in reducing planned exposures of both workers and the public has been admirable in the nuclear sector. However, the disproportionate focus on tiny public exposures and radioactive discharges associated with normal operations came at a high price, and the quasi-denial of a risk of major accident and related weaknesses in emergency preparedness and response came at an even higher price. Fukushima has unfortunately taught us that radiological protection (RP) for emergency and post-emergency situations can be much more than a simple evacuation that lasts 24-48 h, with people returning safely to their homes soon afterwards. On optimisation of emergency and post-emergency exposures, the only 'show in town' in terms of international RP policy improvements has been the issuance of the 2007 Recommendations of the International Commission on Radiological Protection (ICRP). However, no matter how genuine these improvements are, they have not been 'road tested' on the practical reality of severe accidents. Post-Fukushima, there is a compelling case to review the practical adequacy of key RP notions such as optimisation, evacuation, sheltering, and reference levels for workers and the public, and to amend these notions with a view to making the international RP system more useful in the event of a severe accident. On optimisation of planned exposures, the reality is that, nowadays, margins for further reductions of public doses in the nuclear sector are very small, and the smaller the dose, the greater the extra effort needed to reduce the dose further. If sufficient caution is not exercised in the use of RP notions such as dose constraints, there is a real risk of challenging nuclear power technologies beyond safety reasons. For nuclear new build, it is the optimisation of key operational parameters of nuclear power technologies (not RP) that is of paramount importance to improve their overall efficiency. In pursuing further improvements in the international RP system, it should be clearly borne in mind that the system is generally based on protection against the risk of cancer and hereditary diseases. The system also protects against deterministic non-cancer effects on tissues and organs. In seeking refinements of such protective notions, ICRP is invited to pay increased attention to the fact that a continued balance must be struck between beneficial activities that cause exposures and protection. The global nuclear industry is committed to help overcome these key RP issues as part of the RP community's upcoming international deliberations towards a more efficient international RP system. Copyright © 2012. Published by Elsevier Ltd.
Pulsed source of spectrally uncorrelated and indistinguishable photons at telecom wavelengths.
Bruno, N; Martin, A; Guerreiro, T; Sanguinetti, B; Thew, R T
2014-07-14
We report on the generation of indistinguishable photon pairs at telecom wavelengths based on a type-II parametric down conversion process in a periodically poled potassium titanyl phosphate (PPKTP) crystal. The phase matching, pump laser characteristics and coupling geometry are optimised to obtain spectrally uncorrelated photons with high coupling efficiencies. Four photons are generated by a counter-propagating pump in the same crystal and analysed via two-photon interference experiments between photons from each pair source, as well as joint spectral and g(2) measurements. We obtain a spectral purity of 0.91 and coupling efficiencies around 90% for all four photons without any filtering. These pure indistinguishable photon sources at telecom wavelengths are perfectly adapted for quantum network demonstrations and other multi-photon protocols.
NASA Astrophysics Data System (ADS)
Tai, Wei; Abbasi, Mortez; Ricketts, David S.
2018-01-01
We present the analysis and design of high-power millimetre-wave power amplifier (PA) systems using zero-degree combiners (ZDCs). The methodology optimises the PA device sizing and the number of combined unit PAs based on device load-pull simulations, driver power consumption analysis and loss analysis of the ZDC. Our analysis shows that an optimal number of N-way combined unit PAs leads to the highest power-added efficiency (PAE) for a given output power. To illustrate the design methodology, we designed a 1-W PA system at 45 GHz in a 45 nm silicon-on-insulator process and showed that an 8-way combined PA achieves the highest PAE, yielding a simulated output power of 30.6 dBm and 31% peak PAE.
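To make the sizing trade-off concrete, the sketch below evaluates a toy PAE model in which combiner insertion loss accumulates per doubling stage and each unit PA carries a fixed driver overhead. All constants (P_UNIT_W, ETA_PA, COMB_LOSS_DB, P_DRIVER_W, P_DRIVE_IN_W) are invented for illustration and are not the paper's 45 nm SOI values.

```python
import numpy as np

# Illustrative constants (assumed, not the paper's measured values)
P_UNIT_W = 0.18        # saturated output power of one unit PA, W
ETA_PA = 0.38          # unit-PA drain efficiency
COMB_LOSS_DB = 0.4     # combiner insertion loss per doubling stage, dB
P_DRIVER_W = 0.05      # DC power of the driver feeding each unit PA, W
P_DRIVE_IN_W = 0.01    # RF drive power per unit PA, W

def pa_system(n_way):
    """Output power and PAE of an n_way zero-degree-combined PA (n_way = 2^k)."""
    stages = int(np.log2(n_way))                      # doubling stages in the ZDC
    transmission = 10 ** (-COMB_LOSS_DB * stages / 10)
    p_out = n_way * P_UNIT_W * transmission           # combined output power
    p_dc = n_way * (P_UNIT_W / ETA_PA + P_DRIVER_W)   # PA cores + drivers
    p_in = n_way * P_DRIVE_IN_W
    return p_out, (p_out - p_in) / p_dc

for n in (2, 4, 8, 16, 32):
    p_out, pae = pa_system(n)
    print(f"{n:2d}-way: Pout = {10 * np.log10(p_out * 1e3):5.1f} dBm, PAE = {pae:.1%}")
```

In this fixed-unit-PA toy model, PAE degrades monotonically with N; the optimum the authors report arises when device sizing is co-varied with N, so that fewer-way designs pay an efficiency penalty at the unit-PA level.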
A chaotic model for advertising diffusion problem with competition
NASA Astrophysics Data System (ADS)
Ip, W. H.; Yung, K. L.; Wang, Dingwei
2012-08-01
In this article, the authors extend Dawid and Feichtinger's chaotic advertising diffusion model to the duopoly case. A computer simulation system is used to test this enhanced model. Based on the analysis of the simulation results, it is found that the best advertising strategy in a duopoly is to increase the advertising investment so as to reach the best win-win situation, in which oscillation of market share does not occur. In order to reach this situation effectively, we define a synthetic index and two thresholds, and propose an estimation method for the parameters of the index and thresholds. The win-win situation can be reached by simply selecting the control parameters so that the synthetic index is close to the threshold of the minimum-oscillation state. The numerical example and computational results indicate that the proposed chaotic model is useful for describing and analysing the advertising diffusion process in a duopoly, and that it is an efficient tool for the selection and optimisation of advertising strategy.
Optimization of microwave-assisted extraction of flavonoids from young barley leaves
NASA Astrophysics Data System (ADS)
Gao, Tian; Zhang, Min; Fang, Zhongxiang; Zhong, Qifeng
2017-01-01
A central composite design combined with response surface methodology was utilised to optimise microwave-assisted extraction of flavonoids from young barley leaves. The results showed that, using water as the solvent, the optimum conditions were two extractions at a microwave power of 1.27 W g-1 and a liquid-solid ratio of 34.02 ml g-1 for 11.12 min. The maximum extraction yield of flavonoids (rutin equivalents) was 80.78±0.52%. Compared with the conventional extraction method, microwave-assisted extraction was more efficient: the extraction time was only 6.18% of that of conventional extraction, while the extraction yield of flavonoids increased by 5.47%. Based on HPLC-MS analysis, the main flavonoid components of the young barley leaf extract were probably isoorientin-7-O-glucoside (33.36%) and isovitexin-7-O-glucoside (54.17%). The barley leaf extract exhibited strong reducing power as well as DPPH radical scavenging capacity.
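The central-composite/RSM step can be reproduced in outline as below: fit a full quadratic response surface to the design runs and search it for the predicted maximum. The design points and yields here are invented placeholders, not the study's data.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Hypothetical central-composite-style runs:
# (microwave power W/g, liquid-solid ratio ml/g, time min) -> flavonoid yield (%)
X = np.array([[1.0, 30, 10], [1.5, 30, 10], [1.0, 40, 10], [1.5, 40, 10],
              [1.0, 30, 14], [1.5, 30, 14], [1.0, 40, 14], [1.5, 40, 14],
              [1.25, 35, 12], [1.25, 35, 12], [0.9, 35, 12], [1.6, 35, 12]])
y = np.array([72, 74, 75, 76, 74, 75, 77, 76, 80, 81, 73, 75])

# Full quadratic model: intercept, linear, interaction and squared terms
quad = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(quad.fit_transform(X), y)

# Locate the predicted optimum on a dense grid over the design region
p, r, t = np.meshgrid(np.linspace(0.9, 1.6, 30), np.linspace(30, 40, 30),
                      np.linspace(10, 14, 30), indexing="ij")
grid = np.column_stack([p.ravel(), r.ravel(), t.ravel()])
pred = model.predict(quad.transform(grid))
best = grid[pred.argmax()]
print(f"predicted optimum: {best[0]:.2f} W/g, {best[1]:.1f} ml/g, {best[2]:.1f} min")
```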
Hadjmohammadi, Mohammadreza; Karimiyan, Hanieh; Sharifi, Vahid
2013-11-15
A simple, inexpensive and efficient three-phase hollow fibre liquid phase microextraction (HF-LPME) technique combined with HPLC was used for the simultaneous determination of flavonoids in Echinophora platyloba DC. and Mentha piperita. Different factors affecting the HF-LPME procedure were investigated and optimised. The optimised extraction conditions were as follows: 1-octanol as the organic solvent, pH(donor) = 2, pH(acceptor) = 9.75, stirring rate of 1000 rpm, extraction time of 80 min, without addition of salt. Under these conditions, the enrichment factors ranged between 146 and 311. The intra- and inter-day relative standard deviations (RSD) were in the ranges 3.18-6.00% and 7.25-11.00%, respectively. The limits of detection (LODs) ranged between 0.5 and 7.0 ng mL(-1). Among the investigated flavonoids, quercetin was found in E. platyloba DC. and luteolin in M. piperita, at concentrations of 0.015 and 0.025 mg g(-1), respectively. Copyright © 2013 Elsevier Ltd. All rights reserved.
Cheong, Vee San; Bull, Anthony M J
2015-12-16
The choice of coordinate system and alignment of bone will affect the quantification of mechanical properties obtained during in-vitro biomechanical testing. Where these are used in predictive models, such as finite element analysis, a faithful description of these properties is paramount. Currently, in bending and torsional tests, bones are aligned on a pre-defined fixed span based on the reference system marked out. However, large inter-specimen differences have been reported, suggesting a need for a specimen-specific alignment system for use in experimental work. Eleven ovine tibiae were used in this study, and three-dimensional surface meshes were constructed from micro-computed tomography scan images. A novel, semi-automated algorithm was developed and applied to the surface meshes to align the whole bone based on its calculated principal directions. Thereafter, the code isolates the optimised location and length of each bone for experimental testing. This resulted in a lowering of the second moment of area about the chosen bending axis in the central region. More importantly, the optimisation method decreases the irregularity of the cross-sectional slices: the unbiased estimate of the population coefficient of variation of the second moment of area decreased from a range of (0.210-0.435) to (0.145-0.317) in the longitudinal direction, indicating a minimisation of the product moment, which causes eccentric loading. Thus, this methodology serves as an important pre-step to align the bone for mechanical tests or simulation work, is optimised for each specimen, ensures repeatability, and is general enough to be applied to any long bone. Copyright © 2015 Elsevier Ltd. All rights reserved.
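The core of such a principal-directions alignment can be sketched with a singular value decomposition of the centred vertex cloud; this is a minimal stand-in for the authors' semi-automated algorithm, assuming the mesh is given as an (N, 3) vertex array.

```python
import numpy as np

def align_to_principal_axes(vertices):
    """Centre a bone surface mesh and rotate it into its principal frame,
    with the direction of largest extent as the longitudinal (z) axis.

    vertices : (N, 3) array of surface-mesh vertex coordinates.
    """
    centred = vertices - vertices.mean(axis=0)
    # Right singular vectors of the centred cloud are the principal directions
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    rotated = centred @ vt.T                 # express vertices in the principal frame
    extents = rotated.max(axis=0) - rotated.min(axis=0)
    return rotated[:, np.argsort(extents)]   # smallest extent first, longest last

# Toy usage: an elongated point cloud in an arbitrary orientation
rng = np.random.default_rng(0)
cloud = rng.normal(size=(2000, 3)) * [2.0, 8.0, 1.0]   # long axis along y
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))           # random rotation
aligned = align_to_principal_axes(cloud @ q)
print("extents after alignment:", aligned.max(axis=0) - aligned.min(axis=0))
```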
Heat Transfer Modelling of Glass Media within TPV Systems
NASA Astrophysics Data System (ADS)
Bauer, Thomas; Forbes, Ian; Penlington, Roger; Pearsall, Nicola
2004-11-01
Understanding and optimisation of heat transfer, and in particular radiative heat transfer in terms of spectral, angular and spatial radiation distributions, is important to achieve high system efficiencies and high electrical power densities in thermophotovoltaics (TPV). This work reviews heat transfer models and uses the Discrete Ordinates method. Firstly, one-dimensional heat transfer in fused silica (quartz glass) shields was examined for the common arrangement radiator-air-glass-air-PV cell. It is concluded that an alternative arrangement, radiator-glass-air-PV cell, with increased thickness of fused silica, should have advantages in terms of improved transmission of convertible radiation and enhanced suppression of non-convertible radiation.
rPM6 parameters for phosphorus- and sulphur-containing open-shell molecules
NASA Astrophysics Data System (ADS)
Saito, Toru; Takano, Yu
2018-03-01
In this article, we introduce a reparameterisation of PM6 (rPM6) for phosphorus and sulphur to achieve a better description of open-shell species containing these two elements. Two sets of parameters have been optimised separately using our training sets. The performance of the spin-unrestricted rPM6 (UrPM6) method with the optimised parameters is evaluated on 14 radical species, each containing either a phosphorus or a sulphur atom, and compared with the original UPM6 and spin-unrestricted density functional theory (UDFT) methods. The standard UPM6 calculations fail to describe the adiabatic singlet-triplet energy gaps correctly and may cause significant structural mismatches with UDFT-optimised geometries. Leaving aside three difficult cases, tests on 11 open-shell molecules strongly indicate the superior performance of UrPM6, which provides much better agreement with the results of UDFT methods for geometric and electronic properties.
Sampling design optimisation for rainfall prediction using a non-stationary geostatistical model
NASA Astrophysics Data System (ADS)
Wadoux, Alexandre M. J.-C.; Brus, Dick J.; Rico-Ramirez, Miguel A.; Heuvelink, Gerard B. M.
2017-09-01
The accuracy of spatial predictions of rainfall by merging rain-gauge and radar data is partly determined by the sampling design of the rain-gauge network. Optimising the locations of the rain-gauges may increase the accuracy of the predictions. Existing spatial sampling design optimisation methods are based on minimisation of the spatially averaged prediction error variance under the assumption of intrinsic stationarity. Over the past years, substantial progress has been made to deal with non-stationary spatial processes in kriging. Various well-documented geostatistical models relax the assumption of stationarity in the mean, while recent studies show the importance of considering non-stationarity in the variance for environmental processes occurring in complex landscapes. We optimised the sampling locations of rain-gauges using an extension of the Kriging with External Drift (KED) model for prediction of rainfall fields. The model incorporates both non-stationarity in the mean and in the variance, which are modelled as functions of external covariates such as radar imagery, distance to radar station and radar beam blockage. Spatial predictions are made repeatedly over time, each time recalibrating the model. The space-time averaged KED variance was minimised by Spatial Simulated Annealing (SSA). The methodology was tested using a case study predicting daily rainfall in the north of England for a one-year period. Results show that (i) the proposed non-stationary variance model outperforms the stationary variance model, and (ii) a small but significant decrease of the rainfall prediction error variance is obtained with the optimised rain-gauge network. In particular, it pays off to place rain-gauges at locations where the radar imagery is inaccurate, while keeping the distribution over the study area sufficiently uniform.
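A minimal sketch of spatial simulated annealing is given below. The true objective in the paper is the space-time averaged KED variance; here a cheap stand-in criterion (distance to the nearest gauge, up-weighted where an assumed radar-quality field is poor) is used so that the annealing loop itself is the focus. All coordinates, weights and schedule constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Prediction grid over a 100 x 100 km toy domain
grid = np.column_stack([g.ravel() for g in np.meshgrid(
    np.linspace(0, 100, 40), np.linspace(0, 100, 40))])
# Assumed radar quality: good near (70, 20), degrading with distance
radar_quality = np.exp(-((grid[:, 0] - 70) ** 2 + (grid[:, 1] - 20) ** 2) / 800)

def criterion(gauges):
    """Stand-in for the KED variance: nearest-gauge distance, weighted so
    that poorly observed (low radar quality) areas count more."""
    d = np.linalg.norm(grid[:, None, :] - gauges[None, :, :], axis=2).min(axis=1)
    return np.mean(d * (1.5 - radar_quality))

gauges = rng.uniform(0, 100, size=(15, 2))
best = criterion(gauges)
temp = 1.0
for _ in range(3000):
    cand = gauges.copy()
    i = rng.integers(len(cand))
    cand[i] = np.clip(cand[i] + rng.normal(0, 5, 2), 0, 100)  # perturb one gauge
    c = criterion(cand)
    if c < best or rng.random() < np.exp((best - c) / temp):  # SSA acceptance
        gauges, best = cand, c
    temp *= 0.999                                             # cooling schedule
print(f"final criterion: {best:.3f}")
```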
Minimising back reflections from the common path objective in a fundus camera
NASA Astrophysics Data System (ADS)
Swat, A.
2016-11-01
Eliminating back reflections is critical in the design of a fundus camera with an internal illuminating system. As there is very little light reflected from the retina, even excellent antireflective coatings do not provide sufficient suppression of ghost reflections; therefore, the number of surfaces common to the illuminating and imaging paths shall be minimised. Typically, a single aspheric objective is used. In this paper an alternative approach, an objective with all-spherical surfaces, is presented. As more surfaces are required, a more sophisticated method is needed to control back reflections. Typically, back-reflection analysis comprises treating subsequent objective surfaces as mirrors and tracing reflections from the objective surfaces back through the imaging path. This approach can be applied in both sequential and non-sequential ray tracing. It is good enough for a system check but not very suitable for the early optimisation process in the optical system design phase. Standard ghost-control merit function operands are also available in sequential ray tracing, for example in the Zemax system, but these do not allow a back ray-trace in an alternative optical path (illumination vs. imaging). What is proposed in this paper is a complete method for incorporating ghost-reflected energy into the ray-tracing system merit function in sequential mode, which is more efficient in the optimisation process. Although developed for the specific case of a fundus camera, the method might be utilised in a wider range of applications where ghost control is critical.
A simplified protocol for molecular identification of Eimeria species in field samples.
Haug, Anita; Thebo, Per; Mattsson, Jens G
2007-05-15
This study aimed to find a fast, sensitive and efficient protocol for molecular identification of chicken Eimeria spp. in field samples. Various methods for each of the three steps of the protocol were evaluated: oocyst wall rupturing, DNA extraction, and identification of species-specific DNA sequences by PCR. We then compared and evaluated five complete protocols. Three series of oocyst suspensions with known numbers of oocysts from Eimeria mitis, Eimeria praecox, Eimeria maxima and Eimeria tenella were prepared and ground using glass beads or a mini-pestle. DNA was extracted from ruptured oocysts using commercial systems (GeneReleaser, Qiagen Stoolkit and Prepman) or phenol-chloroform DNA extraction, followed by identification of species-specific ITS-1 sequences by optimised single-species PCR assays. The Stoolkit and Prepman protocols showed insufficient repeatability, and the former was also expensive and relatively time-consuming. In contrast, both the GeneReleaser and phenol-chloroform protocols were robust and sensitive, detecting less than 0.4 oocysts of each species per PCR. Finally, we evaluated our new protocol on 68 coccidia-positive field samples. Our data suggest that rupturing the oocysts by mini-pestle grinding and preparing the DNA with GeneReleaser, followed by optimised single-species PCR assays, constitutes a robust and sensitive procedure for identifying chicken Eimeria species in field samples. Importantly, it also requires minimal hands-on time in the pre-PCR process, carries a lower contamination risk and involves no handling of toxic chemicals.
NASA Astrophysics Data System (ADS)
Tingberg, Anders Martin
Optimisation in diagnostic radiology requires accurate methods for determination of patient absorbed dose and clinical image quality. Simple methods for evaluation of clinical image quality are at present scarce, and this project aims at developing such methods. Two methods are used and further developed: fulfilment of image criteria (IC) and visual grading analysis (VGA). Clinical image quality descriptors are defined based on these two methods: the image criteria score (ICS) and the visual grading analysis score (VGAS), respectively. For both methods the basis is the image criteria of the "European Guidelines on Quality Criteria for Diagnostic Radiographic Images". Both methods have proved to be useful for evaluation of clinical image quality, and they complement each other: IC is an absolute method, which means that the quality of images of different patients produced with different radiographic techniques can be compared with each other; its separating power is, however, weaker than that of VGA. VGA is the best method for comparing images produced with different radiographic techniques and has strong separating power, but the results are relative, since the quality of an image is compared to the quality of a reference image. The usefulness of the two methods has been verified by comparing their results with those from a generally accepted method for evaluation of clinical image quality, receiver operating characteristics (ROC). The comparison between the two methods based on visibility of anatomical structures and the method based on detection of pathological structures (free-response forced error) indicates that the former two can be used for evaluation of clinical image quality as efficiently as the method based on ROC. More studies are, however, needed before a general conclusion can be drawn, including studies of other organs, other radiographic techniques, etc. The results of the experimental evaluation of clinical image quality are compared with physical quantities calculated with a theoretical model based on a voxel phantom, and correlations are found. The results demonstrate that the computer model can be a useful tool in planning further experimental studies.
Space use optimisation and sustainability-environmental assessment of space use concepts.
van den Dobbelsteen, Andy; de Wilde, Sebastiaan
2004-11-01
In this paper, as part of a diptych, we discuss the factor space as a means of improving the environmental performance of building projects. There are indicators for space use efficiency and several more or less broadly supported methods for assessment of environmental issues such as ecological quality, use of building materials and energy consumption. These are discussed in this paper. Assessment methods coupling space use to environmental indicators had not been available until now. Beforehand, plans with different spatial properties could therefore not be environmentally compared. We present a method for the implementation of space use in assessments concerning sustainability. This method was applied to the urban case study presented in our second paper in this journal. In this paper, we also present solutions for improved environmental performance through intensive and multiple use of space in the second, third and fourth dimension.
Optimisation of cavity parameters for lasers based on AlGaInAsP/InP solid solutions (λ = 1470 nm)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veselov, D A; Ayusheva, K R; Shashkin, I S
2015-10-31
We have studied the effect of laser cavity parameters on the light-current characteristics of lasers based on the AlGaInAs/GaInAsP/InP solid solution system that emit in the spectral range 1400-1600 nm. It has been shown that optimisation of cavity parameters (chip length and front facet reflectivity) allows one to improve heat removal from the laser without changing other laser characteristics. An increase of 0.5 W in the maximum output optical power of the laser has been demonstrated due to cavity design optimisation.
Close packing in curved space by simulated annealing
NASA Astrophysics Data System (ADS)
Wille, L. T.
1987-12-01
The problem of packing spheres of a maximum radius on the surface of a four-dimensional hypersphere is considered. It is shown how near-optimal solutions can be obtained by packing soft spheres, modelled as classical particles interacting under an inverse power potential, followed by a subsequent hardening of the interaction. In order to avoid trapping in high-lying local minima, the simulated annealing method is used to optimise the soft-sphere packing. Several improvements over other work (based on local optimisation of random initial configurations of hard spheres) have been found. The freezing behaviour of this system is discussed as a function of particle number, softness of the potential and cooling rate. Apart from their geometric interest, these results are useful in the study of topological frustration, metallic glasses and quasicrystals.
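The anneal-then-harden strategy can be illustrated as follows: classical particles on the unit 3-sphere interact through an inverse-power potential whose exponent is raised in stages, with Metropolis acceptance and a cooling schedule within each stage. The particle number and schedule constants are illustrative only, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 24                                    # number of spheres (illustrative)

def energy(x, p):
    """Inverse-power soft-sphere energy: sum over pairs of 1/r^p."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=2)
    iu = np.triu_indices(N, 1)
    return np.sum(1.0 / d[iu] ** p)

x = rng.normal(size=(N, 4))
x /= np.linalg.norm(x, axis=1, keepdims=True)     # random start on the 3-sphere
step = 0.3
for p in (1, 3, 6, 12):                           # progressively harden the potential
    temp, e = 0.5, energy(x, p)
    for _ in range(4000):
        i = rng.integers(N)
        cand = x.copy()
        cand[i] += rng.normal(0.0, step, 4)
        cand[i] /= np.linalg.norm(cand[i])        # project back onto the hypersphere
        e_new = energy(cand, p)
        if e_new < e or rng.random() < np.exp((e - e_new) / temp):
            x, e = cand, e_new                    # Metropolis acceptance
        temp *= 0.999                             # geometric cooling
    step *= 0.7                                   # finer moves as p grows

d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=2)
print("minimum pair distance:", d[np.triu_indices(N, 1)].min())
```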
A simple model clarifies the complicated relationships of complex networks
Zheng, Bojin; Wu, Hongrun; Kuang, Li; Qin, Jun; Du, Wenhua; Wang, Jianmin; Li, Deyi
2014-01-01
Real-world networks such as the Internet and the WWW share many common traits. To date, hundreds of models have been proposed to characterise these traits and understand the networks. Because different models use very different mechanisms, it is widely believed that these traits originate from different causes. However, we find that a simple model based on optimisation can produce many of them, including scale-free, small-world, ultra-small-world, delta-distribution, compact, fractal, regular and random networks. Moreover, by revising the proposed model, community-structure networks can be generated. With this model and its revised versions, the complicated relationships among complex networks are illustrated. The model brings a new universal perspective to the understanding of complex networks and provides a universal method for modelling complex networks from the viewpoint of optimisation. PMID:25160506
Solar/UV-induced photocatalytic degradation of three commercial textile dyes.
Neppolian, B; Choi, H C; Sakthivel, S; Arabindoo, Banumathi; Murugesan, V
2002-01-28
The photocatalytic degradation of three commercial textile dyes with different structures has been investigated using a TiO(2) (Degussa P25) photocatalyst in aqueous solution under solar irradiation. Experiments were conducted to optimise various parameters, viz. amount of catalyst, concentration of dye, pH and solar light intensity. Degradation of all the dyes was examined using the chemical oxygen demand (COD) method. The degradation efficiency of the three dyes follows the order Reactive Yellow 17 (RY17) > Reactive Red 2 (RR2) > Reactive Blue 4 (RB4). The experimental results indicate that TiO(2) (Degussa P25) is the best catalyst in comparison with other commercial photocatalysts such as TiO(2) (Merck), ZnO, ZrO(2), WO(3) and CdS. Though UV irradiation can efficiently degrade the dyes, naturally abundant solar irradiation is also very effective in the mineralisation of dyes. The comparison between the thin-film coating and aqueous slurry methods reveals that the slurry method is more efficient, but the problems of leaching and the requirement for separation can be avoided by using the coating technique. These observations indicate that all three dyes could be degraded completely, at different time intervals. Hence, this may be a viable technique for the safe disposal of textile wastewater into water streams.
Modelling soil water retention using support vector machines with genetic algorithm optimisation.
Lamorski, Krzysztof; Sławiński, Cezary; Moreno, Felix; Barna, Gyöngyi; Skierucha, Wojciech; Arrue, José L
2014-01-01
This work presents point pedotransfer function (PTF) models of the soil water retention curve. The developed models allow estimation of the soil water content at specified soil water potentials (-0.98, -3.10, -9.81, -31.02, -491.66 and -1554.78 kPa) based on the following soil characteristics: granulometric composition, total porosity and bulk density. Support vector machine (SVM) methodology was used for model development, and a new methodology for elaboration of retention function models is proposed. As an alternative to previous attempts known from the literature, the ν-SVM method was used for model development, and the results were compared with the formerly used C-SVM method. For the model parameter search, genetic algorithms were used as the optimisation framework. A new form of the aim function used in the parameter search is proposed, which allowed the development of models with better prediction capabilities. This new aim function avoids the overestimation of models that is typically encountered when root mean squared error is used as the aim function. The elaborated models showed good agreement with measured soil water retention data, with coefficients of determination in the range 0.67-0.92. The study demonstrates the usability of the ν-SVM methodology together with genetic algorithm optimisation for retention modelling, which gave better-performing models than the other tested approaches.
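The ν-SVM-plus-GA pipeline can be sketched with scikit-learn's NuSVR and a bare-bones mutation-only genetic search over (ν, C, γ); cross-validated error plays the role of the aim function, which is one simple way to penalise over-fitting. The data, parameter ranges and GA settings below are placeholders, and the crossover step of a full GA is omitted for brevity.

```python
import numpy as np
from sklearn.svm import NuSVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

# Stand-in data: predict water content from (sand %, silt %, clay %,
# porosity, bulk density); replace with measured retention data in practice.
X = rng.uniform(size=(120, 5))
y = 0.1 + 0.3 * X[:, 3] - 0.1 * X[:, 4] + 0.02 * rng.normal(size=120)

def fitness(genome):
    """Negative cross-validated RMSE for a (nu, log10 C, log10 gamma) genome."""
    nu, log_c, log_gamma = genome
    model = NuSVR(nu=nu, C=10 ** log_c, gamma=10 ** log_gamma)
    return cross_val_score(model, X, y, cv=5,
                           scoring="neg_root_mean_squared_error").mean()

pop = np.column_stack([rng.uniform(0.05, 0.95, 20),   # nu
                       rng.uniform(-1, 3, 20),        # log10 C
                       rng.uniform(-3, 1, 20)])       # log10 gamma
for gen in range(15):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-10:]]                 # truncation selection
    kids = parents[rng.integers(10, size=10)].copy()
    kids += rng.normal(0, [0.05, 0.2, 0.2], kids.shape)     # Gaussian mutation
    kids[:, 0] = np.clip(kids[:, 0], 0.01, 0.99)            # keep nu valid
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(g) for g in pop])]
print(f"best: nu={best[0]:.2f}, C={10**best[1]:.2f}, gamma={10**best[2]:.3f}")
```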
Optimisation of a parallel ocean general circulation model
NASA Astrophysics Data System (ADS)
Beare, M. I.; Stevens, D. P.
1997-10-01
This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.
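The kind of high-level message-passing wrapper described, hiding the communication details from the model code, might look like the following mpi4py sketch of a one-dimensional domain decomposition with halo (ghost-row) exchange. The grid sizes and latitude-band decomposition are assumptions for illustration, not the model's actual layout.

```python
# Run with, e.g.:  mpiexec -n 4 python ocean_halo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

nx, local_ny = 64, 32
field = np.full((local_ny + 2, nx), float(rank))   # +2 ghost rows, top and bottom

def exchange_halos(f):
    """Swap ghost rows with north/south neighbours (non-periodic boundaries)."""
    north = rank + 1 if rank + 1 < size else MPI.PROC_NULL
    south = rank - 1 if rank - 1 >= 0 else MPI.PROC_NULL
    # Send my top interior row north; receive my bottom ghost row from the south
    comm.Sendrecv(sendbuf=f[-2, :].copy(), dest=north,
                  recvbuf=f[0, :], source=south)
    # Send my bottom interior row south; receive my top ghost row from the north
    comm.Sendrecv(sendbuf=f[1, :].copy(), dest=south,
                  recvbuf=f[-1, :], source=north)

exchange_halos(field)   # called once per timestep before the stencil update
```

Keeping the `exchange_halos` call as the only parallel-aware line in the timestep loop is one way to achieve the modular, optional parallelism the abstract describes.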
Almén, Anja; Båth, Magnus
2016-06-01
The overall aim of the present work was to develop a conceptual framework for managing radiation dose in diagnostic radiology with the intention of supporting optimisation. An optimisation process was first derived, and the framework for managing radiation dose, based on this process, was then outlined. The optimisation process starts from four stages: providing equipment, establishing methodology, performing examinations and ensuring quality, and comprises a series of activities and actions at these stages. The current system of diagnostic reference levels is an activity at the last stage, ensuring quality. The system thus becomes a reactive activity that only to a certain extent engages the core activity of the radiology department, performing examinations. Three reference dose levels (possible, expected and established) were assigned to the first three stages of the optimisation process, excluding ensuring quality. A reasonably achievable dose range is also derived, indicating an acceptable deviation from the established dose level; a reasonable radiation dose for a single patient lies within this range. The suggested framework for managing radiation dose should be regarded as one part of the optimisation process, which constitutes a variety of complementary activities. This emphasises the need to take a holistic approach, integrating the optimisation process into different clinical activities. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Synthesis and characterisation of PEG modified chitosan nanocapsules loaded with thymoquinone.
Vignesh Kumar, Suresh Kumar; Renuka Devi, Ponnuswamy; Harish, Saru; Hemananthan, Eswaran
2017-02-01
Thymoquinone (TQ), a major bioactive compound of Nigella sativa seeds, has several therapeutic properties. The main drawback in bringing TQ to therapeutic application is its poor stability and bioavailability; hence a suitable carrier is essential for TQ delivery. Recent studies indicate that biodegradable polymers are potentially good carriers of bioactive compounds. In this study, polyethylene glycol (PEG)-modified chitosan (Cs) nanocapsules were developed as a carrier for TQ. Aqueous-soluble low molecular weight Cs and PEG were selected from among different biodegradable polymers based on their biocompatibility and efficacy as carriers. The synthesis of the nanocapsules was optimised based on particle size, PDI, encapsulation efficiency and process yield. A positive zeta potential of +48 mV, indicating good stability, was observed. Scanning electron microscopy and atomic force microscopy revealed spherical, smooth-surfaced nanocapsules with sizes between 100 and 300 nm. The molecular dispersion of TQ in the Cs-PEG nanocapsules was studied using X-ray powder diffraction. The Fourier transform infrared spectrum of the optimised nanocapsule exhibited functional groups of both polymer and drug, confirming the presence of Cs, PEG and TQ. In vitro drug release studies showed that PEG-modified Cs nanocapsules loaded with TQ had a slow and sustained release.
Song, Jing-Zheng; Han, Quan-Bin; Qiao, Chun-Feng; But, Paul Pui-Hay; Xu, Hong-Xi
2010-01-01
Aconites, with aconite alkaloids as the major therapeutic and toxic components, are used for analgesic, antirheumatic and neurological indications. Quantification of the aconite alkaloids is important for the quality control of aconite-containing drugs. The aim was to establish a validated capillary zone electrophoresis (CZE) method for the simultaneous determination of six major alkaloids, namely aconitine, mesaconitine, hypaconitine, benzoylaconine, benzoylmesaconine and benzoylhypaconine, in crude and processed aconite roots. The CZE method was optimised and validated using a stability-indicating method. The optimised running buffer was a mixture of 200 mM Tris, 150 mM perchloric acid and 40% 1,4-dioxane (pH 7.8), with the capillary thermostated at 25 °C. Using the optimised method, the six aconite alkaloids were well separated. The established method showed good precision, accuracy and recovery. The contents of these alkaloids in crude and processed aconites were determined, and the levels of individual alkaloids were observed to vary between samples. The developed CZE method is reliable for the quality control of aconites contained in herbal medicines and could also be used as an approach for toxicological studies.
Optimisation of DNA extraction from the crustacean Daphnia
Athanasio, Camila Gonçalves; Chipman, James K.; Viant, Mark R.
2016-01-01
Daphnia are key model organisms for mechanistic studies of phenotypic plasticity, adaptation and microevolution, which has led to an increasing demand for genomics resources. A key step in any genomics analysis, such as high-throughput sequencing, is the availability of sufficient high-quality DNA. Although commercial kits exist to extract genomic DNA from several species, preparation of high-quality DNA from Daphnia spp. and other chitinous species can be challenging. Here, we optimise methods for tissue homogenisation, DNA extraction and quantification, customised for different downstream analyses (e.g., LC-MS/MS, HiSeq, mate-pair sequencing or Nanopore). We demonstrate that if Daphnia magna are homogenised as whole animals (including the carapace), absorbance-based DNA quantification methods significantly overestimate the amount of DNA, resulting in insufficient starting material for experiments such as preparation of sequencing libraries. This is attributed to the high refractive index of chitin in Daphnia's carapace at 260 nm. Therefore, unless the carapace is removed by overnight proteinase digestion, the extracted DNA should be quantified with fluorescence-based methods. However, overnight proteinase digestion results in partial fragmentation of the DNA, so DNA prepared in this way is not suitable for downstream methods that require high molecular weight DNA, such as PacBio, mate-pair sequencing and Nanopore. In conclusion, we found that the MasterPure DNA purification kit, coupled with grinding of frozen tissue, is the best method for extraction of high molecular weight DNA, as long as the extracted DNA is quantified with fluorescence-based methods. This method generated high-yield, high molecular weight DNA (3.10 ± 0.63 ng/µg dry mass, fragments >60 kb), free of organic contaminants (phenol, chloroform), and is suitable for a large number of downstream analyses. PMID:27190714
GilPavas, E; Dobrosz-Gómez, I; Gómez-García, M Á
2011-01-01
The capacity of the electro-coagulation (EC) process for the treatment of wastewater containing Cr3+, resulting from a leather tannery located in Medellín (Colombia), was evaluated. In order to assess the effect of parameters such as the electrode type (Al and/or Fe), the distance between electrodes, the current density, the stirring velocity and the initial Cr3+ concentration on the removal efficiency (%RCr3+), a multifactorial experimental design was used, with %RCr3+ defined as the response variable for the statistical analysis. In order to optimise the operational values of the chosen parameters, the response surface method (RSM) was applied. Additionally, the biological oxygen demand (BOD5), the chemical oxygen demand (COD) and the total organic carbon (TOC) were monitored during the EC process. Aluminium electrodes appeared to be the most effective for chromium removal from the wastewater under study. At pH 4.52 and 28 °C, the optimal conditions for Cr3+ removal using the EC process were found to be: initial Cr3+ concentration = 3,596 mg/L, electrode gap = 0.5 cm, stirring velocity = 382.3 rpm and current density = 57.87 mA/cm2. At those conditions, it was possible to reach 99.76% Cr3+ removal, and 64% and 61% of mineralisation (TOC) and COD removal, respectively. A kinetic analysis was performed in order to verify the response capacity of the EC process at the optimised parameter values.
NASA Astrophysics Data System (ADS)
Hall, D. J.; Skottfelt, J.; Soman, M. R.; Bush, N.; Holland, A.
2017-12-01
Charge-coupled devices (CCDs) have been the detector of choice for imaging and spectroscopy in space missions for several decades, such as those being used for the Euclid VIS instrument and baselined for the SMILE SXI. Despite the many positive properties of CCDs, such as high quantum efficiency and low noise, when used in a space environment the detectors suffer damage from the often-harsh radiation environment. High-energy particles can create defects in the silicon lattice which act to trap the signal electrons being transferred through the device, reducing the measured signal and effectively increasing the noise. The impact of radiation on the devices can be reduced through four key methods: increased radiation shielding, device design considerations, optimisation of operating conditions, and image correction. Here, we concentrate on device design considerations, investigating the impact of narrowing the charge-transfer channel in the device with the aim of minimising the impact of traps during readout. Previous studies for the Euclid VIS instrument considered two devices, the e2v CCD204 and CCD273, the serial register of the former having a 50 μm channel and the latter a 20 μm channel. The reduction in channel width was previously modelled to give an approximate 1.6× reduction in charge storage volume, verified experimentally as a 1.7× reduction in charge transfer inefficiency. The methods used to simulate the reduction approximated the charge cloud to a sharp-edged volume within which the probability of capture by traps is 100%. For high signals and slow readout speeds, this is a reasonable approximation; for low signals and higher readout speeds, the approximation falls short. Here we discuss a new method of simulating and calculating charge storage variations with device design changes, considering the absolute probability of capture across the pixel, bringing validity to all signal sizes and readout speeds. Using this method, we can optimise the device design to suffer minimum impact from radiation damage effects, here using detector development for the SMILE mission to demonstrate the process.
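The difference between the sharp-edged-cloud approximation and an absolute capture probability can be illustrated with a toy one-dimensional model in which the capture time constant follows the usual τ_c = 1/(σ v_th n_e) form. All constants and the Gaussian cloud shape below are assumed for illustration and are not calibrated to the CCD204/CCD273.

```python
import numpy as np

# Illustrative constants (assumed, not device-calibrated values)
SIGMA = 1e-15      # trap capture cross-section, cm^2
V_TH = 1e7         # electron thermal velocity, cm/s
T_DWELL = 1e-6     # time the charge spends over each trap per transfer, s

def capture_probability(n_e):
    """Absolute capture probability for local electron density n_e (cm^-3)."""
    tau_c = 1.0 / (SIGMA * V_TH * np.maximum(n_e, 1e-30))
    return 1.0 - np.exp(-T_DWELL / tau_c)

# Assumed Gaussian charge-cloud density along a 1D cut through the channel
x = np.linspace(-2, 2, 401)                  # position in units of cloud sigma
for signal in (1e2, 1e4):                    # small vs large signal (electrons)
    n_e = signal * np.exp(-x ** 2 / 2) * 1e12   # arbitrary density scaling
    p = capture_probability(n_e)
    # Sharp-edged approximation: P = 1 inside the nominal cloud edge, 0 outside
    sharp = (n_e > n_e.max() * np.exp(-0.5)).astype(float)
    print(f"signal {signal:.0e} e-: <P> smooth = {p.mean():.3f}, "
          f"sharp-edge = {sharp.mean():.3f}")
```

The two averages agree best for large signals, which is consistent with the abstract's remark that the hard-edged model is reasonable for high signals and slow readout but falls short for small signals.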
Moss and peat hydraulic properties are optimised to maximise peatland water use efficiency
NASA Astrophysics Data System (ADS)
Kettridge, Nicholas; Tilak, Amey; Devito, Kevin; Petrone, Rich; Mendoza, Carl; Waddington, Mike
2016-04-01
Peatland ecosystems are globally important carbon and terrestrial surface water stores that have formed over millennia. These ecosystems have likely optimised their ecohydrological function over the long-term development of their soil hydraulic properties. Through a theoretical ecosystem approach, applying hydrological modelling integrated with known ecological thresholds and concepts, the optimisation of peat hydraulic properties is examined to determine which of the following conditions peatland ecosystems target during this development: i) maximise carbon accumulation, ii) maximise water storage, or iii) balance carbon profit across hydrological disturbances. Saturated hydraulic conductivity (Ks) and empirical van Genuchten water retention parameter α are shown to provide a first order control on simulated water tensions. Across parameter space, peat profiles with hypothetical combinations of Ks and α show a strong binary tendency towards targeting either water or carbon storage. Actual hydraulic properties from five northern peatlands fall at the interface between these goals, balancing the competing demands of carbon accumulation and water storage. We argue that peat hydraulic properties are thus optimized to maximise water use efficiency and that this optimisation occurs over a centennial to millennial timescale as the peatland develops. This provides a new conceptual framework to characterise peat hydraulic properties across climate zones and between a range of different disturbances, and which can be used to provide benchmarks for peatland design and reclamation.
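The retention side of such a simulation rests on the van Genuchten model the abstract refers to; a minimal implementation is sketched below with invented moss/peat-like parameter sets (the θr, θs, α and n values are assumptions, not the study's fitted values).

```python
import numpy as np

def van_genuchten_theta(psi, theta_r, theta_s, alpha, n):
    """Volumetric water content at matric potential psi (m, negative)."""
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * np.abs(psi)) ** n) ** (-m)   # effective saturation
    return theta_r + (theta_s - theta_r) * se

psi = -np.logspace(-3, 1, 50)      # tensions from 0.001 m to 10 m
# Two assumed peat-like parameter sets: large alpha drains readily under
# small tension; small alpha retains water (the water-storage end-member).
for label, alpha in [("fast-draining peat", 8.0), ("retentive peat", 0.8)]:
    theta = van_genuchten_theta(psi, 0.05, 0.90, alpha, 1.6)
    at_01m = np.interp(-0.1, psi[::-1], theta[::-1])   # theta at -0.1 m tension
    print(f"{label}: theta at -0.1 m = {float(at_01m):.3f}")
```

In the study's framing, Ks and α jointly set the simulated water tensions, so sweeping α (and Ks in a coupled flow model) across such parameter sets is what reveals the binary tendency towards carbon or water storage.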
Synthesis of concentric circular antenna arrays using dragonfly algorithm
NASA Astrophysics Data System (ADS)
Babayigit, B.
2018-05-01
Owing to the strongly non-linear relationship between the array factor and the array elements, the concentric circular antenna array (CCAA) synthesis problem is challenging. Nature-inspired optimisation techniques have been playing an important role in solving array synthesis problems. The dragonfly algorithm (DA) is a novel nature-inspired optimisation technique based on the static and dynamic swarming behaviours of dragonflies in nature. This paper presents the design of CCAAs with low sidelobes using DA. The effectiveness of the proposed DA is investigated in two different cases (with and without a centre element) of two three-ring CCAA designs (with 4-, 6-, 8-element or 8-, 10-, 12-element rings). The radiation pattern for each design case is obtained by finding the optimal excitation weights of the array elements using DA. Simulation results show that the proposed algorithm outperforms other state-of-the-art techniques (symbiotic organisms search, biogeography-based optimisation, sequential quadratic programming, opposition-based gravitational search algorithm, cat swarm optimisation, firefly algorithm, evolutionary programming) for all design cases. DA can be a promising technique for electromagnetic problems.
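The objective being optimised can be sketched as follows: the CCAA array factor for a three-ring 4-, 6-, 8-element layout is evaluated on a pattern cut, and a peak sidelobe level is read off. The ring radii, frequency, excitation taper and main-lobe window are illustrative assumptions; a metaheuristic such as DA would search the per-ring (or per-element) weights to minimise this sidelobe level.

```python
import numpy as np

C = 299792458.0
FREQ = 1e9                       # illustrative frequency, Hz
LAM = C / FREQ
K = 2 * np.pi / LAM

# Three-ring CCAA: 4, 6, 8 elements on rings of radius 0.5, 1.0, 1.5 lambda
rings = [(4, 0.5 * LAM), (6, 1.0 * LAM), (8, 1.5 * LAM)]

def array_factor(theta, weights, phi_cut=0.0):
    """|AF| over polar angles theta for per-ring excitation weights."""
    af = np.zeros_like(theta, dtype=complex)
    for (n_el, radius), w in zip(rings, weights):
        phis = 2 * np.pi * np.arange(n_el) / n_el          # element azimuths
        for p in phis:
            af += w * np.exp(1j * K * radius * np.sin(theta) * np.cos(phi_cut - p))
    return np.abs(af)

theta = np.linspace(-np.pi / 2, np.pi / 2, 1801)
af = array_factor(theta, weights=[1.0, 0.8, 0.5])          # assumed taper
af_db = 20 * np.log10(af / af.max())
# Peak sidelobe: highest level outside a crude main-lobe window of +/-15 deg
side = af_db[np.abs(theta) > np.radians(15)]
print(f"peak sidelobe level: {side.max():.1f} dB")
```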
Parameters estimation for reactive transport: A way to test the validity of a reactive model
NASA Astrophysics Data System (ADS)
Aggarwal, Mohit; Cheikh Anta Ndiaye, Mame; Carrayrou, Jérôme
The chemical parameters used in reactive transport models are not known accurately due to the complexity and heterogeneous conditions of a real domain. We present an efficient algorithm for estimating the chemical parameters using a Monte-Carlo method. Monte-Carlo methods are very robust for the optimisation of the highly non-linear mathematical models describing reactive transport. Reactive transport of tributyltin (TBT) through natural quartz sand at seven different pHs is taken as the test case, and the algorithm is used to estimate the chemical parameters of the sorption of TBT onto the sand. By testing and comparing three surface complexation models, we show that the proposed adsorption model cannot explain the experimental data.
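The Monte-Carlo estimation loop can be sketched as below, with a trivial placeholder standing in for the reactive-transport forward model and synthetic "observations"; in the study the forward model is a full transport-plus-chemistry simulation and the data are the TBT breakthrough measurements at the seven pHs.

```python
import numpy as np

rng = np.random.default_rng(7)

def forward_model(log_k, log_site_density, ph):
    """Placeholder for the reactive-transport forward model: returns a
    simulated sorbed fraction. A real run would solve transport + chemistry."""
    return 1.0 / (1.0 + 10 ** (-(log_k + log_site_density + 0.3 * (ph - 6))))

# Synthetic "observed" sorption at seven pH values (stand-in for the data)
ph_obs = np.array([3, 4, 5, 6, 7, 8, 9], dtype=float)
obs = forward_model(2.1, -1.4, ph_obs) + 0.01 * rng.normal(size=7)

# Monte-Carlo search: sample parameters from broad prior ranges, keep the best
best, best_cost = None, np.inf
for _ in range(20000):
    trial = (rng.uniform(0, 4), rng.uniform(-3, 0))
    cost = np.sum((forward_model(*trial, ph_obs) - obs) ** 2)
    if cost < best_cost:
        best, best_cost = trial, cost
print(f"log K = {best[0]:.2f}, log site density = {best[1]:.2f}, SSE = {best_cost:.2e}")
```

Comparing the best-fit misfit achieved under each candidate surface complexation model is what lets a poorly fitting model be rejected, as the abstract describes.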
Distributed convex optimisation with event-triggered communication in networked systems
NASA Astrophysics Data System (ADS)
Liu, Jiayun; Chen, Weisheng
2016-12-01
This paper studies the distributed convex optimisation problem over directed networks. Motivated by practical considerations, we propose a novel distributed zero-gradient-sum optimisation algorithm with event-triggered communication, in which communication and control updates occur only at discrete instants when a predefined condition is satisfied. Thus, compared with time-driven distributed optimisation algorithms, the proposed algorithm has the advantages of lower energy consumption and lower communication cost. Based on Lyapunov approaches, we show that the proposed algorithm makes the system states asymptotically converge to the solution of the problem exponentially fast, and that Zeno behaviour is excluded. Finally, a simulation example is given to illustrate the effectiveness of the proposed algorithm.
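A minimal sketch of zero-gradient-sum dynamics with an event-triggered broadcast rule is given below, for quadratic local costs on an undirected ring. The paper treats directed networks and proves exclusion of Zeno behaviour; the fixed threshold here only illustrates the mechanism and all numbers are invented.

```python
import numpy as np

# Quadratic local costs f_i(x) = 0.5 * a_i * (x - b_i)^2 on a 4-node ring.
# Starting each node at its local minimiser keeps sum_i f_i'(x_i) = 0,
# the zero-gradient-sum invariant.
a = np.array([1.0, 2.0, 0.5, 1.5])
b = np.array([4.0, -1.0, 2.0, 0.0])
x = b.copy()                        # each node starts at its local minimiser
x_hat = x.copy()                    # last broadcast states
optimum = (a * b).sum() / a.sum()   # closed-form consensus optimum for checking

dt, threshold = 0.05, 1e-3
for step in range(2000):
    # Event trigger: a node broadcasts only when its state drifted enough
    events = np.abs(x - x_hat) > threshold
    x_hat[events] = x[events]
    # ZGS flow on the ring, using neighbours' last *broadcast* states
    lap = 2 * x_hat - np.roll(x_hat, 1) - np.roll(x_hat, -1)
    x -= dt * lap / a               # Hessian-weighted consensus update

print(f"states: {np.round(x, 4)}, true optimum: {optimum:.4f}")
```

Because the gradient sum is conserved by the update, consensus forces the common value to the minimiser of the aggregate cost, up to an error set by the trigger threshold.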
Optimisation of quantitative lung SPECT applied to mild COPD: a software phantom simulation study.
Norberg, Pernilla; Olsson, Anna; Alm Carlsson, Gudrun; Sandborg, Michael; Gustafsson, Agnetha
2015-01-01
The amount of inhomogeneity in a (99m)Tc Technegas single-photon emission computed tomography (SPECT) lung image, caused by reduced ventilation in lung regions affected by chronic obstructive pulmonary disease (COPD), is correlated with disease advancement. A quantitative analysis method measuring these inhomogeneities, the CVT method, was proposed in earlier work. To detect mild COPD, which is a difficult task, optimised parameter values are needed. In this work, the CVT method was optimised with respect to the parameter values of acquisition, reconstruction and analysis. The ordered subset expectation maximisation (OSEM) algorithm was used for reconstructing the lung SPECT images. As a first step towards clinical application of the CVT method in detecting mild COPD, this study was based on simulated SPECT images of an advanced anthropomorphic lung software phantom including respiratory and cardiac motion, where the mild-COPD lung had an overall ventilation reduction of 5%. The best separation between healthy and mild-COPD lung images, as determined using the CVT measure of ventilation inhomogeneity and 125 MBq of (99m)Tc, was obtained using a low-energy high-resolution (LEHR) collimator and a power-6 Butterworth post-filter with a cutoff frequency of 0.6 to 0.7 cm(-1). Sixty-four reconstruction updates and a small kernel size should be used when the whole lung is analysed; for the reduced lung, a greater number of updates and a larger kernel size are needed. A LEHR collimator and 125 MBq of (99m)Tc, together with an optimal combination of cutoff frequency, number of updates and kernel size, gave the best result. A suboptimal selection of cutoff frequency, number of updates or kernel size will reduce the imaging system's ability to detect mild COPD in the lung phantom.
Variational Bayesian identification and prediction of stochastic nonlinear dynamic causal models.
Daunizeau, J; Friston, K J; Kiebel, S J
2009-11-01
In this paper, we describe a general variational Bayesian approach for approximate inference on nonlinear stochastic dynamic models. This scheme extends established approximate inference on hidden-states to cover: (i) nonlinear evolution and observation functions, (ii) unknown parameters and (precision) hyperparameters and (iii) model comparison and prediction under uncertainty. Model identification or inversion entails the estimation of the marginal likelihood or evidence of a model. This difficult integration problem can be finessed by optimising a free-energy bound on the evidence using results from variational calculus. This yields a deterministic update scheme that optimises an approximation to the posterior density on the unknown model variables. We derive such a variational Bayesian scheme in the context of nonlinear stochastic dynamic hierarchical models, for both model identification and time-series prediction. The computational complexity of the scheme is comparable to that of an extended Kalman filter, which is critical when inverting high dimensional models or long time-series. Using Monte-Carlo simulations, we assess the estimation efficiency of this variational Bayesian approach using three stochastic variants of chaotic dynamic systems. We also demonstrate the model comparison capabilities of the method, its self-consistency and its predictive power.
Solar cells based on electrodeposited thin films of ZnS, CdS, CdSSe and CdTe
NASA Astrophysics Data System (ADS)
Weerasinghe, Ajith R.
The motivations of this research were to produce increased-efficiency, low-cost solar cells. The production efficiency of Si solar cells has almost reached its theoretical limit, and reducing the manufacturing cost of Si solar cells is difficult due to the high energy usage in the material purifying and processing stages. Due to their low usage of materials and input energy, thin-film solar cells have the potential to reduce costs, and CdS/CdTe thin-film solar cells are already the cheapest on a $/W basis. The cost of CdTe solar cells can be further reduced if all the semiconducting layers are fabricated using the electrodeposition (ED) method, which is scalable and low in its usage of energy and raw materials; these benefits lead to cost-effective production of semiconductors. The conventional method of fabricating CdS layers routinely produces Cd-containing waste solutions, which adds to the cost of solar cells. ZnS, CdS and CdS(1-x)Sex buffer and window layers and CdTe absorber layers were successfully electrodeposited and explored in this research. These layers were fully characterised using complementary techniques to evaluate their material properties: photoelectrochemical (PEC) studies, optical absorption, X-ray diffraction (XRD), X-ray fluorescence (XRF), scanning electron microscopy (SEM), energy-dispersive X-ray (EDX) spectroscopy, atomic force microscopy (AFM) and Raman spectroscopy. ZnS and CdS thin-film layers were electrodeposited from Na-free chemical precursors to avoid the group I element (Na) and so reduce deterioration of CdTe devices. Deposition parameters such as growth substrate, temperature, pH, growth cathodic voltage, stirring rate, time and chemical concentrations were identified for fabricating the above semiconductors. To further optimise these layers, a heat treatment process specific to each material was developed, and the deposition parameters of the CdTe layers were further optimised. This research programme has demonstrated that electrodeposited ZnS, CdS and CdTe thin-film layers have material characteristics comparable with those reported in the literature and can be used in thin-film solar cell devices. Furthermore, the electrolytes were used for up to two years, reducing wastage even further in comparison with other fabrication methods, such as chemical bath deposition. Several large-area semiconducting layers were successfully fabricated to test the scalability of the method. Nano-rods perpendicular to the glass/FTO surface, with gaps among grains, were observed in the CdS layers. In order to reduce the possible pinholes due to these gaps, deposition of a semiconducting layer that completely covers the substrate was investigated. CdS(1-x)Sex layers were investigated to produce a layer-by-layer deposition of the material. However, it was observed that the surface morphology of CdS(1-x)Sex is a function of the growth parameters, which produced nano-wires, nano-tubes and nano-sheets. This is the first report of this effect for a low-temperature deposition method, minimising the cost of producing this highly photosensitive material for use in various nanotechnology applications. The basic structure investigated was glass/conducting-glass/buffer layer/window material/absorber material/metal. By utilising all the semiconducting layers developed, several solar cell device structures were designed, fabricated and tested.
This included a novel all-electrodeposited multi-layer graded-bandgap device to enhance the absorption of solar photons. The device efficiencies varied from batch to batch, and efficiencies in the range 3-7% were observed. Variations in chemical concentrations, surface states and the presence of pinhole defects in CdS were the main reasons for the range of efficiencies obtained. In the future work section, ways to avoid these variations and to increase efficiencies are identified and presented.
Li, Jinyan; Fong, Simon; Wong, Raymond K; Millham, Richard; Wong, Kelvin K L
2017-06-28
Due to the high-dimensional characteristics of the datasets, we propose a new method based on the Wolf Search Algorithm (WSA) for optimising the feature selection problem. The proposed approach follows the principle articulated by Charles Darwin: 'It is not the strongest of the species that survives, but the most adaptable.' This means that, in the evolution of a swarm, the elitists are motivated to quickly obtain more and better resources. The memory function helps the proposed method avoid repeated searches of the worst positions in order to enhance the effectiveness of the search, while the binary strategy simplifies the feature selection problem into a similar problem of function optimisation. Furthermore, the wrapper strategy combines these strengthened wolves with an extreme learning machine classifier to find a sub-dataset with a reasonable number of features that offers the maximum correctness of global classification models. The experimental results from the six public high-dimensional bioinformatics datasets tested demonstrate that the proposed method can outperform some conventional feature selection methods by up to 29% in classification accuracy, and outperform previous WSAs by up to 99.81% in computational time.
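A bare-bones version of the binary wrapper idea is sketched below: a small swarm of boolean feature masks drifts toward the current leader, a per-wolf memory suppresses revisits to known-bad masks, and cross-validated accuracy is the fitness. Logistic regression is swapped in for the extreme learning machine to keep the sketch self-contained, and all swarm settings are invented.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Stand-in high-dimensional data (the paper used bioinformatics datasets)
X, y = make_classification(n_samples=200, n_features=100, n_informative=8,
                           random_state=0)

def fitness(mask):
    """Cross-validated accuracy of the classifier on the selected features."""
    if mask.sum() == 0:
        return 0.0
    clf = LogisticRegression(max_iter=500)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

n_wolves, n_steps = 8, 20
wolves = rng.random((n_wolves, X.shape[1])) < 0.2   # sparse initial masks
worst = [set() for _ in range(n_wolves)]            # memory of bad positions
scores = np.array([fitness(w) for w in wolves])
for step in range(n_steps):
    leader = wolves[scores.argmax()]
    for i in range(n_wolves):
        cand = wolves[i].copy()
        flip = rng.integers(X.shape[1], size=3)          # local random moves
        cand[flip] = ~cand[flip]
        cand |= leader & (rng.random(X.shape[1]) < 0.1)  # drift toward leader
        key = cand.tobytes()
        if key in worst[i]:
            continue                                     # memory: skip repeats
        s = fitness(cand)
        if s >= scores[i]:
            wolves[i], scores[i] = cand, s
        else:
            worst[i].add(key)

best = wolves[scores.argmax()]
print(f"accuracy {scores.max():.3f} with {int(best.sum())} features")
```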
Vršanská, Martina; Voběrková, Stanislava; Jiménez Jiménez, Ana María; Strmiska, Vladislav; Adam, Vojtěch
2017-01-01
The key to obtaining optimum performance from an enzyme is often a question of devising a suitable enzyme and optimising the conditions for its immobilisation. In this study, laccases from native isolates of the white-rot fungi Fomes fomentarius and Trametes versicolor, obtained from Czech forests, were used. From these, cross-linked enzyme aggregates (CLEAs) were prepared and characterised, with the experimental conditions optimised. Based on the optimisation steps, saturated ammonium sulphate solution (75 wt.%) was used as the precipitating agent, and different concentrations of glutaraldehyde as the cross-linking agent were investigated. CLEAs formed under the optimal conditions showed higher catalytic efficiency and stability (thermal, pH, storage, and against denaturation) as well as high reusability compared with free laccase, for both fungal strains. The best concentration of glutaraldehyde appeared to be 50 mM, and higher cross-linking efficiency was observed at a low temperature (4 °C). An insignificant increase in optimum pH for CLEA laccases with respect to free laccases was observed for both fungi. The results show that the optimum temperature for both free laccase and CLEA laccase was 35 °C for T. versicolor and 30 °C for F. fomentarius. The CLEAs retained 80% of their initial activity for Trametes and 74% for Fomes after 70 days. The prepared cross-linked enzyme aggregates were also investigated for their decolourisation activity on malachite green, bromothymol blue and methyl red dyes. Immobilised CLEA laccase from Trametes versicolor showed 95% decolourisation potential, and CLEA from Fomes fomentarius demonstrated 90% decolourisation efficiency, within 10 h for all dyes used. These results suggest that these CLEAs have promising potential in dye decolourisation. PMID:29295505
Dalvadi, Hitesh; Patel, Nikita; Parmar, Komal
2017-05-01
The aim of the present investigation was to improve the dissolution rate of the poorly soluble drug Zotepine via a self-microemulsifying drug delivery system (SMEDDS). A ternary phase diagram with oil (oleic acid), surfactant (Tween 80) and co-surfactant (PEG 400) at the apices was used to identify the efficient self-microemulsifying region. A Box-Behnken design was implemented to study the influence of the independent variables, and principal component analysis was used to identify the critical variables. The liquid SMEDDS were characterised by macroscopic evaluation, % transmission, emulsification time and in vitro drug release studies. The optimised formulation OL1 was converted into solid SMEDDS (S-SMEDDS) using Aerosil® 200 as an adsorbent in a 3:1 ratio. The S-SMEDDS were characterised by SEM, DSC, globule size (152.1 nm), zeta potential (-28.1 mV), % transmission (98.75%) and in vitro release (86.57% at 30 min). The optimised solid SMEDDS formulation showed faster drug release than a conventional Zotepine tablet.
Bokhari, Awais; Chuah, Lai Fatt; Yusup, Suzana; Klemeš, Jiří Jaromír; Kamil, Ruzaimah Nik M
2016-01-01
Pretreatment of high free fatty acid rubber seed oil (RSO) via an esterification reaction has been investigated using a pilot-scale hydrodynamic cavitation (HC) reactor. Four newly designed orifice plate geometries were studied. Cavities were induced by an assisted double-diaphragm pump in the range of 1-3.5 bar inlet pressure. An optimised plate with 21 holes of 1 mm diameter and an inlet pressure of 3 bar reduced the RSO acid value from 72.36 to 2.64 mg KOH/g within 30 min of reaction time. The reaction parameters were optimised using response surface methodology and found to be: methanol-to-oil ratio of 6:1, catalyst concentration of 8 wt%, reaction time of 30 min and reaction temperature of 55 °C. The reaction time was threefold shorter, and the esterification efficiency fourfold higher, with HC than with mechanical stirring, making the HC process more environmentally friendly. Copyright © 2015 Elsevier Ltd. All rights reserved.
Restivo, Annalaura; Degano, Ilaria; Ribechini, Erika; Colombini, Maria Perla
2014-01-01
A method for the HPLC-MS/MS analysis of phenols, including phenolic acids and naphthoquinones, using an amide-embedded phase column was developed and compared with literature methods based on classical C18 stationary phase columns. RP-Amide is a recently developed polar embedded stationary phase whose wetting properties mean that up to 100% water can be used as an eluent. The increased retention and selectivity for polar compounds, and the possibility of working in 100% water conditions, make this column particularly interesting for the HPLC analysis of phenolic acids and derivatives. In this study, the chromatographic separation was optimised on an HPLC-DAD system and used to separate 13 standard phenolic acids and derivatives. The method was validated on an HPLC-ESI-Q-ToF. Acquisition was performed in negative polarity and MS/MS target mode. Ionisation conditions and acquisition parameters for the Q-ToF detector were investigated by working on collision energies and fragmentor potentials. The performance of the method was fully evaluated on standards. Moreover, several phenol-containing raw materials were analysed: walnut, gall, wine, Malbec grape, French oak, red henna and propolis. Our method allowed us to characterise the phenolic composition of a wide range of matrices and to highlight possible matrix effects.
High-throughput assay for optimising microbial biological control agent production and delivery
USDA-ARS?s Scientific Manuscript database
Lack of technologies to produce and deliver effective biological control agents (BCAs) is a major barrier to their commercialization. A myriad of variables associated with BCA cultivation, formulation, drying, storage, and reconstitution processes complicates agent quality maximization. An efficie...
Optimising productivity, quality and efficiency in community nursing.
Holland, Agi; McIntosh, Brian
2012-08-01
By 2014 the NHS is expected to make £21 billion in efficiency savings and increase productivity by 6% per annum, while maintaining or improving the quality of care. Given that the cost of the 1.7 million-strong workforce represents 60% of the NHS budget, changes are likely. This context of innovation and cost-effectiveness has resulted in an ever-greater emphasis on fully engaging and supporting community nursing.
New Trends in Forging Technologies
NASA Astrophysics Data System (ADS)
Behrens, B.-A.; Hagen, T.; Knigge, J.; Elgaly, I.; Hadifi, T.; Bouguecha, A.
2011-05-01
Limited natural resources increase the demand for highly efficient machinery and transportation means. New energy-saving mobility concepts call for design optimisation through downsizing of components and the choice of corrosion-resistant materials possessing high strength-to-density ratios. Component downsizing can be performed either by constructive structural optimisation or by substituting heavy materials with lighter high-strength ones. In this context, forging plays an important role in manufacturing load-optimised structural components. At the Institute of Metal Forming and Metal-Forming Machines (IFUM), various innovative forging technologies have been developed. With regard to structural optimisation, different strategies for localised reinforcement of components were investigated. Locally induced strain hardening by means of cold forging under a superimposed hydrostatic pressure could be realised. In addition, controlled martensitic zones could be created through forming-induced phase conversion in metastable austenitic steels. Other research focused on the replacement of heavy steel parts with high-strength non-ferrous alloys or hybrid material compounds. Several forging processes for magnesium, aluminium and titanium alloys for different aeronautical and automotive applications were developed, considering the whole process chain from material characterisation via simulation-based process design to the production of the parts. The feasibility of forging complex-shaped geometries using these alloys was confirmed. In spite of the difficulties encountered due to machine noise and high temperature, the acoustic emission (AE) technique has been successfully applied for online monitoring of forging defects. A new AE analysis algorithm has been developed, so that different signal patterns due to various events, such as product/die cracking or die wear, can be detected and classified. Further, the feasibility of the mentioned forging technologies was proven by means of finite element analysis (FEA). For example, the integrity of forging dies with respect to crack initiation due to thermo-mechanical fatigue, as well as the ductile damage of forgings, was investigated with the help of cumulative damage models. In this paper, some of the mentioned approaches are described.
Strods, Arnis; Ose, Velta; Bogans, Janis; Cielens, Indulis; Kalnins, Gints; Radovica, Ilze; Kazaks, Andris; Pumpens, Paul; Renhofa, Regina
2015-06-26
Hepatitis B virus (HBV) core (HBc) virus-like particles (VLPs) are one of the most powerful protein engineering tools utilised to expose immunological epitopes and/or cell-targeting signals and for the packaging of genetic material and immune stimulatory sequences. Although HBc VLPs and their numerous derivatives are produced in highly efficient bacterial and yeast expression systems, the existing purification and packaging protocols are not sufficiently optimised and standardised. Here, a simple alkaline treatment method was employed for the complete removal of internal RNA from bacteria- and yeast-produced HBc VLPs and for the conversion of these VLPs into empty particles, without any damage to the VLP structure. The empty HBc VLPs were able to effectively package the added DNA and RNA sequences. Furthermore, the alkaline hydrolysis technology appeared efficient for the purification and packaging of four different HBc variants carrying lysine residues on the HBc VLP spikes. Utilising the introduced lysine residues and the intrinsic aspartic and glutamic acid residues exposed on the tips of the HBc spikes for chemical coupling of the chosen peptide and/or nucleic acid sequences ensured a standard and easy protocol for the further development of versatile HBc VLP-based vaccine and gene therapy applications.
Henczka, Marek; Djas, Małgorzata; Filipek, Katarzyna
2013-01-01
A direct plating method for the detection and enumeration of Alicyclobacillus acidoterrestris spores has been optimised. The results of the application of four types of growth media (BAT agar, YSG agar, K agar and SK agar) regarding the recovery and enumeration of A. acidoterrestris spores were compared. The influence of the type of applied growth medium, heat shock conditions, incubation temperature, incubation time, plating technique and the presence of apple juice in the sample on the accuracy of the detection and enumeration of A. acidoterrestris spores was investigated. Among the investigated media, YSG agar was the most sensitive medium, and its application resulted in the highest recovery of A. acidoterrestris spores, while K agar and BAT agar were the least suitable media. The effect of the heat shock time on the recovery of spores was negligible. When there was a low concentration of spores in a sample, the membrane filtration method was superior to the spread plating method. The obtained results show that heat shock carried out at 80°C for 10 min and plating samples in combination with membrane filtration on YSG agar, followed by incubation at 46°C for 3 days provided the optimal conditions for the detection and enumeration of A. acidoterrestris spores. Application of the presented method allows highly efficient, fast and sensitive identification and enumeration of A. acidoterrestris spores in food products. This methodology will be useful for the fruit juice industry for identifying products contaminated with A. acidoterrestris spores, and its practical application may prevent economic losses for manufacturers.
Novel Approach on the Optimisation of Mid-Course Corrections Along Interplanetary Trajectories
NASA Astrophysics Data System (ADS)
Iorfida, Elisabetta; Palmer, Phil; Roberts, Mark
The primer vector theory, first proposed by Lawden, defines a set of necessary conditions to characterise whether an impulsive thrust trajectory is optimal with respect to propellant usage, within a two-body problem context. If the conditions are not satisfied, one or more potential intermediate impulses are performed along the transfer arc in order to lower the overall cost. The method is based on the propagation of the state transition matrix and on the solution of a boundary value problem, which introduces considerable mathematical and computational complexity. In this paper, a different approach is introduced. It is based on a polar coordinates transformation of the primer vector, which allows its in-plane and out-of-plane components to be decoupled. The out-of-plane component is solved analytically, while for the in-plane components a Hamiltonian approximation is made. The novel procedure reduces the mathematical complexity and the computational cost of Lawden's problem and also gives a different perspective on the optimisation of a transfer trajectory.
NASA Astrophysics Data System (ADS)
Briggs, J. P.; Pennycook, S. J.; Fergusson, J. R.; Jäykkä, J.; Shellard, E. P. S.
2016-04-01
We present a case study describing efforts to optimise and modernise "Modal", the simulation and analysis pipeline used by the Planck satellite experiment for constraining general non-Gaussian models of the early universe via the bispectrum (or three-point correlator) of the cosmic microwave background radiation. We focus on one particular element of the code: the projection of bispectra from the end of inflation to the spherical shell at decoupling, which defines the CMB we observe today. This code involves a three-dimensional inner product between two functions, one of which requires an integral, on a non-rectangular domain containing a sparse grid. We show that by employing separable methods this calculation can be reduced to a one-dimensional summation plus two integrations, reducing the overall dimensionality from four to three. The introduction of separable functions also solves the issue of the non-rectangular sparse grid. This separable method can become unstable in certain scenarios and so the slower non-separable integral must be calculated instead. We present a discussion of the optimisation of both approaches. We demonstrate significant speed-ups of ≈100×, arising from a combination of algorithmic improvements and architecture-aware optimisations targeted at improving thread and vectorisation behaviour. The resulting MPI/OpenMP hybrid code is capable of executing on clusters containing processors and/or coprocessors, with strong-scaling efficiency of 98.6% on up to 16 nodes. We find that a single coprocessor outperforms two processor sockets by a factor of 1.3× and that running the same code across a combination of both microarchitectures improves performance-per-node by a factor of 3.38×. By making bispectrum calculations competitive with those for the power spectrum (or two-point correlator) we are now able to consider joint analysis for cosmological science exploitation of new data.
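To make the separable reduction concrete, the sketch below (illustrative grids and mode counts, not the Modal code itself) shows how a three-dimensional inner product between two functions that each factorise into sums of products of 1-D modes collapses into products of one-dimensional inner products:

```python
# A minimal numpy sketch of the separable-reduction idea described above:
# if both functions factorise as sums of products of 1-D modes, a 3-D
# inner product collapses into products of 1-D inner products.
# The grid size and mode counts here are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
N, P, Q = 64, 5, 4                      # grid size and number of 1-D modes

A = rng.normal(size=(P, N))             # f(x,y,z) = sum_p A_p(x) B_p(y) C_p(z)
B = rng.normal(size=(P, N))
C = rng.normal(size=(P, N))
D = rng.normal(size=(Q, N))             # g(x,y,z) = sum_q D_q(x) E_q(y) F_q(z)
E = rng.normal(size=(Q, N))
F = rng.normal(size=(Q, N))

# Direct evaluation: build the full 3-D arrays and sum N^3 terms.
f = np.einsum('px,py,pz->xyz', A, B, C)
g = np.einsum('qx,qy,qz->xyz', D, E, F)
direct = np.sum(f * g)

# Separable evaluation: only P*Q one-dimensional inner products are needed.
separable = np.einsum('pq,pq,pq->', A @ D.T, B @ E.T, C @ F.T)

print(direct, separable)                # agree to machine precision
```

The two results agree to machine precision, but the separable route only ever touches P×Q pairs of 1-D sums instead of the full N³ grid, which is the dimensionality reduction the abstract describes.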
Statistical optimisation techniques in fatigue signal editing problem
NASA Astrophysics Data System (ADS)
Nopiah, Z. M.; Osman, M. H.; Baharin, N.; Abdullah, S.
2015-02-01
Success in fatigue signal editing is determined by the level of length reduction achieved without compromising statistical constraints. A great reduction rate can be achieved by removing small amplitude cycles from the recorded signal. The long recorded signal sometimes renders the cycle-to-cycle editing process daunting. This has encouraged researchers to focus on the segment-based approach. This paper discusses the joint application of the Running Damage Extraction (RDE) technique and a single constrained Genetic Algorithm (GA) in fatigue signal editing optimisation. In the first section, the RDE technique is used to restructure and summarise the fatigue strain. This technique combines the overlapping window and fatigue strain-life models. It is designed to identify and isolate the fatigue events that exist in the variable amplitude strain data into different segments whereby the retention of statistical parameters and the vibration energy are considered. In the second section, the fatigue data editing problem is formulated as a constrained single optimisation problem that can be solved using the GA method. The GA produces the shortest edited fatigue signal by selecting appropriate segments from a pool of labelled segments. Challenges arise due to constraints on the segment selection by deviation level over three signal properties, namely cumulative fatigue damage, root mean square and kurtosis values. Experimental results over several case studies show that the idea of solving fatigue signal editing within a framework of optimisation is effective and automatic, and that the GA is robust for constrained segment selection.
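The constrained segment-selection step can be pictured with the toy GA below. It is a sketch under assumed synthetic data and a crude damage proxy, not the authors' implementation: a bit-string marks which segments are retained, and a penalty enforces 10% deviation bands on damage, root mean square and kurtosis.

```python
# Toy constrained GA for fatigue-segment selection (illustrative only):
# minimise retained length while keeping damage, RMS and kurtosis of the
# edited signal within 10% of the original. The damage proxy sum(|x|^3)
# and all GA settings are assumptions for the sketch.
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(1)
segments = [rng.normal(0, s, rng.integers(50, 200)) for s in rng.uniform(0.2, 2.0, 40)]
full = np.concatenate(segments)

def damage(x):                 # crude proxy: high-amplitude cycles dominate damage
    return np.sum(np.abs(x) ** 3)

ref = np.array([damage(full), np.sqrt(np.mean(full**2)), kurtosis(full)])

def fitness(mask):
    sel = [s for s, m in zip(segments, mask) if m]
    if not sel:
        return -np.inf
    x = np.concatenate(sel)
    stats = np.array([damage(x), np.sqrt(np.mean(x**2)), kurtosis(x)])
    viol = np.maximum(np.abs(stats - ref) / np.abs(ref) - 0.10, 0.0)  # 10% bands
    return -len(x) - 1e6 * viol.sum()        # shortest feasible signal wins

pop = rng.integers(0, 2, (60, len(segments)))
for _ in range(200):                          # generational GA, uniform crossover
    fit = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(fit)][-30:]
    mates = parents[rng.integers(0, 30, (60, 2))]
    cross = rng.integers(0, 2, (60, len(segments))).astype(bool)
    pop = np.where(cross, mates[:, 0], mates[:, 1])
    pop ^= (rng.random(pop.shape) < 0.01).astype(pop.dtype)  # bit-flip mutation

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("kept", best.sum(), "of", len(segments), "segments")
```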
Li, Jinyan; Fong, Simon; Sung, Yunsick; Cho, Kyungeun; Wong, Raymond; Wong, Kelvin K L
2016-01-01
An imbalanced dataset is defined as a training dataset that has imbalanced proportions of data in both interesting and uninteresting classes. Often in biomedical applications, samples from the class of interest are rare in a population, such as medical anomalies, positive clinical tests, and particular diseases. Although the target samples in the original dataset are small in number, the induction of a classification model over such training data leads to poor prediction performance due to insufficient training from the minority class. In this paper, we use a novel class-balancing method named adaptive swarm cluster-based dynamic multi-objective synthetic minority oversampling technique (ASCB_DmSMOTE) to solve this imbalanced dataset problem, which is common in biomedical applications. The proposed method combines under-sampling and over-sampling into a swarm optimisation algorithm. It adaptively selects suitable parameters for the rebalancing algorithm to find the best solution. Compared with the other versions of the SMOTE algorithm, significant improvements, which include higher accuracy and credibility, are observed with ASCB_DmSMOTE. Our proposed method tactfully combines the two rebalancing techniques. It reasonably re-allocates the majority class in the details and dynamically optimises the two parameters of SMOTE to synthesise a reasonable scale of the minority class for each clustered sub-imbalanced dataset. The proposed method ultimately outperforms other conventional methods, attaining higher credibility and greater accuracy of the classification model.
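ASCB_DmSMOTE itself is not publicly packaged, but the core two-step idea of combining over-sampling and under-sampling can be sketched with the imbalanced-learn library; the dataset and the two sampling ratios below are illustrative assumptions, not values from the paper:

```python
# A minimal baseline (not ASCB_DmSMOTE itself): over-sample the minority
# class with SMOTE, then under-sample the majority class, in one pipeline.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler
from imblearn.pipeline import Pipeline

X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)

pipe = Pipeline([
    ("smote", SMOTE(sampling_strategy=0.30, random_state=0)),   # minority -> 30% of majority
    ("under", RandomUnderSampler(sampling_strategy=0.60, random_state=0)),
    ("clf", DecisionTreeClassifier(random_state=0)),
])

# F1 on the minority class is a fairer yardstick than raw accuracy here.
print(cross_val_score(pipe, X, y, scoring="f1", cv=5).mean())
```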
Evaluation of a High Throughput Starch Analysis Optimised for Wood
Bellasio, Chandra; Fini, Alessio; Ferrini, Francesco
2014-01-01
Starch is the most important long-term reserve in trees, and the analysis of starch is therefore a useful source of physiological information. Currently published protocols for wood starch analysis impose several limitations, such as long procedures and a neutralisation step. The high-throughput standard protocols for starch analysis in food and feed represent a valuable alternative. However, they have not been optimised or tested with woody samples. These have particular chemical and structural characteristics, including the presence of interfering secondary metabolites, low reactivity of starch, and low starch content. In this study, a standard method for starch analysis used for food and feed (AOAC standard method 996.11) was optimised to improve precision and accuracy for the analysis of starch in wood. Key modifications were introduced in the digestion conditions and in the glucose assay. The optimised protocol was then evaluated through 430 starch analyses of standards at known starch content, matrix polysaccharides, and wood collected from three organs (roots, twigs, mature wood) of four species (coniferous and flowering plants). The optimised protocol proved to be remarkably precise and accurate (3%), suitable for high-throughput routine analysis (35 samples a day) of specimens with a starch content between 21 µg and 40 mg. Samples may include lignified organs of coniferous and flowering plants and non-lignified organs, such as leaves, fruits and rhizomes. PMID:24523863
De Tobel, J; Radesh, P; Vandermeulen, D; Thevissen, P W
2017-12-01
Automated methods to evaluate growth of hand and wrist bones on radiographs and magnetic resonance imaging have been developed. They can be applied to estimate age in children and subadults. Automated methods require the software to (1) recognise the region of interest in the image(s), (2) evaluate the degree of development and (3) correlate this to the age of the subject based on a reference population. For age estimation based on third molars an automated method for step (1) has been presented for 3D magnetic resonance imaging and is currently being optimised (Unterpirker et al. 2015). To develop an automated method for step (2) based on lower third molars on panoramic radiographs. A modified Demirjian staging technique including ten developmental stages was developed. Twenty panoramic radiographs per stage per gender were retrospectively selected for FDI element 38. Two observers decided in consensus about the stages. When necessary, a third observer acted as a referee to establish the reference stage for the considered third molar. This set of radiographs was used as training data for machine learning algorithms for automated staging. First, image contrast settings were optimised to evaluate the third molar of interest and a rectangular bounding box was placed around it in a standardised way using Adobe Photoshop CC 2017 software. This bounding box indicated the region of interest for the next step. Second, several machine learning algorithms available in MATLAB R2017a software were applied for automated stage recognition. Third, the classification performance was evaluated in a 5-fold cross-validation scenario, using different validation metrics (accuracy, Rank-N recognition rate, mean absolute difference, linear kappa coefficient). Transfer Learning as a type of Deep Learning Convolutional Neural Network approach outperformed all other tested approaches. Mean accuracy equalled 0.51, mean absolute difference was 0.6 stages and mean linearly weighted kappa was 0.82. The overall performance of the presented automated pilot technique to stage lower third molar development on panoramic radiographs was similar to staging by human observers. It will be further optimised in future research, since it represents a necessary step to achieve a fully automated dental age estimation method, which to date is not available.
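The study itself used MATLAB R2017a; a rough Python analogue of the Transfer Learning step, with a hypothetical image folder and placeholder hyper-parameters, might look like this:

```python
# Hedged sketch of transfer learning for ten-stage molar staging (not the
# authors' code): reuse an ImageNet-pretrained CNN and retrain only a new
# 10-way head. "molar_crops/" and all hyper-parameters are placeholders.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),   # radiograph crops -> 3 channels
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
data = datasets.ImageFolder("molar_crops/", transform=tfm)   # hypothetical folder
loader = torch.utils.data.DataLoader(data, batch_size=16, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")   # torchvision >= 0.13 syntax
for p in model.parameters():                       # freeze the pretrained trunk
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 10)     # ten developmental stages

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(5):
    for xb, yb in loader:
        opt.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        opt.step()
```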
Rotational degree-of-freedom synthesis: An optimised finite difference method for non-exact data
NASA Astrophysics Data System (ADS)
Gibbons, T. J.; Öztürk, E.; Sims, N. D.
2018-01-01
Measuring the rotational dynamic behaviour of a structure is important for many areas of dynamics such as passive vibration control, acoustics, and model updating. Specialist and dedicated equipment is often needed, unless the rotational degree-of-freedom is synthesised based upon translational data. However, this involves numerically differentiating the translational mode shapes to approximate the rotational modes, for example using a finite difference algorithm. A key challenge with this approach is choosing the measurement spacing between the data points, an issue which has often been overlooked in the published literature. The present contribution will for the first time prove that, for beam-like structures, the use of a finite difference approach can be unstable when using non-exact measured data and a small measurement spacing. Then, a generalised analytical error analysis is used to propose an optimised measurement spacing, which balances the numerical error of the finite difference equation with the propagation error from the perturbed data. The approach is demonstrated using both numerical and experimental investigations. It is shown that by obtaining a small number of test measurements it is possible to optimise the measurement accuracy, without any further assumptions on the boundary conditions of the structure.
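The trade-off the paper formalises can be reproduced numerically: for a central difference on noisy samples, truncation error falls as h² while noise propagation grows as the noise level divided by h, so an intermediate spacing is optimal. The mode shape and noise level below are illustrative assumptions:

```python
# Sketch of the spacing trade-off for a noisy central difference.
import numpy as np

rng = np.random.default_rng(0)
eps = 1e-4                                  # assumed measurement noise level
phi = lambda x: np.sin(3.0 * x)             # "mode shape"; true slope 3*cos(3x)
x0, true = 0.4, 3.0 * np.cos(3 * 0.4)

hs = np.logspace(-5, -0.5, 40)
errs = []
for h in hs:
    trials = []
    for _ in range(200):                    # average over noise realisations
        f_plus = phi(x0 + h) + rng.normal(0, eps)
        f_minus = phi(x0 - h) + rng.normal(0, eps)
        trials.append((f_plus - f_minus) / (2 * h))
    errs.append(np.mean(np.abs(np.array(trials) - true)))

# Balancing h^2*|phi'''|/6 against eps/h predicts h* ~ (3*eps/|phi'''|)**(1/3).
print("empirical best h:", hs[int(np.argmin(errs))])
print("predicted  h*  :", (3 * eps / abs(-27 * np.cos(3 * 0.4))) ** (1 / 3))
```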
Prediction of road traffic death rate using neural networks optimised by genetic algorithm.
Jafari, Seyed Ali; Jahandideh, Sepideh; Jahandideh, Mina; Asadabadi, Ebrahim Barzegari
2015-01-01
Road traffic injuries (RTIs) are recognised as a major public health problem at global, regional and national levels. Therefore, prediction of the road traffic death rate will be helpful in its management. On this basis, we used an artificial neural network model optimised through a genetic algorithm to predict mortality. In this study, a five-fold cross-validation procedure on a data set containing a total of 178 countries was used to verify the performance of the models. The best-fit model was selected according to the root mean square error (RMSE). The genetic algorithm, which had not previously been applied to mortality prediction to this extent, showed high performance; the lowest RMSE obtained was 0.0808. Such satisfactory results can be attributed to the use of the genetic algorithm as a powerful optimiser that selects the best input feature set to feed into the neural networks. Seven factors were identified with high accuracy as having the strongest effect on the road traffic mortality rate. The results show that our model is promising and may play a useful role in developing a better method for assessing the influence of road traffic mortality risk factors.
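A generic GA feature-selection wrapper in this spirit (synthetic data and arbitrary GA settings, not the study's configuration) can be sketched as follows:

```python
# Illustrative GA feature selection: a bit-string chooses input features,
# fitness is cross-validated RMSE of a small neural network. The tiny net
# and few iterations keep the sketch quick; data are synthetic placeholders.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X, y = make_regression(n_samples=178, n_features=20, n_informative=7,
                       noise=5.0, random_state=0)

def rmse(mask):
    if mask.sum() == 0:
        return np.inf
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=500, random_state=0)
    scores = cross_val_score(net, X[:, mask.astype(bool)], y,
                             scoring="neg_root_mean_squared_error", cv=5)
    return -scores.mean()

pop = rng.integers(0, 2, (16, X.shape[1]))
for _ in range(10):
    errors = np.array([rmse(ind) for ind in pop])
    elite = pop[np.argsort(errors)][:8]                  # keep the best masks
    children = elite[rng.integers(0, 8, (16, 2))]
    cut = rng.integers(1, X.shape[1], 16)                # one-point crossover
    idx = np.arange(X.shape[1])
    pop = np.where(idx[None, :] < cut[:, None], children[:, 0], children[:, 1])
    pop ^= (rng.random(pop.shape) < 0.05).astype(pop.dtype)

best = pop[np.argmin([rmse(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
```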
Person-centred medicines optimisation policy in England: an agenda for research on polypharmacy.
Heaton, Janet; Britten, Nicky; Krska, Janet; Reeve, Joanne
2017-01-01
Aim: To examine how patient perspectives and person-centred care values have been represented in documents on medicines optimisation policy in England. There has been growing support in England for a policy of medicines optimisation as a response to the rise of problematic polypharmacy. Conceptually, medicines optimisation differs from the medicines management model of prescribing in being based around the patient rather than processes and systems. This critical examination of current official and independent policy documents questions how central the patient is in them and whether relevant evidence has been utilised in their development. Method: A documentary analysis of reports on medicines optimisation published by the Royal Pharmaceutical Society (RPS), The King's Fund and the National Institute for Health and Care Excellence since 2013. The analysis draws on a non-systematic review of research on patient experiences of using medicines. Findings: The reports varied in their inclusion of patient perspectives and person-centred care values, and in the extent to which they drew on evidence from research on patients' experiences of polypharmacy and medicines use. In the RPS report, medicines optimisation is represented as being a 'step change' from medicines management, in contrast to the other documents which suggest that it is facilitated by the systems and processes that comprise the latter model. Only The King's Fund report considered evidence from qualitative studies of people's use of medicines. However, these studies are not without their limitations. We suggest five ways in which researchers could improve this evidence base and so inform the development of future policy: by facilitating reviews of existing research; conducting studies of patient experiences of polypharmacy and multimorbidity; evaluating medicines optimisation interventions; making better use of relevant theories, concepts and tools; and improving patient and public involvement in research and in guideline development.
In‐loop flow [11C]CO2 fixation and radiosynthesis of N,N′‐[11C]dibenzylurea
Downey, Joseph; Bongarzone, Salvatore; Hader, Stefan
2017-01-01
Cyclotron‐produced carbon‐11 is a highly valuable radionuclide for the production of positron emission tomography (PET) radiotracers. It is typically produced as relatively unreactive carbon‐11 carbon dioxide ([11C]CO2), which is most commonly converted into a more reactive precursor for synthesis of PET radiotracers. The development of [11C]CO2 fixation methods has more recently enabled the direct radiolabelling of a diverse array of structures directly from [11C]CO2, and the advantages afforded by the use of a loop‐based system used in 11C‐methylation and 11C‐carboxylation reactions inspired us to apply the [11C]CO2 fixation “in‐loop.” In this work, we developed and investigated a new ethylene tetrafluoroethylene (ETFE) loop‐based [11C]CO2 fixation method, enabling the fast and efficient, direct‐from‐cyclotron, in‐loop trapping of [11C]CO2 using mixed DBU/amine solutions. An optimised protocol was integrated into a proof‐of‐concept in‐loop flow radiosynthesis of N,N′‐[11C]dibenzylurea. This reaction exhibited an average 78% trapping efficiency and a crude radiochemical purity of 83% (determined by radio‐HPLC), giving an overall nonisolated radiochemical yield of 72% (decay‐corrected) within just 3 minutes from end of bombardment. This proof‐of‐concept reaction has demonstrated that efficient [11C]CO2 fixation can be achieved in a low‐volume (150 μL) ETFE loop and that this can be easily integrated into a rapid in‐loop flow radiosynthesis of carbon‐11–labelled products. This new in‐loop methodology will allow fast radiolabelling reactions to be performed using cheap/disposable ETFE tubing setup (ideal for good manufacturing practice production) thereby contributing to the widespread usage of [11C]CO2 trapping/fixation reactions for the production of PET radiotracers. PMID:28977686
Consensus for multi-agent systems with time-varying input delays
NASA Astrophysics Data System (ADS)
Yuan, Chengzhi; Wu, Fen
2017-10-01
This paper addresses the consensus control problem for linear multi-agent systems subject to uniform time-varying input delays and external disturbance. A novel state-feedback consensus protocol is proposed under the integral quadratic constraint (IQC) framework, which utilises not only the relative state information from neighbouring agents but also real-time delay information by means of the dynamic IQC system states for feedback control. Based on this new consensus protocol, the associated IQC-based control synthesis conditions are established and fully characterised as linear matrix inequalities (LMIs), such that the consensus control solution with optimal H∞ disturbance attenuation performance can be synthesised efficiently via convex optimisation. A numerical example is used to demonstrate the proposed approach.
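The full IQC/LMI synthesis is beyond a short sketch, but the effect of a relative-state protocol acting on delayed information can be simulated directly; the graph, gain and delay below are illustrative assumptions, not values from the paper:

```python
# Minimal simulation of delayed consensus: single-integrator agents apply
# relative-state feedback computed from states that are tau seconds old.
import numpy as np

A = np.array([[0, 1, 0, 1],          # undirected ring of four agents
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], float)
L = np.diag(A.sum(1)) - A            # graph Laplacian
k, tau, dt = 0.4, 0.15, 0.01         # gain, input delay (s), step size
steps, d = 3000, int(round(tau / dt))

x = np.array([1.0, -2.0, 0.5, 3.0])
hist = [x.copy()] * (d + 1)          # buffer holding delayed states
for _ in range(steps):
    u = -k * L @ hist[0]             # protocol uses tau-old information
    x = x + dt * u
    hist = hist[1:] + [x.copy()]

print("spread after 30 s:", x.max() - x.min())   # shrinks towards consensus
```

For this gain and graph the delay satisfies the usual small-delay stability margin, so the disagreement decays; larger k or tau can destabilise the loop, which is exactly the situation the IQC machinery is designed to certify against.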
NASA Astrophysics Data System (ADS)
Kharbouch, Yassine; Mimet, Abdelaziz; El Ganaoui, Mohammed; Ouhsaine, Lahoucine
2018-07-01
This study investigates the thermal energy potential and economic feasibility of integrating phase change materials (PCMs) into an air-conditioned family household, considering different climate zones in Morocco. A simulation-based optimisation was carried out to define the optimal design of a PCM-enhanced household envelope for thermal energy effectiveness and cost-effectiveness among predefined candidate solutions. The optimisation methodology couples EnergyPlus® as a dynamic simulation tool with GenOpt® as an optimisation tool. Using the obtained optimum design strategies, a thermal energy and economic analysis was carried out to investigate the feasibility of PCM integration in Moroccan constructions. The results show that the PCM-integrated household envelope reduces the cooling/heating thermal energy demand relative to a reference household without PCM. For the cost-effectiveness optimisation, however, economic feasibility remains insufficient under current PCM market conditions. The optimal design parameter results are also analysed.
Evolving optimised decision rules for intrusion detection using particle swarm paradigm
NASA Astrophysics Data System (ADS)
Sivatha Sindhu, Siva S.; Geetha, S.; Kannan, A.
2012-12-01
The aim of this article is to construct a practical intrusion detection system (IDS) that properly analyses the statistics of network traffic patterns and classifies them as normal or anomalous. The objective is to show that the choice of effective network traffic features and a proficient machine-learning paradigm enhances the detection accuracy of an IDS. A rule-based approach with a family of six decision tree classifiers, namely Decision Stump, C4.5, Naive Bayes Tree, Random Forest, Random Tree and the Representative Tree model, is introduced to detect anomalous network patterns. In particular, the proposed swarm optimisation-based approach selects the instances that compose the training set, and the optimised decision trees operate over this training set, producing classification rules with improved coverage, classification capability and generalisation ability. Experiments with the Knowledge Discovery and Data mining (KDD) data set, which contains information on traffic patterns during normal and intrusive behaviour, show that the proposed algorithm produces optimised decision rules and outperforms other machine-learning algorithms.
NASA Astrophysics Data System (ADS)
Grundmann, J.; Schütze, N.; Heck, V.
2014-09-01
Groundwater systems in arid coastal regions are particularly at risk due to limited potential for groundwater replenishment and increasing water demand, caused by a continuously growing population. For ensuring a sustainable management of those regions, we developed a new simulation-based integrated water management system. The management system unites process modelling with artificial intelligence tools and evolutionary optimisation techniques for managing both water quality and water quantity of a strongly coupled groundwater-agriculture system. Due to the large number of decision variables, a decomposition approach is applied to separate the original large optimisation problem into smaller, independent optimisation problems which finally allow for faster and more reliable solutions. It consists of an analytical inner optimisation loop to achieve a most profitable agricultural production for a given amount of water and an outer simulation-based optimisation loop to find the optimal groundwater abstraction pattern. Thereby, the behaviour of farms is described by crop-water-production functions and the aquifer response, including the seawater interface, is simulated by an artificial neural network. The methodology is applied exemplarily for the south Batinah region/Oman, which is affected by saltwater intrusion into a coastal aquifer system due to excessive groundwater withdrawal for irrigated agriculture. Due to contradicting objectives like profit-oriented agriculture vs aquifer sustainability, a multi-objective optimisation is performed which can provide sustainable solutions for water and agricultural management over long-term periods at farm and regional scales in respect of water resources, environment, and socio-economic development.
NASA Astrophysics Data System (ADS)
Sheikholeslami, Ghazal; Griffiths, Jonathan; Dearden, Geoff; Edwardson, Stuart P.
Laser forming (LF) has been shown to be a viable alternative to form automotive grade advanced high strength steels (AHSS). Owing to their high strength, heat sensitivity and low conventional formability, these steels show early fracture, larger springback, batch-to-batch inconsistency and high tool wear. In this paper, optimisation of the LF process parameters has been conducted to further understand the impact of a surface heat treatment on DP1000. An FE numerical simulation has been developed to analyse the dynamic thermo-mechanical effects. This has been verified against empirical data. The goal of the optimisation has been to develop a usable process window for the LF of AHSS within strict metallurgical constraints. Results indicate it is possible to LF this material; however, a complex relationship has been found between the generation and maintenance of hardness values in the heated zone. A laser surface hardening effect has been observed that could be beneficial to the efficiency of the process.
Optimising the neutron environment of Radiation Portal Monitors: A computational study
NASA Astrophysics Data System (ADS)
Gilbert, Mark R.; Ghani, Zamir; McMillan, John E.; Packer, Lee W.
2015-09-01
Efficient and reliable detection of radiological or nuclear threats is a crucial part of national and international efforts to prevent terrorist activities. Radiation Portal Monitors (RPMs), which are deployed worldwide, are intended to interdict smuggled fissile material by detecting emissions of neutrons and gamma rays. However, considering the range and variety of threat sources, vehicular and shielding scenarios, and that only a small signature is present, it is important that the design of the RPMs allows these signatures to be accurately differentiated from the environmental background. Using Monte-Carlo neutron-transport simulations of a model 3He detector system we have conducted a parameter study to identify the optimum combination of detector shielding, moderation, and collimation that maximises the sensitivity of neutron-sensitive RPMs. These structures, which could be simply and cost-effectively added to existing RPMs, can improve the detector response by more than a factor of two relative to an unmodified, bare design. Furthermore, optimisation of the air gap surrounding the helium tubes also improves detector efficiency.
NASA Astrophysics Data System (ADS)
Rodigast, M.; Mutzel, A.; Iinuma, Y.; Haferkorn, S.; Herrmann, H.
2015-01-01
Carbonyl compounds are ubiquitous in the atmosphere and are either emitted primarily from anthropogenic and biogenic sources or produced secondarily from the oxidation of volatile organic compounds (VOCs). Despite a number of studies on the quantification of carbonyl compounds, a comprehensive description of optimised methods for atmospherically relevant carbonyl compounds is scarce. Thus, a method was systematically characterised and improved to quantify carbonyl compounds. Quantification with the present method can be carried out for each carbonyl compound sampled in the aqueous phase regardless of its source. The method optimisation was conducted for seven atmospherically relevant carbonyl compounds including acrolein, benzaldehyde, glyoxal, methyl glyoxal, methacrolein, methyl vinyl ketone and 2,3-butanedione. O-(2,3,4,5,6-pentafluorobenzyl)hydroxylamine hydrochloride (PFBHA) was used as the derivatisation reagent and the formed oximes were detected by gas chromatography/mass spectrometry (GC/MS). The main advantage of the improved method presented in this study is the low detection limit, in the range of 0.01 to 0.17 μmol L-1 depending on the carbonyl compound. Furthermore, the best results were found for extraction with dichloromethane for 30 min, followed by derivatisation with 0.43 mg mL-1 PFBHA for 24 h at a pH value of 3. The optimised method was evaluated in the present study by the OH radical initiated oxidation of 3-methylbutanone in the aqueous phase. Methyl glyoxal and 2,3-butanedione were found to be oxidation products in the samples, with a yield of 2% for methyl glyoxal and 14% for 2,3-butanedione.
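Detection limits of this kind are commonly derived from a calibration line as LOD = 3.3 s/m and LOQ = 10 s/m, with s the residual standard deviation and m the slope; a minimal sketch with invented calibration data (not taken from the study):

```python
# Sketch: LOD/LOQ from a linear calibration (illustrative numbers only).
import numpy as np

conc = np.array([0.05, 0.1, 0.5, 1.0, 2.0, 5.0])       # standards, umol/L
area = np.array([0.41, 0.83, 4.2, 8.3, 16.9, 41.7])    # GC/MS peak-area ratio

m, b = np.polyfit(conc, area, 1)
resid = area - (m * conc + b)
s = np.sqrt(np.sum(resid**2) / (len(conc) - 2))        # residual std dev

print(f"LOD = {3.3 * s / m:.3f} umol/L, LOQ = {10 * s / m:.3f} umol/L")
```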
Application of cellulose nanofibers to remove water-based flexographic inks from wastewaters.
Balea, Ana; Monte, M Concepción; de la Fuente, Elena; Negro, Carlos; Blanco, Ángeles
2017-02-01
Water-based or flexographic inks in the paper and plastic industries are more environmentally favourable than organic solvent-based inks. However, their use also creates new challenges because they remain dissolved in water and alter the recycling process. Conventional deinking technologies such as flotation processes do not effectively remove them. Adsorption, coagulation/flocculation, biological and membrane processes are either expensive or have negative health impacts, making the development of alternative methods necessary. Cellulose nanofibers (CNF) are biodegradable, and their structural and mechanical properties are useful for wastewater treatment. TEMPO-oxidised CNF have been evaluated for the decolourisation of wastewaters that contained copper phthalocyanine blue, carbon black and diarylide yellow pigments. CNF in combination with a cationic polyacrylamide (cPAM) has also been tested. Jar-test methodology was used to evaluate the efficiency of the different treatments, and cationic/anionic demand, turbidity and ink concentration in the waters were measured. Results show that the dual-component system for ink removal has high potential as an alternative bio-based adsorbent for the removal of water-based inks. In addition, experiments varying CNF and cPAM concentrations were performed to optimise the ink-removal process. Ink concentration reductions of 100%, 87.5% and 83.3% were achieved for copper phthalocyanine blue, carbon black and diarylide yellow pigments, respectively. Flocculation studies revealed the decolourisation mechanism during the dual-component treatment of wastewaters containing water-based inks.
Garner, Alan A; van den Berg, Pieter L
2017-10-16
New South Wales (NSW), Australia has a network of multirole retrieval physician-staffed helicopter emergency medical services (HEMS) with seven bases servicing a jurisdiction whose population is concentrated along the eastern seaboard. The aim of this study was to estimate optimal HEMS base locations within NSW using advanced mathematical modelling techniques. We used high resolution census population data for NSW from 2011, which divides the state into areas containing 200-800 people. Optimal HEMS base locations were estimated using the maximal covering location problem facility location optimisation model and the average response time model, exploring the number of bases needed to cover various fractions of the population for a 45 min response time threshold or minimising the overall average response time to all persons, both in green field scenarios and conditioning on the current base structure. We also developed a hybrid mathematical model in which average response time was optimised subject to minimum population coverage thresholds. Seven bases could cover 98% of the population within 45 min when optimised for coverage, or reach the entire population of the state within an average of 21 min if optimised for response time. Given the existing bases, adding two bases could either increase the 45 min coverage from 91% to 97% or decrease the average response time from 21 min to 19 min. Adding a single specialist prehospital rapid response HEMS to the area of greatest population concentration decreased the average state-wide response time by 4 min. The optimum seven-base hybrid model, which covered 97.75% of the population within 45 min and reached the whole population within an average response time of 18 min, included the rapid response HEMS. HEMS base locations can be optimised based on either the percentage of the population covered or the average response time to the entire population. We have also demonstrated a hybrid technique that optimises response time for a given number of bases and a minimum defined threshold of population coverage. Adding specialised rapid response HEMS services to a system of multirole retrieval HEMS may reduce overall average response times by improving access in large urban areas.
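The maximal covering location problem named here is a small integer program; a toy instance with the PuLP library (invented demand areas, coverage sets and base count, not the NSW data) looks like this:

```python
# Toy maximal-covering-location problem: choose p bases so the population
# covered within the response-time threshold is maximised.
import pulp

pop = {"a": 500, "b": 800, "c": 300, "d": 650, "e": 200}      # demand areas
covers = {                                   # base -> areas within 45 min
    1: {"a", "b"}, 2: {"b", "c", "d"}, 3: {"d", "e"}, 4: {"a", "e"},
}
p = 2                                        # number of bases to open

prob = pulp.LpProblem("mclp", pulp.LpMaximize)
x = pulp.LpVariable.dicts("open", covers, cat="Binary")       # base opened?
z = pulp.LpVariable.dicts("covered", pop, cat="Binary")       # area covered?

prob += pulp.lpSum(pop[i] * z[i] for i in pop)                # covered population
prob += pulp.lpSum(x[j] for j in covers) == p
for i in pop:                                # area i counts only if some
    prob += z[i] <= pulp.lpSum(x[j] for j in covers if i in covers[j])

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([j for j in covers if x[j].value() == 1],
      "covering", pulp.value(prob.objective), "people")
```

Swapping the objective for a population-weighted sum of travel times gives the average response time model, and adding the coverage inequality as a hard constraint reproduces the hybrid formulation described in the abstract.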
On an efficient multilevel inverter assembly: structural savings and design optimisations
NASA Astrophysics Data System (ADS)
Choupan, Reza; Nazarpour, Daryoush; Golshannavaz, Sajjad
2018-01-01
This study puts forward an efficient unit cell for use in multilevel inverter assemblies. The proposed structure reduces the number of direct current (dc) voltage sources, insulated-gate bipolar transistors (IGBTs), gate driver circuits, the installation area, and hence the implementation costs. Such structural savings do not sacrifice the technical performance of the proposed design, in which, interestingly, an increased number of output voltage levels is attained. Targeting a techno-economic characteristic, the contemplated structure is used as the key unit of cascaded multilevel inverters. Such extensions require the development of applicable design procedures. To this end, two efficient strategies are elaborated to determine the magnitudes of the input dc voltage sources. In addition, an optimisation process is developed to explore the optimal allocation of different parameters in the overall performance of the proposed inverter. These parameters are the number of IGBTs, dc sources and diodes, and the overall blocked voltage on the switches. In light of these characteristics, a comprehensive analysis is established to compare the proposed design with conventional and recently developed structures. Detailed simulation and experimental studies are conducted to assess the performance of the proposed design. The obtained results are discussed in depth.
Pender, Alexandra; Garcia-Murillas, Isaac; Rana, Sareena; Cutts, Rosalind J; Kelly, Gavin; Fenwick, Kerry; Kozarewa, Iwanka; Gonzalez de Castro, David; Bhosle, Jaishree; O'Brien, Mary; Turner, Nicholas C; Popat, Sanjay; Downward, Julian
2015-01-01
Droplet digital PCR (ddPCR) can be used to detect low frequency mutations in oncogene-driven lung cancer. The range of KRAS point mutations observed in NSCLC necessitates a multiplex approach to efficient mutation detection in circulating DNA. Here we report the design and optimisation of three discriminatory ddPCR multiplex assays investigating nine different KRAS mutations using PrimePCR™ ddPCR™ Mutation Assays and the Bio-Rad QX100 system. Together these mutations account for 95% of the nucleotide changes found in KRAS in human cancer. Multiplex reactions were optimised on genomic DNA extracted from KRAS mutant cell lines and tested on DNA extracted from fixed tumour tissue from a cohort of lung cancer patients without prior knowledge of the specific KRAS genotype. The multiplex ddPCR assays had a limit of detection of better than 1 mutant KRAS molecule in 2,000 wild-type KRAS molecules, which compared favourably with a limit of detection of 1 in 50 for next generation sequencing and 1 in 10 for Sanger sequencing. Multiplex ddPCR assays thus provide a highly efficient methodology to identify KRAS mutations in lung adenocarcinoma.
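Sensitivities such as 1 mutant in 2,000 wild-type molecules rest on the Poisson statistics of droplet partitioning; a minimal sketch of the standard correction, with illustrative droplet counts and the commonly quoted droplet volume as assumptions:

```python
# Poisson correction underlying ddPCR readouts: the per-droplet copy number
# is recovered from the fraction of droplets that stay negative.
import math

def ddpcr_copies(n_total, n_negative, droplet_volume_ul=0.00085):
    """Copies per microlitre from droplet counts (lambda = -ln(neg fraction))."""
    lam = -math.log(n_negative / n_total)      # mean copies per droplet
    return lam / droplet_volume_ul

# e.g. 20,000 droplets of which 19,940 stay negative for the mutant channel
print(f"{ddpcr_copies(20000, 19940):.1f} mutant copies/uL")
```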
Bellesi, Luca; Wyttenbach, Rolf; Gaudino, Diego; Colleoni, Paolo; Pupillo, Francesco; Carrara, Mauro; Braghetti, Antonio; Puligheddu, Carla; Presilla, Stefano
2017-01-01
The aim of this work was to evaluate the detection of low-contrast objects and image quality in computed tomography (CT) phantom images acquired at different tube loadings (i.e. mAs) and reconstructed with different algorithms, in order to find appropriate settings to reduce the dose to the patient without any image detriment. Images of supraslice low-contrast objects of a CT phantom were acquired using different mAs values. Images were reconstructed using filtered back projection (FBP), hybrid and iterative model-based methods. Image quality parameters were evaluated in terms of modulation transfer function, noise and uniformity using two software resources. For the definition of low-contrast detectability, studies based on both human (i.e. four-alternative forced-choice test) and model observers were performed across the various images. Compared with FBP, image quality parameters were improved by using iterative reconstruction (IR) algorithms. In particular, IR model-based methods provided a 60% noise reduction and a 70% dose reduction, preserving image quality and low-contrast detectability for human radiological evaluation. According to the model observer, the diameter of the minimum detectable detail was around 2 mm down to 100 mAs; below 100 mAs, the model observer was unable to provide a result. IR methods improve CT protocol quality, providing a potential dose reduction while maintaining good image detectability. The model observer can in principle be useful to assist human performance in CT low-contrast detection tasks and in dose optimisation.
A native IP satellite communications system
NASA Astrophysics Data System (ADS)
Koudelka, O.; Schmidt, M.; Ebert, J.; Schlemmer, H.; Kastner-Puschl, S.; Riedler, W.
2004-08-01
In the framework of ESA's ARTES-5 program the Institute of Applied Systems Technology (Joanneum Research) in cooperation with the Department of Communications and Wave Propagation has developed a novel meshed satellite communications system which is optimised for Internet traffic and applications (L*IP—Local Network Interconnection via Satellite Systems Using the IP Protocol Suite). Both symmetrical and asymmetrical connections are supported. Bandwidth on demand and guaranteed quality of service are key features of the system. A novel multi-frequency TDMA access scheme utilises efficient methods of IP encapsulation. In contrast to other solutions it avoids legacy transport network techniques. While the DVB-RCS standard is based on ATM or MPEG transport cells, the solution of the L*IP system uses variable-length cells which reduces the overhead significantly. A flexible and programmable platform based on Linux machines was chosen to allow the easy implementation and adaptation to different standards. This offers the possibility to apply the system not only to satellite communications, but provides seamless integration with terrestrial fixed broadcast wireless access systems. The platform is also an ideal test-bed for a variety of interactive broadband communications systems. The paper describes the system architecture and the key features of the system.
Haggag, Yusuf A; Osman, Mohamed A; El-Gizawy, Sanaa A; Goda, Ahmed E; Shamloula, Maha M; Faheem, Ahmed M; McCarron, Paul A
2018-05-29
Biodegradable PLGA nanoparticles, loaded with 5-fluorouracil (5FU), were prepared using a double emulsion method and characterised in terms of mean diameter, zeta potential, entrapment efficiency and in vitro release. Poly (vinyl alcohol) was used to modify both internal and external aqueous phases and was shown to have a significant effect on nanoparticulate size, encapsulation efficiency and the initial burst release. Addition of poly (ethylene glycol) to the particle matrix, as part of the polymeric backbone, significantly improved the encapsulation efficiency. 5FU-loaded NPs were spherical in shape and negatively charged, with a size range of 185-350 nm. Biological evaluation was performed in vivo using a solid Ehrlich carcinoma (SEC) murine model. An optimised 5FU-loaded formulation containing PEG as part of a block copolymer induced a pronounced reduction in tumour volume and tumour weight, together with an improved percentage tumour growth inhibition. Drug-loaded nanoparticles showed no significant toxicity or associated changes in liver and kidney function in tested animals, whereas increased alanine aminotransferase, aspartate aminotransferase and serum creatinine were observed in animals treated with free 5FU. Histopathological examination demonstrated enhanced cytotoxic action of 5FU-loaded nanoparticles when compared with the free drug. Based on these findings, it was concluded that nano-encapsulation of 5FU using PEGylated PLGA improved encapsulation and sustained in vitro release. This leads to increased anti-tumour efficacy against SEC, with a reduction in adverse effects.
Design of a compact antenna with flared groundplane for a wearable breast hyperthermia system.
Curto, Sergio; Prakash, Punit
2015-01-01
Currently available microwave hyperthermia systems for breast cancer treatment do not conform to the intact breast and provide limited control of heating patterns, thereby hindering effective treatment. A compact patch antenna with a flared groundplane that may be integrated within a wearable hyperthermia system for the treatment of intact breast disease is proposed. A 3D simulation-based approach was employed to optimise the antenna design with the objective of maximising the hyperthermia treatment volume (41 °C isotherm) while maintaining good impedance matching. The optimised antenna design was fabricated and experimentally evaluated with ex vivo tissue measurements. The optimised compact antenna yielded a -10 dB bandwidth of 90 MHz centred at 915 MHz, and was capable of creating hyperthermia treatment volumes up to 14.4 cm³ (31 mm × 28 mm × 32 mm) with an input power of 15 W. Experimentally measured reflection coefficient and transient temperature profiles were in good agreement with simulated profiles. Variations of ±50% in blood perfusion yielded variations in the treatment volume of up to 11.5%. When compared with an antenna with a similar patch element employing a conventional rectangular groundplane, the antenna with the flared groundplane afforded a 22.3% reduction in the power required to reach the same temperature, and yielded 2.4 times larger treatment volumes. The proposed patch antenna with a flared groundplane may be integrated within a wearable applicator for hyperthermia treatment of intact breast targets and has the potential to improve efficiency, increase patient comfort and, ultimately, improve clinical outcomes.
Evaluation of extraction methods for ochratoxin A detection in cocoa beans employing HPLC.
Mishra, Rupesh K; Catanante, Gaëlle; Hayat, Akhtar; Marty, Jean-Louis
2016-01-01
Cocoa is an important ingredient for the chocolate industry and for many food products. However, it is prone to contamination by ochratoxin A (OTA), which is highly toxic and potentially carcinogenic to humans. In this work, four different extraction methods were tested and compared based on their recoveries. The best protocol involves an organic solvent-free extraction for the detection of OTA in cocoa beans using 1% sodium hydrogen carbonate (NaHCO3) in water within 30 min. The extraction method is rapid (compared with existing methods), simple, reliable and practical to perform without complex experimental set-ups. The cocoa samples were freshly extracted and cleaned up using an immunoaffinity column (IAC) for HPLC analysis with a fluorescence detector. Under the optimised conditions, the limit of detection (LOD) and limit of quantification (LOQ) for OTA were 0.62 and 1.25 ng mL-1, respectively, in standard solutions. The method could successfully quantify OTA in naturally contaminated samples. Moreover, good recoveries of OTA of up to 86.5% were obtained in artificially spiked cocoa samples, with a maximum relative standard deviation (RSD) of 2.7%. The proposed extraction method could determine OTA at the level of 1.5 µg kg-1, below the maximum limit set by the European Union for cocoa (2 µg kg-1). In addition, a comparison of the efficiency of IAC and molecularly imprinted polymer (MIP) columns was also performed and evaluated.
Martino, Emanuela; Collina, Simona; Rossi, Daniela; Bazzoni, Deborah; Gaggeri, Raffaella; Bracco, Francesco; Azzolina, Ornella
2008-01-01
The extract of Crataegus monogyna shows sedative, hypotensive, vasodilator and cardio-tonic actions. Although several papers dealing with the extraction of metabolites from Crataegus have been published, the productivity of the plant in terms of bioactive compounds is not yet well understood. The aim was to investigate the influence of the extraction mode on the yield of bioactive compounds from Crataegus monogyna Jacq. in order to evaluate plant productivity. Samples were prepared by extraction of powdered material obtained from top branches, flowers and leaves. Soxhlet extraction, maceration and ultrasound- and microwave-assisted extraction under different experimental conditions were investigated for the exhaustive extraction of hyperoside, vitexin and vitexin-2''-O-rhamnoside. The phytocomponents were identified and quantified by HPLC-UV/PAD, comparing HPLC retention times and UV spectra of individual peaks with those of standards analysed under the same conditions. An easy-to-use isocratic HPLC method suitable for the quantification of hyperoside, vitexin and vitexin-2''-O-rhamnoside in raw plant extracts was developed. The optimised HPLC methodology was applied to evaluate the different extraction procedures. The ultrasound- and microwave-assisted extraction protocols showed higher extraction efficiency than the others. In particular, the optimised microwave protocol gave the highest extraction efficiency with high reproducibility. A microwave protocol combined with isocratic HPLC analysis is proposed for the rapid screening of plant materials collected under different environmental conditions, in order to evaluate the productivity of Crataegus monogyna Jacq. and to find the best ecological conditions for cultivating hawthorn in Northern Italy.
NASA Astrophysics Data System (ADS)
Haworth, Annette; Mears, Christopher; Betts, John M.; Reynolds, Hayley M.; Tack, Guido; Leo, Kevin; Williams, Scott; Ebert, Martin A.
2016-01-01
Treatment plans for ten patients, initially treated with a conventional approach to low dose-rate brachytherapy (LDR, 145 Gy to the entire prostate), were compared with plans for the same patients created with an inverse-optimisation planning process utilising a biologically-based objective. The ‘biological optimisation’ considered a non-uniform distribution of tumour cell density through the prostate based on known and expected locations of the tumour. Using dose-planning objectives derived from our previous biological-model validation study, the volume of the urethra receiving 125% of the conventional prescription (145 Gy) was reduced from a median value of 64% to less than 8% whilst maintaining high values of tumour control probability (TCP). On average, the number of planned seeds was reduced from 85 to less than 75. The robustness of plans to random seed displacements needs to be carefully considered when using contemporary seed placement techniques. We conclude that an inverse planning approach to LDR treatments, based on a biological objective, has the potential to maintain high rates of tumour control whilst minimising dose to healthy tissue. In future, the radiobiological model will be informed using multi-parametric MRI to provide a personalised medicine approach.
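The abstract does not state the form of the radiobiological objective; a common choice in the literature, shown here purely as an illustrative sketch under assumed parameter values, is a Poisson TCP with a voxel-wise (non-uniform) clonogen density.

```python
import numpy as np

# Illustrative-only parameters; the study's actual model and values differ.
alpha = 0.15            # Gy^-1, assumed radiosensitivity
voxel_volume = 0.1      # cm^3

# Non-uniform clonogen density (cells/cm^3) and planned dose (Gy) per voxel.
rng = np.random.default_rng(0)
density = rng.uniform(1e5, 1e7, size=1000)   # higher where tumour is expected
dose = rng.uniform(140.0, 180.0, size=1000)

# Poisson TCP: probability that no clonogen in any voxel survives.
surviving = density * voxel_volume * np.exp(-alpha * dose)
tcp = np.exp(-surviving.sum())
print(f"TCP = {tcp:.3f}")
```

An inverse optimiser of the kind described would adjust seed positions so that the dose vector maximises such a TCP term while penalising urethral dose.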
NASA Astrophysics Data System (ADS)
Rodigast, M.; Mutzel, A.; Iinuma, Y.; Haferkorn, S.; Herrmann, H.
2015-06-01
Carbonyl compounds are ubiquitous in the atmosphere; they are either emitted directly from anthropogenic and biogenic sources or produced secondarily from the oxidation of volatile organic compounds. Despite a number of studies on the quantification of carbonyl compounds, comprehensive descriptions of optimised methods for atmospherically relevant carbonyl compounds are scarce. The method optimisation was conducted for seven atmospherically relevant carbonyl compounds: acrolein, benzaldehyde, glyoxal, methyl glyoxal, methacrolein, methyl vinyl ketone and 2,3-butanedione. O-(2,3,4,5,6-pentafluorobenzyl)hydroxylamine hydrochloride (PFBHA) was used as the derivatisation reagent, and the oximes formed were detected by gas chromatography/mass spectrometry (GC/MS). With the present method, each carbonyl compound can be quantified in water, whether originating from fog, cloud and rain samples or sampled from the gas and particle phases into water. Detection limits between 0.01 and 0.17 μmol L-1 were found, depending on the carbonyl compound. The best results were obtained for derivatisation with a PFBHA concentration of 0.43 mg mL-1 for 24 h, followed by extraction with dichloromethane for 30 min at pH = 1. The optimised method was evaluated in the present study by the OH-radical-initiated oxidation of 3-methylbutanone in the aqueous phase. Methyl glyoxal and 2,3-butanedione were found to be oxidation products in the samples, with yields of 2% for methyl glyoxal and 14% for 2,3-butanedione after a reaction time of 5 h.
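Detection limits like those quoted above can be derived in several ways; one common approach (not necessarily the criterion used in this study) is the calibration-based ICH formula LOD = 3.3σ/S. A minimal sketch with hypothetical calibration data:

```python
import numpy as np

# Hypothetical calibration data for one carbonyl compound:
# concentration (umol/L) vs. GC/MS peak area -- illustrative values only.
conc = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0])
area = np.array([410.0, 820.0, 1650.0, 4070.0, 8150.0, 16300.0])

slope, intercept = np.polyfit(conc, area, 1)
residuals = area - (slope * conc + intercept)
sigma = residuals.std(ddof=2)          # residual standard deviation

lod = 3.3 * sigma / slope              # ICH-style detection limit
loq = 10.0 * sigma / slope             # quantification limit
print(f"LOD = {lod:.3f} umol/L, LOQ = {loq:.3f} umol/L")
```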
Molecular materials for high performance OPV devices (Conference Presentation)
NASA Astrophysics Data System (ADS)
Jones, David J.
2016-09-01
We recently reported a high-performing molecular donor for OPV devices based on a benzodithiophene core, a terthiophene bridge and a rhodanine acceptor (BTR) [1]. In this work we optimised side-chain placement of a known chromophore by ensuring the thiophene hexyl side-chains are regioregular, which should allow the chromophore to lie flat. The unexpected outcome was a nematic liquid crystalline material with significantly improved performance (now 9.6% PCE), excellent charge transport properties, reduced geminate recombination rates and excellent performance with active layers up to 400 nm. Three phase changes were indicated by DSC analysis, with a melt to a crystalline domain at 175 °C, a transition to a nematic liquid crystalline domain at 186 °C and an isotropic melt at 196 °C. In our desire to better understand the structure-property relationships of this class of p-type organic semiconductor, we have synthesised a series of analogues in which the length of the chromophore has been altered through modification of the oligothiophene bridge to generate the monothiophene (BMR), the bisthiophene (BBR), the known terthiophene (BTR), the quaterthiophene (BQR) and the pentathiophene (BPR). BMR, BBR and BPR have clean melting points, while BQR, like BTR, shows a complicated series of phase transitions. Device efficiencies after solvent vapour annealing are BMR (3.5%), BBR (6.0%), BTR (9.3%), BQR (9.4%) and BPR (8.7%), unoptimised. OPV devices with BTR in the active layer are not stable under thermal annealing; however, the bridge-extended BQR and BPR form thermally stable devices. We are currently optimising these devices, but initial results indicate PCEs >9% for thermally annealed devices containing BQR, while BPR devices have not yet been optimised and have PCEs >8%. In order to develop the device performance we have included BQR in ternary devices with the commercially available PTB7-Th, and we report device efficiencies of over 10.5%. We are currently optimising device assembly and annealing conditions and relating these back to key materials properties. I will discuss the development of these new materials, their materials properties, structural data, and optimised device performance, and will examine the effect of chromophore length on the nematic liquid crystalline properties and on materials development and performance, resulting in materials with >9% PCE in OPV. [1] Sun, K.; Xiao, Z.; Lu, S.; Zajaczkowski, W.; Pisula, W.; Hanssen, E.; White, J. M.; Williamson, R. M.; Subbiah, J.; Ouyang, J.; Holmes, A. B.; Wong, W. W.; Jones, D. J., Nat. Commun. 2015, 6, 6013. DOI: 10.1038/ncomms7013
Maximov, Ivan I; Vinding, Mads S; Tse, Desmond H Y; Nielsen, Niels Chr; Shah, N Jon
2015-05-01
There is an increasing need for the development of advanced radio-frequency (RF) pulse techniques in modern magnetic resonance imaging (MRI) systems, driven by recent advancements in ultra-high magnetic field systems, new parallel transmit/receive coil designs, and accessible powerful computational facilities. 2D spatially selective RF pulses are an example of advanced pulses that have many applications of clinical relevance, e.g., reduced field-of-view imaging and MR spectroscopy. 2D spatially selective RF pulses are mostly generated and optimised with numerical methods that can handle vast controls and multiple constraints. With this study we aim to demonstrate that numerical, optimal control (OC) algorithms are efficient for the design of 2D spatially selective MRI experiments when robustness towards, e.g., field inhomogeneity is in focus. We chose three popular OC algorithms: two gradient-based, concurrent methods using first- and second-order derivatives, respectively, and a third belonging to the sequential, monotonically convergent family. We used two experimental models: a water phantom and an in vivo human head. Taking the challenging experimental setup into consideration, our analysis suggests the use of the sequential, monotonic approach and the second-order gradient-based approach, as computational speed, experimental robustness and image quality are key. All algorithms used in this work were implemented in the MATLAB environment and are freely available to the MRI community. Copyright © 2015 Elsevier Inc. All rights reserved.
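The OC algorithms themselves are beyond the scope of a short example, but the small-tip-angle approximation they typically improve upon reduces 1D selective pulse design to a Fourier transform. A minimal numpy sketch, with all parameters illustrative:

```python
import numpy as np

# Small-tip approximation: the excitation profile is (approximately) the
# Fourier transform of the RF waveform, so designing a 1D selective pulse
# amounts to inverse-transforming the desired profile.
n = 256
positions = np.linspace(-1, 1, n)                    # normalised spatial axis
target = (np.abs(positions) < 0.25).astype(float)    # ideal slab profile

# Inverse FFT of the target profile gives the (unwindowed) pulse shape.
pulse = np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(target)))

# Apodise to suppress truncation ripple (Hamming window).
pulse *= np.hamming(n)

# Check: the forward transform approximates the target slab.
achieved = np.abs(np.fft.fftshift(np.fft.fft(np.fft.ifftshift(pulse))))
print("peak passband error:",
      np.max(np.abs(achieved - target)[np.abs(positions) < 0.2]))
```

The numerical OC methods compared in the study go beyond this linearisation, handling large tip angles, B0/B1 inhomogeneity and hardware constraints.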
Morado Piñeiro, Andrés; Moreda-Piñeiro, Jorge; Alonso-Rodríguez, Elia; López-Mahía, Purificación; Muniategui-Lorenzo, Soledad; Prada-Rodríguez, Darío
2013-02-15
Analytical methods for the determination of total arsenic and arsenic species (mainly As(III) and As(V)) in human scalp hair have been developed. Inductively coupled plasma-mass spectrometry (ICP-MS) and high performance liquid chromatography (HPLC) coupled to ICP-MS have been used for total arsenic and arsenic species determination, respectively. The proposed methods include a "green", fast, highly efficient and automated species leaching procedure by pressurised hot water extraction (PHWE). The operating parameters for PHWE, including modifier concentration, extraction temperature, static time, extraction steps, pressure, mean particle size, diatomaceous earth (DE) mass/sample mass ratio and flush volume, were studied using design of experiments (Plackett-Burman design, PBD). The optimum conditions comprise a modifier (acetic acid) concentration of 150 mM, with powdered hair samples fully mixed with DE as a dispersing agent at a DE mass/sample mass ratio of 5. The extraction was carried out at 100°C and an extraction pressure of 1500 psi for 5 min in four extraction steps. Under optimised conditions, limits of quantification of 7.0, 6.3 and 50.3 ng g(-1) for total As, As(III) and As(V), respectively, were achieved. The repeatability of the overall procedure was 4.4, 7.2 and 2.1% for total As, As(III) and As(V), respectively. The analysis of the GBW-07601 (human hair) certified reference material was used for validation. The optimised method was finally applied to several human scalp hair samples. Copyright © 2012 Elsevier B.V. All rights reserved.
Price, Brandee A; Bednarski, Brian K; You, Y Nancy; Manandhar, Meryna; Dean, E Michelle; Alawadi, Zeinab M; Bryce Speer, B; Gottumukkala, Vijaya; Weldon, Marla; Massey, Robert L; Wang, Xuemei; Qiao, Wei; Chang, George J
2017-01-01
Introduction Definitive treatment of localised colorectal cancer involves surgical resection of the primary tumour. Short-stay colectomies (eg, 23 hours) would have important implications for optimising the efficiency of inpatient care, with reduced resource utilisation while improving the overall recovery experience with earlier return to normalcy. It could permit surgical treatment of colorectal cancer in a wider variety of settings, including hospital-based ambulatory surgery environments. While a few studies have shown that discharge within the first 24 hours after minimally invasive colectomy is possible, the safety, feasibility and patient acceptability of a protocol for short-stay colectomy for colorectal cancer have not previously been evaluated in a prospective randomised study. Moreover, given the potential for some patients to experience a delay in recovery of bowel function after colectomy, close outpatient monitoring may be necessary to ensure safe implementation. Methods and analysis In order to address this gap, we propose a prospective randomised trial of accelerated enhanced Recovery following Minimally Invasive colorectal cancer surgery (RecoverMI) that leverages the combination of minimally invasive surgery with enhanced recovery protocols and early coordinated outpatient remote televideo conferencing technology (TeleRecovery) to improve postoperative patient-provider communication, enhance postoperative treatment navigation and optimise postdischarge care. We hypothesise that RecoverMI can be safely incorporated into multidisciplinary practice to improve patient outcomes and reduce the overall 30-day duration of hospitalisation while preserving the quality of the patient experience. Ethics and dissemination RecoverMI has received institutional review board approval and funding from the American Society of Colorectal Surgeons (ASCRS; LPG103). Results from RecoverMI will be published in a peer-reviewed publication and be used to inform a multisite trial. Trial registration number NCT02613728; Pre-results. PMID:28729319
Fox-7 for Insensitive Boosters
2010-08-01
Describes particle processing techniques using ultrasound, designed to optimise FOX-7 crystal size and morphology to improve FOX-7 booster formulations. The techniques control cavitation, and therefore nucleation, at each frequency, both by producing ultrasound at different frequencies and by varying the method of ultrasound delivery.
Ashrafi, Parivash; Sun, Yi; Davey, Neil; Adams, Roderick G; Wilkinson, Simon C; Moss, Gary Patrick
2018-03-01
The aim of this study was to investigate how to improve predictions from Gaussian Process models by optimising the model hyperparameters. Optimisation methods, including Grid Search, Conjugate Gradient, Random Search, Evolutionary Algorithm and Hyper-prior, were evaluated and applied to previously published data. Data sets were also altered in a structured manner to reduce their size while retaining the range, or 'chemical space', of the key descriptors, to assess the effect of the data range on model quality. The Hyper-prior Smoothbox kernel resulted in the best models for the majority of data sets, and these exhibited significantly better performance than benchmark quantitative structure-permeability relationship (QSPR) models. When the data sets were systematically reduced in size, the different optimisation methods generally retained their statistical quality, whereas the benchmark QSPR models performed poorly. The design of the data set, and possibly also the approach to validation of the model, is critical in the development of improved models. The size of the data set, if carefully controlled, was not generally a significant factor for these models; models of excellent statistical quality could be produced from substantially smaller data sets. © 2018 Royal Pharmaceutical Society.
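To make the hyperparameter-search idea concrete, here is a minimal sketch of Random Search over Gaussian Process kernel hyperparameters, scored by the log marginal likelihood. The data, search ranges and kernel are illustrative assumptions, not the study's setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy data standing in for a permeability QSPR set (descriptors -> response).
rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(60, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(60)

# Random Search over kernel hyperparameters; sklearn's internal optimiser
# is disabled (optimizer=None) so that the search is entirely ours.
best = (-np.inf, None)
for _ in range(200):
    length_scale = 10 ** rng.uniform(-1, 1)
    noise = 10 ** rng.uniform(-3, 0)
    kernel = RBF(length_scale=length_scale) + WhiteKernel(noise_level=noise)
    gp = GaussianProcessRegressor(kernel=kernel, optimizer=None).fit(X, y)
    if gp.log_marginal_likelihood_value_ > best[0]:
        best = (gp.log_marginal_likelihood_value_, (length_scale, noise))

print("best log-marginal-likelihood %.2f at %s" % best)
```

Grid Search would replace the random draws with a fixed lattice; the hyper-prior approach described in the paper instead biases the search with prior distributions over plausible hyperparameter values.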
Moret, Sabrina; Scolaro, Marianna; Barp, Laura; Purcaro, Giorgia; Conte, Lanfranco S
2016-04-01
A high-throughput, high-sensitivity procedure, involving simultaneous microwave-assisted extraction (MAS) and extraction of the unsaponifiable fraction, followed by on-line liquid chromatography (LC)-gas chromatography (GC), has been optimised for rapid and efficient extraction and analytical determination of mineral oil saturated hydrocarbons (MOSH) and mineral oil aromatic hydrocarbons (MOAH) in cereal-based products of different composition. MAS has the advantage of eliminating fat before LC-GC analysis, allowing an increase in the amount of sample extract injected, and hence in sensitivity. The proposed method gave practically quantitative recoveries and good repeatability. Among the different cereal-based products analysed (dry semolina and egg pasta, bread, biscuits, and cakes), egg pasta packed in direct contact with recycled paperboard had on average the highest total MOSH level (15.9 mg kg(-1)), followed by cakes (10.4 mg kg(-1)) and bread (7.5 mg kg(-1)). About 50% of the pasta and bread samples and 20% of the biscuit and cake samples had detectable MOAH amounts. The highest concentrations were found in an egg pasta in direct contact with recycled paperboard (3.6 mg kg(-1)) and in a milk bread (3.6 mg kg(-1)). Copyright © 2015 Elsevier Ltd. All rights reserved.
Analysis of isothiazolinones in environmental waters by gas chromatography-mass spectrometry.
Rafoth, Astrid; Gabriel, Sabine; Sacher, Frank; Brauch, Heinz-Jürgen
2007-09-14
This paper describes an analytical method for the determination of five biocides of isothiazolinone type (2-methyl-3-isothiazolinone (MI), 5-chloro-2-methyl-3-isothiazolinone (CMI), 1,2-benzisothiazolinone (BIT), 2-octyl-3-isothiazolinone (OI), 4,5-dichloro-2-octyl-3-isothiazolinone (DCOI)) in environmental waters. The method is based on pre-concentration of the analytes by solid-phase extraction onto a mixture of a polymeric material and RP-C18 material and subsequent determination by gas chromatography-mass spectrometry (GC-MS). One of the target compounds (BIT) is derivatised with diazomethane after pre-concentration to improve its chromatographic performance. The method was optimised with respect to pre-concentration conditions (liquid-liquid extraction versus solid-phase extraction, solid-phase material, elution solvent and volume) and extensively validated. Applying the method to surface waters, groundwaters, and drinking waters, limits of detection between 0.01 and 0.1 microg/l could be achieved and the repeatability was below 10% for all compounds except for MI. Additional investigations showed that the stability of the isothiazolinones in environmental waters is limited and sample storage at 4 degrees C is mandatory to preserve the target biocides. First investigations of influents and effluents of a wastewater treatment plant showed that conventional wastewater treatment exhibits a high efficiency for removal of the isothiazolinones. In river waters, the target isothiazolinones could not be detected.
Statistical alignment: computational properties, homology testing and goodness-of-fit.
Hein, J; Wiuf, C; Knudsen, B; Møller, M B; Wibling, G
2000-09-08
The model of insertions and deletions in biological sequences, first formulated by Thorne, Kishino, and Felsenstein in 1991 (the TKF91 model), provides a basis for performing alignment within a statistical framework. Here we investigate this model. Firstly, we show how to accelerate the statistical alignment algorithms by several orders of magnitude. The main innovations are to confine likelihood calculations to a band close to the similarity-based alignment, to get good initial guesses of the evolutionary parameters, and to apply an efficient numerical optimisation algorithm for finding the maximum likelihood estimate. In addition, the recursions originally presented by Thorne, Kishino and Felsenstein can be simplified. Two proteins, about 1500 amino acids long, can be analysed with this method in less than five seconds on a fast desktop computer, which makes this method practical for actual data analysis. Secondly, we propose a new homology test based on this model, where homology means that an ancestor to a sequence pair can be found finitely far back in time. This test has statistical advantages relative to the traditional shuffle test for proteins. Finally, we describe a goodness-of-fit test that allows testing of the proposed insertion-deletion (indel) process inherent to this model, and find that real sequences (here globins) probably experience indels longer than one, contrary to what is assumed by the model. Copyright 2000 Academic Press.
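The banding idea is what delivers most of the speed-up: dynamic-programming cells far from a guide alignment are simply never computed. The sketch below applies the same confinement to ordinary Needleman-Wunsch scoring rather than the TKF91 likelihood recursion, purely to illustrate the mechanics; the scores and band width are arbitrary.

```python
import numpy as np

def banded_score(a, b, band=10, match=1, mismatch=-1, gap=-2):
    """Needleman-Wunsch restricted to |i - j| <= band.

    Cells far from the diagonal (i.e. far from the guide alignment) are
    never computed, reducing O(n*m) work to O(n*band). TKF91 alignment
    applies the same confinement to its likelihood recursions.
    """
    n, m = len(a), len(b)
    neg = -1e9
    S = np.full((n + 1, m + 1), neg)
    S[0, 0] = 0.0
    for i in range(n + 1):
        for j in range(max(0, i - band), min(m, i + band) + 1):
            if i == 0 and j == 0:
                continue
            best = neg
            if i > 0 and j > 0:
                s = match if a[i - 1] == b[j - 1] else mismatch
                best = max(best, S[i - 1, j - 1] + s)
            if i > 0:
                best = max(best, S[i - 1, j] + gap)   # deletion
            if j > 0:
                best = max(best, S[i, j - 1] + gap)   # insertion
            S[i, j] = best
    return S[n, m]

print(banded_score("HEAGAWGHEE", "PAWHEAE", band=5))
```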
Vincent, Delphine; Elkins, Aaron; Condina, Mark R; Ezernieks, Vilnis; Rochfort, Simone
2016-01-01
Cow's milk is an important source of proteins in human nutrition. On average, cow's milk contains 3.5% protein. The most abundant proteins in bovine milk are the caseins and some of the whey proteins, namely beta-lactoglobulin, alpha-lactalbumin and serum albumin. A number of allelic variants and post-translationally modified forms of these proteins have been identified. Their occurrence varies with breed, individuality, stage of lactation, and the health and nutritional status of the animal. It is therefore essential to have reliable methods for the detection and quantitation of these proteins. Traditionally, the major milk proteins are quantified using liquid chromatography (LC) with ultraviolet detection. However, as these protein variants co-elute to some degree, another dimension of separation is beneficial for accurately measuring their amounts. Mass spectrometry (MS) offers such a tool. In this study, we tested several RP-HPLC and MS parameters to optimise the analysis of intact bovine proteins from milk. From our tests, we developed an optimum method that includes a 20-28-40% phase B gradient with 0.02% TFA in both mobile phases, at a 0.2 mL/min flow rate, using 75°C for the C8 column temperature and scanning every 3 sec over a 600-3000 m/z window. The optimisations were performed using commercially purchased external standards, for which ionisation efficiency, linearity of calibration, LOD, LOQ, sensitivity, selectivity, precision, reproducibility and mass accuracy were demonstrated. From the MS analysis, extracted ion chromatograms (EICs) of specific ion series of known proteins can be used, integrating peaks within a defined retention time (RT) window for quantitation purposes. This optimised quantitative method was successfully applied to two bulk milk samples from different breeds, Holstein-Friesian and Jersey, to assess differences in protein variant levels.
Aberl, A; Coelhan, M
2013-01-01
Sulfites are routinely added as preservatives and antioxidants in wine production. By law, the total sulfur dioxide content in wine is restricted and must therefore be monitored. Currently, the method of choice for determining the total sulfur dioxide content in wine is the optimised Monier-Williams method, which is time-consuming and laborious. The headspace gas chromatographic method described in this study offers a fast and reliable alternative for the detection and quantification of the sulfur dioxide content in wine. The analysis was performed using an automatic headspace injection sampler coupled with a gas chromatograph and an electron capture detector. The method is based on the formation of gaseous sulfur dioxide upon acidification and heating of the sample. In addition to free sulfur dioxide, sulfur dioxide reversibly bound to carbonyl compounds, such as acetaldehyde, was also measured with this method. A total of 20 wine samples produced from diverse grape varieties and vintages of varied provenance were analysed using the new method. For reference and comparison purposes, 10 of the results obtained by the proposed method were compared with those acquired by the optimised Monier-Williams method. Overall, the results from the headspace analysis showed good correlation (R = 0.9985) with the conventional method. This new method requires minimal sample preparation, is simple to perform, and can be completed within a short period of time.
Convex relaxations for gas expansion planning
Borraz-Sanchez, Conrado; Bent, Russell Whitford; Backhaus, Scott N.; ...
2016-01-01
Expansion of natural gas networks is a critical process involving substantial capital expenditures with complex decision-support requirements. Here, given the non-convex nature of gas transmission constraints, global optimality and infeasibility guarantees can only be offered by global optimisation approaches. Unfortunately, state-of-the-art global optimisation solvers are unable to scale up to real-world size instances. In this study, we present a convex mixed-integer second-order cone relaxation for the gas expansion planning problem under steady-state conditions. The underlying model offers tight lower bounds with high computational efficiency. In addition, the optimal solution of the relaxation can often be used to derive high-quality solutions to the original problem, leading to provably tight optimality gaps and, in some cases, global optimal solutions. The convex relaxation is based on a few key ideas, including the introduction of flux direction variables, exact McCormick relaxations, on/off constraints, and integer cuts. Numerical experiments are conducted on the traditional Belgian gas network, as well as other real larger networks. The results demonstrate both the accuracy and computational speed of the relaxation and its ability to produce high-quality solutions.
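McCormick relaxations replace each bilinear term w = x·y with four linear inequalities that are exact at the variable bounds. As a minimal numerical sketch (box bounds chosen arbitrarily), the implied envelopes always bracket the true product:

```python
import numpy as np

def mccormick_bounds(x, y, xl, xu, yl, yu):
    """Lower/upper McCormick envelopes for the bilinear term w = x*y on a box.

    The four inequalities are linear in (x, y, w); here we evaluate the
    tightest lower and upper bounds on w that they imply at a given (x, y).
    """
    lower = max(xl * y + x * yl - xl * yl,
                xu * y + x * yu - xu * yu)
    upper = min(xu * y + x * yl - xu * yl,
                xl * y + x * yu - xl * yu)
    return lower, upper

# Check on random points in the box [-1, 2] x [0, 3]:
rng = np.random.default_rng(2)
for _ in range(5):
    x = rng.uniform(-1, 2)
    y = rng.uniform(0, 3)
    lo, hi = mccormick_bounds(x, y, -1, 2, 0, 3)
    assert lo <= x * y <= hi + 1e-12
    print(f"x*y = {x*y:6.3f} in [{lo:6.3f}, {hi:6.3f}]")
```

In the gas-network setting, the bilinear pressure-flow terms are relaxed this way so the whole problem becomes a convex mixed-integer program that off-the-shelf solvers can bound tightly.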
An evolution of image source camera attribution approaches.
Jahanirad, Mehdi; Wahab, Ainuddin Wahid Abdul; Anuar, Nor Badrul
2016-05-01
Camera attribution plays an important role in digital image forensics by providing evidence of, and distinguishing characteristics for, the origin of a digital image. It allows the forensic analyser to find the possible source camera which captured the image under investigation. However, in real-world applications, these approaches face many challenges due to the large set of multimedia data publicly available through photo sharing and social network sites, captured under uncontrolled conditions and subjected to a variety of hardware and software post-processing operations. Moreover, the legal system only accepts the forensic analysis of digital image evidence if the applied camera attribution techniques are unbiased, reliable, nondestructive and widely accepted by experts in the field. The aim of this paper is to investigate the evolutionary trend of image source camera attribution approaches from fundamentals to practice, in particular with the application of image processing and data mining techniques. Extracting implicit knowledge from images using intrinsic image artifacts for source camera attribution requires a structured image mining process. In this paper, we attempt to provide an introductory tutorial on the image processing pipeline, to determine the general classification of the features corresponding to different components for source camera attribution. The article also reviews source camera attribution techniques more comprehensively within the domain of image forensics, and classifies ongoing developments within the specified area. The classification of existing source camera attribution approaches is presented based on specific parameters, such as the colour image processing pipeline, hardware- and software-related artifacts, and the methods used to extract such artifacts. The more recent source camera attribution approaches, which have not yet gained sufficient attention among image forensics researchers, are also critically analysed and categorised into four classes, namely optical aberration based, sensor camera fingerprint based, processing statistics based and processing regularities based. Furthermore, this paper investigates the challenging problems and proposed strategies of such schemes, based on the suggested taxonomy, to plot an evolution of source camera attribution approaches with respect to the subjective optimisation criteria over the last decade. The optimisation criteria were determined based on the strategies proposed to increase the detection accuracy, robustness and computational efficiency of source camera brand, model or device attribution. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Treatment planning optimisation in proton therapy
McGowan, S E; Burnet, N G; Lomax, A J
2013-01-01
The goal of radiotherapy is to achieve uniform target coverage while sparing normal tissue. In proton therapy, the same sources of geometric uncertainty are present as in conventional radiotherapy. However, an important and fundamental difference in proton therapy is that protons have a finite range, highly dependent on the electron density of the material they are traversing, resulting in a steep dose gradient at the distal edge of the Bragg peak. Therefore, an accurate knowledge of the sources and magnitudes of the uncertainties affecting the proton range is essential for producing plans which are robust to these uncertainties. This review describes the current knowledge of the geometric uncertainties and discusses their impact on proton dose plans. The need for patient-specific validation is essential, and in cases of complex intensity-modulated proton therapy plans the use of a planning target volume (PTV) may fail to ensure coverage of the target. In cases where a PTV cannot be used, other methods of quantifying plan quality have been investigated. A promising option is to incorporate uncertainties directly into the optimisation algorithm. A further development is the inclusion of robustness into a multicriteria optimisation framework, allowing a multi-objective Pareto optimisation function to balance robustness and conformity. The question remains as to whether adaptive therapy can become an integral part of proton therapy, to allow re-optimisation during the course of a patient's treatment. The challenge of ensuring that plans are robust to range uncertainties in proton therapy remains, although these methods can provide practical solutions. PMID:23255545
NASA Astrophysics Data System (ADS)
Ars, Sébastien; Broquet, Grégoire; Yver Kwok, Camille; Roustan, Yelva; Wu, Lin; Arzoumanian, Emmanuel; Bousquet, Philippe
2017-12-01
This study presents a new concept for estimating the pollutant emission rates of a site and its main facilities using a series of atmospheric measurements across the pollutant plumes. This concept combines the tracer release method, local-scale atmospheric transport modelling and a statistical atmospheric inversion approach. The conversion between the controlled emission and the measured atmospheric concentrations of the released tracer across the plume places valuable constraints on the atmospheric transport. This is used to optimise the configuration of the transport model parameters and the model uncertainty statistics in the inversion system. The emission rates of all sources are then inverted to optimise the match between the concentrations simulated with the transport model and the pollutants' measured atmospheric concentrations, accounting for the transport model uncertainty. In principle, by using atmospheric transport modelling, this concept does not strongly rely on good colocation of the tracer and pollutant sources and can be used to monitor multiple sources within a single site, unlike the classical tracer release technique. The statistical inversion framework and the use of the tracer data for the configuration of the transport and inversion modelling systems should ensure that the transport modelling errors are correctly handled in the source estimation. The potential of this new concept is evaluated with a relatively simple practical implementation based on a Gaussian plume model and a series of inversions of controlled methane point sources using acetylene as a tracer gas. The experimental conditions are chosen so that they are suitable for the use of a Gaussian plume model to simulate the atmospheric transport. In these experiments, different configurations of methane and acetylene point source locations are tested to assess how well the method copes with the distances between the different methane and acetylene sources, in comparison with the classical tracer release technique. The results from these controlled experiments demonstrate that, when the targeted and tracer gases are not well collocated, this new approach provides a better estimate of the emission rates than the tracer release technique. As an example, the relative error between the estimated and actual emission rates is reduced from 32 % with the tracer release technique to 16 % with the combined approach in the case of a tracer located 60 m upwind of a single methane source. Further studies and more complex implementations with more advanced transport models and more advanced optimisations of their configuration will be required to generalise the applicability of the approach and strengthen its robustness.
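The forward model in such an inversion can be as simple as the textbook ground-reflected Gaussian plume; and since the simulated concentration is linear in the emission rate, a single-source rate estimate reduces to a least-squares projection. A sketch with assumed dispersion parameters and synthetic transect data:

```python
import numpy as np

def gaussian_plume(q, u, y, z, sigma_y, sigma_z, h):
    """Ground-reflected Gaussian plume concentration for emission rate q.

    Standard textbook form; sigma_y/sigma_z would normally come from a
    stability class, but here they are fixed illustrative values.
    """
    return (q / (2 * np.pi * u * sigma_y * sigma_z)
            * np.exp(-y**2 / (2 * sigma_y**2))
            * (np.exp(-(z - h)**2 / (2 * sigma_z**2))
               + np.exp(-(z + h)**2 / (2 * sigma_z**2))))

# Synthetic transect: crosswind positions of the measurements (m).
y = np.linspace(-60, 60, 25)
u, sy, sz, h, z = 3.0, 20.0, 10.0, 2.0, 1.5     # assumed conditions
true_q = 0.5                                    # g/s, "unknown" source rate

obs = gaussian_plume(true_q, u, y, z, sy, sz, h)
obs += 0.02 * obs.max() * np.random.default_rng(3).standard_normal(y.size)

# Concentration is linear in q, so least squares reduces to a projection.
g = gaussian_plume(1.0, u, y, z, sy, sz, h)     # unit-rate footprint
q_hat = (g @ obs) / (g @ g)
print(f"estimated rate = {q_hat:.3f} g/s (truth {true_q})")
```

The study's statistical inversion generalises this projection to multiple sources and weights the misfit by the transport-error statistics calibrated with the tracer release.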
NASA Astrophysics Data System (ADS)
Salinas, P.; Pavlidis, D.; Jacquemyn, C.; Lei, Q.; Xie, Z.; Pain, C.; Jackson, M.
2017-12-01
It is well known that the pressure gradient into a production well increases with decreasing distance to the well. To properly capture the local pressure drawdown into the well, a high grid or mesh resolution is required; moreover, the location of the well must be captured accurately. In conventional simulation models, the user must interact with the model to modify the grid resolution around wells of interest, and the well location is approximated on a grid defined early in the modelling process. We report a new approach for improved simulation of near-wellbore flow in reservoir-scale models through the use of dynamic mesh optimisation and the recently presented double control volume finite element method. Time is discretised using an adaptive, implicit approach. Heterogeneous geologic features are represented as volumes bounded by surfaces. Within these volumes, termed geologic domains, the material properties are constant. Up-, cross- or down-scaling of material properties during dynamic mesh optimisation is not required, as the properties are uniform within each geologic domain. A given model typically contains numerous such geologic domains. Wells are implicitly coupled with the domain, and the fluid flow is modelled inside the wells. The method is novel for two reasons. First, a fully unstructured tetrahedral mesh is used to discretise space, and the spatial location of the well is specified via a line vector, ensuring its location even if the mesh is modified during the simulation. The well location is therefore accurately captured, and the approach allows complex well trajectories and wells with many laterals to be modelled. Second, computational efficiency is increased by the use of dynamic mesh optimisation, in which an unstructured mesh adapts in space and time to key solution fields (preserving the geometry of the geologic domains), such as pressure, velocity or temperature. This also increases the quality of the solutions by placing higher resolution where required to reduce an error metric based on the Hessian of the field. This allows the local pressure drawdown to be captured without user-driven modification of the mesh. We demonstrate that the method has wide application in reservoir-scale models of geothermal fields, and regional models of groundwater resources.
Ceberio, Josu; Calvo, Borja; Mendiburu, Alexander; Lozano, Jose A
2018-02-15
In the last decade, many works in combinatorial optimisation have shown that, due to the advances in multi-objective optimisation, the algorithms from this field could be used for solving single-objective problems as well. In this sense, a number of papers have proposed multi-objectivising single-objective problems in order to use multi-objective algorithms in their optimisation. In this article, we follow up this idea by presenting a methodology for multi-objectivising combinatorial optimisation problems based on elementary landscape decompositions of their objective function. Under this framework, each of the elementary landscapes obtained from the decomposition is considered as an independent objective function to optimise. In order to illustrate this general methodology, we consider four problems from different domains: the quadratic assignment problem and the linear ordering problem (permutation domain), the 0-1 unconstrained quadratic optimisation problem (binary domain), and the frequency assignment problem (integer domain). We implemented two widely known multi-objective algorithms, NSGA-II and SPEA2, and compared their performance with that of a single-objective GA. The experiments conducted on a large benchmark of instances of the four problems show that the multi-objective algorithms clearly outperform the single-objective approaches. Furthermore, a discussion on the results suggests that the multi-objective space generated by this decomposition enhances the exploration ability, thus permitting NSGA-II and SPEA2 to obtain better results in the majority of the tested instances.
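As a toy illustration of the multi-objectivisation idea (not an actual elementary landscape decomposition, which is problem-specific), one can split an additive objective f = f1 + f2 into two components, score candidates on both, and keep the non-dominated set:

```python
import numpy as np

rng = np.random.default_rng(4)

def f1(x):
    # First additive component of the objective (illustrative only; a true
    # elementary landscape decomposition is derived analytically).
    return float(np.sum(x * np.roll(x, 1)))

def f2(x):
    # Second component; f = f1 + f2 is the original single objective.
    return float(np.sum(x))

# Random 0-1 candidate solutions (binary domain, minimisation).
pop = rng.integers(0, 2, size=(50, 20))
obj = np.array([[f1(x), f2(x)] for x in pop])

def pareto_front(obj):
    """Indices of points no other point improves in both components."""
    keep = []
    for i, p in enumerate(obj):
        dominated = any(np.all(q <= p) and np.any(q < p)
                        for j, q in enumerate(obj) if j != i)
        if not dominated:
            keep.append(i)
    return keep

front = pareto_front(obj)
print(f"{len(front)} non-dominated candidates; best original objective "
      f"on the front: {min(obj[front].sum(axis=1)):.1f}")
```

An NSGA-II or SPEA2 run would evolve the population using this dominance relation, which is the mechanism the article credits with improved exploration.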
Martens, Leon; Goode, Grahame; Wold, Johan F. H.; Beck, Lionel; Martin, Georgina; Perings, Christian; Stolt, Pelle; Baggerman, Lucas
2014-01-01
Aims To conduct a pilot study on the potential to optimise care pathways in syncope/transient loss of consciousness management by using Lean Six Sigma methodology while maintaining compliance with ESC and/or NICE guidelines. Methods Five hospitals in four European countries took part. The Lean Six Sigma methodology consisted of 3 phases: 1) Assessment phase, in which baseline performance was mapped in each centre, processes were evaluated and a new operational model was developed with an improvement plan that included best practices and change management; 2) Improvement phase, in which optimisation pathways and standardised best-practice tools and forms were developed and implemented, staff were trained on new processes and change-management support was provided; 3) Sustaining phase, which included support and refinement of tools and metrics. The impact of the implementation of the new pathways was evaluated in terms of the number of tests performed, diagnostic yield, time to diagnosis and compliance with guidelines. One hospital with a focus on geriatric populations was analysed separately from the other four. Results With the new pathways, there was a 59% reduction in the average time to diagnosis (p = 0.048) and a 75% increase in diagnostic yield (p = 0.007). There was a marked reduction in repetitions of diagnostic tests and improved prioritisation of indicated tests. Conclusions Applying a structured Lean Six Sigma based methodology to pathways for syncope management has the potential to improve time to diagnosis and diagnostic yield. PMID:24927475
Tredwin, Christopher J; Young, Anne M; Georgiou, George; Shin, Song-Hee; Kim, Hae-Won; Knowles, Jonathan C
2013-02-01
Currently, most titanium implant coatings are made using hydroxyapatite and a plasma spraying technique. There are, however, limitations associated with plasma spraying processes, including poor adherence, high porosity and cost. An alternative method utilising the sol-gel technique offers many potential advantages but currently lacks research data for this application. The objective of this study was to characterise and optimise the production of hydroxyapatite (HA), fluorhydroxyapatite (FHA) and fluorapatite (FA) using a sol-gel technique and to assess the rheological properties of these materials. HA, FHA and FA were synthesised by a sol-gel method. Calcium nitrate and triethylphosphite were used as precursors under an ethanol-water based solution. Different amounts of ammonium fluoride (NH4F) were incorporated for the preparation of the sol-gel derived FHA and FA. Optimisation of the chemistry and subsequent characterisation of the sol-gel derived materials were carried out using X-ray diffraction (XRD) and differential thermal analysis (DTA). The rheology of the sol-gels was investigated using a viscometer and contact angle measurement. A protocol was established that allowed synthesis of HA, FHA and FA that were at least 99% phase pure. The more fluoride incorporated into the apatite structure, the lower the crystallisation temperature, the smaller the unit cell size (changes in the a-axis), and the higher the viscosity and contact angle of the sol-gel derived apatite. A technique has been developed for the production of HA, FHA and FA by the sol-gel technique. Increasing fluoride substitution in the apatite structure alters the potential coating properties. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.
Automatic optimisation of gamma dose rate sensor networks: The DETECT Optimisation Tool
NASA Astrophysics Data System (ADS)
Helle, K. B.; Müller, T. O.; Astrup, P.; Dyve, J. E.
2014-05-01
Fast delivery of comprehensive information on the radiological situation is essential for decision-making in nuclear emergencies. Most national radiological agencies in Europe employ gamma dose rate sensor networks to monitor radioactive pollution of the atmosphere. Sensor locations were often chosen using regular grids or according to administrative constraints. Nowadays, however, the choice can be based on more realistic risk assessment, as it is possible to simulate potential radioactive plumes. To support sensor planning, we developed the DETECT Optimisation Tool (DOT) within the scope of the EU FP7 project DETECT. It evaluates the gamma dose rates that a proposed set of sensors might measure in an emergency and uses this information to optimise the sensor locations. The gamma dose rates are taken from a comprehensive library of simulations of atmospheric radioactive plumes from 64 source locations. These simulations cover the whole European Union, so the DOT allows evaluation and optimisation of sensor networks for all EU countries, as well as evaluation of fencing sensors around possible sources. Users can choose from seven cost functions to evaluate the capability of a given monitoring network for early detection of radioactive plumes or for the creation of dose maps. The DOT is implemented as a stand-alone, easy-to-use Java-based application with a graphical user interface and an R backend. Users can run evaluations and optimisations, and display, store and download the results. The DOT runs on a server and can be accessed via common web browsers; it can also be installed locally.
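The DOT's seven cost functions are not detailed in the abstract; a simple stand-in for the early-detection objective, sketched below, is greedy maximum coverage over a library of simulated plumes (the detection matrix here is random, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)

# detect[i, j] = True if candidate sensor location j would see plume i
# above its detection threshold -- a stand-in for the simulation library.
n_plumes, n_sites = 64, 200
detect = rng.random((n_plumes, n_sites)) < 0.05

def greedy_network(detect, k):
    """Pick k sites, each maximising the number of newly covered plumes."""
    chosen, covered = [], np.zeros(detect.shape[0], dtype=bool)
    for _ in range(k):
        gains = detect[~covered].sum(axis=0)
        gains[chosen] = -1                 # never re-pick a site
        best = int(np.argmax(gains))
        chosen.append(best)
        covered |= detect[:, best]
    return chosen, covered.mean()

sites, coverage = greedy_network(detect, k=10)
print(f"10 sensors detect {coverage:.0%} of the simulated plumes")
```

Greedy placement is a common baseline for such coverage objectives because the coverage function is submodular, so the greedy answer is provably close to optimal.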
NASA Astrophysics Data System (ADS)
Yuan, Wong Wei; Natashah Norizan, Mohd; Salwani Mohamad, Ili; Jamalullail, Nurnaeimah; Hidayah Saad, Nor
2017-11-01
Solar cells are expanding as a green, renewable alternative to conventional fossil-fuel electricity generation, although compared with other, longer-established electrical generators they are a relative newcomer. Solar cells cover many applications, from low-power mobile devices to terrestrial installations, satellites and many more. To date, the highest-efficiency solar cells are GaAs-based multilayer cells. However, this material is very expensive in fabrication and material costs compared with silicon, which is cheaper due to its abundance of supply. Thus, this research was devoted to developing a multilayer solar cell by combining two different P-I-N structures, one with silicon carbide and one with silicon germanium. The research focused on optimising the intrinsic layer thickness, the p-doped layer thickness and concentration, and the n-doped layer thickness and concentration to achieve the highest efficiency. As a result, both single-layer a-SiC and a-SiGe cells showed efficiency improvements via parametric optimisation, reaching 27.19% and 9.07%, respectively. The optimised parameters were then applied to the combined SiC and SiGe P-I-N layers, resulting in an efficiency of 33.80%.
Jamshidi, N; Rostami, M; Najarian, S; Menhaj, M B; Saadatnia, M; Firooz, S
2009-04-01
This paper deals with the dynamic modelling of human walking. The main focus of this research was to optimise the function of the orthosis in patients with neuropathic feet, based on kinematics data from different categories of neuropathic patients. The patient's body in the sagittal plane was modelled to calculate the torques generated at the joints. The kinematics data required for mathematical modelling of the patients were obtained from films of the patients captured by a high-speed camera; the films were then analysed using motion analysis software. An inverse dynamic model was used for estimating the spring coefficient. In our dynamic model, the role of the muscles was substituted by adding a spring-damper between the shank and ankle, which could compensate for muscle weakness and support the design of ankle-foot orthoses based on the kinematics data obtained from the patients. The torque generated at the ankle was varied by changing the spring constant. It was therefore possible to decrease the torque generated in the muscles, which could lead to the design of more comfortable and efficient orthoses. In this research, unlike previous studies that examined abnormal gait or modelled the ankle-foot orthosis separately, the function of the ankle-foot orthosis during abnormal gait was quantitatively improved through a correction of the torque.
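The spring-damper substitution amounts to a torque law τ = -k(θ - θ0) - cω applied at the ankle; sweeping the spring constant k changes how much torque the device, rather than the weakened muscles, supplies. A minimal sketch with assumed coefficients and toy kinematics:

```python
import numpy as np

# Illustrative spring-damper substitution for weakened ankle musculature:
# tau = -k*(theta - theta0) - c*omega, with kinematics from motion capture.
k, c, theta0 = 80.0, 2.5, 0.0      # N*m/rad, N*m*s/rad -- assumed values

t = np.linspace(0, 1, 100)                      # one gait cycle (s)
theta = 0.15 * np.sin(2 * np.pi * t)            # ankle angle (rad), toy data
omega = np.gradient(theta, t)                   # angular velocity (rad/s)

tau_device = -k * (theta - theta0) - c * omega  # torque supplied by orthosis

# The larger the device torque at each instant, the less the (weak) muscles
# must generate; sweeping k trades wearer comfort against support.
print(f"peak orthosis torque: {np.abs(tau_device).max():.1f} N*m")
```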
A simulation-optimization model for effective water resources management in the coastal zone
NASA Astrophysics Data System (ADS)
Spanoudaki, Katerina; Kampanis, Nikolaos
2015-04-01
Coastal areas are the most densely-populated areas in the world. Consequently, water demand is high, posing great pressure on fresh water resources. Climatic change and its direct impacts on meteorological variables (e.g. precipitation) and indirect impacts on sea level rise, as well as anthropogenic pressures (e.g. groundwater abstraction), are strong drivers causing groundwater salinisation and subsequently affecting coastal wetland salinity, with adverse effects on the corresponding ecosystems. Coastal zones are a difficult hydrologic environment to represent with a mathematical model due to the large number of contributing hydrologic processes and variable-density flow conditions. Simulation of sea level rise and tidal effects on aquifer salinisation, and accurate prediction of interactions between coastal waters, groundwater and neighbouring wetlands, require the use of integrated surface water-groundwater mathematical models. In the past few decades several computer codes have been developed to simulate coupled surface and groundwater flow. However, most integrated surface water-groundwater models are based on the assumption of constant fluid density and therefore their applicability to coastal regions is questionable. Thus, most of the existing codes are not well-suited to represent surface water-groundwater interactions in coastal areas. To this end, the 3D integrated surface water-groundwater model IRENE (Spanoudaki et al., 2009; Spanoudaki, 2010) has been modified in order to simulate surface water-groundwater flow and salinity interactions in the coastal zone. IRENE, in its original form, couples the 3D shallow water equations to the equations describing 3D saturated groundwater flow of constant density. A semi-implicit finite difference scheme is used to solve the surface water flow equations, while a fully implicit finite difference scheme is used for the groundwater equations. Pollution interactions are simulated by coupling the advection-diffusion equation describing the fate and transport of contaminants introduced in a 3D turbulent flow field to the partial differential equation describing the fate and transport of contaminants in 3D transient groundwater flow systems. The model has been further developed to include the effects of density variations on surface water and groundwater flow, while the already built-in solute transport capabilities are used to simulate salinity interactions. The refined model is based on the finite volume method using a cell-centred structured grid, providing flexibility and accuracy in simulating irregular boundary geometries. For addressing water resources management problems, simulation models are usually externally coupled with optimisation-based management models. However, this usually requires a very large number of iterations between the optimisation and simulation models in order to obtain the optimal management solution. As an alternative approach, for improved computational efficiency, an Artificial Neural Network (ANN) is trained as an approximate simulator of IRENE. The trained ANN is then linked to a Genetic Algorithm (GA) based optimisation model for managing salinisation problems in the coastal zone. The linked simulation-optimisation model is applied to a hypothetical study area for performance evaluation.
Acknowledgement The work presented in this paper has been funded by the Greek State Scholarships Foundation (IKY), Fellowships of Excellence for Postdoctoral Studies (Siemens Program), 'A simulation-optimization model for assessing the best practices for the protection of surface water and groundwater in the coastal zone', (2013 - 2015). References Spanoudaki, K., Stamou, A.I. and Nanou-Giannarou, A. (2009). Development and verification of a 3-D integrated surface water-groundwater model. Journal of Hydrology, 375 (3-4), 410-427. Spanoudaki, K. (2010). Integrated numerical modelling of surface water groundwater systems (in Greek). Ph.D. Thesis, National Technical University of Athens, Greece.
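The surrogate-plus-GA coupling can be sketched in a few lines: train a network on a modest number of simulator runs, then let the GA query the cheap surrogate instead of the expensive model. Everything below (the stand-in "simulator", network size, GA settings) is an illustrative assumption, not the IRENE setup:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)

# Stand-in for the expensive simulator: decision variables (e.g. pumping
# rates) -> salinisation penalty. Illustrative analytic function only.
def simulator(x):
    return np.sum((x - 0.3) ** 2, axis=-1) + 0.1 * np.sin(5 * x).sum(axis=-1)

# 1) Train the ANN surrogate on a modest number of "simulator runs".
X = rng.random((300, 4))
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                     random_state=0).fit(X, simulator(X))

# 2) A simple GA searches the cheap surrogate instead of the simulator.
pop = rng.random((40, 4))
for _ in range(60):
    fit = model.predict(pop)
    parents = pop[np.argsort(fit)[:20]]                         # selection
    cross = parents[rng.integers(0, 20, (40, 4)), np.arange(4)] # uniform x-over
    pop = np.clip(cross + 0.02 * rng.standard_normal((40, 4)), 0, 1)

best = pop[np.argmin(model.predict(pop))]
print("surrogate optimum:", best, "true cost:", simulator(best))
```

In practice the final candidates would be re-checked against the full simulator, since the surrogate is only an approximation of it.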
Efficient 'Foton' electric-discharge KrCl laser
DOE Office of Scientific and Technical Information (OSTI.GOV)
Panchenko, Aleksei N; Tarasenko, Viktor F
The design of the 'Foton' electric-discharge laser, optimised for operation on the basis of KrCl* molecules, and its energy parameters were investigated. At λ = 222 nm the radiation energy was up to 250 mJ per pulse. The specific output radiation energy was 2.5 J litre(-1) and the laser efficiency was in excess of 0.8%. The possibilities for further improvement of the characteristics of electric-discharge KrCl lasers are discussed.
Wiener-Hammerstein system identification - an evolutionary approach
NASA Astrophysics Data System (ADS)
Naitali, Abdessamad; Giri, Fouad
2016-01-01
The problem of identifying parametric Wiener-Hammerstein (WH) systems is addressed within the evolutionary optimisation context. Specifically, a hybrid culture identification method is developed that involves model structure adaptation using genetic recombination and model parameter learning using particle swarm optimisation. The method enjoys three interesting features: (1) the risk of premature convergence of model parameter estimates to local optima is significantly reduced, due to the constantly maintained diversity of model candidates; (2) no prior knowledge is needed except for upper bounds on the system structure indices; (3) the method is fully autonomous, as no interaction is needed with the user during the optimum search process. The performance of the proposed method is illustrated and compared with alternative methods using a well-established WH benchmark.
Cui, Meiyu; Qiu, Jinxue; Li, Zhenghua; He, Miao; Jin, Mingshi; Kim, Jiman; Quinto, Maurizio; Li, Donghao
2015-01-01
In this study, a stainless steel wire/ionic liquid (IL) solid-phase microextraction (SPME) technique was developed for the direct extraction of APs from water samples. Several parameters were optimised, such as the choice of substrate and IL, extraction time, extraction temperature, stirring rate and sample pH. The experimental data demonstrated that the etched stainless steel wire was a suitable substrate for IL-coated SPME. The coating was prepared by directly depositing the IL onto the surface of the etched stainless steel wire, which exhibited a porous structure and a high surface area. The [C8MIM][PF6] IL exhibited maximum efficiency with an extraction time of 30 min, with the aqueous sample maintained at 40 °C and adjusted to pH 2 under stirring. The enrichment factor of the IL coating for the four APs ranged from 1382 to 4779, the detection limits (LOD, S/N=3) for the four APs ranged from 0.01 to 0.04 ng mL(-1), and the RSD values for purified water spiked with APs ranged from 4.0 to 11.8% (n=3). The calibration graphs were linear in the concentration range from 0.5 to 200 ng mL(-1) (R(2)>0.9569). The optimised method was successfully applied to the analysis of real water samples and proved suitable for the extraction of APs from water samples. Copyright © 2014 Elsevier B.V. All rights reserved.
Shape and energy consistent pseudopotentials for correlated electron systems
Needs, R. J.
2017-01-01
A method is developed for generating pseudopotentials for use in correlated-electron calculations. The paradigms of shape and energy consistency are combined and defined in terms of correlated-electron wavefunctions. The resulting energy-consistent correlated electron pseudopotentials (eCEPPs) are constructed for H, Li–F, Sc–Fe, and Cu. Their accuracy is quantified by comparing the relaxed molecular geometries and dissociation energies which they provide with all-electron results, with all quantities evaluated using coupled cluster singles, doubles, and triples calculations. Errors inherent in the pseudopotentials are also compared with those arising from a number of approximations commonly used with pseudopotentials. The eCEPPs provide a significant improvement in optimised geometries and dissociation energies for small molecules, with errors for the latter being an order of magnitude smaller than for Hartree-Fock-based pseudopotentials available in the literature. Gaussian basis sets are optimised for use with these pseudopotentials. PMID:28571391
Restivo, Annalaura; Degano, Ilaria; Ribechini, Erika; Colombini, Maria Perla
2014-01-01
A method for the HPLC-MS/MS analysis of phenols, including phenolic acids and naphthoquinones, using an amide-embedded phase column was developed and compared with literature methods based on classical C18 stationary phase columns. RP-Amide is a recently developed polar-embedded stationary phase whose wetting properties mean that up to 100% water can be used as an eluent. The increased retention and selectivity for polar compounds, and the possibility of working in 100% water conditions, make this column particularly interesting for the HPLC analysis of phenolic acids and derivatives. In this study, the chromatographic separation was optimised on an HPLC-DAD and used to separate 13 standard phenolic acids and derivatives. The method was validated on an HPLC-ESI-Q-ToF. The acquisition was performed in negative polarity and MS/MS target mode. Ionisation conditions and acquisition parameters for the Q-ToF detector were investigated by working on collision energies and fragmentor potentials. The performance of the method was fully evaluated on standards. Moreover, several phenol-containing raw materials were analysed: walnut, gall, wine, malbec grape, French oak, red henna and propolis. Our method allowed us to characterise the phenolic composition of a wide range of matrices and to highlight possible matrix effects. PMID:24551158
TACD: a transportable ant colony discrimination model for corporate bankruptcy prediction
NASA Astrophysics Data System (ADS)
Lalbakhsh, Pooia; Chen, Yi-Ping Phoebe
2017-05-01
This paper presents a transportable ant colony discrimination strategy (TACD) to predict corporate bankruptcy, a topic of vital importance that is attracting increasing interest in the field of economics. The proposed algorithm uses financial ratios to build a binary prediction model for companies with the two statuses of bankrupt and non-bankrupt. The algorithm takes advantage of an improved version of continuous ant colony optimisation (CACO) at the core, which is used to create an accurate, simple and understandable linear model for discrimination. This also enables the algorithm to work with continuous values, leading to more efficient learning and adaption by avoiding data discretisation. We conduct a comprehensive performance evaluation on three real-world data sets under a stratified cross-validation strategy. In three different scenarios, TACD is compared with 11 other bankruptcy prediction strategies. We also discuss the efficiency of the attribute selection methods used in the experiments. In addition to its simplicity and understandability, statistical significance tests prove the efficiency of TACD against the other prediction algorithms in both measures of AUC and accuracy.
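The paper's improved CACO is not reproduced here, but the core of continuous ant colony optimisation can be sketched: keep an archive of good solutions as the "pheromone" memory and have each ant sample a Gaussian around an archive member, with spread tied to the archive's diversity. The following is an ACO-R-style simplification, with a toy objective standing in for the discriminant-model training error:

```python
import numpy as np

rng = np.random.default_rng(7)

def objective(x):                    # toy objective standing in for the
    return np.sum(x**2, axis=-1)     # discriminant-model training error

k, n_ants, dim, xi = 10, 20, 5, 0.85
archive = rng.uniform(-5, 5, (k, dim))            # "pheromone" = archive
archive = archive[np.argsort(objective(archive))]

for _ in range(200):
    # Each ant picks an archive solution (better ranks more likely) and
    # samples a Gaussian around it, scaled by the spread of the archive.
    w = np.exp(-np.arange(k) / (0.2 * k))
    idx = rng.choice(k, size=n_ants, p=w / w.sum())
    sigma = xi * np.mean(np.abs(archive[:, None] - archive[None, :]),
                         axis=(0, 1))
    ants = archive[idx] + sigma * rng.standard_normal((n_ants, dim))
    # Merge and keep the best k solutions as the new archive.
    pool = np.vstack([archive, ants])
    archive = pool[np.argsort(objective(pool))][:k]

print("best found:", objective(archive[0]))
```

In TACD the solution vector would encode the coefficients of the linear discriminant, so the search stays in a continuous space without discretising the financial ratios.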
Kell, Douglas B
2012-01-01
A considerable number of areas of bioscience, including gene and drug discovery, metabolic engineering for the biotechnological improvement of organisms, and the processes of natural and directed evolution, are best viewed in terms of a ‘landscape’ representing a large search space of possible solutions or experiments populated by a considerably smaller number of actual solutions that then emerge. This is what makes these problems ‘hard’, but as such these are to be seen as combinatorial optimisation problems that are best attacked by heuristic methods known from that field. Such landscapes, which may also represent or include multiple objectives, are effectively modelled in silico, with modern active learning algorithms such as those based on Darwinian evolution providing guidance, using existing knowledge, as to what is the ‘best’ experiment to do next. An awareness, and the application, of these methods can thereby enhance the scientific discovery process considerably. This analysis fits comfortably with an emerging epistemology that sees scientific reasoning, the search for solutions, and scientific discovery as Bayesian processes. PMID:22252984
Integrating professionalism teaching into undergraduate medical education in the UK setting.
Goldie, John
2008-06-01
This paper examines how professionalism teaching might be integrated into undergraduate medical education in the United Kingdom setting. It advocates adopting an outcome-based approach to curriculum planning, using the Scottish Deans' Medical Curriculum Group's (SDMCG) outcomes as a starting point. In discussing the curricular content, potential learning methods and strategies, theoretical considerations are explored. Student selection, assessment and strategies for optimising the educational environment are also considered.
Design and Development of ChemInfoCloud: An Integrated Cloud Enabled Platform for Virtual Screening.
Karthikeyan, Muthukumarasamy; Pandit, Deepak; Bhavasar, Arvind; Vyas, Renu
2015-01-01
The power of cloud computing and distributed computing has been harnessed to handle the vast and heterogeneous data processed in any virtual screening protocol. A cloud computing platform, ChemInfoCloud, was built and integrated with several chemoinformatics and bioinformatics tools. The robust engine performs the core chemoinformatics tasks of lead generation, lead optimisation and property prediction in a fast and efficient manner. It also provides bioinformatics functionality, including sequence alignment, active-site pose prediction and protein-ligand docking. Text mining, NMR chemical shift (1H, 13C) prediction and reaction fingerprint generation modules for efficient lead discovery are also implemented in the platform. We have developed an integrated problem-solving cloud environment for virtual screening studies that also provides workflow management, better usability and interaction with end users through container-based virtualisation (OpenVZ).
NASA Astrophysics Data System (ADS)
Antony, R.; Moliton, A.; Ratier, B.
1998-06-01
Light-emitting diodes based on the ITO/Alq3/Ca-Al structure show enhanced performance when the Alq3 active layer is deposited by IBAD (Ion Beam Assisted Deposition): with iodine ions, the optimisation (internal quantum efficiency increased by an order of magnitude, i.e. a factor of 10) is obtained at an ion energy of 100 eV.
Pure random search for ambient sensor distribution optimisation in a smart home environment.
Poland, Michael P; Nugent, Chris D; Wang, Hui; Chen, Liming
2011-01-01
Smart homes are living spaces facilitated with technology that allows individuals to remain in their own homes for longer, rather than be institutionalised. Sensors are the fundamental physical layer within any smart home, as the data they generate are used to inform decision support systems, which in turn trigger appropriate actuator actions. The positioning of sensors is therefore a fundamental characteristic of a smart home. Contemporary smart home sensor distribution follows either a) a total coverage approach or b) a human assessment approach. These methods for sensor arrangement are not data-driven strategies; they are unempirical and frequently irrational. This study hypothesised that sensor deployment directed by an optimisation method using the inhabitants' spatial frequency data as the search space would produce better sensor distributions than the current practice of deployment by engineers. Seven human engineers were tasked with creating sensor distributions based on perceived utility for 9 deployment scenarios. A Pure Random Search (PRS) algorithm was then tasked with creating matched sensor distributions. The PRS method produced superior distributions in 98.4% of test cases (n=64) against the engineer-instructed deployments when the engineers had no access to the spatial frequency data, and in 92.0% of test cases (n=64) when the engineers had full access to these data. These results confirmed the hypothesis.
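A minimal sketch of the idea, assuming a discretised floor plan and a synthetic spatial frequency map: pure random search draws candidate sensor layouts uniformly at random, scores each by the inhabitant activity it covers, and keeps the best. Grid size, sensor count and sensing range are invented for illustration.

```python
# Pure random search over a synthetic spatial-frequency map.
import numpy as np

rng = np.random.default_rng(0)

GRID = 20          # living space discretised into GRID x GRID cells
SENSORS = 4        # sensors to place
RANGE = 3          # each sensor covers cells within this Chebyshev radius

# Hypothetical spatial frequency data: how often the inhabitant occupies
# each cell (a few synthetic activity hot spots on a noisy background).
freq = rng.random((GRID, GRID)) * 0.1
for cx, cy in [(4, 5), (15, 14), (10, 3)]:
    freq[cx - 1 : cx + 2, cy - 1 : cy + 2] += 5.0

def coverage(layout):
    """Total activity falling within range of at least one sensor."""
    covered = np.zeros_like(freq, dtype=bool)
    for x, y in layout:
        covered[max(0, x - RANGE) : x + RANGE + 1,
                max(0, y - RANGE) : y + RANGE + 1] = True
    return freq[covered].sum()

best, best_score = None, -1.0
for _ in range(10_000):                       # pure random search
    layout = rng.integers(0, GRID, size=(SENSORS, 2))
    score = coverage(layout)
    if score > best_score:
        best, best_score = layout, score

print("best layout:", best.tolist(), f"covering {best_score:.1f} activity units")
```

The appeal of PRS here is that it needs no gradient or neighbourhood structure — only the ability to score a layout against the spatial frequency data.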
Silva, C L; Passos, M; Câmara, J S
2011-01-01
Background: Non-invasive diagnostic strategies aimed at identifying biomarkers of cancer are of great interest for early cancer detection. Urine is potentially a rich source of volatile organic metabolites (VOMs) that can be used as potential cancer biomarkers. Our aim was to develop a generally reliable, rapid, sensitive, and robust analytical method for screening large numbers of urine samples, resulting in a broad spectrum of native VOMs, as a tool to evaluate the potential of these metabolites in the early diagnosis of cancer. Methods: To investigate urinary volatile metabolites as potential cancer biomarkers, urine samples from 33 cancer patients (oncological group: 14 leukaemia, 12 colorectal and 7 lymphoma) and 21 healthy (control group, cancer-free) individuals were qualitatively and quantitatively analysed. Dynamic solid-phase microextraction in headspace mode (dHS-SPME) using a carboxen-polydimethylsiloxane (CAR/PDMS) sorbent in combination with GC-qMS-based metabolomics was applied to isolate and identify the volatile metabolites. This provides a potential non-invasive approach to early cancer diagnosis as a first step. To fulfil this objective, three important dHS-SPME experimental parameters that influence extraction efficiency (fibre coating, extraction time and sampling temperature) were optimised using a univariate optimisation design. The highest extraction efficiency was obtained when sampling was performed at 50°C for 60 min using samples with high ionic strength (17% sodium chloride, w/v) and under agitation. Results: A total of 82 volatile metabolites belonging to distinct chemical classes were identified in the control and oncological groups. Benzene derivatives, terpenoids and phenols were the most common classes for the oncological group, whereas ketones and sulphur compounds were the main classes isolated from the urine headspace of healthy subjects. The results demonstrate that compound concentrations were dramatically different between cancer patients and healthy volunteers. Sixteen of the 82 identified metabolites were found to differ statistically between the groups (P<0.05). A significant increase in the peak area of 2-methyl-3-phenyl-2-propenal, p-cymene, anisole, 4-methyl-phenol and 1,2-dihydro-1,1,6-trimethyl-naphthalene in cancer patients was observed. On average, statistically significantly lower abundances of dimethyl disulphide were found in cancer patients. Conclusions: Gas chromatographic peak areas were submitted to multivariate analysis (principal component analysis and supervised linear discriminant analysis) to visualise clusters within cases and to detect the volatile metabolites able to differentiate cancer patients from healthy individuals. Very good discrimination within cancer groups and between cancer and control groups was achieved. PMID:22085842
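As a hedged sketch of the final chemometric step (PCA followed by supervised linear discriminant analysis), the snippet below applies the same pipeline to a synthetic stand-in for the 82-peak area matrix; none of the numbers are the study's measurements.

```python
# Synthetic stand-in for the PCA + LDA discrimination step.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_cancer, n_control, n_peaks = 33, 21, 82

# Synthetic peak-area matrix: a handful of discriminating metabolites are
# shifted upwards in the 'cancer' group; the rest is shared noise.
X = rng.lognormal(mean=1.0, sigma=0.5, size=(n_cancer + n_control, n_peaks))
X[:n_cancer, :5] *= 2.5          # e.g. elevated p-cymene, anisole, ...
y = np.array([1] * n_cancer + [0] * n_control)

model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validated discrimination accuracy: {scores.mean():.2f}")
```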
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kieselmann, J; Bartzsch, S; Oelfke, U
Purpose: Microbeam Radiation Therapy is a preclinical method in radiation oncology that modulates radiation fields on a micrometre scale. Dose calculation is challenging due to the arising dose gradients and therapeutically important dose ranges. Monte Carlo (MC) simulations, often used as the gold standard, are computationally expensive and hence too slow for the optimisation of treatment parameters in future clinical applications. On the other hand, conventional kernel-based dose calculation leads to inaccurate results close to material interfaces. The purpose of this work is to overcome these inaccuracies while keeping computation times low. Methods: A point kernel superposition algorithm is modified to account for tissue inhomogeneities. Instead of conventional ray-tracing approaches, methods from differential geometry are applied and the space around the primary photon interaction is locally warped. The performance of this approach is compared to MC simulations and a simple convolution algorithm (CA) for two different phantoms and photon spectra. Results: While the peak doses of all dose calculation methods agreed to within 4%, the proposed approach surpassed the simple convolution algorithm in scatter-dose accuracy by a factor of up to 3. In a treatment geometry similar to possible future clinical situations, differences between Monte Carlo and the differential geometry algorithm were less than 3%. At the same time, the calculation time did not exceed 15 minutes. Conclusion: With the developed method it was possible to improve on the accuracy of the CA method, especially at sharp tissue boundaries. While the calculation is more extensive than for the CA method and depends on field size, the typical calculation time for a 20×20 mm² field on a 3.4 GHz processor with 8 GB RAM remained below 15 minutes. Parallelisation and optimisation of the algorithm could lead to further significant reductions in calculation time.
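For orientation, a much-simplified version of the baseline convolution algorithm (CA) is sketched below: the primary interaction pattern of a microbeam array is convolved with an invented isotropic dose point kernel on a homogeneous grid. The beam layout and kernel shape are illustrative assumptions; the paper's actual contribution — locally warping the space around the interaction point to handle inhomogeneities — is not reproduced here.

```python
# Simplified kernel-convolution dose calculation for a microbeam array.
import numpy as np
from scipy.signal import fftconvolve

# 2D phantom grid, nominally 1 micrometre voxels (arbitrary dose units).
nx, ny = 400, 400
primary = np.zeros((nx, ny))
for centre in range(50, nx - 50, 100):         # parallel microbeams, 100 um pitch
    primary[:, centre - 2 : centre + 3] = 1.0  # ~5 um wide beams

# Invented isotropic point kernel: dose scattered away from the
# primary interaction voxel, normalised to unit integral.
r = np.hypot(*np.mgrid[-30:31, -30:31])
kernel = np.exp(-r / 5.0)
kernel /= kernel.sum()

dose = fftconvolve(primary, kernel, mode="same")
peak = dose[:, 50].mean()                      # on a beam axis
valley = dose[:, 100].mean()                   # midway between beams
print(f"peak/valley dose ratio: {peak / valley:.1f}")
```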
Jentzer, Jean-Baptiste; Alignan, Marion; Vaca-Garcia, Carlos; Rigal, Luc; Vilarem, Gérard
2015-01-01
Following the approval of steviol glycosides as a food additive in Europe in December 2011, large-scale stevia cultivation will have to be developed within the EU. Thus there is a need to increase the efficiency of stevia evaluation through germplasm enhancement and agronomic improvement programs. To address the need for faster and reproducible sample throughput, conditions for automated extraction of dried stevia leaves using Accelerated Solvent Extraction were optimised. A response surface methodology was used to investigate the influence of three factors: extraction temperature, static time and cycle number on the stevioside and rebaudioside A extraction yields. The model showed that all the factors had an individual influence on the yield. Optimum extraction conditions were set at 100 °C, 4 min and 1 cycle, which yielded 91.8% ± 3.4% of total extractable steviol glycosides analysed. An additional optimisation was achieved by reducing the grind size of the leaves giving a final yield of 100.8% ± 3.3%. Copyright © 2014 Elsevier Ltd. All rights reserved.
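A sketch of the response-surface step under stated assumptions: fit a full quadratic model of yield in the three factors by least squares, then scan a grid for the predicted optimum. The design points and yield surface are synthetic, chosen only so that the optimum lands near the settings reported above.

```python
# Response-surface fit for three extraction factors (synthetic data).
import itertools
import numpy as np

rng = np.random.default_rng(0)

def features(T, t, c):
    """Full quadratic model terms for temperature, static time, cycles."""
    return np.array([1, T, t, c, T * t, T * c, t * c, T**2, t**2, c**2],
                    dtype=float)

# Hypothetical design points and noisy yields peaking near (100 C, 4 min, 1 cycle).
design = [(T, t, c) for T in (60, 80, 100) for t in (2, 4, 6) for c in (1, 2, 3)]
def true_yield(T, t, c):
    return 92 - 0.002 * (T - 100) ** 2 - 0.8 * (t - 4) ** 2 - 2.0 * (c - 1) ** 2
y = np.array([true_yield(*p) + rng.normal(0, 0.5) for p in design])

A = np.array([features(*p) for p in design])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares quadratic fit

grid = itertools.product(range(60, 101, 5), range(1, 7), (1, 2, 3))
best = max(grid, key=lambda p: features(*p) @ beta)
print("predicted optimum (T, time, cycles):", best)
```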
A review on the simple assembly line balancing type-E problem
NASA Astrophysics Data System (ADS)
Jusop, M.; Rashid, M. F. F. Ab
2015-12-01
Simple assembly line balancing (SALB) is the problem of assigning tasks to workstations along the line so that the precedence relations are satisfied and some performance measure is optimised. Advanced algorithmic approaches are necessary to solve large-scale instances, as SALB is NP-hard. Only a few studies have focused on the simple assembly line balancing problem of Type E (SALB-E), since it is a general and complex problem. SALB-E is the SALB variant that considers the number of workstations and the cycle time simultaneously in order to maximise line efficiency. This paper reviews previous work on optimising the SALB-E problem. It also reviews the Genetic Algorithm approaches that have been used to optimise SALB-E; a toy sketch of the evaluation step such approaches rely on follows below. The review found that none of the existing works consider resource constraints in the SALB-E problem, especially machine and tool constraints. Research on SALB-E will contribute to improved productivity in real industrial applications.
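The sketch below shows the evaluation step a GA for SALB-E would call repeatedly: decode a precedence-feasible task sequence into stations for a trial cycle time and compute line efficiency. Task times and precedence relations are invented for illustration.

```python
# Decode a task sequence into stations and score line efficiency (SALB-E).
task_time = {1: 4, 2: 3, 3: 5, 4: 2, 5: 6, 6: 4}
preds = {1: [], 2: [1], 3: [1], 4: [2], 5: [3], 6: [4, 5]}  # precedences

def decode(sequence, cycle_time):
    """Greedily open stations; each station holds tasks up to cycle_time."""
    stations, current, load = [], [], 0
    for task in sequence:            # sequence must respect the precedences
        if load + task_time[task] > cycle_time:
            stations.append(current)
            current, load = [], 0
        current.append(task)
        load += task_time[task]
    stations.append(current)
    return stations

def line_efficiency(stations, cycle_time):
    total = sum(task_time.values())
    return total / (len(stations) * cycle_time)

# SALB-E searches station count and cycle time together: here we just
# scan candidate cycle times for one feasible sequence.
seq = [1, 2, 3, 4, 5, 6]
for ct in range(6, 13):
    st = decode(seq, ct)
    print(f"cycle time {ct}: {len(st)} stations, "
          f"efficiency {line_efficiency(st, ct):.2f}")
```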
Path integration mediated systematic search: a Bayesian model.
Vickerstaff, Robert J; Merkle, Tobias
2012-08-21
The systematic search behaviour is a backup system that increases the chances of desert ants finding their nest entrance after foraging, when the path integrator has failed to guide them home accurately enough. Here we present a mathematical model of the systematic search that is based on extensive behavioural studies of the North African desert ant Cataglyphis fortis. First, a simple search heuristic utilising Bayesian inference and a probability density function is developed. This model, which optimises the short-term nest detection probability, is then compared to three simpler search heuristics and to recorded search patterns of Cataglyphis ants. To compare the different searches, a method to quantify search efficiency is established, as well as an estimate of the error rate in the ants' path integrator. We demonstrate that the Bayesian search heuristic is able to automatically adapt to increasing levels of positional uncertainty to produce broader search patterns, just as desert ants do, and that it outperforms the three other search heuristics tested. The searches it produces are also arguably the most similar in appearance to the ants' searches. Copyright © 2012 Elsevier Ltd. All rights reserved.
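A minimal sketch of the Bayesian idea under stated assumptions: a grid posterior over the nest position (a Gaussian prior whose width stands in for path-integrator error), greedy movement to the neighbouring cell with the highest posterior mass, and Bayesian downweighting of each searched cell. All numbers are illustrative, and the model is far simpler than the paper's.

```python
# Grid-posterior systematic search: wider prior -> broader search pattern.
import numpy as np

N = 41                                       # search arena, grid cells
xs, ys = np.mgrid[0:N, 0:N]
centre = N // 2

def run_search(sigma, steps=200, detect=0.9):
    # Prior over the nest position: wider sigma = worse path integration.
    post = np.exp(-((xs - centre) ** 2 + (ys - centre) ** 2) / (2 * sigma**2))
    post /= post.sum()
    pos = (centre, centre)
    visited = [pos]
    for _ in range(steps):
        # Move to the neighbouring cell with the highest posterior mass.
        x, y = pos
        neighbours = [(x + dx, y + dy)
                      for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                      if (dx or dy) and 0 <= x + dx < N and 0 <= y + dy < N]
        pos = max(neighbours, key=lambda c: post[c])
        # Bayesian update after not finding the nest here: this cell's
        # probability shrinks by the detection probability.
        post[pos] *= 1 - detect
        post /= post.sum()
        visited.append(pos)
    spread = np.array(visited) - centre
    return np.abs(spread).max()              # how far the search ranged

for sigma in (2, 4, 8):                      # growing positional uncertainty
    print(f"sigma {sigma}: search extends {run_search(sigma)} cells from start")
```

Widening the prior automatically broadens the search, mirroring the adaptation the abstract reports for the ants.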
Sustainable Mining Land Use for Lignite Based Energy Projects
NASA Astrophysics Data System (ADS)
Dudek, Michal; Krysa, Zbigniew
2017-12-01
This research discusses the economic viability of complex lignite-based energy projects and their impact on sustainable land use, with respect to project risk and uncertainty, economics, pit optimisation (e.g. the Lerchs-Grossmann algorithm) and the importance of lignite as a fuel whose in situ value may be expressed as a deposit of energy. The sensitivity analysis and simulation cover estimated variable land acquisition costs, geostatistics, 3D deposit block modelling, the electricity price (considered as the project's product price), power station efficiency and the power station's lignite processing unit cost, CO2 allowance costs, the mining unit cost, and lignite availability treated as the kriging estimation error of the lignite reserves. The investigated parameters influence the results nonlinearly, so the economically viable amount of lignite in the optimal pit varies and in turn has a nonlinear impact on the land area required for the mining operation.
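To illustrate the optimisation component, here is a toy 2D dynamic-programming version of the Lerchs-Grossmann pit limit algorithm with a 45-degree slope constraint. The block values are invented; a real study would derive them from kriged lignite energy content, mining and processing costs, electricity prices and CO2 allowance costs.

```python
# Toy 2D Lerchs-Grossmann pit optimisation (invented block values).
import numpy as np

# rows = depth levels, columns = surface positions; mostly waste blocks
# (negative value) with a buried lignite seam of positive-value blocks.
value = np.full((6, 10), -1.0)
value[3, 2:8] = 4.0                          # lignite seam at depth 3
value[4, 4:7] = 5.0                          # richer pocket at depth 4

depth, width = value.shape
cum = np.cumsum(value, axis=0)               # value of digging column j to depth i

# P[i, j]: best value of a feasible pit over columns 0..j whose floor in
# column j is at depth i (i = 0 means column j is not mined at all).
P = np.full((depth + 1, width), -np.inf)
P[:, 0] = np.concatenate(([0.0], cum[:, 0]))
for j in range(1, width):
    for i in range(depth + 1):
        lo, hi = max(0, i - 1), min(depth, i + 1)  # 45-degree slope rule
        block = 0.0 if i == 0 else cum[i - 1, j]
        P[i, j] = block + P[lo : hi + 1, j - 1].max()

print("optimal pit value:", P[:, -1].max())
```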