Sample records for computationally efficient optimisation-based

  1. A supportive architecture for CFD-based design optimisation

    NASA Astrophysics Data System (ADS)

    Li, Ni; Su, Zeya; Bi, Zhuming; Tian, Chao; Ren, Zhiming; Gong, Guanghong

    2014-03-01

    Multi-disciplinary design optimisation (MDO) is one of the critical methodologies for the implementation of enterprise systems (ES). MDO that requires fluid dynamics analysis poses a special challenge due to its extremely intensive computation. The rapid development of computational fluid dynamics (CFD) techniques has driven a rise in their application in various fields. Especially for the exterior design of vehicles, CFD has become one of the three main design tools, comparable to analytical approaches and wind tunnel experiments. CFD-based design optimisation is an effective way to achieve the desired performance under the given constraints. However, due to the complexity of CFD, integrating CFD analysis into an intelligent optimisation algorithm is not straightforward. It is a challenge to solve a CFD-based design problem, which usually has high dimensionality and multiple objectives and constraints. It is therefore desirable to have an integrated architecture for CFD-based design optimisation; however, our review of existing works has found that very few researchers have studied assistive tools to facilitate CFD-based design optimisation. In this paper, a multi-layer architecture and a general procedure are proposed to integrate different CFD toolsets with intelligent optimisation algorithms, parallel computing and other techniques for efficient computation. In the proposed architecture, the integration is performed either at the code level or the data level to fully utilise the capabilities of different assistive tools. Two intelligent algorithms are developed and embedded with parallel computing. These algorithms, together with the supportive architecture, lay a solid foundation for various applications of CFD-based design optimisation. To illustrate the effectiveness of the proposed architecture and algorithms, case studies on the aerodynamic shape design of a hypersonic cruising vehicle are provided; the results show that the proposed architecture and algorithms perform successfully and efficiently on a design optimisation with over 200 design variables.

  2. Multiobjective optimisation of bogie suspension to boost speed on curves

    NASA Astrophysics Data System (ADS)

    Milad Mousavi-Bideleh, Seyed; Berbyuk, Viktor

    2016-01-01

    To improve safety and maximum admissible speed in different operational scenarios, multiobjective optimisation of the bogie suspension components of a one-car railway vehicle model is considered. The vehicle model has 50 degrees of freedom and is developed in the multibody dynamics software SIMPACK. Track shift force, running stability, and risk of derailment are selected as safety objective functions. The improved maximum admissible speeds of the vehicle on curves are determined based on track plane accelerations up to 1.5 m/s². To reduce the number of design parameters for optimisation and improve the computational efficiency, a global sensitivity analysis is accomplished using the multiplicative dimensional reduction method (M-DRM). A multistep optimisation routine based on a genetic algorithm (GA) and MATLAB/SIMPACK co-simulation is executed at three levels. The bogie conventional secondary and primary suspension components are chosen as the design parameters in the first two steps, respectively. The last step focuses on semi-active suspension: the input electrical current to magnetorheological yaw dampers is optimised to guarantee an appropriate safety level. Semi-active controllers are also applied and their effects on bogie dynamics are explored. The safety Pareto optimised results are compared with those associated with in-service values. The global sensitivity analysis and multistep approach significantly reduced the number of design parameters and improved the computational efficiency of the optimisation. Furthermore, using the optimised values of the design parameters gives the possibility of running the vehicle up to 13% faster on curves while a satisfactory safety level is guaranteed. The results obtained can be used in Pareto optimisation and active bogie suspension design problems.

  2. Sybil - efficient constraint-based modelling in R.

    PubMed

    Gelius-Dietrich, Gabriel; Desouki, Abdelmoneim Amer; Fritzemeier, Claus Jonathan; Lercher, Martin J

    2013-11-13

    Constraint-based analyses of metabolic networks are widely used to simulate the properties of genome-scale metabolic networks. Publicly available implementations tend to be slow, impeding large scale analyses such as the genome-wide computation of pairwise gene knock-outs, or the automated search for model improvements. Furthermore, available implementations cannot easily be extended or adapted by users. Here, we present sybil, an open source software library for constraint-based analyses in R; R is a free, platform-independent environment for statistical computing and graphics that is widely used in bioinformatics. Among other functions, sybil currently provides efficient methods for flux-balance analysis (FBA), MOMA, and ROOM that are about ten times faster than previous implementations when calculating the effect of whole-genome single gene deletions in silico on a complete E. coli metabolic model. Due to the object-oriented architecture of sybil, users can easily build analysis pipelines in R or even implement their own constraint-based algorithms. Based on its highly efficient communication with different mathematical optimisation programs, sybil facilitates the exploration of high-dimensional optimisation problems on small time scales. Sybil and all its dependencies are open source. Sybil and its documentation are available for download from the comprehensive R archive network (CRAN).
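
    As context for the flux-balance analysis (FBA) that sybil accelerates, the sketch below poses FBA as the linear programme maximise c'v subject to the steady-state condition S v = 0 and flux bounds. It is a minimal Python illustration with a toy three-reaction network and SciPy's linprog, not sybil's R interface or the E. coli model.

    ```python
    # FBA as a linear programme: maximise a target flux c'v subject to
    # steady state S v = 0 and flux bounds lb <= v <= ub.
    # Toy 2-metabolite, 3-reaction network; illustrative only.
    import numpy as np
    from scipy.optimize import linprog

    S = np.array([[1, -1,  0],    # metabolite A: produced by R1, consumed by R2
                  [0,  1, -1]])   # metabolite B: produced by R2, consumed by R3
    c = np.array([0.0, 0.0, 1.0])  # objective: maximise flux through R3
    bounds = [(0, 10), (0, 10), (0, 10)]

    # linprog minimises, so negate the objective to maximise c'v
    res = linprog(-c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
    print("optimal objective flux:", -res.fun, "fluxes:", res.x)
    ```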

  4. Integration of Monte-Carlo ray tracing with a stochastic optimisation method: application to the design of solar receiver geometry.

    PubMed

    Asselineau, Charles-Alexis; Zapata, Jose; Pye, John

    2015-06-01

    A stochastic optimisation method adapted to illumination and radiative heat transfer problems involving Monte-Carlo ray-tracing is presented. A solar receiver shape optimisation case study illustrates the advantages of the method and its potential: efficient receivers are identified using a moderate computational cost.

  5. Structure and weights optimisation of a modified Elman network emotion classifier using hybrid computational intelligence algorithms: a comparative study

    NASA Astrophysics Data System (ADS)

    Sheikhan, Mansour; Abbasnezhad Arabi, Mahdi; Gharavian, Davood

    2015-10-01

    Artificial neural networks are efficient models in pattern recognition applications, but their performance depends on employing a suitable structure and connection weights. This study used a hybrid method for obtaining the optimal weight set and architecture of a recurrent neural emotion classifier, based on the gravitational search algorithm (GSA) and its binary version (BGSA), respectively. By considering features of the speech signal related to prosody, voice quality, and spectrum, a rich feature set was constructed. To select more efficient features, a fast feature selection method was employed. The performance of the proposed hybrid GSA-BGSA method was compared with similar hybrid methods based on the particle swarm optimisation (PSO) algorithm and its binary version, PSO and the discrete firefly algorithm, and a hybrid of error back-propagation and genetic algorithm that were used for optimisation. Experimental tests on the Berlin emotional database demonstrated the superior performance of the proposed method using a lighter network structure.

  6. Modern multicore and manycore architectures: Modelling, optimisation and benchmarking a multiblock CFD code

    NASA Astrophysics Data System (ADS)

    Hadade, Ioan; di Mare, Luca

    2016-08-01

    Modern multicore and manycore processors exhibit multiple levels of parallelism through a wide range of architectural features such as SIMD for data parallel execution or threads for core parallelism. The exploitation of multi-level parallelism is therefore crucial for achieving superior performance on current and future processors. This paper presents the performance tuning of a multiblock CFD solver on Intel SandyBridge and Haswell multicore CPUs and the Intel Xeon Phi Knights Corner coprocessor. Code optimisations have been applied on two computational kernels exhibiting different computational patterns: the update of flow variables and the evaluation of the Roe numerical fluxes. We discuss at great length the code transformations required for achieving efficient SIMD computations for both kernels across the selected devices including SIMD shuffles and transpositions for flux stencil computations and global memory transformations. Core parallelism is expressed through threading based on a number of domain decomposition techniques together with optimisations pertaining to alleviating NUMA effects found in multi-socket compute nodes. Results are correlated with the Roofline performance model in order to assert their efficiency for each distinct architecture. We report significant speedups for single thread execution across both kernels: 2-5X on the multicore CPUs and 14-23X on the Xeon Phi coprocessor. Computations at full node and chip concurrency deliver a factor of three speedup on the multicore processors and up to 24X on the Xeon Phi manycore coprocessor.
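
    The Roofline correlation mentioned above reduces to a simple bound: a kernel's attainable rate is the minimum of peak compute and memory bandwidth times arithmetic intensity. The sketch below shows that calculation with illustrative, not measured, machine numbers.

    ```python
    # Roofline bound: attainable GFLOP/s is limited either by peak compute or
    # by bandwidth * arithmetic intensity. Peak figures below are hypothetical.
    def roofline(peak_gflops, bandwidth_gbs, intensity_flops_per_byte):
        """Attainable GFLOP/s for a kernel of given arithmetic intensity."""
        return min(peak_gflops, bandwidth_gbs * intensity_flops_per_byte)

    # e.g. a flux kernel at 0.5 flop/byte on a hypothetical 500 GFLOP/s,
    # 60 GB/s socket is bandwidth-bound at ~30 GFLOP/s
    print(roofline(500.0, 60.0, 0.5))
    ```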

  7. A New Computational Technique for the Generation of Optimised Aircraft Trajectories

    NASA Astrophysics Data System (ADS)

    Chircop, Kenneth; Gardi, Alessandro; Zammit-Mangion, David; Sabatini, Roberto

    2017-12-01

    A new computational technique based on Pseudospectral Discretisation (PSD) and adaptive bisection ɛ-constraint methods is proposed to solve multi-objective aircraft trajectory optimisation problems formulated as nonlinear optimal control problems. This technique is applicable to a variety of next-generation avionics and Air Traffic Management (ATM) Decision Support Systems (DSS) for strategic and tactical replanning operations. These include the future Flight Management Systems (FMS) and the 4-Dimensional Trajectory (4DT) planning and intent negotiation/validation tools envisaged by SESAR and NextGen for a global implementation. In particular, after describing the PSD method, the adaptive bisection ɛ-constraint method is presented to allow an efficient solution of problems in which two or more performance indices are to be minimised simultaneously. Initial simulation case studies were performed adopting suitable aircraft dynamics models and addressing a classical vertical trajectory optimisation problem with two simultaneous objectives. Subsequently, a more advanced 4DT simulation case study is presented with a focus on representative ATM optimisation objectives in the Terminal Manoeuvring Area (TMA). The simulation results are analysed in depth and corroborated by flight performance analysis, supporting the validity of the proposed computational techniques.
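
    To make the ɛ-constraint idea concrete, the sketch below scalarises a two-objective problem by minimising one index subject to the other staying below a threshold ɛ, and sweeps ɛ to trace Pareto points; the paper's method bisects this range adaptively and uses pseudospectral transcription, neither of which is reproduced here. The toy quadratic objectives and SciPy's SLSQP solver are assumptions for illustration only.

    ```python
    # Epsilon-constraint scalarisation: minimise f1 subject to f2 <= eps,
    # then vary eps to trace the Pareto front of the two objectives.
    import numpy as np
    from scipy.optimize import minimize

    f1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2          # e.g. a "fuel" index
    f2 = lambda x: x[0] ** 2 + (x[1] - 1.0) ** 2          # e.g. a "time" index

    def pareto_point(eps, x0=np.zeros(2)):
        cons = [{"type": "ineq", "fun": lambda x: eps - f2(x)}]   # f2(x) <= eps
        return minimize(f1, x0, method="SLSQP", constraints=cons)

    for eps in np.linspace(0.1, 1.9, 5):   # coarse sweep; the real method bisects adaptively
        res = pareto_point(eps)
        print(f"eps={eps:.2f}  f1={res.fun:.3f}  f2={f2(res.x):.3f}")
    ```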

  8. An improved design method based on polyphase components for digital FIR filters

    NASA Astrophysics Data System (ADS)

    Kumar, A.; Kuldeep, B.; Singh, G. K.; Lee, Heung No

    2017-11-01

    This paper presents an efficient design method for digital finite impulse response (FIR) filters, based on polyphase components and swarm optimisation techniques (SOTs). For this purpose, the design problem is formulated as the mean square error between the actual and ideal responses in the frequency domain, using the polyphase components of a prototype filter. To achieve a more precise frequency response at specified frequencies, fractional derivative constraints (FDCs) have been applied, and optimal FDCs are computed using SOTs such as the cuckoo search and modified cuckoo search algorithms. A comparative study with well-proven swarm optimisation techniques, namely particle swarm optimisation and the artificial bee colony algorithm, is made. The merit of the proposed method is evaluated using several important filter attributes, and the comparative study confirms its effectiveness for FIR filter design.
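
    A minimal sketch of the kind of frequency-domain mean-square-error objective such swarm optimisers would minimise: the prototype filter's response is assembled from its polyphase components and compared against an ideal low-pass target. The prototype coefficients, decimation factor and band edge are illustrative assumptions, not values from the paper.

    ```python
    # MSE between a polyphase-assembled response and an ideal low-pass response.
    import numpy as np

    def response_from_polyphase(h, M, w):
        """H(e^jw) = sum_k e^{-jwk} E_k(e^{jMw}), E_k being the k-th polyphase branch."""
        H = np.zeros_like(w, dtype=complex)
        for k in range(M):
            e_k = h[k::M]                                     # k-th polyphase component
            n = np.arange(len(e_k))
            Ek = np.array([np.sum(e_k * np.exp(-1j * M * wi * n)) for wi in w])
            H += np.exp(-1j * w * k) * Ek
        return H

    h = np.hanning(16); h /= h.sum()                          # toy prototype filter
    w = np.linspace(0, np.pi, 256)
    H_ideal = np.where(w <= 0.3 * np.pi, 1.0, 0.0)            # ideal low-pass target
    mse = np.mean(np.abs(response_from_polyphase(h, 2, w) - H_ideal) ** 2)
    print("frequency-domain MSE:", mse)
    ```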

  9. Computer-aided diagnosis of melanoma using border and wavelet-based texture analysis.

    PubMed

    Garnavi, Rahil; Aldeen, Mohammad; Bailey, James

    2012-11-01

    This paper presents a novel computer-aided diagnosis system for melanoma. The novelty lies in the optimised selection and integration of features derived from textural, border-based and geometrical properties of the melanoma lesion. The texture features are derived using wavelet decomposition, the border features are derived by constructing a boundary-series model of the lesion border and analysing it in the spatial and frequency domains, and the geometry features are derived from shape indexes. The optimised selection of features is achieved by using the Gain-Ratio method, which is shown to be computationally efficient for the melanoma diagnosis application. Classification is done through the use of four classifiers, namely Support Vector Machine, Random Forest, Logistic Model Tree and Hidden Naive Bayes. The proposed diagnostic system is applied to a set of 289 dermoscopy images (114 malignant, 175 benign) partitioned into train, validation and test image sets. The system achieves an accuracy of 91.26% and an AUC value of 0.937 when 23 features are used. Other important findings include (i) the clear advantage gained in complementing texture with border and geometry features, compared to using texture information only, and (ii) the higher contribution of texture features than border-based features in the optimised feature set.
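
    For reference, the Gain-Ratio criterion used for the optimised feature selection is information gain normalised by the intrinsic information of the split. The sketch below computes it for a hypothetical binarised feature and binary class labels; it is not the authors' code.

    ```python
    # Gain ratio = information gain of a split / entropy of the split itself.
    import numpy as np

    def entropy(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    def gain_ratio(feature, labels):
        h_before = entropy(labels)
        values, counts = np.unique(feature, return_counts=True)
        weights = counts / counts.sum()
        h_after = sum(w * entropy(labels[feature == v]) for v, w in zip(values, weights))
        info_gain = h_before - h_after
        split_info = entropy(feature)          # intrinsic information of the split
        return info_gain / split_info if split_info > 0 else 0.0

    y = np.array([1, 1, 0, 0, 1, 0])           # malignant / benign labels
    x = np.array([1, 1, 0, 0, 1, 1])           # hypothetical binarised texture feature
    print(gain_ratio(x, y))
    ```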

  10. Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets.

    PubMed

    Scharfe, Michael; Pielot, Rainer; Schreiber, Falk

    2010-01-11

    Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. We evaluate the CBE-driven PlayStation 3 as a high-performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch reducing and loop unrolling techniques with special attention to 32-bit multiplies and limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. The results demonstrate that the CBE processor in a PlayStation 3 accelerates computationally intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3 as a low-cost CBE-based platform offers an efficient alternative to conventional hardware for solving computational problems in image processing and bioinformatics.

  11. Topology Optimisation of Wideband Coaxial-to-Waveguide Transitions

    NASA Astrophysics Data System (ADS)

    Hassan, Emadeldeen; Noreland, Daniel; Wadbro, Eddie; Berggren, Martin

    2017-03-01

    To maximize the matching between a coaxial cable and rectangular waveguides, we present a computational topology optimisation approach that decides, for each point in a given domain, whether to place a good conductor or a good dielectric. The conductivity is determined by a gradient-based optimisation method that relies on finite-difference time-domain solutions to the 3D Maxwell’s equations. Unlike previously reported results in the literature for this kind of problem, our design algorithm can efficiently handle tens of thousands of design variables, which allows novel conceptual waveguide designs. We demonstrate the effectiveness of the approach by presenting optimised transitions with reflection coefficients lower than -15 dB over more than a 60% bandwidth, both for right-angle and end-launcher configurations. The performance of the proposed transitions is cross-verified with commercial software, and one design case is validated experimentally.

  12. Topology Optimisation of Wideband Coaxial-to-Waveguide Transitions.

    PubMed

    Hassan, Emadeldeen; Noreland, Daniel; Wadbro, Eddie; Berggren, Martin

    2017-03-23

    To maximize the matching between a coaxial cable and rectangular waveguides, we present a computational topology optimisation approach that decides, for each point in a given domain, whether to place a good conductor or a good dielectric. The conductivity is determined by a gradient-based optimisation method that relies on finite-difference time-domain solutions to the 3D Maxwell's equations. Unlike previously reported results in the literature for this kind of problem, our design algorithm can efficiently handle tens of thousands of design variables, which allows novel conceptual waveguide designs. We demonstrate the effectiveness of the approach by presenting optimised transitions with reflection coefficients lower than -15 dB over more than a 60% bandwidth, both for right-angle and end-launcher configurations. The performance of the proposed transitions is cross-verified with commercial software, and one design case is validated experimentally.

  13. Topology Optimisation of Wideband Coaxial-to-Waveguide Transitions

    PubMed Central

    Hassan, Emadeldeen; Noreland, Daniel; Wadbro, Eddie; Berggren, Martin

    2017-01-01

    To maximize the matching between a coaxial cable and rectangular waveguides, we present a computational topology optimisation approach that decides, for each point in a given domain, whether to place a good conductor or a good dielectric. The conductivity is determined by a gradient-based optimisation method that relies on finite-difference time-domain solutions to the 3D Maxwell’s equations. Unlike previously reported results in the literature for this kind of problem, our design algorithm can efficiently handle tens of thousands of design variables, which allows novel conceptual waveguide designs. We demonstrate the effectiveness of the approach by presenting optimised transitions with reflection coefficients lower than −15 dB over more than a 60% bandwidth, both for right-angle and end-launcher configurations. The performance of the proposed transitions is cross-verified with commercial software, and one design case is validated experimentally. PMID:28332585

  14. Crystal structure optimisation using an auxiliary equation of state

    NASA Astrophysics Data System (ADS)

    Jackson, Adam J.; Skelton, Jonathan M.; Hendon, Christopher H.; Butler, Keith T.; Walsh, Aron

    2015-11-01

    Standard procedures for local crystal-structure optimisation involve numerous energy and force calculations. It is common to calculate an energy-volume curve, fitting an equation of state around the equilibrium cell volume. This is a computationally intensive process, in particular, for low-symmetry crystal structures where each isochoric optimisation involves energy minimisation over many degrees of freedom. Such procedures can be prohibitive for non-local exchange-correlation functionals or other "beyond" density functional theory electronic structure techniques, particularly where analytical gradients are not available. We present a simple approach for efficient optimisation of crystal structures based on a known equation of state. The equilibrium volume can be predicted from one single-point calculation and refined with successive calculations if required. The approach is validated for PbS, PbTe, ZnS, and ZnTe using nine density functionals and applied to the quaternary semiconductor Cu2ZnSnS4 and the magnetic metal-organic framework HKUST-1.
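
    For orientation, the auxiliary equation of state in question is typically a third-order Birch-Murnaghan form. The sketch below fits that form to a small synthetic energy-volume scan and reports the equilibrium volume; the data and starting values are illustrative, and it shows a conventional fit rather than the paper's single-point prediction scheme.

    ```python
    # Third-order Birch-Murnaghan equation of state fitted to a coarse E(V) scan.
    import numpy as np
    from scipy.optimize import curve_fit

    def birch_murnaghan(V, E0, V0, B0, Bp):
        eta = (V0 / V) ** (2.0 / 3.0)
        return E0 + 9.0 * V0 * B0 / 16.0 * ((eta - 1) ** 3 * Bp
                                            + (eta - 1) ** 2 * (6 - 4 * eta))

    V = np.array([36.0, 38.0, 40.0, 42.0, 44.0])          # cell volumes (synthetic, A^3)
    E = np.array([-8.331, -8.423, -8.450, -8.427, -8.365])  # total energies (synthetic, eV)

    p0 = (E.min(), V[np.argmin(E)], 0.5, 4.0)             # rough starting guesses
    (E0, V0, B0, Bp), _ = curve_fit(birch_murnaghan, V, E, p0=p0)
    print(f"equilibrium volume ~ {V0:.2f} A^3, bulk modulus ~ {B0:.3f} eV/A^3")
    ```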

  15. Vehicle trajectory linearisation to enable efficient optimisation of the constant speed racing line

    NASA Astrophysics Data System (ADS)

    Timings, Julian P.; Cole, David J.

    2012-06-01

    A driver model is presented that is capable of optimising the trajectory of a simple dynamic nonlinear vehicle, at constant forward speed, so that progression along a predefined track is maximised as a function of time. In doing so, the model is able to continually operate a vehicle at its lateral-handling limit, maximising vehicle performance. The technique used forms part of the solution to the motor racing objective of minimising lap time. A new approach to formulating the minimum lap time problem is motivated by the need for a more computationally efficient and robust tool-set for understanding on-the-limit driving behaviour. This has been achieved through set-point-dependent linearisation of the vehicle model and coupling of the vehicle-track system using an intrinsic coordinate description. Through this, the geometric vehicle trajectory has been linearised relative to the track reference, leading to a new path optimisation algorithm that can be formulated as a computationally efficient convex quadratic programming problem.

  16. Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets

    PubMed Central

    2010-01-01

    Background: Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. Results: We evaluate the CBE-driven PlayStation 3 as a high-performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch reducing and loop unrolling techniques with special attention to 32-bit multiplies and limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. Conclusions: The results demonstrate that the CBE processor in a PlayStation 3 accelerates computationally intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3 as a low-cost CBE-based platform offers an efficient alternative to conventional hardware for solving computational problems in image processing and bioinformatics. PMID:20064262

  17. Multi-terminal pipe routing by Steiner minimal tree and particle swarm optimisation

    NASA Astrophysics Data System (ADS)

    Liu, Qiang; Wang, Chengen

    2012-08-01

    Computer-aided design of pipe routing is of fundamental importance for the development of complex equipment. In this article, non-rectilinear branch pipe routing with multiple terminals, which can be formulated as a Euclidean Steiner Minimal Tree with Obstacles (ESMTO) problem, is studied in the context of aeroengine integrated design engineering. Unlike traditional methods that connect pipe terminals sequentially, this article presents a new branch pipe routing algorithm based on Steiner tree theory. The article begins with a new algorithm for solving the ESMTO problem by using particle swarm optimisation (PSO), and then extends the method to surface cases by using geodesics to meet the requirements of routing non-rectilinear pipes on the surfaces of aeroengines. Subsequently, an adaptive region strategy and the basic visibility graph method are adopted to increase computational efficiency. Numerical computations show that the proposed routing algorithm can find satisfactory routing layouts while running in polynomial time.

  18. Robustness analysis of bogie suspension components Pareto optimised values

    NASA Astrophysics Data System (ADS)

    Mousavi Bideleh, Seyed Milad

    2017-08-01

    The bogie suspension system of high speed trains can significantly affect vehicle performance. Multiobjective optimisation problems are often formulated and solved to find Pareto optimised values of the suspension components and improve cost efficiency in railway operations from different perspectives. Uncertainties in the design parameters of the suspension system can negatively influence the dynamic behaviour of railway vehicles. In this regard, robustness analysis of the bogie dynamic response with respect to uncertainties in the suspension design parameters is considered. A one-car railway vehicle model with 50 degrees of freedom and wear/comfort Pareto optimised values of the bogie suspension components is chosen for the analysis. Longitudinal and lateral primary stiffnesses, longitudinal and vertical secondary stiffnesses, as well as yaw damping, are considered as the five design parameters. The effects of parameter uncertainties on wear, ride comfort, track shift force, stability, and risk of derailment are studied by varying the design parameters around their respective Pareto optimised values according to a lognormal distribution with different coefficients of variation (COVs). The robustness analysis is carried out based on the maximum entropy concept. The multiplicative dimensional reduction method is utilised to simplify the calculation of fractional moments and improve the computational efficiency. The results show that the dynamic response of the vehicle with wear/comfort Pareto optimised values of the bogie suspension is robust against uncertainties in the design parameters, and the probability of failure is small for parameter uncertainties with COV up to 0.1.
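
    To illustrate the perturbation model, the sketch below draws lognormal samples of a design parameter with a prescribed mean and coefficient of variation, using the standard moment relations for the underlying normal. The nominal stiffness value is a hypothetical placeholder.

    ```python
    # Lognormal perturbation of a nominal parameter with a given COV = std/mean.
    import numpy as np

    def lognormal_samples(mean, cov, n, rng=np.random.default_rng(0)):
        """Samples whose mean and coefficient of variation match the arguments."""
        sigma2 = np.log(1.0 + cov ** 2)            # variance of the underlying normal
        mu = np.log(mean) - 0.5 * sigma2           # chosen so that E[X] = mean
        return rng.lognormal(mu, np.sqrt(sigma2), n)

    k_nominal = 1.2e6                               # hypothetical primary stiffness [N/m]
    samples = lognormal_samples(k_nominal, cov=0.1, n=5)
    print(samples, samples.mean())
    ```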

  19. Optimisation by hierarchical search

    NASA Astrophysics Data System (ADS)

    Zintchenko, Ilia; Hastings, Matthew; Troyer, Matthias

    2015-03-01

    Finding optimal values for a set of variables relative to a cost function gives rise to some of the hardest problems in physics, computer science and applied mathematics. Although often very simple in their formulation, these problems have a complex cost function landscape which prevents currently known algorithms from efficiently finding the global optimum. Countless techniques have been proposed to partially circumvent this problem, but an efficient method is yet to be found. We present a heuristic, general purpose approach to potentially improve the performance of conventional algorithms or special purpose hardware devices by optimising groups of variables in a hierarchical way. We apply this approach to problems in combinatorial optimisation, machine learning and other fields.

  20. An imperialist competitive algorithm for virtual machine placement in cloud computing

    NASA Astrophysics Data System (ADS)

    Jamali, Shahram; Malektaji, Sepideh; Analoui, Morteza

    2017-05-01

    Cloud computing, the recently emerged revolution in the IT industry, is empowered by virtualisation technology. In this paradigm, users' applications run on virtual machines (VMs). The process of selecting proper physical machines to host these virtual machines is called virtual machine placement. It plays an important role in the resource utilisation and power efficiency of cloud computing environments. In this paper, we propose an imperialist competitive-based algorithm for the virtual machine placement problem, called ICA-VMPLC. ICA is chosen as the base optimisation algorithm because of its ease of neighbourhood movement, good convergence rate and suitable terminology. The proposed algorithm investigates the search space in a unique manner to efficiently obtain an optimal placement solution that simultaneously minimises power consumption and total resource wastage. Its final solution performance is compared with several existing methods such as grouping genetic and ant colony-based algorithms as well as a bin packing heuristic. The simulation results show that the proposed method is superior to the other tested algorithms in terms of power consumption, resource wastage, CPU usage efficiency and memory usage efficiency.

  1. The multiple roles of computational chemistry in fragment-based drug design

    NASA Astrophysics Data System (ADS)

    Law, Richard; Barker, Oliver; Barker, John J.; Hesterkamp, Thomas; Godemann, Robert; Andersen, Ole; Fryatt, Tara; Courtney, Steve; Hallett, Dave; Whittaker, Mark

    2009-08-01

    Fragment-based drug discovery (FBDD) represents a change in strategy from the screening of molecules with higher molecular weights and physical properties more akin to fully drug-like compounds, to the screening of smaller, less complex molecules. This is because it has been recognised that fragment hit molecules can be efficiently grown and optimised into leads, particularly after the binding mode to the target protein has been first determined by 3D structural elucidation, e.g. by NMR or X-ray crystallography. Several studies have shown that medicinal chemistry optimisation of an already drug-like hit or lead compound can result in a final compound with too high molecular weight and lipophilicity. The evolution of a lower molecular weight fragment hit therefore represents an attractive alternative approach to optimisation as it allows better control of compound properties. Computational chemistry can play an important role both prior to a fragment screen, in producing a target focussed fragment library, and post-screening in the evolution of a drug-like molecule from a fragment hit, both with and without the available fragment-target co-complex structure. We will review many of the current developments in the area and illustrate with some recent examples from successful FBDD discovery projects that we have conducted.

  2. Design of a prototype flow microreactor for synthetic biology in vitro.

    PubMed

    Boehm, Christian R; Freemont, Paul S; Ces, Oscar

    2013-09-07

    As a reference platform for in vitro synthetic biology, we have developed a prototype flow microreactor for enzymatic biosynthesis. We report the design, implementation, and computer-aided optimisation of a three-step model pathway within a microfluidic reactor. A packed bed format was shown to be optimal for enzyme compartmentalisation after experimental evaluation of several approaches. The specific substrate conversion efficiency could be significantly improved by an optimised parameter set obtained through computational modelling. Our microreactor design provides a platform to explore new in vitro synthetic biology solutions for industrial biosynthesis.

  3. Evolving aerodynamic airfoils for wind turbines through a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Hernández, J. J.; Gómez, E.; Grageda, J. I.; Couder, C.; Solís, A.; Hanotel, C. L.; Ledesma, JI

    2017-01-01

    Nowadays, genetic algorithms stand out for airfoil optimisation, due to the virtues of mutation and crossing-over techniques. In this work we propose a genetic algorithm with arithmetic crossover rules. The optimisation criteria are taken to be the maximisation of both aerodynamic efficiency and lift coefficient, while minimising the drag coefficient. Such an algorithm shows great improvements in computational cost, as well as high performance, obtaining airfoils optimised for Mexico City's specific wind conditions from generic wind turbines designed for higher Reynolds numbers, in a few iterations.
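
    A minimal sketch of arithmetic crossover of the kind referred to above: offspring are convex combinations of two real-coded parents. The parameter vectors stand in for an airfoil parametrisation and are not the authors' encoding.

    ```python
    # Arithmetic crossover on real-coded parameter vectors.
    import numpy as np

    rng = np.random.default_rng(1)

    def arithmetic_crossover(parent_a, parent_b):
        alpha = rng.uniform(0.0, 1.0)
        child1 = alpha * parent_a + (1.0 - alpha) * parent_b
        child2 = (1.0 - alpha) * parent_a + alpha * parent_b
        return child1, child2

    p1 = rng.uniform(-1, 1, size=8)   # e.g. camber/thickness control values (hypothetical)
    p2 = rng.uniform(-1, 1, size=8)
    c1, c2 = arithmetic_crossover(p1, p2)
    print(c1, c2)
    ```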

  4. Coil optimisation for transcranial magnetic stimulation in realistic head geometry.

    PubMed

    Koponen, Lari M; Nieminen, Jaakko O; Mutanen, Tuomas P; Stenroos, Matti; Ilmoniemi, Risto J

    Transcranial magnetic stimulation (TMS) allows focal, non-invasive stimulation of the cortex. A TMS pulse is inherently weakly coupled to the cortex; thus, magnetic stimulation requires both high current and high voltage to reach sufficient intensity. These requirements limit, for example, the maximum repetition rate and the maximum number of consecutive pulses with the same coil due to the rise of its temperature. Our objective was to develop methods to optimise, design, and manufacture energy-efficient TMS coils in realistic head geometry with an arbitrary overall coil shape. We derive a semi-analytical integration scheme for computing the magnetic field energy of an arbitrary surface current distribution, compute the electric field induced by this distribution with a boundary element method, and optimise a TMS coil for focal stimulation. Additionally, we introduce a method for manufacturing such a coil by using Litz wire and a coil former machined from polyvinyl chloride. We designed, manufactured, and validated an optimised TMS coil and applied it to brain stimulation. Our simulations indicate that this coil requires less than half the power of a commercial figure-of-eight coil, with a 41% reduction due to the optimised winding geometry and a partial contribution due to our thinner coil former and reduced conductor height. With the optimised coil, the resting motor threshold of abductor pollicis brevis was reached with the capacitor voltage below 600 V and peak current below 3000 A. The described method allows designing practical TMS coils that have considerably higher efficiency than conventional figure-of-eight coils. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. A novel swarm intelligence algorithm for finding DNA motifs.

    PubMed

    Lei, Chengwei; Ruan, Jianhua

    2009-01-01

    Discovering DNA motifs from co-expressed or co-regulated genes is an important step towards deciphering complex gene regulatory networks and understanding gene functions. Despite significant improvement in the last decade, it still remains one of the most challenging problems in computational molecular biology. In this work, we propose a novel motif finding algorithm that finds consensus patterns using a population-based stochastic optimisation technique called Particle Swarm Optimisation (PSO), which has been shown to be effective in optimising difficult multidimensional problems in continuous domains. We propose to use a word dissimilarity graph to remap the neighborhood structure of the solution space of DNA motifs, and propose a modification of the naive PSO algorithm to accommodate discrete variables. In order to improve efficiency, we also propose several strategies for escaping from local optima and for automatically determining the termination criteria. Experimental results on simulated challenge problems show that our method is both more efficient and more accurate than several existing algorithms. Applications to several sets of real promoter sequences also show that our approach is able to detect known transcription factor binding sites, and outperforms two of the most popular existing algorithms.
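
    For background, the sketch below shows the standard continuous PSO update that the motif finder adapts to a discrete word space (via the dissimilarity-graph remapping described above). The sphere objective and coefficient values are illustrative assumptions, not the motif-scoring function.

    ```python
    # Standard PSO: velocities are pulled towards each particle's best position
    # and the swarm's best position; here minimising a toy sphere function.
    import numpy as np

    rng = np.random.default_rng(2)
    f = lambda x: np.sum(x ** 2, axis=-1)              # objective to minimise

    n, dim, w, c1, c2 = 20, 5, 0.7, 1.5, 1.5
    x = rng.uniform(-5, 5, (n, dim)); v = np.zeros((n, dim))
    pbest, pbest_f = x.copy(), f(x)
    gbest = pbest[np.argmin(pbest_f)]

    for _ in range(50):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        fx = f(x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[np.argmin(pbest_f)]

    print("best value found:", pbest_f.min())
    ```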

  6. A new compound arithmetic crossover-based genetic algorithm for constrained optimisation in enterprise systems

    NASA Astrophysics Data System (ADS)

    Jin, Chenxia; Li, Fachao; Tsang, Eric C. C.; Bulysheva, Larissa; Kataev, Mikhail Yu

    2017-01-01

    In many real industrial applications, the integration of raw data with a methodology can support economically sound decision-making. Furthermore, most of these tasks involve complex optimisation problems, and seeking better solutions is critical. As an intelligent search optimisation algorithm, the genetic algorithm (GA) is an important technique for complex system optimisation, but it has internal drawbacks such as low computational efficiency and premature convergence. Improving the performance of GA is a vital topic in academic and applied research. In this paper, a new real-coded crossover operator, called the compound arithmetic crossover operator (CAC), is proposed. CAC is used in conjunction with a uniform mutation operator to define a new genetic algorithm, CAC10-GA. This GA is compared with an existing genetic algorithm (AC10-GA) that comprises an arithmetic crossover operator and a uniform mutation operator. To judge the performance of CAC10-GA, two kinds of analysis are performed: first, the convergence of CAC10-GA is analysed using Markov chain theory; second, a pairwise comparison is carried out between CAC10-GA and AC10-GA on two test problems available in the global optimisation literature. The overall comparative study shows that CAC performs quite well and that CAC10-GA outperforms AC10-GA.

  7. Nonlinear predictive control of a boiler-turbine unit: A state-space approach with successive on-line model linearisation and quadratic optimisation.

    PubMed

    Ławryńczuk, Maciej

    2017-03-01

    This paper details the development of a Model Predictive Control (MPC) algorithm for a boiler-turbine unit, which is a nonlinear multiple-input multiple-output process. The control objective is to follow set-point changes imposed on two state (output) variables and to satisfy constraints imposed on three inputs and one output. In order to obtain a computationally efficient control scheme, the state-space model is successively linearised on-line for the current operating point and used for prediction. In consequence, the future control policy is easily calculated from a quadratic optimisation problem. For state estimation the extended Kalman filter is used. It is demonstrated that the MPC strategy based on constant linear models does not work satisfactorily for the boiler-turbine unit, whereas the discussed algorithm with on-line successive model linearisation gives practically the same trajectories as the truly nonlinear MPC controller with nonlinear optimisation repeated at each sampling instant. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
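
    A minimal sketch of the on-line linearisation step at the heart of such a scheme: the nonlinear state map is linearised at the current operating point by finite differences, and the resulting linear model would feed the quadratic programme for the control moves. The toy two-state model is an assumption, not the boiler-turbine equations.

    ```python
    # Finite-difference linearisation of a nonlinear state map x(k+1) = f(x, u).
    import numpy as np

    def f(x, u):                                   # hypothetical nonlinear plant
        return np.array([x[0] + 0.1 * (-x[0] ** 2 + u[0]),
                         x[1] + 0.1 * (x[0] - x[1] * u[0])])

    def linearise(f, x0, u0, eps=1e-6):
        """Jacobians A = df/dx and B = df/du at the operating point (x0, u0)."""
        nx, nu = len(x0), len(u0)
        A = np.zeros((nx, nx)); B = np.zeros((nx, nu))
        fx0 = f(x0, u0)
        for i in range(nx):
            dx = np.zeros(nx); dx[i] = eps
            A[:, i] = (f(x0 + dx, u0) - fx0) / eps
        for j in range(nu):
            du = np.zeros(nu); du[j] = eps
            B[:, j] = (f(x0, u0 + du) - fx0) / eps
        return fx0, A, B

    x0, u0 = np.array([1.0, 0.5]), np.array([0.8])
    fx0, A, B = linearise(f, x0, u0)
    print(A, B)   # prediction: x(k+1) ~ fx0 + A (x - x0) + B (u - u0)
    ```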

  8. Power law-based local search in spider monkey optimisation for lower order system modelling

    NASA Astrophysics Data System (ADS)

    Sharma, Ajay; Sharma, Harish; Bhargava, Annapurna; Sharma, Nirmala

    2017-01-01

    Nature-inspired algorithms (NIAs) have shown efficiency in solving many complex real-world optimisation problems. The efficiency of NIAs is measured by their ability to find adequate results within a reasonable amount of time, rather than an ability to guarantee the optimal solution. This paper presents a solution for lower order system modelling using the spider monkey optimisation (SMO) algorithm to obtain a better approximation for lower order systems that reflects almost all of the original higher order system's characteristics. Further, a local search strategy, namely power law-based local search, is incorporated with SMO. The proposed strategy is named power law-based local search in SMO (PLSMO). The efficiency, accuracy and reliability of the proposed algorithm are tested over 20 well-known benchmark functions. Then, the PLSMO algorithm is applied to solve the lower order system modelling problem.

  9. Integration of PGD-virtual charts into an engineering design process

    NASA Astrophysics Data System (ADS)

    Courard, Amaury; Néron, David; Ladevèze, Pierre; Ballere, Ludovic

    2016-04-01

    This article deals with the efficient construction of approximations of fields and quantities of interest used in the geometric optimisation of complex shapes that can be encountered in engineering structures. The strategy developed herein is based on the construction of virtual charts that, once computed offline, allow the structure to be optimised at a negligible online CPU cost. These virtual charts can be used as a powerful numerical decision support tool during the design of industrial structures. They are built using the proper generalized decomposition (PGD), which offers a very convenient framework for solving parametrised problems. In this paper, particular attention has been paid to the integration of the procedure into a genuine engineering design process. In particular, a dedicated methodology is proposed to interface the PGD approach with commercial software.

  10. Design and Development of ChemInfoCloud: An Integrated Cloud Enabled Platform for Virtual Screening.

    PubMed

    Karthikeyan, Muthukumarasamy; Pandit, Deepak; Bhavasar, Arvind; Vyas, Renu

    2015-01-01

    The power of cloud computing and distributed computing has been harnessed to handle the vast and heterogeneous data that must be processed in any virtual screening protocol. A cloud computing platform, ChemInfoCloud, was built and integrated with several chemoinformatics and bioinformatics tools. The robust engine performs the core chemoinformatics tasks of lead generation, lead optimisation and property prediction in a fast and efficient manner. It also provides bioinformatics functionalities including sequence alignment, active site pose prediction and protein-ligand docking. Text mining, NMR chemical shift (1H, 13C) prediction and reaction fingerprint generation modules for efficient lead discovery are also implemented in this platform. We have developed an integrated problem-solving cloud environment for virtual screening studies that also provides workflow management, better usability and interaction with end users, using container-based virtualisation (OpenVZ).

  11. Optimisation of sensing time and transmission time in cognitive radio-based smart grid networks

    NASA Astrophysics Data System (ADS)

    Yang, Chao; Fu, Yuli; Yang, Junjie

    2016-07-01

    Cognitive radio (CR)-based smart grid (SG) networks have been widely recognised as emerging communication paradigms in power grids. However, sufficient spectrum resources and reliability are two major challenges for real-time applications in CR-based SG networks. In this article, we study the traffic data collection problem. Based on the two-stage power pricing model, the power price is associated with the effectively received traffic data in a meter data management system (MDMS). In order to minimise the system power price, a wideband hybrid access strategy is proposed and analysed, to share the spectrum between the SG nodes and CR networks. The sensing time and transmission time are jointly optimised, while both the interference to primary users and the spectrum opportunity loss of secondary users are considered. Two algorithms are proposed to solve the joint optimisation problem. Simulation results show that the proposed joint optimisation algorithms outperform the fixed-parameter (sensing time and transmission time) algorithms, and the power cost is reduced effectively.

  12. Comparison of the genetic algorithm and incremental optimisation routines for a Bayesian inverse modelling based network design

    NASA Astrophysics Data System (ADS)

    Nickless, A.; Rayner, P. J.; Erni, B.; Scholes, R. J.

    2018-05-01

    The design of an optimal network of atmospheric monitoring stations for the observation of carbon dioxide (CO2) concentrations can be obtained by applying an optimisation algorithm to a cost function based on minimising the posterior uncertainty in the CO2 fluxes obtained from a Bayesian inverse modelling solution. The two candidate optimisation methods assessed were an evolutionary algorithm, the genetic algorithm (GA), and a deterministic algorithm, the incremental optimisation (IO) routine. This paper assessed the ability of the IO routine, in comparison to the more computationally demanding GA routine, to optimise the placement of a five-member network of CO2 monitoring sites located in South Africa. The comparison considered the reduction in uncertainty of the overall flux estimate, the spatial similarity of solutions, and computational requirements. Although the IO routine failed to find the solution with the global maximum uncertainty reduction, the resulting solution had only fractionally lower uncertainty reduction compared with the GA, and at only a quarter of the computational resources used by the lowest specified GA algorithm. The GA solution set showed more inconsistency if the number of iterations or population size was small, and more so for a complex prior flux covariance matrix. If the GA completed with a sub-optimal solution, these solutions were similar in fitness to the best available solution. Two additional scenarios were considered, with the objective of creating circumstances where the GA may outperform the IO. The first scenario considered an established network, where the optimisation was required to add an additional five stations to an existing five-member network. In the second scenario the optimisation was based only on the uncertainty reduction within a subregion of the domain. The GA was able to find a better solution than the IO under both scenarios, but with only a marginal improvement in the uncertainty reduction. These results suggest that the best use of resources for the network design problem would be spent on improving the prior estimates of the flux uncertainties rather than on running a complex evolutionary optimisation algorithm. The authors recommend that, if time and computational resources allow, multiple optimisation techniques be used as part of a comprehensive suite of sensitivity tests when performing such an optimisation exercise. This will provide a selection of best solutions which can be ranked based on their utility and practicality.
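
    In outline, the IO routine is a greedy loop: at each step it adds the candidate station that most improves the network score. The sketch below illustrates that loop with a placeholder scoring function standing in for the Bayesian posterior-uncertainty-reduction calculation.

    ```python
    # Incremental (greedy) station selection with a stand-in scoring function.
    import numpy as np

    rng = np.random.default_rng(3)
    candidates = list(range(30))                    # candidate station locations
    quality = rng.random(30)                        # hypothetical marginal worth of each site

    def score(network):
        # placeholder for the posterior uncertainty reduction of a network
        return np.sqrt(np.sum(quality[list(network)]))

    network = []
    for _ in range(5):                              # build a five-member network
        best = max((c for c in candidates if c not in network),
                   key=lambda c: score(network + [c]))
        network.append(best)
    print("incrementally selected stations:", network)
    ```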

  13. Modulation aware cluster size optimisation in wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Sriram Naik, M.; Kumar, Vinay

    2017-07-01

    Wireless sensor networks (WSNs) play a great role because of their numerous advantages to mankind. The main challenge with WSNs is energy efficiency. In this paper, we focus on energy minimisation with the help of cluster size optimisation, along with consideration of the modulation effect when the nodes are not able to communicate using a baseband communication technique. Cluster size optimisation is an important technique for improving the performance of WSNs: it provides improvements in energy efficiency, network scalability, network lifetime and latency. We propose an analytical expression for cluster size optimisation using the traditional sensing model of nodes for a square sensing field, with consideration of modulation effects. Energy minimisation can be achieved by changing the modulation scheme (BPSK, QPSK, 16-QAM, 64-QAM, etc.), so we consider the effect of different modulation techniques on cluster formation. The nodes in the sensing field are randomly and uniformly deployed. It is also observed that placing the base station at the centre of the sensing field allows only a small number of modulation schemes to work in an energy-efficient manner, whereas placing the base station at the corner of the sensing field enables a large number of modulation schemes to work in an energy-efficient manner.

  14. Path integrals with higher order actions: Application to realistic chemical systems

    NASA Astrophysics Data System (ADS)

    Lindoy, Lachlan P.; Huang, Gavin S.; Jordan, Meredith J. T.

    2018-02-01

    Quantum thermodynamic parameters can be determined using path integral Monte Carlo (PIMC) simulations. These simulations, however, become computationally demanding as the quantum nature of the system increases, although their efficiency can be improved by using higher order approximations to the thermal density matrix, specifically the action. Here we compare the standard, primitive approximation to the action (PA) and three higher order approximations, the Takahashi-Imada action (TIA), the Suzuki-Chin action (SCA) and the Chin action (CA). The resulting PIMC methods are applied to two realistic potential energy surfaces, for H2O and HCN-HNC, both of which are spectroscopically accurate and contain three-body interactions. We further numerically optimise, for each potential, the SCA parameter and the two free parameters in the CA, obtaining more significant improvements in efficiency than seen previously in the literature. For both H2O and HCN-HNC, accounting for all required potential and force evaluations, the optimised CA formalism is approximately twice as efficient as the TIA formalism and approximately an order of magnitude more efficient than the PA. The optimised SCA formalism shows similar efficiency gains to the CA for HCN-HNC but has similar efficiency to the TIA for H2O at low temperature. In H2O and HCN-HNC systems, the optimal value of the a1 CA parameter is approximately 1/3, corresponding to an equal weighting of all force terms in the thermal density matrix, and similar to previous studies, the optimal α parameter in the SCA was ~0.31. Importantly, poor choice of parameter significantly degrades the performance of the SCA and CA methods. In particular, for the CA, setting a1 = 0 is not efficient: the reduction in convergence efficiency is not offset by the lower number of force evaluations. We also find that the harmonic approximation to the CA parameters, whilst providing a fourth order approximation to the action, is not optimal for these realistic potentials: numerical optimisation leads to better approximate cancellation of the fifth order terms, with deviation between the harmonic and numerically optimised parameters more marked in the more quantum H2O system. This suggests that numerically optimising the CA or SCA parameters, which can be done at high temperature, will be important in fully realising the efficiency gains of these formalisms for realistic potentials.

  15. Zipf's Law of Abbreviation and the Principle of Least Effort: Language users optimise a miniature lexicon for efficient communication.

    PubMed

    Kanwal, Jasmeen; Smith, Kenny; Culbertson, Jennifer; Kirby, Simon

    2017-08-01

    The linguist George Kingsley Zipf made a now classic observation about the relationship between a word's length and its frequency; the more frequent a word is, the shorter it tends to be. He claimed that this "Law of Abbreviation" is a universal structural property of language. The Law of Abbreviation has since been documented in a wide range of human languages, and extended to animal communication systems and even computer programming languages. Zipf hypothesised that this universal design feature arises as a result of individuals optimising form-meaning mappings under competing pressures to communicate accurately but also efficiently-his famous Principle of Least Effort. In this study, we use a miniature artificial language learning paradigm to provide direct experimental evidence for this explanatory hypothesis. We show that language users optimise form-meaning mappings only when pressures for accuracy and efficiency both operate during a communicative task, supporting Zipf's conjecture that the Principle of Least Effort can explain this universal feature of word length distributions. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Floating-to-Fixed-Point Conversion for Digital Signal Processors

    NASA Astrophysics Data System (ADS)

    Menard, Daniel; Chillet, Daniel; Sentieys, Olivier

    2006-12-01

    Digital signal processing applications are specified with floating-point data types but they are usually implemented in embedded systems with fixed-point arithmetic to minimise cost and power consumption. Thus, methodologies which establish automatically the fixed-point specification are required to reduce the application time-to-market. In this paper, a new methodology for the floating-to-fixed point conversion is proposed for software implementations. The aim of our approach is to determine the fixed-point specification which minimises the code execution time for a given accuracy constraint. Compared to previous methodologies, our approach takes into account the DSP architecture to optimise the fixed-point formats and the floating-to-fixed-point conversion process is coupled with the code generation process. The fixed-point data types and the position of the scaling operations are optimised to reduce the code execution time. To evaluate the fixed-point computation accuracy, an analytical approach is used to reduce the optimisation time compared to the existing methods based on simulation. The methodology stages are described and several experiment results are presented to underline the efficiency of this approach.
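
    As a reminder of what the conversion targets, the sketch below quantises data to a signed fixed-point format (word length and fraction length) and reports the resulting quantisation SNR; the format parameters are illustrative, and the sketch does not model the DSP-specific scaling-operation placement discussed in the paper.

    ```python
    # Signed fixed-point quantisation: store round(x * 2^f) in a word of given width.
    import numpy as np

    def to_fixed(x, word_bits=16, frac_bits=12):
        """Quantise to signed fixed point with saturation; returns the real value."""
        scale = 2 ** frac_bits
        q = np.round(np.asarray(x) * scale)
        q = np.clip(q, -2 ** (word_bits - 1), 2 ** (word_bits - 1) - 1)
        return q / scale

    x = np.random.default_rng(4).uniform(-1, 1, 1000)
    err = x - to_fixed(x)
    print("quantisation SNR (dB):", 10 * np.log10(np.mean(x ** 2) / np.mean(err ** 2)))
    ```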

  17. Self-optimisation and model-based design of experiments for developing a C-H activation flow process.

    PubMed

    Echtermeyer, Alexander; Amar, Yehia; Zakrzewski, Jacek; Lapkin, Alexei

    2017-01-01

    A recently described C(sp3)-H activation reaction to synthesise aziridines was used as a model reaction to demonstrate the methodology of developing a process model using model-based design of experiments (MBDoE) and self-optimisation approaches in flow. The two approaches are compared in terms of experimental efficiency. The self-optimisation approach required the fewest experiments to reach the specified objectives of cost and product yield, whereas the MBDoE approach enabled rapid generation of a process model.

  18. Multidisciplinary design optimisation of a recurve bow based on applications of the autogenetic design theory and distributed computing

    NASA Astrophysics Data System (ADS)

    Fritzsche, Matthias; Kittel, Konstantin; Blankenburg, Alexander; Vajna, Sándor

    2012-08-01

    The focus of this paper is to present a method of multidisciplinary design optimisation based on the autogenetic design theory (ADT), whose methods are partially implemented in the optimisation software described here. The main thesis of the ADT is that biological evolution and the process of developing products are largely similar, i.e. procedures from biological evolution can be transferred into product development. In order to fulfil requirements and boundary conditions of any kind (that may change at any time), both biological evolution and product development look for appropriate solution possibilities in a certain area, and try to optimise those that are actually promising by varying parameters and combinations of these solutions. As the time necessary for multidisciplinary design optimisations is a critical aspect in product development, distributing the optimisation process to make effective use of unused computing capacity can reduce the optimisation time drastically. Finally, a practical example shows how ADT methods and distributed optimisation are applied to improve a product.

  19. Improving target coverage and organ-at-risk sparing in intensity-modulated radiotherapy for cervical oesophageal cancer using a simple optimisation method.

    PubMed

    Lu, Jia-Yang; Cheung, Michael Lok-Man; Huang, Bao-Tian; Wu, Li-Li; Xie, Wen-Jia; Chen, Zhi-Jian; Li, De-Rui; Xie, Liang-Xi

    2015-01-01

    Our aim was to assess the performance of a simple optimisation method for improving target coverage and organ-at-risk (OAR) sparing in intensity-modulated radiotherapy (IMRT) for cervical oesophageal cancer. For 20 selected patients, clinically acceptable original IMRT plans (Original plans) were created, and two optimisation methods were adopted to improve the plans: 1) a base dose function (BDF)-based method, in which the treatment plans were re-optimised based on the original plans, and 2) a dose-controlling structure (DCS)-based method, in which the original plans were re-optimised by assigning additional constraints for hot and cold spots. The Original, BDF-based and DCS-based plans were compared with regard to target dose homogeneity, conformity, OAR sparing, planning time and monitor units (MUs). Dosimetric verifications were performed and delivery times were recorded for the BDF-based and DCS-based plans. The BDF-based plans provided significantly superior dose homogeneity and conformity compared with both the DCS-based and Original plans. The BDF-based method further reduced the doses delivered to the OARs by approximately 1-3%. The re-optimisation time was reduced by approximately 28%, but the MUs and delivery time were slightly increased. All verification tests were passed and no significant differences were found. The BDF-based method for the optimisation of IMRT for cervical oesophageal cancer can achieve significantly better dose distributions with better planning efficiency at the expense of slightly more MUs.

  20. Convex relaxations for gas expansion planning

    DOE PAGES

    Borraz-Sanchez, Conrado; Bent, Russell Whitford; Backhaus, Scott N.; ...

    2016-01-01

    Expansion of natural gas networks is a critical process involving substantial capital expenditures with complex decision-support requirements. Here, given the non-convex nature of gas transmission constraints, global optimality and infeasibility guarantees can only be offered by global optimisation approaches. Unfortunately, state-of-the-art global optimisation solvers are unable to scale up to real-world size instances. In this study, we present a convex mixed-integer second-order cone relaxation for the gas expansion planning problem under steady-state conditions. The underlying model offers tight lower bounds with high computational efficiency. In addition, the optimal solution of the relaxation can often be used to derive high-quality solutions to the original problem, leading to provably tight optimality gaps and, in some cases, global optimal solutions. The convex relaxation is based on a few key ideas, including the introduction of flux direction variables, exact McCormick relaxations, on/off constraints, and integer cuts. Numerical experiments are conducted on the traditional Belgian gas network, as well as other real larger networks. The results demonstrate both the accuracy and computational speed of the relaxation and its ability to produce high-quality solutions.
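
    As a reminder of the McCormick idea referred to above (the standard convex envelope of a bilinear term, not the paper's specific gas-flow formulation): for w = xy with x in [x^L, x^U] and y in [y^L, y^U], the McCormick envelope consists of the four linear inequalities

        \begin{aligned}
        w &\ge x^{L}y + y^{L}x - x^{L}y^{L}, &\qquad w &\ge x^{U}y + y^{U}x - x^{U}y^{U},\\
        w &\le x^{U}y + y^{L}x - x^{U}y^{L}, &\qquad w &\le x^{L}y + y^{U}x - x^{L}y^{U}.
        \end{aligned}

    Each non-convex bilinear term is replaced by these linear constraints, which is one ingredient the abstract lists alongside flux direction variables, on/off constraints and integer cuts.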

  1. Principles of Experimental Design for Big Data Analysis.

    PubMed

    Drovandi, Christopher C; Holmes, Christopher; McGree, James M; Mengersen, Kerrie; Richardson, Sylvia; Ryan, Elizabeth G

    2017-08-01

    Big Datasets are endemic, but are often notoriously difficult to analyse because of their size, heterogeneity and quality. The purpose of this paper is to open a discourse on the potential for modern decision theoretic optimal experimental design methods, which by their very nature have traditionally been applied prospectively, to improve the analysis of Big Data through retrospective designed sampling in order to answer particular questions of interest. By appealing to a range of examples, it is suggested that this perspective on Big Data modelling and analysis has the potential for wide generality and advantageous inferential and computational properties. We highlight current hurdles and open research questions surrounding efficient computational optimisation in using retrospective designs, and in part this paper is a call to the optimisation and experimental design communities to work together in the field of Big Data analysis.

  2. Principles of Experimental Design for Big Data Analysis

    PubMed Central

    Drovandi, Christopher C; Holmes, Christopher; McGree, James M; Mengersen, Kerrie; Richardson, Sylvia; Ryan, Elizabeth G

    2016-01-01

    Big Datasets are endemic, but are often notoriously difficult to analyse because of their size, heterogeneity and quality. The purpose of this paper is to open a discourse on the potential for modern decision theoretic optimal experimental design methods, which by their very nature have traditionally been applied prospectively, to improve the analysis of Big Data through retrospective designed sampling in order to answer particular questions of interest. By appealing to a range of examples, it is suggested that this perspective on Big Data modelling and analysis has the potential for wide generality and advantageous inferential and computational properties. We highlight current hurdles and open research questions surrounding efficient computational optimisation in using retrospective designs, and in part this paper is a call to the optimisation and experimental design communities to work together in the field of Big Data analysis. PMID:28883686

  3. Group search optimiser-based optimal bidding strategies with no Karush-Kuhn-Tucker optimality conditions

    NASA Astrophysics Data System (ADS)

    Yadav, Naresh Kumar; Kumar, Mukesh; Gupta, S. K.

    2017-03-01

    The general strategic bidding procedure has been formulated in the literature as a bi-level search problem, in which the offer curve tends to minimise the market clearing function and to maximise the profit. Computationally, this is complex and hence researchers have adopted Karush-Kuhn-Tucker (KKT) optimality conditions to transform the model into a single-level maximisation problem. However, the profit maximisation problem with KKT optimality conditions poses a great challenge to classical optimisation algorithms. The problem has become more complex after the inclusion of transmission constraints. This paper simplifies the profit maximisation problem as a minimisation function, in which the transmission constraints, the operating limits and the ISO market clearing functions are considered with no KKT optimality conditions. The derived function is solved using the group search optimiser (GSO), a robust population-based optimisation algorithm. Experimental investigation is carried out on the IEEE 14-bus as well as the IEEE 30-bus system, and the performance is compared against differential evolution-based, genetic algorithm-based and particle swarm optimisation-based strategic bidding methods. The simulation results demonstrate that the profit obtained through GSO-based bidding strategies is higher than with the other three methods.

  4. Multi-Optimisation Consensus Clustering

    NASA Astrophysics Data System (ADS)

    Li, Jian; Swift, Stephen; Liu, Xiaohui

    Ensemble Clustering has been developed to provide an alternative way of obtaining more stable and accurate clustering results. It aims to avoid the biases of individual clustering algorithms. However, it is still a challenge to develop an efficient and robust method for Ensemble Clustering. Based on an existing ensemble clustering method, Consensus Clustering (CC), this paper introduces an advanced Consensus Clustering algorithm called Multi-Optimisation Consensus Clustering (MOCC), which utilises an optimised Agreement Separation criterion and a Multi-Optimisation framework to improve the performance of CC. Fifteen different data sets are used for evaluating the performance of MOCC. The results reveal that MOCC can generate more accurate clustering results than the original CC algorithm.
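
    As background to the consensus step, the following is a minimal sketch of the co-association (agreement) idea that consensus-style ensemble clustering builds on; the function name, the use of average-linkage clustering on the agreement matrix, and the assumption that base clusterings arrive as equal-length integer label arrays are illustrative choices, not the MOCC algorithm itself.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import squareform

        def consensus_cluster(labelings, n_clusters):
            """Combine several base clusterings through a co-association matrix."""
            labelings = np.asarray(labelings)          # shape: (n_clusterings, n_points)
            n = labelings.shape[1]
            coassoc = np.zeros((n, n))
            for labels in labelings:
                # Agreement: 1 if two points share a cluster in this base clustering.
                coassoc += (labels[:, None] == labels[None, :])
            coassoc /= len(labelings)
            dist = 1.0 - coassoc                       # high agreement -> small distance
            np.fill_diagonal(dist, 0.0)
            Z = linkage(squareform(dist, checks=False), method="average")
            return fcluster(Z, t=n_clusters, criterion="maxclust")

    MOCC additionally optimises an agreement-separation criterion inside a multi-optimisation framework, which this sketch does not attempt to reproduce.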

  5. A framework for the computer-aided planning and optimisation of manufacturing processes for components with functional graded properties

    NASA Astrophysics Data System (ADS)

    Biermann, D.; Gausemeier, J.; Heim, H.-P.; Hess, S.; Petersen, M.; Ries, A.; Wagner, T.

    2014-05-01

    In this contribution a framework for the computer-aided planning and optimisation of functionally graded components is presented. The framework is divided into three modules: the "Component Description", the "Expert System" for the synthesis of several process chains, and the "Modelling and Process Chain Optimisation". The Component Description module enhances a standard computer-aided design (CAD) model with a voxel-based representation of the graded properties. The Expert System synthesises process steps stored in the knowledge base to generate several alternative process chains. Each process chain is capable of producing components according to the enhanced CAD model and usually consists of a sequence of heating, cooling and forming processes. The dependencies between the component and the applied manufacturing processes, as well as between the processes themselves, need to be considered. The Expert System utilises an ontology for that purpose. The ontology represents all dependencies in a structured way and connects the information of the knowledge base via relations. The third module performs the evaluation of the generated process chains. To accomplish this, the parameters of each process are optimised with respect to the component specification, whereby the result of the best parameterisation is used as a representative value. Finally, the process chain that is capable of manufacturing a functionally graded component in an optimal way with regard to the property distributions of the component description is presented by means of a dedicated specification technique.

  6. Optimisation of a parallel ocean general circulation model

    NASA Astrophysics Data System (ADS)

    Beare, M. I.; Stevens, D. P.

    1997-10-01

    This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.

  7. Very high frame rate volumetric integration of depth images on mobile devices.

    PubMed

    Kähler, Olaf; Adrian Prisacariu, Victor; Yuheng Ren, Carl; Sun, Xin; Torr, Philip; Murray, David

    2015-11-01

    Volumetric methods provide efficient, flexible and simple ways of integrating multiple depth images into a full 3D model. They provide dense and photorealistic 3D reconstructions, and parallelised implementations on GPUs achieve real-time performance on modern graphics hardware. Running such methods on mobile devices, providing users with freedom of movement and instantaneous reconstruction feedback, however remains challenging. In this paper we present a range of modifications to existing volumetric integration methods based on voxel block hashing, considerably improving their performance and making them applicable to tablet computer applications. We present (i) optimisations for the basic data structure, and its allocation and integration; (ii) a highly optimised raycasting pipeline; and (iii) extensions to the camera tracker to incorporate IMU data. In total, our system thus achieves frame rates of up to 47 Hz on a Nvidia Shield Tablet and 910 Hz on a Nvidia GTX Titan X GPU, or even beyond 1.1 kHz without visualisation.
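
    For readers unfamiliar with voxel block hashing, the following is a minimal sketch of the underlying sparse data structure; the block size, table size, voxel size and the particular prime-multiplication spatial hash are illustrative assumptions rather than the configuration used in the paper.

        import numpy as np

        BLOCK_SIZE = 8           # voxels per block edge (assumed)
        NUM_BUCKETS = 1 << 20    # hash table size (assumed)
        VOXEL_SIZE = 0.005       # metres per voxel (assumed)

        def block_coords(point):
            """Integer coordinates of the voxel block containing a 3D point."""
            return tuple(np.floor(np.asarray(point) / (VOXEL_SIZE * BLOCK_SIZE)).astype(int))

        def block_hash(bx, by, bz):
            """Spatial hash over block coordinates (prime-multiplication style)."""
            return ((bx * 73856093) ^ (by * 19349669) ^ (bz * 83492791)) % NUM_BUCKETS

        class SparseTSDFVolume:
            """Allocate dense voxel blocks only where depth data has been observed."""
            def __init__(self):
                self.buckets = {}                      # hash value -> {block coords: voxel block}

            def get_or_allocate(self, point):
                coords = block_coords(point)
                bucket = self.buckets.setdefault(block_hash(*coords), {})
                if coords not in bucket:
                    # Each voxel stores a (signed distance, integration weight) pair.
                    bucket[coords] = np.zeros((BLOCK_SIZE,) * 3 + (2,), dtype=np.float32)
                return bucket[coords]

    Because only blocks near the observed surface are ever allocated, memory use and integration cost scale with the surface area rather than with the volume of the scene, which is what makes mobile operation plausible.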

  8. Efficiency improvement of technological preparation of power equipment manufacturing

    NASA Astrophysics Data System (ADS)

    Milukov, I. A.; Rogalev, A. N.; Sokolov, V. P.; Shevchenko, I. V.

    2017-11-01

    Competitiveness of power equipment primarily depends on speeding up the development and mastering of new equipment samples and technologies, and on enhancing the organisation and management of design, manufacturing and operation. Current political, technological and economic conditions create an acute need to change the strategy and tactics of process planning. At the same time, the issues of equipment maintenance, together with improving its efficiency and its compatibility with domestically produced components, have to be considered. In order to solve these problems, using computer-aided process planning systems for process design at all stages of the power equipment life cycle is economically viable. Computer-aided process planning is developed for the purpose of improving process planning by using mathematical methods and optimisation of design and management processes on the basis of CALS technologies, which allows for simultaneous process design, process planning organisation and management based on mathematical and physical modelling of interrelated design objects and the production system. An integration of computer-aided systems providing the interaction of informative and material processes at all stages of the product life cycle is proposed as an effective solution to the challenges in new equipment design and process planning.

  9. A joint swarm intelligence algorithm for multi-user detection in MIMO-OFDM system

    NASA Astrophysics Data System (ADS)

    Hu, Fengye; Du, Dakun; Zhang, Peng; Wang, Zhijun

    2014-11-01

    In the multi-input multi-output orthogonal frequency division multiplexing (MIMO-OFDM) system, traditional multi-user detection (MUD) algorithms, which are usually used to suppress multiple-access interference, find it difficult to balance detection performance against algorithmic complexity. To solve this problem, this paper proposes a joint swarm intelligence algorithm called Ant Colony and Particle Swarm Optimisation (AC-PSO) by integrating the particle swarm optimisation (PSO) and ant colony optimisation (ACO) algorithms. Simulation results show that, with low computational complexity, MUD for the MIMO-OFDM system based on the AC-PSO algorithm achieves MUD performance comparable with the maximum likelihood algorithm. Thus, the proposed AC-PSO algorithm provides a satisfactory trade-off between computational complexity and detection performance.

  10. ATLAS software configuration and build tool optimisation

    NASA Astrophysics Data System (ADS)

    Rybkin, Grigory; Atlas Collaboration

    2014-06-01

    The ATLAS software code base is over 6 million lines organised in about 2000 packages. It makes use of some 100 external software packages, is developed by more than 400 developers and used by more than 2500 physicists from over 200 universities and laboratories on 6 continents. To meet the challenge of configuring and building this software, the Configuration Management Tool (CMT) is used. CMT expects each package to describe its build targets, build and environment setup parameters, and dependencies on other packages in a text file called requirements, and each project (group of packages) to describe its policies and dependencies on other projects in a text project file. Based on the effective set of configuration parameters read from the requirements files of dependent packages and project files, CMT commands build the packages, generate the environment for their use, or query the packages. The main focus was on build time performance, which was optimised through several approaches: reduction of the number of reads of requirements files, which are now read once per package by a CMT build command that generates cached requirements files for subsequent CMT build commands; introduction of more fine-grained build parallelism at the package task level, i.e., dependent applications and libraries are compiled in parallel; code optimisation of the CMT commands used for the build; and introduction of package-level build parallelism, i.e., parallelising the build of independent packages. By default, CMT launches NUMBER-OF-PROCESSORS build commands in parallel. The other focus was on optimisation of CMT commands in general, which made them approximately 2 times faster. CMT can generate a cached requirements file for the environment setup command, which is especially useful for deployment on distributed file systems like AFS or CERN VMFS. The use of parallelism, caching and code optimisation reduced software build time and environment setup time significantly (by several times), increased the efficiency of multi-core computing resource utilisation, and considerably improved the software developer and user experience.

  11. CAMELOT: Computational-Analytical Multi-fidElity Low-thrust Optimisation Toolbox

    NASA Astrophysics Data System (ADS)

    Di Carlo, Marilena; Romero Martin, Juan Manuel; Vasile, Massimiliano

    2018-03-01

    Computational-Analytical Multi-fidElity Low-thrust Optimisation Toolbox (CAMELOT) is a toolbox for the fast preliminary design and optimisation of low-thrust trajectories. It solves highly complex combinatorial problems to plan multi-target missions characterised by long spirals including different perturbations. To do so, CAMELOT implements a novel multi-fidelity approach combining analytical surrogate modelling and accurate computational estimations of the mission cost. Decisions are then made using two optimisation engines included in the toolbox, a single-objective global optimiser, and a combinatorial optimisation algorithm. CAMELOT has been applied to a variety of case studies: from the design of interplanetary trajectories to the optimal de-orbiting of space debris and from the deployment of constellations to on-orbit servicing. In this paper, the main elements of CAMELOT are described and two examples, solved using the toolbox, are presented.

  12. Lap time simulation and design optimisation of a brushed DC electric motorcycle for the Isle of Man TT Zero Challenge

    NASA Astrophysics Data System (ADS)

    Dal Bianco, N.; Lot, R.; Matthys, K.

    2018-01-01

    This work concerns the design of an electric motorcycle for the annual Isle of Man TT Zero Challenge. Optimal control theory was used to perform lap time simulation and design optimisation. A bespoke model was developed, featuring 3D road topology, vehicle dynamics and an electric power train composed of a lithium battery pack, brushed DC motors and a motor controller. The model runs simulations over the entire length of the Snaefell Mountain Course. The work is validated using experimental data from the BX chassis of the Brunel Racing team, which ran during the 2009 to 2015 TT Zero races. Optimal control is used to improve drive train and power train configurations. Findings demonstrate computational efficiency, good lap time prediction and design optimisation potential, achieving a two-minute reduction of the reference lap time through changes in final drive gear ratio, battery pack size and motor configuration.

  13. Efficient computation of turbulent flow in ribbed passages using a non-overlapping near-wall domain decomposition method

    NASA Astrophysics Data System (ADS)

    Jones, Adam; Utyuzhnikov, Sergey

    2017-08-01

    Turbulent flow in a ribbed channel is studied using an efficient near-wall domain decomposition (NDD) method. The NDD approach is formulated by splitting the computational domain into an inner and outer region, with an interface boundary between the two. The computational mesh covers the outer region, and the flow in this region is solved using the open-source CFD code Code_Saturne with special boundary conditions on the interface boundary, called interface boundary conditions (IBCs). The IBCs are of Robin type and incorporate the effect of the inner region on the flow in the outer region. IBCs are formulated in terms of the distance from the interface boundary to the wall in the inner region. It is demonstrated that up to 90% of the region between the ribs in the ribbed passage can be removed from the computational mesh with an error on the friction factor within 2.5%. In addition, computations with NDD are faster than computations based on low Reynolds number (LRN) models by a factor of five. Different rib heights can be studied with the same mesh in the outer region without affecting the accuracy of the friction factor. This is tested with six different rib heights in an example of a design optimisation study. It is found that the friction factors computed with NDD are almost identical to the fully-resolved results. When used for inverse problems, NDD is considerably more efficient than LRN computations because only one computation needs to be performed and only one mesh needs to be generated.
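
    As a point of reference for the "Robin type" interface boundary conditions mentioned above, a Robin condition on the interface boundary Γ has the generic form (a schematic form only; the paper's coefficients are derived from the near-wall solution over the removed inner region and are not reproduced here)

        U\big|_{\Gamma} \;=\; C_{1}\,\frac{\partial U}{\partial n}\bigg|_{\Gamma} \;+\; C_{2},

    i.e. the solution value on Γ is tied to its wall-normal derivative, with C1 and C2 encapsulating the effect of the excluded inner region (and hence depending on the interface-to-wall distance). This is what allows most of the mesh between the ribs to be removed from the outer computation without losing the wall's influence on the friction factor.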

  14. Optimisation on processing parameters for minimising warpage on side arm using response surface methodology (RSM) and particle swarm optimisation (PSO)

    NASA Astrophysics Data System (ADS)

    Rayhana, N.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.; Sazli, M.; Yahya, Z. R.

    2017-09-01

    This study presents the application of an optimisation method to reduce the warpage of a side arm part. Autodesk Moldflow Insight software was integrated into this study to analyse the warpage. A Design of Experiments (DOE) for Response Surface Methodology (RSM) was constructed, and using the equation from RSM, Particle Swarm Optimisation (PSO) was applied. The optimisation yields processing parameters that minimise warpage. Mould temperature, melt temperature, packing pressure, packing time and cooling time were selected as the variable parameters. Parameter selection was based on the most significant factors affecting warpage reported by previous researchers. The results show that warpage was improved by 28.16% for RSM and 28.17% for PSO; the further improvement of PSO over RSM is only 0.01%. Thus, optimisation using RSM is already sufficient to give the best parameter combination and optimum warpage value for the side arm part. The most significant parameter affecting warpage is packing pressure.
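
    A minimal sketch of the RSM-plus-PSO workflow described above follows; the arrays X (one row per DOE run over the five processing parameters) and y (simulated warpage) are assumed to be supplied by the moulding simulations, and the quadratic model form and PSO settings are generic choices rather than those of the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def quadratic_features(X):
            """Full second-order RSM terms: intercept, linear, interaction and squared terms."""
            X = np.atleast_2d(X)
            n, d = X.shape
            cols = [np.ones(n)] + [X[:, i] for i in range(d)]
            cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
            return np.column_stack(cols)

        def fit_rsm(X, y):
            """Least-squares fit of the quadratic response surface (warpage vs. parameters)."""
            beta, *_ = np.linalg.lstsq(quadratic_features(X), y, rcond=None)
            return lambda x: (quadratic_features(x) @ beta).item()

        def pso_minimise(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
            """Plain global-best particle swarm optimisation over box bounds."""
            lo, hi = np.array(bounds, dtype=float).T
            x = rng.uniform(lo, hi, (n_particles, len(lo)))
            v = np.zeros_like(x)
            pbest, pval = x.copy(), np.array([f(p) for p in x])
            gbest = pbest[pval.argmin()].copy()
            for _ in range(iters):
                r1, r2 = rng.random((2, *x.shape))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                val = np.array([f(p) for p in x])
                improved = val < pval
                pbest[improved], pval[improved] = x[improved], val[improved]
                gbest = pbest[pval.argmin()].copy()
            return gbest, float(pval.min())

    With DOE data in hand one would call, for example, surrogate = fit_rsm(X, y) followed by pso_minimise(surrogate, bounds), where bounds holds the allowable ranges of the five processing parameters.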

  15. Metaheuristic optimisation methods for approximate solving of singular boundary value problems

    NASA Astrophysics Data System (ADS)

    Sadollah, Ali; Yadav, Neha; Gao, Kaizhou; Su, Rong

    2017-07-01

    This paper presents a novel approximation technique based on metaheuristics and a weighted residual function (WRF) for tackling singular boundary value problems (BVPs) arising in engineering and science. With the aid of certain fundamental concepts of mathematics, Fourier series expansion and metaheuristic optimisation algorithms, a singular BVP can be recast as an optimisation problem with the boundary conditions as constraints. The target is to minimise the WRF (i.e. the error function) constructed in the approximation of the BVP. The scheme uses the generational distance metric to evaluate the quality of the approximate solutions against exact solutions (i.e. as an error evaluator metric). Four test problems, including two linear and two non-linear singular BVPs, are considered in this paper to check the efficiency and accuracy of the proposed algorithm. The optimisation task is performed using three different optimisers: particle swarm optimisation, the water cycle algorithm and the harmony search algorithm. The optimisation results obtained show that the suggested technique can be successfully applied for the approximate solving of singular BVPs.
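
    The following is a minimal sketch of the weighted-residual idea under stated assumptions: the singular test problem, the polynomial trial functions and the use of SciPy's differential evolution (as a stand-in for the PSO, water cycle and harmony search optimisers used in the paper) are all illustrative choices.

        import numpy as np
        from scipy.optimize import differential_evolution

        # Illustrative singular BVP (an assumption, not one of the paper's test problems):
        #   y''(x) + (2/x) y'(x) = -1,   y'(0) = 0,   y(1) = 0,
        # with exact solution y(x) = (1 - x^2)/6.

        xs = np.linspace(1e-3, 1.0, 200)     # collocation points, avoiding the singularity at x = 0
        K = 4                                # number of trial-function coefficients
        k = np.arange(1, K + 1)[:, None]

        # Trial functions x^(2k) - 1 satisfy y'(0) = 0 and y(1) = 0 by construction,
        # so no boundary-condition penalty is needed in this sketch.
        def y(a, x):   return np.sum(a[:, None] * (x**(2*k) - 1.0), axis=0)
        def dy(a, x):  return np.sum(a[:, None] * 2*k * x**(2*k - 1), axis=0)
        def d2y(a, x): return np.sum(a[:, None] * 2*k*(2*k - 1) * x**(2*k - 2), axis=0)

        def weighted_residual(a):
            """Mean squared residual of the ODE over the collocation points (the error function)."""
            a = np.asarray(a)
            r = d2y(a, xs) + (2.0 / xs) * dy(a, xs) + 1.0
            return float(np.mean(r**2))

        result = differential_evolution(weighted_residual, bounds=[(-2, 2)] * K, seed=1)
        print("coefficients:", result.x)
        print("max abs error:", np.abs(y(result.x, xs) - (1 - xs**2) / 6).max())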

  16. Thermal buckling optimisation of composite plates using firefly algorithm

    NASA Astrophysics Data System (ADS)

    Kamarian, S.; Shakeri, M.; Yas, M. H.

    2017-07-01

    Composite plates play a very important role in engineering applications, especially in the aerospace industry. Thermal buckling of such components is of great importance and must be known to achieve an appropriate design. This paper deals with stacking sequence optimisation of laminated composite plates for maximising the critical buckling temperature using a powerful meta-heuristic called the firefly algorithm (FA), which is based on the flashing behaviour of fireflies. The main objective of the present work was to show the ability of the FA in the optimisation of composite structures. The performance of the FA is compared with results reported in previously published works using other algorithms, which shows the efficiency of the FA in stacking sequence optimisation of laminated composite structures.
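
    For orientation, a sketch of the standard continuous firefly algorithm is given below; it is a generic minimiser on a toy objective, not the discrete stacking-sequence encoding or buckling model used in the paper, and all parameter values are illustrative.

        import numpy as np

        def firefly_minimise(f, bounds, n=25, iters=200, alpha=0.3, beta0=1.0, gamma=1.0, seed=0):
            """Standard firefly algorithm: brighter fireflies (lower objective) attract dimmer ones."""
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds, dtype=float).T
            d = len(lo)
            x = rng.uniform(lo, hi, (n, d))
            val = np.array([f(p) for p in x])
            for _ in range(iters):
                for i in range(n):
                    for j in range(n):
                        if val[j] < val[i]:                      # firefly j is brighter
                            r2 = np.sum((x[i] - x[j])**2)
                            beta = beta0 * np.exp(-gamma * r2)   # attractiveness decays with distance
                            x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(d) - 0.5)
                            x[i] = np.clip(x[i], lo, hi)
                            val[i] = f(x[i])
                alpha *= 0.97                                    # gradually reduce the random walk
            best = val.argmin()
            return x[best], val[best]

        # Example: minimise the sphere function in 5 dimensions.
        xbest, fbest = firefly_minimise(lambda p: float(np.sum(p**2)), [(-5, 5)] * 5)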

  17. Efficient characterisation of large deviations using population dynamics

    NASA Astrophysics Data System (ADS)

    Brewer, Tobias; Clark, Stephen R.; Bradford, Russell; Jack, Robert L.

    2018-05-01

    We consider population dynamics as implemented by the cloning algorithm for analysis of large deviations of time-averaged quantities. We use the simple symmetric exclusion process with periodic boundary conditions as a prototypical example and investigate the convergence of the results with respect to the algorithmic parameters, focussing on the dynamical phase transition between homogeneous and inhomogeneous states, where convergence is relatively difficult to achieve. We discuss how the performance of the algorithm can be optimised, and how it can be efficiently exploited on parallel computing platforms.

  18. Achieving optimal SERS through enhanced experimental design

    PubMed Central

    Fisk, Heidi; Westley, Chloe; Turner, Nicholas J.

    2016-01-01

    One of the current limitations surrounding surface‐enhanced Raman scattering (SERS) is the perceived lack of reproducibility. SERS is indeed challenging, and for analyte detection, it is vital that the analyte interacts with the metal surface. However, as this is analyte dependent, there is not a single set of SERS conditions that are universal. This means that experimental optimisation for optimum SERS response is vital. Most researchers optimise one factor at a time, where a single parameter is altered first before going onto optimise the next. This is a very inefficient way of searching the experimental landscape. In this review, we explore the use of more powerful multivariate approaches to SERS experimental optimisation based on design of experiments and evolutionary computational methods. We particularly focus on colloidal‐based SERS rather than thin film preparations as a result of their popularity. © 2015 The Authors. Journal of Raman Spectroscopy published by John Wiley & Sons, Ltd. PMID:27587905

  19. Achieving optimal SERS through enhanced experimental design.

    PubMed

    Fisk, Heidi; Westley, Chloe; Turner, Nicholas J; Goodacre, Royston

    2016-01-01

    One of the current limitations surrounding surface-enhanced Raman scattering (SERS) is the perceived lack of reproducibility. SERS is indeed challenging, and for analyte detection, it is vital that the analyte interacts with the metal surface. However, as this is analyte dependent, there is not a single set of SERS conditions that are universal. This means that experimental optimisation for optimum SERS response is vital. Most researchers optimise one factor at a time, where a single parameter is altered first before going onto optimise the next. This is a very inefficient way of searching the experimental landscape. In this review, we explore the use of more powerful multivariate approaches to SERS experimental optimisation based on design of experiments and evolutionary computational methods. We particularly focus on colloidal-based SERS rather than thin film preparations as a result of their popularity. © 2015 The Authors. Journal of Raman Spectroscopy published by John Wiley & Sons, Ltd.

  20. Cluster-based adaptive power control protocol using Hidden Markov Model for Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Vinutha, C. B.; Nalini, N.; Nagaraja, M.

    2017-06-01

    This paper presents strategies for an efficient and dynamic transmission power control technique to reduce packet drop, and hence the energy consumption of power-hungry sensor nodes, operated in the highly non-linear channel conditions of Wireless Sensor Networks. Besides, we also aim to prolong network lifetime and scalability by designing a cluster-based network structure. Specifically, we consider a weight-based clustering approach wherein the node with the minimum weight, computed from the factors distance, remaining residual battery power and received signal strength (RSS), is chosen as Cluster Head (CH). Further, transmission power control schemes that adapt to dynamic channel conditions are implemented using a Hidden Markov Model (HMM), whose probability transition matrix is formulated from the observed RSS measurements. Typically, the CH estimates the initial transmission power of its cluster members (CMs) from RSS using the HMM and broadcasts this value to its CMs for initialising their power level. Further, if the CH finds variations in the link quality and RSS of the CMs, it re-computes and optimises the transmission power level of the nodes using the HMM, to avoid packet loss due to noise interference. Our simulation results demonstrate that the technique efficiently controls the power levels of sensing nodes and saves a significant quantity of energy for networks of different sizes.
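
    A minimal sketch of one ingredient of such a scheme, estimating a Markov transition matrix over quantised RSS states from an observed RSS trace, is shown below; the thresholds and function names are illustrative assumptions, and the full protocol (CH election, power broadcast, HMM-based power prediction) is not reproduced.

        import numpy as np

        def rss_transition_matrix(rss_dbm, thresholds):
            """Estimate a Markov transition matrix over quantised RSS states.
            rss_dbm    : 1-D array of consecutive RSS samples for one link
            thresholds : ascending dBm thresholds separating the discrete channel states
            """
            states = np.digitize(rss_dbm, thresholds)       # map each RSS sample to a state index
            n_states = len(thresholds) + 1
            counts = np.zeros((n_states, n_states))
            for s, s_next in zip(states[:-1], states[1:]):
                counts[s, s_next] += 1
            row_sums = counts.sum(axis=1, keepdims=True)
            row_sums[row_sums == 0] = 1.0                   # avoid division by zero for unseen states
            return counts / row_sums

    A CH could feed such a matrix into a standard HMM to predict the next channel state for each CM and pick the lowest transmit power expected to keep packet loss acceptable.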

  1. Optimisation of synergistic biomass-degrading enzyme systems for efficient rice straw hydrolysis using an experimental mixture design.

    PubMed

    Suwannarangsee, Surisa; Bunterngsook, Benjarat; Arnthong, Jantima; Paemanee, Atchara; Thamchaipenet, Arinthip; Eurwilaichitr, Lily; Laosiripojana, Navadol; Champreda, Verawat

    2012-09-01

    A synergistic enzyme system for the hydrolysis of alkali-pretreated rice straw was optimised based on the synergy of crude fungal enzyme extracts with a commercial cellulase (Celluclast™). Among 13 enzyme extracts, the enzyme preparation from Aspergillus aculeatus BCC 199 exhibited the highest level of synergy with Celluclast™. This synergy was based on the complementary cellulolytic and hemicellulolytic activities of the BCC 199 enzyme extract. A mixture design was used to optimise the ternary enzyme complex based on the synergistic enzyme mixture with Bacillus subtilis expansin. Using the full cubic model, the optimal formulation of the enzyme mixture was predicted to be Celluclast™:BCC 199:expansin = 41.4:37.0:21.6 (by percentage), which produced 769 mg reducing sugar/g biomass using 2.82 FPU/g of enzymes. This work demonstrated the use of a systematic approach for the design and optimisation of a synergistic mixture of fungal enzymes and expansin for lignocellulose degradation. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. Optimised analytical models of the dielectric properties of biological tissue.

    PubMed

    Salahuddin, Saqib; Porter, Emily; Krewer, Finn; O' Halloran, Martin

    2017-05-01

    The interaction of electromagnetic fields with the human body is quantified by the dielectric properties of biological tissues. These properties are incorporated into complex numerical simulations using parametric models such as Debye and Cole-Cole, for the computational investigation of electromagnetic wave propagation within the body. These parameters can be acquired through a variety of optimisation algorithms to achieve an accurate fit to measured data sets. A number of different optimisation techniques have been proposed, but these are often limited by the requirement for initial value estimations or by the large overall error (often up to several percentage points). In this work, a novel two-stage genetic algorithm proposed by the authors is applied to optimise the multi-pole Debye parameters for 54 types of human tissues. The performance of the two-stage genetic algorithm has been examined through a comparison with five other existing algorithms. The experimental results demonstrate that the two-stage genetic algorithm produces an accurate fit to a range of experimental data and efficiently out-performs all other optimisation algorithms under consideration. Accurate values of the three-pole Debye models for 54 types of human tissues, over 500 MHz to 20 GHz, are also presented for reference. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
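
    As a sketch of the model being fitted, the multi-pole (here three-pole) Debye form and a least-squares parameter fit are shown below; the parameter bounds are hypothetical, and SciPy's differential evolution is used only as a stand-in for the authors' two-stage genetic algorithm.

        import numpy as np
        from scipy.optimize import differential_evolution

        EPS0 = 8.854187817e-12    # vacuum permittivity, F/m

        def debye3(freq_hz, p):
            """Three-pole Debye model of complex relative permittivity.
            p = (eps_inf, d_eps1, tau1, d_eps2, tau2, d_eps3, tau3, sigma_s).
            Frequencies are assumed strictly positive (e.g. 0.5-20 GHz)."""
            w = 2 * np.pi * freq_hz
            eps_inf, de1, t1, de2, t2, de3, t3, sigma = p
            eps = eps_inf + de1/(1 + 1j*w*t1) + de2/(1 + 1j*w*t2) + de3/(1 + 1j*w*t3)
            return eps + sigma / (1j * w * EPS0)

        def fit_debye3(freq_hz, eps_measured, bounds, seed=0):
            """Fit the eight Debye parameters by minimising the relative complex misfit."""
            def cost(p):
                model = debye3(freq_hz, p)
                return float(np.mean(np.abs((model - eps_measured) / eps_measured)**2))
            result = differential_evolution(cost, bounds, seed=seed, tol=1e-8, maxiter=2000)
            return result.x, result.fun

    Given measured arrays freq_hz and complex eps_measured, fit_debye3(freq_hz, eps_measured, bounds) returns the parameter set minimising the relative misfit over the band of interest.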

  3. Using modified fruit fly optimisation algorithm to perform the function test and case studies

    NASA Astrophysics Data System (ADS)

    Pan, Wen-Tsao

    2013-06-01

    Evolutionary computation is a computing paradigm established by simulating natural evolutionary processes based on the concepts of Darwinian theory, and it is a common research method. The main contribution of this paper was to strengthen the search for the optimal solution in the fruit fly optimization algorithm (FOA), in order to avoid becoming trapped in local extrema. Evolutionary computation has grown to include the concepts of animal foraging behaviour and group behaviour. This study discussed three common evolutionary computation methods and compared them with the modified fruit fly optimization algorithm (MFOA). It further investigated the ability of the algorithms to compute the extreme values of three mathematical functions, as well as the algorithm execution speed and the forecast ability of the forecasting model built using the optimised general regression neural network (GRNN) parameters. The findings indicated that there was no obvious difference between particle swarm optimization and the MFOA in regards to the ability to compute extreme values; however, both were better than the artificial fish swarm algorithm and the FOA. In addition, the MFOA performed better than particle swarm optimization in regards to the algorithm execution speed, and the forecast ability of the forecasting model built using the MFOA's GRNN parameters was better than that of the other three forecasting models.

  4. Efficient and portable acceleration of quantum chemical many-body methods in mixed floating point precision using OpenACC compiler directives

    NASA Astrophysics Data System (ADS)

    Eriksen, Janus J.

    2017-09-01

    It is demonstrated how the non-proprietary OpenACC standard of compiler directives may be used to compactly and efficiently accelerate the rate-determining steps of two of the most routinely applied many-body methods of electronic structure theory, namely the second-order Møller-Plesset (MP2) model in its resolution-of-the-identity approximated form and the (T) triples correction to the coupled cluster singles and doubles model (CCSD(T)). By means of compute directives as well as the use of optimised device math libraries, the operations involved in the energy kernels have been ported to graphics processing unit (GPU) accelerators, and the associated data transfers correspondingly optimised to such a degree that the final implementations (using double and/or single precision arithmetic) are capable of scaling to systems as large as allowed by the capacity of the host central processing unit (CPU) main memory. The performance of the hybrid CPU/GPU implementations is assessed through calculations on test systems of alanine amino acid chains using one-electron basis sets of increasing size (ranging from double- to pentuple-ζ quality). For all but the smallest problem sizes of the present study, the optimised accelerated codes (using a single multi-core CPU host node in conjunction with six GPUs) are found to be capable of reducing the total time-to-solution by at least an order of magnitude over optimised, OpenMP-threaded CPU-only reference implementations.

  5. PGA/MOEAD: a preference-guided evolutionary algorithm for multi-objective decision-making problems with interval-valued fuzzy preferences

    NASA Astrophysics Data System (ADS)

    Luo, Bin; Lin, Lin; Zhong, ShiSheng

    2018-02-01

    In this research, we propose a preference-guided optimisation algorithm for multi-criteria decision-making (MCDM) problems with interval-valued fuzzy preferences. The interval-valued fuzzy preferences are first decomposed into a series of precise and evenly distributed preference vectors (reference directions) over the objectives to be optimised, on the basis of a uniform design strategy. The preference information is then further incorporated into the preference vectors based on the boundary intersection approach; meanwhile, the MCDM problem with interval-valued fuzzy preferences is reformulated into a series of single-objective optimisation sub-problems (each sub-problem corresponding to a decomposed preference vector). Finally, a preference-guided optimisation algorithm based on MOEA/D (multi-objective evolutionary algorithm based on decomposition) is proposed to solve the sub-problems in a single run. The proposed algorithm incorporates the preference vectors within the optimisation process to guide the search towards a more promising subset of the efficient solutions matching the interval-valued fuzzy preferences. Numerous test instances and an engineering application are employed to validate the performance of the proposed algorithm, and the results demonstrate its effectiveness and feasibility.
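
    For reference, the boundary-intersection style scalarisation that MOEA/D-type decomposition commonly uses to turn one preference vector into one single-objective sub-problem can be sketched as follows; the penalty form and the value of theta are generic MOEA/D conventions, not necessarily the exact variant used for the interval-valued fuzzy preferences in the paper.

        import numpy as np

        def pbi(fx, weight, z_star, theta=5.0):
            """Penalty boundary intersection scalarisation for one decomposed sub-problem.
            fx      : objective vector F(x) of a candidate solution
            weight  : preference vector / reference direction of this sub-problem
            z_star  : ideal point (component-wise best objective values found so far)
            """
            fx, w, z = map(np.asarray, (fx, weight, z_star))
            w_norm = w / np.linalg.norm(w)
            d1 = np.dot(fx - z, w_norm)                   # progress along the reference direction
            d2 = np.linalg.norm(fx - (z + d1 * w_norm))   # deviation away from the direction
            return d1 + theta * d2

    Minimising pbi for each preference vector pushes solutions both towards the Pareto front (small d1) and towards the direction expressing that preference (small d2).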

  6. Escalated convergent artificial bee colony

    NASA Astrophysics Data System (ADS)

    Jadon, Shimpi Singh; Bansal, Jagdish Chand; Tiwari, Ritu

    2016-03-01

    The artificial bee colony (ABC) optimisation algorithm is a recent, fast and easy-to-implement population-based meta-heuristic for optimisation. ABC has proved to be a rival to some popular swarm intelligence-based algorithms such as particle swarm optimisation, the firefly algorithm and ant colony optimisation. The solution search equation of ABC is influenced by a random quantity which helps its search process in exploration at the cost of exploitation. In order to obtain faster convergence of ABC while maintaining its exploitation capability, in this paper the basic ABC is modified in two ways. First, to improve exploitation capability, two local search strategies, namely classical unidimensional local search and levy flight random walk-based local search, are incorporated into ABC. Furthermore, a new solution search strategy, namely stochastic diffusion scout search, is proposed and incorporated into the scout bee phase to give abandoned solutions more chance to improve. The efficiency of the proposed algorithm is tested on 20 benchmark test functions of different complexities and characteristics. The results are very promising and prove it to be a competitive algorithm in the field of swarm intelligence-based algorithms.

  7. Identification of the contribution of contact and aerial biomechanical parameters in acrobatic performance

    PubMed Central

    Haering, Diane; Huchez, Aurore; Barbier, Franck; Holvoët, Patrice; Begon, Mickaël

    2017-01-01

    Introduction Teaching acrobatic skills with a minimal amount of repetition is a major challenge for coaches. Biomechanical, statistical or computer simulation tools can help them identify the most determinant factors of performance. Release parameters, change in moment of inertia and segmental momentum transfers were identified in the prediction of acrobatics success. The purpose of the present study was to evaluate the relative contribution of these parameters to performance throughout expertise- or optimisation-based improvements. The counter movement forward in flight (CMFIF) was chosen for its intrinsic dichotomy between the accessibility of its attempt and the complexity of its mastery. Methods Three repetitions of the CMFIF performed by eight novice and eight advanced female gymnasts were recorded using a motion capture system. Optimal aerial techniques that maximise rotation potential at regrasp were also computed. A 14-segment multibody model defined through the Rigid Body Dynamics Library was used to compute recorded and optimal kinematics, and biomechanical parameters. A stepwise multiple linear regression was used to determine the relative contribution of these parameters in novice recorded, novice optimised, advanced recorded and advanced optimised trials. Finally, fixed effects of expertise and optimisation were tested through a mixed-effects analysis. Results and discussion Variation in release state only contributed to performance in novice recorded trials. The contribution of moment of inertia to performance increased from novice recorded, to novice optimised, advanced recorded, and advanced optimised trials. The contribution to performance of momentum transfer to the trunk during the flight prevailed in all recorded trials. Although optimisation decreased the transfer contribution, momentum transfer to the arms appeared. Conclusion Findings suggest that novices should be coached on both contact and aerial technique. Conversely, mainly improved aerial technique helped advanced gymnasts increase their performance. For both, reduction of the moment of inertia should be focused on. The method proposed in this article could be generalized to any aerial skill learning investigation. PMID:28422954

  8. OpenFOAM: Open source CFD in research and industry

    NASA Astrophysics Data System (ADS)

    Jasak, Hrvoje

    2009-12-01

    The current focus of development in industrial Computational Fluid Dynamics (CFD) is the integration of CFD into computer-aided product development, geometrical optimisation, robust design and the like. On the other hand, CFD research aims to extend the boundaries of practical engineering use into "non-traditional" areas. The requirements of computational flexibility and code integration are contradictory: a change of coding paradigm, with object orientation, library components and equation mimicking, is proposed as a way forward. This paper describes OpenFOAM, a C++ object-oriented library for Computational Continuum Mechanics (CCM) developed by the author. Efficient and flexible implementation of complex physical models is achieved by mimicking the form of partial differential equations in software, with code functionality provided in library form. The open-source deployment and development model allows the user to achieve the desired versatility in physical modelling without sacrificing complex geometry support and execution efficiency.

  9. On the analysis of using 3-coil wireless power transfer system in retinal prosthesis.

    PubMed

    Bai, Shun; Skafidas, Stan

    2014-01-01

    Designing a wireless power transmission system (WPTS) using inductive coupling has been investigated extensively in the last decade. Depending on the configuration of the coupling system, various design methods have been proposed to optimise the power transmission efficiency based on the tuning circuitry, quality factor optimisation and geometrical configuration. Recently, a 3-coil WPTS was introduced in retinal prostheses to overcome the low power transfer efficiency caused by a low coupling coefficient. Here we present a method to analyse this 3-coil WPTS using the S-parameters to directly obtain the maximum achievable power transfer efficiency. Through electromagnetic simulation, we raise a question about the conditions under which a 3-coil WPTS improves the powering of retinal prostheses.

  10. Optimising the neutron environment of Radiation Portal Monitors: A computational study

    NASA Astrophysics Data System (ADS)

    Gilbert, Mark R.; Ghani, Zamir; McMillan, John E.; Packer, Lee W.

    2015-09-01

    Efficient and reliable detection of radiological or nuclear threats is a crucial part of national and international efforts to prevent terrorist activities. Radiation Portal Monitors (RPMs), which are deployed worldwide, are intended to interdict smuggled fissile material by detecting emissions of neutrons and gamma rays. However, considering the range and variety of threat sources, vehicular and shielding scenarios, and that only a small signature is present, it is important that the design of the RPMs allows these signatures to be accurately differentiated from the environmental background. Using Monte-Carlo neutron-transport simulations of a model 3He detector system we have conducted a parameter study to identify the optimum combination of detector shielding, moderation, and collimation that maximises the sensitivity of neutron-sensitive RPMs. These structures, which could be simply and cost-effectively added to existing RPMs, can improve the detector response by more than a factor of two relative to an unmodified, bare design. Furthermore, optimisation of the air gap surrounding the helium tubes also improves detector efficiency.

  11. BluePyOpt: Leveraging Open Source Software and Cloud Infrastructure to Optimise Model Parameters in Neuroscience.

    PubMed

    Van Geit, Werner; Gevaert, Michael; Chindemi, Giuseppe; Rössert, Christian; Courcol, Jean-Denis; Muller, Eilif B; Schürmann, Felix; Segev, Idan; Markram, Henry

    2016-01-01

    At many scales in neuroscience, appropriate mathematical models take the form of complex dynamical systems. Parameterizing such models to conform to the multitude of available experimental constraints is a global non-linear optimisation problem with a complex fitness landscape, requiring numerical techniques to find suitable approximate solutions. Stochastic optimisation approaches, such as evolutionary algorithms, have been shown to be effective, but often the setting up of such optimisations and the choice of a specific search algorithm and its parameters is non-trivial, requiring domain-specific expertise. Here we describe BluePyOpt, a Python package targeted at the broad neuroscience community to simplify this task. BluePyOpt is an extensible framework for data-driven model parameter optimisation that wraps and standardizes several existing open-source tools. It simplifies the task of creating and sharing these optimisations, and the associated techniques and knowledge. This is achieved by abstracting the optimisation and evaluation tasks into various reusable and flexible discrete elements according to established best-practices. Further, BluePyOpt provides methods for setting up both small- and large-scale optimisations on a variety of platforms, ranging from laptops to Linux clusters and cloud-based compute infrastructures. The versatility of the BluePyOpt framework is demonstrated by working through three representative neuroscience specific use cases.

  12. Symmetric digit sets for elliptic curve scalar multiplication without precomputation

    PubMed Central

    Heuberger, Clemens; Mazzoli, Michela

    2014-01-01

    We describe a method to perform scalar multiplication on two classes of ordinary elliptic curves, namely E: y^2 = x^3 + Ax in prime characteristic p ≡ 1 (mod 4), and E: y^2 = x^3 + B in prime characteristic p ≡ 1 (mod 3). On these curves, the 4th and 6th roots of unity act as (computationally efficient) endomorphisms. In order to optimise the scalar multiplication, we consider a width-w NAF (non-adjacent form) digit expansion of positive integers to the complex base τ, where τ is a zero of the characteristic polynomial x^2 - tx + p of the Frobenius endomorphism associated to the curve. We provide a precomputation-free algorithm by means of a convenient factorisation of the unit group of residue classes modulo τ in the endomorphism ring, whereby we construct a digit set consisting of powers of subgroup generators, which are chosen as efficient endomorphisms of the curve. PMID:25190900
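
    To illustrate the digit-recoding idea, a sketch of the ordinary width-w NAF of a positive integer follows; the paper's construction works instead over the complex base τ with a specially chosen digit set of curve endomorphisms, which this integer-only version does not reproduce.

        def width_w_naf(n, w=4):
            """Ordinary width-w non-adjacent form of a positive integer n.
            Each nonzero digit d is odd with |d| < 2**(w-1), and any w consecutive
            digits contain at most one nonzero entry."""
            digits = []
            while n > 0:
                if n & 1:
                    d = n % (1 << w)
                    if d >= (1 << (w - 1)):
                        d -= 1 << w          # take the signed residue of smallest magnitude
                    n -= d
                else:
                    d = 0
                digits.append(d)
                n >>= 1
            return digits                    # least significant digit first

        # Check: the digits reconstruct the original integer.
        n = 314159
        assert sum(d << i for i, d in enumerate(width_w_naf(n))) == n

    In scalar multiplication, fewer nonzero digits mean fewer point additions, which is why such recodings, combined with digit sets built from cheap endomorphisms, pay off.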

  13. Optimising the Parallelisation of OpenFOAM Simulations

    DTIC Science & Technology

    2014-06-01

    Keough, Shannon; Maritime Division, Defence Science and Technology Organisation; DSTO-TR-2987. The OpenFOAM computational fluid dynamics toolbox allows parallel computation of ... performance of a given high performance computing cluster with several OpenFOAM cases, running using a combination of MPI libraries and corresponding MPI

  14. A robust algorithm for optimisation and customisation of fractal dimensions of time series modified by nonlinearly scaling their time derivatives: mathematical theory and practical applications.

    PubMed

    Fuss, Franz Konstantin

    2013-01-01

    Standard methods for computing the fractal dimensions of time series are usually tested with continuous nowhere differentiable functions, but not benchmarked with actual signals. Therefore they can produce opposite results in extreme signals. These methods also use different scaling methods, that is, different amplitude multipliers, which makes it difficult to compare fractal dimensions obtained from different methods. The purpose of this research was to develop an optimisation method that computes the fractal dimension of a normalised (dimensionless) and modified time series signal with a robust algorithm and a running average method, and that maximises the difference between two fractal dimensions, for example, a minimum and a maximum one. The signal is modified by transforming its amplitude by a multiplier, which has a non-linear effect on the signal's time derivative. The optimisation method identifies the optimal multiplier of the normalised amplitude for targeted decision making based on fractal dimensions. The optimisation method provides an additional filter effect and makes the fractal dimensions less noisy. The method is exemplified by, and explained with, different signals, such as human movement, EEG, and acoustic signals.

  15. A Robust Algorithm for Optimisation and Customisation of Fractal Dimensions of Time Series Modified by Nonlinearly Scaling Their Time Derivatives: Mathematical Theory and Practical Applications

    PubMed Central

    2013-01-01

    Standard methods for computing the fractal dimensions of time series are usually tested with continuous nowhere differentiable functions, but not benchmarked with actual signals. Therefore they can produce opposite results in extreme signals. These methods also use different scaling methods, that is, different amplitude multipliers, which makes it difficult to compare fractal dimensions obtained from different methods. The purpose of this research was to develop an optimisation method that computes the fractal dimension of a normalised (dimensionless) and modified time series signal with a robust algorithm and a running average method, and that maximises the difference between two fractal dimensions, for example, a minimum and a maximum one. The signal is modified by transforming its amplitude by a multiplier, which has a non-linear effect on the signal's time derivative. The optimisation method identifies the optimal multiplier of the normalised amplitude for targeted decision making based on fractal dimensions. The optimisation method provides an additional filter effect and makes the fractal dimensions less noisy. The method is exemplified by, and explained with, different signals, such as human movement, EEG, and acoustic signals. PMID:24151522

  16. Computational aero-acoustics for fan duct propagation and radiation. Current status and application to turbofan liner optimisation

    NASA Astrophysics Data System (ADS)

    Astley, R. J.; Sugimoto, R.; Mustafi, P.

    2011-08-01

    Novel techniques are presented to reduce noise from turbofan aircraft engines by optimising the acoustic treatment in engine ducts. The application of Computational Aero-Acoustics (CAA) to predict acoustic propagation and absorption in turbofan ducts is reviewed and a critical assessment of performance indicates that validated and accurate techniques are now available for realistic engine predictions. A procedure for integrating CAA methods with state of the art optimisation techniques is proposed in the remainder of the article. This is achieved by embedding advanced computational methods for noise prediction within automated and semi-automated optimisation schemes. Two different strategies are described and applied to realistic nacelle geometries and fan sources to demonstrate the feasibility of this approach for industry scale problems.

  17. A pragmatic method for electronic medical record-based observational studies: developing an electronic medical records retrieval system for clinical research

    PubMed Central

    Yamamoto, Keiichi; Sumi, Eriko; Yamazaki, Toru; Asai, Keita; Yamori, Masashi; Teramukai, Satoshi; Bessho, Kazuhisa; Yokode, Masayuki; Fukushima, Masanori

    2012-01-01

    Objective The use of electronic medical record (EMR) data is necessary to improve clinical research efficiency. However, it is not easy to identify patients who meet research eligibility criteria and collect the necessary information from EMRs because the data collection process must integrate various techniques, including the development of a data warehouse and translation of eligibility criteria into computable criteria. This research aimed to demonstrate an electronic medical records retrieval system (ERS) and an example of a hospital-based cohort study that identified both patients and exposure with an ERS. We also evaluated the feasibility and usefulness of the method. Design The system was developed and evaluated. Participants In total, 800 000 cases of clinical information stored in EMRs at our hospital were used. Primary and secondary outcome measures The feasibility and usefulness of the ERS, the method to convert text from eligible criteria to computable criteria, and a confirmation method to increase research data accuracy. Results To comprehensively and efficiently collect information from patients participating in clinical research, we developed an ERS. To create the ERS database, we designed a multidimensional data model optimised for patient identification. We also devised practical methods to translate narrative eligibility criteria into computable parameters. We applied the system to an actual hospital-based cohort study performed at our hospital and converted the test results into computable criteria. Based on this information, we identified eligible patients and extracted data necessary for confirmation by our investigators and for statistical analyses with our ERS. Conclusions We propose a pragmatic methodology to identify patients from EMRs who meet clinical research eligibility criteria. Our ERS allowed for the efficient collection of information on the eligibility of a given patient, reduced the labour required from the investigators and improved the reliability of the results. PMID:23117567

  18. On the Performance of Alternate Conceptual Ecohydrological Models for Streamflow Prediction

    NASA Astrophysics Data System (ADS)

    Naseem, Bushra; Ajami, Hoori; Cordery, Ian; Sharma, Ashish

    2016-04-01

    A merging of a lumped conceptual hydrological model with two conceptual dynamic vegetation models is presented to assess the performance of these models for simultaneous simulations of streamflow and leaf area index (LAI). Two conceptual dynamic vegetation models with differing representations of ecological processes are merged with a lumped conceptual hydrological model (HYMOD) to predict catchment-scale streamflow and LAI. The merged RR-LAI-I model computes relative leaf biomass based on transpiration rates, while the RR-LAI-II model computes above-ground green and dead biomass based on net primary productivity and water use efficiency in response to soil moisture dynamics. To assess the performance of these models, daily discharge and the 8-day MODIS LAI product for 27 catchments of 90-1600 km2 in size located in the Murray-Darling Basin in Australia are used. Our results illustrate that when single-objective optimisation was focused on maximising the objective function for streamflow or LAI, the other, un-calibrated predicted outcome (LAI if streamflow is the focus) was consistently compromised. Thus, single-objective optimisation cannot take into account the essence of all processes in the conceptual ecohydrological models. However, multi-objective optimisation showed great strength for streamflow and LAI predictions. Both response outputs were better simulated by RR-LAI-II than RR-LAI-I due to the better representation of physical processes such as net primary productivity (NPP) in RR-LAI-II. Our results highlight that simultaneous calibration of streamflow and LAI using a multi-objective algorithm proves to be an attractive tool for improved streamflow predictions.

  19. A domain specific language for performance portable molecular dynamics algorithms

    NASA Astrophysics Data System (ADS)

    Saunders, William Robert; Grant, James; Müller, Eike Hermann

    2018-03-01

    Developers of Molecular Dynamics (MD) codes face significant challenges when adapting existing simulation packages to new hardware. In a continuously diversifying hardware landscape it becomes increasingly difficult for scientists to be experts both in their own domain (physics/chemistry/biology) and specialists in the low level parallelisation and optimisation of their codes. To address this challenge, we describe a "Separation of Concerns" approach for the development of parallel and optimised MD codes: the science specialist writes code at a high abstraction level in a domain specific language (DSL), which is then translated into efficient computer code by a scientific programmer. In a related context, an abstraction for the solution of partial differential equations with grid based methods has recently been implemented in the (Py)OP2 library. Inspired by this approach, we develop a Python code generation system for molecular dynamics simulations on different parallel architectures, including massively parallel distributed memory systems and GPUs. We demonstrate the efficiency of the auto-generated code by studying its performance and scalability on different hardware and compare it to other state-of-the-art simulation packages. With growing data volumes the extraction of physically meaningful information from the simulation becomes increasingly challenging and requires equally efficient implementations. A particular advantage of our approach is the easy expression of such analysis algorithms. We consider two popular methods for deducing the crystalline structure of a material from the local environment of each atom, show how they can be expressed in our abstraction and implement them in the code generation framework.

  20. A shrinking hypersphere PSO for engineering optimisation problems

    NASA Astrophysics Data System (ADS)

    Yadav, Anupam; Deep, Kusum

    2016-03-01

    Many real-world and engineering design problems can be formulated as constrained optimisation problems (COPs), and swarm intelligence techniques are a good approach to solving them. In this paper an efficient shrinking hypersphere-based particle swarm optimisation (SHPSO) algorithm is proposed for constrained optimisation. SHPSO is designed so that particle movement is governed by shrinking hyperspheres, and a parameter-free approach is used to handle the constraints. The performance of SHPSO is compared against state-of-the-art algorithms on a set of 24 benchmark problems, with an exhaustive statistical and graphical comparison of the results. Moreover, three engineering design problems, namely welded beam design, compression spring design and pressure vessel design, are solved using SHPSO and the results are compared with state-of-the-art algorithms.
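
    A minimal sketch of the general idea of particles confined to a hypersphere around the current global best is given below; it is a plain PSO with a linearly shrinking projection radius and bound clipping, not the published SHPSO or its parameter-free constraint handling.

      import numpy as np

      def shpso_sketch(f, bounds, n_particles=30, iters=200, seed=0):
          # Toy PSO: particles straying outside a hypersphere centred on the global
          # best are projected back onto it; the radius shrinks every iteration.
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds, dtype=float).T
          dim = lo.size
          x = rng.uniform(lo, hi, (n_particles, dim))
          v = np.zeros_like(x)
          pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
          g = pbest[pbest_f.argmin()].copy()
          r0 = 0.5 * np.linalg.norm(hi - lo)                 # initial radius
          for t in range(iters):
              r = r0 * (1.0 - t / iters)                     # linearly shrinking radius
              rp, rg = rng.random((2, n_particles, dim))
              v = 0.7 * v + 1.5 * rp * (pbest - x) + 1.5 * rg * (g - x)
              x = np.clip(x + v, lo, hi)
              d = np.linalg.norm(x - g, axis=1)
              outside = d > r
              x[outside] = g + (x[outside] - g) * (r / d[outside, None])
              fx = np.array([f(p) for p in x])
              improved = fx < pbest_f
              pbest[improved], pbest_f[improved] = x[improved], fx[improved]
              g = pbest[pbest_f.argmin()].copy()
          return g, float(pbest_f.min())

    For example, shpso_sketch(lambda p: float((p ** 2).sum()), [(-5, 5)] * 3) converges towards the origin of the sphere function.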

  1. Optimal design and operation of a photovoltaic-electrolyser system using particle swarm optimisation

    NASA Astrophysics Data System (ADS)

    Sayedin, Farid; Maroufmashat, Azadeh; Roshandel, Ramin; Khavas, Sourena Sattari

    2016-07-01

    In this study, hydrogen generation is maximised by optimising the size and the operating conditions of an electrolyser (EL) directly connected to a photovoltaic (PV) module at different irradiance. Due to the variations of maximum power points of the PV module during a year and the complexity of the system, a nonlinear approach is considered. A mathematical model has been developed to determine the performance of the PV/EL system. The optimisation methodology presented here is based on the particle swarm optimisation algorithm. By this method, for the given number of PV modules, the optimal size and operating condition of a PV/EL system are achieved. The approach can be applied for different sizes of PV systems, various ambient temperatures and different locations with various climatic conditions. The results show that for the given location and the PV system, the energy transfer efficiency of PV/EL system can reach up to 97.83%.

  2. BluePyOpt: Leveraging Open Source Software and Cloud Infrastructure to Optimise Model Parameters in Neuroscience

    PubMed Central

    Van Geit, Werner; Gevaert, Michael; Chindemi, Giuseppe; Rössert, Christian; Courcol, Jean-Denis; Muller, Eilif B.; Schürmann, Felix; Segev, Idan; Markram, Henry

    2016-01-01

    At many scales in neuroscience, appropriate mathematical models take the form of complex dynamical systems. Parameterizing such models to conform to the multitude of available experimental constraints is a global non-linear optimisation problem with a complex fitness landscape, requiring numerical techniques to find suitable approximate solutions. Stochastic optimisation approaches, such as evolutionary algorithms, have been shown to be effective, but often the setting up of such optimisations and the choice of a specific search algorithm and its parameters is non-trivial, requiring domain-specific expertise. Here we describe BluePyOpt, a Python package targeted at the broad neuroscience community to simplify this task. BluePyOpt is an extensible framework for data-driven model parameter optimisation that wraps and standardizes several existing open-source tools. It simplifies the task of creating and sharing these optimisations, and the associated techniques and knowledge. This is achieved by abstracting the optimisation and evaluation tasks into various reusable and flexible discrete elements according to established best-practices. Further, BluePyOpt provides methods for setting up both small- and large-scale optimisations on a variety of platforms, ranging from laptops to Linux clusters and cloud-based compute infrastructures. The versatility of the BluePyOpt framework is demonstrated by working through three representative neuroscience specific use cases. PMID:27375471

  3. Thermal modelling and optimisation of total useful energy rate of Joule-Brayton reheat cogeneration cycle

    NASA Astrophysics Data System (ADS)

    Dubey, M.; Chandra, H.; Kumar, Anil

    2016-02-01

    A thermal model for the performance evaluation of a gas turbine cogeneration system with reheat is presented in this paper. The Joule-Brayton reheat cogeneration cycle is optimised on the basis of the total useful energy rate (TUER), and the efficiency at the maximum TUER is determined. The variation of the maximum dimensionless TUER and of the efficiency at maximum TUER with cycle temperature ratio is also analysed. The results show that the dimensionless maximum TUER and the corresponding thermal efficiency decrease as the power-to-heat ratio increases, and that the inclusion of reheat significantly improves the overall performance of the cycle. From a thermodynamic performance point of view, this methodology may be quite useful in the selection and comparison of combined energy production systems.

  4. Evaluating clustering methods within the Artificial Ecosystem Algorithm and their application to bike redistribution in London.

    PubMed

    Adham, Manal T; Bentley, Peter J

    2016-08-01

    This paper proposes and evaluates a solution to the truck redistribution problem prominent in London's Santander Cycle scheme. Due to the complexity of this NP-hard combinatorial optimisation problem, no efficient optimisation techniques are known to solve the problem exactly. This motivates our use of the heuristic Artificial Ecosystem Algorithm (AEA) to find good solutions in a reasonable amount of time. The AEA is designed to take advantage of highly distributed computer architectures and adapt to changing problems. In the AEA a problem is first decomposed into its relative sub-components; they then evolve solution building blocks that fit together to form a single optimal solution. Three variants of the AEA centred on evaluating clustering methods are presented: the baseline AEA, the community-based AEA which groups stations according to journey flows, and the Adaptive AEA which actively modifies clusters to cater for changes in demand. We applied these AEA variants to the redistribution problem prominent in bike share schemes (BSS). The AEA variants are empirically evaluated using historical data from Santander Cycles to validate the proposed approach and prove its potential effectiveness. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  5. A Method for Decentralised Optimisation in Networks

    NASA Astrophysics Data System (ADS)

    Saramäki, Jari

    2005-06-01

    We outline a method for distributed Monte Carlo optimisation of computational problems in networks of agents, such as peer-to-peer networks of computers. The optimisation and messaging procedures are inspired by gossip protocols and epidemic data dissemination, and are decentralised, i.e. no central overseer is required. In the outlined method, each agent follows simple local rules and seeks better solutions to the optimisation problem by Monte Carlo trials, as well as by querying other agents in its local neighbourhood. With a proper network topology, good solutions spread rapidly through the network for further improvement. Furthermore, the system retains its functionality even in realistic settings where agents are randomly switched on and off.
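
    The gossip-style rule described above is easy to sketch: each agent keeps one candidate solution, makes a random Monte Carlo trial, and also copies a randomly chosen neighbour's solution when it is better. The function below is a toy illustration under that reading; the neighbours and init dictionaries are hypothetical.

      import random

      def gossip_optimise(f, neighbours, init, rounds=1000, step=0.1):
          # sols maps each agent to its current candidate solution (a list of floats)
          sols = {a: list(x0) for a, x0 in init.items()}
          for _ in range(rounds):
              for a in sols:
                  # local Monte Carlo trial: keep a random perturbation if it improves f
                  trial = [xi + random.uniform(-step, step) for xi in sols[a]]
                  if f(trial) < f(sols[a]):
                      sols[a] = trial
                  # gossip step: query one random neighbour, adopt its solution if better
                  b = random.choice(neighbours[a])
                  if f(sols[b]) < f(sols[a]):
                      sols[a] = list(sols[b])
          return min(sols.values(), key=f)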

  6. SLA-based optimisation of virtualised resource for multi-tier web applications in cloud data centres

    NASA Astrophysics Data System (ADS)

    Bi, Jing; Yuan, Haitao; Tie, Ming; Tan, Wei

    2015-10-01

    Dynamic virtualised resource allocation is the key to quality-of-service assurance for multi-tier web application services in cloud data centres. In this paper, we develop a self-management architecture of cloud data centres with a virtualisation mechanism for multi-tier web application services. Based on this architecture, we establish a flexible hybrid queueing model to determine the number of virtual machines for each tier of the virtualised application service environment. We then formulate a non-linear constrained optimisation problem with restrictions defined in the service level agreement. Furthermore, we develop a heuristic mixed optimisation algorithm to maximise the profit of cloud infrastructure providers while meeting performance requirements from different clients. Finally, we compare the effectiveness of our dynamic allocation strategy with two other allocation strategies. The simulation results show that the proposed resource allocation method is efficient in improving overall performance and reducing resource energy cost.

  7. Biomass supply chain optimisation for Organosolv-based biorefineries.

    PubMed

    Giarola, Sara; Patel, Mayank; Shah, Nilay

    2014-05-01

    This work aims at providing a Mixed Integer Linear Programming modelling framework to help define planning strategies for the development of sustainable biorefineries. The up-scaling of an Organosolv biorefinery was addressed via optimisation of the whole system economics. Three real world case studies were addressed to show the high-level flexibility and wide applicability of the tool to model different biomass typologies (i.e. forest fellings, cereal residues and energy crops) and supply strategies. Model outcomes have revealed how supply chain optimisation techniques could help shed light on the development of sustainable biorefineries. Feedstock quality, quantity, temporal and geographical availability are crucial to determine biorefinery location and the cost-efficient way to supply the feedstock to the plant. Storage costs are relevant for biorefineries based on cereal stubble, while wood supply chains present dominant pretreatment operations costs. Copyright © 2014 Elsevier Ltd. All rights reserved.
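
    The abstract does not give the model formulation, but the flavour of such a siting-and-supply MILP can be sketched with PuLP as below; the site/farm data structures are hypothetical, and the toy model omits the time periods, storage and pretreatment costs that the paper accounts for.

      from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, value

      def plan(sites, farms, supply, capacity, total_demand, open_cost, ship_cost):
          # Choose which biorefinery sites to open and how much biomass each farm
          # ships to each open site, minimising opening plus transport cost.
          prob = LpProblem("biorefinery_siting", LpMinimize)
          open_ = {s: LpVariable(f"open_{s}", cat=LpBinary) for s in sites}
          x = {(f, s): LpVariable(f"ship_{f}_{s}", lowBound=0) for f in farms for s in sites}
          prob += lpSum(open_cost[s] * open_[s] for s in sites) + \
                  lpSum(ship_cost[f, s] * x[f, s] for f in farms for s in sites)
          for f in farms:          # a farm cannot ship more biomass than it produces
              prob += lpSum(x[f, s] for s in sites) <= supply[f]
          for s in sites:          # biomass can only go to open sites, up to capacity
              prob += lpSum(x[f, s] for f in farms) <= capacity[s] * open_[s]
          # the opened biorefineries together must process the required feedstock
          prob += lpSum(x.values()) >= total_demand
          prob.solve()
          return [s for s in sites if value(open_[s]) > 0.5]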

  8. Optimisation of GaN LEDs and the reduction of efficiency droop using active machine learning

    DOE PAGES

    Rouet-Leduc, Bertrand; Barros, Kipton Marcos; Lookman, Turab; ...

    2016-04-26

    A fundamental challenge in the design of LEDs is to maximise electro-luminescence efficiency at high current densities. We simulate GaN-based LED structures that delay the onset of efficiency droop by spreading carrier concentrations evenly across the active region. Statistical analysis and machine learning effectively guide the selection of the next LED structure to be examined based upon its expected efficiency as well as model uncertainty. This active learning strategy rapidly constructs a model that predicts Poisson-Schrödinger simulations of devices, and that simultaneously produces structures with higher simulated efficiencies.
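
    The abstract describes an active learning loop over simulated device structures. A generic version of such a loop (a Gaussian-process surrogate with an upper-confidence-bound pick, not the authors' specific statistical model) could look like the sketch below, where simulate stands in for the expensive Poisson-Schrödinger solver and candidates is a hypothetical 2-D array of LED design parameters.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor

      def active_search(simulate, candidates, n_init=5, n_iter=20, kappa=2.0, seed=0):
          # Pick the next structure to simulate where predicted efficiency plus
          # kappa times the model uncertainty is largest (explore + exploit).
          rng = np.random.default_rng(seed)
          idx = list(rng.choice(len(candidates), n_init, replace=False))
          y = [simulate(candidates[i]) for i in idx]        # expensive simulations
          for _ in range(n_iter):
              gp = GaussianProcessRegressor(normalize_y=True).fit(candidates[idx], y)
              mu, sigma = gp.predict(candidates, return_std=True)
              nxt = int(np.argmax(mu + kappa * sigma))
              if nxt in idx:                                # best candidate already tried
                  break
              idx.append(nxt)
              y.append(simulate(candidates[nxt]))
          best = int(np.argmax(y))
          return candidates[idx[best]], y[best]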

  9. A covert attention P300-based brain-computer interface: Geospell.

    PubMed

    Aloise, Fabio; Aricò, Pietro; Schettini, Francesca; Riccio, Angela; Salinari, Serenella; Mattia, Donatella; Babiloni, Fabio; Cincotti, Febo

    2012-01-01

    The Farwell and Donchin P300 speller interface is one of the most widely used brain-computer interface (BCI) paradigms for writing text. Recent studies have shown that the recognition accuracy of the P300 speller decreases significantly when eye movement is impaired. This report introduces the GeoSpell interface (Geometric Speller), which implements a stimulation framework for a P300-based BCI that has been optimised for operation under covert visual attention. We compared the GeoSpell interface with the P300 speller interface under overt attention conditions with regard to effectiveness, efficiency and user satisfaction. Ten healthy subjects participated in the study. The performance of the GeoSpell interface in covert attention was comparable with that of the P300 speller in overt attention. As expected, the effectiveness of the spelling decreased with the new interface in covert attention. The NASA task load index (TLX) for workload assessment did not differ significantly between the two modalities. This study introduces and evaluates a gaze-independent, P300-based brain-computer interface, the efficacy and user satisfaction of which were comparable with those of the classical P300 speller. Despite a decrease in effectiveness due to the use of covert attention, the performance of the GeoSpell interface far exceeded the threshold of accuracy required for effective spelling.

  10. Petri-net-based 2D design of DNA walker circuits.

    PubMed

    Gilbert, David; Heiner, Monika; Rohr, Christian

    2018-01-01

    We consider localised DNA computation, where a DNA strand walks along a binary decision graph to compute a binary function. One of the challenges for the design of reliable walker circuits consists in leakage transitions, which occur when a walker jumps into another branch of the decision graph. We automatically identify leakage transitions, which allows for a detailed qualitative and quantitative assessment of circuit designs, design comparison, and design optimisation. The ability to identify leakage transitions is an important step in the process of optimising DNA circuit layouts where the aim is to minimise the computational error inherent in a circuit while minimising the area of the circuit. Our 2D modelling approach of DNA walker circuits relies on coloured stochastic Petri nets which enable functionality, topology and dimensionality all to be integrated in one two-dimensional model. Our modelling and analysis approach can be easily extended to 3-dimensional walker systems.

  11. Efficient methods for enol phosphate synthesis using carbon-centred magnesium bases.

    PubMed

    Kerr, William J; Lindsay, David M; Patel, Vipulkumar K; Rajamanickam, Muralikrishnan

    2015-10-28

    Efficient conversion of ketones into kinetic enol phosphates under mild and accessible conditions has been realised using the developed methods with di-tert-butylmagnesium and bismesitylmagnesium. Optimisation of the quench protocol resulted in high yields of enol phosphates from a range of cyclohexanones and aryl methyl ketones, with tolerance of a range of additional functional units.

  12. An introduction to quantum machine learning

    NASA Astrophysics Data System (ADS)

    Schuld, Maria; Sinayskiy, Ilya; Petruccione, Francesco

    2015-04-01

    Machine learning algorithms learn a desired input-output relation from examples in order to interpret new inputs. This is important for tasks such as image and speech recognition or strategy optimisation, with growing applications in the IT industry. In the last couple of years, researchers investigated if quantum computing can help to improve classical machine learning algorithms. Ideas range from running computationally costly algorithms or their subroutines efficiently on a quantum computer to the translation of stochastic methods into the language of quantum theory. This contribution gives a systematic overview of the emerging field of quantum machine learning. It presents the approaches as well as technical details in an accessible way, and discusses the potential of a future theory of quantum learning.

  13. Two-machine flow shop scheduling integrated with preventive maintenance planning

    NASA Astrophysics Data System (ADS)

    Wang, Shijin; Liu, Ming

    2016-02-01

    This paper investigates an integrated optimisation problem of production scheduling and preventive maintenance (PM) in a two-machine flow shop with time to failure of each machine subject to a Weibull probability distribution. The objective is to find the optimal job sequence and the optimal PM decisions before each job such that the expected makespan is minimised. To investigate the value of integrated scheduling solution, computational experiments on small-scale problems with different configurations are conducted with total enumeration method, and the results are compared with those of scheduling without maintenance but with machine degradation, and individual job scheduling combined with independent PM planning. Then, for large-scale problems, four genetic algorithm (GA) based heuristics are proposed. The numerical results with several large problem sizes and different configurations indicate the potential benefits of integrated scheduling solution and the results also show that proposed GA-based heuristics are efficient for the integrated problem.

  14. A universal preconditioner for simulating condensed phase materials.

    PubMed

    Packwood, David; Kermode, James; Mones, Letif; Bernstein, Noam; Woolley, John; Gould, Nicholas; Ortner, Christoph; Csányi, Gábor

    2016-04-28

    We introduce a universal sparse preconditioner that accelerates geometry optimisation and saddle point search tasks that are common in the atomic scale simulation of materials. Our preconditioner is based on the neighbourhood structure and we demonstrate the gain in computational efficiency in a wide range of materials that include metals, insulators, and molecular solids. The simple structure of the preconditioner means that the gains can be realised in practice not only when using expensive electronic structure models but also for fast empirical potentials. Even for relatively small systems of a few hundred atoms, we observe speedups of a factor of two or more, and the gain grows with system size. An open source Python implementation within the Atomic Simulation Environment is available, offering interfaces to a wide range of atomistic codes.
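
    The open-source implementation mentioned in the abstract can reportedly be used as a drop-in replacement for a standard optimiser; a minimal usage sketch, assuming the interface exposed by recent ASE releases (ase.optimize.precon.PreconLBFGS with an exponential preconditioner) and the built-in EMT potential as the cheap calculator, is:

      from ase.build import bulk
      from ase.calculators.emt import EMT
      from ase.optimize import LBFGS
      from ase.optimize.precon import Exp, PreconLBFGS

      # a few hundred copper atoms, perturbed away from the minimum
      atoms = bulk("Cu", cubic=True) * (4, 4, 4)
      atoms.rattle(stdev=0.05, seed=1)
      atoms.calc = EMT()

      # reference relaxation with plain LBFGS
      ref = atoms.copy()
      ref.calc = EMT()
      LBFGS(ref, logfile=None).run(fmax=0.01)

      # relaxation with the neighbourhood-based exponential preconditioner;
      # the number of force evaluations should drop, increasingly so for larger cells
      PreconLBFGS(atoms, precon=Exp(A=3.0), logfile=None).run(fmax=0.01)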

  15. A universal preconditioner for simulating condensed phase materials

    NASA Astrophysics Data System (ADS)

    Packwood, David; Kermode, James; Mones, Letif; Bernstein, Noam; Woolley, John; Gould, Nicholas; Ortner, Christoph; Csányi, Gábor

    2016-04-01

    We introduce a universal sparse preconditioner that accelerates geometry optimisation and saddle point search tasks that are common in the atomic scale simulation of materials. Our preconditioner is based on the neighbourhood structure and we demonstrate the gain in computational efficiency in a wide range of materials that include metals, insulators, and molecular solids. The simple structure of the preconditioner means that the gains can be realised in practice not only when using expensive electronic structure models but also for fast empirical potentials. Even for relatively small systems of a few hundred atoms, we observe speedups of a factor of two or more, and the gain grows with system size. An open source Python implementation within the Atomic Simulation Environment is available, offering interfaces to a wide range of atomistic codes.

  16. A soft computing-based approach to optimise queuing-inventory control problem

    NASA Astrophysics Data System (ADS)

    Alaghebandha, Mohammad; Hajipour, Vahid

    2015-04-01

    In this paper, a multi-product continuous review inventory control problem within a batch arrival queuing approach (MQr/M/1) is developed to find the optimal maximum inventory quantities. The objective function minimises the sum of ordering, holding and shortage costs under warehouse space, service level and expected lost-sales shortage cost constraints from the retailer and warehouse viewpoints. Since the proposed model is NP-hard, an efficient imperialist competitive algorithm (ICA) is proposed to solve it. To benchmark the proposed ICA, both a genetic algorithm and a simulated annealing algorithm are also applied. In order to determine the algorithm parameter values that yield better solutions, a fine-tuning procedure is executed. Finally, the performance of the proposed ICA is analysed using several numerical illustrations.

  17. Airfoil Shape Optimization based on Surrogate Model

    NASA Astrophysics Data System (ADS)

    Mukesh, R.; Lingadurai, K.; Selvakumar, U.

    2018-02-01

    Engineering design problems typically require an enormous number of real-time experiments and computational simulations in order to assess and ensure that the design objectives are met subject to various constraints. In most cases, the computational resources and time required per simulation are large. In cases such as sensitivity analysis and design optimisation, where thousands or even millions of simulations have to be carried out, this places a prohibitive burden on designers. Nowadays approximation models, also called surrogate models (SMs), are widely employed to reduce the computational resources and time needed to analyse engineering systems. Various approaches such as Kriging, neural networks, polynomials and Gaussian processes are used to construct these approximation models. The primary intention of this work is to employ the k-fold cross-validation approach to study and evaluate the influence of various theoretical variogram models on the accuracy of the surrogate model construction. Ordinary Kriging and design of experiments (DOE) approaches are used to construct the SMs by approximating the panel and viscous solution algorithms that are primarily used to solve the flow around airfoils and aircraft wings. The method of coupling the SMs with a suitable optimisation scheme to carry out an aerodynamic design optimisation process for airfoil shapes is also discussed.
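
    One way to realise the k-fold comparison of variogram (correlation) models described above is sketched below, using scikit-learn's Gaussian-process regressor as a stand-in for ordinary Kriging; X and y are assumed to hold the DOE sample points and the corresponding panel/viscous solver outputs, and the kernel choices are only illustrative.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, Matern
      from sklearn.model_selection import KFold

      def compare_variograms(X, y, kernels=None, folds=5):
          # Smaller cross-validated RMSE suggests a better correlation model.
          kernels = kernels or {"gaussian": RBF(), "matern52": Matern(nu=2.5)}
          scores = {}
          for name, kern in kernels.items():
              errs = []
              for tr, te in KFold(n_splits=folds, shuffle=True, random_state=0).split(X):
                  gp = GaussianProcessRegressor(kernel=kern, normalize_y=True).fit(X[tr], y[tr])
                  errs.append(np.sqrt(np.mean((gp.predict(X[te]) - y[te]) ** 2)))
              scores[name] = float(np.mean(errs))
          return scores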

  18. A chaotic model for advertising diffusion problem with competition

    NASA Astrophysics Data System (ADS)

    Ip, W. H.; Yung, K. L.; Wang, Dingwei

    2012-08-01

    In this article, Dawid and Feichtinger's chaotic advertising diffusion model is extended to the duopoly case, and a computer simulation system is used to test the enhanced model. Based on the analysis of the simulation results, it is found that the best advertising strategy in a duopoly is to increase the advertising investment so as to reach the win-win situation in which oscillation of market share does not occur. In order to reach this situation effectively, a synthetic index and two thresholds are defined, and an estimation method for the parameters of the index and thresholds is proposed. The win-win situation can be reached by simply selecting the control parameters so that the synthetic index is close to the threshold of the minimum-oscillation state. The numerical example and computational results indicate that the proposed chaotic model is useful for describing and analysing the advertising diffusion process in a duopoly and is an efficient tool for the selection and optimisation of advertising strategy.

  19. Design and implementation of a high performance network security processor

    NASA Astrophysics Data System (ADS)

    Wang, Haixin; Bai, Guoqiang; Chen, Hongyi

    2010-03-01

    The last few years have seen significant progress in the field of application-specific processors. One example is network security processors (NSPs), which perform the various cryptographic operations specified by network security protocols and help to offload computation-intensive burdens from network processors (NPs). This article presents a high-performance NSP system architecture intended to accelerate both the internet protocol security (IPSec) and secure socket layer (SSL) protocols, which are widely employed in virtual private network (VPN) and e-commerce applications. The efficient dual one-way pipelined data transfer skeleton and the optimised integration scheme of the heterogeneous parallel crypto engine arrays lead to a Gbps-rate NSP that is programmable with domain-specific descriptor-based instructions. The descriptor-based control flow fragments large data packets and distributes them to the crypto engine arrays, which fully utilises the parallel computation resources and improves the overall system data throughput. A prototyping platform for this NSP design is implemented on a Xilinx XC3S5000-based FPGA chip set. Results show that the design gives a peak throughput for the IPSec ESP tunnel mode of 2.85 Gbps with over 2100 full SSL handshakes per second at a clock rate of 95 MHz.

  20. Optimisation of preparation conditions and properties of phytosterol liposome-encapsulating nattokinase.

    PubMed

    Dong, Xu-Yan; Kong, Fan-Pi; Yuan, Gang-You; Wei, Fang; Jiang, Mu-Lan; Li, Guang-Ming; Wang, Zhan; Zhao, Yuan-Di; Chen, Hong

    2012-01-01

    Phytosterol liposomes were prepared using the thin film method and used to encapsulate nattokinase (NK). In order to obtain a high encapsulation efficiency within the liposome, an orthogonal experiment (L9(3^4)) was applied to optimise the preparation conditions. The molar ratio of lecithin to phytosterols, NK activity and mass ratio of mannite to lecithin were the main factors that influenced the encapsulation efficiency of the liposomes. Based on the results of a single-factor test, these three factors were chosen for this study. We determined the optimum extraction conditions to be as follows: a molar ratio of lecithin to phytosterol of 2 : 1, NK activity of 2500 U mL⁻¹ and a mass ratio of mannite to lecithin of 3 : 1. Under these optimised conditions, an encapsulation efficiency of 65.25% was achieved, which agreed closely with the predicted result. Moreover, the zeta potential, size distribution and microstructure of the liposomes prepared were measured, and we found that the zeta potential was -51 ± 3 mV and the mean diameter was 194.1 nm. From the results of the scanning electron microscopy, we observed that the phytosterol liposomes were round and regular in shape and showed no aggregation.

  1. A method to incorporate the effect of beam quality on image noise in a digitally reconstructed radiograph (DRR) based computer simulation for optimisation of digital radiography

    NASA Astrophysics Data System (ADS)

    Moore, Craig S.; Wood, Tim J.; Saunderson, John R.; Beavis, Andrew W.

    2017-09-01

    The use of computer-simulated digital x-radiographs for optimisation purposes has become widespread in recent years. To make these optimisation investigations effective, it is vital that simulated radiographs contain accurate anatomical and system noise. Computer algorithms that simulate radiographs based solely on the incident detector x-ray intensity (‘dose’) have been reported extensively in the literature. However, while it has been established for digital mammography that x-ray beam quality is an important factor when modelling noise in simulated images, there are no such studies for diagnostic imaging of the chest, abdomen and pelvis. This study investigates the influence of beam quality on image noise in a digital radiography (DR) imaging system, and incorporates these effects into a digitally reconstructed radiograph (DRR) computer simulator. Image noise was measured on a real DR imaging system as a function of dose (absorbed energy) over a range of clinically relevant beam qualities. Simulated ‘absorbed energy’ and ‘beam quality’ DRRs were then created for each patient and tube voltage under investigation. Simulated noise images, corrected for dose and beam quality, were subsequently produced from the absorbed energy and beam quality DRRs using the measured noise, absorbed energy and beam quality relationships. The noise images were superimposed onto the noiseless absorbed energy DRRs to create the final images. Signal-to-noise measurements in simulated chest, abdomen and spine images were within 10% of the corresponding measurements in real images. This compares favourably with our previous algorithm, in which images corrected for dose only were all within 20%.
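
    In outline, the correction amounts to drawing zero-mean noise whose per-pixel standard deviation comes from the measured noise / absorbed-energy / beam-quality relationship and adding it to the noiseless DRR. The snippet below is only a schematic of that step, with noise_model standing in for the fitted relationship; the paper's actual procedure builds separate absorbed-energy and beam-quality DRRs first.

      import numpy as np

      def add_detector_noise(drr_energy, drr_quality, noise_model, rng=None):
          # noise_model(energy, quality) returns the per-pixel noise standard
          # deviation fitted on measurements from the real DR system
          rng = rng or np.random.default_rng()
          sigma = noise_model(drr_energy, drr_quality)
          return drr_energy + rng.normal(0.0, 1.0, drr_energy.shape) * sigma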

  2. Optimisation and validation of a rapid and efficient microemulsion liquid chromatographic (MELC) method for the determination of paracetamol (acetaminophen) content in a suppository formulation.

    PubMed

    McEvoy, Eamon; Donegan, Sheila; Power, Joe; Altria, Kevin

    2007-05-09

    A rapid and efficient oil-in-water microemulsion liquid chromatographic method has been optimised and validated for the analysis of paracetamol in a suppository formulation. Excellent linearity, accuracy, precision and assay results were obtained. Lengthy sample pre-treatment/extraction procedures were eliminated due to the solubilising power of the microemulsion, and rapid analysis times were achieved. The method was optimised to achieve rapid analysis time and relatively high peak efficiencies. A standard microemulsion composition of 33 g SDS, 66 g butan-1-ol, 8 g n-octane in 1 l of 0.05% TFA modified with acetonitrile has been shown to be suitable for the rapid analysis of paracetamol in highly hydrophobic preparations under isocratic conditions. Validated assay results and the overall analysis time of the optimised method were compared to British Pharmacopoeia reference methods. Sample preparation and analysis times for the MELC analysis of paracetamol in a suppository were extremely rapid compared to the reference method, and similar assay results were achieved. A gradient MELC method using the same microemulsion has been optimised for the resolution of paracetamol and five of its related substances in approximately 7 min.

  3. Implementation of the multi-channel monolith reactor in an optimisation procedure for heterogeneous oxidation catalysts based on genetic algorithms.

    PubMed

    Breuer, Christian; Lucas, Martin; Schütze, Frank-Walter; Claus, Peter

    2007-01-01

    A multi-criteria optimisation procedure based on genetic algorithms is carried out in search of advanced heterogeneous catalysts for total oxidation. Simple but flexible software routines have been created to be applied within a search space of more than 150,000 individuals. The general catalyst design includes mono-, bi- and trimetallic compositions assembled from 49 different metals and deposited on an Al2O3 support at up to nine amount levels. As an efficient tool for high-throughput screening, and perfectly matched to the requirements of heterogeneous gas-phase catalysis - especially for applications technically run in honeycomb structures - the multi-channel monolith reactor is implemented to evaluate the catalyst performances. From a multi-component feed gas, the conversion rates of carbon monoxide (CO) and a model hydrocarbon (HC) are monitored in parallel. In combination with further restrictions on preparation and pre-treatment, a primary screening can be conducted that promises to provide results close to technically applied catalysts. Presented are the resulting performances of the optimisation process for the first catalyst generations and the prospect of its auto-adaptation to specified optimisation goals.

  4. The path toward HEP High Performance Computing

    NASA Astrophysics Data System (ADS)

    Apostolakis, John; Brun, René; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-06-01

    High Energy Physics (HEP) code has been known for making poor use of high-performance computing architectures. Efforts to optimise HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves between 10% and 50% of peak performance. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a "High Performance" implementation. With the LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the ROOT and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a high-performance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelising at event level, or with a much larger effort at track level. Apart from the shareable data structures, this typically implies a multiplication factor in memory consumption compared to the single-threaded version, together with sub-optimal handling of event-processing tails. Besides this, the low-level instruction pipelining of modern processors cannot be used efficiently to speed up the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine-grained parallel approach. The talk reviews the current optimisation activities within the SFT group, with a particular emphasis on the development perspectives towards a simulation framework able to profit best from recent technology evolution in computing.

  5. Local pursuit strategy-inspired cooperative trajectory planning algorithm for a class of nonlinear constrained dynamical systems

    NASA Astrophysics Data System (ADS)

    Xu, Yunjun; Remeikas, Charles; Pham, Khanh

    2014-03-01

    Cooperative trajectory planning is crucial for networked vehicles to respond rapidly in cluttered environments and has a significant impact on many applications such as air traffic or border security monitoring and assessment. One of the challenges in cooperative planning is to find a computationally efficient algorithm that can accommodate both the complexity of the environment and real hardware and configuration constraints of vehicles in the formation. Inspired by a local pursuit strategy observed in foraging ants, feasible and optimal trajectory planning algorithms are proposed in this paper for a class of nonlinear constrained cooperative vehicles in environments with densely populated obstacles. In an iterative hierarchical approach, the local behaviours, such as the formation stability, obstacle avoidance, and individual vehicle's constraints, are considered in each vehicle's (i.e. follower's) decentralised optimisation. The cooperative-level behaviours, such as the inter-vehicle collision avoidance, are considered in the virtual leader's centralised optimisation. Early termination conditions are derived to reduce the computational cost by not wasting time in the local-level optimisation if the virtual leader trajectory does not satisfy those conditions. The expected advantages of the proposed algorithms are (1) the formation can be globally asymptotically maintained in a decentralised manner; (2) each vehicle decides its local trajectory using only the virtual leader and its own information; (3) the formation convergence speed is controlled by one single parameter, which makes it attractive for many practical applications; (4) nonlinear dynamics and many realistic constraints, such as the speed limitation and obstacle avoidance, can be easily considered; (5) inter-vehicle collision avoidance can be guaranteed in both the formation transient stage and the formation steady stage; and (6) the computational cost in finding both the feasible and optimal solutions is low. In particular, the feasible solution can be computed in a very quick fashion. The minimum energy trajectory planning for a group of robots in an obstacle-laden environment is simulated to showcase the advantages of the proposed algorithms.

  6. Optimising ICT Effectiveness in Instruction and Learning: Multilevel Transformation Theory and a Pilot Project in Secondary Education

    ERIC Educational Resources Information Center

    Mooij, Ton

    2004-01-01

    Specific combinations of educational and ICT conditions, including computer use, may optimise learning processes, particularly for learners at risk. This position paper asks which curricular, instructional, and ICT characteristics can be expected to optimise learning processes and outcomes, and how best to achieve this optimisation. A theoretical…

  7. On the dynamic rounding-off in analogue and RF optimal circuit sizing

    NASA Astrophysics Data System (ADS)

    Kotti, Mouna; Fakhfakh, Mourad; Fino, Maria Helena

    2014-04-01

    Frequently used approaches to solving discrete multivariable optimisation problems consist of computing solutions using a continuous optimisation technique and then, using heuristics, rounding off the variables to their nearest available discrete values to obtain a discrete solution. Indeed, in many engineering problems, and particularly in analogue circuit design, component values, such as the geometric dimensions of transistors, the number of fingers in an integrated capacitor or the number of turns in an integrated inductor, cannot be chosen arbitrarily since they have to obey technology sizing constraints. However, rounding off the variable values a posteriori can lead to infeasible solutions (solutions located too close to the boundary of the feasible region) or to degradation of the obtained results (expulsion from the neighbourhood of a 'sharp' optimum), depending on how the added perturbation affects the solution. Discrete optimisation techniques, such as the dynamic rounding-off (DRO) technique, are therefore needed to overcome this situation. In this paper, we deal with an improvement of the DRO technique: we propose a particle swarm optimisation (PSO)-based DRO technique and show, via analogue and RF examples, the necessity of implementing such a routine within continuous optimisation algorithms.
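
    The general dynamic rounding-off idea (fix variables to legal discrete values one at a time, re-optimising the remaining free variables after each fixing) can be sketched as follows; the reoptimise callable is a hypothetical stand-in for the continuous search the paper performs with PSO, and the ordering heuristic is only illustrative.

      import numpy as np

      def dynamic_round(cost, x_cont, grids, reoptimise):
          # grids[i] lists the legal discrete values for variable i
          x = np.array(x_cont, dtype=float)
          fixed = []
          order = sorted(range(len(grids)), key=lambda i: len(grids[i]))  # coarsest grid first
          for i in order:
              x[i] = min(grids[i], key=lambda v: abs(v - x[i]))  # snap to nearest legal value
              fixed.append(i)
              x = reoptimise(cost, x, fixed)   # re-optimise the still-continuous variables
          return x, cost(x)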

  8. Advanced treatment planning using direct 4D optimisation for pencil-beam scanned particle therapy

    NASA Astrophysics Data System (ADS)

    Bernatowicz, Kinga; Zhang, Ye; Perrin, Rosalind; Weber, Damien C.; Lomax, Antony J.

    2017-08-01

    We report on development of a new four-dimensional (4D) optimisation approach for scanned proton beams, which incorporates both irregular motion patterns and the delivery dynamics of the treatment machine into the plan optimiser. Furthermore, we assess the effectiveness of this technique to reduce dose to critical structures in proximity to moving targets, while maintaining effective target dose homogeneity and coverage. The proposed approach has been tested using both a simulated phantom and a clinical liver cancer case, and allows for realistic 4D calculations and optimisation using irregular breathing patterns extracted from e.g. 4DCT-MRI (4D computed tomography-magnetic resonance imaging). 4D dose distributions resulting from our 4D optimisation can achieve almost the same quality as static plans, independent of the studied geometry/anatomy or selected motion (regular and irregular). Additionally, current implementation of the 4D optimisation approach requires less than 3 min to find the solution for a single field planned on 4DCT of a liver cancer patient. Although 4D optimisation allows for realistic calculations using irregular breathing patterns, it is very sensitive to variations from the planned motion. Based on a sensitivity analysis, target dose homogeneity comparable to static plans (D5-D95  <5%) has been found only for differences in amplitude of up to 1 mm, for changes in respiratory phase  <200 ms and for changes in the breathing period of  <20 ms in comparison to the motions used during optimisation. As such, methods to robustly deliver 4D optimised plans employing 4D intensity-modulated delivery are discussed.

  9. Efficient processing of multiple nested event pattern queries over multi-dimensional event streams based on a triaxial hierarchical model.

    PubMed

    Xiao, Fuyuan; Aritsugi, Masayoshi; Wang, Qing; Zhang, Rong

    2016-09-01

    For efficient and sophisticated analysis of the complex event patterns that appear in streams of big data from health care information systems, and to support decision-making, a triaxial hierarchical model is proposed in this paper. Our triaxial hierarchical model is developed by focusing on the hierarchies among nested event pattern queries together with an event concept hierarchy, thereby allowing us to identify the relationships among the expressions and sub-expressions of the queries extensively. We devise a cost-based heuristic by means of the triaxial hierarchical model to find an optimised query execution plan in terms of the costs of both the operators and the communications between them. According to the triaxial hierarchical model, we can also calculate how to reuse the results of the common sub-expressions in multiple queries. By integrating the optimised query execution plan with the reuse schemes, a multi-query optimisation strategy is developed to accomplish efficient processing of multiple nested event pattern queries. We present empirical studies in which the performance of the multi-query optimisation strategy was examined under various stream input rates and workloads. Specifically, the workloads of pattern queries can be used to support monitoring of patients' conditions, experiments with varying stream input rates correspond to changes in the number of patients that a system should manage, and burst input rates correspond to rushes of patients to be taken care of. The experimental results show that, in Workload 1, our approach improves throughput by factors of about 4 and 2 over the two related works, respectively; in Workload 2, by factors of about 3 and 2, respectively; and in Workload 3, by a factor of about 6 over the related work. These results demonstrate that our approach can process complex queries efficiently, which can support health information systems and further decision-making. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. A hybrid credibility-based fuzzy multiple objective optimisation to differential pricing and inventory policies with arbitrage consideration

    NASA Astrophysics Data System (ADS)

    Ghasemy Yaghin, R.; Fatemi Ghomi, S. M. T.; Torabi, S. A.

    2015-10-01

    In most markets, price differentiation mechanisms enable manufacturers to offer different prices for their products or services in different customer segments; however, the perfect price discrimination is usually impossible for manufacturers. The importance of accounting for uncertainty in such environments spurs an interest to develop appropriate decision-making tools to deal with uncertain and ill-defined parameters in joint pricing and lot-sizing problems. This paper proposes a hybrid bi-objective credibility-based fuzzy optimisation model including both quantitative and qualitative objectives to cope with these issues. Taking marketing and lot-sizing decisions into account simultaneously, the model aims to maximise the total profit of manufacturer and to improve service aspects of retailing simultaneously to set different prices with arbitrage consideration. After applying appropriate strategies to defuzzify the original model, the resulting non-linear multi-objective crisp model is then solved by a fuzzy goal programming method. An efficient stochastic search procedure using particle swarm optimisation is also proposed to solve the non-linear crisp model.

  11. Multi-phase SPH modelling of violent hydrodynamics on GPUs

    NASA Astrophysics Data System (ADS)

    Mokos, Athanasios; Rogers, Benedict D.; Stansby, Peter K.; Domínguez, José M.

    2015-11-01

    This paper presents the acceleration of multi-phase smoothed particle hydrodynamics (SPH) using a graphics processing unit (GPU) enabling large numbers of particles (10-20 million) to be simulated on just a single GPU card. With novel hardware architectures such as a GPU, the optimum approach to implement a multi-phase scheme presents some new challenges. Many more particles must be included in the calculation and there are very different speeds of sound in each phase with the largest speed of sound determining the time step. This requires efficient computation. To take full advantage of the hardware acceleration provided by a single GPU for a multi-phase simulation, four different algorithms are investigated: conditional statements, binary operators, separate particle lists and an intermediate global function. Runtime results show that the optimum approach needs to employ separate cell and neighbour lists for each phase. The profiler shows that this approach leads to a reduction in both memory transactions and arithmetic operations giving significant runtime gains. The four different algorithms are compared to the efficiency of the optimised single-phase GPU code, DualSPHysics, for 2-D and 3-D simulations which indicate that the multi-phase functionality has a significant computational overhead. A comparison with an optimised CPU code shows a speed up of an order of magnitude over an OpenMP simulation with 8 threads and two orders of magnitude over a single thread simulation. A demonstration of the multi-phase SPH GPU code is provided by a 3-D dam break case impacting an obstacle. This shows better agreement with experimental results than an equivalent single-phase code. The multi-phase GPU code enables a convergence study to be undertaken on a single GPU with a large number of particles that otherwise would have required large high performance computing resources.

  12. Novel Approach on the Optimisation of Mid-Course Corrections Along Interplanetary Trajectories

    NASA Astrophysics Data System (ADS)

    Iorfida, Elisabetta; Palmer, Phil; Roberts, Mark

    The primer vector theory, first proposed by Lawden, defines a set of necessary conditions that characterise whether an impulsive-thrust trajectory is optimal with respect to propellant usage within a two-body problem context. If the conditions are not satisfied, one or more intermediate impulses are added along the transfer arc in order to lower the overall cost. The method is based on the propagation of the state transition matrix and on the solution of a boundary value problem, which leads to significant mathematical and computational complexity. In this paper, a different approach is introduced. It is based on a polar-coordinate transformation of the primer vector that decouples its in-plane and out-of-plane components. The out-of-plane component is solved analytically, while for the in-plane components a Hamiltonian approximation is made. The novel procedure reduces the mathematical complexity and the computational cost of Lawden's problem and also gives a different perspective on the optimisation of a transfer trajectory.
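
    For reference, the textbook form of Lawden's necessary conditions alluded to above can be stated as follows (generic notation, with g(r) the gravitational acceleration; this is the classical statement rather than the authors' reformulated version):

      \ddot{\mathbf p} = \frac{\partial \mathbf g}{\partial \mathbf r}\,\mathbf p,
      \qquad \|\mathbf p(t)\| \le 1 \ \text{on coasting arcs},

    with p and its time derivative continuous everywhere, ||p(t_k)|| = 1 and p(t_k) parallel to the applied impulse at every impulse time t_k, and d||p||/dt = 0 at interior impulses; a violation of the unit bound along the arc signals that an additional mid-course impulse can lower the cost.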

  13. Linear-scaling time-dependent density-functional theory beyond the Tamm-Dancoff approximation: Obtaining efficiency and accuracy with in situ optimised local orbitals.

    PubMed

    Zuehlsdorff, T J; Hine, N D M; Payne, M C; Haynes, P D

    2015-11-28

    We present a solution of the full time-dependent density-functional theory (TDDFT) eigenvalue equation in the linear response formalism exhibiting a linear-scaling computational complexity with system size, without relying on the simplifying Tamm-Dancoff approximation (TDA). The implementation relies on representing the occupied and unoccupied subspaces with two different sets of in situ optimised localised functions, yielding a very compact and efficient representation of the transition density matrix of the excitation with the accuracy associated with a systematic basis set. The TDDFT eigenvalue equation is solved using a preconditioned conjugate gradient algorithm that is very memory-efficient. The algorithm is validated on a small test molecule and a good agreement with results obtained from standard quantum chemistry packages is found, with the preconditioner yielding a significant improvement in convergence rates. The method developed in this work is then used to reproduce experimental results of the absorption spectrum of bacteriochlorophyll in an organic solvent, where it is demonstrated that the TDA fails to reproduce the main features of the low energy spectrum, while the full TDDFT equation yields results in good qualitative agreement with experimental data. Furthermore, the need for explicitly including parts of the solvent into the TDDFT calculations is highlighted, making the treatment of large system sizes necessary that are well within reach of the capabilities of the algorithm introduced here. Finally, the linear-scaling properties of the algorithm are demonstrated by computing the lowest excitation energy of bacteriochlorophyll in solution. The largest systems considered in this work are of the same order of magnitude as a variety of widely studied pigment-protein complexes, opening up the possibility of studying their properties without having to resort to any semiclassical approximations to parts of the protein environment.

  14. Using the Person-Based Approach to optimise a digital intervention for the management of hypertension

    PubMed Central

    Morton, Katherine; Band, Rebecca; van Woezik, Anne; Grist, Rebecca; McManus, Richard J.; Little, Paul; Yardley, Lucy

    2018-01-01

    Background For behaviour-change interventions to be successful they must be acceptable to users and overcome barriers to behaviour change. The Person-Based Approach can help to optimise interventions to maximise acceptability and engagement. This article presents a novel, efficient and systematic method that can be used as part of the Person-Based Approach to rapidly analyse data from development studies to inform intervention modifications. We describe how we used this approach to optimise a digital intervention for patients with hypertension (HOME BP), which aims to implement medication and lifestyle changes to optimise blood pressure control. Methods In study 1, hypertensive patients (N = 12) each participated in three think-aloud interviews, providing feedback on a prototype of HOME BP. In study 2 patients (N = 11) used HOME BP for three weeks and were then interviewed about their experiences. Studies 1 and 2 were used to identify detailed changes to the intervention content and potential barriers to engagement with HOME BP. In study 3 (N = 7) we interviewed hypertensive patients who were not interested in using an intervention like HOME BP to identify potential barriers to uptake, which informed modifications to our recruitment materials. Analysis in all three studies involved detailed tabulation of patient data and comparison to our modification criteria. Results Studies 1 and 2 indicated that the HOME BP procedures were generally viewed as acceptable and feasible, but also highlighted concerns about monitoring blood pressure correctly at home and making medication changes remotely. Patients in study 3 had additional concerns about the safety and security of the intervention. Modifications improved the acceptability of the intervention and recruitment materials. Conclusions This paper provides a detailed illustration of how to use the Person-Based Approach to refine a digital intervention for hypertension. The novel, efficient approach to analysis and criteria for deciding when to implement intervention modifications described here may be useful to others developing interventions. PMID:29723262

  15. Honeybee economics: optimisation of foraging in a variable world.

    PubMed

    Stabentheiner, Anton; Kovac, Helmut

    2016-06-20

    In honeybees, fast and efficient exploitation of nectar and pollen sources is achieved by persistent endothermy throughout the foraging cycle, which entails extremely high energy costs. The need for food promotes maximisation of the intake rate, and the high costs call for energetic optimisation. Experiments on how honeybees resolve this conflict have to consider that foraging takes place in a variable environment with respect to microclimate and food quality and availability. Here we report, from simultaneous measurements of energy costs, gains, intake rate and efficiency, how honeybee foragers manage this challenge in their highly variable environment. If possible, during unlimited sucrose flow, they follow an 'investment-guided' ('time is honey') economic strategy promising increased returns. They maximise net intake rate by investing both their own heat production and solar heat to raise body temperature to a level that guarantees a high suction velocity. They switch to an 'economising' ('save the honey') optimisation of energetic efficiency if the intake rate is restricted by the food source, when an increased body temperature would not guarantee a high intake rate. With this flexible and graded change between economic strategies, honeybees can both maximise colony intake rate and optimise foraging efficiency in reaction to environmental variation.

  16. Surface similarity-based molecular query-retrieval

    PubMed Central

    Singh, Rahul

    2007-01-01

    Background Discerning the similarity between molecules is a challenging problem in drug discovery as well as in molecular biology. The importance of this problem is due to the fact that the biochemical characteristics of a molecule are closely related to its structure. Therefore molecular similarity is a key notion in investigations targeting exploration of molecular structural space, query-retrieval in molecular databases, and structure-activity modelling. Determining molecular similarity is related to the choice of molecular representation. Currently, representations with high descriptive power and physical relevance like 3D surface-based descriptors are available. Information from such representations is both surface-based and volumetric. However, most techniques for determining molecular similarity tend to focus on idealized 2D graph-based descriptors due to the complexity that accompanies reasoning with more elaborate representations. Results This paper addresses the problem of determining similarity when molecules are described using complex surface-based representations. It proposes an intrinsic, spherical representation that systematically maps points on a molecular surface to points on a standard coordinate system (a sphere). Molecular surface properties such as shape, field strengths, and effects due to field super-positioning can then be captured as distributions on the surface of the sphere. Surface-based molecular similarity is subsequently determined by computing the similarity of the surface-property distributions using a novel formulation of histogram-intersection. The similarity formulation is not only sensitive to the 3D distribution of the surface properties, but is also highly efficient to compute. Conclusion The proposed method obviates the computationally expensive step of molecular pose-optimisation, can incorporate conformational variations, and facilitates highly efficient determination of similarity by directly comparing molecular surfaces and surface-based properties. Retrieval performance, applications in structure-activity modelling of complex biological properties, and comparisons with existing research and commercial methods demonstrate the validity and effectiveness of the approach. PMID:17634096
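
    The classical histogram intersection on which the similarity measure builds is simple to compute once both surface-property distributions have been mapped to the common spherical grid; the sketch below shows that baseline form only, not the paper's novel formulation.

      import numpy as np

      def surface_similarity(hist_a, hist_b):
          # Histogram intersection of two normalised surface-property distributions:
          # 1 means identical distributions, 0 means completely disjoint ones.
          a = np.asarray(hist_a, dtype=float)
          b = np.asarray(hist_b, dtype=float)
          a, b = a / a.sum(), b / b.sum()
          return float(np.minimum(a, b).sum())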

  17. A universal preconditioner for simulating condensed phase materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Packwood, David; Ortner, Christoph, E-mail: c.ortner@warwick.ac.uk; Kermode, James, E-mail: j.r.kermode@warwick.ac.uk

    2016-04-28

    We introduce a universal sparse preconditioner that accelerates geometry optimisation and saddle point search tasks that are common in the atomic scale simulation of materials. Our preconditioner is based on the neighbourhood structure and we demonstrate the gain in computational efficiency in a wide range of materials that include metals, insulators, and molecular solids. The simple structure of the preconditioner means that the gains can be realised in practice not only when using expensive electronic structure models but also for fast empirical potentials. Even for relatively small systems of a few hundred atoms, we observe speedups of a factor of two or more, and the gain grows with system size. An open source Python implementation within the Atomic Simulation Environment is available, offering interfaces to a wide range of atomistic codes.

  18. The optimisation, design and verification of feed horn structures for future Cosmic Microwave Background missions

    NASA Astrophysics Data System (ADS)

    McCarthy, Darragh; Trappe, Neil; Murphy, J. Anthony; O'Sullivan, Créidhe; Gradziel, Marcin; Doherty, Stephen; Huggard, Peter G.; Polegro, Arturo; van der Vorst, Maarten

    2016-05-01

    In order to investigate the origins of the Universe, it is necessary to carry out full sky surveys of the temperature and polarisation of the Cosmic Microwave Background (CMB) radiation, the remnant of the Big Bang. Missions such as COBE and Planck have previously mapped the CMB temperature, however in order to further constrain evolutionary and inflationary models, it is necessary to measure the polarisation of the CMB with greater accuracy and sensitivity than before. Missions undertaking such observations require large arrays of feed horn antennas to feed the detector arrays. Corrugated horns provide the best performance, however owing to the large number required (circa 5000 in the case of the proposed COrE+ mission), such horns are prohibitive in terms of thermal, mechanical and cost limitations. In this paper we consider the optimisation of an alternative smooth-walled piecewise conical profiled horn, using the mode-matching technique alongside a genetic algorithm. The technique is optimised to return a suitable design using efficient modelling software and standard desktop computing power. A design is presented showing a directional beam pattern and low levels of return loss, cross-polar power and sidelobes, as required by future CMB missions. This design is manufactured and the measured results compared with simulation, showing excellent agreement and meeting the required performance criteria. The optimisation process described here is robust and can be applied to many other applications where specific performance characteristics are required, with the user simply defining the beam requirements.

  19. Automated model optimisation using the Cylc workflow engine (Cyclops v1.0)

    NASA Astrophysics Data System (ADS)

    Gorman, Richard M.; Oliver, Hilary J.

    2018-06-01

    Most geophysical models include many parameters that are not fully determined by theory, and can be tuned to improve the model's agreement with available data. We might attempt to automate this tuning process in an objective way by employing an optimisation algorithm to find the set of parameters that minimises a cost function derived from comparing model outputs with measurements. A number of algorithms are available for solving optimisation problems, in various programming languages, but interfacing such software to a complex geophysical model simulation presents certain challenges. To tackle this problem, we have developed an optimisation suite (Cyclops) based on the Cylc workflow engine that implements a wide selection of optimisation algorithms from the NLopt Python toolbox (Johnson, 2014). The Cyclops optimisation suite can be used to calibrate any modelling system that has itself been implemented as a (separate) Cylc model suite, provided it includes computation and output of the desired scalar cost function. A growing number of institutions are using Cylc to orchestrate complex distributed suites of interdependent cycling tasks within their operational forecast systems, and in such cases application of the optimisation suite is particularly straightforward. As a test case, we applied Cyclops to calibrate a global implementation of the WAVEWATCH III (v4.18) third-generation spectral wave model, forced by ERA-Interim input fields. The model was calibrated over a 1-year period (1997) before being applied to a full (1979-2016) wave hindcast. The chosen error metric was the spatial average of the root mean square error of hindcast significant wave height compared with collocated altimeter records. We describe the results of a calibration in which up to 19 parameters were optimised.
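    Because the suite described above wraps algorithms from the NLopt Python toolbox around an external model run, a bare-bones sketch of that pattern follows. It is not the Cyclops suite itself: `run_wave_model_rmse` is a hypothetical stand-in for launching a full model suite and returning the scalar cost, and the bounds and tolerances are illustrative.

```python
import nlopt
import numpy as np

def run_wave_model_rmse(params):
    # placeholder for a full model run returning the RMS error of hindcast
    # significant wave height against collocated altimeter records
    target = np.array([1.2, 0.8])
    return float(np.sum((np.asarray(params) - target) ** 2))

def objective(x, grad):
    # NLopt always passes a gradient array, even to gradient-free algorithms
    return run_wave_model_rmse(x)

opt = nlopt.opt(nlopt.LN_BOBYQA, 2)     # derivative-free local algorithm
opt.set_lower_bounds([0.5, 0.2])
opt.set_upper_bounds([2.0, 1.5])
opt.set_min_objective(objective)
opt.set_xtol_rel(1e-3)
best = opt.optimize([1.0, 1.0])
print(best, opt.last_optimum_value())
```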

  20. Exploiting Genomic Knowledge in Optimising Molecular Breeding Programmes: Algorithms from Evolutionary Computing

    PubMed Central

    O'Hagan, Steve; Knowles, Joshua; Kell, Douglas B.

    2012-01-01

    Comparatively few studies have addressed directly the question of quantifying the benefits to be had from using molecular genetic markers in experimental breeding programmes (e.g. for improved crops and livestock), nor the question of which organisms should be mated with each other to best effect. We argue that this requires in silico modelling, an approach for which there is a large literature in the field of evolutionary computation (EC), but which has not really been applied in this way to experimental breeding programmes. EC seeks to optimise measurable outcomes (phenotypic fitnesses) by optimising in silico the mutation, recombination and selection regimes that are used. We review some of the approaches from EC, and compare experimentally, using a biologically relevant in silico landscape, some algorithms that have knowledge of where they are in the (genotypic) search space (G-algorithms) with some (albeit well-tuned ones) that do not (F-algorithms). For the present kinds of landscapes, F- and G-algorithms were broadly comparable in quality and effectiveness, although we recognise that the G-algorithms were not equipped with any ‘prior knowledge’ of epistatic pathway interactions. This use of algorithms based on machine learning has important implications for the optimisation of experimental breeding programmes in the post-genomic era when we shall potentially have access to the full genome sequence of every organism in a breeding population. The non-proprietary code that we have used is made freely available (via Supplementary information). PMID:23185279

  1. Intelligent inversion method for pre-stack seismic big data based on MapReduce

    NASA Astrophysics Data System (ADS)

    Yan, Xuesong; Zhu, Zhixin; Wu, Qinghua

    2018-01-01

    Seismic exploration is a method of oil exploration that uses seismic information: by inverting the seismic data, useful information about the reservoir parameters can be obtained to guide exploration effectively. Pre-stack data are characterised by their large volume and rich information content, and their inversion yields detailed estimates of the reservoir parameters. Owing to this data volume, existing single-machine environments can no longer meet the computational demands; an efficient and fast method for solving the pre-stack seismic inversion problem is therefore urgently needed. Optimising the elastic parameters with a standard genetic algorithm easily falls into a local optimum, which weakens the inversion results, especially for the density. Therefore, an intelligent optimisation algorithm is proposed in this paper and used for the elastic parameter inversion of pre-stack seismic data. This algorithm improves the population initialisation strategy by using the Gardner formula and refines the genetic operators, and the improved algorithm obtains better inversion results in a model test with logging data. The elastic parameters obtained by inversion fit the logging curves of the theoretical model well, which effectively improves the inversion precision of the density. The algorithm was implemented with a MapReduce model to solve the seismic big data inversion problem. The experimental results show that the parallel model can effectively reduce the running time of the algorithm.
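    The improved population initialisation relies on the Gardner relation between density and P-wave velocity. The sketch below shows one way such an initialisation could look; the coefficients are the commonly quoted defaults for SI units, and the velocity ranges and Vp/Vs ratios are illustrative assumptions rather than values from the paper.

```python
import numpy as np

def gardner_density(vp, a=0.31, b=0.25):
    """Gardner's empirical relation rho = a * Vp**b (Vp in m/s, rho in g/cm^3)."""
    return a * np.power(vp, b)

def init_population(pop_size, n_layers, vp_min=2000.0, vp_max=4500.0, seed=0):
    """Gardner-consistent initial GA population (sketch).

    Each individual holds (Vp, Vs, rho) per layer; rho follows the Gardner
    trend so initial individuals respect a plausible rock-physics relation.
    """
    rng = np.random.default_rng(seed)
    vp = rng.uniform(vp_min, vp_max, size=(pop_size, n_layers))
    vs = vp / rng.uniform(1.6, 2.0, size=(pop_size, n_layers))
    rho = gardner_density(vp) * rng.normal(1.0, 0.02, size=(pop_size, n_layers))
    return np.stack([vp, vs, rho], axis=-1)   # shape (pop_size, n_layers, 3)

population = init_population(pop_size=50, n_layers=20)
```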

  2. Characterisation and optimisation of flexible transfer lines for liquid helium. Part I: Experimental results

    NASA Astrophysics Data System (ADS)

    Dittmar, N.; Haberstroh, Ch.; Hesse, U.; Krzyzowski, M.

    2016-04-01

    The transfer of liquid helium (LHe) into mobile dewars or transport vessels is a common and unavoidable process at LHe decant stations. During this transfer, appreciable amounts of LHe evaporate due to heat leak and pressure drop. The helium gas generated in this way needs to be collected and reliquefied, which requires a large amount of electrical energy. Therefore, the design of transfer lines used at LHe decant stations has been optimised to establish an LHe transfer with minimal evaporation losses, which increases the overall efficiency and capacity of LHe decant stations. This paper presents the experimental results achieved during the thermohydraulic optimisation of a flexible LHe transfer line. An extensive measurement campaign with a set of dedicated transfer lines equipped with pressure and temperature sensors led to unique experimental data of this specific transfer process. The experimental results cover the heat leak, the pressure drop, the transfer rate, the outlet quality, and the cool-down and warm-up behaviour of the examined transfer lines. Based on the obtained results the design of the considered flexible transfer line has been optimised, featuring reduced heat leak and pressure drop.

  3. Goal-based angular adaptivity applied to a wavelet-based discretisation of the neutral particle transport equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goffin, Mark A., E-mail: mark.a.goffin@gmail.com; Buchan, Andrew G.; Dargaville, Steven

    2015-01-15

    A method for applying goal-based adaptive methods to the angular resolution of the neutral particle transport equation is presented. The methods are applied to an octahedral wavelet discretisation of the spherical angular domain which allows for anisotropic resolution. The angular resolution is adapted across both the spatial and energy dimensions. The spatial domain is discretised using an inner-element sub-grid scale finite element method. The goal-based adaptive methods optimise the angular discretisation to minimise the error in a specific functional of the solution. The goal-based error estimators require the solution of an adjoint system to determine the importance to the specified functional. The error estimators and the novel methods to calculate them are described. Several examples are presented to demonstrate the effectiveness of the methods. It is shown that the methods can significantly reduce the number of unknowns and computational time required to obtain a given error. The novelty of the work is the use of goal-based adaptive methods to obtain anisotropic resolution in the angular domain for solving the transport equation. -- Highlights: •Wavelet angular discretisation used to solve transport equation. •Adaptive method developed for the wavelet discretisation. •Anisotropic angular resolution demonstrated through the adaptive method. •Adaptive method provides improvements in computational efficiency.

  4. Optimisation of assembly scheduling in VCIM systems using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Dao, Son Duy; Abhary, Kazem; Marian, Romeo

    2017-09-01

    Assembly plays an important role in any production system as it constitutes a significant portion of the lead time and cost of a product. A virtual computer-integrated manufacturing (VCIM) system is a modern production system conceptually developed to extend the application of the traditional computer-integrated manufacturing (CIM) system to the global level. Assembly scheduling in VCIM systems is quite different from that in traditional production systems because of the difference in the working principles of the two systems. In this article, the assembly scheduling problem in VCIM systems is modelled and an integrated approach based on a genetic algorithm (GA) is proposed to search for a globally optimised solution to the problem. Because of the dynamic nature of the scheduling problem, a novel GA with a unique chromosome representation and modified genetic operations is developed herein. The robustness of the proposed approach is verified by a numerical example.

  5. On the optimisation of the use of 3He in radiation portal monitors

    NASA Astrophysics Data System (ADS)

    Tomanin, Alice; Peerani, Paolo; Janssens-Maenhout, Greet

    2013-02-01

    Radiation Portal Monitors (RPMs) are used to detect illicit trafficking of nuclear or other radioactive material concealed in vehicles, cargo containers or people at strategic check points, such as borders, seaports and airports. Most of them include neutron detectors for the interception of potential plutonium smuggling. The most common technology used for neutron detection in RPMs is based on 3He proportional counters. The recent severe shortage of this rare and expensive gas has created a problem of capacity for manufacturers to provide enough detectors to satisfy the market demand. In this paper we analyse the design of typical commercial RPMs and try to optimise the detector parameters in order either to maximise the efficiency for the same amount of 3He or to minimise the amount of gas needed to reach the same detection performance, by reducing the volume or gas pressure in an optimised design.

  6. Determination of volatile monophenols in beer using acetylation and headspace solid-phase microextraction in combination with gas chromatography and mass spectrometry.

    PubMed

    Sterckx, Femke L; Saison, Daan; Delvaux, Freddy R

    2010-08-31

    Monophenols are widely spread compounds contributing to the flavour of many foods and beverages. They are most likely present in beer, but so far, little is known about their influence on beer flavour. To quantify these monophenols in beer, we optimised a headspace solid-phase microextraction method coupled to gas chromatography-mass spectrometry. To improve their isolation from the beer matrix and their chromatographic properties, the monophenols were acetylated using acetic anhydride and KHCO3 as derivatising agent and base catalyst, respectively. Derivatisation conditions were optimised with attention to the pH of the reaction medium. Additionally, different parameters affecting extraction efficiency were optimised, including fibre coating, extraction time and temperature, and salt addition. Afterwards, we calibrated and validated the method successfully and applied it for the analysis of monophenols in beer samples. 2010 Elsevier B.V. All rights reserved.

  7. A management and optimisation model for water supply planning in water deficit areas

    NASA Astrophysics Data System (ADS)

    Molinos-Senante, María; Hernández-Sancho, Francesc; Mocholí-Arce, Manuel; Sala-Garrido, Ramón

    2014-07-01

    The integrated water resources management approach has proven to be a suitable option for efficient, equitable and sustainable water management. In water-poor regions experiencing acute and/or chronic shortages, optimisation techniques are a useful tool for supporting the decision process of water allocation. In order to maximise the value of water use, an optimisation model was developed which involves multiple supply sources (conventional and non-conventional) and multiple users. Penalties, representing monetary losses in the event of an unfulfilled water demand, have been incorporated into the objective function. This model represents a novel approach which considers water distribution efficiency and the physical connections between water supply and demand points. Subsequent empirical testing using data from a Spanish Mediterranean river basin demonstrated the usefulness of the global optimisation model to solve existing water imbalances at the river basin level.
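    The allocation model described above (multiple sources, multiple users, and penalty terms for unfulfilled demand) can be written as a small linear programme. The sketch below, using scipy.optimize.linprog with purely illustrative capacities, demands, costs and penalties, shows the structure of such a formulation; it is not the paper's actual model.

```python
import numpy as np
from scipy.optimize import linprog

capacity = np.array([80.0, 50.0])          # supply sources (e.g. hm3/year)
demand   = np.array([60.0, 40.0, 25.0])    # demand points
cost     = np.array([[0.1, 0.3, 0.5],      # delivery cost source -> user
                     [0.4, 0.2, 0.3]])
penalty  = np.array([2.0, 1.5, 1.0])       # monetary loss per unit of deficit

n_s, n_u = cost.shape
# decision vector: deliveries x[s, u] (flattened row-wise), then deficits d[u]
c = np.concatenate([cost.ravel(), penalty])

A_ub, b_ub = [], []
for s in range(n_s):                       # sum_u x[s, u] <= capacity[s]
    row = np.zeros(n_s * n_u + n_u)
    row[s * n_u:(s + 1) * n_u] = 1.0
    A_ub.append(row); b_ub.append(capacity[s])
for u in range(n_u):                       # sum_s x[s, u] + d[u] >= demand[u]
    row = np.zeros(n_s * n_u + n_u)
    for s in range(n_s):
        row[s * n_u + u] = -1.0
    row[n_s * n_u + u] = -1.0
    A_ub.append(row); b_ub.append(-demand[u])

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=(0, None))
deliveries = res.x[:n_s * n_u].reshape(n_s, n_u)
deficits = res.x[n_s * n_u:]
print(deliveries, deficits)
```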

  8. Using Optimisation Techniques to Granulise Rough Set Partitions

    NASA Astrophysics Data System (ADS)

    Crossingham, Bodie; Marwala, Tshilidzi

    2007-11-01

    This paper presents an approach to optimise rough set partition sizes using various optimisation techniques. Three optimisation techniques are implemented to perform the granularisation process, namely, genetic algorithm (GA), hill climbing (HC) and simulated annealing (SA). These optimisation methods maximise the classification accuracy of the rough sets. The proposed rough set partition method is tested on a set of demographic properties of individuals obtained from the South African antenatal survey. The three techniques are compared in terms of their computational time, accuracy and number of rules produced when applied to the Human Immunodeficiency Virus (HIV) data set. The results of the optimised methods are compared to a well-known non-optimised discretisation method, equal-width-bin partitioning (EWB). The accuracies achieved after optimising the partitions using GA, HC and SA are 66.89%, 65.84% and 65.48% respectively, compared to the accuracy of EWB of 59.86%. In addition to rough sets providing the plausibilities of the estimated HIV status, they also provide the linguistic rules describing how the demographic parameters drive the risk of HIV.

  9. Sequential projection pursuit for optimised vibration-based damage detection in an experimental wind turbine blade

    NASA Astrophysics Data System (ADS)

    Hoell, Simon; Omenzetter, Piotr

    2018-02-01

    To advance the concept of smart structures in large systems, such as wind turbines (WTs), it is desirable to be able to detect structural damage early while using minimal instrumentation. Data-driven vibration-based damage detection methods can be competitive in that respect because global vibrational responses encompass the entire structure. Multivariate damage sensitive features (DSFs) extracted from acceleration responses enable changes in a structure to be detected via statistical methods. However, even though such DSFs contain information about the structural state, they may not be optimised for the damage detection task. This paper addresses this shortcoming by exploring a DSF projection technique specialised for statistical structural damage detection. High dimensional initial DSFs are projected onto a low-dimensional space for improved damage detection performance and simultaneous computational burden reduction. The technique is based on sequential projection pursuit where the projection vectors are optimised one by one using an advanced evolutionary strategy. The approach is applied to laboratory experiments with a small-scale WT blade under wind-like excitations. Autocorrelation function coefficients calculated from acceleration signals are employed as DSFs. The optimal numbers of projection vectors are identified with the help of a fast forward selection procedure. To benchmark the proposed method, selections of original DSFs as well as principal component analysis scores from these features are additionally investigated. The optimised DSFs are tested for damage detection on previously unseen data from the healthy state and a wide range of damage scenarios. It is demonstrated that using selected subsets of the initial and transformed DSFs improves damage detectability compared to the full set of features. Furthermore, superior results can be achieved by projecting autocorrelation coefficients onto just a single optimised projection vector.
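    To make the feature pipeline concrete, the sketch below computes autocorrelation-coefficient DSFs from an acceleration record and projects them onto a single direction. The projection vector is simply an argument here; in the study it is optimised by sequential projection pursuit with an evolution strategy, which is not reproduced. Signal lengths and lag counts are illustrative assumptions.

```python
import numpy as np

def autocorr_dsf(accel, n_lags=30):
    """Autocorrelation function coefficients of an acceleration record,
    used as the damage-sensitive feature (DSF) vector."""
    x = np.asarray(accel, dtype=float)
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:]
    return acf[1:n_lags + 1] / acf[0]

def project(dsf_matrix, w):
    """Project high-dimensional DSFs onto one (optimised) direction."""
    w = np.asarray(w, dtype=float)
    return dsf_matrix @ (w / np.linalg.norm(w))

# hypothetical usage with synthetic records standing in for measured data
rng = np.random.default_rng(0)
healthy = np.array([autocorr_dsf(rng.standard_normal(2048)) for _ in range(20)])
w = rng.standard_normal(healthy.shape[1])
scores = project(healthy, w)   # one scalar damage indicator per record
```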

  10. Hardware Design of the Energy Efficient Fall Detection Device

    NASA Astrophysics Data System (ADS)

    Skorodumovs, A.; Avots, E.; Hofmanis, J.; Korāts, G.

    2016-04-01

    Health issues for elderly people may lead to different injuries obtained during simple activities of daily living. Potentially the most dangerous are unintentional falls that may be critical or even lethal to some patients due to the heavy injury risk. In the project "Wireless Sensor Systems in Telecare Application for Elderly People", we have developed a robust fall detection algorithm for a wearable wireless sensor. To optimise the algorithm for hardware performance and test it in the field, we have designed an accelerometer-based wireless fall detector. Our main considerations were: a) functionality - so that the algorithm can be applied to the chosen hardware, and b) power efficiency - so that it can run for a very long time. We have picked and tested the parts, built a prototype, optimised the firmware for lowest consumption, tested the performance and measured the consumption parameters. In this paper, we discuss our design choices and present the results of our work.

  11. Mechanistic modelling of infrared mediated energy transfer during the primary drying step of a continuous freeze-drying process.

    PubMed

    Van Bockstal, Pieter-Jan; Mortier, Séverine Thérèse F C; De Meyer, Laurens; Corver, Jos; Vervaet, Chris; Nopens, Ingmar; De Beer, Thomas

    2017-05-01

    Conventional pharmaceutical freeze-drying is an inefficient and expensive batch-wise process, associated with several disadvantages leading to an uncontrolled end product variability. The proposed continuous alternative, based on spinning the vials during freezing and on optimal energy supply during drying, strongly increases process efficiency and improves product quality (uniformity). The heat transfer during continuous drying of the spin frozen vials is provided via non-contact infrared (IR) radiation. The energy transfer to the spin frozen vials should be optimised to maximise the drying efficiency while avoiding cake collapse. Therefore, a mechanistic model was developed which allows computing the optimal, dynamic IR heater temperature as a function of the primary drying progress and which, hence, also allows predicting the primary drying endpoint based on the applied dynamic IR heater temperature. The model was validated by drying spin frozen vials containing the model formulation (3.9 mL in 10R vials) according to the computed IR heater temperature profile. In total, 6 validation experiments were conducted. The primary drying endpoint was experimentally determined via in-line near-infrared (NIR) spectroscopy and compared with the endpoint predicted by the model (50 min). The mean ratio of the experimental drying time to the predicted value was 0.91, indicating a good agreement between the model predictions and the experimental data. The end product had an elegant product appearance (visual inspection) and an acceptable residual moisture content (Karl Fischer). Copyright © 2017 Elsevier B.V. All rights reserved.

  12. An Optimised System for Generating Multi-Resolution DTMs Using NASA MRO Datasets

    NASA Astrophysics Data System (ADS)

    Tao, Y.; Muller, J.-P.; Sidiropoulos, P.; Veitch-Michaelis, J.; Yershov, V.

    2016-06-01

    Within the EU FP-7 iMars project, a fully automated multi-resolution DTM processing chain, called Co-registration ASP-Gotcha Optimised (CASP-GO), has been developed, based on the open source NASA Ames Stereo Pipeline (ASP). CASP-GO includes tiepoint based multi-resolution image co-registration and an adaptive least squares correlation-based sub-pixel refinement method called Gotcha. The implemented system guarantees global geo-referencing compliance with respect to HRSC (and thence to MOLA), and provides refined stereo matching completeness and accuracy based on the ASP normalised cross-correlation. We summarise issues discovered from experimenting with the use of the open-source ASP DTM processing chain and introduce our new working solutions. These issues include global co-registration accuracy, de-noising, dealing with failure in matching, matching confidence estimation, outlier definition and rejection scheme, various DTM artefacts, uncertainty estimation, and quality-efficiency trade-offs.
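    The matching-confidence element above rests on normalised cross-correlation (NCC) between image patches. The sketch below shows the basic NCC score only; the ASP/Gotcha pipeline adds adaptive windows and least-squares sub-pixel refinement on top of this idea, which are not reproduced here.

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalised cross-correlation between two equally sized image patches.

    Returns a value in [-1, 1]; values near 1 indicate a confident match."""
    a = np.asarray(patch_a, dtype=float).ravel()
    b = np.asarray(patch_b, dtype=float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

# hypothetical usage on two 11x11 patches from a stereo pair
rng = np.random.default_rng(2)
left = rng.random((11, 11))
right = left + 0.05 * rng.standard_normal((11, 11))
print(ncc(left, right))
```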

  13. Implementation study of wearable sensors for activity recognition systems.

    PubMed

    Rezaie, Hamed; Ghassemian, Mona

    2015-08-01

    This Letter investigates and reports on a number of activity recognition methods for a wearable sensor system. The authors apply three methods for data transmission, namely 'stream-based', 'feature-based' and 'threshold-based' scenarios to study the accuracy against energy efficiency of transmission and processing power that affects the mote's battery lifetime. They also report on the impact of variation of sampling frequency and data transmission rate on energy consumption of motes for each method. This study leads us to propose a cross-layer optimisation of an activity recognition system for provisioning acceptable levels of accuracy and energy efficiency.

  14. Application-specific coarse-grained reconfigurable array: architecture and design methodology

    NASA Astrophysics Data System (ADS)

    Zhou, Li; Liu, Dongpei; Zhang, Jianfeng; Liu, Hengzhu

    2015-06-01

    Coarse-grained reconfigurable arrays (CGRAs) have shown potential for application in embedded systems in recent years. Numerous reconfigurable processing elements (PEs) in CGRAs provide flexibility while maintaining high performance by exploring different levels of parallelism. However, a gap in performance and efficiency remains between CGRAs and application-specific integrated circuits (ASICs). Some application domains, such as software-defined radios (SDRs), require flexibility together with increasing performance demands. More effective CGRA architectures are therefore expected to be developed. Customisation of a CGRA according to its application can improve performance and efficiency. This study proposes an application-specific CGRA architecture template composed of generic PEs (GPEs) and special PEs (SPEs). The hardware of the SPE can be customised to accelerate specific computational patterns. An automatic design methodology that includes pattern identification and application-specific function unit generation is also presented. A mapping algorithm based on ant colony optimisation is provided. Experimental results on the SDR target domain show that compared with other ordinary and application-specific reconfigurable architectures, the CGRA generated by the proposed method performs more efficiently for given applications.

  15. Multi-objective ACO algorithms to minimise the makespan and the total rejection cost on BPMs with arbitrary job weights

    NASA Astrophysics Data System (ADS)

    Jia, Zhao-hong; Pei, Ming-li; Leung, Joseph Y.-T.

    2017-12-01

    In this paper, we investigate the batch-scheduling problem with rejection on parallel machines with non-identical job sizes and arbitrary job rejection weights. If a job is rejected, the corresponding penalty has to be paid. Our objective is to minimise the makespan of the processed jobs and the total rejection cost of the rejected jobs. Based on the selected multi-objective optimisation approaches, two problems, P1 and P2, are considered. In P1, the two objectives are linearly combined into one single objective. In P2, the two objectives are simultaneously minimised and the Pareto non-dominated solution set is to be found. Based on ant colony optimisation (ACO), two algorithms, called LACO and PACO, are proposed to address the two problems, respectively. Two different objective-oriented pheromone matrices and heuristic information are designed. Additionally, a local optimisation algorithm is adopted to improve the solution quality. Finally, simulated experiments are conducted, and the comparative results verify the effectiveness and efficiency of the proposed algorithms, especially on large-scale instances.
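    For problem P1 the two criteria are folded into a single scalar objective. The sketch below shows one plausible evaluation function for a candidate solution (batches of accepted jobs assigned to parallel machines, plus a rejected set); the data structures and the trade-off weight alpha are assumptions for illustration, not the paper's exact formulation.

```python
def p1_objective(machines, rejected, weights, process_time, alpha=0.5):
    """Weighted-sum objective for P1: alpha * makespan + (1 - alpha) * rejection cost.

    machines:     list of machines, each a list of batches, each batch a list
                  of accepted job ids; a batch takes the longest processing
                  time of the jobs it contains.
    rejected:     iterable of rejected job ids.
    weights:      rejection penalty per job id.
    process_time: processing time per job id.
    """
    makespan = max(
        (sum(max(process_time[j] for j in batch) for batch in m) for m in machines),
        default=0.0,
    )
    rejection_cost = sum(weights[j] for j in rejected)
    return alpha * makespan + (1.0 - alpha) * rejection_cost

# hypothetical instance: two machines, one rejected job
machines = [[[1, 2], [3]], [[4]]]
rejected = [5]
pt = {1: 4.0, 2: 3.0, 3: 2.0, 4: 5.0}
w = {5: 6.0}
print(p1_objective(machines, rejected, w, pt, alpha=0.5))   # 0.5*6.0 + 0.5*6.0 = 6.0
```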

  16. A GPU Simulation Tool for Training and Optimisation in 2D Digital X-Ray Imaging.

    PubMed

    Gallio, Elena; Rampado, Osvaldo; Gianaria, Elena; Bianchi, Silvio Diego; Ropolo, Roberto

    2015-01-01

    Conventional radiology is performed by means of digital detectors, with various types of technology and different performance in terms of efficiency and image quality. Following the arrival of a new digital detector in a radiology department, all the staff involved should adapt the procedure parameters to the properties of the detector, in order to achieve an optimal result in terms of correct diagnostic information and minimum radiation risks for the patient. The aim of this study was to develop and validate a software application capable of simulating a digital X-ray imaging system, using graphics processing unit computing. All radiological image components were implemented in this application: an X-ray tube with primary beam, a virtual patient, noise, scatter radiation, a grid and a digital detector. Three different digital detectors (two digital radiography systems and one computed radiography system) were implemented. In order to validate the software, we carried out a quantitative comparison of geometrical and anthropomorphic phantom simulated images with those acquired. In terms of average pixel values, the maximum differences were below 15%, while the noise values were in agreement with a maximum difference of 20%. The relative trends of contrast to noise ratio versus beam energy and intensity were well simulated. Total calculation times were below 3 seconds for clinical images with actual pixel dimensions below 0.2 mm. The application proved to be efficient and realistic. Short calculation times and the accuracy of the results obtained make this software a useful tool for training operators and dose optimisation studies.

  17. Improving linear transport infrastructure efficiency by automated learning and optimised predictive maintenance techniques (INFRALERT)

    NASA Astrophysics Data System (ADS)

    Jiménez-Redondo, Noemi; Calle-Cordón, Alvaro; Kandler, Ute; Simroth, Axel; Morales, Francisco J.; Reyes, Antonio; Odelius, Johan; Thaduri, Aditya; Morgado, Joao; Duarte, Emmanuele

    2017-09-01

    The on-going H2020 project INFRALERT aims to increase rail and road infrastructure capacity in the current framework of increased transportation demand by developing and deploying solutions to optimise the planning of maintenance interventions. It includes two real pilots for road and railway infrastructure. INFRALERT develops an ICT platform (the expert-based Infrastructure Management System, eIMS) which follows a modular approach including several expert-based toolkits. This paper presents the methodologies and preliminary results of the toolkits for i) nowcasting and forecasting of asset condition, ii) alert generation, iii) RAMS & LCC analysis and iv) decision support. The results of these toolkits in a meshed road network in Portugal under the jurisdiction of Infraestruturas de Portugal (IP) are presented, showing the capabilities of the approaches.

  18. Computational studies on nonlinear optical property of novel Wittig-based Schiff-base ligands and copper(II) complex

    NASA Astrophysics Data System (ADS)

    Rajasekhar, Bathula; Patowary, Nidarshana; K. Z., Danish; Swu, Toka

    2018-07-01

    One hundred and forty-five novel molecules of Wittig-based Schiff-base (WSB), including a copper(II) complex and precursors, were computationally screened for nonlinear optical (NLO) properties. WSB ligands were derived from various categories of amines and aldehydes. Wittig-based precursor aldehydes, (E)-2-hydroxy-5-(4-nitrostyryl)benzaldehyde (f) and 2-hydroxy-5-((1Z,3E)-4-phenylbuta-1,3-dien-1-yl) benzaldehyde (g), were synthesised and spectroscopically confirmed. Schiff-base ligands and the copper(II) complex were designed, optimised and their NLO property was studied using the GAUSSIAN09 computer program. For both optimisation and hyperpolarisability (finite-field approach) calculations, the Density Functional Theory (DFT)-based B3LYP method was applied with the LANL2DZ basis set for the metal ion and the 6-31G* basis set for C, H, N, O and Cl atoms. This is the first report to present the structure-activity relationship between hyperpolarisability (β) and WSB ligands containing a mono-imine group. The study reveals that Schiff-base ligands of category N-2, which are the ones derived from the precursor aldehyde 2-hydroxy-5-(4-nitrostyryl)benzaldehyde, and the pre-polarised WSB coordinated with Cu(II), encoded as Complex-1 (β = 14.671 × 10⁻³⁰ e.s.u.), showed higher β values than the other categories, N-1 and N-3, i.e. WSB derived from the precursor aldehydes 2-hydroxy-5-styrylbenzaldehyde and 2-hydroxy-5-((1Z,3E)-4-phenylbuta-1,3-dien-1-yl)benzaldehyde, respectively. For the first time, we report here the geometrical isomeric effect on the β value.

  19. A Hybrid Genetic Programming Algorithm for Automated Design of Dispatching Rules.

    PubMed

    Nguyen, Su; Mei, Yi; Xue, Bing; Zhang, Mengjie

    2018-06-04

    Designing effective dispatching rules for production systems is a difficult and time-consuming task if it is done manually. In the last decade, the growth of computing power, advanced machine learning, and optimisation techniques has made the automated design of dispatching rules possible, and automatically discovered rules are competitive with, or outperform, existing rules developed by researchers. Genetic programming is one of the most popular approaches to discovering dispatching rules in the literature, especially for complex production systems. However, the large heuristic search space may restrict genetic programming from finding near-optimal dispatching rules. This paper develops a new hybrid genetic programming algorithm for dynamic job shop scheduling based on a new representation, a new local search heuristic, and efficient fitness evaluators. Experiments show that the new method is effective regarding the quality of evolved rules. Moreover, evolved rules are also significantly smaller and contain more relevant attributes.

  20. The use of surrogates for an optimal management of coupled groundwater-agriculture hydrosystems

    NASA Astrophysics Data System (ADS)

    Grundmann, J.; Schütze, N.; Brettschneider, M.; Schmitz, G. H.; Lennartz, F.

    2012-04-01

    To ensure optimal and sustainable water resources management in arid coastal environments, we develop a new simulation-based integrated water management system. It aims at achieving the best possible solutions for groundwater withdrawals for agricultural and municipal water use, including saline water management, together with a substantial increase of the water use efficiency in irrigated agriculture. To achieve a robust and fast operation of the management system regarding water quality and water quantity, we develop appropriate surrogate models by combining physically based process modelling with methods of artificial intelligence. We use an artificial neural network, trained on a scenario database generated by a numerical density-dependent groundwater flow model, to model the aquifer response including the seawater interface. To simulate the behaviour of highly productive agricultural farms, crop water production functions are generated by means of soil-vegetation-atmosphere-transport (SVAT) models, adapted to the regional climate conditions, together with a novel evolutionary optimisation algorithm for optimal irrigation scheduling and control. We apply both surrogates exemplarily within a simulation-based optimisation environment using the characteristics of the south Batinah region in the Sultanate of Oman, which is affected by saltwater intrusion into the coastal aquifer due to excessive groundwater withdrawal for irrigated agriculture. We demonstrate the effectiveness of our methodology for the evaluation and optimisation of different irrigation practices, cropping patterns and the resulting abstraction scenarios. Owing to contradicting objectives, such as profit-oriented agriculture versus aquifer sustainability, a multi-criteria optimisation is performed.
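    The core surrogate idea above, a neural network trained on a scenario database so it can replace the groundwater model inside the optimisation loop, is sketched below using scikit-learn as a stand-in. The input/output layout, network size and the synthetic data are assumptions for illustration only.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# scenario database: abstraction scenarios -> aquifer response
# X: e.g. pumping rates per well and season; y: e.g. head or interface position
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(500, 12))
y = 1.0 - 0.6 * X.mean(axis=1) + 0.05 * rng.standard_normal(500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
surrogate.fit(X_tr, y_tr)
print("surrogate R^2 on held-out scenarios:", surrogate.score(X_te, y_te))

# inside the management optimisation, surrogate.predict(candidate_scenario)
# replaces a full density-dependent groundwater model run
```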

  1. Solving difficult problems creatively: a role for energy optimised deterministic/stochastic hybrid computing

    PubMed Central

    Palmer, Tim N.; O’Shea, Michael

    2015-01-01

    How is the brain configured for creativity? What is the computational substrate for ‘eureka’ moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal (ultimately quantum decoherent) noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete. PMID:26528173

  2. A comparison of optimisation methods and knee joint degrees of freedom on muscle force predictions during single-leg hop landings.

    PubMed

    Mokhtarzadeh, Hossein; Perraton, Luke; Fok, Laurence; Muñoz, Mario A; Clark, Ross; Pivonka, Peter; Bryant, Adam L

    2014-09-22

    The aim of this paper was to compare the effect of different optimisation methods and different knee joint degrees of freedom (DOF) on muscle force predictions during a single-legged hop. Nineteen subjects performed single-legged hopping manoeuvres and subject-specific musculoskeletal models were developed to predict muscle forces during the movement. Muscle forces were predicted using static optimisation (SO) and computed muscle control (CMC) methods using either 1 or 3 DOF knee joint models. All sagittal and transverse plane joint angles calculated using inverse kinematics or CMC in a 1 DOF or 3 DOF knee were well-matched (RMS error<3°). Biarticular muscles (hamstrings, rectus femoris and gastrocnemius) showed greater differences in muscle force profiles between the different prediction approaches, with larger time delays in many of the comparisons. The muscle force magnitudes of vasti, gluteus maximus and gluteus medius were not greatly influenced by the choice of muscle force prediction method, with low normalised root mean squared errors (<48%) observed in most comparisons. We conclude that SO and CMC can be used to predict lower-limb muscle co-contraction during hopping movements. However, care must be taken in interpreting the magnitude of force predicted in the biarticular muscles and the soleus, especially when using a 1 DOF knee. Despite this limitation, given that SO is a more robust and computationally efficient method for predicting muscle forces than CMC, we suggest that SO can be used in conjunction with musculoskeletal models that have a 1 or 3 DOF knee joint to study the relative differences and the role of muscles during hopping activities in future studies. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Information and Efficiency in the Nervous System—A Synthesis

    PubMed Central

    Sengupta, Biswa; Stemmler, Martin B.; Friston, Karl J.

    2013-01-01

    In systems biology, questions concerning the molecular and cellular makeup of an organism are of utmost importance, especially when trying to understand how unreliable components—like genetic circuits, biochemical cascades, and ion channels, among others—enable reliable and adaptive behaviour. The repertoire and speed of biological computations are limited by thermodynamic or metabolic constraints: an example can be found in neurons, where fluctuations in biophysical states limit the information they can encode—with almost 20–60% of the total energy allocated for the brain used for signalling purposes, either via action potentials or by synaptic transmission. Here, we consider the imperatives for neurons to optimise computational and metabolic efficiency, wherein benefits and costs trade-off against each other in the context of self-organised and adaptive behaviour. In particular, we try to link information theoretic (variational) and thermodynamic (Helmholtz) free-energy formulations of neuronal processing and show how they are related in a fundamental way through a complexity minimisation lemma. PMID:23935475

  4. Dynamic VMs placement for energy efficiency by PSO in cloud computing

    NASA Astrophysics Data System (ADS)

    Dashti, Seyed Ebrahim; Rahmani, Amir Masoud

    2016-03-01

    Cloud computing is growing fast and helps to realise other high technologies. In this paper, we propose a hierarchical architecture to satisfy both providers' and consumers' requirements in these technologies. We design a new service in the PaaS layer for scheduling consumer tasks. From the providers' perspective, incompatibility between physical machine specifications and user requests in the cloud leads to problems such as the energy-performance trade-off and high power consumption, which reduce profits. To guarantee the quality of service of users' tasks and reduce energy consumption, we propose a modified Particle Swarm Optimisation to reallocate migrated virtual machines on the overloaded host. We also dynamically consolidate under-loaded hosts, which provides power savings. Simulation results in CloudSim, under conditions close to the real environment, demonstrated that our method is able to save as much as 14% more energy while significantly reducing the number of migrations and the simulation time compared with previous works.
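    To show the shape of the PSO step described above, the sketch below implements a minimal continuous PSO whose rounded particle positions assign each migrated VM to a host. The fitness function, swarm parameters and the rounding scheme are illustrative assumptions; the paper's actual modified PSO and its CloudSim integration are not reproduced.

```python
import numpy as np

def pso_place_vms(n_vms, n_hosts, fitness, n_particles=30, iters=100, seed=0):
    """Minimal PSO sketch for reallocating migrated VMs to hosts.

    Each particle is a length-n_vms vector whose rounded entries give the
    destination host of each VM; `fitness` scores an assignment (for example,
    estimated power consumption plus an SLA-violation penalty).
    """
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0, n_hosts - 1, size=(n_particles, n_vms))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([fitness(np.rint(p)) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0, n_hosts - 1)
        vals = np.array([fitness(np.rint(p)) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return np.rint(gbest).astype(int), float(pbest_val.min())

# hypothetical fitness: prefer packing VMs onto low-index (more efficient) hosts
assignment, cost = pso_place_vms(8, 4, fitness=lambda a: float(a.sum()))
print(assignment, cost)
```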

  5. VIEWDEX: an efficient and easy-to-use software for observer performance studies.

    PubMed

    Håkansson, Markus; Svensson, Sune; Zachrisson, Sara; Svalkvist, Angelica; Båth, Magnus; Månsson, Lars Gunnar

    2010-01-01

    The development of investigation techniques, image processing, workstation monitors, analysing tools etc. within the field of radiology is vast, and the need for efficient tools in the evaluation and optimisation process of image and investigation quality is important. ViewDEX (Viewer for Digital Evaluation of X-ray images) is an image viewer and task manager suitable for research and optimisation tasks in medical imaging. ViewDEX is DICOM compatible and the features of the interface (tasks, image handling and functionality) are general and flexible. The configuration of a study and output (for example, answers given) can be edited in any text editor. ViewDEX is developed in Java and can run from any disc area connected to a computer. It is free to use for non-commercial purposes and can be downloaded from http://www.vgregion.se/sas/viewdex. In the present work, an evaluation of the efficiency of ViewDEX for receiver operating characteristic (ROC) studies, free-response ROC (FROC) studies and visual grading (VG) studies was conducted. For VG studies, the total scoring rate was dependent on the number of criteria per case. A scoring rate of approximately 150 cases h⁻¹ can be expected for a typical VG study using single images and five anatomical criteria. For ROC and FROC studies using clinical images, the scoring rate was approximately 100 cases h⁻¹ using single images and approximately 25 cases h⁻¹ using image stacks (approximately 50 images per case). In conclusion, ViewDEX is an efficient and easy-to-use software for observer performance studies.

  6. Computer-based teaching module design: principles derived from learning theories.

    PubMed

    Lau, K H Vincent

    2014-03-01

    The computer-based teaching module (CBTM), which has recently gained prominence in medical education, is a teaching format in which a multimedia program serves as a single source for knowledge acquisition rather than playing an adjunctive role as it does in computer-assisted learning (CAL). Despite empirical validation in the past decade, there is limited research into the optimisation of CBTM design. This review aims to summarise research in classic and modern multimedia-specific learning theories applied to computer learning, and to collapse the findings into a set of design principles to guide the development of CBTMs. Scopus was searched for: (i) studies of classic cognitivism, constructivism and behaviourism theories (search terms: 'cognitive theory' OR 'constructivism theory' OR 'behaviourism theory' AND 'e-learning' OR 'web-based learning') and their sub-theories applied to computer learning, and (ii) recent studies of modern learning theories applied to computer learning (search terms: 'learning theory' AND 'e-learning' OR 'web-based learning') for articles published between 1990 and 2012. The first search identified 29 studies, dominated in topic by the cognitive load, elaboration and scaffolding theories. The second search identified 139 studies, with diverse topics in connectivism, discovery and technical scaffolding. Based on their relative representation in the literature, the applications of these theories were collapsed into a list of CBTM design principles. Ten principles were identified and categorised into three levels of design: the global level (managing objectives, framing, minimising technical load); the rhetoric level (optimising modality, making modality explicit, scaffolding, elaboration, spaced repeating), and the detail level (managing text, managing devices). This review examined the literature in the application of learning theories to CAL to develop a set of principles that guide CBTM design. Further research will enable educators to take advantage of this unique teaching format as it gains increasing importance in medical education. © 2014 John Wiley & Sons Ltd.

  7. Optimisation of SIW bandpass filter with wide and sharp stopband using space mapping

    NASA Astrophysics Data System (ADS)

    Xu, Juan; Bi, Jun Jian; Li, Zhao Long; Chen, Ru shan

    2016-12-01

    This work presents a substrate integrated waveguide (SIW) bandpass filter with a wide and sharp stopband, which is different from filters with a direct input/output coupling structure. Higher modes in the SIW cavities are used to generate the finite transmission zeros for improved stopband performance. The design of SIW filters requires full wave electromagnetic simulation and extensive optimisation. If a full wave solver is used for optimisation, the design process is very time-consuming. The space mapping (SM) approach has been called upon to alleviate this problem. In this case, the coarse model is optimised using an equivalent circuit model-based representation of the structure for fast computations. On the other hand, the verification of the design is completed with an accurate fine model full wave simulation. A fourth-order filter with a passband of 12.0-12.5 GHz is fabricated on a single layer Rogers RT/Duroid 5880 substrate. The return loss is better than 17.4 dB in the passband and the rejection is more than 40 dB in the stopband. The stopband is from 2 to 11 GHz and 13.5 to 17.3 GHz, demonstrating wide stopband performance.

  8. Design optimisation of powers-of-two FIR filter using self-organising random immigrants GA

    NASA Astrophysics Data System (ADS)

    Chandra, Abhijit; Chattopadhyay, Sudipta

    2015-01-01

    In this communication, we propose a novel design strategy for a multiplier-less low-pass finite impulse response (FIR) filter with the aid of a recent evolutionary optimisation technique, known as the self-organising random immigrants genetic algorithm. Individual impulse response coefficients of the proposed filter have been encoded as sums of signed powers-of-two. During the formulation of the cost function for the optimisation algorithm, both the frequency response characteristic and the hardware cost of the discrete coefficient FIR filter have been considered. The role of the crossover probability of the optimisation technique has been evaluated on the overall performance of the proposed strategy. For this purpose, the convergence characteristic of the optimisation technique has been included in the simulation results. In our analysis, two design examples of different specifications have been taken into account. In order to substantiate the efficiency of our proposed structure, a number of state-of-the-art design strategies of multiplier-less FIR filter have also been included in this article for the purpose of comparison. Critical analysis of the results unambiguously establishes the usefulness of our proposed approach for the hardware-efficient design of digital filters.
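    The key encoding above is that each coefficient is a sum of signed powers-of-two, so it can be realised with shifts and adds instead of multipliers. The sketch below decodes such an encoding, counts the SPT terms as a crude hardware-cost proxy, and evaluates the frequency response; the 5-tap example coefficients are hypothetical, not taken from the paper.

```python
import numpy as np
from scipy.signal import freqz

def decode_spt(terms):
    """Decode one coefficient from its signed powers-of-two (SPT) encoding.

    `terms` is a list of (sign, exponent) pairs; the coefficient is
    sum(sign * 2**-exponent), i.e. shift-and-add friendly."""
    return sum(s * 2.0 ** -e for s, e in terms)

# hypothetical 5-tap symmetric low-pass filter in SPT form
encoded = [
    [(+1, 4)],             #  0.0625
    [(+1, 2)],             #  0.25
    [(+1, 1), (-1, 3)],    #  0.375
    [(+1, 2)],             #  0.25
    [(+1, 4)],             #  0.0625
]
h = np.array([decode_spt(t) for t in encoded])
hardware_cost = sum(len(t) for t in encoded)    # number of SPT terms (adders)
w, H = freqz(h, worN=512)
print("taps:", h, "| SPT terms:", hardware_cost, "| DC gain:", abs(H[0]))
```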

  9. Analyzing parameters optimisation in minimising warpage on side arm using response surface methodology (RSM)

    NASA Astrophysics Data System (ADS)

    Rayhana, N.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.

    2017-09-01

    This paper presents a systematic methodology to analyse the warpage of the side arm part using Autodesk Moldflow Insight software. Response Surface Methodology (RSM) was proposed to optimise the processing parameters and efficiently minimise the warpage of the side arm part. The variable parameters considered in this study were based on the most significant parameters affecting warpage reported by previous researchers, namely melt temperature, mould temperature and packing pressure, with packing time and cooling time added as these are commonly used parameters. The results show that warpage was improved by 10.15% and that the most significant parameter affecting warpage is packing pressure.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rozhdestvensky, Yu V

    The possibility of obtaining intense cold atomic beams by using the Renyi entropy to optimise the laser cooling process is studied. It is shown that, in the case of a Gaussian velocity distribution of atoms, the Renyi entropy coincides with the density of particles in phase space. The optimisation procedure for cooling atoms by resonance optical radiation, which is based on the thermodynamic law of the increase of the Renyi entropy in time, is described. Our method is compared with the known methods for increasing the laser cooling efficiency, such as the tuning of the laser frequency in time and a change of the atomic transition frequency in an inhomogeneous transverse field of a magnetic solenoid. (laser cooling)

  11. Optimisation of X-ray emission from a laser plasma source for the realisation of microbeam in sub-keV region.

    PubMed

    Di Paolo Emilio, M; Festuccia, R; Palladino, L

    2015-09-01

    In this work, the X-ray emission generated from a plasma produced by focusing a Nd-YAG laser beam on Mylar and yttrium targets will be characterised. The goal is to reach the best condition that optimises the X-ray conversion efficiency at 500 eV (pre-edge of the oxygen K-shell), a spectral region strongly absorbed by carbon-based structures. The characteristics of the microbeam optical system, the software/hardware control and the preliminary measurements of the X-ray fluence will be presented. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  12. Multi-objective thermodynamic optimisation of supercritical CO2 Brayton cycles integrated with solar central receivers

    NASA Astrophysics Data System (ADS)

    Vasquez Padilla, Ricardo; Soo Too, Yen Chean; Benito, Regano; McNaughton, Robbie; Stein, Wes

    2018-01-01

    In this paper, optimisation of supercritical CO2 Brayton cycles integrated with a solar receiver, which provides heat input to the cycle, was performed. Four S-CO2 Brayton cycle configurations were analysed and optimum operating conditions were obtained by using a multi-objective thermodynamic optimisation. Four different sets, each including two objective parameters, were considered individually. The individual multi-objective optimisation was performed by using the Non-dominated Sorting Genetic Algorithm. The effect of reheating, solar receiver pressure drop and cycle parameters on the overall exergy and cycle thermal efficiency was analysed. The results showed that, for all configurations, the overall exergy efficiency of the solarised systems reached a maximum between 700°C and 750°C, and that the optimum value is adversely affected by the solar receiver pressure drop. In addition, the optimum cycle high pressure was in the range of 24.2-25.9 MPa, depending on the configuration and reheat condition.

  13. Energy and wear optimisation of train longitudinal dynamics and of traction and braking systems

    NASA Astrophysics Data System (ADS)

    Conti, R.; Galardi, E.; Meli, E.; Nocciolini, D.; Pugi, L.; Rindi, A.

    2015-05-01

    Traction and braking systems deeply affect longitudinal train dynamics, especially when an extensive blending phase among different pneumatic, electric and magnetic devices is required. The energy and wear optimisation of longitudinal vehicle dynamics has a crucial economic impact and involves several engineering problems such as wear of braking friction components, energy efficiency, thermal load on components, and the level of safety under degraded or low-adhesion conditions (often constrained by the regulations in force on signalling or other safety-related subsystems). In fact, the application of energy storage systems can lead to an efficiency improvement of at least 10% while, as regards wear reduction, the improvement due to distributed traction systems and to optimised traction devices can be quantified at about 50%. In this work, an innovative integrated procedure is proposed by the authors to optimise longitudinal train dynamics and traction and braking manoeuvres in terms of both energy and wear. The new approach has been applied to existing test cases and validated with experimental data provided by Breda and, for some components and their homologation process, the results of experimental activities derive from cooperation with relevant industrial partners such as Trenitalia and Italcertifer. In particular, simulation results refer to the simulation tests performed on a high-speed train (Ansaldo Breda Emu V250) and on a tram (Ansaldo Breda Sirio Tram). The proposed approach is based on a modular simulation platform in which the sub-models corresponding to different subsystems can be easily customised, depending on the considered application, on the availability of technical data and on the homologation process of different components.

  14. An integrated modelling and multicriteria analysis approach to managing nitrate diffuse pollution: 2. A case study for a chalk catchment in England.

    PubMed

    Koo, B K; O'Connell, P E

    2006-04-01

    The site-specific land use optimisation methodology, suggested by the authors in the first part of this two-part paper, has been applied to the River Kennet catchment at Marlborough, Wiltshire, UK, for a case study. The Marlborough catchment (143 km²) is an agriculture-dominated rural area over a deep chalk aquifer that is vulnerable to nitrate pollution from agricultural diffuse sources. For evaluation purposes, the catchment was discretised into a network of 1 km × 1 km grid cells. For each of the arable-land grid cells, seven land use alternatives (four arable-land alternatives and three grassland alternatives) were evaluated for their environmental and economic potential. For environmental evaluation, nitrate leaching rates of land use alternatives were estimated using SHETRAN simulations and groundwater pollution potential was evaluated using the DRASTIC index. For economic evaluation, economic gross margins were estimated using a simple agronomic model based on nitrogen response functions and agricultural land classification grades. In order to see whether the site-specific optimisation is efficient at the catchment scale, land use optimisation was carried out for four optimisation schemes (i.e. using four sets of criterion weights). Consequently, four land use scenarios were generated and the site-specifically optimised land use scenario was evaluated as the best compromise solution between long-term nitrate pollution and agronomy at the catchment scale.

  15. Optimisation of fluorescence guidance during robot-assisted laparoscopic sentinel node biopsy for prostate cancer.

    PubMed

    KleinJan, Gijs H; van den Berg, Nynke S; Brouwer, Oscar R; de Jong, Jeroen; Acar, Cenk; Wit, Esther M; Vegt, Erik; van der Noort, Vincent; Valdés Olmos, Renato A; van Leeuwen, Fijs W B; van der Poel, Henk G

    2014-12-01

    The hybrid tracer was introduced to complement intraoperative radiotracing towards the sentinel nodes (SNs) with fluorescence guidance. Improve in vivo fluorescence-based SN identification for prostate cancer by optimising hybrid tracer preparation, injection technique, and fluorescence imaging hardware. Forty patients with a Briganti nomogram-based risk >10% of lymph node (LN) metastases were included. After intraprostatic tracer injection, SN mapping was performed (lymphoscintigraphy and single-photon emission computed tomography with computed tomography (SPECT-CT)). In groups 1 and 2, SNs were pursued intraoperatively using a laparoscopic gamma probe followed by fluorescence imaging (FI). In group 3, SNs were initially located via FI. Compared with group 1, in groups 2 and 3, a new tracer formulation was introduced that had a reduced total injected volume (2.0 ml vs. 3.2 ml) but increased particle concentration. For groups 1 and 2, the Tricam SLII with D-Light C laparoscopic FI (LFI) system was used. In group 3, the LFI system was upgraded to an Image 1 HUB HD with D-Light P system. Hybrid tracer-based SN biopsy, extended pelvic lymph node dissection, and robot-assisted radical prostatectomy. Number and location of the preoperatively identified SNs, in vivo fluorescence-based SN identification rate, tumour status of SNs and LNs, postoperative complications, and biochemical recurrence (BCR). Mean fluorescence-based SN identification improved from 63.7% (group 1) to 85.2% and 93.5% for groups 2 and 3, respectively (p=0.012). No differences in postoperative complications were found. BCR occurred in three pN0 patients. Stepwise optimisation of the hybrid tracer formulation and the LFI system led to a significant improvement in fluorescence-assisted SN identification. Preoperative SPECT-CT remained essential for guiding intraoperative SN localisation. Intraoperative fluorescence-based SN visualisation can be improved by enhancing the hybrid tracer formulation and laparoscopic fluorescence imaging system. Copyright © 2014 European Association of Urology. Published by Elsevier B.V. All rights reserved.

  16. Understanding the Models of Community Hospital rehabilitation Activity (MoCHA): a mixed-methods study

    PubMed Central

    Gladman, John; Buckell, John; Young, John; Smith, Andrew; Hulme, Clare; Saggu, Satti; Godfrey, Mary; Enderby, Pam; Teale, Elizabeth; Longo, Roberto; Gannon, Brenda; Holditch, Claire; Eardley, Heather; Tucker, Helen

    2017-01-01

    Introduction To understand the variation in performance between community hospitals, our objectives are: to measure the relative performance (cost efficiency) of rehabilitation services in community hospitals; to identify the characteristics of community hospital rehabilitation that optimise performance; to investigate the current impact of community hospital inpatient rehabilitation for older people on secondary care and the potential impact if community hospital rehabilitation was optimised to best practice nationally; to examine the relationship between the configuration of intermediate care and secondary care bed use; and to develop toolkits for commissioners and community hospital providers to optimise performance. Methods and analysis 4 linked studies will be performed. Study 1: cost efficiency modelling will apply econometric techniques to data sets from the National Health Service (NHS) Benchmarking Network surveys of community hospital and intermediate care. This will identify community hospitals' performance and estimate the gap between high and low performers. Analyses will determine the potential impact if the performance of all community hospitals nationally was optimised to best performance, and examine the association between community hospital configuration and secondary care bed use. Study 2: a national community hospital survey gathering detailed cost data and efficiency variables will be performed. Study 3: in-depth case studies of 3 community hospitals, 2 high and 1 low performing, will be undertaken. Case studies will gather routine hospital and local health economy data. Ward culture will be surveyed. Content and delivery of treatment will be observed. Patients and staff will be interviewed. Study 4: co-designed web-based quality improvement toolkits for commissioners and providers will be developed, including indicators of performance and the gap between local and best community hospitals performance. Ethics and dissemination Publications will be in peer-reviewed journals, reports will be distributed through stakeholder organisations. Ethical approval was obtained from the Bradford Research Ethics Committee (reference: 15/YH/0062). PMID:28242766

  17. Linear-scaling time-dependent density-functional theory beyond the Tamm-Dancoff approximation: Obtaining efficiency and accuracy with in situ optimised local orbitals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zuehlsdorff, T. J., E-mail: tjz21@cam.ac.uk; Payne, M. C.; Hine, N. D. M.

    2015-11-28

    We present a solution of the full time-dependent density-functional theory (TDDFT) eigenvalue equation in the linear response formalism exhibiting a linear-scaling computational complexity with system size, without relying on the simplifying Tamm-Dancoff approximation (TDA). The implementation relies on representing the occupied and unoccupied subspaces with two different sets of in situ optimised localised functions, yielding a very compact and efficient representation of the transition density matrix of the excitation with the accuracy associated with a systematic basis set. The TDDFT eigenvalue equation is solved using a preconditioned conjugate gradient algorithm that is very memory-efficient. The algorithm is validated on a small test molecule and a good agreement with results obtained from standard quantum chemistry packages is found, with the preconditioner yielding a significant improvement in convergence rates. The method developed in this work is then used to reproduce experimental results of the absorption spectrum of bacteriochlorophyll in an organic solvent, where it is demonstrated that the TDA fails to reproduce the main features of the low energy spectrum, while the full TDDFT equation yields results in good qualitative agreement with experimental data. Furthermore, the need for explicitly including parts of the solvent into the TDDFT calculations is highlighted, making the treatment of large system sizes necessary that are well within reach of the capabilities of the algorithm introduced here. Finally, the linear-scaling properties of the algorithm are demonstrated by computing the lowest excitation energy of bacteriochlorophyll in solution. The largest systems considered in this work are of the same order of magnitude as a variety of widely studied pigment-protein complexes, opening up the possibility of studying their properties without having to resort to any semiclassical approximations to parts of the protein environment.

  18. A generic flexible and robust approach for intelligent real-time video-surveillance systems

    NASA Astrophysics Data System (ADS)

    Desurmont, Xavier; Delaigle, Jean-Francois; Bastide, Arnaud; Macq, Benoit

    2004-05-01

    In this article we present a generic, flexible and robust approach for an intelligent real-time video-surveillance system. A previous version of the system was presented in [1]. The goal of these advanced tools is to help operators by detecting events of interest in visual scenes, highlighting alarms and computing statistics. The proposed system is a multi-camera platform able to handle different standards of video inputs (composite, IP, IEEE1394) and which can compress (MPEG4), store and display them. This platform also integrates advanced video analysis tools, such as motion detection, segmentation, tracking and interpretation. The design of the architecture is optimised to play back, display, and process video flows in an efficient way for video-surveillance applications. The implementation is distributed on a scalable computer cluster based on Linux and an IP network. It relies on POSIX threads for multitasking scheduling. Data flows are transmitted between the different modules using multicast technology and under the control of a TCP-based command network (e.g. for bandwidth occupation control). We report some results and show the potential use of such a flexible system in third-generation video surveillance systems. We illustrate the usefulness of the system in a real case study of indoor surveillance.

  19. Design of safety-oriented control allocation strategies for overactuated electric vehicles

    NASA Astrophysics Data System (ADS)

    de Castro, Ricardo; Tanelli, Mara; Esteves Araújo, Rui; Savaresi, Sergio M.

    2014-08-01

    The new vehicle platforms for electric vehicles (EVs) that are becoming available are characterised by actuator redundancy, which makes it possible to jointly optimise different aspects of the vehicle motion. To do this, high-level control objectives are first specified and solved with appropriate control strategies. Then, the resulting virtual control action must be translated into actual actuator commands by a control allocation layer that takes care of computing the forces to be applied at the wheels. This step, in general, is quite demanding as far as computational complexity is concerned. In this work, a safety-oriented approach to this problem is proposed. Specifically, a four-wheel steer EV with four in-wheel motors is considered, and the high-level motion controller is designed within a sliding mode framework with conditional integrators. For distributing the forces among the tyres, two control allocation approaches are investigated. The first, based on the extension of the cascading generalised inverse method, is computationally efficient but shows some limitations in dealing with unfeasible force values. To solve the problem, a second allocation algorithm is proposed, which relies on the linearisation of the tyre-road friction constraints. Extensive tests, carried out in the CarSim simulation environment, demonstrate the effectiveness of the proposed approach.

  20. Optimising resource management in neurorehabilitation.

    PubMed

    Wood, Richard M; Griffiths, Jeff D; Williams, Janet E; Brouwers, Jakko

    2014-01-01

    To date, little research has been published regarding the effective and efficient management of resources (beds and staff) in neurorehabilitation, despite being an expensive service in limited supply. To demonstrate how mathematical modelling can be used to optimise service delivery, by way of a case study at a major 21 bed neurorehabilitation unit in the UK. An automated computer program for assigning weekly treatment sessions is developed. Queue modelling is used to construct a mathematical model of the hospital in terms of referral submissions to a waiting list, admission and treatment, and ultimately discharge. This is used to analyse the impact of hypothetical strategic decisions on a variety of performance measures and costs. The project culminates in a hybridised model of these two approaches, since a relationship is found between the number of therapy hours received each week (scheduling output) and length of stay (queuing model input). The introduction of the treatment scheduling program has substantially improved timetable quality (meaning a better and fairer service to patients) and has reduced employee time expended in its creation by approximately six hours each week (freeing up time for clinical work). The queuing model has been used to assess the effect of potential strategies, such as increasing the number of beds or employing more therapists. The use of mathematical modelling has not only optimised resources in the short term, but has allowed the optimality of longer term strategic decisions to be assessed.
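
    The record does not give the queue formulation, so the following is a minimal sketch under the assumption of an M/M/c queue with the unit's 21 beds as servers; the referral rate and mean length of stay are illustrative placeholders, not values from the study.

    ```python
    from math import factorial

    def erlang_c_wait(arrival_rate, mean_stay, beds):
        """Mean wait for a bed in an M/M/c queue (Erlang C).

        arrival_rate : referrals per week
        mean_stay    : average length of stay in weeks
        beds         : number of beds (servers)
        """
        a = arrival_rate * mean_stay          # offered load, in beds
        if a >= beds:
            raise ValueError("Demand exceeds capacity; the queue is unstable")
        # Probability that an arriving patient has to wait (Erlang C formula)
        top = a**beds / factorial(beds) * beds / (beds - a)
        bottom = sum(a**k / factorial(k) for k in range(beds)) + top
        p_wait = top / bottom
        # Mean time on the waiting list (weeks)
        return p_wait / (beds / mean_stay - arrival_rate)

    # Illustrative numbers only: 2 referrals/week, 9-week mean stay, 21 beds
    print(erlang_c_wait(arrival_rate=2.0, mean_stay=9.0, beds=21))
    ```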

  1. Strategies for efficient resolution analysis in full-waveform inversion

    NASA Astrophysics Data System (ADS)

    Fichtner, A.; van Leeuwen, T.; Trampert, J.

    2016-12-01

    Full-waveform inversion is developing into a standard method in the seismological toolbox. It combines numerical wave propagation for heterogeneous media with adjoint techniques in order to improve tomographic resolution. However, resolution becomes increasingly difficult to quantify because of the enormous computational requirements. Here we present two families of methods that can be used for efficient resolution analysis in full-waveform inversion. They are based on the targeted extraction of resolution proxies from the Hessian matrix, which is too large to store and to compute explicitly. Fourier methods rest on the application of the Hessian to Earth models with harmonic oscillations. This yields the Fourier spectrum of the Hessian for few selected wave numbers, from which we can extract properties of the tomographic point-spread function for any point in space. Random probing methods use uncorrelated, random test models instead of harmonic oscillations. Auto-correlating the Hessian-model applications for sufficiently many test models also characterises the point-spread function. Both Fourier and random probing methods provide a rich collection of resolution proxies. These include position- and direction-dependent resolution lengths, and the volume of point-spread functions as indicator of amplitude recovery and inter-parameter trade-offs. The computational requirements of these methods are equivalent to approximately 7 conjugate-gradient iterations in full-waveform inversion. This is significantly less than the optimisation itself, which may require tens to hundreds of iterations to reach convergence. In addition to the theoretical foundations of the Fourier and random probing methods, we show various illustrative examples from real-data full-waveform inversion for crustal and mantle structure.
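
    The random probing idea can be illustrated with a small sketch: apply a Hessian operator to a handful of uncorrelated random test models and auto-correlate the results, which estimates the diagonal of the Hessian as a proxy for point-spread-function properties. The dense matrix below merely stands in for the matrix-free Hessian-model applications used in full-waveform inversion; it is not the authors' implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    n = 200                      # model parameters (tiny stand-in for a tomographic grid)
    # Dense stand-in for the Hessian; in FWI, applying H to a model would be a
    # matrix-free adjoint computation (extra wavefield simulations per test model).
    A = rng.standard_normal((n, n)) * 0.05
    H = A @ A.T + np.eye(n)

    def apply_hessian(m):
        return H @ m

    n_probes = 7                 # comparable cost to a few conjugate-gradient iterations
    probes = rng.choice([-1.0, 1.0], size=(n_probes, n))   # uncorrelated random test models

    # Auto-correlating the Hessian-model applications estimates diag(H),
    # a proxy for point-spread-function volume / amplitude recovery.
    diag_estimate = np.mean([p * apply_hessian(p) for p in probes], axis=0)
    print(np.linalg.norm(diag_estimate - np.diag(H)) / np.linalg.norm(np.diag(H)))
    ```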

  2. Modelling the protocol stack in NCS with deterministic and stochastic petri net

    NASA Astrophysics Data System (ADS)

    Hui, Chen; Chunjie, Zhou; Weifeng, Zhu

    2011-06-01

    The protocol stack is the basis of networked control systems (NCS). Full or partial reconfiguration of the protocol stack offers both an optimised communication service and improved system performance. Nowadays, field testing is impractical for determining the performance of a reconfigurable protocol stack, and the Petri net formal description technique offers the best combination of intuitive representation, tool support and analytical capabilities. Traditionally, separation between the different layers of the OSI model has been common practice. Nevertheless, such a layered modelling and analysis framework of the protocol stack precludes global optimisation for protocol reconfiguration. In this article, we propose a general modelling and analysis framework for NCS based on the cross-layer concept, which establishes an efficient system scheduling model by abstracting the time constraint, task interrelation, processor and bus sub-models from the upper and lower layers (application, data link and physical layers). Cross-layer design helps to overcome this lack of global optimisation by sharing information between protocol layers. To illustrate the framework, we take the controller area network (CAN) as a case study. The simulation results of the deterministic and stochastic Petri net (DSPN) model can help us adjust the message scheduling scheme and obtain better system performance.

  3. Ontology-based coupled optimisation design method using state-space analysis for the spindle box system of large ultra-precision optical grinding machine

    NASA Astrophysics Data System (ADS)

    Wang, Qianren; Chen, Xing; Yin, Yuehong; Lu, Jian

    2017-08-01

    With the increasing complexity of mechatronic products, traditional empirical or step-by-step design methods are facing great challenges, with various factors and different stages having become inevitably coupled during the design process. Management of massive information or big data, as well as the efficient operation of information flow, is deeply involved in the process of coupled design. Designers have to address increasingly sophisticated situations when coupled optimisation is also engaged. To overcome these difficulties in designing the spindle box system of an ultra-precision optical grinding machine, this paper proposes a coupled optimisation design method based on state-space analysis, with the design knowledge represented by ontologies and their semantic networks. An electromechanical coupled model integrating the mechanical structure, control system and motor driving system is established, mainly concerning the stiffness matrices of the hydrostatic bearings, ball screw nut and rolling guide sliders. The effectiveness and precision of the method are validated by simulation results for the natural frequency and deformation of the spindle box when an impact force is applied to the grinding wheel.

  4. The optimisation of the laser-induced forward transfer process for fabrication of polyfluorene-based organic light-emitting diode pixels

    NASA Astrophysics Data System (ADS)

    Shaw-Stewart, James; Mattle, Thomas; Lippert, Thomas; Nagel, Matthias; Nüesch, Frank; Wokaun, Alexander

    2013-08-01

    Laser-induced forward transfer (LIFT) has already been used to fabricate various types of organic light-emitting diodes (OLEDs), and the process itself has been optimised and refined considerably since OLED pixels were first demonstrated. In particular, a dynamic release layer (DRL) of triazene polymer has been used, the environmental pressure has been reduced down to a medium vacuum, and the donor-receiver gap has been controlled with the use of spacers. Insight into the LIFT process's effect upon OLED pixel performance is presented here, obtained through optimisation of three-colour polyfluorene-based OLEDs. A marked dependence of the pixel morphology quality on the cathode metal is observed, and the laser transfer fluence dependence is also analysed. The pixel device performances are compared to those of conventionally fabricated devices, and cathode effects have been examined in detail. The silver cathode pixels show more heterogeneous pixel morphologies and correspondingly poorer efficiency characteristics. The aluminium cathode pixels have greater green electroluminescent emission than both the silver cathode pixels and the conventionally fabricated aluminium devices, and the green emission has a fluence dependence for silver cathode pixels.

  5. Optimisation of warpage on plastic injection moulding part using response surface methodology (RSM) and genetic algorithm method (GA)

    NASA Astrophysics Data System (ADS)

    Miza, A. T. N. A.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.

    2017-09-01

    In this study, computer-aided engineering was used for injection moulding simulation. A design of experiments (DOE) was constructed according to a Latin square orthogonal array. The relationship between the injection moulding parameters and warpage was identified from the experimental data obtained. Response surface methodology (RSM) was used to validate the model accuracy. The RSM and GA methods were then combined to determine the optimum injection moulding process parameters. The optimisation of injection moulding is thereby largely improved, and the results show increased accuracy and reliability. The proposed method combining RSM and GA also contributes to minimising warpage.
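
    A minimal sketch of the combined idea is given below, under stated assumptions: a quadratic response surface is fitted to a small set of DOE runs (two hypothetical moulding parameters against warpage, with synthetic values), and the fitted surface is then searched with an evolutionary optimiser. SciPy's differential_evolution stands in for the genetic algorithm used in the study.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    # Synthetic DOE data: melt temperature (degC), packing pressure (MPa) -> warpage (mm)
    X = np.array([[220, 60], [220, 90], [250, 60], [250, 90], [235, 75],
                  [220, 75], [250, 75], [235, 60], [235, 90]], dtype=float)
    y = np.array([0.42, 0.35, 0.38, 0.35, 0.30, 0.37, 0.33, 0.36, 0.31])

    def quad_features(x1, x2):
        # Full quadratic response-surface model: 1, x1, x2, x1^2, x2^2, x1*x2
        return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

    beta, *_ = np.linalg.lstsq(quad_features(X[:, 0], X[:, 1]), y, rcond=None)

    def predicted_warpage(x):
        phi = np.array([1.0, x[0], x[1], x[0]**2, x[1]**2, x[0] * x[1]])
        return phi @ beta

    # Evolutionary search over the response surface (stand-in for the GA in the paper)
    result = differential_evolution(predicted_warpage, bounds=[(220, 250), (60, 90)], seed=1)
    print(result.x, result.fun)
    ```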

  6. Optimisation of groundwater level monitoring networks using geostatistical modelling based on the Spartan family variogram and a genetic algorithm method

    NASA Astrophysics Data System (ADS)

    Parasyris, Antonios E.; Spanoudaki, Katerina; Kampanis, Nikolaos A.

    2016-04-01

    Groundwater level monitoring networks provide essential information for water resources management, especially in areas with significant groundwater exploitation for agricultural and domestic use. Given the high maintenance costs of these networks, the development of tools that regulators can use for efficient network design is essential. In this work, a monitoring network optimisation tool is presented. The network optimisation tool couples geostatistical modelling based on the Spartan family variogram with a genetic algorithm method and is applied to Mires basin in Crete, Greece, an area of high socioeconomic and agricultural interest, which suffers from groundwater overexploitation leading to a dramatic decrease of groundwater levels. The purpose of the optimisation tool is to determine which wells to exclude from the monitoring network because they add little or no beneficial information to groundwater level mapping of the area. Unlike previous relevant investigations, the network optimisation tool presented here uses Ordinary Kriging with the recently-established non-differentiable Spartan variogram for groundwater level mapping, which, based on a previous geostatistical study in the area, leads to optimal groundwater level mapping. Seventy boreholes operate in the area for groundwater abstraction and water level monitoring. The Spartan variogram gives overall the most accurate groundwater level estimates followed closely by the power-law model. The geostatistical model is coupled to an integer genetic algorithm method programmed in MATLAB 2015a. The algorithm is used to find the set of wells whose removal leads to the minimum error between the original water level mapping using all the available wells in the network and the groundwater level mapping using the reduced well network (error is defined as the 2-norm of the difference between the original mapping matrix with 70 wells and the mapping matrix of the reduced well network). The solution to the optimization problem (the best wells to retain in the monitoring network) depends on the total number of wells removed; this number is a management decision. The water level monitoring network of Mires basin has been optimized 6 times by removing 5, 8, 12, 15, 20 and 25 wells from the original network. In order to achieve the optimum solution in the minimum possible computational time, a stall generations criterion was set for each optimisation scenario. An improvement made to the classic genetic algorithm was to adapt the mutation and crossover fractions according to the change in the mean fitness value. This introduces more randomness into reproduction when the solution is converging, to avoid local minima, and more educated reproduction (a higher crossover ratio) when the mean fitness value is changing strongly. The choice of the integer genetic algorithm in MATLAB 2015a restricts the addition of custom selection and crossover-mutation functions. Therefore, custom population and crossover-mutation-selection functions have been created to set the initial population type to custom and to allow the mutation and crossover probabilities to change with the convergence of the genetic algorithm, thus achieving higher accuracy. The application of the network optimisation tool to Mires basin indicates that 25 wells can be removed with a relatively small deterioration of the groundwater level map.
The results indicate the robustness of the network optimisation tool: Wells were removed from high well-density areas while preserving the spatial pattern of the original groundwater level map. Varouchakis, E. A. and D. T. Hristopulos (2013). "Improvement of groundwater level prediction in sparsely gauged basins using physical laws and local geographic features as auxiliary variables." Advances in Water Resources 52: 34-49.
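
    A minimal sketch of the fitness evaluation described above follows. The well coordinates and levels are synthetic, and a simple inverse-distance interpolator stands in for Ordinary Kriging with the Spartan variogram; only the 2-norm comparison between the full-network map and the reduced-network map follows the record.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    wells = rng.uniform(0, 10, size=(70, 2))                 # synthetic well coordinates (km)
    levels = 50 - 2 * wells[:, 0] + rng.normal(0, 1, 70)     # synthetic groundwater levels (m)
    gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
    grid = np.column_stack([gx.ravel(), gy.ravel()])

    def idw_map(xy, z, power=2.0):
        """Inverse-distance weighting map; stands in for Ordinary Kriging."""
        d = np.linalg.norm(grid[:, None, :] - xy[None, :, :], axis=2) + 1e-9
        w = d ** -power
        return (w @ z) / w.sum(axis=1)

    full_map = idw_map(wells, levels)

    def fitness(keep_mask):
        """2-norm error between the full-network map and the reduced-network map."""
        reduced = idw_map(wells[keep_mask], levels[keep_mask])
        return np.linalg.norm(full_map - reduced)

    # Example: score one candidate network with 25 wells removed (a GA would evolve keep_mask)
    mask = np.zeros(70, dtype=bool)
    mask[rng.choice(70, size=45, replace=False)] = True
    print(fitness(mask))
    ```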

  7. Monitoring the enrichment of virgin olive oil with natural antioxidants by using a new capillary electrophoresis method.

    PubMed

    Nevado, Juan José Berzas; Robledo, Virginia Rodríguez; Callado, Carolina Sánchez-Carnerero

    2012-07-15

    The enrichment of virgin olive oil (VOO) with natural antioxidants contained in various herbs (rosemary, thyme and oregano) was studied. Three different enrichment procedures were used for the solid-liquid extraction of antioxidants present in the herbs to VOO. One involved simply bringing the herbs into contact with the VOO for 190 days; another keeping the herb-VOO mixture under stirring at room temperature (25°C) for 11 days; and the third stirring at temperatures above room level (35-40°C). The efficiency of each procedure was assessed by using a reproducible, efficient, reliable analytical capillary zone electrophoresis (CZE) method to separate and determine selected phenolic compounds (rosmarinic and caffeic acid) in the oil. Prior to electrophoretic separation, the studied antioxidants were isolated from the VOO matrix by using an optimised preconcentration procedure based on solid phase extraction (SPE). The CZE method was optimised and validated. Copyright © 2012 Elsevier Ltd. All rights reserved.

  8. Optimising the efficiency of pulsed diode pumped Yb:YAG laser amplifiers for ns pulse generation.

    PubMed

    Ertel, K; Banerjee, S; Mason, P D; Phillips, P J; Siebold, M; Hernandez-Gomez, C; Collier, J C

    2011-12-19

    We present a numerical model of a pulsed, diode-pumped Yb:YAG laser amplifier for the generation of high energy ns-pulses. This model is used to explore how optical-to-optical efficiency depends on factors such as pump duration, pump spectrum, pump intensity, doping concentration, and operating temperature. We put special emphasis on finding ways to achieve high efficiency within the practical limitations imposed by real-world laser systems, such as limited pump brightness and limited damage fluence. We show that a particularly advantageous way of improving efficiency within those constraints is operation at cryogenic temperature. Based on the numerical findings we present a concept for a scalable amplifier based on an end-pumped, cryogenic, gas-cooled multi-slab architecture.

  9. Optimisation of Fabric Reinforced Polymer Composites Using a Variant of Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Axinte, Andrei; Taranu, Nicolae; Bejan, Liliana; Hudisteanu, Iuliana

    2017-12-01

    Fabric reinforced polymeric composites are high performance materials with a rather complex fabric geometry. Therefore, modelling this type of material is a cumbersome task, especially when efficient use is targeted. One of the most important issues of the design process is the optimisation of the individual laminae and of the laminated structure as a whole. In order to do that, a parametric model of the material has been defined, emphasising the many geometric variables that need to be correlated in the complex process of optimisation. The input parameters involved in this work include the widths or heights of the tows and the laminate stacking sequence, which are discrete variables, while the gaps between adjacent tows and the height of the neat matrix are continuous variables. This work is one of the first attempts to use a genetic algorithm (GA) to optimise the geometrical parameters of satin-reinforced multi-layer composites. Given the mixed types of the input parameters involved, an original software tool called SOMGA (Satin Optimisation with a Modified Genetic Algorithm) has been conceived and utilised in this work. The main goal is to find the best possible solution to the problem of designing a composite material that is able to withstand a given set of external in-plane loads. The optimisation process has been performed using a fitness function that analyses and compares the mechanical behaviour of different fabric-reinforced composites; the results are correlated with the ultimate strains, which demonstrate the efficiency of the composite structure.

  10. On the Computing Potential of Intracellular Vesicles

    PubMed Central

    Mayne, Richard; Adamatzky, Andrew

    2015-01-01

    Collision-based computing (CBC) is a form of unconventional computing in which travelling localisations represent data and conditional routing of signals determines the output state; collisions between localisations represent logical operations. We investigated patterns of Ca2+-containing vesicle distribution within a live organism, slime mould Physarum polycephalum, with confocal microscopy and observed them colliding regularly. Vesicles travel down cytoskeletal ‘circuitry’ and their collisions may result in reflection, fusion or annihilation. We demonstrate through experimental observations that naturally-occurring vesicle dynamics may be characterised as a computationally-universal set of Boolean logical operations and present a ‘vesicle modification’ of the archetypal CBC ‘billiard ball model’ of computation. We proceed to discuss the viability of intracellular vesicles as an unconventional computing substrate in which we delineate practical considerations for reliable vesicle ‘programming’ in both in vivo and in vitro vesicle computing architectures and present optimised designs for both single logical gates and combinatorial logic circuits based on cytoskeletal network conformations. The results presented here demonstrate the first characterisation of intracellular phenomena as collision-based computing and hence the viability of biological substrates for computing. PMID:26431435

  11. Optimised design for a 1 kJ diode-pumped solid-state laser system

    NASA Astrophysics Data System (ADS)

    Mason, Paul D.; Ertel, Klaus; Banerjee, Saumyabrata; Phillips, P. Jonathan; Hernandez-Gomez, Cristina; Collier, John L.

    2011-06-01

    A conceptual design for a kJ-class diode-pumped solid-state laser (DPSSL) system based on cryogenic gas-cooled multislab ceramic Yb:YAG amplifier technology has been developed at the STFC as a building block towards a MJ-class source for inertial fusion energy (IFE) projects such as HiPER. In this paper, we present an overview of an amplifier design optimised for efficient generation of 1 kJ nanosecond pulses at 10 Hz repetition rate. In order to confirm the viability of this technology, a prototype version of this amplifier scaled to deliver 10 J at 10 Hz, DiPOLE, is under development at the Central Laser Facility. A progress update on the status of this system is also presented.

  12. Energy efficiency in membrane bioreactors.

    PubMed

    Barillon, B; Martin Ruel, S; Langlais, C; Lazarova, V

    2013-01-01

    Energy consumption remains the key factor for the optimisation of the performance of membrane bioreactors (MBRs). This paper presents the results of the detailed energy audits of six full-scale MBRs operated by Suez Environnement in France, Spain and the USA based on on-site energy measurement and analysis of plant operation parameters and treatment performance. Specific energy consumption is compared for two different MBR configurations (flat sheet and hollow fibre membranes) and for plants with different design, loads and operation parameters. The aim of this project was to understand how the energy is consumed in MBR facilities and under which operating conditions, in order to finally provide guidelines and recommended practices for optimisation of MBR operation and design to reduce energy consumption and environmental impacts.

  13. 3D Reconstruction of human bones based on dictionary learning.

    PubMed

    Zhang, Binkai; Wang, Xiang; Liang, Xiao; Zheng, Jinjin

    2017-11-01

    An effective method for reconstructing a 3D model of human bones from computed tomography (CT) image data based on dictionary learning is proposed. In this study, the dictionary comprises the vertices of triangular meshes, and the sparse coefficient matrix indicates the connectivity information. For better reconstruction performance, we proposed a balance coefficient between the approximation and regularisation terms and a method for optimisation. Moreover, we applied a local updating strategy and a mesh-optimisation method to update the dictionary and the sparse matrix, respectively. The two updating steps are iterated alternately until the objective function converges. Thus, a reconstructed mesh could be obtained with high accuracy and regularisation. The experimental results show that the proposed method has the potential to obtain high precision and high-quality triangular meshes for rapid prototyping, medical diagnosis, and tissue engineering. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.

  14. GNAQPMS v1.1: accelerating the Global Nested Air Quality Prediction Modeling System (GNAQPMS) on Intel Xeon Phi processors

    NASA Astrophysics Data System (ADS)

    Wang, Hui; Chen, Huansheng; Wu, Qizhong; Lin, Junmin; Chen, Xueshun; Xie, Xinwei; Wang, Rongrong; Tang, Xiao; Wang, Zifa

    2017-08-01

    The Global Nested Air Quality Prediction Modeling System (GNAQPMS) is the global version of the Nested Air Quality Prediction Modeling System (NAQPMS), which is a multi-scale chemical transport model used for air quality forecast and atmospheric environmental research. In this study, we present the porting and optimisation of GNAQPMS on a second-generation Intel Xeon Phi processor, codenamed Knights Landing (KNL). Compared with the first-generation Xeon Phi coprocessor (codenamed Knights Corner, KNC), KNL has many new hardware features such as a bootable processor, high-performance in-package memory and ISA compatibility with Intel Xeon processors. In particular, we describe the five optimisations we applied to the key modules of GNAQPMS, including the CBM-Z gas-phase chemistry, advection, convection and wet deposition modules. These optimisations work well on both the KNL 7250 processor and the Intel Xeon E5-2697 V4 processor. They include (1) updating the pure Message Passing Interface (MPI) parallel mode to the hybrid parallel mode with MPI and OpenMP in the emission, advection, convection and gas-phase chemistry modules; (2) fully employing the 512 bit wide vector processing units (VPUs) on the KNL platform; (3) reducing unnecessary memory access to improve cache efficiency; (4) reducing the thread local storage (TLS) in the CBM-Z gas-phase chemistry module to improve its OpenMP performance; and (5) changing the global communication from writing/reading interface files to MPI functions to improve the performance and the parallel scalability. These optimisations greatly improved the GNAQPMS performance. The same optimisations also work well for the Intel Xeon Broadwell processor, specifically E5-2697 v4. Compared with the baseline version of GNAQPMS, the optimised version was 3.51 × faster on KNL and 2.77 × faster on the CPU. Moreover, the optimised version ran at 26 % lower average power on KNL than on the CPU. With the combined performance and energy improvement, the KNL platform was 37.5 % more efficient on power consumption compared with the CPU platform. The optimisations also enabled much further parallel scalability on both the CPU cluster and the KNL cluster scaled to 40 CPU nodes and 30 KNL nodes, with a parallel efficiency of 70.4 and 42.2 %, respectively.

  15. Collaborative development for setup, execution, sharing and analytics of complex NMR experiments.

    PubMed

    Irvine, Alistair G; Slynko, Vadim; Nikolaev, Yaroslav; Senthamarai, Russell R P; Pervushin, Konstantin

    2014-02-01

    Factory settings of NMR pulse sequences are rarely ideal for every scenario in which they are utilised. The optimisation of NMR experiments has for many years been performed locally, with implementations often specific to an individual spectrometer. Furthermore, these optimised experiments are normally retained solely for the use of an individual laboratory, spectrometer or even single user. Here we introduce a web-based service that provides a database for the deposition, annotation and optimisation of NMR experiments. The application uses a Wiki environment to enable the collaborative development of pulse sequences. It also provides a flexible mechanism to automatically generate NMR experiments from deposited sequences. Multidimensional NMR experiments of proteins and other macromolecules consume significant resources, in terms of both spectrometer time and effort required to analyse the results. Systematic analysis of simulated experiments can enable optimal allocation of NMR resources for structural analysis of proteins. Our web-based application (http://nmrplus.org) provides all the necessary information, includes the auxiliaries (waveforms, decoupling sequences etc.), for analysis of experiments by accurate numerical simulation of multidimensional NMR experiments. The online database of the NMR experiments, together with a systematic evaluation of their sensitivity, provides a framework for selection of the most efficient pulse sequences. The development of such a framework provides a basis for the collaborative optimisation of pulse sequences by the NMR community, with the benefits of this collective effort being available to the whole community. Copyright © 2013 Elsevier Inc. All rights reserved.

  16. Fast and fuzzy multi-objective radiotherapy treatment plan generation for head and neck cancer patients with the lexicographic reference point method (LRPM)

    NASA Astrophysics Data System (ADS)

    van Haveren, Rens; Ogryczak, Włodzimierz; Verduijn, Gerda M.; Keijzer, Marleen; Heijmen, Ben J. M.; Breedveld, Sebastiaan

    2017-06-01

    Previously, we have proposed Erasmus-iCycle, an algorithm for fully automated IMRT plan generation based on prioritised (lexicographic) multi-objective optimisation with the 2-phase ɛ-constraint (2pɛc) method. For each patient, the output of Erasmus-iCycle is a clinically favourable, Pareto optimal plan. The 2pɛc method uses a list of objective functions that are consecutively optimised, following a strict, user-defined prioritisation. The novel lexicographic reference point method (LRPM) is capable of solving multi-objective problems in a single optimisation, using a fuzzy prioritisation of the objectives. Trade-offs are made globally, aiming for large favourable gains for lower prioritised objectives at the cost of only slight degradations for higher prioritised objectives, or vice versa. In this study, the LRPM is validated for 15 head and neck cancer patients receiving bilateral neck irradiation. The generated plans using the LRPM are compared with the plans resulting from the 2pɛc method. Both methods were capable of automatically generating clinically relevant treatment plans for all patients. For some patients, the LRPM allowed large favourable gains in some treatment plan objectives at the cost of only small degradations for the others. Moreover, because of the applied single optimisation instead of multiple optimisations, the LRPM reduced the average computation time from 209.2 to 9.5 min, a speed-up factor of 22 relative to the 2pɛc method.
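
    The 2pɛc idea of strict prioritisation can be illustrated with a hedged sketch: optimise the highest-priority objective first, then constrain it to (nearly) its optimum while optimising the next objective. The two quadratic functions below are toy stand-ins for treatment-plan objectives, not part of Erasmus-iCycle.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy stand-ins for two prioritised plan objectives over decision variables x
    def f_high(x):   # higher priority (e.g. a target-coverage surrogate)
        return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

    def f_low(x):    # lower priority (e.g. an organ-at-risk dose surrogate)
        return x[0] ** 2 + x[1] ** 2

    x0 = np.zeros(2)

    # Phase 1: optimise the highest-priority objective alone
    res1 = minimize(f_high, x0)
    bound = res1.fun + 1e-3          # allow only a slight degradation (the "slack")

    # Phase 2: optimise the next objective subject to the phase-1 bound
    cons = [{"type": "ineq", "fun": lambda x: bound - f_high(x)}]
    res2 = minimize(f_low, res1.x, constraints=cons)
    print(res1.x, res2.x, f_high(res2.x), f_low(res2.x))
    ```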

  17. Optimisation of solar synoptic observations

    NASA Astrophysics Data System (ADS)

    Klvaña, Miroslav; Sobotka, Michal; Švanda, Michal

    2012-09-01

    The development of instrumental and computer technologies is accompanied by steadily increasing needs for archiving large data volumes. The current trend to meet this requirement includes data compression and the growth of storage capacities. This approach, however, has technical and practical limits. A further reduction of the archived data volume can be achieved by means of an optimisation of the archiving that consists of selecting data without losing useful information. We describe a method of optimised archiving of solar images, based on the selection of images that contain new information. The new information content is evaluated by means of the analysis of changes detected in the images. We present characteristics of different kinds of image changes and divide them into fictitious changes with a disturbing effect and real changes that provide new information. In block diagrams describing the selection and archiving, we demonstrate the influence of clouds, the recording of images during an active event on the Sun, including a period before the event onset, and the archiving of the long-term history of solar activity. The described optimisation technique is not suitable for helioseismology, because it does not conserve the uniform time step in the archived sequence and removes the information about solar oscillations. In the case of long-term synoptic observations, the optimised archiving can save a large amount of storage capacity. The actual capacity saving will depend on the setting of the change-detection sensitivity and on the capability to exclude the fictitious changes.

  18. Statistical optimisation of diclofenac sustained release pellets coated with polymethacrylic films.

    PubMed

    Kramar, A; Turk, S; Vrecer, F

    2003-04-30

    The objective of the present study was to evaluate three formulation parameters for the application of polymethacrylic films from aqueous dispersions in order to obtain multiparticulate sustained release of diclofenac sodium. Film coating of pellet cores was performed in a laboratory fluid bed apparatus. The chosen independent variables, i.e. the concentration of plasticizer (triethyl citrate), methacrylate polymers ratio (Eudragit RS:Eudragit RL) and the quantity of coating dispersion were optimised with a three-factor, three-level Box-Behnken design. The chosen dependent variables were cumulative percentage values of diclofenac dissolved in 3, 4 and 6 h. Based on the experimental design, different diclofenac release profiles were obtained. Response surface plots were used to relate the dependent and the independent variables. The optimisation procedure generated an optimum of 40% release in 3 h. The levels of plasticizer concentration, quantity of coating dispersion and polymer to polymer ratio (Eudragit RS:Eudragit RL) were 25% w/w, 400 g and 3/1, respectively. The optimised formulation prepared according to computer-determined levels provided a release profile, which was close to the predicted values. We also studied thermal and surface characteristics of the polymethacrylic films to understand the influence of plasticizer concentration on the drug release from the pellets.

  19. Computationally efficient real-time interpolation algorithm for non-uniform sampled biosignals

    PubMed Central

    Eftekhar, Amir; Kindt, Wilko; Constandinou, Timothy G.

    2016-01-01

    This Letter presents a novel, computationally efficient interpolation method that has been optimised for use in electrocardiogram baseline drift removal. In the authors’ previous Letter three isoelectric baseline points per heartbeat are detected, and here utilised as interpolation points. As an extension from linear interpolation, their algorithm segments the interpolation interval and utilises different piecewise linear equations. Thus, the algorithm produces a linear curvature that is computationally efficient while interpolating non-uniform samples. The proposed algorithm is tested using sinusoids with different fundamental frequencies from 0.05 to 0.7 Hz and also validated with real baseline wander data acquired from the Massachusetts Institute of Technology University and Boston's Beth Israel Hospital (MIT-BIH) Noise Stress Database. The synthetic data results show a root mean square (RMS) error of 0.9 μV (mean), 0.63 μV (median) and 0.6 μV (standard deviation) per heartbeat on a 1 mVp–p 0.1 Hz sinusoid. On real data, they obtain an RMS error of 10.9 μV (mean), 8.5 μV (median) and 9.0 μV (standard deviation) per heartbeat. Cubic spline interpolation and linear interpolation, on the other hand, show 10.7 μV, 11.6 μV (mean), 7.8 μV, 8.9 μV (median) and 9.8 μV, 9.3 μV (standard deviation) per heartbeat. PMID:27382478

  20. Computationally efficient real-time interpolation algorithm for non-uniform sampled biosignals.

    PubMed

    Guven, Onur; Eftekhar, Amir; Kindt, Wilko; Constandinou, Timothy G

    2016-06-01

    This Letter presents a novel, computationally efficient interpolation method that has been optimised for use in electrocardiogram baseline drift removal. In the authors' previous Letter three isoelectric baseline points per heartbeat are detected, and here utilised as interpolation points. As an extension from linear interpolation, their algorithm segments the interpolation interval and utilises different piecewise linear equations. Thus, the algorithm produces a linear curvature that is computationally efficient while interpolating non-uniform samples. The proposed algorithm is tested using sinusoids with different fundamental frequencies from 0.05 to 0.7 Hz and also validated with real baseline wander data acquired from the Massachusetts Institute of Technology University and Boston's Beth Israel Hospital (MIT-BIH) Noise Stress Database. The synthetic data results show a root mean square (RMS) error of 0.9 μV (mean), 0.63 μV (median) and 0.6 μV (standard deviation) per heartbeat on a 1 mVp-p 0.1 Hz sinusoid. On real data, they obtain an RMS error of 10.9 μV (mean), 8.5 μV (median) and 9.0 μV (standard deviation) per heartbeat. Cubic spline interpolation and linear interpolation, on the other hand, show 10.7 μV, 11.6 μV (mean), 7.8 μV, 8.9 μV (median) and 9.8 μV, 9.3 μV (standard deviation) per heartbeat.
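
    A minimal sketch of the general approach follows: estimate the baseline by piecewise linear interpolation through non-uniformly spaced isoelectric points and subtract it. The signal is synthetic, the 'isoelectric' samples are simply drawn from the known drift, and numpy's interp stands in for the authors' segmented piecewise-linear formulation.

    ```python
    import numpy as np

    fs = 360                                   # sampling rate (Hz)
    t = np.arange(0, 10, 1 / fs)
    ecg = 0.1 * np.sin(2 * np.pi * 1.2 * t)    # toy "cardiac" content
    wander = 0.3 * np.sin(2 * np.pi * 0.1 * t) # baseline drift to be removed
    signal = ecg + wander

    # Non-uniformly spaced isoelectric points (here taken from the known drift;
    # in the Letter they are detected, three per heartbeat)
    idx = np.sort(np.random.default_rng(0).choice(len(t), size=30, replace=False))
    baseline_est = np.interp(t, t[idx], wander[idx])   # piecewise linear baseline

    corrected = signal - baseline_est
    rms_error = np.sqrt(np.mean((corrected - ecg) ** 2))
    print(f"RMS baseline error: {rms_error * 1e6:.1f} uV")
    ```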

  1. Conjugate gradient minimisation approach to generating holographic traps for ultracold atoms.

    PubMed

    Harte, Tiffany; Bruce, Graham D; Keeling, Jonathan; Cassettari, Donatella

    2014-11-03

    Direct minimisation of a cost function can in principle provide a versatile and highly controllable route to computational hologram generation. Here we show that the careful design of cost functions, combined with numerically efficient conjugate gradient minimisation, establishes a practical method for the generation of holograms for a wide range of target light distributions. This results in a guided optimisation process, with a crucial advantage illustrated by the ability to circumvent optical vortex formation during hologram calculation. We demonstrate the implementation of the conjugate gradient method for both discrete and continuous intensity distributions and discuss its applicability to optical trapping of ultracold atoms.
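
    A hedged sketch of the core idea: parametrise the hologram phase, define a cost comparing the far-field intensity (obtained by a Fourier transform) with a target pattern, and minimise it with conjugate gradients. The tiny grid, the target, and the use of numerical gradients are simplifications for illustration; the paper's contribution lies in carefully designed cost functions with analytic gradients.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    n = 16                                     # tiny spatial light modulator for illustration
    target = np.zeros((n, n))
    target[6:10, 6:10] = 1.0                   # desired intensity pattern
    target /= target.sum()

    def cost(phase_flat):
        phase = phase_flat.reshape(n, n)
        field = np.exp(1j * phase)             # unit-amplitude field at the SLM
        far = np.fft.fft2(field) / n           # far-field via Fourier transform
        intensity = np.abs(far) ** 2
        intensity /= intensity.sum()
        return np.sum((intensity - target) ** 2)

    x0 = np.random.default_rng(1).uniform(0, 2 * np.pi, n * n)
    res = minimize(cost, x0, method="CG")      # conjugate-gradient minimisation
    print(cost(x0), res.fun)
    ```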

  2. The effect of resistance level and stability demands on recruitment patterns and internal loading of spine in dynamic flexion and extension using a simple trunk model.

    PubMed

    Zeinali-Davarani, Shahrokh; Shirazi-Adl, Aboulfazl; Dariush, Behzad; Hemami, Hooshang; Parnianpour, Mohamad

    2011-07-01

    The effects of external resistance on the recruitment of trunk muscles in sagittal movements and the coactivation mechanism to maintain spinal stability were investigated using a simple computational model of iso-resistive spine sagittal movements. Neural excitation of muscles was obtained using an inverse dynamics approach along with a stability-based optimisation. The trunk flexion and extension movements between 60° flexion and the upright posture against various resistance levels were simulated. Incorporation of the stability constraint in the optimisation algorithm required higher antagonistic activities for all resistance levels, mostly close to the upright position. Extension movements showed higher coactivation with higher resistance, whereas flexion movements demonstrated lower coactivation, indicating a greater stability demand in backward extension movements against higher resistance in the neighbourhood of the upright posture. Optimal extension profiles based on minimum jerk, work and power had distinct kinematic profiles, which led to recruitment patterns with different timing and amplitude of activation.

  3. Predicting emergency coronary artery bypass graft following PCI: application of a computational model to refer patients to hospitals with and without onsite surgical backup

    PubMed Central

    Syed, Zeeshan; Moscucci, Mauro; Share, David; Gurm, Hitinder S

    2015-01-01

    Background Clinical tools to stratify patients for emergency coronary artery bypass graft (ECABG) after percutaneous coronary intervention (PCI) create the opportunity to selectively assign patients undergoing procedures to hospitals with and without onsite surgical facilities for dealing with potential complications while balancing load across providers. The goal of our study was to investigate the feasibility of a computational model directly optimised for cohort-level performance to predict ECABG in PCI patients for this application. Methods Blue Cross Blue Shield of Michigan Cardiovascular Consortium registry data with 69 pre-procedural and angiographic risk variables from 68 022 PCI procedures in 2004–2007 were used to develop a support vector machine (SVM) model for ECABG. The SVM model was optimised for the area under the receiver operating characteristic curve (AUROC) at the level of the training cohort and validated on 42 310 PCI procedures performed in 2008–2009. Results There were 87 cases of ECABG (0.21%) in the validation cohort. The SVM model achieved an AUROC of 0.81 (95% CI 0.76 to 0.86). Patients in the predicted top decile were at a significantly increased risk relative to the remaining patients (OR 9.74, 95% CI 6.39 to 14.85, p<0.001) for ECABG. The SVM model optimised for the AUROC on the training cohort significantly improved discrimination, net reclassification and calibration over logistic regression and traditional SVM classification optimised for univariate performance. Conclusions Computational risk stratification directly optimising cohort-level performance holds the potential of high levels of discrimination for ECABG following PCI. This approach has value in selectively referring PCI patients to hospitals with and without onsite surgery. PMID:26688738

  4. Faceting for direction-dependent spectral deconvolution

    NASA Astrophysics Data System (ADS)

    Tasse, C.; Hugo, B.; Mirmont, M.; Smirnov, O.; Atemkeng, M.; Bester, L.; Hardcastle, M. J.; Lakhoo, R.; Perkins, S.; Shimwell, T.

    2018-04-01

    The new generation of radio interferometers is characterized by high sensitivity, wide fields of view and large fractional bandwidth. To synthesize the deepest images enabled by the high dynamic range of these instruments requires us to take into account the direction-dependent Jones matrices, while estimating the spectral properties of the sky in the imaging and deconvolution algorithms. In this paper we discuss and implement a wideband wide-field spectral deconvolution framework (DDFacet) based on image plane faceting, that takes into account generic direction-dependent effects. Specifically, we present a wide-field co-planar faceting scheme, and discuss the various effects that need to be taken into account to solve for the deconvolution problem (image plane normalization, position-dependent Point Spread Function, etc). We discuss two wideband spectral deconvolution algorithms based on hybrid matching pursuit and sub-space optimisation respectively. A few interesting technical features incorporated in our imager are discussed, including baseline dependent averaging, which has the effect of improving computing efficiency. The version of DDFacet presented here can account for any externally defined Jones matrices and/or beam patterns.

  5. An effective pseudospectral method for constraint dynamic optimisation problems with characteristic times

    NASA Astrophysics Data System (ADS)

    Xiao, Long; Liu, Xinggao; Ma, Liang; Zhang, Zeyin

    2018-03-01

    Dynamic optimisation problems with characteristic times, which arise widely in many areas, are at the frontier of dynamic optimisation research. This paper considers a class of dynamic optimisation problems with constraints that depend on interior points, either fixed or variable, and presents a novel direct pseudospectral method using Legendre-Gauss (LG) collocation points for solving them. The formula for the state at the terminal time of each subdomain is derived, which results in a linear combination of the state at the LG points in the subdomains so as to avoid the complex nonlinear integral. The sensitivities of the state at the collocation points with respect to the variable characteristic times are derived to improve the efficiency of the method. Three well-known characteristic-time dynamic optimisation problems are solved and compared in detail against methods reported in the literature. The research results show the effectiveness of the proposed method.
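
    The terminal-state formula described above amounts to Gauss quadrature over a subdomain: the state at the end of the interval is the initial state plus a weighted linear combination of the dynamics evaluated at the LG points. The sketch below checks this on a simple linear ODE, taking the collocation states from the exact solution purely for illustration; it is not the full pseudospectral transcription.

    ```python
    import numpy as np
    from numpy.polynomial.legendre import leggauss

    # Subdomain [t0, tf]; dynamics dx/dt = f(x) = -x with x(t0) = 1, exact x(t) = exp(-(t - t0))
    t0, tf = 0.0, 1.0
    f = lambda x: -x

    nodes, weights = leggauss(5)               # Legendre-Gauss points and weights on [-1, 1]
    tau = t0 + (tf - t0) * (nodes + 1) / 2     # collocation times mapped to the subdomain

    # In a pseudospectral transcription the states at the LG points are unknowns;
    # here they are taken from the exact solution just to check the terminal-state formula.
    x_at_nodes = np.exp(-(tau - t0))

    # Terminal state as a linear combination of the dynamics at the LG points
    x_tf = 1.0 + (tf - t0) / 2 * np.sum(weights * f(x_at_nodes))
    print(x_tf, np.exp(-(tf - t0)))            # quadrature result vs exact value
    ```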

  6. Neural network feedforward control of a closed-circuit wind tunnel

    NASA Astrophysics Data System (ADS)

    Sutcliffe, Peter

    Accurate control of wind-tunnel test conditions can be dramatically enhanced using feedforward control architectures which allow operating conditions to be maintained at a desired setpoint through the use of mathematical models as the primary source of prediction. However, as the desired accuracy of the feedforward prediction increases, the model complexity also increases, so that an ever increasing computational load is incurred. This drawback can be avoided by employing a neural network that is trained offline using the output of a high fidelity wind-tunnel mathematical model, so that the neural network can rapidly reproduce the predictions of the model with a greatly reduced computational overhead. A novel neural network database generation method, developed through the use of fractional factorial arrays, was employed such that a neural network can accurately predict wind-tunnel parameters across a wide range of operating conditions whilst trained upon a highly efficient database. The subsequent network was incorporated into a Neural Network Model Predictive Control (NNMPC) framework to allow an optimised output schedule capable of providing accurate control of the wind-tunnel operating parameters. Facilitation of an optimised path through the solution space is achieved through the use of a chaos optimisation algorithm such that a more globally optimum solution is likely to be found with less computational expense than the gradient descent method. The parameters associated with the NNMPC such as the control horizon are determined through the use of a Taguchi methodology enabling the minimum number of experiments to be carried out to determine the optimal combination. The resultant NNMPC scheme was employed upon the Hessert Low Speed Wind Tunnel at the University of Notre Dame to control the test-section temperature such that it follows a pre-determined reference trajectory during changes in the test-section velocity. Experimental testing revealed that the derived NNMPC controller provided an excellent level of control over the test-section temperature in adherence to a reference trajectory even when faced with unforeseen disturbances such as rapid changes in the operating environment.

  7. Exploiting the chaotic behaviour of atmospheric models with reconfigurable architectures

    NASA Astrophysics Data System (ADS)

    Russell, Francis P.; Düben, Peter D.; Niu, Xinyu; Luk, Wayne; Palmer, T. N.

    2017-12-01

    Reconfigurable architectures are becoming mainstream: Amazon, Microsoft and IBM are supporting such architectures in their data centres. The computationally intensive nature of atmospheric modelling is an attractive target for hardware acceleration using reconfigurable computing. Performance of hardware designs can be improved through the use of reduced-precision arithmetic, but maintaining appropriate accuracy is essential. We explore reduced-precision optimisation for simulating chaotic systems, targeting atmospheric modelling, in which even minor changes in arithmetic behaviour will cause simulations to diverge quickly. The possibility of equally valid simulations having differing outcomes means that standard techniques for comparing numerical accuracy are inappropriate. We use the Hellinger distance to compare statistical behaviour between reduced-precision CPU implementations to guide reconfigurable designs of a chaotic system, then analyse accuracy, performance and power efficiency of the resulting implementations. Our results show that with only a limited loss in accuracy corresponding to less than 10% uncertainty in input parameters, the throughput and energy efficiency of a single-precision chaotic system implemented on a Xilinx Virtex-6 SX475T Field Programmable Gate Array (FPGA) can be more than doubled.
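
    A minimal sketch of the comparison step: integrate a chaotic system at two precisions and measure the Hellinger distance between histograms of one variable. The Lorenz-63 system stands in for the atmospheric model and float32 stands in for reduced-precision FPGA arithmetic; the step counts and bins are illustrative.

    ```python
    import numpy as np

    def lorenz_histogram(dtype, steps=100_000, dt=1e-3, bins=np.linspace(-25, 25, 51)):
        """Histogram of the Lorenz-63 x variable integrated at a given precision."""
        s, r, b = dtype(10.0), dtype(28.0), dtype(8.0 / 3.0)
        x, y, z = dtype(1.0), dtype(1.0), dtype(1.0)
        dt = dtype(dt)
        xs = np.empty(steps)
        for i in range(steps):
            dx, dy, dz = s * (y - x), x * (r - z) - y, x * y - b * z
            x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
            xs[i] = x
        hist, _ = np.histogram(xs[steps // 10:], bins=bins)   # discard spin-up
        return hist / hist.sum()

    p = lorenz_histogram(np.float64)           # reference precision
    q = lorenz_histogram(np.float32)           # stand-in for reduced precision

    hellinger = np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))
    print(f"Hellinger distance between precisions: {hellinger:.4f}")
    ```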

  8. Computational intelligence-based polymerase chain reaction primer selection based on a novel teaching-learning-based optimisation.

    PubMed

    Cheng, Yu-Huei

    2014-12-01

    Specific primers play an important role in polymerase chain reaction (PCR) experiments, and therefore it is essential to find specific primers of outstanding quality. Unfortunately, many PCR constraints must be inspected simultaneously, which makes specific primer selection difficult and time-consuming. This paper introduces a novel computational intelligence-based method, Teaching-Learning-Based Optimisation, to select specific and feasible primers. Runs were performed for specified PCR product lengths of 150-300 bp and 500-800 bp with three melting temperature formulae: Wallace's formula, Bolton and McCarthy's formula, and SantaLucia's formula. The authors calculate the optimal frequency to estimate the quality of primer selection based on a total of 500 runs for 50 random nucleotide sequences of 'Homo species' retrieved from the National Center for Biotechnology Information. The method was then fairly compared with the genetic algorithm (GA) and memetic algorithm (MA) for primer selection in the literature. The results show that the method easily found suitable primers satisfying the set primer constraints and performed better than the GA and the MA. Furthermore, the method was also compared with the common method Primer3 in terms of method type, primer presentation, parameter settings, speed and memory usage. In conclusion, it is an interesting primer selection method and a valuable tool for automatic high-throughput analysis. In the future, the usage of the primers in the wet lab needs to be validated carefully to increase the reliability of the method.
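
    A hedged sketch of the Teaching-Learning-Based Optimisation loop itself (a teacher phase pulling the class towards the best solution and a learner phase of pairwise interactions) is given below on a generic continuous test function. Real primer selection would replace the test fitness with the PCR constraints (melting temperature, GC content, product length and so on).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sphere(x):                      # generic test fitness (minimisation)
        return np.sum(x ** 2, axis=-1)

    pop_size, dim, iters = 20, 5, 100
    lo, hi = -5.0, 5.0
    pop = rng.uniform(lo, hi, size=(pop_size, dim))

    for _ in range(iters):
        fit = sphere(pop)
        teacher = pop[np.argmin(fit)]
        # Teacher phase: move the class towards the teacher, away from the class mean
        tf = rng.integers(1, 3)                       # teaching factor, 1 or 2
        new = pop + rng.random((pop_size, dim)) * (teacher - tf * pop.mean(axis=0))
        new = np.clip(new, lo, hi)
        improve = sphere(new) < fit
        pop[improve] = new[improve]

        # Learner phase: each learner interacts with a random partner
        fit = sphere(pop)
        partners = rng.permutation(pop_size)
        direction = np.where((fit < fit[partners])[:, None],
                             pop - pop[partners], pop[partners] - pop)
        new = np.clip(pop + rng.random((pop_size, dim)) * direction, lo, hi)
        improve = sphere(new) < fit
        pop[improve] = new[improve]

    print("best fitness:", sphere(pop).min())
    ```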

  9. A simulation-optimization model for effective water resources management in the coastal zone

    NASA Astrophysics Data System (ADS)

    Spanoudaki, Katerina; Kampanis, Nikolaos

    2015-04-01

    Coastal areas are the most densely-populated areas in the world. Consequently water demand is high, posing great pressure on fresh water resources. Climatic change and its direct impacts on meteorological variables (e.g. precipitation) and indirect impact on sea level rise, as well as anthropogenic pressures (e.g. groundwater abstraction), are strong drivers causing groundwater salinisation and subsequently affecting coastal wetlands salinity with adverse effects on the corresponding ecosystems. Coastal zones are a difficult hydrologic environment to represent with a mathematical model due to the large number of contributing hydrologic processes and variable-density flow conditions. Simulation of sea level rise and tidal effects on aquifer salinisation and accurate prediction of interactions between coastal waters, groundwater and neighbouring wetlands requires the use of integrated surface water-groundwater mathematical models. In the past few decades several computer codes have been developed to simulate coupled surface and groundwater flow. However, most integrated surface water-groundwater models are based on the assumption of constant fluid density and therefore their applicability to coastal regions is questionable. Thus, most of the existing codes are not well-suited to represent surface water-groundwater interactions in coastal areas. To this end, the 3D integrated surface water-groundwater model IRENE (Spanoudaki et al., 2009; Spanoudaki, 2010) has been modified in order to simulate surface water-groundwater flow and salinity interactions in the coastal zone. IRENE, in its original form, couples the 3D shallow water equations to the equations describing 3D saturated groundwater flow of constant density. A semi-implicit finite difference scheme is used to solve the surface water flow equations, while a fully implicit finite difference scheme is used for the groundwater equations. Pollution interactions are simulated by coupling the advection-diffusion equation describing the fate and transport of contaminants introduced in a 3D turbulent flow field to the partial differential equation describing the fate and transport of contaminants in 3D transient groundwater flow systems. The model has been further developed to include the effects of density variations on surface water and groundwater flow, while the already built-in solute transport capabilities are used to simulate salinity interactions. The refined model is based on the finite volume method using a cell-centred structured grid, providing thus flexibility and accuracy in simulating irregular boundary geometries. For addressing water resources management problems, simulation models are usually externally coupled with optimisation-based management models. However this usually requires a very large number of iterations between the optimisation and simulation models in order to obtain the optimal management solution. As an alternative approach, for improved computational efficiency, an Artificial Neural Network (ANN) is trained as an approximate simulator of IRENE. The trained ANN is then linked to a Genetic Algorithm (GA) based optimisation model for managing salinisation problems in the coastal zone. The linked simulation-optimisation model is applied to a hypothetical study area for performance evaluation. 
Acknowledgement The work presented in this paper has been funded by the Greek State Scholarships Foundation (IKY), Fellowships of Excellence for Postdoctoral Studies (Siemens Program), 'A simulation-optimization model for assessing the best practices for the protection of surface water and groundwater in the coastal zone', (2013 - 2015). References Spanoudaki, K., Stamou, A.I. and Nanou-Giannarou, A. (2009). Development and verification of a 3-D integrated surface water-groundwater model. Journal of Hydrology, 375 (3-4), 410-427. Spanoudaki, K. (2010). Integrated numerical modelling of surface water groundwater systems (in Greek). Ph.D. Thesis, National Technical University of Athens, Greece.
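
    A minimal sketch of the surrogate-assisted optimisation idea described in this record: an ANN is trained to mimic an expensive simulator, and a simple GA then searches the ANN instead of the simulator. The "simulator", variable bounds and GA settings below are toy stand-ins, not the IRENE model or the authors' configuration.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        def expensive_simulator(x):
            """Placeholder for a costly coupled surface water-groundwater run."""
            return np.sin(3 * x[:, 0]) + 0.5 * (x[:, 1] - 0.3) ** 2

        # 1) Build a training set from offline simulator runs.
        X_train = rng.uniform(0.0, 1.0, size=(200, 2))
        y_train = expensive_simulator(X_train)

        # 2) Train the ANN surrogate.
        surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
        surrogate.fit(X_train, y_train)

        # 3) Simple GA that minimises the surrogate prediction instead of the simulator.
        pop = rng.uniform(0.0, 1.0, size=(40, 2))
        for generation in range(60):
            fitness = surrogate.predict(pop)
            parents = pop[np.argsort(fitness)[:20]]             # truncation selection
            children = parents[rng.integers(0, 20, 20)].copy()  # clone + mutate
            children += rng.normal(0.0, 0.05, children.shape)
            pop = np.clip(np.vstack([parents, children]), 0.0, 1.0)

        best = pop[np.argmin(surrogate.predict(pop))]
        print("surrogate optimum:", best, "verified:", expensive_simulator(best[None, :])[0])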

  10. Cost evaluation to optimise radiation therapy implementation in different income settings: A time-driven activity-based analysis.

    PubMed

    Van Dyk, Jacob; Zubizarreta, Eduardo; Lievens, Yolande

    2017-11-01

    With increasing recognition of growing cancer incidence globally, efficient means of expanding radiotherapy capacity is imperative, and understanding the factors impacting human and financial needs is valuable. A time-driven activity-based costing analysis was performed, using a base case of 2-machine departments, with defined cost inputs and operating parameters. Four income groups were analysed, ranging from low to high income. Scenario analyses included department size, operating hours, fractionation, treatment complexity, efficiency, and centralised versus decentralised care. The base case cost/course is US$5,368 in HICs, US$2,028 in LICs; the annual operating cost is US$4,595,000 and US$1,736,000, respectively. Economies of scale show cost/course decreasing with increasing department size, mainly related to the equipment cost and most prominent up to 3 linacs. The cost in HICs is two or three times as high as in U-MICs or LICs, respectively. Decreasing operating hours below 8h/day has a dramatic impact on the cost/course. IMRT increases the cost/course by 22%. Centralising preparatory activities has a moderate impact on the costs. The results indicate trends that are useful for optimising local and regional circumstances. This methodology can provide input into a uniform and accepted approach to evaluating the cost of radiotherapy. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
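
    The core of a time-driven activity-based costing calculation can be sketched as below: the cost of a treatment course is the sum, over activities, of the minutes consumed multiplied by the capacity cost rate of the resource used. All resource costs, capacities and activity times here are illustrative assumptions, not the paper's cost inputs.

        resources = {  # annual cost and practical capacity in minutes per year (assumed)
            "linac":    {"annual_cost": 600_000, "capacity_min": 8 * 60 * 250},
            "RT_staff": {"annual_cost": 400_000, "capacity_min": 4 * 8 * 60 * 250},
        }

        course_activities = [  # (resource, minutes consumed per treatment course) - illustrative
            ("linac", 20 * 15),     # 20 fractions x 15 min machine time
            ("RT_staff", 20 * 30),  # staff time across the same fractions
        ]

        def cost_per_course():
            total = 0.0
            for resource, minutes in course_activities:
                r = resources[resource]
                rate = r["annual_cost"] / r["capacity_min"]  # capacity cost rate per minute
                total += minutes * rate
            return total

        print(f"illustrative cost per course: {cost_per_course():,.0f}")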

  11. High power, 1060-nm diode laser with an asymmetric hetero-waveguide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, T; Zhang, Yu; Hao, E

    2015-07-31

    By introducing an asymmetric hetero-waveguide into the epitaxial structure of a diode laser, a 6.21-W output is achieved at a wavelength of 1060 nm. A different design in p- and n-confinement, based on optimisation of energy bands, is used to reduce voltage loss and meet the requirement of high power and high wall-plug efficiency. A 1060-nm diode laser with a single quantum well and asymmetric hetero-structure waveguide is fabricated and analysed. Measurement results show that the asymmetric hetero-structure waveguide can be efficiently used for reducing voltage loss and improving the confinement of injection carriers and wall-plug efficiency. (lasers)

  12. A bi-population based scheme for an explicit exploration/exploitation trade-off in dynamic environments

    NASA Astrophysics Data System (ADS)

    Ben-Romdhane, Hajer; Krichen, Saoussen; Alba, Enrique

    2017-05-01

    Optimisation in changing environments is a challenging research topic since many real-world problems are inherently dynamic. Inspired by the natural evolution process, evolutionary algorithms (EAs) are among the most successful and promising approaches that have addressed dynamic optimisation problems. However, managing the exploration/exploitation trade-off in EAs is still a prevalent issue, and this is due to the difficulties associated with the control and measurement of such a behaviour. The proposal of this paper is to achieve a balance between exploration and exploitation in an explicit manner. The idea is to use two equally sized populations: the first one performs exploration while the second one is responsible for exploitation. These tasks are alternated from one generation to the next in a regular pattern, so as to obtain a balanced search engine. In addition, we reinforce the ability of our algorithm to quickly adapt after changes by means of a memory of past solutions. Such a combination aims to restrain premature convergence, to broaden the search area, and to speed up the optimisation. We show through computational experiments, and based on a series of dynamic problems and many performance measures, that our approach improves the performance of EAs and outperforms competing algorithms.
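
    A minimal sketch of the explicit exploration/exploitation split described above: two equally sized populations alternate roles each generation, and a small memory of past solutions is re-injected after a (simulated) environment change. The objective function, change schedule and operators below are toy stand-ins, not the authors' algorithm.

        import numpy as np

        rng = np.random.default_rng(1)
        dim, pop_size, generations = 5, 20, 100
        shift = np.zeros(dim)

        def fitness(x):                        # dynamic sphere function (to minimise)
            return np.sum((x - shift) ** 2, axis=1)

        explore = rng.uniform(-5, 5, (pop_size, dim))
        exploit = rng.uniform(-5, 5, (pop_size, dim))
        memory = []

        for g in range(generations):
            if g % 25 == 0 and g > 0:          # environment change: the optimum moves
                shift = rng.uniform(-3, 3, dim)
                if memory:                     # re-seed exploration with remembered solutions
                    explore[: len(memory)] = np.array(memory)
            # exploration: large random perturbations of the whole population
            explore = explore + rng.normal(0, 1.0, explore.shape)
            # exploitation: small perturbations around the current best individual
            best = exploit[np.argmin(fitness(exploit))]
            exploit = best + rng.normal(0, 0.1, exploit.shape)
            # alternate the two roles from one generation to the next
            explore, exploit = exploit, explore
            memory = ([exploit[np.argmin(fitness(exploit))].copy()] + memory)[:5]

        print("best fitness found:", fitness(exploit).min())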

  13. Quadratic Optimisation with One Quadratic Equality Constraint

    DTIC Science & Technology

    2010-06-01

    This report presents a theoretical framework for minimising a quadratic objective function subject to a quadratic equality constraint. The first part of the report gives a detailed algorithm which computes the global minimiser without calling special nonlinear optimisation solvers. The second part of the report shows how the developed theory can be applied to solve the time of arrival geolocation problem.
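
    One way to state the problem class addressed in this report, in LaTeX notation (the notation is assumed here, not taken from the report):

        \begin{align}
          \min_{x \in \mathbb{R}^n} \quad & \tfrac{1}{2}\, x^{\mathsf T} A x + b^{\mathsf T} x \\
          \text{subject to} \quad & \tfrac{1}{2}\, x^{\mathsf T} C x + d^{\mathsf T} x + e = 0,
        \end{align}

    with A and C symmetric. The stationarity condition of the Lagrangian, (A + \lambda C)x = -(b + \lambda d), reduces the search for the global minimiser to a one-dimensional problem in the multiplier \lambda, which is what allows such problems to be solved without general nonlinear optimisation solvers.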

  14. Optimisation of flavour ester biosynthesis in an aqueous system of coconut cream and fusel oil catalysed by lipase.

    PubMed

    Sun, Jingcan; Yu, Bin; Curran, Philip; Liu, Shao-Quan

    2012-12-15

    Coconut cream and fusel oil, two low-cost natural substances, were used as starting materials for the biosynthesis of flavour-active octanoic acid esters (ethyl-, butyl-, isobutyl- and (iso)amyl octanoate) using lipase Palatase as the biocatalyst. The Taguchi design method was used for the first time to optimize the biosynthesis of esters by a lipase in an aqueous system of coconut cream and fusel oil. Temperature, time and enzyme amount were found to be statistically significant factors and the optimal conditions were determined to be as follows: temperature 30°C, fusel oil concentration 9% (v/w), reaction time 24 h, pH 6.2 and enzyme amount 0.26 g. Under the optimised conditions, a yield of 14.25 mg/g (based on cream weight) and signal-to-noise (S/N) ratio of 23.07 dB were obtained. The results indicate that the Taguchi design method was an efficient and systematic approach to the optimisation of lipase-catalysed biological processes. Copyright © 2012 Elsevier Ltd. All rights reserved.
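
    For reference, the "larger-is-better" Taguchi signal-to-noise ratio used to score each run is S/N = -10 log10( mean(1/y_i^2) ); a minimal sketch follows, with purely illustrative replicate yields rather than the paper's data.

        import math

        def sn_larger_is_better(yields):
            """Taguchi larger-is-better S/N ratio in dB."""
            return -10.0 * math.log10(sum(1.0 / y ** 2 for y in yields) / len(yields))

        replicate_yields = [14.1, 14.3, 14.4]   # hypothetical ester yields, mg/g
        print(f"S/N ratio: {sn_larger_is_better(replicate_yields):.2f} dB")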

  15. Optimising mobile phase composition, its flow-rate and column temperature in HPLC using taboo search.

    PubMed

    Guillaume, Y C; Peyrin, E

    2000-03-06

    A chemometric methodology is proposed to study the separation of seven p-hydroxybenzoic esters in reversed phase liquid chromatography (RPLC). Fifteen experiments were found to be necessary to find a mathematical model which linked a novel chromatographic response function (CRF) with the column temperature, the water fraction in the mobile phase and its flow rate. The CRF optimum was determined using a new algorithm based on Glover's taboo search (TS). A flow-rate of 0.9 ml min⁻¹ with a water fraction of 0.64 in the ACN-water mixture and a column temperature of 10 °C gave the most efficient separation conditions. The usefulness of TS was compared with the pure random search (PRS) and simplex search (SS). As demonstrated by calculations, the algorithm avoids entrapment in local minima and continues the search to give a near-optimal final solution. Unlike other methods of global optimisation, this procedure is generally applicable, easy to implement, derivative free, conceptually simple and could be used in the future for much more complex optimisation problems.
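
    A minimal taboo (tabu) search sketch over a discretised factor space (temperature, water fraction, flow-rate) in the spirit of this record. The response function below is a toy surrogate centred on the optimum reported above, not the authors' CRF or their model.

        import itertools

        temps = range(10, 41, 5)                                   # column temperature, deg C
        fracs = [round(0.50 + 0.02 * i, 2) for i in range(16)]     # water fraction
        flows = [round(0.5 + 0.1 * i, 1) for i in range(11)]       # flow rate, ml/min

        def crf(t, w, f):                                          # toy response, higher is better
            return -(t - 10) ** 2 / 100 - 50 * (w - 0.64) ** 2 - (f - 0.9) ** 2

        def neighbours(state):
            t, w, f = state
            for dt, dw, df in itertools.product((-1, 0, 1), repeat=3):
                ti, wi, fi = temps.index(t) + dt, fracs.index(w) + dw, flows.index(f) + df
                if 0 <= ti < len(temps) and 0 <= wi < len(fracs) and 0 <= fi < len(flows):
                    yield (temps[ti], fracs[wi], flows[fi])

        current = best = (30, 0.56, 1.5)
        tabu, tabu_size = [], 20
        for _ in range(200):
            candidates = [s for s in neighbours(current) if s not in tabu]
            if not candidates:                                     # fall back if everything is tabu
                candidates = list(neighbours(current))
            current = max(candidates, key=lambda s: crf(*s))       # best non-tabu move, even if worse
            tabu = (tabu + [current])[-tabu_size:]
            if crf(*current) > crf(*best):
                best = current
        print("best conditions (T, water fraction, flow):", best)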

  16. Development and Analysis of New Integrated Energy Systems for Sustainable Buildings

    NASA Astrophysics Data System (ADS)

    Khalid, Farrukh

    Excessive consumption of fossil fuels in the residential sector and their associated negative environmental impacts bring a significant challenge to engineers within research and industrial communities throughout the world to develop more environmentally benign methods of meeting the energy needs of the residential sector in particular. This thesis addresses potential solutions for the issue of fossil fuel consumption in residential buildings. Three novel renewable energy based multigeneration systems are proposed for different types of residential buildings, and a comprehensive assessment of energetic and exergetic performances is given on the basis of total occupancy, energy load, and climate conditions. System 1 is a multigeneration system based on two renewable energy sources. It uses biomass and solar resources. The outputs of System 1 are electricity, space heating, cooling, and hot water. The energy and exergy efficiencies of System 1 are 91.0% and 34.9%, respectively. The results of the optimisation analysis show that the net present cost of System 1 is 2,700,496 and that the levelised cost of electricity is 0.117/kWh. System 2 is a multigeneration system, integrating three renewable energy based subsystems: wind turbine, concentrated solar collector, and Organic Rankine Cycle supplied by a ground source heat exchanger. The outputs of System 2 are electricity, hot water, heating and cooling. The optimisation analysis shows that net present cost is 35,502 and levelised cost of electricity is 0.186/kWh. The energy and exergy efficiencies of System 2 are found to be 34.6% and 16.2%, respectively. System 3 is a multigeneration system, comprising two renewable energy subsystems, geothermal and solar, to supply power, cooling, heating, and hot water. The optimisation analysis shows that the net present cost of System 3 is 598,474, and that the levelised cost of electricity is 0.111/kWh. The energy and exergy efficiencies of System 3 are 20.2% and 19.2%, respectively, with outputs of electricity, hot water, cooling and space heating. A performance assessment for identical conditions indicates that System 3 offers the best performance, with the minimum net present cost of 26,001 and levelised cost of electricity of 0.136/kWh.

  17. A chromatographic objective function to characterise chromatograms with unknown compounds or without standards available.

    PubMed

    Alvarez-Segura, T; Gómez-Díaz, A; Ortiz-Bolsico, C; Torres-Lapasió, J R; García-Alvarez-Coque, M C

    2015-08-28

    Getting useful chemical information from samples containing many compounds is still a challenge to analysts in liquid chromatography. The highest complexity corresponds to samples for which there is no prior knowledge about their chemical composition. Computer-based methodologies are currently considered the most efficient tools to optimise chromatographic resolution and to find the optimal separation conditions. However, most chromatographic objective functions (COFs) described in the literature to measure the resolution are based on mathematical models fitted with the information obtained from standards, and cannot be applied to samples with unknown compounds. In this work, a new COF based on the automatic measurement of the protruding part of the chromatographic peaks (or peak prominences) that indicates the number of perceptible peaks and global resolution, without the need for standards, is developed. The proposed COF was found satisfactory with regard to the peak purity criterion when applied to artificial peaks and simulated chromatograms of mixtures built using the information of standards. The approach was applied to mixtures of drugs containing unknown impurities and degradation products and to extracts of medicinal herbs, eluted with acetonitrile-water mixtures using isocratic and gradient elution. Copyright © 2015 Elsevier B.V. All rights reserved.
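
    A minimal sketch of counting "perceptible" peaks via their prominences with SciPy, analogous in spirit to the prominence-based COF described above; the synthetic chromatogram and the 0.5 prominence-to-height threshold are illustrative, not the authors' exact function.

        import numpy as np
        from scipy.signal import find_peaks, peak_prominences

        t = np.linspace(0, 10, 2000)
        # toy chromatogram: three well-resolved peaks plus two heavily overlapped ones
        chromatogram = (np.exp(-(t - 2) ** 2 / 0.02) + np.exp(-(t - 4) ** 2 / 0.02)
                        + np.exp(-(t - 6) ** 2 / 0.02)
                        + np.exp(-(t - 7.8) ** 2 / 0.05) + np.exp(-(t - 8.2) ** 2 / 0.05))

        peaks, _ = find_peaks(chromatogram, height=0.05)
        prominences = peak_prominences(chromatogram, peaks)[0]
        ratios = prominences / chromatogram[peaks]       # protruding part vs total peak height
        perceptible = ratios > 0.5

        print("maxima found:", len(peaks), "| perceptible peaks:", int(perceptible.sum()))
        print("prominence/height ratios:", np.round(ratios, 2))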

  18. Development of the protocol for purification of artemisinin based on combination of commercial and computationally designed adsorbents.

    PubMed

    Piletska, Elena V; Karim, Kal; Cutler, Malcolm; Piletsky, Sergey A

    2013-01-01

    A polymeric adsorbent for extraction of the antimalarial drug artemisinin from Artemisia annua L. was computationally designed. This polymer demonstrated a high capacity for artemisinin (120 mg g⁻¹), quantitative recovery (87%) and was found to be an effective material for purification of artemisinin from a complex plant matrix. The artemisinin quantification was conducted using an optimised HPLC-MS protocol, which was characterised by high precision and linearity in the concentration range between 0.05 and 2 μg mL⁻¹. Optimisation of the purification protocol also involved screening of commercial adsorbents for the removal of waxes and other interfering natural compounds, which inhibit the crystallisation of artemisinin. As a result of a two-step purification protocol, crystals of artemisinin were obtained, and artemisinin purity was evaluated as 75%. By performing the second stage of purification twice, the purity of artemisinin can be further improved to 99%. The developed protocol produced high-purity artemisinin using only a few purification steps, which makes it suitable for a large-scale industrial manufacturing process. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Statistical alignment: computational properties, homology testing and goodness-of-fit.

    PubMed

    Hein, J; Wiuf, C; Knudsen, B; Møller, M B; Wibling, G

    2000-09-08

    The model of insertions and deletions in biological sequences, first formulated by Thorne, Kishino, and Felsenstein in 1991 (the TKF91 model), provides a basis for performing alignment within a statistical framework. Here we investigate this model. Firstly, we show how to accelerate the statistical alignment algorithms by several orders of magnitude. The main innovations are to confine likelihood calculations to a band close to the similarity-based alignment, to get good initial guesses of the evolutionary parameters and to apply an efficient numerical optimisation algorithm for finding the maximum likelihood estimate. In addition, the recursions originally presented by Thorne, Kishino and Felsenstein can be simplified. Two proteins, about 1500 amino acids long, can be analysed with this method in less than five seconds on a fast desktop computer, which makes this method practical for actual data analysis. Secondly, we propose a new homology test based on this model, where homology means that an ancestor to a sequence pair can be found finitely far back in time. This test has statistical advantages relative to the traditional shuffle test for proteins. Finally, we describe a goodness-of-fit test that allows testing the proposed insertion-deletion (indel) process inherent to this model and find that real sequences (here globins) probably experience indels longer than one, contrary to what is assumed by the model. Copyright 2000 Academic Press.

  20. Real-time 2D spatially selective MRI experiments: Comparative analysis of optimal control design methods

    NASA Astrophysics Data System (ADS)

    Maximov, Ivan I.; Vinding, Mads S.; Tse, Desmond H. Y.; Nielsen, Niels Chr.; Shah, N. Jon

    2015-05-01

    There is an increasing need for development of advanced radio-frequency (RF) pulse techniques in modern magnetic resonance imaging (MRI) systems driven by recent advancements in ultra-high magnetic field systems, new parallel transmit/receive coil designs, and accessible powerful computational facilities. 2D spatially selective RF pulses are an example of advanced pulses that have many applications of clinical relevance, e.g., reduced field-of-view imaging and MR spectroscopy. The 2D spatially selective RF pulses are mostly generated and optimised with numerical methods that can handle vast controls and multiple constraints. With this study we aim to demonstrate that numerical, optimal control (OC) algorithms are efficient for the design of 2D spatially selective MRI experiments, when robustness towards e.g. field inhomogeneity is in focus. We have chosen three popular OC algorithms: two are gradient-based, concurrent methods using first- and second-order derivatives, respectively; the third belongs to the sequential, monotonically convergent family. We used two experimental models: a water phantom, and an in vivo human head. Taking into consideration the challenging experimental setup, our analysis suggests the use of the sequential, monotonic approach and the second-order gradient-based approach, as computational speed, experimental robustness, and image quality are key. All algorithms used in this work were implemented in the MATLAB environment and are freely available to the MRI community.

  1. Enhanced inter-subject brain computer interface with associative sensorimotor oscillations.

    PubMed

    Saha, Simanto; Ahmed, Khawza I; Mostafa, Raqibul; Khandoker, Ahsan H; Hadjileontiadis, Leontios

    2017-02-01

    Electroencephalography (EEG) captures electrophysiological signatures of cortical events from the scalp with high-dimensional electrode montages. Usually, excessive sources produce outliers and potentially affect the actual event-related sources. In addition, EEG manifests inherent inter-subject variability of the brain dynamics, at the resting state and/or under the performance of task(s), probably caused by the instantaneous fluctuation of psychophysiological states. A wavelet coherence (WC) analysis for optimally selecting associative inter-subject channels is proposed here and used to boost the performance of motor imagery (MI)-based inter-subject brain computer interface (BCI). The underlying hypothesis is that optimally associative inter-subject channels can reduce the effects of outliers and, thus, eliminate dissimilar cortical patterns. The proposed approach has been tested on the dataset IVa from BCI competition III, including EEG data acquired from five healthy subjects who were given visual cues to perform 280 trials of MI for the right hand and right foot. Experimental results have shown increased classification accuracy (81.79%) using the 16 WC-selected channels compared to that (56.79%) achieved using all the available 118 channels. The associative channels lie mostly around the sensorimotor regions of the brain, which is consistent with previous literature describing spatial brain dynamics during sensorimotor oscillations. The proposed approach thus paves the way for optimised EEG channel selection that could further boost the efficiency and real-time performance of BCI systems.

  2. AlphaMate: a program for optimising selection, maintenance of diversity, and mate allocation in breeding programs.

    PubMed

    Gorjanc, Gregor; Hickey, John M

    2018-05-02

    AlphaMate is a flexible program that optimises selection, maintenance of genetic diversity, and mate allocation in breeding programs. It can be used in animal and cross- and self-pollinating plant populations. These populations can be subject to selective breeding or conservation management. The problem is formulated as a multi-objective optimisation of a valid mating plan that is solved with an evolutionary algorithm. A valid mating plan is defined by a combination of mating constraints (the number of matings, the maximal number of parents, the minimal/equal/maximal number of contributions per parent, or allowance for selfing) that are gender-specific or generic. The optimisation can maximize genetic gain, minimize group coancestry, minimize inbreeding of individual matings, or maximize genetic gain for a given increase in group coancestry or inbreeding. Users provide a list of candidate individuals with associated gender and selection criteria information (if applicable) and a coancestry matrix. Selection criteria and coancestry matrix can be based on pedigree or genome-wide markers. Additional individual- or mating-specific information can be included to enrich optimisation objectives. An example of rapid recurrent genomic selection in wheat demonstrates how AlphaMate can double the efficiency of converting genetic diversity into genetic gain compared to truncation selection. Another example demonstrates the use of genome editing to expand the gain-diversity frontier. Executable versions of AlphaMate for Windows, Mac, and Linux platforms are available at http://www.AlphaGenes.roslin.ed.ac.uk/AlphaMate. gregor.gorjanc@roslin.ed.ac.uk.

  3. Data-driven train set crash dynamics simulation

    NASA Astrophysics Data System (ADS)

    Tang, Zhao; Zhu, Yunrui; Nie, Yinyu; Guo, Shihui; Liu, Fengjia; Chang, Jian; Zhang, Jianjun

    2017-02-01

    Traditional finite element (FE) methods are computationally expensive for simulating train crashes. High computational cost limits their direct application to investigating dynamic behaviours of an entire train set for crashworthiness design and structural optimisation. In contrast, multi-body modelling is widely used because of its low computational cost, at the expense of accuracy. In this study, a data-driven train crash modelling method is proposed to improve the performance of a multi-body dynamics simulation of a train set crash without increasing the computational burden. This is achieved by the parallel random forest algorithm, a machine learning approach that extracts useful patterns from force-displacement curves and predicts a force-displacement relation for a given collision condition from a collection of offline FE simulation data on various collision conditions, namely different crash velocities in our analysis. Using the FE simulation results as a benchmark, we compared our method with traditional multi-body modelling methods and the result shows that our data-driven method improves the accuracy over traditional multi-body models in train crash simulation and runs at the same level of efficiency.
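
    A minimal sketch of the data-driven idea in this record: fit a random forest on (crash velocity, displacement) -> force pairs harvested from offline FE runs, then query the learned relation inside a cheaper multi-body simulation. The "FE data" below is a synthetic stand-in, not the authors' dataset.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)

        # synthetic offline "FE" dataset spanning several crash velocities
        velocities = rng.uniform(5, 25, 2000)           # m/s
        displacements = rng.uniform(0, 0.5, 2000)       # m
        forces = (2e6 * displacements * (1 + 0.05 * velocities)
                  + rng.normal(0, 1e4, 2000))           # N, toy crush law + noise

        X = np.column_stack([velocities, displacements])
        model = RandomForestRegressor(n_estimators=200, n_jobs=-1, random_state=0)
        model.fit(X, forces)

        # query the learned force-displacement relation for an unseen collision condition
        v_query = 18.0
        d_grid = np.linspace(0, 0.5, 6)
        predicted = model.predict(np.column_stack([np.full_like(d_grid, v_query), d_grid]))
        for d, f in zip(d_grid, predicted):
            print(f"v = {v_query} m/s, d = {d:.2f} m  ->  F = {f / 1e6:.2f} MN")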

  4. Design optimisation of a TOF-based collimated camera prototype for online hadrontherapy monitoring

    NASA Astrophysics Data System (ADS)

    Pinto, M.; Dauvergne, D.; Freud, N.; Krimmer, J.; Letang, J. M.; Ray, C.; Roellinghoff, F.; Testa, E.

    2014-12-01

    Hadrontherapy is an innovative radiation therapy modality for which one of the key advantages is the target conformality allowed by the physical properties of ion species. However, in order to fully exploit its potential, online monitoring is required to assess the treatment quality, namely through monitoring devices relying on the detection of secondary radiation. Herein is presented a method based on Monte Carlo simulations to optimise a multi-slit collimated camera employing time-of-flight selection of prompt-gamma rays to be used in a clinical scenario. In addition, an analytical tool is developed based on the Monte Carlo data to predict the expected precision for a given geometrical configuration. Such a method follows the clinical workflow requirements to simultaneously have a solution that is relatively accurate and fast. Two different camera designs are proposed, considering different endpoints based on the trade-off between camera detection efficiency and spatial resolution, to be used in a proton therapy treatment with active dose delivery and assuming a homogeneous target.

  5. Assessing the lipophilicity of fragments and early hits

    NASA Astrophysics Data System (ADS)

    Mortenson, Paul N.; Murray, Christopher W.

    2011-07-01

    A key challenge in many drug discovery programs is to accurately assess the potential value of screening hits. This is particularly true in fragment-based drug design (FBDD), where the hits often bind relatively weakly, but are correspondingly small. Ligand efficiency (LE) considers both the potency and the size of the molecule, and enables us to estimate whether or not an initial hit is likely to be optimisable to a potent, druglike lead. While size is a key property that needs to be controlled in a small molecule drug, there are a number of additional properties that should also be considered. Lipophilicity is amongst the most important of these additional properties, and here we present a new efficiency index (LLEAT) that combines lipophilicity, size and potency. The index is intuitively defined, and has been designed to have the same target value and dynamic range as LE, making it easily interpretable by medicinal chemists. Monitoring both LE and LLEAT should help both in the selection of more promising fragment hits, and controlling molecular weight and lipophilicity during optimisation.
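
    For orientation, the standard efficiency metrics this record builds on can be sketched as below: LE = 1.37 x pIC50 / heavy atom count (kcal/mol per heavy atom) and LLE = pIC50 - logP are standard definitions; the exact published form of LLEAT is given in the paper and is not reproduced here, so the final line is only an illustrative per-atom rescaling, not the authors' index.

        def ligand_efficiency(pic50: float, heavy_atoms: int) -> float:
            """LE in kcal/mol per heavy atom (1.37 converts pIC50 to approximate binding energy)."""
            return 1.37 * pic50 / heavy_atoms

        def lipophilic_ligand_efficiency(pic50: float, clogp: float) -> float:
            """LLE = pIC50 - cLogP."""
            return pic50 - clogp

        # hypothetical fragment hit: IC50 = 100 uM (pIC50 = 4), 13 heavy atoms, cLogP = 1.2
        pic50, ha, clogp = 4.0, 13, 1.2
        print("LE  =", round(ligand_efficiency(pic50, ha), 2))
        print("LLE =", round(lipophilic_ligand_efficiency(pic50, clogp), 1))
        print("illustrative per-atom LLE =", round(1.37 * lipophilic_ligand_efficiency(pic50, clogp) / ha, 2))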

  6. Advanced data management for optimising the operation of a full-scale WWTP.

    PubMed

    Beltrán, Sergio; Maiza, Mikel; de la Sota, Alejandro; Villanueva, José María; Ayesa, Eduardo

    2012-01-01

    The lack of appropriate data management tools is presently a limiting factor for a broader implementation and a more efficient use of sensors and analysers, monitoring systems and process controllers in wastewater treatment plants (WWTPs). This paper presents a technical solution for advanced data management of a full-scale WWTP. The solution is based on an efficient and intelligent use of the plant data by a standard centralisation of the heterogeneous data acquired from different sources, effective data processing to extract adequate information, and a straightforward connection to other emerging tools focused on the operational optimisation of the plant such as advanced monitoring and control or dynamic simulators. A pilot study of the advanced data manager tool was designed and implemented in the Galindo-Bilbao WWTP. The results of the pilot study showed its potential for agile and intelligent plant data management by generating new enriched information combining data from different plant sources, facilitating the connection of operational support systems, and developing automatic plots and trends of simulated results and actual data for plant performance and diagnosis.

  7. Design and experimental analysis of a new malleovestibulopexy prosthesis using a finite element model of the human middle ear.

    PubMed

    Vallejo Valdezate, Luis A; Hidalgo Otamendi, Antonio; Hernández, Alberto; Lobo, Fernando; Gil-Carcedo Sañudo, Elisa; Gil-Carcedo García, Luis M

    2015-01-01

    Many designs of prostheses are available for middle ear surgery. In this study we propose a design for a new prosthesis, which optimises mechanical performance in the human middle ear and improves some deficiencies in the prostheses currently available. Our objective was to design and assess the theoretical acoustic-mechanical behaviour of this new total ossicular replacement prosthesis. The design of this new prosthesis was based on an animal model (an iguana). For the modelling and mechanical analysis of the new prosthesis, we used a dynamic 3D computer model of the human middle ear, based on the finite elements method (FEM). The new malleovestibulopexy prosthesis design demonstrates an acoustical-mechanical performance similar to that of the healthy human middle ear. This new design also has additional advantages, such as ease of implantation and stability in the middle ear. This study shows that computer simulation can be used to design and optimise the vibroacoustic characteristics of middle ear implants and demonstrates the effectiveness of a new malleovestibulopexy prosthesis in reconstructing the ossicular chain. Copyright © 2014 Elsevier España, S.L.U. y Sociedad Española de Otorrinolaringología y Patología Cérvico-Facial. All rights reserved.

  8. Access Protocol For An Industrial Optical Fibre LAN

    NASA Astrophysics Data System (ADS)

    Senior, John M.; Walker, William M.; Ryley, Alan

    1987-09-01

    A structure for OSI levels 1 and 2 of a local area network suitable for use in a variety of industrial environments is reported. It is intended that the LAN will utilise optical fibre technology at the physical level and a hybrid of dynamically optimisable token passing and CSMA/CD techniques at the data link (IEEE 802 medium access control - logical link control) level. An intelligent token passing algorithm is employed which dynamically allocates tokens according to the known upper limits on the requirements of each device. In addition a system of stochastic tokens is used to increase efficiency when the stochastic traffic is significant. The protocol also allows user-defined priority systems to be employed and is suitable for distributed or centralised implementation. The results of computer simulated performance characteristics for the protocol using a star-ring topology are reported which demonstrate its ability to perform efficiently with the device and traffic loads anticipated within an industrial environment.

  9. Formulation and optimisation of raft-forming chewable tablets containing H2 antagonist

    PubMed Central

    Prajapati, Shailesh T; Mehta, Anant P; Modhia, Ishan P; Patel, Chhagan N

    2012-01-01

    Purpose: The purpose of this research work was to formulate raft-forming chewable tablets of H2 antagonist (Famotidine) using a raft-forming agent along with an antacid- and gas-generating agent. Materials and Methods: Tablets were prepared by wet granulation and evaluated for raft strength, acid neutralisation capacity, weight variation, % drug content, thickness, hardness, friability and in vitro drug release. Various raft-forming agents were used in preliminary screening. A 2³ full-factorial design was used in the present study for optimisation. The amount of sodium alginate, amount of calcium carbonate and amount of sodium bicarbonate were selected as independent variables. Raft strength, acid neutralisation capacity and drug release at 30 min were selected as responses. Results: Tablets containing sodium alginate had the maximum raft strength compared with other raft-forming agents. Acid neutralisation capacity and in vitro drug release of all factorial batches were found to be satisfactory. The F5 batch was optimised based on maximum raft strength and good acid neutralisation capacity. Drug–excipient compatibility study showed no interaction between the drug and excipients. Stability study of the optimised formulation showed that the tablets were stable at accelerated environmental conditions. Conclusion: It was concluded that raft-forming chewable tablets prepared using an optimum amount of sodium alginate, calcium carbonate and sodium bicarbonate could be an efficient dosage form in the treatment of gastro-oesophageal reflux disease. PMID:23580933

  10. Formulation and optimisation of raft-forming chewable tablets containing H2 antagonist.

    PubMed

    Prajapati, Shailesh T; Mehta, Anant P; Modhia, Ishan P; Patel, Chhagan N

    2012-10-01

    The purpose of this research work was to formulate raft-forming chewable tablets of H2 antagonist (Famotidine) using a raft-forming agent along with an antacid- and gas-generating agent. Tablets were prepared by wet granulation and evaluated for raft strength, acid neutralisation capacity, weight variation, % drug content, thickness, hardness, friability and in vitro drug release. Various raft-forming agents were used in preliminary screening. A 2³ full-factorial design was used in the present study for optimisation. The amount of sodium alginate, amount of calcium carbonate and amount of sodium bicarbonate were selected as independent variables. Raft strength, acid neutralisation capacity and drug release at 30 min were selected as responses. Tablets containing sodium alginate had the maximum raft strength compared with other raft-forming agents. Acid neutralisation capacity and in vitro drug release of all factorial batches were found to be satisfactory. The F5 batch was optimised based on maximum raft strength and good acid neutralisation capacity. Drug-excipient compatibility study showed no interaction between the drug and excipients. Stability study of the optimised formulation showed that the tablets were stable at accelerated environmental conditions. It was concluded that raft-forming chewable tablets prepared using an optimum amount of sodium alginate, calcium carbonate and sodium bicarbonate could be an efficient dosage form in the treatment of gastro-oesophageal reflux disease.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, A.; Davis, A.; University of Wisconsin-Madison, Madison, WI 53706

    CCFE perform Monte-Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore, some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)

  12. Load-sensitive dynamic workflow re-orchestration and optimisation for faster patient healthcare.

    PubMed

    Meli, Christopher L; Khalil, Ibrahim; Tari, Zahir

    2014-01-01

    Hospital waiting times are considerable, with no signs of decreasing anytime soon. A number of factors including population growth, the ageing population and a lack of new infrastructure are expected to further exacerbate waiting times in the near future. In this work, we show how healthcare services can be modelled as queueing nodes, together with healthcare service workflows, such that these workflows can be optimised during execution in order to reduce patient waiting times. Services such as X-ray, computed tomography, and magnetic resonance imaging often form queues; thus, by taking into account the waiting time of each service, the workflow can be re-orchestrated and optimised. Experimental results indicate average waiting time reductions are achievable by optimising workflows using dynamic re-orchestration. Crown Copyright © 2013. Published by Elsevier Ireland Ltd. All rights reserved.
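
    A minimal sketch of treating each imaging service as a queueing node and routing an interchangeable workflow step to the node with the smallest expected wait. The M/M/1 mean queueing delay Wq = lambda / (mu * (mu - lambda)) is a standard result; the arrival and service rates below are illustrative assumptions, not values from the paper.

        services = {               # arrival and service rates, patients per hour (assumed)
            "X-ray": {"lam": 5.0, "mu": 6.0},
            "CT":    {"lam": 3.0, "mu": 4.0},
            "MRI":   {"lam": 1.5, "mu": 2.0},
        }

        def expected_wait(lam: float, mu: float) -> float:
            """Mean time spent waiting in queue for an M/M/1 node (requires lam < mu), in hours."""
            return lam / (mu * (mu - lam))

        waits = {name: expected_wait(s["lam"], s["mu"]) for name, s in services.items()}
        for name, w in sorted(waits.items(), key=lambda kv: kv[1]):
            print(f"{name:5s}: expected queueing delay of about {60 * w:.0f} min")

        # re-orchestration rule: if a step can be satisfied by several equivalent services,
        # send the patient to the one with the smallest current expected delay
        print("route next interchangeable study to:", min(waits, key=waits.get))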

  13. Multi-Scale Peak and Trough Detection Optimised for Periodic and Quasi-Periodic Neuroscience Data.

    PubMed

    Bishop, Steven M; Ercole, Ari

    2018-01-01

    The reliable detection of peaks and troughs in physiological signals is essential to many investigative techniques in medicine and computational biology. Analysis of the intracranial pressure (ICP) waveform is a particular challenge due to multi-scale features, a changing morphology over time and signal-to-noise limitations. Here we present an efficient peak and trough detection algorithm that extends the scalogram approach of Scholkmann et al., and results in greatly improved algorithm runtime performance. Our improved algorithm (modified Scholkmann) was developed and analysed in MATLAB R2015b. Synthesised waveforms (periodic, quasi-periodic and chirp sinusoids) were degraded with white Gaussian noise to achieve signal-to-noise ratios down to 5 dB and were used to compare the performance of the original Scholkmann and modified Scholkmann algorithms. The modified Scholkmann algorithm has false-positive (0%) and false-negative (0%) detection rates identical to the original Scholkmann when applied to our test suite. Actual compute time for a 200-run Monte Carlo simulation over a multicomponent noisy test signal was 40.96 ± 0.020 s (mean ± 95%CI) for the original Scholkmann and 1.81 ± 0.003 s (mean ± 95%CI) for the modified Scholkmann, demonstrating the expected improvement in runtime complexity. The accurate interpretation of waveform data to identify peaks and troughs is crucial in signal parameterisation, feature extraction and waveform identification tasks. Modification of a standard scalogram technique has produced a robust algorithm with linear computational complexity that is particularly suited to the challenges presented by large, noisy physiological datasets. The algorithm is optimised through a single parameter and can identify sub-waveform features with minimal additional overhead, and is easily adapted to run in real time on commodity hardware.
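
    A compact, hedged sketch of scalogram-style multi-scale peak detection in the spirit of the Scholkmann approach: a sample is kept as a peak when it is a local maximum across all window scales up to the globally "best" scale. This is an illustrative variant on a clean test signal, not the authors' modified algorithm; the published method additionally handles noise (e.g. via randomised tie-breaking in the scalogram) and achieves the reported linear runtime.

        import numpy as np

        def multiscale_peaks(x, max_scale=None):
            n = len(x)
            max_scale = max_scale or n // 10
            is_max = np.zeros((max_scale, n), dtype=bool)
            for k in range(1, max_scale + 1):             # window half-width k
                for i in range(k, n - k):
                    is_max[k - 1, i] = x[i] > x[i - k] and x[i] > x[i + k]
            counts = is_max.sum(axis=1)
            best_scale = int(np.argmax(counts)) + 1       # scale with the most local maxima
            keep = is_max[:best_scale].all(axis=0)        # maxima at every scale up to it
            return np.flatnonzero(keep)

        t = np.linspace(0, 4 * np.pi, 400)
        signal = np.sin(t)                                # clean test signal, two true peaks
        print("detected peak indices:", multiscale_peaks(signal))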

  14. A semi-analytical model of a time reversal cavity for high-amplitude focused ultrasound applications

    NASA Astrophysics Data System (ADS)

    Robin, J.; Tanter, M.; Pernot, M.

    2017-09-01

    Time reversal cavities (TRC) have been proposed as an efficient approach for 3D ultrasound therapy. They allow the precise spatio-temporal focusing of high-power ultrasound pulses within a large region of interest with a low number of transducers. Leaky TRCs are usually built by placing a multiple scattering medium, such as a random rod forest, in a reverberating cavity, and the final peak pressure gain of the device only depends on the temporal length of its impulse response. Such multiple scattering in a reverberating cavity is a complex phenomenon, and optimisation of the device’s gain is usually a cumbersome process, mostly empirical, and requiring numerical simulations with extremely long computation times. In this paper, we present a semi-analytical model for the fast optimisation of a TRC. This model decouples ultrasound propagation in an empty cavity and multiple scattering in a multiple scattering medium. It was validated numerically and experimentally using a 2D-TRC and numerically using a 3D-TRC. Finally, the model was used to determine rapidly the optimal parameters of the 3D-TRC which had been confirmed by numerical simulations.

  15. Variational Bayesian identification and prediction of stochastic nonlinear dynamic causal models.

    PubMed

    Daunizeau, J; Friston, K J; Kiebel, S J

    2009-11-01

    In this paper, we describe a general variational Bayesian approach for approximate inference on nonlinear stochastic dynamic models. This scheme extends established approximate inference on hidden-states to cover: (i) nonlinear evolution and observation functions, (ii) unknown parameters and (precision) hyperparameters and (iii) model comparison and prediction under uncertainty. Model identification or inversion entails the estimation of the marginal likelihood or evidence of a model. This difficult integration problem can be finessed by optimising a free-energy bound on the evidence using results from variational calculus. This yields a deterministic update scheme that optimises an approximation to the posterior density on the unknown model variables. We derive such a variational Bayesian scheme in the context of nonlinear stochastic dynamic hierarchical models, for both model identification and time-series prediction. The computational complexity of the scheme is comparable to that of an extended Kalman filter, which is critical when inverting high dimensional models or long time-series. Using Monte-Carlo simulations, we assess the estimation efficiency of this variational Bayesian approach using three stochastic variants of chaotic dynamic systems. We also demonstrate the model comparison capabilities of the method, its self-consistency and its predictive power.
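
    The free-energy bound referred to above can be written in standard variational-Bayes notation (notation assumed here; y are data, θ the unknown states/parameters and hyperparameters, q the approximate posterior, m the model):

        \begin{align}
          \ln p(y \mid m) &= F(q) + D_{\mathrm{KL}}\big[\, q(\theta) \,\|\, p(\theta \mid y, m) \,\big], \\
          F(q) &= \big\langle \ln p(y, \theta \mid m) \big\rangle_{q} - \big\langle \ln q(\theta) \big\rangle_{q} \;\le\; \ln p(y \mid m).
        \end{align}

    Maximising F with respect to q simultaneously tightens the bound on the log-evidence (used for model comparison) and drives q towards the true posterior, which is what the deterministic update scheme in this record exploits.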

  16. Optimisation of the usage of LHC and local computing resources in a multidisciplinary physics department hosting a WLCG Tier-2 centre

    NASA Astrophysics Data System (ADS)

    Barberis, Stefano; Carminati, Leonardo; Leveraro, Franco; Mazza, Simone Michele; Perini, Laura; Perlz, Francesco; Rebatto, David; Tura, Ruggero; Vaccarossa, Luca; Villaplana, Miguel

    2015-12-01

    We present the approach of the University of Milan Physics Department and the local unit of INFN to allow and encourage the sharing among different research areas of computing, storage and networking resources (the largest ones being those composing the Milan WLCG Tier-2 centre and tailored to the needs of the ATLAS experiment). Computing resources are organised as independent HTCondor pools, with a global master in charge of monitoring them and optimising their usage. The configuration has to provide satisfactory throughput for both serial and parallel (multicore, MPI) jobs. A combination of local, remote and cloud storage options are available. The experience of users from different research areas operating on this shared infrastructure is discussed. The promising direction of improving scientific computing throughput by federating access to distributed computing and storage also seems to fit very well with the objectives listed in the European Horizon 2020 framework for research and development.

  17. INDIVIDUAL-BASED MODELS: POWERFUL OR POWER STRUGGLE?

    PubMed

    Willem, L; Stijven, S; Hens, N; Vladislavleva, E; Broeckhove, J; Beutels, P

    2015-01-01

    Individual-based models (IBMs) offer endless possibilities to explore various research questions but come with high model complexity and computational burden. Large-scale IBMs have become feasible but the novel hardware architectures require adapted software. The increased model complexity also requires systematic exploration to gain thorough system understanding. We elaborate on the development of IBMs for vaccine-preventable infectious diseases and model exploration with active learning. Investment in IBM simulator code can lead to significant runtime reductions. We found large performance differences due to data locality. Sorting the population once reduced simulation time by a factor of two. Storing person attributes separately instead of using person objects also seemed more efficient. Next, we improved model performance by up to 70% by structuring potential contacts based on health status before processing disease transmission. The active learning approach we present is based on iterative surrogate modelling and model-guided experimentation. Symbolic regression is used for nonlinear response surface modelling with automatic feature selection. We illustrate our approach using an IBM for influenza vaccination. After optimizing the parameter space, we observed an inverse relationship between vaccination coverage and the clinical attack rate, reinforced by herd immunity. These insights can be used to focus and optimise research activities, and to reduce both dimensionality and decision uncertainty.

  18. A novel specimen-specific methodology to optimise the alignment of long bones for experimental testing.

    PubMed

    Cheong, Vee San; Bull, Anthony M J

    2015-12-16

    The choice of coordinate system and alignment of bone will affect the quantification of mechanical properties obtained during in-vitro biomechanical testing. Where these are used in predictive models, such as finite element analysis, the fidelic description of these properties is paramount. Currently in bending and torsional tests, bones are aligned on a pre-defined fixed span based on the reference system marked out. However, large inter-specimen differences have been reported. This suggests a need for the development of a specimen-specific alignment system for use in experimental work. Eleven ovine tibiae were used in this study and three-dimensional surface meshes were constructed from micro-Computed Tomography scan images. A novel, semi-automated algorithm was developed and applied to the surface meshes to align the whole bone based on its calculated principal directions. Thereafter, the code isolates the optimised location and length of each bone for experimental testing. This resulted in a lowering of the second moment of area about the chosen bending axis in the central region. More importantly, the optimisation method decreases the irregularity of the shape of the cross-sectional slices as the unbiased estimate of the population coefficient of variation of the second moment of area decreased from a range of (0.210-0.435) to (0.145-0.317) in the longitudinal direction, indicating a minimisation of the product moment, which causes eccentric loading. Thus, this methodology serves as an important pre-step to align the bone for mechanical tests or simulation work, is optimised for each specimen, ensures repeatability, and is general enough to be applied to any long bone. Copyright © 2015 Elsevier Ltd. All rights reserved.
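
    A minimal sketch of the first step of such a specimen-specific alignment: eigen-decompose the covariance of the surface-mesh vertex coordinates and rotate the vertices into that frame, so the longest axis of the bone becomes the first coordinate. The "mesh" below is a synthetic ellipsoidal point cloud, not scan data, and the subsequent optimisation of the test span is not reproduced.

        import numpy as np

        rng = np.random.default_rng(0)
        # synthetic, arbitrarily oriented elongated point cloud standing in for a tibia mesh
        pts = rng.normal(size=(5000, 3)) * np.array([40.0, 8.0, 6.0])   # long axis first
        angle = np.deg2rad(30)
        R = np.array([[np.cos(angle), -np.sin(angle), 0],
                      [np.sin(angle),  np.cos(angle), 0],
                      [0, 0, 1]])
        pts = pts @ R.T + np.array([12.0, -5.0, 30.0])                  # rotate + translate

        centred = pts - pts.mean(axis=0)
        eigvals, eigvecs = np.linalg.eigh(np.cov(centred.T))
        order = np.argsort(eigvals)[::-1]            # principal directions, largest variance first
        axes = eigvecs[:, order]
        if np.linalg.det(axes) < 0:                  # keep a right-handed frame
            axes[:, -1] *= -1

        aligned = centred @ axes                     # vertex coordinates in the bone-based frame
        print("extent along principal axes:", np.ptp(aligned, axis=0).round(1))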

  19. Basis for the development of sustainable optimisation indicators for activated sludge wastewater treatment plants in the Republic of Ireland.

    PubMed

    Gordon, G T; McCann, B P

    2015-01-01

    This paper describes the basis of a stakeholder-based sustainable optimisation indicator (SOI) system to be developed for small-to-medium sized activated sludge (AS) wastewater treatment plants (WwTPs) in the Republic of Ireland (ROI). Key technical publications relating to best practice plant operation, performance audits and optimisation, and indicator and benchmarking systems for wastewater services are identified. Optimisation studies were developed at a number of Irish AS WwTPs and key findings are presented. A national AS WwTP manager/operator survey was carried out to verify the applied operational findings and identify the key operator stakeholder requirements for this proposed SOI system. It was found that most plants require more consistent operational data-based decision-making, monitoring and communication structures to facilitate optimised, sustainable and continuous performance improvement. The applied optimisation and stakeholder consultation phases form the basis of the proposed stakeholder-based SOI system. This system will allow for continuous monitoring and rating of plant performance, facilitate optimised operation and encourage the prioritisation of performance improvement through tracking key operational metrics. Plant optimisation has become a major focus due to the transfer of all ROI water services to a national water utility from individual local authorities and the implementation of the EU Water Framework Directive.

  20. Developing Ubiquitous Sensor Network Platform Using Internet of Things: Application in Precision Agriculture.

    PubMed

    Ferrández-Pastor, Francisco Javier; García-Chamizo, Juan Manuel; Nieto-Hidalgo, Mario; Mora-Pascual, Jerónimo; Mora-Martínez, José

    2016-07-22

    The application of Information Technologies to Precision Agriculture methods has clear benefits. Precision Agriculture optimises production efficiency, increases quality, minimises environmental impact and reduces the use of resources (energy, water); however, there are different barriers that have delayed its wide development. Among the main barriers are expensive equipment, the difficulty of operation and maintenance, and sensor network standards that are still under development. Nowadays, new technological developments in embedded devices (hardware and communication protocols), the evolution of Internet technologies (Internet of Things) and ubiquitous computing (Ubiquitous Sensor Networks) allow the development of less expensive systems that are easier to control, install and maintain, using standard protocols with low-power consumption. This work develops and tests a low-cost sensor/actuator network platform based on the Internet of Things, integrating machine-to-machine and human-machine-interface protocols. Edge computing uses this multi-protocol approach to develop control processes in Precision Agriculture scenarios. A greenhouse with hydroponic crop production was developed and tested using Ubiquitous Sensor Network monitoring and edge control on the Internet of Things paradigm. The experimental results showed that Internet technologies and Smart Object Communication Patterns can be combined to encourage the development of Precision Agriculture. They demonstrated added benefits (cost, energy, smart development, acceptance by agricultural specialists) when a project is launched.

  1. Developing Ubiquitous Sensor Network Platform Using Internet of Things: Application in Precision Agriculture

    PubMed Central

    Ferrández-Pastor, Francisco Javier; García-Chamizo, Juan Manuel; Nieto-Hidalgo, Mario; Mora-Pascual, Jerónimo; Mora-Martínez, José

    2016-01-01

    The application of Information Technologies to Precision Agriculture methods has clear benefits. Precision Agriculture optimises production efficiency, increases quality, minimises environmental impact and reduces the use of resources (energy, water); however, there are different barriers that have delayed its wide development. Among the main barriers are expensive equipment, the difficulty of operation and maintenance, and sensor network standards that are still under development. Nowadays, new technological developments in embedded devices (hardware and communication protocols), the evolution of Internet technologies (Internet of Things) and ubiquitous computing (Ubiquitous Sensor Networks) allow the development of less expensive systems that are easier to control, install and maintain, using standard protocols with low-power consumption. This work develops and tests a low-cost sensor/actuator network platform based on the Internet of Things, integrating machine-to-machine and human-machine-interface protocols. Edge computing uses this multi-protocol approach to develop control processes in Precision Agriculture scenarios. A greenhouse with hydroponic crop production was developed and tested using Ubiquitous Sensor Network monitoring and edge control on the Internet of Things paradigm. The experimental results showed that Internet technologies and Smart Object Communication Patterns can be combined to encourage the development of Precision Agriculture. They demonstrated added benefits (cost, energy, smart development, acceptance by agricultural specialists) when a project is launched. PMID:27455265

  2. Optimisation of Ferrochrome Addition Using Multi-Objective Evolutionary and Genetic Algorithms for Stainless Steel Making via AOD Converter

    NASA Astrophysics Data System (ADS)

    Behera, Kishore Kumar; Pal, Snehanshu

    2018-03-01

    This paper describes a new approach towards optimum utilisation of ferrochrome added during stainless steel making in an AOD converter. The objective of optimisation is to enhance the end-blow chromium content of the steel and reduce the ferrochrome addition during refining. By developing a thermodynamics-based mathematical model, a study has been conducted to compute the optimum trade-off between ferrochrome addition and end-blow chromium content of stainless steel using a predator-prey genetic algorithm trained on 100 datasets covering different input and output variables such as oxygen, argon and nitrogen blowing rates, duration of blowing, initial bath temperature, chromium and carbon content, and weight of ferrochrome added during refining. Optimisation is performed within constraints imposed on the input parameters, whose values fall within certain ranges. The analysis of Pareto fronts generates a set of feasible optimal solutions between the two conflicting objectives and provides an effective guideline for better ferrochrome utilisation. It is found that after a certain critical range, further addition of ferrochrome does not affect the chromium percentage of the steel. Single variable response analysis is performed to study the variation and interaction of all individual input parameters on the output variables.
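
    A minimal sketch of extracting the non-dominated (Pareto) set for the two conflicting objectives discussed above, i.e. maximise end-blow chromium content while minimising ferrochrome addition. The candidate points are synthetic stand-ins (a saturating toy relation, echoing the "critical range" observation), not solutions from the predator-prey genetic algorithm.

        import numpy as np

        rng = np.random.default_rng(0)
        ferrochrome = rng.uniform(1.0, 10.0, 200)                          # t added, to minimise
        chromium = 18.0 * (1 - np.exp(-0.5 * ferrochrome)) + rng.normal(0, 0.3, 200)  # %, to maximise

        def pareto_mask(cost, benefit):
            """True for points not dominated by any other point (lower cost, higher benefit)."""
            mask = np.ones(cost.size, dtype=bool)
            for i in range(cost.size):
                dominates_i = (cost <= cost[i]) & (benefit >= benefit[i]) \
                              & ((cost < cost[i]) | (benefit > benefit[i]))
                if dominates_i.any():
                    mask[i] = False
            return mask

        front = pareto_mask(ferrochrome, chromium)
        order = np.argsort(ferrochrome[front])
        for f, c in zip(ferrochrome[front][order], chromium[front][order]):
            print(f"FeCr added: {f:4.1f} t  ->  end-blow Cr: {c:5.2f} %")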

  3. Mutual information-based LPI optimisation for radar network

    NASA Astrophysics Data System (ADS)

    Shi, Chenguang; Zhou, Jianjiang; Wang, Fei; Chen, Jun

    2015-07-01

    Radar network can offer significant performance improvement for target detection and information extraction employing spatial diversity. For a fixed number of radars, the achievable mutual information (MI) for estimating the target parameters may extend beyond a predefined threshold with full power transmission. In this paper, an effective low probability of intercept (LPI) optimisation algorithm is presented to improve LPI performance for radar network. Based on radar network system model, we first provide Schleher intercept factor for radar network as an optimisation metric for LPI performance. Then, a novel LPI optimisation algorithm is presented, where for a predefined MI threshold, Schleher intercept factor for radar network is minimised by optimising the transmission power allocation among radars in the network such that the enhanced LPI performance for radar network can be achieved. The genetic algorithm based on nonlinear programming (GA-NP) is employed to solve the resulting nonconvex and nonlinear optimisation problem. Some simulations demonstrate that the proposed algorithm is valuable and effective to improve the LPI performance for radar network.

  4. An efficient approach for improving virtual machine placement in cloud computing environment

    NASA Astrophysics Data System (ADS)

    Ghobaei-Arani, Mostafa; Shamsi, Mahboubeh; Rahmanian, Ali A.

    2017-11-01

    The ever-increasing demand for cloud services requires more data centres. The power consumption in data centres is a challenging problem for cloud computing, which has not been considered properly by data centre developer companies. In particular, large data centres struggle with power costs and greenhouse gas production. Hence, power-efficient mechanisms are necessary to mitigate these effects. Moreover, virtual machine (VM) placement can be used as an effective method to reduce the power consumption in data centres. In this paper, by grouping both virtual and physical machines, and taking into account the maximum absolute deviation during the VM placement, the power consumption as well as the service level agreement (SLA) deviation in data centres are reduced. To this end, the best-fit decreasing algorithm is utilised in the simulation to reduce the power consumption by about 5% compared to the modified best-fit decreasing algorithm, and at the same time, the SLA violation is improved by 6%. Finally, learning automata are used to trade off power consumption reduction against the SLA violation percentage.
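
    A minimal best-fit-decreasing placement sketch: VMs are sorted by demand and each is placed on the active host whose residual capacity it fits most tightly, opening a new host only when necessary. Demands and capacities are illustrative; the paper's variant additionally groups machines and constrains the maximum absolute deviation, which is not reproduced here.

        HOST_CAPACITY = 100  # normalised capacity units per physical machine (assumed)
        vm_demands = [63, 10, 42, 7, 55, 28, 17, 35, 9, 48]

        def best_fit_decreasing(demands, capacity):
            hosts = []        # residual capacity of each powered-on host
            placement = {}
            for vm_id, demand in sorted(enumerate(demands), key=lambda kv: kv[1], reverse=True):
                candidates = [h for h, free in enumerate(hosts) if free >= demand]
                if candidates:
                    best = min(candidates, key=lambda h: hosts[h] - demand)  # tightest fit
                else:
                    hosts.append(capacity)                                   # power on a new host
                    best = len(hosts) - 1
                hosts[best] -= demand
                placement[vm_id] = best
            return placement, hosts

        placement, residuals = best_fit_decreasing(vm_demands, HOST_CAPACITY)
        print("hosts used:", len(residuals))
        for vm_id in sorted(placement):
            print(f"VM {vm_id} (demand {vm_demands[vm_id]:2d}) -> host {placement[vm_id]}")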

  5. Preliminary analysis of compound systems based on high temperature fuel cell, gas turbine and Organic Rankine Cycle

    NASA Astrophysics Data System (ADS)

    Sánchez, D.; Muñoz de Escalona, J. M.; Monje, B.; Chacartegui, R.; Sánchez, T.

    This article presents a novel proposal for complex hybrid systems comprising high temperature fuel cells and thermal engines. In this case, the system is composed of a molten carbonate fuel cell with a cascaded hot air turbine and an Organic Rankine Cycle (ORC), a layout that is based on subsequent waste heat recovery for additional power production. The work shows that it is possible to achieve 60% efficiency even if the fuel cell operates at atmospheric pressure. The first part of the analysis focuses on selecting the working fluid of the Organic Rankine Cycle. After a thermodynamic optimisation, toluene turns out to be the most efficient fluid in terms of cycle performance. However, the performance of the heat recovery vapour generator (HRVG) is found to be equally important, which makes R245fa the most interesting fluid: its balanced thermal and HRVG efficiencies yield the highest global bottoming cycle efficiency. When this fluid is employed in the compound system, conservative operating conditions permit achieving 60% global system efficiency, thereby accomplishing the initial objective of the work. A simultaneous optimisation of the gas turbine (pressure ratio) and ORC (live vapour pressure) is then presented, to check if the previous results are improved or if the fluid of choice must be replaced. Although system performance improves for some fluids, it is concluded that (i) R245fa is the most efficient fluid and (ii) the operating conditions considered in the previous analysis are still valid. The work concludes with an assessment of safety-related aspects of using hydrocarbons in the system. Flammability is studied, showing that R245fa is the most interesting fluid also in this regard due to its inert behaviour, as opposed to the other fluids under consideration, all of which are highly flammable.

  6. Calibration of phoswich-based lung counting system using realistic chest phantom.

    PubMed

    Manohari, M; Mathiyarasu, R; Rajagopal, V; Meenakshisundaram, V; Indira, R

    2011-03-01

    A phoswich detector housed inside a low-background steel room, coupled with state-of-the-art pulse shape discrimination (PSD) electronics, was recently established at the Radiological Safety Division of IGCAR for in vivo monitoring of actinides. The various parameters of the PSD electronics were optimised to achieve efficient background reduction in the low-energy regions. With the optimised parameters, the PSD reduced the steel room background from 9.5 to 0.28 cps in the 17 keV region and from 5.8 to 0.3 cps in the 60 keV region. The figure of merit for the timing spectrum of the system is 3.0. The true signal loss due to PSD was found to be less than 2%. The phoswich system was calibrated with the Lawrence Livermore National Laboratory realistic chest phantom loaded with a (241)Am activity tagged lung set. Calibration factors for varying chest wall composition and chest wall thickness, expressed in terms of muscle-equivalent chest wall thickness, were established. The (241)Am activity in the JAERI phantom, received as part of an IAEA inter-comparison exercise, was also estimated. This paper presents the optimisation of the PSD electronics and the salient results of the calibration.

  7. Application of the adjoint optimisation of shock control bump for ONERA-M6 wing

    NASA Astrophysics Data System (ADS)

    Nejati, A.; Mazaheri, K.

    2017-11-01

    This article is devoted to the numerical investigation of the shock wave/boundary layer interaction (SWBLI) as the main factor influencing the aerodynamic performance of transonic bumped airfoils and wings. The numerical analysis is conducted for the ONERA-M6 wing through a shock control bump (SCB) shape optimisation process using the adjoint optimisation method. SWBLI is analysed for both clean and bumped airfoils and wings, and it is shown how the modified wave structure originating upstream of the SCB reduces the wave drag by improving the boundary layer velocity profile downstream of the shock wave. The numerical simulation of the turbulent viscous flow and a gradient-based adjoint algorithm are used to find the optimum location and shape of the SCB for the ONERA-M6 airfoil and wing. Two different geometrical models are introduced for the 3D SCB, one with linear variations and another with periodic variations. Both configurations result in drag reduction and improvement in aerodynamic efficiency, but the periodic model is more effective. Although the three-dimensional flow structure involves much more complexity, the overall results are shown to be similar to the two-dimensional case.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knight, Stephen P, E-mail: stephen.knight@health.qld.gov.au

    The aim of this review was to develop a radiographic optimisation strategy to make use of digital radiography (DR) and needle phosphor computerised radiography (CR) detectors, in order to lower radiation dose and improve image quality for paediatrics. This review was based on evidence-based practice, of which a component was a review of the relevant literature. The resulting exposure chart was developed with two distinct groups of exposure optimisation strategies – body exposures (head, trunk, humerus, femur) and distal extremity exposures (elbow to finger, knee to toe). Exposure variables manipulated included kilovoltage peak (kVp), target detector exposure and milli-ampere-seconds (mAs), automatic exposure control (AEC), additional beam filtration, and use of an antiscatter grid. Mean dose area product (DAP) reductions of up to 83% for anterior–posterior (AP)/posterior–anterior (PA) abdomen projections were recorded post-optimisation due to manipulation of multiple exposure variables. For body exposures, the target exposure index (EI) and detector exposure, and thus the required mAs, were typically 20% lower post-optimisation. Image quality for some distal extremity exposures was improved by lowering kVp and increasing mAs around a constant entrance skin dose. It is recommended that sites performing paediatric imaging give high priority to purchasing digital X-ray equipment with high detective quantum efficiency detectors and then optimising the exposure chart for use with these detectors. Multiple exposure variables may need to be manipulated to achieve optimal outcomes.

  9. Analysis and optimisation of a mixed fluid cascade (MFC) process

    NASA Astrophysics Data System (ADS)

    Ding, He; Sun, Heng; Sun, Shoujun; Chen, Cheng

    2017-04-01

    A mixed fluid cascade (MFC) process comprising three refrigeration cycles has great capacity for large-scale LNG production, which consumes a great amount of energy. Therefore, any performance enhancement of the liquefaction process will significantly reduce the energy consumption. The MFC process is simulated and analysed using the proprietary software Aspen HYSYS. The effects of feed gas pressure, LNG storage pressure, water-cooler outlet temperature, different pre-cooling regimes, and liquefaction and sub-cooling refrigerant compositions on MFC performance are investigated and presented. The excellent numerical calculation ability and user-friendly interface of MATLAB™ are combined with the powerful thermo-physical property package of Aspen HYSYS, and a genetic algorithm is then invoked to optimise the MFC process globally. After optimisation, the unit power consumption can be reduced to 4.655 kW h/kmol or 4.366 kW h/kmol when the compressor adiabatic efficiency is 80% or 85%, respectively. Additionally, to improve the process further with regard to its thermodynamic efficiency, configuration optimisation is conducted for the MFC process and several configurations are established. By analysing heat transfer and thermodynamic performance, the configuration entailing a pre-cooling cycle with three pressure levels, a liquefaction cycle, and a sub-cooling cycle with one pressure level is identified as the most efficient and thus optimal: its unit power consumption is 4.205 kW h/kmol. The weak performance of the suggested liquefaction cycle configuration is attributed to the unbalanced distribution of cold energy over the liquefaction temperature range.
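    The global optimisation step described above can be illustrated with a compact genetic algorithm loop. The sketch below is hypothetical: `unit_power` stands in for the simulator call (in the paper, Aspen HYSYS driven from MATLAB), and the variable bounds, population size and operators are generic choices rather than the ones used in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def unit_power(x):
        """Placeholder for the process-simulator evaluation: returns the specific
        power consumption for a vector of process variables (invented surrogate)."""
        return np.sum((x - 0.3) ** 2) + 4.2

    def genetic_minimise(bounds, pop_size=40, generations=100, mut_sigma=0.1):
        lo, hi = np.array(bounds, dtype=float).T
        pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
        best_x, best_f = None, np.inf
        for _ in range(generations):
            fitness = np.array([unit_power(ind) for ind in pop])
            if fitness.min() < best_f:
                best_f = fitness.min()
                best_x = pop[fitness.argmin()].copy()
            # tournament selection of parents
            idx = rng.integers(0, pop_size, size=(pop_size, 2))
            winners = np.where(fitness[idx[:, 0]] < fitness[idx[:, 1]], idx[:, 0], idx[:, 1])
            parents = pop[winners]
            # blend crossover with a shuffled partner, then Gaussian mutation
            partners = parents[rng.permutation(pop_size)]
            alpha = rng.random((pop_size, 1))
            children = alpha * parents + (1 - alpha) * partners
            children += rng.normal(0.0, mut_sigma, children.shape) * (hi - lo)
            pop = np.clip(children, lo, hi)
            pop[0] = best_x   # elitism: keep the best solution found so far
        return best_x, best_f

    best_x, best_f = genetic_minimise(bounds=[(0.0, 1.0)] * 6)
    print(best_x, best_f)
    ```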

  10. Optimisation of multiplet identifier processing on a PLAYSTATION® 3

    NASA Astrophysics Data System (ADS)

    Hattori, Masami; Mizuno, Takashi

    2010-02-01

    To enable high-performance computing (HPC) for applications with large datasets using a Sony® PLAYSTATION® 3 (PS3™) video game console, we configured a hybrid system consisting of a Windows® PC and a PS3™. To validate this system, we implemented the real-time multiplet identifier (RTMI) application, which identifies multiplets of microearthquakes in terms of the similarity of their waveforms. The cross-correlation computation, which is the core algorithm of the RTMI application, was optimised for the PS3™ platform, while the rest of the computation, including data input and output, remained on the PC. With this configuration, the core part of the algorithm ran 69 times faster than the original program, accelerating the total computation speed more than five times. As a result, the system processed up to 2100 microseismic events in total, whereas the original implementation had a limit of 400 events. These results indicate that this system enables high-performance computing for large datasets using the PS3™, as long as the data transfer time is negligible compared with the computation time.
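    The waveform-similarity grouping at the heart of multiplet identification can be sketched with a normalised cross-correlation. The snippet below is illustrative only: the similarity threshold, the single-reference grouping rule and the equal-length waveform assumption are simplifications, not the optimised kernel that ran on the PS3™.

    ```python
    import numpy as np

    def normalised_xcorr(a, b):
        """Peak normalised cross-correlation between two equal-length waveforms."""
        a = (a - a.mean()) / (a.std() * len(a))
        b = (b - b.mean()) / b.std()
        return np.max(np.correlate(a, b, mode="full"))

    def find_multiplets(waveforms, threshold=0.9):
        """Group events whose waveforms correlate above a similarity threshold."""
        groups = []
        for i, w in enumerate(waveforms):
            for group in groups:
                # compare against the first event of each existing multiplet
                if normalised_xcorr(waveforms[group[0]], w) >= threshold:
                    group.append(i)
                    break
            else:
                groups.append([i])   # start a new multiplet
        return groups
    ```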

  11. Optimisation study of a vehicle bumper subsystem with fuzzy parameters

    NASA Astrophysics Data System (ADS)

    Farkas, L.; Moens, D.; Donders, S.; Vandepitte, D.

    2012-10-01

    This paper deals with the design and optimisation for crashworthiness of a vehicle bumper subsystem, which is a key scenario for vehicle component design. Automotive manufacturers and suppliers have to find optimal design solutions for such subsystems that comply with the conflicting requirements of the regulatory bodies regarding functional performance (safety and repairability) and environmental impact (mass). For the bumper design challenge, an integrated methodology for multi-attribute design engineering of mechanical structures is set up. The integrated process captures the various tasks that are usually performed manually, thus facilitating automated design iterations for optimisation. Subsequently, an optimisation process is applied that takes the effect of parametric uncertainties into account, such that the system-level failure possibility is acceptable. This optimisation process is referred to as possibility-based design optimisation and integrates the fuzzy FE analysis applied for the uncertainty treatment in crash simulations. This process is the counterpart of reliability-based design optimisation, which is used in a probabilistic context with statistically defined parameters (variabilities).

  12. A graph-based computational framework for simulation and optimisation of coupled infrastructure networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jalving, Jordan; Abhyankar, Shrirang; Kim, Kibaek

    Here, we present a computational framework that facilitates the construction, instantiation, and analysis of large-scale optimization and simulation applications of coupled energy networks. The framework integrates the optimization modeling package PLASMO and the simulation package DMNetwork (built around PETSc). These tools use a common graph-based abstraction that enables us to achieve compatibility between data structures and to build applications that use network models of different physical fidelity. We also describe how to embed these tools within complex computational workflows using SWIFT, a tool that facilitates parallel execution of multiple simulation runs and management of input and output data. We discuss how to use these capabilities to target coupled natural gas and electricity systems.

  13. A graph-based computational framework for simulation and optimisation of coupled infrastructure networks

    DOE PAGES

    Jalving, Jordan; Abhyankar, Shrirang; Kim, Kibaek; ...

    2017-04-24

    Here, we present a computational framework that facilitates the construction, instantiation, and analysis of large-scale optimization and simulation applications of coupled energy networks. The framework integrates the optimization modeling package PLASMO and the simulation package DMNetwork (built around PETSc). These tools use a common graph-based abstraction that enables us to achieve compatibility between data structures and to build applications that use network models of different physical fidelity. We also describe how to embed these tools within complex computational workflows using SWIFT, a tool that facilitates parallel execution of multiple simulation runs and management of input and output data. We discuss how to use these capabilities to target coupled natural gas and electricity systems.

  14. Assessment of grid optimisation measures for the German transmission grid using open source grid data

    NASA Astrophysics Data System (ADS)

    Böing, F.; Murmann, A.; Pellinger, C.; Bruckmeier, A.; Kern, T.; Mongin, T.

    2018-02-01

    The expansion of capacities in the German transmission grid is a necessity for the further integration of renewable energy sources into the electricity sector. In this paper, the grid optimisation measures ‘Overhead Line Monitoring’, ‘Power-to-Heat’ and ‘Demand Response in the Industry’ are evaluated and compared against conventional grid expansion for the year 2030. Initially, the methodical approach of the simulation model is presented, and detailed descriptions of the grid model and the grid data used, which partly originate from open-source platforms, are provided. Further, this paper explains how ‘Curtailment’ and ‘Redispatch’ can be reduced by implementing grid optimisation measures and how the depreciation of economic costs can be determined considering construction costs. The developed simulations show that conventional grid expansion is more efficient and provides a greater grid-relieving effect than the evaluated grid optimisation measures.

  15. VLSI Technology for Cognitive Radio

    NASA Astrophysics Data System (ADS)

    VIJAYALAKSHMI, B.; SIDDAIAH, P.

    2017-08-01

    One of the most challenging tasks in cognitive radio is achieving efficiency in the spectrum sensing scheme to overcome the spectrum scarcity problem. The most popular and widely used spectrum sensing technique is the energy detection scheme, as it is very simple and does not require any prior information about the signal. We propose one such approach: an optimised spectrum sensing scheme with a reduced filter structure. The optimisation is done in terms of the area and power performance of the spectrum sensing architecture. The VLSI structure of the optimised flexible spectrum sensing scheme is simulated in Verilog using the Xilinx ISE software. Our method achieves a 13% reduction in area and a 66% reduction in power consumption in comparison to the flexible spectrum sensing scheme. All the results are tabulated and comparisons are made. Our model thus opens up a new scheme for optimised and effective spectrum sensing.

  16. zipHMMlib: a highly optimised HMM library exploiting repetitions in the input to speed up the forward algorithm.

    PubMed

    Sand, Andreas; Kristiansen, Martin; Pedersen, Christian N S; Mailund, Thomas

    2013-11-22

    Hidden Markov models are widely used for genome analysis as they combine ease of modelling with efficient analysis algorithms. Calculating the likelihood of a model using the forward algorithm has worst-case time complexity linear in the length of the sequence and quadratic in the number of states in the model. For genome analysis, however, the length runs to millions or billions of observations, and when maximising the likelihood hundreds of evaluations are often needed. A time-efficient forward algorithm is therefore a key ingredient in an efficient hidden Markov model library. We have built a software library for efficiently computing the likelihood of a hidden Markov model. The library exploits commonly occurring substrings in the input to reuse computations in the forward algorithm. In a pre-processing step our library identifies common substrings and builds a structure over the computations in the forward algorithm which can be reused. This analysis can be saved between uses of the library and is independent of concrete hidden Markov models, so one preprocessing can be used to run a number of different models. Using this library, we achieve up to 78 times shorter wall-clock time for realistic whole-genome analyses with a real and reasonably complex hidden Markov model. In one particular case the analysis was performed in less than 8 minutes, compared to 9.6 hours for the previously fastest library. We have implemented the preprocessing procedure and forward algorithm as a C++ library, zipHMM, with Python bindings for use in scripts. The library is available at http://birc.au.dk/software/ziphmm/.
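    For reference, the conventional forward recursion that zipHMM accelerates is sketched below. This is the plain O(T·K²) scaled forward algorithm on a toy two-state model; the substring-caching trick that gives the library its speed-up is not reproduced here, and the model parameters are invented.

    ```python
    import numpy as np

    def forward_log_likelihood(pi, A, B, obs):
        """Scaled forward algorithm: log P(obs | HMM) for a model with initial
        distribution pi (K,), row-stochastic transitions A (K, K) and emission
        matrix B (K, M); obs is a sequence of symbol indices."""
        alpha = pi * B[:, obs[0]]
        log_lik = np.log(alpha.sum())
        alpha /= alpha.sum()
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]   # one O(K^2) recursion step per symbol
            norm = alpha.sum()
            log_lik += np.log(norm)
            alpha /= norm                    # rescale to avoid underflow
        return log_lik

    # toy two-state, two-symbol model
    pi = np.array([0.5, 0.5])
    A = np.array([[0.9, 0.1], [0.2, 0.8]])
    B = np.array([[0.7, 0.3], [0.1, 0.9]])
    print(forward_log_likelihood(pi, A, B, [0, 0, 1, 1, 0]))
    ```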

  17. Scale-up and economic analysis of biodiesel production from municipal primary sewage sludge.

    PubMed

    Olkiewicz, Magdalena; Torres, Carmen M; Jiménez, Laureano; Font, Josep; Bengoa, Christophe

    2016-08-01

    Municipal wastewater sludge is a promising lipid feedstock for biodiesel production, but the need to eliminate the high water content before lipid extraction is the main limitation for scaling up. This study evaluates the economic feasibility of biodiesel production directly from liquid primary sludge based on experimental data at laboratory scale. Computational tools were used to model the process scale-up and the different configurations of lipid extraction in order to optimise this step, as it is the most expensive. The operational variables with a major influence on the cost were the extraction time and the amount of solvent. The optimised extraction process had a break-even biodiesel price of 1232 $/t, making it economically competitive with the current cost of fossil diesel. The proposed biodiesel production process from waste sludge eliminates the expensive step of sludge drying, lowering the biodiesel price. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Material model of pelvic bone based on modal analysis: a study on the composite bone.

    PubMed

    Henyš, Petr; Čapek, Lukáš

    2017-02-01

    Digital models based on finite element (FE) analysis are widely used in orthopaedics to predict the stress or strain in the bone due to bone-implant interaction. The usability of the model depends strongly on the bone material description. The material model that is most commonly used is based on a constant Young's modulus or on the apparent density of bone obtained from computer tomography (CT) data. The Young's modulus of bone is described in many experimental works with large variations in the results. The concept of measuring and validating the material model of the pelvic bone based on modal analysis is introduced in this pilot study. The modal frequencies, damping, and shapes of the composite bone were measured precisely by an impact hammer at 239 points. An FE model was built using the data pertaining to the geometry and apparent density obtained from the CT of the composite bone. The isotropic homogeneous Young's modulus and Poisson's ratio of the cortical and trabecular bone were estimated from the optimisation procedure including Gaussian statistical properties. The performance of the updated model was investigated through the sensitivity analysis of the natural frequencies with respect to the material parameters. The maximal error between the numerical and experimental natural frequencies of the bone reached 1.74 % in the first modal shape. Finally, the optimised parameters were matched with the data sheets of the composite bone. The maximal difference between the calibrated material properties and that obtained from the data sheet was 34 %. The optimisation scheme of the FE model based on the modal analysis data provides extremely useful calibration of the FE models with the uncertainty bounds and without the influence of the boundary conditions.

  19. Design Optimisation of a Magnetic Field Based Soft Tactile Sensor

    PubMed Central

    Raske, Nicholas; Kow, Junwai; Alazmani, Ali; Ghajari, Mazdak; Culmer, Peter; Hewson, Robert

    2017-01-01

    This paper investigates the design optimisation of a magnetic field based soft tactile sensor, comprised of a magnet and a Hall effect module separated by an elastomer. The aim was to minimise the sensitivity of the output force with respect to the input magnetic field; this was achieved by varying the geometry and material properties. Finite element simulations determined the magnetic field and structural behaviour under load. Genetic programming produced phenomenological expressions describing these responses. Optimisation studies constrained by a measurable force and stable loading conditions were conducted; these produced Pareto sets of designs from which the optimal sensor characteristics were selected. The optimisation demonstrated a compromise between sensitivity and the measurable force, and a fabricated version of the optimised sensor validated the improvements made using this methodology. The approach presented can be applied in general for optimising soft tactile sensor designs over a range of applications and sensing modes. PMID:29099787
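    Selecting designs from a Pareto set, as described above, amounts to keeping only the candidates that are not dominated in any objective. The sketch below is a generic non-dominated filter on invented (sensitivity, measurable force) pairs; it is not the paper's optimisation workflow, only an illustration of how a Pareto front is extracted.

    ```python
    def pareto_front(designs):
        """Return the non-dominated designs; each design is a tuple of objectives,
        all to be maximised. Objective values here are invented for illustration."""
        front = []
        for i, a in enumerate(designs):
            dominated = any(
                all(b[k] >= a[k] for k in range(len(a)))
                and any(b[k] > a[k] for k in range(len(a)))
                for j, b in enumerate(designs) if j != i
            )
            if not dominated:
                front.append(a)
        return front

    designs = [(0.8, 2.0), (0.6, 3.5), (0.9, 1.0), (0.5, 3.0), (0.7, 2.5)]
    print(pareto_front(designs))   # the trade-off curve between the two objectives
    ```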

  20. Controlled hydrostatic pressure stress downregulates the expression of ribosomal genes in preimplantation embryos: a possible protection mechanism?

    PubMed

    Bock, I; Raveh-Amit, H; Losonczi, E; Carstea, A C; Feher, A; Mashayekhi, K; Matyas, S; Dinnyes, A; Pribenszky, C

    2016-04-01

    The efficiency of various assisted reproductive techniques can be improved by preconditioning the gametes and embryos with sublethal hydrostatic pressure treatment. However, the underlying molecular mechanism responsible for this protective effect remains unknown and requires further investigation. Here, we studied the effect of optimised hydrostatic pressure treatment on the global gene expression of mouse oocytes after embryonic genome activation. Based on a gene expression microarray analysis, a significant effect of treatment was observed in 4-cell embryos derived from treated oocytes, revealing a transcriptional footprint of hydrostatic pressure-affected genes. Functional analysis identified numerous genes involved in protein synthesis that were downregulated in 4-cell embryos in response to hydrostatic pressure treatment, suggesting that regulation of translation has a major role in optimised hydrostatic pressure-induced stress tolerance. We present a comprehensive microarray analysis and further delineate a potential mechanism responsible for the protective effect of hydrostatic pressure treatment.

  1. Navigating catastrophes: Local but not global optimisation allows for macro-economic navigation of crises

    NASA Astrophysics Data System (ADS)

    Harré, Michael S.

    2013-02-01

    Two aspects of modern economic theory have dominated the recent discussion on the state of the global economy: Crashes in financial markets and whether or not traditional notions of economic equilibrium have any validity. We have all seen the consequences of market crashes: plummeting share prices, businesses collapsing and considerable uncertainty throughout the global economy. This seems contrary to what might be expected of a system in equilibrium where growth dominates the relatively minor fluctuations in prices. Recent work from within economics as well as by physicists, psychologists and computational scientists has significantly improved our understanding of the more complex aspects of these systems. With this interdisciplinary approach in mind, a behavioural economics model of local optimisation is introduced and three general properties are proven. The first is that under very specific conditions local optimisation leads to a conventional macro-economic notion of a global equilibrium. The second is that if both global optimisation and economic growth are required then under very mild assumptions market catastrophes are an unavoidable consequence. Third, if only local optimisation and economic growth are required then there is sufficient parametric freedom for macro-economic policy makers to steer an economy around catastrophes without overtly disrupting local optimisation.

  2. Evaluation of efficacy of metal artefact reduction technique using contrast media in Computed Tomography

    NASA Astrophysics Data System (ADS)

    Yusob, Diana; Zukhi, Jihan; Aziz Tajuddin, Abd; Zainon, Rafidah

    2017-05-01

    The aim of this study was to evaluate the efficacy of metal artefact reduction using contrast media in Computed Tomography (CT) imaging. A water-based abdomen phantom of diameter 32 cm (adult body size) was fabricated using polymethyl methacrylate (PMMA) material. Three different contrast agents (iodine, barium and gadolinium) were filled into small PMMA tubes and placed inside the water-based PMMA adult abdomen phantom. An orthopaedic metal screw was placed in each small PMMA tube separately. Two types of orthopaedic metal screw (stainless steel and titanium alloy) were scanned separately, with single-energy CT at 120 kV and dual-energy CT with fast kV-switching between 80 kV and 140 kV. The scan modes were set automatically using the current modulation care4Dose setting, and the scans were acquired at different pitch and slice thickness values. The contrast media technique on orthopaedic metal screws was optimised using a pitch of 0.60 and a slice thickness of 5.0 mm. The use of contrast media can reduce the metal streaking artefacts on CT images, enhance the CT images surrounding the implants, and has potential for improving diagnostic performance in patients with severe metallic artefacts. These results are valuable for imaging protocol optimisation in clinical applications.

  3. The Energy-Efficient Quarry: Towards improved understanding and optimisation of energy use and minimisation of CO2 generation in the aggregates industry.

    NASA Astrophysics Data System (ADS)

    Hill, Ian; White, Toby; Owen, Sarah

    2014-05-01

    Extraction and processing of rock materials to produce aggregates is carried out at some 20,000 quarries across the EU. All stages of the processing and transport of hard and dense materials inevitably consume high levels of energy and have consequent significant carbon footprints. The FP7 project "the Energy Efficient Quarry" (EE-Quarry) has been addressing this problem and has devised strategies, supported by modelling software, to assist the quarrying industry to assess and optimise its energy use, and to minimise its carbon footprint. Aggregate quarries across Europe vary enormously in the scale of the quarrying operations, the nature of the worked mineral, and the processing to produce a final market product. Nevertheless, most quarries involve most or all of a series of essential stages: deposit assessment, drilling and blasting, loading and hauling, and crushing and screening. The process of determining the energy efficiency of each stage is complex, but is broadly understood in principle, and there are numerous sources of information and guidance available in the literature and on-line. More complex still is the interaction between each of these stages. For example, using a little more energy in blasting to increase fragmentation may save much greater energy in later crushing and screening, but also generates more fines material, which is discarded as waste, and the embedded energy in this material is lost. Thus the calculation of the embedded energy in the waste material becomes an input to the determination of the blasting strategy. Such feedback loops abound in the overall quarry optimisation. The project has involved research and demonstration operations at a number of quarries distributed across Europe, carried out by all partners in the EE-Quarry project working in collaboration with many of the major quarrying companies operating in the EU. The EE-Quarry project is developing a sophisticated modelling tool, the "EE-Quarry Model", available to the quarrying industry on a web-based platform. This tool guides quarry managers and operators through the complex, multi-layered, iterative process of assessing the energy efficiency of their own quarry operation. They are able to evaluate the optimisation of the energy efficiency of the overall quarry by examining both the individual stages of processing and the interactions between them. The project is also developing on-line distance learning modules designed for Continuous Professional Development (CPD) activities for staff across the quarrying industry in the EU and beyond. The presentation will describe the development of the model, and the format and scope of the resulting software tool and the user support available to the quarrying industry.

  4. Optimisation of lateral car dynamics taking into account parameter uncertainties

    NASA Astrophysics Data System (ADS)

    Busch, Jochen; Bestle, Dieter

    2014-02-01

    Simulation studies on an active all-wheel-steering car show that disturbances of vehicle parameters have a high influence on lateral car dynamics. This motivates the need for robust design against such parameter uncertainties. A specific parametrisation is established combining deterministic, velocity-dependent steering control parameters with partly uncertain, velocity-independent vehicle parameters for simultaneous use in a numerical optimisation process. Model-based objectives are formulated and summarised in a multi-objective optimisation problem in which the lateral steady-state behaviour in particular is improved by an adaptation strategy based on measurable uncertainties. The normally distributed uncertainties are generated by optimal Latin hypercube sampling, and a response-surface-based strategy helps to cut down time-consuming model evaluations, which makes it possible to use a genetic optimisation algorithm. Optimisation results are discussed in different criterion spaces, and the achieved improvements confirm the validity of the proposed procedure.

  5. The solution of target assignment problem in command and control decision-making behaviour simulation

    NASA Astrophysics Data System (ADS)

    Li, Ni; Huai, Wenqing; Wang, Shaodan

    2017-08-01

    C2 (command and control) has been understood to be a critical military component in meeting an increasing demand for rapid information gathering and real-time decision-making in a dynamically changing battlefield environment. In this article, to improve a C2 behaviour model's reusability and interoperability, a behaviour modelling framework was proposed to specify a C2 model's internal modules and a set of interoperability interfaces based on the C-BML (coalition battle management language). WTA (weapon target assignment) is a typical C2 autonomous decision-making behaviour modelling problem. Unlike most WTA problem descriptions, here sensors were considered to be available detection resources and the relationship constraints between weapons and sensors were also taken into account, which brings the model much closer to actual applications. A modified differential evolution (MDE) algorithm was developed to solve this high-dimensional optimisation problem and obtained an optimal assignment plan with high efficiency. In a case study, we built a simulation system to validate the proposed C2 modelling framework and interoperability interface specification, and the new optimisation solution was used to solve the WTA problem efficiently and successfully.
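    The standard differential evolution loop that such a modified algorithm builds on is sketched below. It is illustrative only: the objective is a stand-in for the decoded weapon/sensor-to-target assignment cost, the continuous search space is a simplification of the real combinatorial problem, and the specific modifications of the MDE algorithm are not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def assignment_cost(x):
        """Placeholder objective: in the WTA setting this would score a decoded
        assignment; here it is an arbitrary multimodal test function."""
        return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)

    def differential_evolution(obj, dim, pop_size=50, F=0.6, CR=0.9, iters=300, bound=5.0):
        pop = rng.uniform(-bound, bound, (pop_size, dim))
        cost = np.array([obj(p) for p in pop])
        for _ in range(iters):
            for i in range(pop_size):
                others = [j for j in range(pop_size) if j != i]
                a, b, c = pop[rng.choice(others, 3, replace=False)]
                mutant = np.clip(a + F * (b - c), -bound, bound)   # DE/rand/1 mutation
                cross = rng.random(dim) < CR
                cross[rng.integers(dim)] = True                    # binomial crossover
                trial = np.where(cross, mutant, pop[i])
                trial_cost = obj(trial)
                if trial_cost <= cost[i]:                          # greedy selection
                    pop[i], cost[i] = trial, trial_cost
        return pop[cost.argmin()], cost.min()

    best, best_cost = differential_evolution(assignment_cost, dim=10)
    print(best_cost)
    ```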

  6. Vibration isolation design for periodically stiffened shells by the wave finite element method

    NASA Astrophysics Data System (ADS)

    Hong, Jie; He, Xueqing; Zhang, Dayi; Zhang, Bing; Ma, Yanhong

    2018-04-01

    Periodically stiffened shell structures are widely used due to their excellent specific strength, in particular for aeronautical and astronautical components. This paper presents an improved Wave Finite Element Method (Wave FEM) that can be employed to predict the band-gap characteristics of stiffened shell structures efficiently. An aero-engine casing, which is a typical periodically stiffened shell structure, was employed to verify the validity and efficiency of the Wave FEM. Good agreement has been found between the Wave FEM and the classical FEM for different boundary conditions. An effective wave selection method based on the Wave FEM has thus been put forward to filter the radial modes of a shell structure. Furthermore, an optimisation strategy combining the Wave FEM and a genetic algorithm is presented for periodically stiffened shell structures. The optimal out-of-plane band gap and the mass of the whole structure can be achieved by the optimisation strategy under an aerodynamic load. Results also indicate that the geometric parameters of the stiffeners can be selected such that the out-of-plane vibration attenuates significantly in the frequency band of interest. This study can provide valuable references for designing band gaps for vibration isolation.

  7. Real-time 2D spatially selective MRI experiments: Comparative analysis of optimal control design methods.

    PubMed

    Maximov, Ivan I; Vinding, Mads S; Tse, Desmond H Y; Nielsen, Niels Chr; Shah, N Jon

    2015-05-01

    There is an increasing need for the development of advanced radio-frequency (RF) pulse techniques in modern magnetic resonance imaging (MRI) systems, driven by recent advancements in ultra-high magnetic field systems, new parallel transmit/receive coil designs, and accessible powerful computational facilities. 2D spatially selective RF pulses are an example of advanced pulses with many applications of clinical relevance, e.g., reduced field-of-view imaging and MR spectroscopy. 2D spatially selective RF pulses are mostly generated and optimised with numerical methods that can handle large numbers of controls and multiple constraints. With this study we aim to demonstrate that numerical optimal control (OC) algorithms are efficient for the design of 2D spatially selective MRI experiments when robustness towards, e.g., field inhomogeneity is in focus. We have chosen three popular OC algorithms: two gradient-based, concurrent methods using first- and second-order derivatives, respectively, and a third that belongs to the sequential, monotonically convergent family. We used two experimental models: a water phantom and an in vivo human head. Taking into consideration the challenging experimental setup, our analysis suggests the use of the sequential, monotonic approach and the second-order gradient-based approach, as computational speed, experimental robustness, and image quality are key. All algorithms used in this work were implemented in the MATLAB environment and are freely available to the MRI community. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Real-Time 3D Tracking and Reconstruction on Mobile Phones.

    PubMed

    Prisacariu, Victor Adrian; Kähler, Olaf; Murray, David W; Reid, Ian D

    2015-05-01

    We present a novel framework for jointly tracking a camera in 3D and reconstructing the 3D model of an observed object. Due to the region-based approach, our formulation can handle untextured objects, partial occlusions, motion blur, dynamic backgrounds and imperfect lighting. Our formulation also allows for a very efficient implementation which achieves real-time performance on a mobile phone, by running the pose estimation and the shape optimisation in parallel. We use a level-set-based pose estimation but completely avoid the typically required explicit computation of a global distance. This leads to tracking rates of more than 100 Hz on a desktop PC and 30 Hz on a mobile phone. Further, we incorporate additional orientation information from the phone's inertial sensor, which helps us resolve the tracking ambiguities inherent to region-based formulations. The reconstruction step first probabilistically integrates 2D image statistics from selected keyframes into a 3D volume, and then imposes coherency and compactness using a total variational regularisation term. The global optimum of the overall energy function is found using a continuous max-flow algorithm, and we show that, similarly to tracking, the integration of per-voxel posteriors instead of likelihoods improves the precision and accuracy of the reconstruction.

  9. Payload Instrument Design Rules for Safe and Efficient Flight Operations

    NASA Astrophysics Data System (ADS)

    Montagnon, E.; Ferri, P.

    2004-04-01

    Payload operations are often neglected in favour of optimising the scientific performance of the instrument design. This has major drawbacks in terms of cost, safety, efficiency of operations and, ultimately, science return. By taking operational aspects into account in the early phases of the instrument design, with an additional effort that is more cultural than financial or technological, many problems can be avoided or minimised, with significant benefits to be gained in the mission execution phases. This paper presents possible improvements based on the use of the telemetry and telecommand packet standard, proper sharing of autonomy functions between instrument and platform, and enhanced interface documents.

  10. Optimisation of phase ratio in the triple jump using computer simulation.

    PubMed

    Allen, Sam J; King, Mark A; Yeadon, M R Fred

    2016-04-01

    The triple jump is an athletic event comprising three phases in which the optimal proportion of each phase to the total distance jumped, termed the phase ratio, is unknown. This study used a whole-body torque-driven computer simulation model of all three phases of the triple jump to investigate optimal technique. The technique of the simulation model was optimised by varying torque generator activation parameters using a Genetic Algorithm in order to maximise total jump distance, resulting in a hop-dominated technique (35.7%:30.8%:33.6%) and a distance of 14.05m. Optimisations were then run with penalties forcing the model to adopt hop and jump phases of 33%, 34%, 35%, 36%, and 37% of the optimised distance, resulting in total distances of: 13.79m, 13.87m, 13.95m, 14.05m, and 14.02m; and 14.01m, 14.02m, 13.97m, 13.84m, and 13.67m respectively. These results indicate that in this subject-specific case there is a plateau in optimum technique encompassing balanced and hop-dominated techniques, but that a jump-dominated technique is associated with a decrease in performance. Hop-dominated techniques are associated with higher forces than jump-dominated techniques; therefore optimal phase ratio may be related to a combination of strength and approach velocity. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Rational Design of Highly Potent and Slow-Binding Cytochrome bc1 Inhibitor as Fungicide by Computational Substitution Optimization

    PubMed Central

    Hao, Ge-Fei; Yang, Sheng-Gang; Huang, Wei; Wang, Le; Shen, Yan-Qing; Tu, Wen-Long; Li, Hui; Huang, Li-Shar; Wu, Jia-Wei; Berry, Edward A.; Yang, Guang-Fu

    2015-01-01

    Hit-to-lead (H2L) optimization is a key step for drug and agrochemical discovery. A critical challenge for H2L optimization is its low efficiency due to the lack of predictive methods with high accuracy. We describe a new computational method called Computational Substitution Optimization (CSO) that has allowed us to rapidly identify compounds with cytochrome bc1 complex inhibitory activity in the nanomolar and subnanomolar range. The comprehensively optimized candidate proved to be a slow-binding inhibitor of the bc1 complex, ~73-fold more potent (Ki = 4.1 nM) than the best commercial fungicide azoxystrobin (AZ; Ki = 297.6 nM), and shows excellent in vivo fungicidal activity against downy mildew and powdery mildew disease. The excellent correlation between experimental and calculated binding free-energy shifts, together with further crystallographic analysis, confirmed the prediction accuracy of the CSO method. To the best of our knowledge, CSO is a new computational approach to substitution-scanning mutagenesis of ligands and could be used as a general strategy for H2L optimisation in drug and agrochemical design.

  12. Study on optimal design of 210kW traction IPMSM considering thermal demagnetization characteristics

    NASA Astrophysics Data System (ADS)

    Kim, Young Hyun; Lee, Seong Soo; Cheon, Byung Chul; Lee, Jung Ho

    2018-04-01

    This study analyses the permanent magnet (PM) used in the rotor of an interior permanent magnet synchronous motor (IPMSM) for driving an electric railway vehicle (ERV), in the context of controllable shape, temperature, and external magnetic field. The positioning of the inserted magnets is a degree of freedom in the design of such machines. This paper describes a preliminary analysis using a parametric finite-element method, performed with the aim of achieving an effective design. Next, features of the experimental design, based on methods such as the central composite, Box-Behnken and Taguchi methods, are explored to optimise the shape for high power density. The results are used to produce an optimal design for IPMSMs, with design errors minimized using Maxwell 2D, a commercial program. Furthermore, the demagnetization process is analysed in computer simulation based on the magnetization and demagnetization theory for PM materials. The result of the analysis can be used to calculate the magnetization and demagnetization phenomena according to the input B-H curve. This paper presents the conditions for demagnetization by the external magnetic field in the driving and stopped states, and proposes a simulation method that can analyse demagnetization phenomena under each condition and design an IPMSM that maximizes efficiency and torque characteristics. Finally, operational characteristics are analysed in terms of the operation patterns of railway vehicles, and control conditions are deduced to achieve maximum efficiency in all sections. This was experimentally verified.

  13. Vertical phase separation in bulk heterojunction solar cells formed by in situ polymerization of fulleride

    PubMed Central

    Zhang, Lipei; Xing, Xing; Zheng, Lingling; Chen, Zhijian; Xiao, Lixin; Qu, Bo; Gong, Qihuang

    2014-01-01

    Vertical phase separation of the donor and the acceptor in organic bulk heterojunction solar cells is crucial to improve the exciton dissociation and charge transport efficiencies. This is because whilst the exciton diffusion length is limited, the organic film must be thick enough to absorb sufficient light. However, it is still a challenge to control the phase separation of a binary blend in a bulk heterojunction device architecture. Here we report the realization of vertical phase separation induced by in situ photo-polymerization of the acrylate-based fulleride. The power conversion efficiency of the devices with vertical phase separation increased by 20%. By optimising the device architecture, the power conversion efficiency of the single junction device reached 8.47%. We believe that in situ photo-polymerization of acrylate-based fulleride is a universal and controllable way to realise vertical phase separation in organic blends. PMID:24861168

  14. Neuromorphic Hardware Architecture Using the Neural Engineering Framework for Pattern Recognition.

    PubMed

    Wang, Runchun; Thakur, Chetan Singh; Cohen, Gregory; Hamilton, Tara Julia; Tapson, Jonathan; van Schaik, Andre

    2017-06-01

    We present a hardware architecture that uses the neural engineering framework (NEF) to implement large-scale neural networks on field programmable gate arrays (FPGAs) for performing massively parallel real-time pattern recognition. NEF is a framework that is capable of synthesising large-scale cognitive systems from subnetworks and we have previously presented an FPGA implementation of the NEF that successfully performs nonlinear mathematical computations. That work was developed based on a compact digital neural core, which consists of 64 neurons that are instantiated by a single physical neuron using a time-multiplexing approach. We have now scaled this approach up to build a pattern recognition system by combining identical neural cores together. As a proof of concept, we have developed a handwritten digit recognition system using the MNIST database and achieved a recognition rate of 96.55%. The system is implemented on a state-of-the-art FPGA and can process 5.12 million digits per second. The architecture and hardware optimisations presented offer high-speed and resource-efficient means for performing high-speed, neuromorphic, and massively parallel pattern recognition and classification tasks.

  15. Targeted flock/herd and individual ruminant treatment approaches.

    PubMed

    Kenyon, F; Jackson, F

    2012-05-04

    In Europe, most nematodoses are subclinical, involving morbid rather than mortal effects, and control is largely achieved using anthelmintics. In cattle, the genera most associated with sub-optimal performance are Ostertagia and Cooperia, whereas in sheep and goats, subclinical losses are most often caused by Teladorsagia and Trichostrongylus. In some regions, at certain times, other species such as Nematodirus and Haemonchus also cause disease in sheep and goats. Unfortunately, anthelmintic resistance has now become an issue for European small ruminant producers. One of the key aims of the EU-funded PARASOL project was to identify low-input and sustainable approaches to control nematode parasites in ruminants using refugia-based strategies. Two approaches to optimise anthelmintic treatments in sheep and cattle were studied: targeted treatments (TT) – whole-group treatments optimised on the basis of a marker of infection, e.g. faecal egg count (FEC), and targeted selected treatments (TST) – treatments given to identified individuals to provide epidemiological and/or production benefits. A number of indicators for TT and TST were assessed to define parasitological and production-system-specific indicators for treatment that best suited the regions where the PARASOL studies were conducted. These included liveweight gain, production efficiency, FEC, body condition score and diarrhoea score in small ruminants, and pepsinogen levels and Ostertagia bulk milk tank ELISA in cattle. The PARASOL studies confirmed the value of monitoring FEC as a means of targeting whole-flock treatments in small ruminants. In cattle, bulk milk tank ELISA and serum pepsinogen assays could be used retrospectively to determine the levels of exposure and hence, in the next season, to optimise anthelmintic usage. TST approaches in sheep and goats examined production efficiency and liveweight gain as indicators for treatment and confirmed the value of this approach in maintaining performance and anthelmintic susceptibility in the predominant gastrointestinal nematodes. There is good evidence that the TST approach selected less heavily for the development of resistance in comparison to routine monthly treatments. Further research is required to optimise markers for TT and TST, but it is also crucial to encourage producers/advisors to adapt these refugia-based strategies to maintain drug-susceptible parasites in order to provide sustainable control. Copyright © 2011 Elsevier B.V. All rights reserved.

  16. Radiation exposure in X-ray-based imaging techniques used in osteoporosis

    PubMed Central

    Adams, Judith E.; Guglielmi, Giuseppe; Link, Thomas M.

    2010-01-01

    Recent advances in medical X-ray imaging have enabled the development of new techniques capable of assessing not only bone quantity but also structure. This article provides (a) a brief review of the current X-ray methods used for quantitative assessment of the skeleton, (b) data on the levels of radiation exposure associated with these methods and (c) information about radiation safety issues. Radiation doses associated with dual-energy X-ray absorptiometry are very low. However, as with any X-ray imaging technique, each particular examination must always be clinically justified. When an examination is justified, the emphasis must be on dose optimisation of imaging protocols. Dose optimisation is more important for paediatric examinations because children are more vulnerable to radiation than adults. Methods based on multi-detector CT (MDCT) are associated with higher radiation doses. New 3D volumetric hip and spine quantitative computed tomography (QCT) techniques and high-resolution MDCT for evaluation of bone structure deliver doses to patients from 1 to 3 mSv. Low-dose protocols are needed to reduce radiation exposure from these methods and minimise associated health risks. PMID:20559834

  17. Probabilistic Sizing and Verification of Space Ceramic Structures

    NASA Astrophysics Data System (ADS)

    Denaux, David; Ballhause, Dirk; Logut, Daniel; Lucarelli, Stefano; Coe, Graham; Laine, Benoit

    2012-07-01

    Sizing of ceramic parts is best optimised using a probabilistic approach which takes into account the preexisting flaw distribution in the ceramic part to compute a probability of failure of the part depending on the applied load, instead of a maximum allowable load as for a metallic part. This requires extensive knowledge of the material itself but also an accurate control of the manufacturing process. In the end, risk reduction approaches such as proof testing may be used to lower the final probability of failure of the part. Sizing and verification of ceramic space structures have been performed by Astrium for more than 15 years, both with Zerodur and SiC: Silex telescope structure, Seviri primary mirror, Herschel telescope, Formosat-2 instrument, and other ceramic structures flying today. Throughout this period of time, Astrium has investigated and developed experimental ceramic analysis tools based on the Weibull probabilistic approach. In the scope of the ESA/ESTEC study: “Mechanical Design and Verification Methodologies for Ceramic Structures”, which is to be concluded in the beginning of 2012, existing theories, technical state-of-the-art from international experts, and Astrium experience with probabilistic analysis tools have been synthesized into a comprehensive sizing and verification method for ceramics. Both classical deterministic and more optimised probabilistic methods are available, depending on the criticality of the item and on optimisation needs. The methodology, based on proven theory, has been successfully applied to demonstration cases and has shown its practical feasibility.
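    The Weibull probabilistic approach mentioned above relates the failure probability of a brittle part to its stress field and pre-existing flaw distribution. A minimal sketch of the two-parameter Weibull law, summed over the element stresses and volumes of an FE model, is given below; the material parameters and element data are illustrative, not values for any specific ceramic or Astrium tool.

    ```python
    import numpy as np

    def weibull_failure_probability(stresses, volumes, sigma_0, m, v_0=1.0):
        """Two-parameter Weibull failure probability for a brittle part:
        P_f = 1 - exp(-sum_i (sigma_i / sigma_0)^m * V_i / V_0).
        sigma_0 (characteristic strength), m (Weibull modulus) and v_0 are
        illustrative inputs."""
        stresses = np.maximum(np.asarray(stresses, dtype=float), 0.0)  # tensile stress drives failure
        risk = np.sum((stresses / sigma_0) ** m * np.asarray(volumes, dtype=float) / v_0)
        return 1.0 - np.exp(-risk)

    # toy example: three elements with tensile stresses (MPa) and volumes (mm^3)
    print(weibull_failure_probability([120.0, 80.0, 40.0], [10.0, 25.0, 50.0],
                                      sigma_0=300.0, m=10.0, v_0=1.0))
    ```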

  18. Breast tumor malignancy modelling using evolutionary neural logic networks.

    PubMed

    Tsakonas, Athanasios; Dounias, Georgios; Panagi, Georgia; Panourgias, Evangelia

    2006-01-01

    The present work proposes a computer-assisted methodology for the effective modelling of the diagnostic decision on breast tumor malignancy. The suggested approach is based on innovative hybrid computational intelligence algorithms applied to related cytological data contained in past medical records. The experimental data used in this study were gathered in the early 1990s at the University of Wisconsin, based on post-diagnostic cytological observations performed by expert medical staff. The data were encoded in a computer database, and various alternative modelling techniques were then applied to them in an attempt to form diagnostic models. Previous methods included standard optimisation techniques as well as artificial intelligence approaches, and a variety of related publications exists in the modern literature on the subject. In this report, a hybrid computational intelligence approach is suggested, which effectively combines modern mathematical logic principles, neural computation and genetic programming. The approach proves promising both in terms of diagnostic accuracy and generalization capabilities, and in terms of comprehensibility and practical importance for the related medical staff.

  19. Machine learning strategy for accelerated design of polymer dielectrics

    DOE PAGES

    Mannodi-Kanakkithodi, Arun; Pilania, Ghanshyam; Huan, Tran Doan; ...

    2016-02-15

    The ability to efficiently design new and advanced dielectric polymers is hampered by the lack of sufficient, reliable data on wide polymer chemical spaces, and the difficulty of generating such data given time and computational/experimental constraints. Here, we address the issue of accelerating polymer dielectrics design by extracting learning models from data generated by accurate state-of-the-art first principles computations for polymers occupying an important part of the chemical subspace. The polymers are ‘fingerprinted’ as simple, easily attainable numerical representations, which are mapped to the properties of interest using a machine learning algorithm to develop an on-demand property prediction model. Further, a genetic algorithm is utilised to optimise polymer constituent blocks in an evolutionary manner, thus directly leading to the design of polymers with given target properties. While this philosophy of learning to make instant predictions and designs is demonstrated here for the example of polymer dielectrics, it is equally applicable to other classes of materials as well.
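    The fingerprint-to-property learning step can be sketched as a simple regression on numerical representations. The snippet below uses kernel ridge regression on synthetic data; the fingerprints, target property, and hyperparameters are invented, and the paper's actual descriptors and learning algorithm may differ.

    ```python
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # synthetic "fingerprints": fractions of hypothetical constituent blocks per polymer
    X = rng.random((200, 6))
    X /= X.sum(axis=1, keepdims=True)
    # synthetic target property (a stand-in for, e.g., a dielectric constant) with noise
    y = 3.0 + X @ np.array([2.0, 1.5, 0.5, 3.5, 1.0, 2.5]) + rng.normal(0, 0.05, 200)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
    model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=5.0).fit(X_train, y_train)
    print("test R^2:", model.score(X_test, y_test))   # on-demand property prediction model
    ```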

  20. Paediatric CT protocol optimisation: a design of experiments to support the modelling and optimisation process.

    PubMed

    Rani, K; Jahnen, A; Noel, A; Wolf, D

    2015-07-01

    In the last decade, several studies have emphasised the need to understand and optimise computed tomography (CT) procedures in order to reduce the radiation dose applied to paediatric patients. To evaluate the influence of the technical parameters on the radiation dose and the image quality, a statistical model has been developed using the design of experiments (DOE) method, which has been successfully used in various fields (industry, biology and finance), applied here to abdominal CT procedures for paediatric patients. A Box-Behnken DOE was used in this study. Three mathematical models (contrast-to-noise ratio, noise and CTDIvol) depending on three factors (tube current, tube voltage and level of iterative reconstruction) were developed and validated. They will serve as a basis for the development of a CT protocol optimisation model. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
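    For three factors, a Box-Behnken design places runs at the midpoints of the edges of the factor cube (pairs of factors at ±1 with the remaining factor at its centre level) plus centre points. The sketch below builds that coded-level matrix by hand; the factor names mirror the abstract, but the number of centre points and the coded levels are generic choices, not the protocol of the paper.

    ```python
    from itertools import combinations
    import numpy as np

    def box_behnken(n_factors, n_center=3):
        """Coded-level (-1, 0, +1) Box-Behnken design matrix."""
        rows = []
        for i, j in combinations(range(n_factors), 2):
            for a in (-1, 1):
                for b in (-1, 1):
                    row = [0] * n_factors
                    row[i], row[j] = a, b
                    rows.append(row)
        rows.extend([[0] * n_factors] * n_center)   # centre points
        return np.array(rows)

    factors = ["tube current", "tube voltage", "iterative reconstruction level"]
    design = box_behnken(len(factors))
    print(design.shape)   # (15, 3): 12 edge-midpoint runs + 3 centre points
    ```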

  1. Impact of New Water Sources on the Overall Water Network: An Optimisation Approach

    PubMed Central

    Jones, Brian C.; Hove-Musekwa, Senelani D.

    2014-01-01

    A mathematical programming problem is formulated for a water network with new water sources included. Salinity and water hardness are considered in the model, which is then solved using the Max-Min Ant System (MMAS) to assess the impact of new water sources on the total cost of the existing network. It is efficient to include new water sources if the distances to them are short or if there is a high penalty associated with failure to meet demand. Desalination unit costs also significantly affect the decision of whether to install new water sources into the existing network, while softening costs are generally negligible in making such decisions. Experimental results show that, in the example considered, it is efficient to reduce the number of desalination plants and remain with one central plant. The Max-Min Ant System algorithm appears to be an effective method, as shown by its lower computational time compared with the commercial solver Cplex. PMID:27382617
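    The defining features of MMAS are that only the best ant reinforces the pheromone trail and that pheromone values are clamped to a [tau_min, tau_max] band. The sketch below illustrates those mechanics on a toy assignment of demand nodes to water sources with an invented cost matrix; the real model's salinity, hardness and pipeline constraints are not represented.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def mmas_assign(cost, n_ants=20, iters=200, rho=0.1, tau_min=0.01, tau_max=1.0):
        """Max-Min Ant System sketch: assign each demand node to one source
        so that total cost is minimised."""
        n_nodes, n_sources = cost.shape
        tau = np.full((n_nodes, n_sources), tau_max)
        best_sol, best_cost = None, np.inf
        for _ in range(iters):
            iter_best, iter_cost = None, np.inf
            probs = tau * (1.0 / (cost + 1e-9))        # pheromone x heuristic desirability
            probs /= probs.sum(axis=1, keepdims=True)
            for _ in range(n_ants):
                sol = np.array([rng.choice(n_sources, p=p) for p in probs])
                c = cost[np.arange(n_nodes), sol].sum()
                if c < iter_cost:
                    iter_best, iter_cost = sol, c
            if iter_cost < best_cost:
                best_sol, best_cost = iter_best, iter_cost
            tau *= (1.0 - rho)                                    # evaporation
            tau[np.arange(n_nodes), iter_best] += rho * tau_max   # best-ant deposit only
            tau = np.clip(tau, tau_min, tau_max)                  # MMAS pheromone bounds
        return best_sol, best_cost

    cost = rng.uniform(1.0, 10.0, size=(6, 3))   # invented node-to-source costs
    print(mmas_assign(cost))
    ```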

  2. Examples of Mesh and NURBS modelling for in vivo lung counting studies.

    PubMed

    Farah, Jad; Broggio, David; Franck, Didier

    2011-03-01

    Realistic calibration coefficients for in vivo counting installations are assessed using voxel phantoms and Monte Carlo calculations. However, voxel phantom construction is time consuming and its flexibility extremely limited. This paper employs the Mesh and non-uniform rational B-spline (NURBS) graphical formats, which offer greater flexibility, to optimise the calibration of in vivo counting installations. Two studies validating the use of such phantoms, involving geometry deformation and modelling, were carried out to study the morphologic effect on lung counting efficiency. The created 3D models fitted the reference ones, with volumetric differences of <5%. Moreover, it was found that the counting efficiency varies with the inverse of the lung volume and that the latter takes precedence over chest wall thickness. Finally, a series of different thoracic female phantoms of various cup sizes, chest girths and internal organ volumes was created, starting from the International Commission on Radiological Protection (ICRP) adult female reference computational phantom, to give correction factors for the lung monitoring of female workers.

  3. Impact of the calibration period on the conceptual rainfall-runoff model parameter estimates

    NASA Astrophysics Data System (ADS)

    Todorovic, Andrijana; Plavsic, Jasna

    2015-04-01

    A conceptual rainfall-runoff model is defined by its structure and parameters, which are commonly inferred through model calibration. Parameter estimates depend on the objective function(s), the optimisation method, and the calibration period. Model calibration over different periods may result in dissimilar parameter estimates, while model efficiency decreases outside the calibration period. The problem of model (parameter) transferability, which conditions the reliability of hydrologic simulations, has been investigated for decades. In this paper, the dependence of the parameter estimates and model performance on the calibration period is analysed. The main question addressed is: are there any changes in optimised parameters and model efficiency that can be linked to changes in hydrologic or meteorological variables (flow, precipitation and temperature)? The conceptual, semi-distributed HBV-light model is calibrated over five-year periods shifted by one year (sliding time windows). The length of the calibration periods is selected to enable identification of all parameters. One water year of model warm-up precedes every simulation, which starts at the beginning of a water year. The model is calibrated using the built-in GAP optimisation algorithm. The objective function used for calibration is composed of the Nash-Sutcliffe coefficient for flows, the Nash-Sutcliffe coefficient for logarithms of flows, and the volumetric error, all of which participate in the composite objective function with approximately equal weights. The same prior parameter ranges are used in all simulations. The model is calibrated against flows observed at the Slovac stream gauge on the Kolubara River in Serbia (records from 1954 to 2013). There are no trends in precipitation or flows; however, there is a statistically significant increasing trend in temperature in this catchment. Parameter variability across the calibration periods is quantified in terms of standard deviations of normalised parameters, enabling detection of the most variable parameters. Correlation coefficients between the optimised model parameters and total precipitation P, mean temperature T and mean flow Q are calculated to give insight into parameter dependence on the hydrometeorological drivers. The results reveal high sensitivity of almost all model parameters to the calibration period. The highest variability is displayed by the refreezing coefficient, the water holding capacity, and the temperature gradient. The only statistically significant (decreasing) trend is detected in the evapotranspiration reduction threshold. Statistically significant correlation is detected between the precipitation gradient and precipitation depth, and between the time-area histogram base and flows. All other correlations are not statistically significant, implying that changes in optimised parameters cannot generally be linked to changes in P, T or Q. As for the model performance, the model reproduces the observed runoff satisfactorily, though the runoff is slightly overestimated in wet periods. The Nash-Sutcliffe efficiency coefficient (NSE) ranges from 0.44 to 0.79. Higher NSE values are obtained over wetter periods, which is supported by a statistically significant correlation between NSE and flows. Overall, no systematic variations in parameters or in model performance are detected. Parameter variability may therefore rather be attributed to errors in data or inadequacies in the model structure. Further research is required to examine the impact of the calibration strategy or model structure on the variability of optimised parameters in time.
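
    The composite calibration objective described above lends itself to a compact illustration. The following minimal sketch, in Python, combines the Nash-Sutcliffe efficiency of flows, the Nash-Sutcliffe efficiency of log-transformed flows and the volumetric error with approximately equal weights; the weights, the epsilon guard for log flows and the example flow series are assumptions for illustration, and neither HBV-light nor the GAP algorithm is reproduced here.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def volumetric_error(obs, sim):
    """Relative volume (bias) error over the calibration period."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return abs(np.sum(sim) - np.sum(obs)) / np.sum(obs)

def composite_objective(obs, sim, w=(1/3, 1/3, 1/3), eps=1e-6):
    """Equal-weight combination of NSE(Q), NSE(log Q) and volume error.

    Returns a value to be *minimised* by the calibration algorithm.
    """
    score = (w[0] * nse(obs, sim)
             + w[1] * nse(np.log(obs + eps), np.log(sim + eps))
             - w[2] * volumetric_error(obs, sim))
    return -score  # minimise the negative of the composite skill

# Hypothetical usage: observed and simulated daily flows for one 5-year window
obs = np.array([3.2, 4.1, 10.5, 7.3, 2.8])
sim = np.array([3.0, 4.6, 9.8, 7.9, 2.5])
print(composite_objective(obs, sim))
```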

  4. Optimised collision avoidance for an ultra-close rendezvous with a failed satellite based on the Gauss pseudospectral method

    NASA Astrophysics Data System (ADS)

    Chu, Xiaoyu; Zhang, Jingrui; Lu, Shan; Zhang, Yao; Sun, Yue

    2016-11-01

    This paper presents a trajectory planning algorithm to optimise the collision avoidance of a chasing spacecraft operating in ultra-close proximity to a failed satellite. The complex configuration and the tumbling motion of the failed satellite are considered. The two-spacecraft rendezvous dynamics are formulated in the target body frame, and the collision avoidance constraints are detailed, particularly concerning the uncertainties. An optimisation solution of the approaching problem is generated using the Gauss pseudospectral method. A closed-loop control is used to track the optimised trajectory. Numerical results are provided to demonstrate the effectiveness of the proposed algorithms.
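
    To make the transcription idea concrete, the sketch below shows the basic numerical building block of the Gauss pseudospectral method: Legendre-Gauss collocation nodes and quadrature weights mapped to the manoeuvre interval and used to approximate an integral cost. The control history and time interval are illustrative assumptions; the rendezvous dynamics, collision avoidance constraints and closed-loop tracking of the paper are not modelled.

```python
import numpy as np

def legendre_gauss(n, t0, tf):
    """Legendre-Gauss collocation nodes and quadrature weights on [t0, tf]."""
    tau, w = np.polynomial.legendre.leggauss(n)    # nodes/weights on [-1, 1]
    t = 0.5 * (tf - t0) * tau + 0.5 * (tf + t0)    # affine map to [t0, tf]
    w = 0.5 * (tf - t0) * w
    return t, w

# Approximate a running cost J = integral of u(t)^2 dt over the manoeuvre,
# here with a made-up control history u(t) = sin(t) for illustration only.
t, w = legendre_gauss(20, 0.0, 10.0)
u = np.sin(t)
J = np.dot(w, u ** 2)
print(J)   # close to the analytic value 5 - sin(20)/4
```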

  5. Identification, optimisation and in vivo evaluation of oxadiazole DGAT-1 inhibitors for the treatment of obesity and diabetes.

    PubMed

    McCoull, William; Addie, Matthew S; Birch, Alan M; Birtles, Susan; Buckett, Linda K; Butlin, Roger J; Bowker, Suzanne S; Boyd, Scott; Chapman, Stephen; Davies, Robert D M; Donald, Craig S; Green, Clive P; Jenner, Chloe; Kemmitt, Paul D; Leach, Andrew G; Moody, Graeme C; Gutierrez, Pablo Morentin; Newcombe, Nicholas J; Nowak, Thorsten; Packer, Martin J; Plowright, Alleyn T; Revill, John; Schofield, Paul; Sheldon, Chris; Stokes, Steve; Turnbull, Andrew V; Wang, Steven J Y; Whalley, David P; Wood, J Matthew

    2012-06-15

    A novel series of DGAT-1 inhibitors was discovered from an oxadiazole amide high-throughput screening (HTS) hit. Optimisation of potency and ligand lipophilicity efficiency (LLE) resulted in a carboxylic acid-containing clinical candidate, 53 (AZD3988), which demonstrated excellent DGAT-1 potency (0.6 nM), good pharmacokinetics and pre-clinical in vivo efficacy that could be rationalised through a PK/PD relationship. Copyright © 2012 Elsevier Ltd. All rights reserved.
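
    Ligand lipophilicity efficiency is commonly computed as pIC50 minus a lipophilicity measure (logP or logD). The snippet below shows that arithmetic for the reported 0.6 nM potency; the logD value used is an assumed placeholder, not a property reported for AZD3988.

```python
import math

def lle(ic50_nM, logd):
    """Ligand lipophilicity efficiency: pIC50 - logD (logP is also used)."""
    pIC50 = -math.log10(ic50_nM * 1e-9)   # convert nM to M, then take -log10
    return pIC50 - logd

# Potency from the abstract (0.6 nM); the logD here is an assumed placeholder.
print(lle(0.6, 2.0))   # pIC50 ~ 9.22, so LLE ~ 7.2 under this assumption
```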

  6. Performance assessment and optimisation of a large information system by combined customer relationship management and resilience engineering: a mathematical programming approach

    NASA Astrophysics Data System (ADS)

    Azadeh, A.; Foroozan, H.; Ashjari, B.; Motevali Haghighi, S.; Yazdanparast, R.; Saberi, M.; Torki Nejad, M.

    2017-10-01

    Information systems (ISs) and information technologies (ITs) play a critical role in large, complex gas corporations. Many factors, such as human, organisational and environmental factors, affect ISs in an organisation. Investigating IS success is therefore considered a complex problem. Moreover, because of the competitive business environment and the high volume of information flow in organisations, new issues such as resilient ISs and successful customer relationship management (CRM) have emerged. A resilient IS will provide sustainable delivery of information to internal and external customers. This paper presents an integrated approach to enhance and optimise the performance of each component of a large IS based on CRM and resilience engineering (RE) in a gas company. The enhancement of performance can help ISs to perform business tasks efficiently. The data are collected from standard questionnaires and then analysed by data envelopment analysis, selecting the optimal mathematical programming approach. The selected model is validated and verified by the principal component analysis method. Finally, CRM and RE factors are identified as influential factors through sensitivity analysis for this particular case study. To the best of our knowledge, this is the first study on performance assessment and optimisation of a large IS by combined RE and CRM.

  7. An improved PSO-SVM model for online recognition defects in eddy current testing

    NASA Astrophysics Data System (ADS)

    Liu, Baoling; Hou, Dibo; Huang, Pingjie; Liu, Banteng; Tang, Huayi; Zhang, Wubo; Chen, Peihua; Zhang, Guangxin

    2013-12-01

    Accurate and rapid recognition of defects is essential for structural integrity and health monitoring of in-service devices using eddy current (EC) non-destructive testing. This paper introduces a novel model-free method that includes three main modules: a signal pre-processing module, a classifier module and an optimisation module. In the signal pre-processing module, a two-stage differential structure is proposed to suppress the lift-off fluctuation that could contaminate the EC signal. In the classifier module, a multi-class support vector machine (SVM) based on the one-against-one strategy is utilised for its good accuracy. In the optimisation module, the optimal parameters of the classifier are obtained by an improved particle swarm optimisation (IPSO) algorithm. The proposed IPSO technique improves the convergence performance of the basic PSO through the following strategies: nonlinear processing of the inertia weight, and the introduction of a black hole model and a simulated annealing model with extremum disturbance. The good generalisation ability of the IPSO-SVM model has been validated by adding additional specimens to the testing set. Experiments show that the proposed algorithm achieves higher recognition accuracy and efficiency than other well-known classifiers, and its superiority is more obvious with smaller training sets, which supports online application.
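
    A minimal sketch of one ingredient of the IPSO described above, a nonlinearly decaying inertia weight, is given below on a generic benchmark objective. The quadratic decay law, swarm size and acceleration constants are assumptions for illustration; the black-hole and simulated-annealing disturbance strategies and the coupling to SVM parameter selection are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    return np.sum(x ** 2, axis=1)          # simple benchmark objective

def pso(obj, dim=2, n_particles=20, iters=100, w_max=0.9, w_min=0.4,
        c1=2.0, c2=2.0, lo=-5.0, hi=5.0):
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), obj(x)
    g = pbest[np.argmin(pbest_f)].copy()
    for k in range(iters):
        # nonlinear (quadratic) inertia-weight decay instead of a linear ramp
        w = w_min + (w_max - w_min) * (1.0 - k / iters) ** 2
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = obj(x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

print(pso(sphere))   # should approach the optimum at the origin
```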

  8. Topology optimisation for natural convection problems

    NASA Astrophysics Data System (ADS)

    Alexandersen, Joe; Aage, Niels; Andreasen, Casper Schousboe; Sigmund, Ole

    2014-12-01

    This paper demonstrates the application of the density-based topology optimisation approach for the design of heat sinks and micropumps based on natural convection effects. The problems are modelled under the assumptions of steady-state laminar flow using the incompressible Navier-Stokes equations coupled to the convection-diffusion equation through the Boussinesq approximation. In order to facilitate topology optimisation, the Brinkman approach is taken to penalise velocities inside the solid domain and the effective thermal conductivity is interpolated in order to accommodate differences in thermal conductivity of the solid and fluid phases. The governing equations are discretised using stabilised finite elements and topology optimisation is performed for two different problems using discrete adjoint sensitivity analysis. The study shows that topology optimisation is a viable approach for designing heat sink geometries cooled by natural convection and micropumps powered by natural convection.
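
    The Brinkman penalisation and conductivity interpolation can be illustrated with simple interpolation functions of the design density. The sketch below uses a RAMP-style inverse-permeability interpolation and a power-law conductivity interpolation that are common in density-based flow topology optimisation; the functional forms and parameter values are generic assumptions, not the specific choices of the paper.

```python
import numpy as np

def brinkman_alpha(rho, alpha_min=0.0, alpha_max=1e5, q=0.1):
    """Inverse permeability vs. design density rho in [0, 1].

    rho = 1 (fluid) gives alpha_min, rho = 0 (solid) gives alpha_max; the
    convexity parameter q controls the sharpness of the interpolation. This
    RAMP-style form is used here only to illustrate the penalisation idea.
    """
    rho = np.asarray(rho, float)
    return alpha_max + (alpha_min - alpha_max) * rho * (1.0 + q) / (rho + q)

def effective_conductivity(rho, k_fluid=0.6, k_solid=200.0, p=3.0):
    """Power-law interpolation between fluid and solid conductivity."""
    rho = np.asarray(rho, float)
    return k_fluid + (k_solid - k_fluid) * (1.0 - rho) ** p

rho = np.linspace(0.0, 1.0, 5)
print(brinkman_alpha(rho))
print(effective_conductivity(rho))
```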

  9. A meta-model based approach for rapid formability estimation of continuous fibre reinforced components

    NASA Astrophysics Data System (ADS)

    Zimmerling, Clemens; Dörr, Dominik; Henning, Frank; Kärger, Luise

    2018-05-01

    Due to their high mechanical performance, continuous fibre reinforced plastics (CoFRP) are becoming increasingly important for load-bearing structures. In many cases, manufacturing CoFRPs comprises a forming process of textiles. To predict and optimise the forming behaviour of a component, numerical simulations are applied. However, for maximum part quality, both the geometry and the process parameters must match in mutual regard, which in turn requires numerous numerically expensive optimisation iterations. In both textile and metal forming, a lot of research has focused on determining optimum process parameters while regarding the geometry as invariable. In this work, a meta-model based approach at component level is proposed that provides a rapid estimation of the formability for variable geometries based on pre-sampled, physics-based draping data. Initially, a geometry recognition algorithm scans the geometry and extracts a set of doubly-curved regions with relevant geometry parameters. If the relevant parameter space is not covered by the underlying database, additional samples are drawn via Finite-Element draping simulations according to a suitable design table for computer experiments. Time-saving parallel runs of the physical simulations accelerate the data acquisition. Ultimately, a Gaussian regression meta-model is built from the database. The method is demonstrated on a box-shaped generic structure. The predicted results are in good agreement with physics-based draping simulations. Since evaluations of the established meta-model are numerically inexpensive, any further design exploration (e.g. robustness analysis or design optimisation) can be performed in a short time. It is expected that the proposed method also offers great potential for future applications along virtual process chains: for each process step along the chain, a meta-model can be set up to predict the impact of design variations on manufacturability and part performance. Thus, the method is considered to facilitate a lean and economic part and process design under consideration of manufacturing effects.
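
    A hedged sketch of the meta-modelling step is given below: a Gaussian-process (kriging) regressor fitted to pre-sampled geometry-parameter/formability pairs and evaluated cheaply for a new geometry variant. The feature names, bounds and synthetic response are assumptions standing in for the Finite-Element draping data, and scikit-learn's GaussianProcessRegressor stands in for the Gaussian regression meta-model used in the work.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(1)

# Placeholder "draping" samples: geometry parameters -> formability indicator
# (e.g. maximum shear angle); in practice these would come from FE simulations.
X = rng.uniform([10.0, 0.5], [100.0, 3.0], size=(40, 2))   # radius, depth ratio
y = 0.4 * X[:, 0] - 8.0 * X[:, 1] + rng.normal(0.0, 0.5, 40)

kernel = ConstantKernel(1.0) * RBF(length_scale=[10.0, 1.0])
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Cheap evaluation of a new geometry variant, with predictive uncertainty
x_new = np.array([[55.0, 1.2]])
mean, std = gp.predict(x_new, return_std=True)
print(mean, std)
```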

  10. Optimisation of the hybrid renewable energy system by HOMER, PSO and CPSO for the study area

    NASA Astrophysics Data System (ADS)

    Khare, Vikas; Nema, Savita; Baredar, Prashant

    2017-04-01

    This study is based on simulation and optimisation of the renewable energy system of the police control room at Sagar in central India. To analyse this hybrid system, the meteorological data of solar insolation and hourly wind speeds of Sagar in central India (longitude 78°45′ and latitude 23°50′) have been considered. The pattern of load consumption is studied and suitably modelled for optimisation of the hybrid energy system using HOMER software. The results are compared with those of the particle swarm optimisation and the chaotic particle swarm optimisation algorithms. The use of these two algorithms to optimise the hybrid system leads to a higher quality result with faster convergence. Based on the optimisation result, it has been found that replacing conventional energy sources by the solar-wind hybrid renewable energy system will be a feasible solution for the distribution of electric power as a stand-alone application at the police control room. This system is more environmentally friendly than the conventional diesel generator. The fuel cost is reduced by approximately 70-80% compared with the conventional diesel generator.

  11. A novel soft tissue prediction methodology for orthognathic surgery based on probabilistic finite element modelling

    PubMed Central

    Borghi, Alessandro; Ruggiero, Federica; Badiali, Giovanni; Bianchi, Alberto; Marchetti, Claudio; Rodriguez-Florez, Naiara; Breakey, Richard W. F.; Jeelani, Owase; Dunaway, David J.; Schievano, Silvia

    2018-01-01

    Repositioning of the maxilla in orthognathic surgery is carried out for functional and aesthetic purposes. Pre-surgical planning tools can predict 3D facial appearance by computing the response of the soft tissue to the changes to the underlying skeleton. The clinical use of commercial prediction software remains controversial, likely due to the deterministic nature of these computational predictions. A novel probabilistic finite element model (FEM) for the prediction of postoperative facial soft tissues is proposed in this paper. A probabilistic FEM was developed and validated on a cohort of eight patients who underwent maxillary repositioning and had pre- and postoperative cone beam computed tomography (CBCT) scans taken. Firstly, a variables correlation assessed various modelling parameters. Secondly, a design of experiments (DOE) provided a range of potential outcomes based on uniformly distributed input parameters, followed by an optimisation. Lastly, the second DOE iteration provided optimised predictions with a probability range. A range of 3D predictions was obtained using the probabilistic FEM and validated using reconstructed soft tissue surfaces from the postoperative CBCT data. The predictions in the nose and upper lip areas accurately include the true postoperative position, whereas the prediction under-estimates the position of the cheeks and lower lip. A probabilistic FEM has been developed and validated for the prediction of the facial appearance following orthognathic surgery. This method shows how inaccuracies in the modelling and uncertainties in executing surgical planning influence the soft tissue prediction and it provides a range of predictions including a minimum and maximum, which may be helpful for patients in understanding the impact of surgery on the face. PMID:29742139

  12. A novel soft tissue prediction methodology for orthognathic surgery based on probabilistic finite element modelling.

    PubMed

    Knoops, Paul G M; Borghi, Alessandro; Ruggiero, Federica; Badiali, Giovanni; Bianchi, Alberto; Marchetti, Claudio; Rodriguez-Florez, Naiara; Breakey, Richard W F; Jeelani, Owase; Dunaway, David J; Schievano, Silvia

    2018-01-01

    Repositioning of the maxilla in orthognathic surgery is carried out for functional and aesthetic purposes. Pre-surgical planning tools can predict 3D facial appearance by computing the response of the soft tissue to the changes to the underlying skeleton. The clinical use of commercial prediction software remains controversial, likely due to the deterministic nature of these computational predictions. A novel probabilistic finite element model (FEM) for the prediction of postoperative facial soft tissues is proposed in this paper. A probabilistic FEM was developed and validated on a cohort of eight patients who underwent maxillary repositioning and had pre- and postoperative cone beam computed tomography (CBCT) scans taken. Firstly, a variables correlation assessed various modelling parameters. Secondly, a design of experiments (DOE) provided a range of potential outcomes based on uniformly distributed input parameters, followed by an optimisation. Lastly, the second DOE iteration provided optimised predictions with a probability range. A range of 3D predictions was obtained using the probabilistic FEM and validated using reconstructed soft tissue surfaces from the postoperative CBCT data. The predictions in the nose and upper lip areas accurately include the true postoperative position, whereas the prediction under-estimates the position of the cheeks and lower lip. A probabilistic FEM has been developed and validated for the prediction of the facial appearance following orthognathic surgery. This method shows how inaccuracies in the modelling and uncertainties in executing surgical planning influence the soft tissue prediction and it provides a range of predictions including a minimum and maximum, which may be helpful for patients in understanding the impact of surgery on the face.
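
    The design-of-experiments step with uniformly distributed input parameters can be sketched with a space-filling sampler. Below, a Latin hypercube design is drawn over three illustrative soft-tissue modelling parameters; the parameter names and bounds are assumptions, and each sampled row would parameterise one finite element run in a workflow of this kind.

```python
import numpy as np
from scipy.stats import qmc

# Three illustrative uncertain inputs for a probabilistic FEM run:
# Young's modulus of soft tissue (kPa), Poisson ratio, sliding factor.
l_bounds = [1.0, 0.30, 0.0]
u_bounds = [50.0, 0.49, 1.0]

sampler = qmc.LatinHypercube(d=3, seed=0)
unit_samples = sampler.random(n=30)                 # 30 points in [0, 1]^3
designs = qmc.scale(unit_samples, l_bounds, u_bounds)

for params in designs[:3]:
    # each row would parameterise one FEM simulation of the post-operative face
    print(params)
```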

  13. Analysis of optimisation method for a two-stroke piston ring using the Finite Element Method and the Simulated Annealing Method

    NASA Astrophysics Data System (ADS)

    Kaliszewski, M.; Mazuro, P.

    2016-09-01

    The Simulated Annealing Method of optimisation for the sealing piston ring geometry is tested. The aim of the optimisation is to develop a ring geometry which would exert the demanded pressure on a cylinder while being bent to fit the cylinder. A method of FEM analysis of an arbitrary piston ring geometry is applied in ANSYS software. The demanded pressure function (based on formulae presented by A. Iskra) as well as the objective function are introduced. A geometry definition constructed by polynomials in a radial coordinate system is presented and discussed. A possible application of the Simulated Annealing Method to a piston ring optimisation task is proposed and visualised. Difficulties leading to a possible lack of convergence of the optimisation are presented. An example of an unsuccessful optimisation performed in APDL is discussed. A possible line of further improvement of the optimisation is proposed.
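
    A generic simulated-annealing loop of the kind referred to above is sketched below, with a placeholder objective standing in for the FEM-based misfit between the exerted and demanded ring pressure. The cooling schedule, step size and move rule are illustrative assumptions; the ANSYS/APDL evaluation and the polynomial ring geometry parameterisation are not reproduced.

```python
import math
import random

random.seed(0)

def objective(x):
    # placeholder for the FEM-based misfit between exerted and demanded pressure
    return sum((xi - 1.0) ** 2 for xi in x)

def simulated_annealing(obj, x0, t0=1.0, t_min=1e-4, alpha=0.95, steps=50):
    x, fx = list(x0), obj(x0)
    best, fbest = list(x), fx
    t = t0
    while t > t_min:
        for _ in range(steps):
            cand = [xi + random.gauss(0.0, 0.1) for xi in x]   # random move
            fc = obj(cand)
            # accept improvements always, worse moves with Boltzmann probability
            if fc < fx or random.random() < math.exp((fx - fc) / t):
                x, fx = cand, fc
                if fx < fbest:
                    best, fbest = list(x), fx
        t *= alpha      # geometric cooling schedule
    return best, fbest

print(simulated_annealing(objective, [0.0, 0.0, 0.0]))
```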

  14. Synthesis of Arbitrary Quantum Circuits to Topological Assembly: Systematic, Online and Compact.

    PubMed

    Paler, Alexandru; Fowler, Austin G; Wille, Robert

    2017-09-05

    It is challenging to transform an arbitrary quantum circuit into a form protected by surface code quantum error correcting codes (a variant of topological quantum error correction), especially if the goal is to minimise overhead. One of the issues is the efficient placement of magic state distillation sub-circuits, so-called distillation boxes, in the space-time volume that abstracts the computation's required resources. This work presents a general, systematic, online method for the synthesis of such circuits. Distillation box placement is controlled by so-called schedulers. The work introduces a greedy scheduler generating compact box placements. The implemented software, whose source code is available at www.github.com/alexandrupaler/tqec, is used to illustrate and discuss synthesis examples. Synthesis and optimisation improvements are proposed.

  15. Optimisation in radiotherapy. III: Stochastic optimisation algorithms and conclusions.

    PubMed

    Ebert, M

    1997-12-01

    This is the final article in a three part examination of optimisation in radiotherapy. Previous articles have established the bases and form of the radiotherapy optimisation problem, and examined certain types of optimisation algorithm, namely, those which perform some form of ordered search of the solution space (mathematical programming), and those which attempt to find the closest feasible solution to the inverse planning problem (deterministic inversion). The current paper examines algorithms which search the space of possible irradiation strategies by stochastic methods. The resulting iterative search methods move about the solution space by sampling random variates, which gradually become more constricted as the algorithm converges upon the optimal solution. This paper also discusses the implementation of optimisation in radiotherapy practice.

  16. A new web-based modelling tool (Websim-MILQ) aimed at optimisation of thermal treatments in the dairy industry.

    PubMed

    Schutyser, M A I; Straatsma, J; Keijzer, P M; Verschueren, M; De Jong, P

    2008-11-30

    In the framework of a cooperative EU research project (MILQ-QC-TOOL), a web-based modelling tool (Websim-MILQ) was developed for optimisation of thermal treatments in the dairy industry. The web-based tool enables optimisation of thermal treatments with respect to product safety, quality and costs. It can be applied to existing products and processes but also to reduce time to market for new products. Important aspects of the tool are its user-friendliness and its specifications customised to the needs of small dairy companies. To challenge the web-based tool, it was applied for optimisation of thermal treatments in 16 dairy companies producing yoghurt, fresh cream, chocolate milk and cheese. Optimisation with WebSim-MILQ resulted in concrete improvements with respect to risk of microbial contamination, cheese yield, fouling and production costs. In this paper we illustrate the use of WebSim-MILQ for optimisation of a cheese milk pasteurisation process, where we could increase the cheese yield (1 extra cheese for each 100 cheeses produced from the same amount of milk) and reduce the risk of contamination of pasteurised cheese milk with thermoresistant streptococci from critical to negligible. In another case we demonstrate the advantage of changing from an indirect to a direct heating method for a UHT process, resulting in 80% less fouling while improving product quality and maintaining product safety.

  17. Carbon dioxide sequestration using NaHSO4 and NaOH: A dissolution and carbonation optimisation study.

    PubMed

    Sanna, Aimaro; Steel, Luc; Maroto-Valer, M Mercedes

    2017-03-15

    The use of NaHSO4 to leach out Mg from lizardite-rich serpentinite (in the form of MgSO4) and the carbonation of CO2 (captured in the form of Na2CO3 using NaOH) to form MgCO3 and Na2SO4 was investigated. Unlike ammonium sulphate, sodium sulphate can be separated via precipitation during the recycling step, avoiding the energy-intensive evaporation process required in NH4-based processes. To determine the effectiveness of the NaHSO4/NaOH process when applied to lizardite, optimisation of the dissolution and carbonation steps was performed using a UK lizardite-rich serpentine. Temperature, solid/liquid ratio, particle size, concentration and molar ratio were evaluated. An optimal dissolution efficiency of 69.6% was achieved over 3 h at 100 °C using 1.4 M sodium bisulphate and 50 g/l serpentine with particle size 75-150 μm. An optimal carbonation efficiency of 95.4% was achieved over 30 min at 90 °C and a 1:1 magnesium:sodium carbonate molar ratio using non-synthesised solution. The CO2 sequestration capacity was 223.6 g carbon dioxide/kg serpentine (66.4% in terms of Mg bonded to hydromagnesite), which is comparable with those obtained using ammonium-based processes. Therefore, lizardite-rich serpentinites represent a valuable resource for the NaHSO4/NaOH based pH swing mineralisation process. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. An evolution of image source camera attribution approaches.

    PubMed

    Jahanirad, Mehdi; Wahab, Ainuddin Wahid Abdul; Anuar, Nor Badrul

    2016-05-01

    Camera attribution plays an important role in digital image forensics by providing the evidence and distinguishing characteristics of the origin of the digital image. It allows the forensic analyser to find the possible source camera which captured the image under investigation. However, in real-world applications, these approaches have faced many challenges due to the large set of multimedia data publicly available through photo sharing and social network sites, captured under uncontrolled conditions and subjected to a variety of hardware and software post-processing operations. Moreover, the legal system only accepts the forensic analysis of the digital image evidence if the applied camera attribution techniques are unbiased, reliable, nondestructive and widely accepted by the experts in the field. The aim of this paper is to investigate the evolutionary trend of image source camera attribution approaches from fundamentals to practice, in particular with the application of image processing and data mining techniques. Extracting implicit knowledge from images using intrinsic image artifacts for source camera attribution requires a structured image mining process. In this paper, we attempt to provide an introductory tutorial on the image processing pipeline, to determine the general classification of the features corresponding to different components for source camera attribution. The article also reviews techniques of source camera attribution more comprehensively in the domain of image forensics, in conjunction with the presentation of classifying ongoing developments within the specified area. The classification of the existing source camera attribution approaches is presented based on specific parameters, such as the colour image processing pipeline, hardware- and software-related artifacts and the methods to extract such artifacts. The more recent source camera attribution approaches, which have not yet gained sufficient attention among image forensics researchers, are also critically analysed and further categorised into four different classes, namely, optical aberrations based, sensor camera fingerprints based, processing statistics based and processing regularities based, to present a classification. Furthermore, this paper aims to investigate the challenging problems and the proposed strategies of such schemes based on the suggested taxonomy, to plot an evolution of the source camera attribution approaches with respect to the subjective optimisation criteria over the last decade. The optimisation criteria were determined based on the strategies proposed to increase the detection accuracy, robustness and computational efficiency of source camera brand, model or device attribution. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  19. Propulsive efficiency of frog swimming with different feet and swimming patterns

    PubMed Central

    Jizhuang, Fan; Wei, Zhang; Bowen, Yuan; Gangfeng, Liu

    2017-01-01

    Aquatic and terrestrial animals have different swimming performances and mechanical efficiencies based on their different swimming methods. To explore propulsion in swimming frogs, this study calculated mechanical efficiencies based on data describing aquatic and terrestrial webbed-foot shapes and swimming patterns. First, a simplified frog model and dynamic equation were established, and hydrodynamic forces on the foot were computed according to computational fluid dynamic calculations. Then, a two-link mechanism was used to stand in for the diverse and complicated hind legs found in different frog species, in order to simplify the input work calculation. Joint torques were derived based on the virtual work principle to compute the efficiency of foot propulsion. Finally, the two foot shapes and swimming patterns were combined to compute propulsive efficiency. The aquatic frog demonstrated a propulsive efficiency (43.11%) between those of drag-based and lift-based propulsion, while the terrestrial frog efficiency (29.58%) fell within the range of drag-based propulsion. The results illustrate that the swimming pattern is the main factor determining swimming performance and efficiency. PMID:28302669

  20. Classification of osteoporosis by artificial neural network based on monarch butterfly optimisation algorithm.

    PubMed

    Devikanniga, D; Joshua Samuel Raj, R

    2018-04-01

    Osteoporosis is a life-threatening disease which commonly affects women, mostly after their menopause. It primarily causes mild bone fractures, which at an advanced stage lead to the death of an individual. The diagnosis of osteoporosis is based on bone mineral density (BMD) values obtained through various clinical methods from various skeletal regions. The main objective of the authors' work is to develop a hybrid classifier model that discriminates osteoporotic patients from healthy persons based on BMD values. In this Letter, the authors propose the monarch butterfly optimisation-based artificial neural network classifier, which helps in earlier diagnosis and prevention of osteoporosis. The experiments were conducted using the 10-fold cross-validation method for two datasets, lumbar spine and femoral neck. The results were compared with other similar hybrid approaches. The proposed method resulted in accuracy, specificity and sensitivity of 97.9 ± 0.14%, 98.33 ± 0.03% and 95.24 ± 0.08%, respectively, for the lumbar spine dataset and 99.3 ± 0.16%, 99.2 ± 0.13% and 100%, respectively, for the femoral neck dataset. Further, its performance is compared using receiver operating characteristic analysis and the Wilcoxon signed-rank test. The results proved that the proposed classifier is efficient and it outperformed the other approaches in all the cases.
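
    The evaluation protocol (10-fold cross-validation of a neural-network classifier on BMD-style features) can be sketched as below. Scikit-learn's gradient-based training of an MLP stands in for the monarch butterfly optimisation of the network, which is not available in standard libraries, and the dataset is synthetic rather than the lumbar spine or femoral neck data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a BMD dataset (e.g. lumbar spine measurements)
X, y = make_classification(n_samples=300, n_features=6, n_informative=4,
                           random_state=0)

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                                  random_state=0))

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(scores.mean(), scores.std())
```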

  1. Distributed optimisation problem with communication delay and external disturbance

    NASA Astrophysics Data System (ADS)

    Tran, Ngoc-Tu; Xiao, Jiang-Wen; Wang, Yan-Wu; Yang, Wu

    2017-12-01

    This paper investigates the distributed optimisation problem for multi-agent systems (MASs) with the simultaneous presence of an external disturbance and a communication delay. To solve this problem, a two-step design scheme is introduced. In the first step, based on the internal model principle, an internal model term is constructed to compensate for the disturbance asymptotically. In the second step, a distributed optimisation algorithm is designed to solve the distributed optimisation problem for the MASs with the simultaneous presence of disturbance and communication delay. Moreover, in the proposed algorithm, each agent interacts with its neighbours through the connected topology and the delay occurs during the information exchange. By utilising a Lyapunov-Krasovskii functional, delay-dependent conditions are derived for both slowly and fast time-varying delays to ensure the convergence of the algorithm to the optimal solution of the optimisation problem. Several numerical simulation examples are provided to illustrate the effectiveness of the theoretical results.
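
    A much-simplified illustration of distributed optimisation over a connected multi-agent system is given below: each agent holds a private quadratic cost and alternates a consensus (mixing) step with a local gradient step, a classic distributed gradient descent. The ring topology, costs and step-size rule are assumptions; the internal-model disturbance compensation and the communication delay analysed in the paper are not modelled.

```python
import numpy as np

# Ring topology for 4 agents; W is a doubly stochastic mixing matrix.
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])

a = np.array([1.0, 3.0, 5.0, 7.0])      # agent i's private cost: (x - a_i)^2
x = np.zeros(4)                          # local estimates of the global optimum

for k in range(200):
    grad = 2.0 * (x - a)                 # local gradients
    step = 1.0 / (k + 10)                # diminishing step size
    x = W @ x - step * grad              # consensus mixing + gradient descent

print(x)   # all agents approach the minimiser of sum_i (x - a_i)^2, i.e. 4.0
```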

  2. Medicines optimisation: priorities and challenges.

    PubMed

    Kaufman, Gerri

    2016-03-23

    Medicines optimisation is promoted in a guideline published in 2015 by the National Institute for Health and Care Excellence. Four guiding principles underpin medicines optimisation: aim to understand the patient's experience; ensure evidence-based choice of medicines; ensure medicines use is as safe as possible; and make medicines optimisation part of routine practice. Understanding the patient experience is important to improve adherence to medication regimens. This involves communication, shared decision making and respect for patient preferences. Evidence-based choice of medicines is important for clinical and cost effectiveness. Systems and processes for the reporting of medicines-related safety incidents have to be improved if medicines use is to be as safe as possible. Ensuring safe practice in medicines use when patients are transferred between organisations, and managing the complexities of polypharmacy are imperative. A medicines use review can help to ensure that medicines optimisation forms part of routine practice.

  3. Markovian queue optimisation analysis with an unreliable server subject to working breakdowns and impatient customers

    NASA Astrophysics Data System (ADS)

    Liou, Cheng-Dar

    2015-09-01

    This study investigates an infinite-capacity Markovian queue with a single unreliable service station, in which customers may balk (not enter) and renege (leave the queue after entering). The unreliable service station may undergo working breakdowns even if no customers are in the system. The matrix-analytic method is used to compute the steady-state probabilities for the number of customers, the rate matrix and the stability condition of the system. A single-objective model for cost and a bi-objective model for cost and expected waiting time are derived for the system to fit practical applications. The particle swarm optimisation algorithm is implemented to find the optimal combinations of parameters in the pursuit of minimum cost. Two different approaches for identifying the Pareto optimal set are used and compared: the epsilon-constraint method and the non-dominated sorting genetic algorithm. The compared results support using the traditional epsilon-constraint optimisation approach, which is computationally faster and permits a direct sensitivity analysis of the solution under constraint or parameter perturbation. The Pareto front and the non-dominated solution set are obtained and illustrated. Decision makers can use these to improve their decision-making quality.
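
    The cost-model idea can be illustrated, in a heavily simplified form, by optimising the service rate of a plain M/M/1 queue with a service cost and a holding cost; working breakdowns, balking, reneging, the matrix-analytic solution and the PSO/epsilon-constraint machinery of the study are all omitted, and the rates and costs below are assumptions.

```python
from scipy.optimize import minimize_scalar

lam = 4.0          # arrival rate (customers per unit time)
c_service = 2.0    # cost per unit of service rate
c_wait = 10.0      # cost per customer per unit time spent in the system

def expected_cost(mu):
    if mu <= lam:                      # unstable queue: infinite cost
        return float("inf")
    L = lam / (mu - lam)               # mean number in system for M/M/1
    return c_service * mu + c_wait * L

res = minimize_scalar(expected_cost, bounds=(lam + 1e-6, 50.0), method="bounded")
print(res.x, res.fun)                  # cost-optimal service rate and its cost
```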

  4. A fast and efficient segmentation scheme for cell microscopic image.

    PubMed

    Lebrun, G; Charrier, C; Lezoray, O; Meurie, C; Cardot, H

    2007-04-27

    Microscopic cellular image segmentation schemes must be efficient for reliable analysis and fast enough to process huge quantities of images. Recent studies have focused on improving segmentation quality. Several segmentation schemes have good quality, but their processing time is too expensive to deal with a great number of images per day. For segmentation schemes based on pixel classification, the classifier design is crucial since it is the component which requires most of the processing time necessary to segment an image. The main contribution of this work is focused on how to reduce the complexity of decision functions produced by support vector machines (SVM) while preserving recognition rate. Vector quantization is used in order to reduce the inherent redundancy present in huge pixel databases (i.e. images with expert pixel segmentation). Hybrid color space design is also used in order to improve the data set size reduction rate and the recognition rate. A new decision function quality criterion is defined to select a good trade-off between recognition rate and processing time of the pixel decision function. The first results of this study show that fast and efficient pixel classification with SVM is possible. Moreover, posterior class pixel probability estimation is easy to compute with the Platt method. A new segmentation scheme using probabilistic pixel classification has then been developed. This scheme has several free parameters whose automatic selection must be dealt with, but criteria for evaluating segmentation quality are not well adapted to cell segmentation, especially when comparison with expert pixel segmentation must be achieved. Another important contribution of this paper is the definition of a new quality criterion for the evaluation of cell segmentation. The results presented here show that selection of the free parameters of the segmentation scheme by optimisation of the new cell segmentation quality criterion produces efficient cell segmentation.
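
    A minimal sketch of probabilistic pixel classification in this spirit is shown below: an RBF SVM trained on colour features of labelled pixels, with Platt-scaled posterior probabilities as exposed by scikit-learn. The synthetic pixel data, features and class labels are assumptions; the vector quantization of the pixel database and the hybrid colour space design are not reproduced.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic stand-in for expert-labelled pixels: RGB features, 2 classes
# (0 = background, 1 = cell).
bg = rng.normal([200, 180, 190], 15, size=(500, 3))
cell = rng.normal([120, 80, 150], 15, size=(500, 3))
X = np.vstack([bg, cell])
y = np.array([0] * 500 + [1] * 500)

# probability=True enables Platt scaling of the SVM decision values
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, probability=True))
clf.fit(X, y)

new_pixels = np.array([[130.0, 85.0, 145.0], [205.0, 175.0, 195.0]])
print(clf.predict_proba(new_pixels))   # posterior class probabilities per pixel
```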

  5. Understanding Interactions between Hydrogeologic Factors, Design Variables, and System Operations for Multi-Well Aquifer Storage and Recovery Systems

    NASA Astrophysics Data System (ADS)

    Majumdar, S.; Miller, G. R.; Smith, B.; Sheng, Z.

    2017-12-01

    An Aquifer Storage and Recovery (ASR) system is a powerful tool for managing our present and future freshwater supplies. It involves injecting excess water into an aquifer, storing it, and later recovering it when needed, such as during a drought or peak demand periods. Multi-well ASR systems, such as the Twin Oaks Facility in San Antonio, consist of a group of wells that are used for simultaneous injection and extraction of stored water. While significant research has gone into examining the effects of hydraulic and operational factors on recovery efficiency for a single ASR well, little is known about how multi-well systems respond to these factors and how energy use may vary. In this study, we created a synthetic ASR model in MODFLOW to test a range of multi-well scenarios. We altered design parameters (well spacing, pumping capacity, well configuration), hydrogeologic factors (regional hydraulic gradient, hydraulic conductivity, dispersivity), and operational variables (injection and withdrawal durations; pumping rates) to determine the response of the system across a realistic range of interrelated parameters. We then computed energy use for each simulation, based on the hydraulic head in each well and standard pump factors, as well as recovery efficiency, based on the tracer concentration in water recovered from the wells. The tracer concentration in the groundwater was determined using MT3DMS. We observed that the recovery and energy efficiencies for the multi-well ASR system decrease with increasing well spacing and hydraulic gradient. When longitudinal dispersivity was doubled, the recovery and energy efficiencies were nearly halved. Another finding from our study suggests that nearly 90% of the water can be recovered after two successive cycles of operation. The results will be used to develop generalised operational guidelines for meeting freshwater demands and to optimise the energy consumed during pumping.

  6. Multidisciplinary Design Optimisation (MDO) Methods: Their Synergy with Computer Technology in the Design Process

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1999-01-01

    The paper identifies speed, agility, human interface, generation of sensitivity information, task decomposition, and data transmission (including storage) as important attributes for a computer environment to have in order to support engineering design effectively. It is argued that when examined in terms of these attributes the presently available environment can be shown to be inadequate. A radical improvement is needed, and it may be achieved by combining new methods that have recently emerged from multidisciplinary design optimisation (MDO) with massively parallel processing computer technology. The caveat is that, for successful use of that technology in engineering computing, new paradigms for computing will have to be developed - specifically, innovative algorithms that are intrinsically parallel so that their performance scales up linearly with the number of processors. It may be speculated that the idea of simulating a complex behaviour by interaction of a large number of very simple models may be an inspiration for the above algorithms; the cellular automata are an example. Because of the long lead time needed to develop and mature new paradigms, development should begin now, even though the widespread availability of massively parallel processing is still a few years away.

  7. Population Fisher information matrix and optimal design of discrete data responses in population pharmacodynamic experiments.

    PubMed

    Ogungbenro, Kayode; Aarons, Leon

    2011-08-01

    In recent years, interest in the application of experimental design theory to population pharmacokinetic (PK) and pharmacodynamic (PD) experiments has increased. The aim is to improve the efficiency and the precision with which parameters are estimated during data analysis and, sometimes, to increase the power and reduce the sample size required for hypothesis testing. The population Fisher information matrix (PFIM) has been described for uniresponse and multiresponse population PK experiments for design evaluation and optimisation. Despite these developments and the availability of tools for optimal design of population PK and PD experiments, much of the effort has been focused on repeated continuous variable measurements, with less work being done on repeated discrete-type measurements. Discrete data arise mainly in PDs, e.g. ordinal, nominal, dichotomous or count measurements. This paper implements expressions for the PFIM for repeated ordinal, dichotomous and count measurements based on analysis by a mixed-effects modelling technique. Three simulation studies were used to investigate the performance of the expressions. Example 1 is based on repeated dichotomous measurements, Example 2 is based on repeated count measurements and Example 3 is based on repeated ordinal measurements. Data simulated in MATLAB were analysed using NONMEM (Laplace method) and the glmmML package in R (Laplace and adaptive Gauss-Hermite quadrature methods). The results obtained for Examples 1 and 2 showed good agreement between the relative standard errors obtained using the PFIM and simulations. The results obtained for Example 3 showed the importance of sampling at the most informative time points. Implementation of these expressions will provide the opportunity for efficient design of population PD experiments that involve discrete-type data, through design evaluation and optimisation.
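
    As a hedged illustration of the fixed-effects building block only, the snippet below computes the expected Fisher information for repeated count measurements under a Poisson model with a log link (X^T W X, with W the diagonal matrix of means) and converts it to relative standard errors. The sampling times and parameter values are assumptions, and the full population FIM with between-subject random effects derived in the paper is considerably more involved.

```python
import numpy as np

# Design: counts observed at a set of sampling times for one subject,
# model log(lambda_j) = b0 + b1 * t_j (illustrative time effect).
times = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
X = np.column_stack([np.ones_like(times), times])   # design matrix
beta = np.array([1.0, -0.2])                        # assumed parameter values

lam = np.exp(X @ beta)                              # Poisson means
W = np.diag(lam)                                    # variance = mean for Poisson
FIM = X.T @ W @ X                                   # expected Fisher information

cov = np.linalg.inv(FIM)                            # asymptotic covariance
rse = 100 * np.sqrt(np.diag(cov)) / np.abs(beta)    # relative standard errors (%)
print(rse)
```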

  8. Optimisation of nano-silica modified self-compacting high-volume fly ash mortar

    NASA Astrophysics Data System (ADS)

    Achara, Bitrus Emmanuel; Mohammed, Bashar S.; Fadhil Nuruddin, Muhd

    2017-05-01

    The effects of nano-silica amount and superplasticizer (SP) dosage on the compressive strength, porosity and slump flow of high-volume fly ash self-consolidating mortar were investigated. A multiobjective optimisation technique using Design-Expert software was applied to obtain a solution based on a desirability function that simultaneously optimises the variables and the responses. A desirability value of 0.811 gives the optimised solution. The experimental and predicted results showed minimal errors in all the measured responses.
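
    The desirability-function idea behind such multiobjective optimisation can be sketched as follows: each response is mapped to a [0, 1] desirability and the individual desirabilities are combined as a geometric mean (the Derringer-Suich approach implemented in packages such as Design-Expert). The response values, targets and bounds below are illustrative assumptions, not the study's data.

```python
import numpy as np

def d_maximise(y, low, high, weight=1.0):
    """Desirability for a 'larger is better' response."""
    d = np.clip((y - low) / (high - low), 0.0, 1.0)
    return d ** weight

def d_minimise(y, low, high, weight=1.0):
    """Desirability for a 'smaller is better' response."""
    d = np.clip((high - y) / (high - low), 0.0, 1.0)
    return d ** weight

def overall_desirability(ds):
    """Geometric mean of the individual desirabilities."""
    ds = np.asarray(ds, float)
    return float(np.prod(ds) ** (1.0 / len(ds)))

# Illustrative responses for one candidate mixture: strength (MPa), porosity (%)
# and slump flow (mm), with made-up acceptable ranges.
D = overall_desirability([
    d_maximise(62.0, low=40.0, high=70.0),     # compressive strength
    d_minimise(9.0, low=5.0, high=15.0),       # porosity
    d_maximise(255.0, low=200.0, high=280.0),  # slump flow
])
print(D)
```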

  9. Development of the hard and soft constraints based optimisation model for unit sizing of the hybrid renewable energy system designed for microgrid applications

    NASA Astrophysics Data System (ADS)

    Sundaramoorthy, Kumaravel

    2017-02-01

    Hybrid energy system (HES) based electricity generation has become a more attractive solution for rural electrification nowadays. Economically feasible and technically reliable HESs are solidly based on an optimisation stage. This article discusses an optimal unit sizing model with the objective function of minimising the total cost of the HES. Three typical rural sites from the southern part of India have been selected for the application of the developed optimisation methodology. Feasibility studies and sensitivity analysis on the optimal HES are discussed elaborately in this article. A comparison has been carried out with the Hybrid Optimization Model for Electric Renewable optimisation model for the three sites. The optimal HES is found to have a lower total net present cost and cost of energy compared with the existing method.

  10. Towards the optimisation and adaptation of dry powder inhalers.

    PubMed

    Cui, Y; Schmalfuß, S; Zellnitz, S; Sommerfeld, M; Urbanetz, N

    2014-08-15

    Pulmonary drug delivery by dry powder inhalers is becoming more and more popular. Such an inhalation device must ensure that during the inhalation process the drug powder is detached from the carrier due to fluid flow stresses. The goal of the project is the development of a drug powder detachment model to be used in numerical computations (CFD, computational fluid dynamics) of the fluid flow and carrier particle motion through the inhaler and the resulting efficiency of drug delivery. This programme will be the basis for the optimisation of inhaler geometry and dry powder inhaler formulation. For this purpose a multi-scale approach is adopted. First, the flow field through the inhaler is numerically calculated with OpenFOAM(®) and the flow stresses experienced by the carrier particles are recorded. This information is used for micro-scale simulations using the Lattice-Boltzmann method, where only one carrier particle covered with drug powder is placed in a cubic flow domain and exposed to the relevant flow situations, e.g. plug and shear flow with different Reynolds numbers. Therefrom the fluid forces on the drug particles are obtained. In order to allow the determination of the drug particle detachment possibility by lift-off, sliding or rolling, measurements by AFM (atomic force microscopy) were also conducted for different carrier particle surface structures. The contact properties, such as van der Waals force, friction coefficient and adhesion surface energy, were used to determine, from a force or moment balance (fluid forces versus contact forces), the detachment probability by the three mechanisms as a function of the carrier particle Reynolds number. These results will be used for deriving the drug powder detachment model. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Multi-objective optimisation of aircraft flight trajectories in the ATM and avionics context

    NASA Astrophysics Data System (ADS)

    Gardi, Alessandro; Sabatini, Roberto; Ramasamy, Subramanian

    2016-05-01

    The continuous increase of air transport demand worldwide and the push for a more economically viable and environmentally sustainable aviation are driving significant evolutions of aircraft, airspace and airport systems design and operations. Although extensive research has been performed on the optimisation of aircraft trajectories and very efficient algorithms were widely adopted for the optimisation of vertical flight profiles, it is only in the last few years that higher levels of automation were proposed for integrated flight planning and re-routing functionalities of innovative Communication Navigation and Surveillance/Air Traffic Management (CNS/ATM) and Avionics (CNS+A) systems. In this context, the implementation of additional environmental targets and of multiple operational constraints introduces the need to efficiently deal with multiple objectives as part of the trajectory optimisation algorithm. This article provides a comprehensive review of Multi-Objective Trajectory Optimisation (MOTO) techniques for transport aircraft flight operations, with a special focus on the recent advances introduced in the CNS+A research context. In the first section, a brief introduction is given, together with an overview of the main international research initiatives where this topic has been studied, and the problem statement is provided. The second section introduces the mathematical formulation and the third section reviews the numerical solution techniques, including discretisation and optimisation methods for the specific problem formulated. The fourth section summarises the strategies to articulate the preferences and to select optimal trajectories when multiple conflicting objectives are introduced. The fifth section introduces a number of models defining the optimality criteria and constraints typically adopted in MOTO studies, including fuel consumption, air pollutant and noise emissions, operational costs, condensation trails, airspace and airport operations. A brief overview of atmospheric and weather modelling is also included. Key equations describing the optimality criteria are presented, with a focus on the latest advancements in the respective application areas. In the sixth section, a number of MOTO implementations in the CNS+A systems context are mentioned with relevant simulation case studies addressing different operational tasks. The final section draws some conclusions and outlines guidelines for future research on MOTO and associated CNS+A system implementations.

  12. Ionic liquid-based ultrasonic/microwave-assisted extraction combined with UPLC-MS-MS for the determination of tannins in Galla chinensis.

    PubMed

    Lu, Chunxia; Wang, Hongxin; Lv, Wenping; Ma, Chaoyang; Lou, Zaixiang; Xie, Jun; Liu, Bo

    2012-01-01

    Ionic liquid was used as the extraction solvent and applied to the extraction of tannins from Galla chinensis in the simultaneous ultrasonic- and microwave-assisted extraction (UMAE) technique. Several parameters of UMAE were optimised, and the results were compared with those of conventional extraction techniques. Under optimal conditions, the content of tannins was 630.2 ± 12.1 mg g⁻¹. Compared with conventional heat-reflux extraction, maceration extraction, and regular ultrasound- and microwave-assisted extraction, the proposed approach exhibited higher efficiency (enhanced by 11.7-22.0%) and shorter extraction time (from 6 h to 1 min). The tannins were then identified by ultraperformance liquid chromatography tandem mass spectrometry. This study suggests that ionic liquid-based UMAE is an efficient, rapid, simple and green sample preparation technique.

  13. Power generation based on biomass by combined fermentation and gasification--a new concept derived from experiments and modelling.

    PubMed

    Methling, Torsten; Armbrust, Nina; Haitz, Thilo; Speidel, Michael; Poboss, Norman; Braun-Unkhoff, Marina; Dieter, Heiko; Kempter-Regel, Brigitte; Kraaij, Gerard; Schliessmann, Ursula; Sterr, Yasemin; Wörner, Antje; Hirth, Thomas; Riedel, Uwe; Scheffknecht, Günter

    2014-10-01

    A new concept is proposed for combined fermentation (two-stage high-load fermenter) and gasification (two-stage fluidised bed gasifier with CO2 separation) of sewage sludge and wood, and the subsequent utilisation of the biogenic gases in a hybrid power plant consisting of a solid oxide fuel cell and a gas turbine. The development and optimisation of the important processes of the new concept (fermentation, gasification, utilisation) are reported in detail. For the gas production, process parameters were experimentally and numerically investigated to achieve high conversion rates of biomass. For the product gas utilisation, important combustion properties (laminar flame speed, ignition delay time) were analysed numerically to evaluate machinery operation (reliability, emissions). Furthermore, the coupling of the processes was numerically analysed and optimised by means of integration of heat and mass flows. The high simulated electrical efficiency of 42%, including the conversion of raw biomass, is promising for future power generation from biomass. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Implementation and comparative analysis of the optimisations produced by evolutionary algorithms for the parameter extraction of PSP MOSFET model

    NASA Astrophysics Data System (ADS)

    Hadia, Sarman K.; Thakker, R. A.; Bhatt, Kirit R.

    2016-05-01

    The study proposes an application of evolutionary algorithms, specifically an artificial bee colony (ABC), a variant ABC and particle swarm optimisation (PSO), to extract the parameters of a metal oxide semiconductor field effect transistor (MOSFET) model. These algorithms are applied to the MOSFET parameter extraction problem using the Pennsylvania surface potential (PSP) model. MOSFET parameter extraction procedures involve reducing the error between measured and modelled data. This study shows that the ABC algorithm optimises the parameter values based on the intelligent activities of honey bee swarms. Some modifications have also been applied to the basic ABC algorithm. Particle swarm optimisation is a population-based stochastic optimisation method that is based on bird flocking activities. The performances of these algorithms are compared with respect to the quality of the solutions. The simulation results of this study show that the PSO algorithm performs better than the variant ABC and basic ABC algorithms for the parameter extraction of the MOSFET model; the implementation of the ABC algorithm is also shown to be simpler than that of the PSO algorithm.
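
    The extraction problem itself can be illustrated with a textbook square-law MOSFET model fitted to synthetic measured data by nonlinear least squares; the PSP surface-potential model and the ABC/PSO optimisers compared in the study are not reproduced, and the generating parameter values are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def ids_square_law(params, vgs):
    """Saturation-region square-law drain current: Ids = k/2 * (Vgs - Vth)^2."""
    k, vth = params
    v_ov = np.maximum(vgs - vth, 0.0)
    return 0.5 * k * v_ov ** 2

# Synthetic "measured" data generated with k = 2e-3 A/V^2, Vth = 0.7 V plus noise
rng = np.random.default_rng(0)
vgs = np.linspace(0.8, 3.0, 25)
i_meas = ids_square_law([2e-3, 0.7], vgs) + rng.normal(0, 2e-5, vgs.size)

def residuals(params):
    return ids_square_law(params, vgs) - i_meas

fit = least_squares(residuals, x0=[1e-3, 0.5], bounds=([1e-5, 0.0], [1e-1, 2.0]))
print(fit.x)   # extracted (k, Vth), close to the generating values
```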

  15. Reference voltage calculation method based on zero-sequence component optimisation for a regional compensation DVR

    NASA Astrophysics Data System (ADS)

    Jian, Le; Cao, Wang; Jintao, Yang; Yinge, Wang

    2018-04-01

    This paper describes the design of a dynamic voltage restorer (DVR) that can simultaneously protect several sensitive loads from voltage sags in a region of an MV distribution network. A novel reference voltage calculation method based on zero-sequence voltage optimisation is proposed for this DVR to optimise cost-effectiveness in compensation of voltage sags with different characteristics in an ungrounded neutral system. Based on a detailed analysis of the characteristics of voltage sags caused by different types of faults and the effect of the wiring mode of the transformer on these characteristics, the optimisation target of the reference voltage calculation is presented with several constraints. The reference voltages under all types of voltage sags are calculated by optimising the zero-sequence component, which can reduce the degree of swell in the phase-to-ground voltage after compensation to the maximum extent and can improve the symmetry degree of the output voltages of the DVR, thereby effectively increasing the compensation ability. The validity and effectiveness of the proposed method are verified by simulation and experimental results.
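
    The zero-sequence quantity being optimised can be made concrete with the symmetrical-component decomposition of a three-phase sag, sketched below; the sag magnitudes are illustrative assumptions, and the actual reference-voltage optimisation and transformer wiring analysis of the paper are not reproduced.

```python
import numpy as np

a = np.exp(2j * np.pi / 3)                  # 120-degree rotation operator

def symmetrical_components(va, vb, vc):
    """Return (zero, positive, negative) sequence phasors."""
    A = np.array([[1, 1, 1],
                  [1, a, a ** 2],
                  [1, a ** 2, a]]) / 3.0
    return A @ np.array([va, vb, vc])

# Illustrative single-phase-to-ground sag: phase A dropped to 0.4 pu
va, vb, vc = 0.4 + 0j, 1.0 * a ** 2, 1.0 * a
v0, v1, v2 = symmetrical_components(va, vb, vc)
print(abs(v0), abs(v1), abs(v2))

# A candidate reference voltage could then adjust the zero-sequence offset,
# e.g. stripping it entirely as one extreme of the optimisation range.
v_ref = np.array([va, vb, vc]) - v0
print(np.abs(v_ref))
```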

  16. Temperature effects on tunable cw Alexandrite lasers under diode end-pumping.

    PubMed

    Kerridge-Johns, William R; Damzen, Michael J

    2018-03-19

    Diode-pumped Alexandrite is a promising route to high power, efficient and inexpensive lasers with a broad (701 nm to 858 nm) gain bandwidth; however, there are challenges with its complex laser dynamics. We present an analytical model applied to experimental red-diode end-pumped Alexandrite lasers, which enabled a record 54% slope efficiency with an output power of 1.2 W. A record lowest lasing wavelength (714 nm) and a record tuning range (104 nm) were obtained by optimising the crystal temperature between 8 °C and 105 °C in the vibronic mode. The properties of Alexandrite and the analytical model were examined to understand and give general rules for optimising Alexandrite lasers, along with their fundamental efficiency limits. It was found that the lowest-threshold laser wavelength was not necessarily the most efficient, and that higher and lower temperatures were optimal for longer and shorter laser wavelengths, respectively. The ratio of pump excited-state to ground-state absorption was measured to decrease from 0.8 to 0.7 when changing the crystal temperature from 10 °C to 90 °C.

  17. Effectiveness of an implementation optimisation intervention aimed at increasing parent engagement in HENRY, a childhood obesity prevention programme - the Optimising Family Engagement in HENRY (OFTEN) trial: study protocol for a randomised controlled trial.

    PubMed

    Bryant, Maria; Burton, Wendy; Cundill, Bonnie; Farrin, Amanda J; Nixon, Jane; Stevens, June; Roberts, Kim; Foy, Robbie; Rutter, Harry; Hartley, Suzanne; Tubeuf, Sandy; Collinson, Michelle; Brown, Julia

    2017-01-24

    Family-based interventions to prevent childhood obesity depend upon parents' taking action to improve diet and other lifestyle behaviours in their families. Programmes that attract and retain high numbers of parents provide an enhanced opportunity to improve public health and are also likely to be more cost-effective than those that do not. We have developed a theory-informed optimisation intervention to promote parent engagement within an existing childhood obesity prevention group programme, HENRY (Health Exercise Nutrition for the Really Young). Here, we describe a proposal to evaluate the effectiveness of this optimisation intervention in regard to the engagement of parents and cost-effectiveness. The Optimising Family Engagement in HENRY (OFTEN) trial is a cluster randomised controlled trial being conducted across 24 local authorities (approximately 144 children's centres) which currently deliver HENRY programmes. The primary outcome will be parental enrolment and attendance at the HENRY programme, assessed using routinely collected process data. Cost-effectiveness will be presented in terms of primary outcomes using acceptability curves and through eliciting the willingness to pay for the optimisation from HENRY commissioners. Secondary outcomes include the longitudinal impact of the optimisation, parent-reported infant intake of fruits and vegetables (as a proxy to compliance) and other parent-reported family habits and lifestyle. This innovative trial will provide evidence on the implementation of a theory-informed optimisation intervention to promote parent engagement in HENRY, a community-based childhood obesity prevention programme. The findings will be generalisable to other interventions delivered to parents in other community-based environments. This research meets the expressed needs of commissioners, children's centres and parents to optimise the potential impact that HENRY has on obesity prevention. A subsequent cluster randomised controlled pilot trial is planned to determine the practicality of undertaking a definitive trial to robustly evaluate the effectiveness and cost-effectiveness of the optimised intervention on childhood obesity prevention. ClinicalTrials.gov identifier: NCT02675699 . Registered on 4 February 2016.

  18. Geomechanical Analysis of Underground Coal Gasification Reactor Cool Down for Subsequent CO2 Storage

    NASA Astrophysics Data System (ADS)

    Sarhosis, Vasilis; Yang, Dongmin; Kempka, Thomas; Sheng, Yong

    2013-04-01

    Underground coal gasification (UCG) is an efficient method for the conversion of conventionally unmineable coal resources into energy and feedstock. If the UCG process is combined with the subsequent storage of process CO2 in the former UCG reactors, a near-zero carbon emission energy source can be realised. This study aims to present the development of a computational model to simulate the cooling process of UCG reactors during abandonment, to decrease the initial high temperature of more than 400 °C to a level where extensive CO2 volume expansion due to temperature changes can be significantly reduced during CO2 injection. Furthermore, we predict the cool-down temperature conditions with and without water flushing. A state-of-the-art coupled thermal-mechanical model was developed using the finite element software ABAQUS to predict the cavity growth and the resulting surface subsidence. In addition, the multi-physics computational software COMSOL was employed to simulate the cavity cool-down process, which is of utmost relevance for CO2 storage in the former UCG reactors. For that purpose, we simulated fluid flow, thermal conduction and thermal convection processes between the fluid (water and CO2) and the solid represented by coal and surrounding rocks. Material properties for rocks and coal were obtained from extant literature sources and from geomechanical tests carried out on samples derived from a prospective demonstration site in Bulgaria. The analysis of results showed that the numerical models developed allowed for the determination of the UCG reactor growth, roof spalling, surface subsidence and heat propagation during the UCG process and the subsequent CO2 storage. It is anticipated that the results of this study can support optimisation of the preparation procedure for CO2 storage in former UCG reactors. The proposed scheme has been discussed but not yet validated by a coupled numerical analysis; if proved to be applicable, it could provide a significant optimisation of the UCG process by means of CO2 storage efficiency. The proposed coupled UCG-CCS scheme allows for meeting EU targets for greenhouse gas emissions and increases the yield from coal otherwise impossible to exploit.

  19. Usage of humic materials for formulation of stable microbial inoculants

    NASA Astrophysics Data System (ADS)

    Kydralieva, K. A.; Khudaibergenova, B. M.; Elchin, A. A.; Gorbunova, N. V.; Muratov, V. S.; Jorobekova, Sh. J.

    2009-04-01

    Some microbes have been domesticated for environmental service, for example in a variety of novel applications, including efforts to reduce environmental problems. For instance, antagonistic organisms can be used as biological control agents to reduce the use of chemical pesticides, or efficient degraders can be applied as bioprophylactics to minimise the spread of chemical pollutants. Microorganisms can also be used for the biological clean-up of polluted soil or as plant growth-promoting bacteria that stimulate nutrient uptake. Many microbial applications require large-scale cultivation of the organisms. The biomass production must then be followed by formulation steps to ensure long-term stability and convenient use. However, there remains a need to further develop knowledge on how to optimise fermentation of "non-conventional microorganisms" for environmental applications involving intact living cells. The goal of the present study is to develop fermentation and formulation techniques for thermolabile rhizobacteria isolates - Pseudomonas spp. with major biotechnical potential. Development of efficient and cost-effective media and process parameters giving high cell yields is an important priority. This also involves establishing fermentation parameters yielding cells well adapted to subsequent formulation procedures. Collectively, these strategies will deliver a high proportion of viable cells with good long-term survival. Our main efforts were focused on the development of more efficient drying techniques for microorganisms, particularly spray drying and fluidised-bed drying. The advantages of dry formulations are that storage and delivery costs are much lower than for liquid formulations and that long-term survival can be very high if initial packaging is carefully optimised. In order to improve and optimise formulations, various kinds of humics-based excipients have been added that have beneficial effects on the viability of the organisms and the storage stability of the product. It is known that humic substances can increase the resistance of live organisms to stress loads, in particular to chemical stress and to low and high temperature. Spray- and fluidised-bed drying and the addition of humate-based drying protectants were evaluated for the development of dry formulations of biocontrol and plant growth-promoting rhizobacteria. The drying protectants - humic acids and sodium humate - gave the highest initial survival rates and the most stable formulations, without significant losses of viability after storage for 1 month at 30 °C. As a result, the specific plant growth-promoting effect is retained. Thus, humic materials have an unfulfilled potential for biotechnology industries based on such applications. Acknowledgement: This research was supported by ISTC grant KR-993.2.

  20. Easy-Going On-Spectrometer Optimisation of Phase Modulated Homonuclear Decoupling Sequences in Solid-State NMR

    NASA Astrophysics Data System (ADS)

    Grimminck, Dennis L. A. G.; Vasa, Suresh K.; Meerts, W. Leo; Kentgens, P. M.

    2011-06-01

    A global optimisation scheme for phase-modulated proton homonuclear decoupling sequences in solid-state NMR is presented. Phase modulations, parameterised by DUMBO Fourier coefficients, were optimised using a Covariance Matrix Adaptation Evolution Strategy (CMA-ES) algorithm. Our method, denoted EASY-GOING homonuclear decoupling, starts with featureless spectra and optimises proton-proton decoupling during either proton or carbon signal detection. On the one hand, our solutions closely resemble (e)DUMBO for moderate sample spinning frequencies and medium radio-frequency (rf) field strengths. On the other hand, the EASY-GOING approach resulted in a superior solution, achieving significantly better resolved proton spectra at a very high rf field strength of 680 kHz. References: N. Hansen and A. Ostermeier, Evol. Comput. 9 (2001) 159-195; B. Elena, G. de Paepe and L. Emsley, Chem. Phys. Lett. 398 (2004) 532-538.
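
    As an illustration of the optimisation loop described above, the sketch below drives a CMA-ES search over a small set of Fourier phase-modulation coefficients using the open-source pycma package. The number of coefficients and the cost function spectral_quality() are placeholders: in the real EASY-GOING setup the cost would come from scoring a spectrum actually acquired on the spectrometer, which is not reproduced here.

      # Sketch (assumptions noted in comments): optimise DUMBO-style Fourier coefficients
      # with CMA-ES. Requires the 'cma' package (pip install cma).
      import numpy as np
      import cma

      N_COEFF = 12   # hypothetical number of Fourier coefficients (cosine + sine terms)

      def phase_profile(coeffs, n_points=64):
          """Build a phase-modulation waveform phi(t) from Fourier coefficients."""
          t = np.linspace(0.0, 1.0, n_points, endpoint=False)
          a, b = coeffs[:N_COEFF // 2], coeffs[N_COEFF // 2:]
          phi = np.zeros_like(t)
          for n, (an, bn) in enumerate(zip(a, b), start=1):
              phi += an * np.cos(2 * np.pi * n * t) + bn * np.sin(2 * np.pi * n * t)
          return phi

      def spectral_quality(coeffs):
          """Placeholder cost. In practice: programme phase_profile(coeffs) into the
          decoupling sequence, acquire a spectrum and return a scalar score
          (lower = better decoupling). A dummy target waveform stands in here."""
          target = np.sin(2 * np.pi * np.linspace(0.0, 1.0, 64, endpoint=False))
          return float(np.sum((phase_profile(coeffs) - target) ** 2))

      # Start from a featureless (all-zero) modulation, as the EASY-GOING approach does
      es = cma.CMAEvolutionStrategy(N_COEFF * [0.0], 0.5)
      while not es.stop():
          candidates = es.ask()                                   # sample coefficient vectors
          es.tell(candidates, [spectral_quality(c) for c in candidates])
      print("best coefficients:", np.round(es.result.xbest, 3))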

  1. Advantages of Task-Specific Multi-Objective Optimisation in Evolutionary Robotics.

    PubMed

    Trianni, Vito; López-Ibáñez, Manuel

    2015-01-01

    The application of multi-objective optimisation to evolutionary robotics is receiving increasing attention. A survey of the literature reveals the different possibilities it offers to improve the automatic design of efficient and adaptive robotic systems, and points to the successful demonstrations available for both task-specific and task-agnostic approaches (i.e., with or without reference to the specific design problem to be tackled). However, the advantages of multi-objective approaches over single-objective ones have not been clearly spelled out and experimentally demonstrated. This paper fills this gap for task-specific approaches: starting from well-known results in multi-objective optimisation, we discuss how to tackle commonly recognised problems in evolutionary robotics. In particular, we show that multi-objective optimisation (i) allows evolving a more varied set of behaviours by exploring multiple trade-offs of the objectives to optimise, (ii) supports the evolution of the desired behaviour through the introduction of objectives as proxies, (iii) avoids the premature convergence to local optima possibly introduced by multi-component fitness functions, and (iv) solves the bootstrap problem exploiting ancillary objectives to guide evolution in the early phases. We present an experimental demonstration of these benefits in three different case studies: maze navigation in a single robot domain, flocking in a swarm robotics context, and a strictly collaborative task in collective robotics.

  2. Optimisation of phenolic extraction from Averrhoa carambola pomace by response surface methodology and its microencapsulation by spray and freeze drying.

    PubMed

    Saikia, Sangeeta; Mahnot, Nikhil Kumar; Mahanta, Charu Lata

    2015-03-15

    Optimisation of the extraction of polyphenols from star fruit (Averrhoa carambola) pomace using response surface methodology was carried out. Two variables, temperature (°C) and ethanol concentration (%), each at 5 levels (-1.414, -1, 0, +1 and +1.414), were used to build the optimisation model using a central composite rotatable design, where -1.414 and +1.414 are the axial points, -1 and +1 the factorial points, and 0 the centre point of the design. A temperature of 40 °C and an ethanol concentration of 65% were the optimised conditions for the response variables of total phenolic content, ferric reducing antioxidant capacity and 2,2-diphenyl-1-picrylhydrazyl scavenging activity. The reverse-phase high-pressure liquid chromatography chromatogram of the polyphenol extract showed eight phenolic acids and ascorbic acid. The extract was then encapsulated with maltodextrin (⩽ DE 20) by spray and freeze drying methods at three different concentrations. The highest encapsulation efficiency was obtained in freeze-dried encapsulates (78-97%). The obtained optimised model can be used for polyphenol extraction from star fruit pomace, and the microencapsulates can be incorporated in different food systems to enhance their antioxidant properties. Copyright © 2014 Elsevier Ltd. All rights reserved.
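
    A minimal sketch of the design and model-fitting step described above follows: it generates the 13-run two-factor central composite rotatable design (4 factorial, 4 axial and 5 centre points) in coded units and fits a full quadratic response surface by least squares. The response values are placeholders, not the measured data from the study.

      # Sketch: two-factor central composite rotatable design (CCRD) and a quadratic
      # response-surface fit. Response values y are placeholders; in the study they would
      # be measured TPC, FRAP or DPPH values at each design point.
      import numpy as np

      alpha_ax = 1.414
      factorial = [(-1, -1), (1, -1), (-1, 1), (1, 1)]
      axial = [(-alpha_ax, 0), (alpha_ax, 0), (0, -alpha_ax), (0, alpha_ax)]
      centre = [(0, 0)] * 5
      design = np.array(factorial + axial + centre, dtype=float)   # coded levels

      # Placeholder responses (one per design point), e.g. total phenolic content
      y = np.array([55, 60, 62, 66, 50, 63, 58, 64, 70, 71, 69, 70, 72], dtype=float)

      x1, x2 = design[:, 0], design[:, 1]
      # Full quadratic model: y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
      X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
      coef, *_ = np.linalg.lstsq(X, y, rcond=None)
      print("fitted coefficients:", np.round(coef, 3))

      # Stationary point of the fitted surface (candidate optimum in coded units)
      b = coef[1:3]
      B = np.array([[coef[3], coef[5] / 2], [coef[5] / 2, coef[4]]])
      x_stat = -0.5 * np.linalg.solve(B, b)
      print("stationary point (coded levels):", np.round(x_stat, 3))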

  3. Development of a Compton suppressed gamma spectrometer using Monte Carlo techniques

    NASA Astrophysics Data System (ADS)

    Britton, Richard

    Gamma ray spectroscopy is routinely used to measure radiation in a number of situations. These include security applications, nuclear forensics studies, characterisation of radioactive sources, and environmental monitoring. For routine studies of environmental materials, the amount of radioactivity present is often very low, requiring spectroscopy systems which have to monitor the source for up to 7 days to achieve the required sensitivity. Recent developments in detector technology and data processing techniques have opened up the possibility of developing a highly efficient Compton-suppressed system that was previously the preserve of large experimental collaborations. The accessibility of Monte-Carlo toolkits such as GEANT4 also provides the opportunity to optimise these systems using computer simulations, greatly reducing the need for expensive (and inefficient) testing in the laboratory. This thesis details the development of such a Compton-suppressed, planar HPGe detector system. Using the GEANT4 toolkit in combination with the experimental facilities at AWE, Aldermaston (which include HPGe detection systems, scintillator-based detector systems, advanced shielding materials and gamma-gamma coincidence systems), simulations were built and validated to reproduce the detector response seen in the 'real-life' systems. This resulted in several improvements to the current system; for the shielding materials used, terrestrial and cosmic radiation were minimised, while reducing the X-ray fluorescence seen in the primary HPGe detector by an order of magnitude. With respect to the HPGe detector itself, an optimum thickness was identified for low energy (<300 keV) radiation, which maximised the efficiency for the energy range of interest while minimising the interaction probability for higher energy radionuclides (which are the primary cause of the Compton continuum that obscures lower energy decays). A combination of secondary detectors was then optimised to design a Compton suppression system for the primary detector, which could improve the performance of the current Compton suppression system by an order of magnitude. This equates to a reduction of the continuum by up to a factor of 240 for a nuclide such as Co-60, which is crucial for the detection of low-energy, low-activity emitters typically swamped by such a continuum. Finally, thoroughly optimised acquisition and analysis software has also been written to process data created by future high sensitivity gamma coincidence systems. This includes modules for the creation of histograms, coincidence matrices, and an ASCII to binary converter (for historical data) that has resulted in an analysis speed increase of up to 20000 times when compared to the software originally used for the extraction of coincidence information. Modules for low-energy time-walk correction and the removal of accidental coincidences are also included, which represent a capability that was not previously available.

  4. Structure optimisation by thermal cycling for the hydrophobic-polar lattice model of protein folding

    NASA Astrophysics Data System (ADS)

    Günther, Florian; Möbius, Arnulf; Schreiber, Michael

    2017-03-01

    The function of a protein depends strongly on its spatial structure. Therefore the transition from an unfolded stage to the functional fold is one of the most important problems in computational molecular biology. Since the corresponding free energy landscapes exhibit huge numbers of local minima, the search for the lowest-energy configurations is very demanding. Because of that, efficient heuristic algorithms are of high value. In the present work, we investigate whether and how the thermal cycling (TC) approach can be applied to the hydrophobic-polar (HP) lattice model of protein folding. Evaluating the efficiency of TC for a set of two- and three-dimensional examples, we compare the performance of this strategy with that of multi-start local search (MSLS) procedures and that of simulated annealing (SA). For this aim, we incorporated several simple but rather efficient modifications into the standard procedures: in particular, a strong improvement was achieved by also allowing energy conserving state modifications. Furthermore, the consideration of ensembles instead of single samples was found to greatly improve the efficiency of TC. In the framework of different benchmarks, for all considered HP sequences, we found TC to be far superior to SA, and to be faster than Wang-Landau sampling.
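
    For readers unfamiliar with the model, the following minimal sketch shows the 2D HP lattice energy function (one unit of negative energy per non-bonded H-H contact) together with a very simple simulated-annealing baseline of the kind the TC strategy is compared against. The move set and parameters are deliberately minimal and are not those used in the paper.

      # Sketch: 2D HP lattice model energy and a simple simulated-annealing baseline.
      import math
      import random

      def energy(coords, seq):
          """coords: list of (x, y) lattice positions; seq: string of 'H'/'P'.
          Energy = -1 for every non-bonded H-H contact on the square lattice."""
          occupied = {c: i for i, c in enumerate(coords)}
          e = 0
          for i, (x, y) in enumerate(coords):
              if seq[i] != 'H':
                  continue
              for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                  j = occupied.get(nb)
                  if j is not None and seq[j] == 'H' and j > i + 1:
                      e -= 1
          return e

      def propose_move(coords):
          """Reposition one residue: ends move next to their single neighbour,
          interior residues attempt a corner flip. Invalid states are rejected."""
          new = list(coords)
          i = random.randrange(len(coords))
          if i == 0 or i == len(coords) - 1:
              ax, ay = coords[1] if i == 0 else coords[-2]
              options = [(ax + 1, ay), (ax - 1, ay), (ax, ay + 1), (ax, ay - 1)]
          else:
              (x1, y1), (x2, y2) = coords[i - 1], coords[i + 1]
              options = [(x1, y2), (x2, y1)]          # corner-flip candidates
          new[i] = random.choice(options)
          ok = len(set(new)) == len(new)              # self-avoidance
          if ok and 0 < i < len(coords) - 1:          # keep chain connectivity
              ok = (abs(new[i][0] - coords[i - 1][0]) + abs(new[i][1] - coords[i - 1][1]) == 1
                    and abs(new[i][0] - coords[i + 1][0]) + abs(new[i][1] - coords[i + 1][1]) == 1)
          return new if ok else None

      def anneal(seq, steps=20000, t0=2.0, t_end=0.1):
          coords = [(i, 0) for i in range(len(seq))]  # fully extended start
          e = best_e = energy(coords, seq)
          for k in range(steps):
              t = t0 + (t_end - t0) * k / steps       # linear cooling schedule
              cand = propose_move(coords)
              if cand is None:
                  continue
              de = energy(cand, seq) - e
              if de <= 0 or random.random() < math.exp(-de / t):
                  coords, e = cand, e + de
                  best_e = min(best_e, e)
          return best_e

      random.seed(0)
      print("best energy found:", anneal("HPHPPHHPHPPHPHHPPHPH"))   # 20-mer HP test sequence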

  5. Reward-based spatial crowdsourcing with differential privacy preservation

    NASA Astrophysics Data System (ADS)

    Xiong, Ping; Zhang, Lefeng; Zhu, Tianqing

    2017-11-01

    In recent years, the popularity of mobile devices has transformed spatial crowdsourcing (SC) into a novel mode for performing complicated projects. Workers can perform tasks at specified locations in return for rewards offered by employers. Existing methods ensure the efficiency of their systems by submitting the workers' exact locations to a centralised server for task assignment, which can lead to privacy violations. Thus, implementing crowdsourcing applications while preserving the privacy of workers' locations is a key issue that needs to be tackled. We propose a reward-based SC method that achieves acceptable utility, as measured by task assignment success rates, while efficiently preserving privacy. A differential privacy model ensures a rigorous privacy guarantee, and Laplace noise is introduced to protect workers' exact locations. We then present a reward allocation mechanism that adjusts each piece of the reward for a task using the distribution of the workers' locations. Through experimental results, we demonstrate that this optimised-reward method is efficient for SC applications.
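
    The Laplace-noise step can be illustrated with a short sketch: each worker perturbs both coordinates with Laplace noise before reporting them, and tasks are then assigned using the reported locations. The epsilon value, the per-coordinate noise model and the greedy assignment below are illustrative assumptions; the paper's exact mechanism and reward-adjustment rule are not reproduced.

      # Sketch: perturbing worker locations with Laplace noise before task assignment.
      import numpy as np

      def perturb_location(lat, lon, epsilon, sensitivity=0.01):
          """Add independent Laplace noise to each coordinate. 'sensitivity' is the
          assumed maximum per-coordinate change (in degrees) being hidden."""
          scale = sensitivity / epsilon
          return lat + np.random.laplace(0.0, scale), lon + np.random.laplace(0.0, scale)

      def assign_tasks(workers, tasks, epsilon=0.5, radius=0.05):
          """Greedy assignment of each task to the nearest *reported* (noisy) worker."""
          reported = [perturb_location(lat, lon, epsilon) for lat, lon in workers]
          assignment = {}
          for t_id, (t_lat, t_lon) in enumerate(tasks):
              dists = [np.hypot(t_lat - la, t_lon - lo) for la, lo in reported]
              w = int(np.argmin(dists))
              if dists[w] <= radius:
                  assignment[t_id] = w
          return assignment

      workers = [(30.52, 114.31), (30.55, 114.28), (30.49, 114.35)]   # invented coordinates
      tasks = [(30.53, 114.30), (30.50, 114.34)]
      print(assign_tasks(workers, tasks))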

  6. IEEE 1982. Proceedings of the international conference on cybernetics and society

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1982-01-01

    The following topics were dealt with: knowledge-based systems; risk analysis; man-machine interactions; human information processing; metaphor, analogy and problem-solving; manual control modelling; transportation systems; simulation; adaptive and learning systems; biocybernetics; cybernetics; mathematical programming; robotics; decision support systems; analysis, design and validation of models; computer vision; systems science; energy systems; environmental modelling and policy; pattern recognition; nuclear warfare; technological forecasting; artificial intelligence; the Turin shroud; optimisation; workloads. Abstracts of individual papers can be found under the relevant classification codes in this or future issues.

  7. OptiPhy, a technical-economic optimisation model for improving the management of plant protection practices in agriculture: a decision-support tool for controlling the toxicity risks related to pesticides.

    PubMed

    Mghirbi, Oussama; LE Grusse, Philippe; Fabre, Jacques; Mandart, Elisabeth; Bord, Jean-Paul

    2017-03-01

    The health, environmental and socio-economic issues related to the massive use of plant protection products (PPPs) are a concern for all the stakeholders involved in the agricultural sector. These stakeholders, including farmers and territorial actors, have expressed a need for decision-support tools for the management of diffuse pollution related to plant protection practices and their impacts. To meet the needs expressed by the public authorities and the territorial actors for such decision-support tools, we have developed a technical-economic model, "OptiPhy", for risk mitigation based on indicators of pesticide toxicity risk to applicator health (IRSA) and to the environment (IRTE), under the constraint of suitable economic outcomes. This technical-economic optimisation model is based on linear programming techniques and offers various scenarios to help the different actors in choosing plant protection products, depending on their different levels of constraints and aspirations. The health and environmental risk indicators can be broken down into sub-indicators so that management can be tailored to the context. This model for technical-economic optimisation and management of plant protection practices can analyse scenarios for the reduction of pesticide-related risks by proposing combinations of substitution PPPs, according to criteria of efficiency, economic performance and vulnerability of the natural environment. The results of the scenarios obtained on real crop management sequences (ITKs) in different cropping systems show that it is possible to reduce the PPP pressure (treatment frequency index, TFI) and reduce toxicity risks to applicator health (IRSA) and to the environment (IRTE) by up to approximately 50%.
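
    As an illustration of the kind of linear programme such a model solves, the toy sketch below chooses application rates of three hypothetical substitute products to minimise a toxicity-risk score subject to an efficacy requirement and a cost ceiling. Every coefficient is invented, and the real OptiPhy indicators (IRSA, IRTE, TFI) are not reproduced.

      # Sketch: a toy linear programme in the spirit of OptiPhy, using scipy.optimize.linprog.
      # Decision variables x_i = number of applications of each (hypothetical) substitute product.
      from scipy.optimize import linprog

      risk = [8.0, 3.0, 5.0]        # toxicity-risk score per application (to minimise)
      efficacy = [1.0, 0.6, 0.8]    # pest-control efficacy per application
      cost = [12.0, 20.0, 15.0]     # cost per application (EUR)

      # Minimise total risk subject to:
      #   efficacy . x >= 3.0   (required protection level)
      #   cost . x     <= 80.0  (economic constraint)
      A_ub = [[-e for e in efficacy],   # -efficacy . x <= -3.0  encodes efficacy . x >= 3.0
              cost]
      b_ub = [-3.0, 80.0]

      res = linprog(c=risk, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 5)] * 3, method="highs")
      print("applications per product:", res.x, "minimised total risk:", res.fun)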

  8. Topology optimisation of micro fluidic mixers considering fluid-structure interactions with a coupled Lattice Boltzmann algorithm

    NASA Astrophysics Data System (ADS)

    Munk, David J.; Kipouros, Timoleon; Vio, Gareth A.; Steven, Grant P.; Parks, Geoffrey T.

    2017-11-01

    Recently, the study of micro fluidic devices has gained much interest in various fields from biology to engineering. In the constant development cycle, the need to optimise the topology of the interior of these devices, where there are two or more optimality criteria, is always present. In this work, twin physical situations, whereby optimal fluid mixing in the form of vorticity maximisation is accompanied by the requirement that the casing in which the mixing takes place has the best structural performance in terms of the greatest specific stiffness, are considered. In the steady state of mixing this also means that the stresses in the casing are as uniform as possible, thus giving a desired operating life with minimum weight. The ultimate aim of this research is to couple two key disciplines, fluids and structures, into a topology optimisation framework, which shows fast convergence for multidisciplinary optimisation problems. This is achieved by developing a bi-directional evolutionary structural optimisation algorithm that is directly coupled to the Lattice Boltzmann method, used for simulating the flow in the micro fluidic device, for the objectives of minimum compliance and maximum vorticity. The needs for the exploration of larger design spaces and to produce innovative designs make meta-heuristic algorithms, such as genetic algorithms, particle swarms and Tabu Searches, less efficient for this task. The multidisciplinary topology optimisation framework presented in this article is shown to increase the stiffness of the structure from the datum case and produce physically acceptable designs. Furthermore, the topology optimisation method outperforms a Tabu Search algorithm in designing the baffle to maximise the mixing of the two fluids.

  9. Performance Analysis and Discussion on the Thermoelectric Element Footprint for PV-TE Maximum Power Generation

    NASA Astrophysics Data System (ADS)

    Li, Guiqiang; Zhao, Xudong; Jin, Yi; Chen, Xiao; Ji, Jie; Shittu, Samson

    2018-06-01

    Geometrical optimisation is a valuable way to improve the efficiency of a thermoelectric element (TE). In a hybrid photovoltaic-thermoelectric (PV-TE) system, the photovoltaic (PV) and thermoelectric (TE) components have a relatively complex relationship; their individual effects mean that geometrical optimisation of the TE element alone may not be sufficient to optimise the entire PV-TE hybrid system. In this paper, we introduce a parametric optimisation of the geometry of the thermoelectric element footprint for a PV-TE system. A uni-couple TE model was built for the PV-TE using the finite element method and temperature-dependent thermoelectric material properties. Two types of PV cells were investigated, and the performance of the PV-TE with different lengths of TE elements and different footprint areas was analysed. The outcome showed that, regardless of the TE element length and footprint area, the maximum power output occurs when the ratio of n-type to p-type footprint areas satisfies A_n/A_p = 1. This finding is useful, as it provides a reference whenever PV-TE optimisation is investigated.

  10. Ambient occlusion - A powerful algorithm to segment shell and skeletal intrapores in computed tomography data

    NASA Astrophysics Data System (ADS)

    Titschack, J.; Baum, D.; Matsuyama, K.; Boos, K.; Färber, C.; Kahl, W.-A.; Ehrig, K.; Meinel, D.; Soriano, C.; Stock, S. R.

    2018-06-01

    During the last decades, X-ray (micro-)computed tomography has gained increasing attention for the description of porous skeletal and shell structures of various organism groups. However, their quantitative analysis is often hampered by the difficulty to discriminate cavities and pores within the object from the surrounding region. Herein, we test the ambient occlusion (AO) algorithm and newly implemented optimisations for the segmentation of cavities (implemented in the software Amira). The segmentation accuracy is evaluated as a function of (i) changes in the ray length input variable, and (ii) the usage of AO (scalar) field and other AO-derived (scalar) fields. The results clearly indicate that the AO field itself outperforms all other AO-derived fields in terms of segmentation accuracy and robustness against variations in the ray length input variable. The newly implemented optimisations improved the AO field-based segmentation only slightly, while the segmentations based on the AO-derived fields improved considerably. Additionally, we evaluated the potential of the AO field and AO-derived fields for the separation and classification of cavities as well as skeletal structures by comparing them with commonly used distance-map-based segmentations. For this, we tested the zooid separation within a bryozoan colony, the stereom classification of an ophiuroid tooth, the separation of bioerosion traces within a marble block and the calice (central cavity)-pore separation within a dendrophyllid coral. The obtained results clearly indicate that the ideal input field depends on the three-dimensional morphology of the object of interest. The segmentations based on the AO-derived fields often provided cavity separations and skeleton classifications that were superior to or impossible to obtain with commonly used distance-map-based segmentations. The combined usage of various AO-derived fields by supervised or unsupervised segmentation algorithms might provide a promising target for future research to further improve the results for this kind of high-end data segmentation and classification. Furthermore, the application of the developed segmentation algorithm is not restricted to X-ray (micro-)computed tomographic data but may potentially be useful for the segmentation of 3D volume data from other sources.

  11. Cultural-based particle swarm for dynamic optimisation problems

    NASA Astrophysics Data System (ADS)

    Daneshyari, Moayed; Yen, Gary G.

    2012-07-01

    Many practical optimisation problems involve uncertainties, among which a significant number belong to the dynamic optimisation problem (DOP) category, in which the fitness function changes through time. In this study, we propose a cultural-based particle swarm optimisation (PSO) to solve DOPs. A cultural framework is adopted, incorporating the required information from the PSO into five sections of the belief space, namely situational, temporal, domain, normative and spatial knowledge. The stored information is used to detect changes in the environment and assists the response to change through a diversity-based repulsion among particles and migration among swarms in the population space, and also helps in selecting the leading particles at three different levels: personal, swarm and global. Comparison of the proposed heuristic over several difficult dynamic benchmark problems demonstrates better or equal performance with respect to most of the other selected state-of-the-art dynamic PSO heuristics.
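
    For reference, the canonical global-best PSO update on which such a cultural framework is layered can be sketched as follows; the belief-space knowledge sources, change detection, repulsion and swarm migration of the paper are not reproduced here.

      # Sketch: canonical global-best PSO on a static test function.
      import numpy as np

      def sphere(x):
          return float(np.sum(x**2))

      def pso(f, dim=10, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bound=5.0):
          rng = np.random.default_rng(0)
          x = rng.uniform(-bound, bound, (n_particles, dim))     # positions
          v = np.zeros_like(x)                                   # velocities
          pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
          g = pbest[np.argmin(pbest_val)].copy()                 # global best position
          for _ in range(iters):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, -bound, bound)
              vals = np.array([f(p) for p in x])
              improved = vals < pbest_val
              pbest[improved], pbest_val[improved] = x[improved], vals[improved]
              g = pbest[np.argmin(pbest_val)].copy()
          return g, float(np.min(pbest_val))

      best_x, best_val = pso(sphere)
      print("best value found:", best_val)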

  12. Optimisation of logistics processes of energy grass collection

    NASA Astrophysics Data System (ADS)

    Bányai, Tamás.

    2010-05-01

    The collection of energy grass is a logistics-intensive process [1]. The optimal design and control of transportation and collection subprocesses is a critical point of the supply chain. To avoid decisions based only on experience and intuition, the optimisation and analysis of collection processes using mathematical models and methods is the scientifically sound way forward. Within the frame of this work, the author focuses on the optimisation possibilities of the collection processes, especially from the point of view of transportation and related warehousing operations. The optimisation methods developed in the literature [2] take into account the harvesting processes, county-specific yields, transportation distances, erosion constraints, machinery specifications and other key variables, but the possibility of multiple collection points and multi-level collection was not taken into consideration. The possible uses of energy grass are very wide (energy generation, biogas and bio-alcohol production, paper and textile industry, industrial fibre material, fodder, biological soil protection [3], etc.), so not only a single-level but also a multi-level collection system with several collection and production facilities has to be taken into consideration. The input parameters of the optimisation problem are the following: total amount of energy grass to be harvested in each region; specific facility costs of collection, warehousing and production units; specific costs of transportation resources; pre-scheduling of the harvesting process; specific transportation and warehousing costs; pre-scheduling of the processing of energy grass at each facility (exclusive warehousing). The model takes into consideration the following assumptions: (1) there is a cooperative relation among processing and production facilities, (2) capacity constraints are not ignored, (3) the cost function of transportation is non-linear, (4) the drivers' conditions are ignored. The objective function of the optimisation is the maximisation of profit, i.e. the maximisation of the difference between revenue and cost; it trades off the income of the assigned transportation demands against the logistic costs. The constraints are the following: (1) the free capacity of the assigned transportation resource is not less than the requested capacity of the transportation demand; (2) the calculated arrival time of the transportation resource at the harvesting place is not later than the requested arrival time; (3) the calculated arrival time of the transportation demand at the processing and production facility is not later than the requested arrival time; (4) one transportation demand is assigned to one transportation resource and one resource is assigned to one transportation demand. The decision variables of the optimisation problem are the set of scheduling variables and the assignment of resources to transportation demands. The evaluation parameters of the optimised system are the following: total costs of the collection process; utilisation of transportation resources and warehouses; efficiency of production and/or processing facilities. The multidimensional heuristic optimisation method is based on a genetic algorithm, while the routing sequence is determined by an ant colony algorithm.
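
    As a purely illustrative sketch of the outer assignment step described above (not the author's implementation), the following genetic algorithm assigns transportation demands to vehicles so as to maximise profit, i.e. revenue minus transport cost, with a penalty for exceeding vehicle capacity. All quantities are invented, and the ant colony routing subroutine is replaced by a fixed per-assignment distance cost.

      # Sketch: a toy outer GA assigning demands to vehicles (profit maximisation).
      import random

      random.seed(1)
      N_DEMANDS, N_VEHICLES = 12, 4
      demand_qty = [random.randint(5, 20) for _ in range(N_DEMANDS)]     # tonnes
      revenue = [q * 30.0 for q in demand_qty]                           # EUR per demand
      dist = [[random.uniform(5, 60) for _ in range(N_DEMANDS)] for _ in range(N_VEHICLES)]
      capacity = [60, 60, 80, 80]                                        # tonnes per vehicle
      COST_PER_KM, PENALTY = 2.0, 1000.0

      def fitness(assign):
          """assign[i] = vehicle serving demand i."""
          profit, load = 0.0, [0.0] * N_VEHICLES
          for i, v in enumerate(assign):
              profit += revenue[i] - COST_PER_KM * dist[v][i]
              load[v] += demand_qty[i]
          over = sum(max(0.0, load[v] - capacity[v]) for v in range(N_VEHICLES))
          return profit - PENALTY * over

      def ga(pop_size=40, gens=200, mut_rate=0.1):
          pop = [[random.randrange(N_VEHICLES) for _ in range(N_DEMANDS)] for _ in range(pop_size)]
          for _ in range(gens):
              pop.sort(key=fitness, reverse=True)
              parents = pop[:pop_size // 2]
              children = []
              while len(children) < pop_size - len(parents):
                  a, b = random.sample(parents, 2)
                  cut = random.randrange(1, N_DEMANDS)
                  child = a[:cut] + b[cut:]                       # one-point crossover
                  for i in range(N_DEMANDS):                      # mutation
                      if random.random() < mut_rate:
                          child[i] = random.randrange(N_VEHICLES)
                  children.append(child)
              pop = parents + children
          best = max(pop, key=fitness)
          return best, fitness(best)

      print(ga())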
The optimal routes are calculated with the aid of the ant colony algorithm as a subroutine of the global optimisation method, and the optimal assignment is given by the genetic algorithm. One important part of the mathematical method is the sensitivity analysis of the objective function, which shows the influence of the different input parameters. Acknowledgements: This research was implemented within the frame of the project entitled "Development and operation of the Technology and Knowledge Transfer Centre of the University of Miskolc", with support from the European Union and co-funding from the European Social Fund. References: [1] P. R. Daniel: The Economics of Harvesting and Transporting Corn Stover for Conversion to Fuel Ethanol: A Case Study for Minnesota. University of Minnesota, Department of Applied Economics, 2006. http://ideas.repec.org/p/ags/umaesp/14213.html [2] T. G. Douglas, J. Brendan, D. Erin & V.-D. Becca: Energy and Chemicals from Native Grasses: Production, Transportation and Processing Technologies Considered in the Northern Great Plains. University of Minnesota, Department of Applied Economics, 2006. http://ideas.repec.org/p/ags/umaesp/13838.html [3] Homepage of energy grass: www.energiafu.hu

  13. Boundary element based multiresolution shape optimisation in electrostatics

    NASA Astrophysics Data System (ADS)

    Bandara, Kosala; Cirak, Fehmi; Of, Günther; Steinbach, Olaf; Zapletal, Jan

    2015-09-01

    We consider the shape optimisation of high-voltage devices subject to electrostatic field equations by combining fast boundary elements with multiresolution subdivision surfaces. The geometry of the domain is described with subdivision surfaces and different resolutions of the same geometry are used for optimisation and analysis. The primal and adjoint problems are discretised with the boundary element method using a sufficiently fine control mesh. For shape optimisation the geometry is updated starting from the coarsest control mesh with increasingly finer control meshes. The multiresolution approach effectively prevents the appearance of non-physical geometry oscillations in the optimised shapes. Moreover, there is no need for mesh regeneration or smoothing during the optimisation due to the absence of a volume mesh. We present several numerical experiments and one industrial application to demonstrate the robustness and versatility of the developed approach.

  14. Set-membership fault detection under noisy environment with application to the detection of abnormal aircraft control surface positions

    NASA Astrophysics Data System (ADS)

    El Houda Thabet, Rihab; Combastel, Christophe; Raïssi, Tarek; Zolghadri, Ali

    2015-09-01

    The paper develops a set-membership detection methodology which is applied to the detection of abnormal positions of aircraft control surfaces. Robust and early detection of such abnormal positions is an important issue for early system reconfiguration and overall optimisation of aircraft design. In order to improve fault sensitivity while ensuring a high level of robustness, the method combines a data-driven characterisation of noise and a model-driven approach based on interval prediction. The efficiency of the proposed methodology is illustrated through simulation results obtained from data recorded in several flight scenarios of a highly representative aircraft benchmark.

  15. pyPcazip: A PCA-based toolkit for compression and analysis of molecular simulation data

    NASA Astrophysics Data System (ADS)

    Shkurti, Ardita; Goni, Ramon; Andrio, Pau; Breitmoser, Elena; Bethune, Iain; Orozco, Modesto; Laughton, Charles A.

    The biomolecular simulation community is currently in need of novel and optimised software tools that can analyse and process, in reasonable timescales, the large amounts of molecular simulation data generated. In light of this, we have developed and present here pyPcazip: a suite of software tools for compression and analysis of molecular dynamics (MD) simulation data. The software is compatible with trajectory file formats generated by most contemporary MD engines such as AMBER, CHARMM, GROMACS and NAMD, and is MPI-parallelised to permit the efficient processing of very large datasets. pyPcazip is a Unix-based, open-source software suite (BSD licensed) written in Python.
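
    The underlying PCA-compression idea can be sketched in a few lines of NumPy (this is the general technique, not the pyPcazip API): flatten each aligned frame into a coordinate vector, keep only the projections onto the leading principal components, and store those scores together with the mean structure and the retained eigenvectors.

      # Sketch of PCA compression of a trajectory; the trajectory here is a synthetic
      # placeholder (a few collective "motions" plus noise), not real MD data.
      import numpy as np

      rng = np.random.default_rng(0)
      n_frames, n_atoms = 500, 100
      latent = rng.normal(size=(n_frames, 5))                # 5 underlying collective motions
      modes = rng.normal(size=(5, n_atoms * 3))
      X = latent @ modes + 0.05 * rng.normal(size=(n_frames, n_atoms * 3))   # (frames, 3N)

      mean = X.mean(axis=0)
      Xc = X - mean
      U, S, Vt = np.linalg.svd(Xc, full_matrices=False)      # principal components in rows of Vt

      explained = np.cumsum(S**2) / np.sum(S**2)
      k = int(np.searchsorted(explained, 0.90)) + 1          # components covering 90% of variance
      scores = Xc @ Vt[:k].T                                 # compressed representation (frames, k)

      X_rec = scores @ Vt[:k] + mean                         # approximate reconstruction
      rmse = np.sqrt(np.mean((X - X_rec) ** 2))
      ratio = X.size / (scores.size + Vt[:k].size + mean.size)
      print(f"kept {k} of {Xc.shape[1]} components, RMSE {rmse:.4f}, compression {ratio:.1f}x")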

  16. Efficient exploration of chemical space by fragment-based screening.

    PubMed

    Hall, Richard J; Mortenson, Paul N; Murray, Christopher W

    2014-01-01

    Screening methods seek to sample a vast chemical space in order to identify starting points for further chemical optimisation. Fragment based drug discovery exploits the superior sampling of chemical space that can be achieved when the molecular weight is restricted. Here we show that commercially available fragment space is still relatively poorly sampled and argue for highly sensitive screening methods to allow the detection of smaller fragments. We analyse the properties of our fragment library versus the properties of X-ray hits derived from the library. We particularly consider properties related to the degree of planarity of the fragments. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Monolithic poly[(trimethylsilyl-4-methylstyrene)-co- bis(4-vinylbenzyl)dimethylsilane] stationary phases for the fast separation of proteins and oligonucleotides.

    PubMed

    Jakschitz, Thomas A E; Huck, Christian W; Lubbad, Said; Bonn, Günther K

    2007-04-13

    In this paper the synthesis, optimisation and application of a silane-based monolithic copolymer for the rapid separation of proteins and oligonucleotides is described. The monolith was prepared by thermally initiated in situ copolymerisation of trimethylsilyl-4-methylstyrene (TMSiMS) and bis(4-vinylbenzyl)dimethylsilane (BVBDMSi) in a silanised 200 microm I.D. fused silica column. Different ratios of monomer and crosslinker, as well as different ratios of micro- (toluene) and macro-porogen (2-propanol), were used to optimise the physical properties of the stationary phase with respect to separation efficiency. The prepared monolithic stationary phases were characterised by measurement of permeability with different solvents and determination of pore-size distribution by mercury intrusion porosimetry (MIP). Morphology was studied by scanning electron microscopy (SEM). Applying optimised conditions, a mixture comprising the five standard proteins ribonuclease A, cytochrome c, alpha-lactalbumin, myoglobin and ovalbumin was separated within 1 min by ion-pair reversed-phase liquid chromatography (IP-RPLC), with half-height peak widths between 1.8 and 2.4 s. Baseline separation of oligonucleotides d(pT)(12-18) was achieved within 1.8 min, with half-height peak widths between 3.6 and 5.4 s. The results demonstrate the high potential of this stationary phase for the fast separation of high-molecular-weight biomolecules such as oligonucleotides and proteins.

  18. Enhancement of temporal contrast of high-power laser pulses in an anisotropic medium with cubic nonlinearity

    NASA Astrophysics Data System (ADS)

    Kuz'mina, M. S.; Khazanov, E. A.

    2015-05-01

    We consider the methods for enhancing the temporal contrast of super-high-power laser pulses, based on the conversion of radiation polarisation in a medium with cubic nonlinearity. For a medium with weak birefringence and isotropic nonlinearity, we propose a new scheme to enhance the temporal contrast. For a medium with anisotropic nonlinearity, the efficiency of the temporal contrast optimisation is shown to depend not only on the spatial orientation of the crystal and B-integral, but also on the type of the crystal lattice symmetry.

  19. Optimisation of olive oil phenol extraction conditions using a high-power probe ultrasonication.

    PubMed

    Jerman Klen, T; Mozetič Vodopivec, B

    2012-10-15

    A new method of ultrasound-probe-assisted liquid-liquid extraction (US-LLE), combined with a freeze-based fat precipitation clean-up and HPLC-DAD-FLD-MS detection, is described for extra virgin olive oil (EVOO) phenol analysis. Three extraction variables (solvent type: 100%, 80% or 50% methanol; sonication time: 5, 10 or 20 min; extraction steps: 1-5) and two clean-up methods (n-hexane washing vs. low-temperature fat precipitation) were studied and optimised with the aim of maximising the extracts' phenol recoveries. A three-step extraction of 10 min with pure methanol (5 mL) resulted in the highest phenol content of freeze-based defatted extracts (667 μg GAE g(-1)) from 10 g of EVOO, providing much higher efficiency (up to 68%) and repeatability (up to 51%) than its non-sonicated counterpart (LLE-agitation) and n-hexane washing. In addition, the overall method provided high linearity (r(2)≥0.97), precision (RSD: 0.4-9.3%) and sensitivity, with LODs/LOQs ranging from 0.03 to 0.16 μg g(-1) and from 0.10 to 0.51 μg g(-1) of EVOO, respectively. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. A Bayesian Approach for Sensor Optimisation in Impact Identification

    PubMed Central

    Mallardo, Vincenzo; Sharif Khodaei, Zahra; Aliabadi, Ferri M. H.

    2016-01-01

    This paper presents a Bayesian approach for optimising the position of sensors aimed at impact identification in composite structures under operational conditions. The uncertainty in the sensor data has been represented by statistical distributions of the recorded signals. An optimisation strategy based on a genetic algorithm is proposed to find the best sensor combination for locating impacts on composite structures. A Bayesian-based objective function is adopted in the optimisation procedure as an indicator of the performance of meta-models developed for different sensor combinations to locate various impact events. To represent a real structure under operational load and to increase the reliability of the Structural Health Monitoring (SHM) system, the probability of malfunctioning sensors is included in the optimisation. The reliability and the robustness of the procedure are tested with experimental and numerical examples. Finally, the proposed optimisation algorithm is applied to a composite stiffened panel for both uniform and non-uniform probabilities of impact occurrence. PMID:28774064

  1. Transforming fragments into candidates: small becomes big in medicinal chemistry.

    PubMed

    de Kloe, Gerdien E; Bailey, David; Leurs, Rob; de Esch, Iwan J P

    2009-07-01

    Fragment-based drug discovery (FBDD) represents a logical and efficient approach to lead discovery and optimisation. It can draw on structural, biophysical and biochemical data, incorporating a wide range of inputs, from precise mode-of-binding information on specific fragments to wider ranging pharmacophoric screening surveys using traditional HTS approaches. It is truly an enabling technology for the imaginative medicinal chemist. In this review, we analyse a representative set of 23 published FBDD studies that describe how low molecular weight fragments are being identified and efficiently transformed into higher molecular weight drug candidates. FBDD is now becoming warmly endorsed by industry as well as academia and the focus on small interacting molecules is making a big scientific impact.

  2. A novel sleep optimisation programme to improve athletes' well-being and performance.

    PubMed

    Van Ryswyk, Emer; Weeks, Richard; Bandick, Laura; O'Keefe, Michaela; Vakulin, Andrew; Catcheside, Peter; Barger, Laura; Potter, Andrew; Poulos, Nick; Wallace, Jarryd; Antic, Nick A

    2017-03-01

    To improve well-being and performance indicators in a group of Australian Football League (AFL) players via a six-week sleep optimisation programme. Prospective intervention study following observations suggestive of reduced sleep and excessive daytime sleepiness in an AFL group. Athletes from the Adelaide Football Club were invited to participate if they had played AFL senior-level football for 1-5 years, or if they had excessive daytime sleepiness (Epworth Sleepiness Scale [ESS] >10). An initial education session explained normal sleep needs, and how to achieve increased sleep duration and quality. Participants (n = 25) received ongoing feedback on their sleep, and a mid-programme education and feedback session. Sleep duration, quality and related outcomes were measured during week one and at the conclusion of the six-week intervention period using sleep diaries, actigraphy, the ESS, the Pittsburgh Sleep Quality Index, the Profile of Mood States, the Training Distress Scale, the Perceived Stress Scale and the Psychomotor Vigilance Task. Sleep diaries demonstrated an increase in total sleep time of approximately 20 min (498.8 ± 53.8 to 518.7 ± 34.3 min; p < 0.05) and a 2% increase in sleep efficiency (p < 0.05). There was a corresponding increase in vigour (p < 0.001) and decrease in fatigue (p < 0.05). Improvements in measures of sleep efficiency, fatigue and vigour indicate that a sleep optimisation programme may improve athletes' well-being. More research is required into the effects of sleep optimisation on athletic performance.

  3. Frequency response function-based explicit framework for dynamic identification in human-structure systems

    NASA Astrophysics Data System (ADS)

    Wei, Xiaojun; Živanović, Stana

    2018-05-01

    The aim of this paper is to propose a novel theoretical framework for dynamic identification in a structure occupied by a single human. The framework enables the prediction of the dynamics of the human-structure system from the known properties of the individual system components, the identification of human body dynamics from the known dynamics of the empty structure and the human-structure system and the identification of the properties of the structure from the known dynamics of the human and the human-structure system. The novelty of the proposed framework is the provision of closed-form solutions in terms of frequency response functions obtained by curve fitting measured data. The advantages of the framework over existing methods are that there is neither need for nonlinear optimisation nor need for spatial/modal models of the empty structure and the human-structure system. In addition, the second-order perturbation method is employed to quantify the effect of uncertainties in human body dynamics on the dynamic identification of the empty structure and the human-structure system. The explicit formulation makes the method computationally efficient and straightforward to use. A series of numerical examples and experiments are provided to illustrate the working of the method.

  4. Lithospheric architecture of NE China from joint Inversions of receiver functions and surface wave dispersion through Bayesian optimisation

    NASA Astrophysics Data System (ADS)

    Sebastian, Nita; Kim, Seongryong; Tkalčić, Hrvoje; Sippl, Christian

    2017-04-01

    The purpose of this study is to develop an integrated inference on the lithospheric structure of NE China using three passive seismic networks comprising 92 stations. The NE China plain consists of complex lithospheric domains characterised by the co-existence of complex geodynamic processes such as crustal thinning, active intraplate Cenozoic volcanism and low-velocity anomalies. To estimate lithospheric structures in greater detail, we chose to perform the joint inversion of independent data sets, namely receiver functions and surface wave dispersion curves (group and phase velocity). We perform a joint inversion based on principles of Bayesian transdimensional optimisation techniques (Kim et al., 2016). Unlike in previous studies of NE China, the complexity of the model is determined from the data in the first stage of the inversion, and the data uncertainty is computed based on Bayesian statistics in the second stage. The computed crustal properties are retrieved from an ensemble of probable models. We obtain major structural inferences with well-constrained absolute velocity estimates, which are vital for inferring properties of the lithosphere and the bulk crustal Vp/Vs ratio. The Vp/Vs estimate obtained from the joint inversions confirms the high Vp/Vs ratio (~1.98) obtained using the H-Kappa method beneath some stations. Moreover, we could confirm the existence of a lower crustal velocity beneath several stations (e.g. station SHS) within the NE China plain. Based on these findings we attempt to identify a plausible origin for the structural complexity. We compile a high-resolution 3D image of the lithospheric architecture of the NE China plain.

  5. PeTTSy: a computational tool for perturbation analysis of complex systems biology models.

    PubMed

    Domijan, Mirela; Brown, Paul E; Shulgin, Boris V; Rand, David A

    2016-03-10

    Over the last decade sensitivity analysis techniques have been shown to be very useful for analysing complex and high-dimensional Systems Biology models. However, many of the currently available toolboxes have either used parameter sampling, been focused on a restricted set of model observables of interest, studied optimisation of an objective function, or have not dealt with multiple simultaneous model parameter changes where the changes can be permanent or temporary. Here we introduce our new, freely downloadable toolbox, PeTTSy (Perturbation Theory Toolbox for Systems). PeTTSy is a package for MATLAB which implements a wide array of techniques for the perturbation theory and sensitivity analysis of large and complex ordinary differential equation (ODE) based models. PeTTSy is a comprehensive modelling framework that introduces a number of new approaches and that fully addresses the analysis of oscillatory systems. It examines the sensitivity of models to perturbations of parameters, where the perturbation timing, strength, length and overall shape can be controlled by the user. This can be done in a system-global setting, namely, the user can determine how many parameters to perturb, by how much and for how long. PeTTSy also offers the user the ability to explore the effect of the parameter perturbations on many different types of outputs: period, phase (timing of peak) and model solutions. PeTTSy can be employed on a wide range of mathematical models including free-running and forced oscillators and signalling systems. To enable experimental optimisation using the Fisher Information Matrix, it efficiently allows one to combine multiple variants of a model (i.e. a model with multiple experimental conditions) in order to determine the value of new experiments. It is especially useful in the analysis of large and complex models involving many variables and parameters. PeTTSy is a comprehensive tool for analysing large and complex models of regulatory and signalling systems. It allows for simulation and analysis of models under a variety of environmental conditions and for experimental optimisation of complex combined experiments. With its unique set of tools it makes a valuable addition to the current library of sensitivity analysis toolboxes. We believe that this software will be of great use to the wider biological, systems biology and modelling communities.

  6. Economic impact of optimising antiretroviral treatment in human immunodeficiency virus-infected adults with suppressed viral load in Spain, by implementing the grade A-1 evidence recommendations of the 2015 GESIDA/National AIDS Plan.

    PubMed

    Ribera, Esteban; Martínez-Sesmero, José Manuel; Sánchez-Rubio, Javier; Rubio, Rafael; Pasquau, Juan; Poveda, José Luis; Pérez-Mitru, Alejandro; Roldán, Celia; Hernández-Novoa, Beatriz

    2018-03-01

    The objective of this study is to estimate the economic impact associated with the optimisation of triple antiretroviral treatment (ART) in patients with undetectable viral load according to the recommendations from the GeSIDA/PNS (2015) Consensus and their applicability in the Spanish clinical practice. A pharmacoeconomic model was developed based on data from a National Hospital Prescription Survey on ART (2014) and the A-I evidence recommendations for the optimisation of ART from the GeSIDA/PNS (2015) consensus. The optimisation model took into account the willingness to optimise a particular regimen and other assumptions, and the results were validated by an expert panel in HIV infection (Infectious Disease Specialists and Hospital Pharmacists). The analysis was conducted from the NHS perspective, considering the annual wholesale price and accounting for deductions stated in the RD-Law 8/2010 and the VAT. The expert panel selected six optimisation strategies, and estimated that 10,863 (13.4%) of the 80,859 patients in Spain currently on triple ART, would be candidates to optimise their ART, leading to savings of €15.9M/year (2.4% of total triple ART drug cost). The most feasible strategies (>40% of patients candidates for optimisation, n=4,556) would be optimisations to ATV/r+3TC therapy. These would produce savings between €653 and €4,797 per patient per year depending on baseline triple ART. Implementation of the main optimisation strategies recommended in the GeSIDA/PNS (2015) Consensus into Spanish clinical practice would lead to considerable savings, especially those based in dual therapy with ATV/r+3TC, thus contributing to the control of pharmaceutical expenditure and NHS sustainability. Copyright © 2016 Elsevier España, S.L.U. and Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica. All rights reserved.

  7. Machine learning prediction for classification of outcomes in local minimisation

    NASA Astrophysics Data System (ADS)

    Das, Ritankar; Wales, David J.

    2017-01-01

    Machine learning schemes are employed to predict which local minimum will result from local energy minimisation of random starting configurations for a triatomic cluster. The input data consists of structural information at one or more of the configurations in optimisation sequences that converge to one of four distinct local minima. The ability to make reliable predictions, in terms of the energy or other properties of interest, could save significant computational resources in sampling procedures that involve systematic geometry optimisation. Results are compared for two energy minimisation schemes, and for neural network and quadratic functions of the inputs.
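
    A minimal sketch of the comparison described above, using scikit-learn, contrasts a small neural network with a logistic model built on quadratic features of the inputs for predicting which of four minima a starting configuration leads to. The features and labels below are random placeholders rather than the triatomic-cluster data of the study.

      # Sketch: neural network vs quadratic-feature model for classifying basins of attraction.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier
      from sklearn.preprocessing import PolynomialFeatures

      rng = np.random.default_rng(0)
      X = rng.normal(size=(2000, 3))                       # e.g. three interparticle distances
      y = (X[:, 0] > 0).astype(int) + 2 * (X[:, 1] + X[:, 2] > 0).astype(int)   # 4 dummy basins

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

      # Neural network classifier
      nn = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X_tr, y_tr)

      # Quadratic function of the inputs fed into a linear (logistic) classifier
      quad = PolynomialFeatures(degree=2, include_bias=False)
      lr = LogisticRegression(max_iter=2000).fit(quad.fit_transform(X_tr), y_tr)

      print("neural network accuracy:", nn.score(X_te, y_te))
      print("quadratic-feature model accuracy:", lr.score(quad.transform(X_te), y_te))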

  8. A Robust, Water-Based, Functional Binder Framework for High-Energy Lithium-Sulfur Batteries.

    PubMed

    Lacey, Matthew J; Österlund, Viking; Bergfelt, Andreas; Jeschull, Fabian; Bowden, Tim; Brandell, Daniel

    2017-07-10

    We report here a water-based functional binder framework for the lithium-sulfur battery systems, based on the general combination of a polyether and an amide-containing polymer. These binders are applied to positive electrodes optimised towards high-energy electrochemical performance based only on commercially available materials. Electrodes with up to 4 mAh cm^-2 capacity and 97-98 % coulombic efficiency are achievable in electrodes with a 65 % total sulfur content and a poly(ethylene oxide):poly(vinylpyrrolidone) (PEO:PVP) binder system. Exchange of either binder component for a different polymer with similar functionality preserves the high capacity and coulombic efficiency. The improvement in coulombic efficiency from the inclusion of the coordinating amide group was also observed in electrodes where pyrrolidone moieties were covalently grafted to the carbon black, indicating the role of this functionality in facilitating polysulfide adsorption to the electrode surface. The mechanical properties of the electrodes appear not to significantly influence sulfur utilisation or coulombic efficiency in the short term but rather determine retention of these properties over extended cycling. These results demonstrate the robustness of this very straightforward approach, as well as the considerable scope for designing binder materials with targeted properties. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Municipal solid waste transportation optimisation with vehicle routing approach: case study of Pontianak City, West Kalimantan

    NASA Astrophysics Data System (ADS)

    Kamal, M. A.; Youlla, D.

    2018-03-01

    Municipal solid waste (MSW) transportation in Pontianak City is an issue that needs to be tackled by the relevant agencies. The MSW transportation service in Pontianak City currently requires very high resources, especially in vehicle usage. Increasing the number of fleets has not been able to increase service levels, while garbage volume grows every year along with population growth. In this research, a vehicle routing optimisation approach was used to find optimal and cost-efficient vehicle routes for transporting garbage from several Temporary Garbage Dumps (TGDs) to the Final Garbage Dump (FGD). One of the problems of MSW transportation is that some TGDs exceed the vehicle capacity and must be visited more than once. The optimal computation results suggest that the municipal authorities need only 3 of the 5 vehicles provided, with a total minimum cost of IDR 778,870. The computation time needed to find the optimal route and minimal cost is, however, considerable; it is influenced by the number of constraints and by the decision variables taking integer values.
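
    A minimal sketch of a capacity-constrained routing heuristic in the spirit of the approach described above follows; a TGD whose volume exceeds the vehicle capacity is split over several trips and therefore visited more than once. Coordinates, volumes and the capacity are invented, and the greedy nearest-neighbour rule stands in for the exact optimisation used in the study.

      # Sketch: greedy capacitated collection routing with split pickups for oversized TGDs.
      import math

      depot = (0.0, 0.0)                                  # FGD / vehicle base
      tgd = {  # id: (x, y, volume in m^3)
          1: (2.0, 3.0, 4.0),
          2: (5.0, 1.0, 9.0),                             # exceeds capacity -> split over trips
          3: (1.0, 6.0, 3.0),
          4: (4.0, 5.0, 5.0),
      }
      CAPACITY = 6.0

      def dist(a, b):
          return math.hypot(a[0] - b[0], a[1] - b[1])

      remaining = {i: v for i, (_, _, v) in tgd.items()}
      routes = []
      while any(v > 1e-9 for v in remaining.values()):
          pos, load, route = depot, 0.0, []
          while True:
              candidates = [i for i, v in remaining.items() if v > 1e-9]
              if not candidates or load >= CAPACITY - 1e-9:
                  break
              nxt = min(candidates, key=lambda i: dist(pos, tgd[i][:2]))   # nearest TGD
              pickup = min(remaining[nxt], CAPACITY - load)                # partial pickup allowed
              remaining[nxt] -= pickup
              load += pickup
              route.append((nxt, pickup))
              pos = tgd[nxt][:2]
          routes.append(route)

      for k, r in enumerate(routes, 1):
          print(f"trip {k}:", r)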

  10. Discrete bacteria foraging optimization algorithm for graph based problems - a transition from continuous to discrete

    NASA Astrophysics Data System (ADS)

    Sur, Chiranjib; Shukla, Anupam

    2018-03-01

    The Bacteria Foraging Optimisation Algorithm is a collective behaviour-based meta-heuristic search that depends on the social influence of the bacteria co-agents in the search space of the problem. The algorithm faces tremendous hindrance in its application to discrete and graph-based problems due to its biased mathematical modelling and dynamic structure. This has been the key motivation for introducing a discrete form, the Discrete Bacteria Foraging Optimisation (DBFO) Algorithm, for discrete problems, which in real life outnumber the continuous-domain problems represented by mathematical and numerical equations. In this work, we have mainly simulated a graph-based road multi-objective optimisation problem and have discussed the prospect of its utilisation in other similar optimisation problems and graph-based problems. The various solution representations that can be handled by this DBFO have also been discussed. The implications and dynamics of the various parameters used in the DBFO are illustrated from the point of view of the problems and represent a combination of both exploration and exploitation. The results of DBFO have been compared with the Ant Colony Optimisation and Intelligent Water Drops algorithms. An important feature of DBFO is that the bacteria agents do not depend on local heuristic information but estimate new exploration schemes based on previous experience and covered-path analysis. This makes the algorithm better at generating combinations for graph-based problems and NP-hard problems.

  11. Optimising energy recovery and use of chemicals, resources and materials in modern waste-to-energy plants.

    PubMed

    De Greef, J; Villani, K; Goethals, J; Van Belle, H; Van Caneghem, J; Vandecasteele, C

    2013-11-01

    Due to ongoing developments in EU waste policy, Waste-to-Energy (WtE) plants are to be optimised beyond current acceptance levels. In this paper, a non-exhaustive overview of advanced technical improvements is presented and illustrated with facts and figures from state-of-the-art combustion plants for municipal solid waste (MSW). Some of the data included originate from regular WtE plant operation - before and after optimisation - as well as from defined plant-scale research. Aspects of energy efficiency and (re-)use of chemicals, resources and materials are discussed and support, in light of best available techniques (BAT), the idea that WtE plant performance can still be improved significantly, without a direct need for expensive techniques, tools or re-design. In the first instance, diagnostic skills and a thorough understanding of processes and operations allow for reclaiming the silent optimisation potential. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. Optimisation of SOA-REAMs for hybrid DWDM-TDMA PON applications.

    PubMed

    Naughton, Alan; Antony, Cleitus; Ossieur, Peter; Porto, Stefano; Talli, Giuseppe; Townsend, Paul D

    2011-12-12

    We demonstrate how loss-optimised, gain-saturated SOA-REAM based reflective modulators can reduce the burst to burst power variations due to differential access loss in the upstream path in carrier distributed passive optical networks by 18 dB compared to fixed linear gain modulators. We also show that the loss optimised device has a high tolerance to input power variations and can operate in deep saturation with minimal patterning penalties. Finally, we demonstrate that an optimised device can operate across the C-Band and also over a transmission distance of 80 km. © 2011 Optical Society of America

  13. Optimisation of composite metallic fuel for minor actinide transmutation in an accelerator-driven system

    NASA Astrophysics Data System (ADS)

    Uyttenhove, W.; Sobolev, V.; Maschek, W.

    2011-09-01

    A potential option for the neutralisation of minor actinides (MA) accumulated in the spent nuclear fuel of light water reactors (LWRs) is their transmutation in dedicated accelerator-driven systems (ADS). A promising fuel candidate dedicated to MA transmutation is a CERMET composite with a Mo metal matrix and (Pu, Np, Am, Cm)O2-x fuel particles. Results of optimisation studies of the CERMET fuel aimed at increasing the MA transmutation efficiency of the EFIT (European Facility for Industrial Transmutation) core are presented. In the adopted strategy of MA burning, the plutonium (Pu) balance of the core is minimised, allowing a reduction in the reactivity swing and the peak power form-factor deviation and an extension of the cycle duration. The MA/Pu ratio is used as a variable for the fuel optimisation studies. The efficiency of MA transmutation is close to the foreseen theoretical value of 42 kg TW^-1 h^-1 when the level of Pu in the actinide mixture is about 40 wt.%. The obtained results are compared with the reference case of the EFIT core loaded with the composite CERCER fuel, where fuel particles are incorporated in a ceramic magnesia matrix. The results of this study offer additional information for the EFIT fuel selection.

  14. Microencapsulation Approach for Orally Extended Delivery of Glipizide: In vitro and in vivo Evaluation

    PubMed Central

    Abdelbary, A.; El-gendy, N. A.; Hosny, A.

    2012-01-01

    Glipizide is an effective antidiabetic agent; however, it suffers from a relatively short biological half-life. To overcome this limitation, it is a prospective candidate for extended-release microcapsules. Microencapsulation of glipizide with a coat of alginate, alone or in combination with chitosan or carbomer 934P, was performed using an ionotropic gelation process. The prepared microcapsules were evaluated in vitro by microscopical examination and determination of particle size, yield and microencapsulation efficiency. The filled capsules were assessed for content uniformity and drug release characteristics. A stability study of the optimised formulas was carried out at three different temperatures over 12 weeks. The in vivo bioavailability and hypoglycemic activity of C9 microcapsules were assessed in albino rabbits. All formulas achieved high yield, high microencapsulation efficiency and an extended t1/2. The C9 and C19 microcapsules attained the best results in all tests and complied with the dissolution requirements for extended-release dosage forms; these two formulas were selected for stability studies. C9 exhibited the longer shelf-life and was therefore chosen for the in vivo studies. C9 microcapsules showed improved drug bioavailability and significant hypoglycemic activity compared with immediate-release tablets (Minidiab® 5 mg). The optimised microcapsule formulation was found to produce extended antidiabetic activity. PMID:23626387

  15. Warpage optimisation on the moulded part with straight-drilled and conformal cooling channels using response surface methodology (RSM) and glowworm swarm optimisation (GSO)

    NASA Astrophysics Data System (ADS)

    Hazwan, M. H. M.; Shayfull, Z.; Sharif, S.; Nasir, S. M.; Zainal, N.

    2017-09-01

    In the injection moulding process, quality and productivity are notably important and must be controlled for each product type produced. Quality is measured as the extent of warpage of the moulded parts, while productivity is measured as the duration of the moulding cycle. To control quality, many researchers have introduced various optimisation approaches that have been shown to enhance the quality of the moulded part. To improve the productivity of the injection moulding process, some researchers have proposed conformal cooling channels, which have been shown to reduce the moulding cycle time. This paper therefore presents an alternative optimisation approach, Response Surface Methodology (RSM) combined with Glowworm Swarm Optimisation (GSO), applied to a moulded part produced with straight-drilled and conformal cooling channel moulds. The study examined the warpage of the moulded parts before and after the optimisation was applied for both cooling channel types. A front panel housing was selected as the specimen, and the performance of the proposed optimisation approach was analysed for conventional straight-drilled cooling channels and for Milled Groove Square Shape (MGSS) conformal cooling channels by simulation analysis using Autodesk Moldflow Insight (AMI) 2013. Based on the results, melt temperature is the most significant factor contributing to warpage for the straight-drilled cooling channels, where warpage was reduced by 39.1% after optimisation, while cooling time is the most significant factor for the MGSS conformal cooling channels, where warpage was reduced by 38.7%. In addition, the findings show that applying the optimisation to the conformal cooling channels offers better quality and productivity for the moulded part produced.
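    As a hedged illustration of the two-stage idea (fit a response surface to a handful of simulated runs, then search it with glowworm swarm optimisation), the sketch below fits a quadratic model to synthetic warpage data over two normalised process variables and minimises it with a basic GSO loop. The variables, data and parameter values are invented for illustration and do not reproduce the study's Moldflow model.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Response surface methodology (RSM) step -------------------------------
# Hypothetical "simulation" results: warpage (mm) at sampled process settings.
# x1 = melt temperature (normalised), x2 = cooling time (normalised).
X = rng.uniform(-1.0, 1.0, size=(30, 2))
warpage = (0.8 + 0.3 * X[:, 0] - 0.2 * X[:, 1] + 0.5 * X[:, 0] ** 2
           + 0.4 * X[:, 1] ** 2 + 0.1 * X[:, 0] * X[:, 1]
           + rng.normal(0.0, 0.01, 30))

def design_matrix(x):
    x1, x2 = x[:, 0], x[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])

beta, *_ = np.linalg.lstsq(design_matrix(X), warpage, rcond=None)

def rsm_warpage(x):
    return design_matrix(np.atleast_2d(x)) @ beta

# --- Glowworm swarm optimisation (GSO) over the fitted surface -------------
n, steps, s, rho, gamma = 25, 60, 0.03, 0.4, 0.6
pos = rng.uniform(-1.0, 1.0, size=(n, 2))
luciferin = np.full(n, 5.0)
dec_range = np.full(n, 1.0)              # decision range per glowworm

for _ in range(steps):
    luciferin = (1 - rho) * luciferin + gamma * (-rsm_warpage(pos))
    new_pos = pos.copy()
    for i in range(n):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = np.where((d < dec_range[i]) & (luciferin > luciferin[i]))[0]
        if nbrs.size:
            p = luciferin[nbrs] - luciferin[i]
            j = rng.choice(nbrs, p=p / p.sum())
            step_dir = (pos[j] - pos[i]) / (np.linalg.norm(pos[j] - pos[i]) + 1e-12)
            new_pos[i] = np.clip(pos[i] + s * step_dir, -1.0, 1.0)
        dec_range[i] = np.clip(dec_range[i] + 0.08 * (5 - nbrs.size), 0.05, 1.0)
    pos = new_pos

best = pos[np.argmin(rsm_warpage(pos))]
print("optimised settings:", best, "predicted warpage:", float(rsm_warpage(best)[0]))
```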

  16. Advantages of Task-Specific Multi-Objective Optimisation in Evolutionary Robotics

    PubMed Central

    Trianni, Vito; López-Ibáñez, Manuel

    2015-01-01

    The application of multi-objective optimisation to evolutionary robotics is receiving increasing attention. A survey of the literature reveals the different possibilities it offers to improve the automatic design of efficient and adaptive robotic systems, and points to the successful demonstrations available for both task-specific and task-agnostic approaches (i.e., with or without reference to the specific design problem to be tackled). However, the advantages of multi-objective approaches over single-objective ones have not been clearly spelled out and experimentally demonstrated. This paper fills this gap for task-specific approaches: starting from well-known results in multi-objective optimisation, we discuss how to tackle commonly recognised problems in evolutionary robotics. In particular, we show that multi-objective optimisation (i) allows evolving a more varied set of behaviours by exploring multiple trade-offs of the objectives to optimise, (ii) supports the evolution of the desired behaviour through the introduction of objectives as proxies, (iii) avoids the premature convergence to local optima possibly introduced by multi-component fitness functions, and (iv) solves the bootstrap problem exploiting ancillary objectives to guide evolution in the early phases. We present an experimental demonstration of these benefits in three different case studies: maze navigation in a single robot domain, flocking in a swarm robotics context, and a strictly collaborative task in collective robotics. PMID:26295151

  17. Improved packing of protein side chains with parallel ant colonies.

    PubMed

    Quan, Lijun; Lü, Qiang; Li, Haiou; Xia, Xiaoyan; Wu, Hongjie

    2014-01-01

    The accurate packing of protein side chains is important for many computational biology problems, such as ab initio protein structure prediction, homology modelling, protein design and ligand docking. Many existing solutions model it as a computational optimisation problem. Beyond the design of the search algorithm, most solutions suffer from an inaccurate energy function for judging whether a prediction is good or bad: even if the search finds the lowest energy, there is no certainty of obtaining protein structures with correct side chains. We present a side-chain modelling method, pacoPacker, which uses a parallel ant colony optimisation strategy based on sharing a single pheromone matrix. This parallel approach combines different energy functions and generates protein side-chain conformations with the lowest energies jointly determined by those functions. We further optimised the selected rotamers to construct subrotamers by rotamer minimisation, which reasonably mitigated the discreteness of the rotamer library. We focused on improving the accuracy of side-chain conformation prediction. For a testing set of 442 proteins, 87.19% of X1 and 77.11% of X12 angles were predicted correctly within 40° of the X-ray positions. We compared the accuracy of pacoPacker with state-of-the-art methods such as CIS-RR and SCWRL4, analysing the results both per protein chain and per individual residue. In this comprehensive benchmark, for 51.5% of proteins up to 400 amino acids in length the pacoPacker predictions were superior to the results of CIS-RR and SCWRL4 simultaneously. Finally, we also showed the advantage of the subrotamer strategy. All results confirm that our parallel approach is competitive with state-of-the-art solutions for packing side chains. It combines various sources of search intelligence and energy functions, and provides a framework for combining objective functions of differing accuracy and usefulness by designing parallel heuristic search algorithms.
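    The following sketch illustrates only the core idea of parallel colonies sharing a single pheromone matrix while assigning discrete rotamer indices under different (toy) energy functions; the sizes, energy terms and thread-based sharing are assumptions and not pacoPacker's actual code.

```python
import threading
import numpy as np

rng = np.random.default_rng(1)

N_RES, N_ROT = 12, 8                       # residues x rotamers (toy sizes)
pairwise = rng.normal(size=(N_RES, N_ROT, N_RES, N_ROT))   # toy pair energies
pairwise = (pairwise + pairwise.transpose(2, 3, 0, 1)) / 2  # symmetrise

def energy_a(assign):
    """Toy 'energy function A': sum of pairwise terms."""
    return sum(pairwise[i, assign[i], j, assign[j]]
               for i in range(N_RES) for j in range(i + 1, N_RES))

def energy_b(assign):
    """Toy 'energy function B': A plus a mild preference for low rotamer indices."""
    return energy_a(assign) + 0.1 * sum(int(r) for r in assign)

pheromone = np.ones((N_RES, N_ROT))        # single matrix shared by all colonies
lock = threading.Lock()
best = {"assign": None, "score": np.inf}

def colony(energy, n_ants=20, n_iter=40, rho=0.1):
    local_rng = np.random.default_rng()
    for _ in range(n_iter):
        for _ in range(n_ants):
            with lock:
                probs = pheromone / pheromone.sum(axis=1, keepdims=True)
            assign = [local_rng.choice(N_ROT, p=probs[i]) for i in range(N_RES)]
            e = energy(assign)
            deposit = 1.0 / (1.0 + np.exp(np.clip(e, -50.0, 50.0)))
            with lock:
                # Evaporate and deposit on the shared pheromone matrix.
                pheromone[:, :] *= (1 - rho)
                for i, r in enumerate(assign):
                    pheromone[i, r] += deposit
                if e < best["score"]:
                    best.update(assign=assign, score=e)

threads = [threading.Thread(target=colony, args=(f,)) for f in (energy_a, energy_b)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("lowest joint energy found:", best["score"])
```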

  18. Evaluation and optimisation of phenomenological multi-step soot model for spray combustion under diesel engine-like operating conditions

    NASA Astrophysics Data System (ADS)

    Pang, Kar Mun; Jangi, Mehdi; Bai, Xue-Song; Schramm, Jesper

    2015-05-01

    In this work, a two-dimensional computational fluid dynamics study is reported of an n-heptane combustion event and the associated soot formation process in a constant volume combustion chamber. The key interest here is to evaluate the sensitivity of the chemical kinetics and submodels of a semi-empirical soot model in predicting the associated events. Numerical computation is performed using an open-source code and a chemistry coordinate mapping approach is used to expedite the calculation. A library consisting of various phenomenological multi-step soot models is constructed and integrated with the spray combustion solver. Prior to the soot modelling, combustion simulations are carried out. Numerical results show that the ignition delay times and lift-off lengths exhibit good agreement with the experimental measurements across a wide range of operating conditions, apart from those in the cases with ambient temperature lower than 850 K. The variation of the soot precursor production with respect to the change of ambient oxygen levels qualitatively agrees with that of the conceptual models when the skeletal n-heptane mechanism is integrated with a reduced pyrene chemistry. Subsequently, a comprehensive sensitivity analysis is carried out to appraise the existing soot formation and oxidation submodels. It is revealed that the soot formation is captured when the surface growth rate is calculated using a square root function of the soot specific surface area and when a pressure-dependent model constant is considered. An optimised soot model is then proposed based on the knowledge gained through this exercise. With the optimised model implemented, the simulated soot onset and transport phenomena before reaching quasi-steady state agree reasonably well with the experimental observation. Also, the variation of the spatial soot distribution and of the soot mass produced at oxygen molar fractions ranging from 10.0 to 21.0%, for both low- and high-density conditions, is reproduced.

  19. Development and characterisation of chitosan films impregnated with insulin loaded PEG-b-PLA nanoparticles (NPs): a potential approach for buccal delivery of macromolecules.

    PubMed

    Giovino, Concetta; Ayensu, Isaac; Tetteh, John; Boateng, Joshua S

    2012-05-30

    Mucoadhesive chitosan-based films, incorporating insulin-loaded nanoparticles (NPs) made of poly(ethylene glycol)methyl ether-block-polylactide (PEG-b-PLA), have been developed and characterised. Blank NPs were prepared by a double emulsion solvent evaporation technique with varying concentrations of the copolymer (5 and 10%, w/v). The optimised formulation was loaded with insulin (model protein) at initial loadings of 2, 5 and 10% with respect to copolymer weight. The developed NPs were analysed for size, size distribution, surface charge, morphology, encapsulation efficiency and drug release. NPs showing a negative ζ-potential (< -6 mV), an average diameter > 300 nm and a polydispersity index (P.I.) of ≈ 0.2, irrespective of the formulation process, were achieved. Insulin encapsulation efficiencies of 70% and 30% for NPs-Insulin-2 and NPs-Insulin-5 were obtained, respectively. The in vitro release behaviour of both formulations showed a classic biphasic sustained release of protein over 5 weeks which was influenced by the pH of the release medium. Optimised chitosan films embedded with 3 mg of insulin-loaded NPs were produced by solvent casting with homogeneous distribution of NPs in the mucoadhesive matrix, and displayed excellent physico-mechanical properties. The drug delivery system has been designed as a novel platform for potential buccal delivery of macromolecules. Copyright © 2012 Elsevier B.V. All rights reserved.

  20. Optimisation of the imaging and dosimetric characteristics of an electronic portal imaging device employing plastic scintillating fibres using Monte Carlo simulations.

    PubMed

    Blake, S J; McNamara, A L; Vial, P; Holloway, L; Kuncic, Z

    2014-11-21

    A Monte Carlo model of a novel electronic portal imaging device (EPID) has been developed using Geant4 and its performance for imaging and dosimetry applications in radiotherapy has been characterised. The EPID geometry is based on a physical prototype under ongoing investigation and comprises an array of plastic scintillating fibres in place of the metal plate/phosphor screen in standard EPIDs. Geometrical and optical transport parameters were varied to investigate their impact on imaging and dosimetry performance. Detection efficiency was most sensitive to variations in fibre length, achieving a peak value of 36% at 50 mm using 400 keV x-rays for the lengths considered. Increases in efficiency for longer fibres were partially offset by reductions in sensitivity. Removing the extra-mural absorber surrounding individual fibres severely decreased the modulation transfer function (MTF), highlighting its importance in maximising spatial resolution. Field size response and relative dose profile simulations demonstrated a water-equivalent dose response and thus the prototype's suitability for dosimetry applications. Element-to-element mismatch between scintillating fibres and underlying photodiode pixels resulted in a reduced MTF for high spatial frequencies and quasi-periodic variations in dose profile response. This effect is eliminated when fibres are precisely matched to underlying pixels. Simulations strongly suggest that with further optimisation, this prototype EPID may be capable of simultaneous imaging and dosimetry in radiotherapy.

  1. Digital radiography: are the manufacturers' settings too high? Optimisation of the Kodak digital radiography system with aid of the computed radiography dose index.

    PubMed

    Peters, Sinead E; Brennan, Patrick C

    2002-09-01

    Manufacturers offer exposure indices as a safeguard against overexposure in computed radiography, but the basis for recommended values is unclear. This study establishes an optimum exposure index to be used as a guideline for a specific CR system to minimise radiation exposures for computed mobile chest radiography, and compares this with manufacturer guidelines and current practice. An anthropomorphic phantom was employed to establish the minimum tube current-time product (mAs) consistent with acceptable image quality for mobile chest radiography images. This was found to be 2 mAs. Ten consecutive patients were then exposed with this optimised mAs value and 10 patients were exposed with the 3.2 mAs routinely used in the department of the study. Image quality was objectively assessed using anatomical criteria. Retrospective analyses of 717 exposure indices recorded over 2 months from mobile chest examinations were performed. The optimised mAs value provided a significant reduction of the average exposure index from 1840 to 1570 (p < 0.0001). This new "optimum" exposure index is substantially lower than manufacturer guidelines of 2000 and significantly lower than exposure indices from the retrospective study (1890). Retrospective data showed a significant increase in exposure indices if the examination was performed out of hours. The data provided by this study emphasise the need for clinicians and personnel to consider establishing their own optimum exposure indices for digital investigations rather than simply accepting manufacturers' guidelines. Such an approach, along with regular monitoring of indices, may result in a substantial reduction in patient exposure.

  2. Ex vivo optimisation of a heterogeneous speed of sound model of the human skull for non-invasive transcranial focused ultrasound at 1 MHz.

    PubMed

    Marsac, L; Chauvet, D; La Greca, R; Boch, A-L; Chaumoitre, K; Tanter, M; Aubry, J-F

    2017-09-01

    Transcranial brain therapy has recently emerged as a non-invasive strategy for the treatment of various neurological diseases, such as essential tremor or neurogenic pain. However, treatments require millimetre-scale accuracy. The use of high frequencies (typically ≥1 MHz) decreases the ultrasonic wavelength to the millimetre scale, thereby increasing the clinical accuracy and lowering the probability of cavitation, which improves the safety of the technique compared with the use of low-frequency devices that operate at 220 kHz. Nevertheless, the skull produces greater distortions of high-frequency waves relative to low-frequency waves. High-frequency waves require high-performance adaptive focusing techniques, based on modelling the wave propagation through the skull. This study sought to optimise the acoustical modelling of the skull based on computed tomography (CT) for a 1 MHz clinical brain therapy system. The best model tested in this article corresponded to a maximum speed of sound of 4000 m s⁻¹ in the skull bone, and it restored 86% of the optimal pressure amplitude on average in a collection of six human skulls. Compared with uncorrected focusing, the optimised non-invasive correction led to an average increase of 99% in the maximum pressure amplitude around the target and an average decrease of 48% in the distance between the peak pressure and the selected target. The attenuation through the skulls was also assessed within the bandwidth of the transducers, and it was found to vary in the range of 10 ± 3 dB at 800 kHz and 16 ± 3 dB at 1.3 MHz.

  3. Shape Optimisation of Holes in Loaded Plates by Minimisation of Multiple Stress Peaks

    DTIC Science & Technology

    2015-04-01

    Witold Waldman and Manfred ...

    A method is presented for minimising the peak tangential stresses on multiple segments around the boundary of a hole in a uniaxially-loaded or biaxially-loaded plate.

  4. Genetic algorithm-based improved DOA estimation using fourth-order cumulants

    NASA Astrophysics Data System (ADS)

    Ahmed, Ammar; Tufail, Muhammad

    2017-05-01

    Genetic algorithm (GA)-based direction of arrival (DOA) estimation is proposed using fourth-order cumulants (FOC) and the ESPRIT principle, resulting in a Multiple Invariance Cumulant ESPRIT algorithm. In the existing FOC ESPRIT formulations, only one invariance is utilised to estimate DOAs. The unused multiple invariances (MIs) must be exploited simultaneously in order to improve the estimation accuracy. In this paper, a fitness function based on a carefully designed cumulant matrix is developed which incorporates the MIs present in the sensor array. Better DOA estimation can be achieved by minimising this fitness function. Moreover, the effectiveness of Newton's method as well as the GA for this optimisation problem is illustrated. Simulation results show that the proposed algorithm provides improved estimation accuracy compared to existing algorithms, especially in the case of low SNR, fewer snapshots, closely spaced sources and high signal and noise correlation. Moreover, it is observed that optimisation using Newton's method is more likely to converge to false local optima, yielding erroneous results, whereas GA-based optimisation is attractive owing to its global optimisation capability.
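    As a hedged sketch of the GA side of such an approach, the code below runs a simple real-coded genetic algorithm that minimises a stand-in fitness over candidate DOAs; in the paper the fitness would instead be built from the fourth-order cumulant matrix and the multiple-invariance ESPRIT relations, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

def toy_fitness(angles):
    """Stand-in fitness whose minimum is at DOAs of 20 and 65 degrees
    (any permutation); the real criterion from the paper is not reproduced."""
    targets = np.array([20.0, 65.0])
    return min(float(np.sum((angles - p) ** 2)) for p in (targets, targets[::-1]))

def ga_minimise(fitness, n_angles=2, pop=60, gens=120, bounds=(0.0, 90.0),
                cx_rate=0.9, mut_sigma=2.0):
    lo, hi = bounds
    population = rng.uniform(lo, hi, size=(pop, n_angles))
    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in population])
        parents = population[np.argsort(scores)[: pop // 2]]   # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            if rng.random() < cx_rate:                          # uniform crossover
                child = np.where(rng.random(n_angles) < 0.5, a, b)
            else:
                child = a.copy()
            child = child + rng.normal(0.0, mut_sigma, n_angles)  # Gaussian mutation
            children.append(np.clip(child, lo, hi))
        population = np.vstack([parents, children])
    scores = np.array([fitness(ind) for ind in population])
    return np.sort(population[np.argmin(scores)])

print("estimated DOAs (degrees):", ga_minimise(toy_fitness))
```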

  5. ICRP publication 121: radiological protection in paediatric diagnostic and interventional radiology.

    PubMed

    Khong, P-L; Ringertz, H; Donoghue, V; Frush, D; Rehani, M; Appelgate, K; Sanchez, R

    2013-04-01

    Paediatric patients have a higher average risk of developing cancer compared with adults receiving the same dose. The longer life expectancy in children allows more time for any harmful effects of radiation to manifest, and developing organs and tissues are more sensitive to the effects of radiation. This publication aims to provide guiding principles of radiological protection for referring clinicians and clinical staff performing diagnostic imaging and interventional procedures for paediatric patients. It begins with a brief description of the basic concepts of radiological protection, followed by the general aspects of radiological protection, including principles of justification and optimisation. Guidelines and suggestions for radiological protection in specific modalities - radiography and fluoroscopy, interventional radiology, and computed tomography - are subsequently covered in depth. The report concludes with a summary and recommendations. The importance of rigorous justification of radiological procedures is emphasised for every procedure involving ionising radiation, and the use of imaging modalities that are non-ionising should always be considered. The basic aim of optimisation of radiological protection is to adjust imaging parameters and institute protective measures such that the required image is obtained with the lowest possible dose of radiation, and that net benefit is maximised to maintain sufficient quality for diagnostic interpretation. Special consideration should be given to the availability of dose reduction measures when purchasing new imaging equipment for paediatric use. One of the unique aspects of paediatric imaging is with regards to the wide range in patient size (and weight), therefore requiring special attention to optimisation and modification of equipment, technique, and imaging parameters. Examples of good radiographic and fluoroscopic technique include attention to patient positioning, field size and adequate collimation, use of protective shielding, optimisation of exposure factors, use of pulsed fluoroscopy, limiting fluoroscopy time, etc. Major paediatric interventional procedures should be performed by experienced paediatric interventional operators, and a second, specific level of training in radiological protection is desirable (in some countries, this is mandatory). For computed tomography, dose reduction should be optimised by the adjustment of scan parameters (such as mA, kVp, and pitch) according to patient weight or age, region scanned, and study indication (e.g. images with greater noise should be accepted if they are of sufficient diagnostic quality). Other strategies include restricting multiphase examination protocols, avoiding overlapping of scan regions, and only scanning the area in question. Up-to-date dose reduction technology such as tube current modulation, organ-based dose modulation, auto kV technology, and iterative reconstruction should be utilised when appropriate. It is anticipated that this publication will assist institutions in encouraging the standardisation of procedures, and that it may help increase awareness and ultimately improve practices for the benefit of patients. Copyright © 2012. Published by Elsevier Ltd.

  6. Orbital optimisation in the perfect pairing hierarchy: applications to full-valence calculations on linear polyacenes

    NASA Astrophysics Data System (ADS)

    Lehtola, Susi; Parkhill, John; Head-Gordon, Martin

    2018-03-01

    We describe the implementation of orbital optimisation for the models in the perfect pairing hierarchy. Orbital optimisation, which is generally necessary to obtain reliable results, is pursued at perfect pairing (PP) and perfect quadruples (PQ) levels of theory for applications on linear polyacenes, which are believed to exhibit strong correlation in the π space. While local minima and σ-π symmetry breaking solutions were found for PP orbitals, no such problems were encountered for PQ orbitals. The PQ orbitals are used for single-point calculations at PP, PQ and perfect hextuples (PH) levels of theory, both only in the π subspace, as well as in the full σπ valence space. It is numerically demonstrated that the inclusion of single excitations is necessary also when optimised orbitals are used. PH is found to yield good agreement with previously published density matrix renormalisation group data in the π space, capturing over 95% of the correlation energy. Full-valence calculations made possible by our novel, efficient code reveal that strong correlations are weaker when larger basis sets or active spaces are employed than in previous calculations. The largest full-valence PH calculations presented correspond to a (192e,192o) problem.

  7. Development and validation of real-time simulation of X-ray imaging with respiratory motion.

    PubMed

    Vidal, Franck P; Villard, Pierre-Frédéric

    2016-04-01

    We present a framework that combines evolutionary optimisation, soft tissue modelling and ray tracing on the GPU to simultaneously compute the respiratory motion and X-ray imaging in real time. Our aim is to provide validated building blocks with high fidelity to closely match both the human physiology and the physics of X-rays. A CPU-based set of algorithms is presented to model organ behaviours during respiration. Soft tissue deformation is computed with an extension of the Chain Mail method. Rigid elements move according to kinematic laws. A GPU-based surface rendering method is proposed to compute the X-ray image using the Beer-Lambert law. It is provided as an open-source library. A quantitative validation study is provided to objectively assess the accuracy of both components: (i) the respiration against anatomical data, and (ii) the X-ray against the Beer-Lambert law and the results of Monte Carlo simulations. Our implementation can be used in various applications, such as interactive medical virtual environments for training percutaneous transhepatic cholangiography in interventional radiology, 2D/3D registration, computation of digitally reconstructed radiographs, and simulation of 4D sinograms to test tomography reconstruction tools. Copyright © 2015 Elsevier Ltd. All rights reserved.
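    The X-ray forming step mentioned above rests on the Beer-Lambert law; a minimal CPU-side sketch (no GPU ray tracing, just parallel rays along one axis of a toy voxel phantom) is shown below, with all phantom values and dimensions chosen purely for illustration.

```python
import numpy as np

# Toy voxelised phantom of linear attenuation coefficients (mu, in 1/cm).
phantom = np.zeros((64, 64, 64))
phantom[16:48, 16:48, 16:48] = 0.2          # soft-tissue-like block
phantom[28:36, 28:36, 20:44] = 0.5          # denser insert ("bone")

voxel_size_cm = 0.1
I0 = 1.0                                     # incident intensity per ray

# Beer-Lambert law along parallel rays cast down the z axis:
#   I = I0 * exp(-sum_i mu_i * dl)
path_integral = phantom.sum(axis=2) * voxel_size_cm
image = I0 * np.exp(-path_integral)

print("transmitted intensity range:", image.min(), image.max())
```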

  8. Midbond basis functions for weakly bound complexes

    NASA Astrophysics Data System (ADS)

    Shaw, Robert A.; Hill, J. Grant

    2018-06-01

    Weakly bound systems present a difficult problem for conventional atom-centred basis sets due to large separations, necessitating the use of large, computationally expensive bases. This can be remedied by placing a small number of functions in the region between molecules in the complex. We present compact sets of optimised midbond functions for a range of complexes involving noble gases, alkali metals and small molecules for use in high accuracy coupled-cluster calculations, along with a more robust procedure for their optimisation. It is shown that excellent results are possible with double-zeta quality orbital basis sets when a few midbond functions are added, improving both the interaction energy and the equilibrium bond lengths of a series of noble gas dimers by 47% and 8%, respectively. When used in conjunction with explicitly correlated methods, near complete basis set limit accuracy is readily achievable at a fraction of the cost that using a large basis would entail. General purpose auxiliary sets are developed to allow explicitly correlated midbond function studies to be carried out, making it feasible to perform very high accuracy calculations on weakly bound complexes.

  9. Development and Application of a Process-based River System Model at a Continental Scale

    NASA Astrophysics Data System (ADS)

    Kim, S. S. H.; Dutta, D.; Vaze, J.; Hughes, J. D.; Yang, A.; Teng, J.

    2014-12-01

    Existing global and continental scale river models, mainly designed for integration with global climate models, have very coarse spatial resolution and lack many important hydrological processes, such as overbank flow, irrigation diversion, and groundwater seepage/recharge, which operate at a much finer resolution. Thus, these models are not suitable for producing streamflow forecasts at fine spatial resolution and water accounts at sub-catchment levels, which are important for water resources planning and management at regional and national scales. A large-scale river system model has been developed and implemented for water accounting in Australia as part of the Water Information Research and Development Alliance between Australia's Bureau of Meteorology (BoM) and CSIRO. The model, developed using a node-link architecture, includes all major hydrological processes, anthropogenic water utilisation and storage routing that influence the streamflow in both regulated and unregulated river systems. It includes an irrigation model to compute water diversion for irrigation use and associated fluxes and stores, and a storage-based floodplain inundation model to compute overbank flow from river to floodplain and associated floodplain fluxes and stores. An auto-calibration tool has been built within the modelling system to automatically calibrate the model in large river systems using a Shuffled Complex Evolution optimiser and user-defined objective functions. The auto-calibration tool makes the model computationally efficient and practical for large basin applications. The model has been implemented in several large basins in Australia including the Murray-Darling Basin, covering more than 2 million km². The results of calibration and validation of the model show highly satisfactory performance. The model has been operationalised in the BoM for producing various fluxes and stores for national water accounting. This paper introduces this newly developed river system model, describing the conceptual hydrological framework, the methods used for representing different hydrological processes in the model, and the results and evaluation of the model performance. The operational implementation of the model for water accounting is discussed.

  10. A Dynamic Finite Element Method for Simulating the Physics of Faults Systems

    NASA Astrophysics Data System (ADS)

    Saez, E.; Mora, P.; Gross, L.; Weatherley, D.

    2004-12-01

    We introduce a dynamic Finite Element method using a novel high-level scripting language to describe the physical equations, boundary conditions and time integration scheme. The library we use is the parallel Finley library: a finite element kernel library designed for solving large-scale problems. It is incorporated as a differential equation solver into a more general library called escript, based on the scripting language Python. This library has been developed to facilitate the rapid development of 3D parallel codes, and is optimised for the Australian Computational Earth Systems Simulator Major National Research Facility (ACcESS MNRF) supercomputer, a 208-processor SGI Altix with a peak performance of 1.1 TFlops. Using the scripting approach we obtain a parallel FE code able to take advantage of the computational efficiency of the Altix 3700. We consider faults as material discontinuities (the displacement, velocity, and acceleration fields are discontinuous at the fault), with elastic behaviour. Stress continuity at the fault is achieved naturally through the expression of the fault interactions in the weak formulation. The elasticity problem is solved explicitly in time, using the Saint Verlat scheme. Finally, we specify a suitable frictional constitutive relation and numerical scheme to simulate fault behaviour. Our model is based on previous work on modelling fault friction and multi-fault systems using lattice solid-like models. We adapt the 2D model for simulating the dynamics of parallel fault systems described in that work to the finite element method. The approach uses a frictional relation along faults that is slip and slip-rate dependent, and the numerical integration approach introduced by Mora and Place in the lattice solid model. To illustrate the new finite element model, single- and multi-fault simulation examples are presented.

  11. PARALLELISATION OF THE MODEL-BASED ITERATIVE RECONSTRUCTION ALGORITHM DIRA.

    PubMed

    Örtenberg, A; Magnusson, M; Sandborg, M; Alm Carlsson, G; Malusek, A

    2016-06-01

    New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphical processing units (GPU). Despite their obvious benefits, the parallelisation of existing computer programs is not an easy task. In this work, the use of the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks is considered for the parallelisation of the model-based iterative reconstruction algorithm DIRA with the aim to significantly shorten the code's execution time. Selected routines were parallelised using OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelisation of the code with the OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelisation with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures. The resulting speedup was substantially lower than the theoretical peak performance of the GPU; the cause was explained. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
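    The original work targets OpenMP and OpenCL; purely as a language-neutral illustration of the same loop-level parallelism (independent per-slice work spread over cores), here is a small Python multiprocessing sketch with a placeholder per-slice routine that does not correspond to DIRA's actual algorithms.

```python
import multiprocessing as mp
import numpy as np

def process_slice(slice_2d):
    """Placeholder for a per-slice reconstruction/filtering step
    (stands in for the work an OpenMP parallel-for would spread over threads)."""
    return np.sqrt(np.abs(slice_2d)) + slice_2d.mean()

def reconstruct_volume(volume, workers=4):
    # Each z-slice is independent, so the loop parallelises trivially,
    # mirroring an OpenMP '#pragma omp parallel for' over slices.
    with mp.Pool(processes=workers) as pool:
        slices = pool.map(process_slice, list(volume))
    return np.stack(slices)

if __name__ == "__main__":
    vol = np.random.default_rng(0).normal(size=(32, 128, 128))
    out = reconstruct_volume(vol)
    print(out.shape)
```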

  12. Optimisation of intradermal DNA electrotransfer for immunisation.

    PubMed

    Vandermeulen, Gaëlle; Staes, Edith; Vanderhaeghen, Marie Lise; Bureau, Michel Francis; Scherman, Daniel; Préat, Véronique

    2007-12-04

    The development of DNA vaccines requires appropriate delivery technologies. Electrotransfer is one of the most efficient methods of non-viral gene transfer. In the present study, intradermal DNA electrotransfer was first optimised. Strong effects of the injection method and the dose of DNA on luciferase expression were demonstrated. Pre-treatments were evaluated to enhance DNA diffusion in the skin, but neither hyaluronidase injection nor iontophoresis improved the efficiency of intradermal DNA electrotransfer. Then, DNA immunisation with a weakly immunogenic model antigen, luciferase, was investigated. After intradermal injection of the plasmid encoding luciferase, electrotransfer (HV: 700 V/cm, 100 µs; LV: 200 V/cm, 400 ms) was required to induce an immune response. The response was Th1-shifted compared to immunisation with the luciferase recombinant protein. Finally, DNA electrotransfer in the skin, the muscle or the ear pinna was compared. Muscle DNA electrotransfer resulted in the highest luciferase expression and the best IgG response. Nevertheless, electrotransfer into the skin, the muscle and the ear pinna all resulted in IFN-gamma secretion by luciferase-stimulated splenocytes, suggesting that an efficient Th1 response was induced in all cases.

  13. Design and optimisation of novel configurations of stormwater constructed wetlands

    NASA Astrophysics Data System (ADS)

    Kiiza, Christopher

    2017-04-01

    Constructed wetlands (CWs) are recognised as a cost-effective technology for wastewater treatment. CWs have been deployed, and could be retrofitted into existing urban drainage systems, to prevent surface water pollution, attenuate floods and act as sources of reusable water. However, there exist numerous criteria for the design configuration and operation of CWs. The aim of the study was to examine the effects of design and operational variables on the performance of CWs. To achieve this, 8 novel designs of vertical flow CWs were continuously operated and monitored (weekly) for 2 years. Pollutant removal efficiency in each CW unit was evaluated from physico-chemical analyses of influent and effluent water samples. Hybrid optimised multi-layer perceptron artificial neural networks (MLP ANNs) were applied to simulate treatment efficiency in the CWs. Subsequently, predictive and analytical models were developed for each design unit. Results show the models have sound generalisation abilities, with various design configurations and operational variables influencing CW performance. Although some design configurations attained faster and higher removal efficiencies than others, all 8 CW designs produced effluents permissible for discharge into watercourses with strict regulatory standards.
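    A minimal sketch of the modelling step, assuming scikit-learn's MLPRegressor on synthetic data mapping design and operational variables to a removal efficiency. The feature names, data and network size are illustrative stand-ins, and the study's hybrid optimisation of the network is not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)

# Synthetic stand-ins: [media depth (m), hydraulic loading (m/d), influent COD (mg/L)]
X = np.column_stack([
    rng.uniform(0.4, 1.2, 400),
    rng.uniform(0.05, 0.5, 400),
    rng.uniform(100, 600, 400),
])
# Synthetic "removal efficiency (%)" with noise, just to exercise the model.
y = 95 - 40 * X[:, 1] + 10 * X[:, 0] - 0.02 * X[:, 2] + rng.normal(0, 2, 400)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=3000, random_state=0),
)
model.fit(X_train, y_train)
print("held-out R^2:", round(model.score(X_test, y_test), 3))
```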

  14. Efficient photoassociation of ultracold cesium atoms with picosecond pulse laser

    NASA Astrophysics Data System (ADS)

    Hai, Yang; Hu, Xue-Jin; Li, Jing-Lun; Cong, Shu-Lin

    2017-08-01

    We investigate theoretically the formation of ultracold Cs2 molecules via photoassociation (PA) with three kinds of pulses (the Gaussian pulse, the asymmetric shaped laser pulse SL1 with a large rising time and a small falling time and the asymmetric shaped laser pulse SL2 with a small rising time and a large falling time). For the three kinds of pulses, the final population on vibrational levels from v′ = 120 to 175 of the excited state displays a regular oscillation change with pulse width and interaction strength, and a high PA efficiency can be achieved with optimised parameters. The PA efficiency in the excited state steered by the SL1-pulse (SL2-pulse) train with optimised parameters which is composed of four SL1 (SL2) pulses is 1.74 times as much as that by the single SL1 (SL2) pulse due to the population accumulation effect. Moreover, a dump laser is employed to transfer the excited molecules from the excited state to the vibrational level v″ = 12 of the ground state to obtain stable molecules.

  15. Global nuclear industry views: challenges arising from the evolution of the optimisation principle in radiological protection.

    PubMed

    Saint-Pierre, S

    2012-01-01

    Over the last few decades, the steady progress achieved in reducing planned exposures of both workers and the public has been admirable in the nuclear sector. However, the disproportionate focus on tiny public exposures and radioactive discharges associated with normal operations came at a high price, and the quasi-denial of a risk of major accident and related weaknesses in emergency preparedness and response came at an even higher price. Fukushima has unfortunately taught us that radiological protection (RP) for emergency and post-emergency situations can be much more than a simple evacuation that lasts 24-48 h, with people returning safely to their homes soon afterwards. On optimisation of emergency and post-emergency exposures, the only 'show in town' in terms of international RP policy improvements has been the issuance of the 2007 Recommendations of the International Commission on Radiological Protection (ICRP). However, no matter how genuine these improvements are, they have not been 'road tested' on the practical reality of severe accidents. Post-Fukushima, there is a compelling case to review the practical adequacy of key RP notions such as optimisation, evacuation, sheltering, and reference levels for workers and the public, and to amend these notions with a view to making the international RP system more useful in the event of a severe accident. On optimisation of planned exposures, the reality is that, nowadays, margins for further reductions of public doses in the nuclear sector are very small, and the smaller the dose, the greater the extra effort needed to reduce the dose further. If sufficient caution is not exercised in the use of RP notions such as dose constraints, there is a real risk of challenging nuclear power technologies beyond safety reasons. For nuclear new build, it is the optimisation of key operational parameters of nuclear power technologies (not RP) that is of paramount importance to improve their overall efficiency. In pursuing further improvements in the international RP system, it should be clearly borne in mind that the system is generally based on protection against the risk of cancer and hereditary diseases. The system also protects against deterministic non-cancer effects on tissues and organs. In seeking refinements of such protective notions, ICRP is invited to pay increased attention to the fact that a continued balance must be struck between beneficial activities that cause exposures and protection. The global nuclear industry is committed to help overcome these key RP issues as part of the RP community's upcoming international deliberations towards a more efficient international RP system. Copyright © 2012. Published by Elsevier Ltd.

  16. Pulsed source of spectrally uncorrelated and indistinguishable photons at telecom wavelengths.

    PubMed

    Bruno, N; Martin, A; Guerreiro, T; Sanguinetti, B; Thew, R T

    2014-07-14

    We report on the generation of indistinguishable photon pairs at telecom wavelengths based on a type-II parametric down conversion process in a periodically poled potassium titanyl phosphate (PPKTP) crystal. The phase matching, pump laser characteristics and coupling geometry are optimised to obtain spectrally uncorrelated photons with high coupling efficiencies. Four photons are generated by a counter-propagating pump in the same crystal and analysed via two-photon interference experiments between photons from each pair source as well as joint spectral and g(2) measurements. We obtain a spectral purity of 0.91 and coupling efficiencies around 90% for all four photons without any filtering. These pure indistinguishable photon sources at telecom wavelengths are perfectly adapted for quantum network demonstrations and other multi-photon protocols.

  17. Analysis and design of high-power and efficient, millimeter-wave power amplifier systems using zero degree combiners

    NASA Astrophysics Data System (ADS)

    Tai, Wei; Abbasi, Mortez; Ricketts, David S.

    2018-01-01

    We present the analysis and design of high-power millimetre-wave power amplifier (PA) systems using zero-degree combiners (ZDCs). The methodology presented optimises the PA device sizing and the number of combined unit PAs based on device load pull simulations, driver power consumption analysis and loss analysis of the ZDC. Our analysis shows that an optimal number of N-way combined unit PAs leads to the highest power-added efficiency (PAE) for a given output power. To illustrate our design methodology, we designed a 1-W PA system at 45 GHz using a 45 nm silicon-on-insulator process and showed that an 8-way combined PA has the highest PAE that yields simulated output power of 30.6 dBm and 31% peak PAE.

  18. Molecular imprinting solid phase extraction for selective detection of methidathion in olive oil.

    PubMed

    Bakas, Idriss; Oujji, Najwa Ben; Moczko, Ewa; Istamboulie, Georges; Piletsky, Sergey; Piletska, Elena; Ait-Ichou, Ihya; Ait-Addi, Elhabib; Noguer, Thierry; Rouillon, Régis

    2012-07-13

    A specific adsorbent for the extraction of methidathion from olive oil was developed. The design of the molecularly imprinted polymer (MIP) was based on the results of computational screening of a library of polymerisable functional monomers. The MIP was prepared by thermal polymerisation using N,N'-methylene bisacrylamide (MBAA) as a functional monomer and ethylene glycol dimethacrylate (EGDMA) as a cross-linker. Polymers based on the itaconic acid (IA), methacrylic acid (MAA) and 2-(trifluoromethyl)acrylic acid (TFMAA) functional monomers, and one control polymer made without functional monomers with the EGDMA cross-linker, were also synthesised and tested. The performance of each polymer was compared using the corresponding imprinting factor. As predicted by molecular modelling, the best results were obtained for the MIP prepared with MBAA. The obtained MIP was optimised in solid-phase extraction coupled with high performance liquid chromatography (MISPE-HPLC-UV) and tested for the rapid screening of methidathion in olive oil. The proposed method allowed the efficient extraction of methidathion for concentrations ranging from 0.1 to 9 mg L⁻¹ (r² = 0.996). The limits of detection (LOD) and quantification (LOQ) in olive oil were 0.02 mg L⁻¹ and 0.1 mg L⁻¹, respectively. MIP extraction was much more effective than traditional C18 reverse-phase solid phase extraction. Copyright © 2012 Elsevier B.V. All rights reserved.

  19. Computer-Based Learning: Interleaving Whole and Sectional Representation of Neuroanatomy

    ERIC Educational Resources Information Center

    Pani, John R.; Chariker, Julia H.; Naaz, Farah

    2013-01-01

    The large volume of material to be learned in biomedical disciplines requires optimizing the efficiency of instruction. In prior work with computer-based instruction of neuroanatomy, it was relatively efficient for learners to master whole anatomy and then transfer to learning sectional anatomy. It may, however, be more efficient to continuously…

  20. Hollow fibre-based liquid phase microextraction combined with high-performance liquid chromatography for the analysis of flavonoids in Echinophora platyloba DC. and Mentha piperita.

    PubMed

    Hadjmohammadi, Mohammadreza; Karimiyan, Hanieh; Sharifi, Vahid

    2013-11-15

    A simple, inexpensive and efficient three phase hollow fibre liquid phase microextraction (HF-LPME) technique combined with HPLC was used for the simultaneous determination of flavonoids in Echinophora platyloba DC. and Mentha piperita. Different factors affecting the HF-LPME procedure were investigated and optimised. The optimised extraction conditions were as follows: 1-octanol as the organic solvent, donor phase pH = 2, acceptor phase pH = 9.75, stirring rate of 1000 rpm, extraction time of 80 min, and no addition of salt. Under these conditions, the enrichment factors ranged between 146 and 311. The values of intra- and inter-day relative standard deviations (RSD) were in the range of 3.18-6.00% and 7.25-11.00%, respectively. The limits of detection (LODs) ranged between 0.5 and 7.0 ng mL⁻¹. Among the investigated flavonoids, quercetin was found in E. platyloba DC. and luteolin in M. piperita, at concentrations of 0.015 and 0.025 mg g⁻¹, respectively. Copyright © 2013 Elsevier Ltd. All rights reserved.

  1. Statistical methods for convergence detection of multi-objective evolutionary algorithms.

    PubMed

    Trautmann, H; Wagner, T; Naujoks, B; Preuss, M; Mehnen, J

    2009-01-01

    In this paper, two approaches for estimating the generation in which a multi-objective evolutionary algorithm (MOEA) shows statistically significant signs of convergence are introduced. A set-based perspective is taken where convergence is measured by performance indicators. The proposed techniques fulfill the requirements of proper statistical assessment on the one hand and efficient optimisation for real-world problems on the other hand. The first approach accounts for the stochastic nature of the MOEA by repeating the optimisation runs for increasing generation numbers and analysing the performance indicators using statistical tools. This technique results in a very robust offline procedure. Moreover, an online convergence detection method is introduced as well. This method automatically stops the MOEA when either the variance of the performance indicators falls below a specified threshold or a stagnation of their overall trend is detected. Both methods are analysed and compared for two MOEAs and on different classes of benchmark functions. It is shown that the methods operate successfully on all stated problems, requiring fewer function evaluations while preserving good approximation quality.
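    A minimal sketch of the online criterion described above, assuming a per-generation performance indicator (e.g. hypervolume): stop when the indicator's variance over a sliding window falls below a threshold, or when its recent trend has flattened. The window length and thresholds are placeholders.

```python
import numpy as np

def should_stop(indicator_history, window=10, var_threshold=1e-6, trend_threshold=1e-5):
    """Online stopping rule on a per-generation performance indicator
    (e.g. hypervolume): stop when the recent variance is tiny or the
    recent linear trend has flattened out."""
    if len(indicator_history) < window:
        return False
    recent = np.asarray(indicator_history[-window:])
    slope = np.polyfit(np.arange(window), recent, 1)[0]
    return recent.var() < var_threshold or abs(slope) < trend_threshold

# Toy usage: an indicator that improves quickly and then saturates.
history = []
for gen in range(200):
    history.append(1.0 - np.exp(-gen / 15.0) + np.random.default_rng(gen).normal(0, 1e-4))
    if should_stop(history):
        print("stopping at generation", gen)
        break
```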

  2. PyEvolve: a toolkit for statistical modelling of molecular evolution.

    PubMed

    Butterfield, Andrew; Vedagiri, Vivek; Lang, Edward; Lawrence, Cath; Wakefield, Matthew J; Isaev, Alexander; Huttley, Gavin A

    2004-01-05

    Examining the distribution of variation has proven an extremely profitable technique in the effort to identify sequences of biological significance. Most approaches in the field, however, evaluate only the conserved portions of sequences, ignoring the biological significance of sequence differences. A suite of sophisticated likelihood-based statistical models from the field of molecular evolution provides the basis for extracting the information from the full distribution of sequence variation. The number of different problems to which phylogeny-based maximum likelihood calculations can be applied is extensive. Available software packages that can perform likelihood calculations suffer from a lack of flexibility and scalability, or employ error-prone approaches to model parameterisation. Here we describe the implementation of PyEvolve, a toolkit for the application of existing, and development of new, statistical methods for molecular evolution. We present the object architecture and design schema of PyEvolve, which includes an adaptable multi-level parallelisation schema. The approach for defining new methods is illustrated by implementing a novel dinucleotide model of substitution that includes a parameter for mutation of methylated CpGs, which required 8 lines of standard Python code to define. Benchmarking was performed using either a dinucleotide or codon substitution model applied to an alignment of BRCA1 sequences from 20 mammals, or a 10 species subset. Up to five-fold parallel performance gains over serial were recorded. Compared to leading alternative software, PyEvolve exhibited significantly better real-world performance for parameter-rich models with a large data set, reducing the time required for optimisation from approximately 10 days to approximately 6 hours. PyEvolve provides flexible functionality that can be used either for statistical modelling of molecular evolution, or the development of new methods in the field. The toolkit can be used interactively or by writing and executing scripts. The toolkit uses efficient processes for specifying the parameterisation of statistical models, and implements numerous optimisations that make highly parameter-rich likelihood functions solvable within hours on multi-CPU hardware. PyEvolve can be readily adapted in response to changing computational demands and hardware configurations to maximise performance. PyEvolve is released under the GPL and can be downloaded from http://cbis.anu.edu.au/software.

  3. A novel global Harmony Search method based on Ant Colony Optimisation algorithm

    NASA Astrophysics Data System (ADS)

    Fouad, Allouani; Boukhetala, Djamel; Boudjema, Fares; Zenger, Kai; Gao, Xiao-Zhi

    2016-03-01

    The Global-best Harmony Search (GHS) is a recently developed stochastic optimisation algorithm that hybridises the Harmony Search (HS) method with the swarm intelligence concept of particle swarm optimisation (PSO) to enhance its performance. In this article, a new optimisation algorithm called GHSACO is developed by incorporating the GHS with the Ant Colony Optimisation algorithm (ACO). Our method introduces a novel improvisation process, which differs from that of the GHS in the following aspects: (i) a modified harmony memory (HM) representation and conception; (ii) the use of a global random switching mechanism to govern the choice between the ACO and GHS rules; and (iii) an additional memory consideration selection rule using the ACO random proportional transition rule with a pheromone trail update mechanism. The proposed GHSACO algorithm has been applied to various benchmark functions and constrained optimisation problems. Simulation results demonstrate that it can find significantly better solutions when compared with the original HS and some of its variants.
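    The sketch below illustrates, under stated assumptions, an improvisation step with a random switch between a GHS-style memory consideration (copy from the best harmony) and an ACO-style pheromone-weighted choice over discretised values; the discretisation, update rules and parameter values are illustrative and not the paper's exact operators.

```python
import numpy as np

rng = np.random.default_rng(4)

def sphere(x):
    return float(np.sum(x ** 2))

DIM, HMS, HMCR, PAR = 5, 20, 0.9, 0.3
SWITCH_P = 0.5                    # probability of using the ACO-style rule
LO, HI, N_BINS = -5.0, 5.0, 50
bins = np.linspace(LO, HI, N_BINS)

harmony_memory = rng.uniform(LO, HI, size=(HMS, DIM))
fitness = np.array([sphere(h) for h in harmony_memory])
pheromone = np.ones((DIM, N_BINS))

for _ in range(2000):
    new = np.empty(DIM)
    for d in range(DIM):
        if rng.random() < HMCR:
            if rng.random() < SWITCH_P:
                # ACO-style memory consideration: pheromone-weighted bin choice.
                p = pheromone[d] / pheromone[d].sum()
                new[d] = bins[rng.choice(N_BINS, p=p)]
            else:
                # GHS-style memory consideration: copy from the best harmony.
                new[d] = harmony_memory[np.argmin(fitness), d]
            if rng.random() < PAR:                      # pitch adjustment
                new[d] += rng.normal(0.0, 0.1)
        else:
            new[d] = rng.uniform(LO, HI)                # random consideration
    new = np.clip(new, LO, HI)
    f_new = sphere(new)
    worst = np.argmax(fitness)
    if f_new < fitness[worst]:
        harmony_memory[worst], fitness[worst] = new, f_new
        # Pheromone update: evaporate, then reinforce the bins of the new harmony.
        pheromone *= 0.99
        for d in range(DIM):
            pheromone[d, np.argmin(np.abs(bins - new[d]))] += 1.0

print("best value found:", fitness.min())
```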

  4. Optimisation of cavity parameters for lasers based on AlGaInAsP/InP solid solutions (λ = 1470 nm)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veselov, D A; Ayusheva, K R; Shashkin, I S

    2015-10-31

    We have studied the effect of laser cavity parameters on the light–current characteristics of lasers based on the AlGaInAs/GaInAsP/InP solid solution system that emit in the spectral range 1400 – 1600 nm. It has been shown that optimisation of cavity parameters (chip length and front facet reflectivity) allows one to improve heat removal from the laser, without changing other laser characteristics. An increase in the maximum output optical power of the laser by 0.5 W has been demonstrated due to cavity design optimisation. (lasers)

  5. Developing an Efficient Computational Method that Estimates the Ability of Students in a Web-Based Learning Environment

    ERIC Educational Resources Information Center

    Lee, Young-Jin

    2012-01-01

    This paper presents a computational method that can efficiently estimate the ability of students from the log files of a Web-based learning environment capturing their problem solving processes. The computational method developed in this study approximates the posterior distribution of the student's ability obtained from the conventional Bayes…

  6. A CONCEPTUAL FRAMEWORK FOR MANAGING RADIATION DOSE TO PATIENTS IN DIAGNOSTIC RADIOLOGY USING REFERENCE DOSE LEVELS.

    PubMed

    Almén, Anja; Båth, Magnus

    2016-06-01

    The overall aim of the present work was to develop a conceptual framework for managing radiation dose in diagnostic radiology with the intention of supporting optimisation. An optimisation process was first derived. The framework for managing radiation dose, based on the derived optimisation process, was then outlined. The optimisation process starts from four stages: providing equipment, establishing methodology, performing examinations and ensuring quality. The optimisation process comprises a series of activities and actions at these stages. The current system of diagnostic reference levels is an activity in the last stage, ensuring quality. The system thus becomes a reactive activity that only to a limited extent engages the core activity of the radiology department, performing examinations. Three reference dose levels (possible, expected and established) were assigned to the three stages in the optimisation process, excluding ensuring quality. A reasonably achievable dose range is also derived, indicating an acceptable deviation from the established dose level. A reasonable radiation dose for a single patient is within this range. The suggested framework for managing radiation dose should be regarded as one part of the optimisation process. The optimisation process constitutes a variety of complementary activities, where managing radiation dose is only one part. This emphasises the need to take a holistic approach integrating the optimisation process in different clinical activities. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  7. Synthesis and characterisation of PEG modified chitosan nanocapsules loaded with thymoquinone.

    PubMed

    Vignesh Kumar, Suresh Kumar; Renuka Devi, Ponnuswamy; Harish, Saru; Hemananthan, Eswaran

    2017-02-01

    Thymoquinone (TQ), a major bioactive compound of Nigella sativa seeds has several therapeutic properties. The main drawback in bringing TQ to therapeutic application is that it has poor stability and bioavailability. Hence a suitable carrier is essential for TQ delivery. Recent studies indicate biodegradable polymers are potentially good carriers of bioactive compounds. In this study, polyethylene glycol (PEG) modified chitosan (Cs) nanocapsules were developed as a carrier for TQ. Aqueous soluble low molecular weight Cs and PEG was selected among different biodegradable polymers based on their biocompatibility and efficacy as a carrier. Optimisation of synthesis of nanocapsules was done based on particle size, PDI, encapsulation efficiency and process yield. A positive zeta potential value of +48 mV, indicating good stability was observed. Scanning electron microscope and atomic-force microscopy analysis revealed spherical shaped and smooth surfaced nanocapsules with size between 100 to 300 nm. The molecular dispersion of the TQ in Cs PEG nanocapsules was studied using X-ray powder diffraction. The Fourier transform infrared spectrum of optimised nanocapsule exhibited functional groups of both polymer and drug, confirming the presence of Cs, PEG and TQ. In vitro drug release studies showed that PEG modified Cs nanocapsules loaded with TQ had a slow and sustained release.

  8. Better powder diffractometers. II—Optimal choice of U, V and W

    NASA Astrophysics Data System (ADS)

    Cussen, L. D.

    2007-12-01

    This article presents a technique for optimising constant wavelength (CW) neutron powder diffractometers (NPDs) using conventional nonlinear least squares methods. This is believed to be the first such design optimisation for a neutron spectrometer. The validity of this approach and discussion should extend beyond the Gaussian element approximation used and also to instruments using different radiation, such as X-rays. This approach could later be extended to include vertical and perhaps horizontal focusing monochromators and probably other types of instruments such as three axis spectrometers. It is hoped that this approach will help in comparisons of CW and time-of-flight (TOF) instruments. Recent work showed that many different beam element combinations can give identical resolution on CW NPDs and presented a procedure to find these combinations and also find an "optimum" choice of detector collimation. Those results enable the previous redundancy in the description of instrument performance to be removed and permit a least squares optimisation of design. New inputs are needed and are identified as the sample plane spacing (dS) of interest in the measurement. The optimisation requires a "quality factor", QPD, chosen here as minimising the worst Bragg-peak separation ability over some measurement range of dS while maintaining intensity. Any other QPD desired could be substituted. It is argued that high resolution and high intensity powder diffractometers (HRPDs and HIPDs) should have similar designs adjusted by a single scaling factor. Simulated comparisons are described suggesting significant improvements in performance for CW HIPDs. Optimisation with unchanged wavelength suggests improvements by factors of about 2 for HRPDs and 25 for HIPDs. A recently quantified design trade-off between the maximum line intensity possible and the degree of variation of angular resolution over the scattering angle range leads to efficiency gains at short wavelengths. This in turn leads in practice to another trade-off between this efficiency gain and losses at short wavelength due to technical effects. The exact gains from varying wavelength depend on the details of the short wavelength technical losses. Simulations suggest that the total potential PD performance gains may be very significant: factors of about 3 for HRPDs and more than 90 for HIPDs.

  9. Improved visualisation of early cerebral infarctions after endovascular stroke therapy using dual-energy computed tomography oedema maps.

    PubMed

    Grams, Astrid Ellen; Djurdjevic, Tanja; Rehwald, Rafael; Schiestl, Thomas; Dazinger, Florian; Steiger, Ruth; Knoflach, Michael; Gizewski, Elke Ruth; Glodny, Bernhard

    2018-05-04

    The aim was to investigate whether dual-energy computed tomography (DECT) reconstructions optimised for oedema visualisation (oedema map; EM) facilitate an improved detection of early infarctions after endovascular stroke therapy (EST). Forty-six patients (21 women; 25 men; mean age: 63 years; range 24-89 years) were included. The brain window (BW), virtual non-contrast (VNC) and modified VNC series based on a three-material decomposition technique optimised for oedema visualisation (EM) were evaluated. Follow-up imaging was used as the standard for comparison. Contralateral side to infarction differences in density (CIDs) were determined. Infarction detectability was assessed by two blinded readers, as well as image noise and contrast using Likert scales. ROC analyses were performed and the respective Youden indices calculated for cut-off analysis. The highest CIDs were found in the EM series (73.3 ± 49.3 HU), compared with the BW (-1.72 ± 13.29 HU) and the VNC (8.30 ± 4.74 HU) series. The EM was found to have the highest infarction detection rates (area under the curve: 0.97 vs. 0.54 and 0.90, p < 0.01) with a cut-off value of < 50.7 HU, despite slightly more pronounced image noise. The location of the infarction did not affect detectability (p > 0.05 each). The EM series allows higher contrast and better early infarction detection than the VNC or BW series after EST. • Dual-energy CT EM allows better early infarction detection than standard brain window. • Dual-energy CT EM series allow better early infarction detection than VNC series. • Dual-energy CT EM are modified VNC based on water content of tissue.
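
    The cut-off reported above rests on Youden's J statistic (sensitivity + specificity - 1) maximised over candidate thresholds. The sketch below, using entirely hypothetical CID values, shows that calculation; it illustrates the statistic only and does not reproduce the study's ROC analysis.

        import numpy as np

        # hypothetical contralateral-side density differences (CIDs, in HU)
        infarct    = np.array([62.0, 81.5, 45.2, 97.0, 70.4, 55.8])   # regions with infarction
        no_infarct = np.array([4.1, -2.5, 12.0, 8.7, 1.3, 6.6])       # regions without

        values = np.concatenate([infarct, no_infarct])
        labels = np.concatenate([np.ones(len(infarct)), np.zeros(len(no_infarct))])

        best_j, best_cut = -1.0, None
        for cut in np.unique(values):
            pred = values >= cut                     # call "infarct" when the CID reaches the cut-off
            sens = pred[labels == 1].mean()          # sensitivity
            spec = (~pred[labels == 0]).mean()       # specificity
            j = sens + spec - 1.0                    # Youden's J
            if j > best_j:
                best_j, best_cut = j, cut
        print(f"optimal cut-off ~ {best_cut:.1f} HU (J = {best_j:.2f})")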

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kieselmann, J; Bartzsch, S; Oelfke, U

    Purpose: Microbeam Radiation Therapy is a preclinical method in radiation oncology that modulates radiation fields on a micrometre scale. Dose calculation is challenging due to arising dose gradients and therapeutically important dose ranges. Monte Carlo (MC) simulations, often used as gold standard, are computationally expensive and hence too slow for the optimisation of treatment parameters in future clinical applications. On the other hand, conventional kernel-based dose calculation leads to inaccurate results close to material interfaces. The purpose of this work is to overcome these inaccuracies while keeping computation times low. Methods: A point kernel superposition algorithm is modified to account for tissue inhomogeneities. Instead of conventional ray tracing approaches, methods from differential geometry are applied and the space around the primary photon interaction is locally warped. The performance of this approach is compared to MC simulations and a simple convolution algorithm (CA) for two different phantoms and photon spectra. Results: While peak doses of all dose calculation methods agreed within less than 4% deviations, the proposed approach surpassed a simple convolution algorithm in accuracy by a factor of up to 3 in the scatter dose. In a treatment geometry similar to possible future clinical situations, differences between Monte Carlo and the differential geometry algorithm were less than 3%. At the same time the calculation time did not exceed 15 minutes. Conclusion: With the developed method it was possible to improve the dose calculation based on the CA method with respect to accuracy, especially at sharp tissue boundaries. While the calculation is more extensive than for the CA method and depends on field size, the typical calculation time for a 20×20 mm² field on a 3.4 GHz processor with 8 GB of RAM remained below 15 minutes. Parallelisation and optimisation of the algorithm could lead to further significant calculation time reductions.

  11. Moss and peat hydraulic properties are optimized to maximise peatland water use efficiency

    NASA Astrophysics Data System (ADS)

    Kettridge, Nicholas; Tilak, Amey; Devito, Kevin; Petrone, Rich; Mendoza, Carl; Waddington, Mike

    2016-04-01

    Peatland ecosystems are globally important carbon and terrestrial surface water stores that have formed over millennia. These ecosystems have likely optimised their ecohydrological function over the long-term development of their soil hydraulic properties. Through a theoretical ecosystem approach, applying hydrological modelling integrated with known ecological thresholds and concepts, the optimisation of peat hydraulic properties is examined to determine which of the following conditions peatland ecosystems target during this development: i) maximise carbon accumulation, ii) maximise water storage, or iii) balance carbon profit across hydrological disturbances. Saturated hydraulic conductivity (Ks) and the empirical van Genuchten water retention parameter α are shown to provide a first order control on simulated water tensions. Across parameter space, peat profiles with hypothetical combinations of Ks and α show a strong binary tendency towards targeting either water or carbon storage. Actual hydraulic properties from five northern peatlands fall at the interface between these goals, balancing the competing demands of carbon accumulation and water storage. We argue that peat hydraulic properties are thus optimised to maximise water use efficiency and that this optimisation occurs over a centennial to millennial timescale as the peatland develops. This provides a new conceptual framework to characterise peat hydraulic properties across climate zones and between a range of different disturbances, which can be used to provide benchmarks for peatland design and reclamation.
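
    The role of the van Genuchten α parameter above comes from the standard water retention model. A minimal sketch of that retention curve follows; the moss/peat parameter values are hypothetical and serve only to illustrate how α (together with n) shapes water content at a given tension.

        import numpy as np

        def van_genuchten_retention(psi, theta_r, theta_s, alpha, n):
            """Volumetric water content at suction head psi (m), van Genuchten model:
            theta(psi) = theta_r + (theta_s - theta_r) / (1 + (alpha*|psi|)^n)^(1 - 1/n)
            """
            m = 1.0 - 1.0 / n
            return theta_r + (theta_s - theta_r) / (1.0 + (alpha * np.abs(psi)) ** n) ** m

        # hypothetical peat-like parameters: porosity 0.9, residual content 0.1, alpha in 1/m
        suction = np.array([0.01, 0.1, 0.5, 1.0, 5.0])   # metres of tension
        print(np.round(van_genuchten_retention(suction, 0.1, 0.9, alpha=4.0, n=1.6), 3))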

  12. Synthesis of concentric circular antenna arrays using dragonfly algorithm

    NASA Astrophysics Data System (ADS)

    Babayigit, B.

    2018-05-01

    Due to the strong non-linear relationship between the array factor and the array elements, the concentric circular antenna array (CCAA) synthesis problem is challenging. Nature-inspired optimisation techniques have been playing an important role in solving array synthesis problems. The dragonfly algorithm (DA) is a novel nature-inspired optimisation technique based on the static and dynamic swarming behaviours of dragonflies in nature. This paper presents the design of CCAAs with low sidelobes using DA. The effectiveness of the proposed DA is investigated for two three-ring CCAA designs (with 4-, 6-, 8-element or 8-, 10-, 12-element rings), each in two cases (with and without a centre element). The radiation pattern of each design case is obtained by finding the optimal excitation weights of the array elements using DA. Simulation results show that the proposed algorithm outperforms other state-of-the-art techniques (symbiotic organisms search, biogeography-based optimisation, sequential quadratic programming, opposition-based gravitational search algorithm, cat swarm optimisation, firefly algorithm, evolutionary programming) for all design cases. DA can be a promising technique for electromagnetic problems.

  13. UAV path planning using artificial potential field method updated by optimal control theory

    NASA Astrophysics Data System (ADS)

    Chen, Yong-bo; Luo, Guan-chen; Mei, Yue-song; Yu, Jian-qiao; Su, Xiao-long

    2016-04-01

    The unmanned aerial vehicle (UAV) path planning problem is an important task in UAV mission planning. Starting from the artificial potential field (APF) UAV path planning method, the problem is reconstructed as a constrained optimisation problem by introducing an additional control force. The constrained optimisation problem is translated into an unconstrained optimisation problem with the help of slack variables in this paper. The functional optimisation method is applied to reformulate this problem as an optimal control problem. The whole transformation process is deduced in detail, based on a discrete UAV dynamic model. Then, the path planning problem is solved with the help of the optimal control method. The path following process, based on a six-degrees-of-freedom simulation model of a quadrotor helicopter, is introduced to verify the practicability of this method. Finally, the simulation results show that the improved method is more effective in path planning. In the planning space, the calculated path is shorter and smoother than that obtained using the traditional APF method. In addition, the improved method can solve the dead point problem effectively.
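
    For readers unfamiliar with the baseline method, the sketch below implements the classical APF step (attractive pull towards the goal, repulsive push from nearby obstacles) that the article augments with an additional control force. It is a toy two-dimensional illustration with made-up gains and obstacle positions, not the paper's optimal-control reformulation.

        import numpy as np

        def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0, step=0.05):
            """One gradient step of the classical artificial potential field."""
            f_att = k_att * (goal - pos)                  # gradient of the attractive potential
            f_rep = np.zeros(2)
            for obs in obstacles:
                diff = pos - obs
                d = np.linalg.norm(diff)
                if 1e-9 < d < d0:                         # obstacle inside the influence radius d0
                    f_rep += k_rep * (1.0 / d - 1.0 / d0) / d**3 * diff
            force = f_att + f_rep
            return pos + step * force / (np.linalg.norm(force) + 1e-9)

        pos, goal = np.array([0.0, 0.0]), np.array([10.0, 10.0])
        obstacles = [np.array([4.0, 4.5]), np.array([7.0, 6.5])]
        path = [pos]
        for _ in range(1000):
            pos = apf_step(pos, goal, obstacles)
            path.append(pos)
            if np.linalg.norm(goal - pos) < 0.1:
                break
        print(f"reached {np.round(pos, 2)} in {len(path)} steps")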

  14. Distributed convex optimisation with event-triggered communication in networked systems

    NASA Astrophysics Data System (ADS)

    Liu, Jiayun; Chen, Weisheng

    2016-12-01

    This paper studies the distributed convex optimisation problem over directed networks. Motivated by practical considerations, we propose a novel distributed zero-gradient-sum optimisation algorithm with event-triggered communication. As a result, communication and control updates occur only at discrete instants when a predefined condition is satisfied. Thus, compared with time-driven distributed optimisation algorithms, the proposed algorithm has the advantages of lower energy consumption and lower communication cost. Based on Lyapunov approaches, we show that the proposed algorithm makes the system states converge to the solution of the problem exponentially fast and that Zeno behaviour is excluded. Finally, a simulation example is given to illustrate the effectiveness of the proposed algorithm.

  15. Preparation and Optimisation of Cross-Linked Enzyme Aggregates Using Native Isolate White Rot Fungi Trametes versicolor and Fomes fomentarius for the Decolourisation of Synthetic Dyes

    PubMed Central

    Vršanská, Martina; Voběrková, Stanislava; Jiménez Jiménez, Ana María; Strmiska, Vladislav; Adam, Vojtěch

    2017-01-01

    The key to obtaining optimum performance from an enzyme is often a question of devising a suitable enzyme and optimising the conditions for its immobilization. In this study, laccases from the native isolates of white rot fungi Fomes fomentarius and/or Trametes versicolor, obtained from Czech forests, were used. From these, cross-linked enzyme aggregates (CLEA) were prepared and characterised once the experimental conditions had been optimised. Based on the optimisation steps, saturated ammonium sulphate solution (75 wt.%) was used as the precipitating agent, and different concentrations of glutaraldehyde as a cross-linking agent were investigated. CLEA aggregates formed under the optimal conditions showed higher catalytic efficiency and stabilities (thermal, pH, storage, and against denaturation) as well as high reusability compared to free laccase for both fungal strains. The best concentration of glutaraldehyde seemed to be 50 mM, and higher cross-linking efficiency was observed at a low temperature (4 °C). An insignificant increase in optimum pH for CLEA laccases with respect to free laccases for both fungi was observed. The results show that the optimum temperature for both free laccase and CLEA laccase was 35 °C for T. versicolor and 30 °C for F. fomentarius. The CLEAs retained 80% of their initial activity for Trametes and 74% for Fomes after 70 days of cultivation. The prepared cross-linked enzyme aggregates were also investigated for their decolourisation activity on malachite green, bromothymol blue, and methyl red dyes. Immobilised CLEA laccase from Trametes versicolor showed 95% decolourisation potential and CLEA from Fomes fomentarius demonstrated 90% decolourisation efficiency within 10 h for all dyes used. These results suggest that these CLEAs have promising potential in dye decolourisation. PMID:29295505

  16. Coagulation kinetics beyond mean field theory using an optimised Poisson representation.

    PubMed

    Burnett, James; Ford, Ian J

    2015-05-21

    Binary particle coagulation can be modelled as the repeated random process of the combination of two particles to form a third. The kinetics may be represented by population rate equations based on a mean field assumption, according to which the rate of aggregation is taken to be proportional to the product of the mean populations of the two participants, but this can be a poor approximation when the mean populations are small. However, using the Poisson representation, it is possible to derive a set of rate equations that go beyond mean field theory, describing pseudo-populations that are continuous, noisy, and complex, but where averaging over the noise and initial conditions gives the mean of the physical population. Such an approach is explored for the simple case of a size-independent rate of coagulation between particles. Analytical results are compared with numerical computations and with results derived by other means. In the numerical work, we encounter instabilities that can be eliminated using a suitable "gauge" transformation of the problem [P. D. Drummond, Eur. Phys. J. B 38, 617 (2004)] which we show to be equivalent to the application of the Cameron-Martin-Girsanov formula describing a shift in a probability measure. The cost of such a procedure is to introduce additional statistical noise into the numerical results, but we identify an optimised gauge transformation where this difficulty is minimal for the main properties of interest. For more complicated systems, such an approach is likely to be computationally cheaper than Monte Carlo simulation.

  17. A waste characterisation procedure for ADM1 implementation based on degradation kinetics.

    PubMed

    Girault, R; Bridoux, G; Nauleau, F; Poullain, C; Buffet, J; Steyer, J-P; Sadowski, A G; Béline, F

    2012-09-01

    In this study, a procedure accounting for degradation kinetics was developed to split the total COD of a substrate into each input state variable required for Anaerobic Digestion Model n°1. The procedure is based on the combination of batch experimental degradation tests ("anaerobic respirometry") and numerical interpretation of the results obtained (optimisation of the ADM1 input state variable set). The effects of the main operating parameters, such as the substrate to inoculum ratio in batch experiments and the origin of the inoculum, were investigated. Combined with biochemical fractionation of the total COD of substrates, this method enabled determination of an ADM1-consistent input state variable set for each substrate with affordable identifiability. The substrate to inoculum ratio in the batch experiments and the origin of the inoculum influenced input state variables. However, based on results modelled for a CSTR fed with the substrate concerned, these effects were not significant. Indeed, if the optimal ranges of these operational parameters are respected, uncertainty in COD fractionation is mainly limited to temporal variability of the properties of the substrates. As the method is based on kinetics and is easy to implement for a wide range of substrates, it is a very promising way to numerically predict the effect of design parameters on the efficiency of an anaerobic CSTR. This method thus promotes the use of modelling for the design and optimisation of anaerobic processes. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. Semantic distance as a critical factor in icon design for in-car infotainment systems.

    PubMed

    Silvennoinen, Johanna M; Kujala, Tuomo; Jokinen, Jussi P P

    2017-11-01

    In-car infotainment systems require icons that enable fluent cognitive information processing and safe interaction while driving. An important issue is how to find an optimised set of icons for different functions in terms of semantic distance. In an optimised icon set, every icon needs to be semantically as close as possible to the function it visually represents and semantically as far as possible from the other functions represented concurrently. In three experiments (N = 21 each), semantic distances of 19 icons to four menu functions were studied with preference rankings, verbal protocols, and the primed product comparisons method. The results show that the primed product comparisons method can be efficiently utilised for finding an optimised set of icons for time-critical applications out of a larger set of icons. The findings indicate the benefits of the novel methodological perspective into the icon design for safety-critical contexts in general. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Systematic development of design of experiments (DoE) optimised self-microemulsifying drug delivery system of Zotepine.

    PubMed

    Dalvadi, Hitesh; Patel, Nikita; Parmar, Komal

    2017-05-01

    The aim of the present investigation is to improve the dissolution rate of the poorly soluble drug Zotepine using a self-microemulsifying drug delivery system (SMEDDS). A ternary phase diagram with oil (oleic acid), surfactant (Tween 80) and co-surfactant (PEG 400) at the apices was used to identify the efficient self-microemulsifying region. A Box-Behnken design was implemented to study the influence of the independent variables. Principal Component Analysis was used for scrutinising critical variables. The liquid SMEDDS were characterised by macroscopic evaluation, % transmission, emulsification time and in vitro drug release studies. The optimised formulation OL1 was converted into solid SMEDDS (S-SMEDDS) using Aerosil® 200 as an adsorbent in a 3:1 ratio. The S-SMEDDS was characterised by SEM, DSC, globule size (152.1 nm), zeta potential (-28.1 mV), % transmission (98.75%) and in vitro release (86.57% at 30 min). The optimised solid SMEDDS formulation showed faster drug release compared to a conventional Zotepine tablet.

  20. Optimisation on pretreatment of rubber seed (Hevea brasiliensis) oil via esterification reaction in a hydrodynamic cavitation reactor.

    PubMed

    Bokhari, Awais; Chuah, Lai Fatt; Yusup, Suzana; Klemeš, Jiří Jaromír; Kamil, Ruzaimah Nik M

    2016-01-01

    Pretreatment of the high free fatty acid rubber seed oil (RSO) via esterification reaction has been investigated using a pilot-scale hydrodynamic cavitation (HC) reactor. Four newly designed orifice plate geometries are studied. Cavities are induced by an assisted double-diaphragm pump in the range of 1-3.5 bar inlet pressure. An optimised plate with 21 holes of 1 mm diameter and an inlet pressure of 3 bar reduced the RSO acid value from 72.36 to 2.64 mg KOH/g within 30 min of reaction time. Reaction parameters were optimised using response surface methodology and found to be a methanol-to-oil ratio of 6:1, a catalyst concentration of 8 wt%, a reaction time of 30 min and a reaction temperature of 55 °C. The reaction time of HC was three-fold shorter, and its esterification efficiency four-fold higher, than with mechanical stirring. This makes the HC process more environmentally friendly. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. High-throughput assay for optimising microbial biological control agent production and delivery

    USDA-ARS?s Scientific Manuscript database

    Lack of technologies to produce and deliver effective biological control agents (BCAs) is a major barrier to their commercialization. A myriad of variables associated with BCA cultivation, formulation, drying, storage, and reconstitution processes complicates agent quality maximization. An efficie...

  2. Optimising productivity, quality and efficiency in community nursing.

    PubMed

    Holland, Agi; McIntosh, Brian

    2012-08-01

    By 2014 the NHS is expected to make £21 billion in efficiency savings and increase productivity by 6% per annum, while maintaining or improving the quality of care. Given that the cost of the 1.7 million strong workforce represents 60% of the NHS budget, changes are likely. This context of innovation and cost-effectiveness has resulted in an ever greater emphasis to fully engage and support community nursing.

  3. New Trends in Forging Technologies

    NASA Astrophysics Data System (ADS)

    Behrens, B.-A.; Hagen, T.; Knigge, J.; Elgaly, I.; Hadifi, T.; Bouguecha, A.

    2011-05-01

    Limited natural resources increase the demand for highly efficient machinery and transportation means. New energy-saving mobility concepts call for design optimisation through downsizing of components and choice of corrosion resistant materials possessing high strength to density ratios. Component downsizing can be performed either by constructive structural optimisation or by substituting heavy materials with lighter high-strength ones. In this context, forging plays an important role in manufacturing load-optimised structural components. At the Institute of Metal Forming and Metal-Forming Machines (IFUM) various innovative forging technologies have been developed. With regard to structural optimisation, different strategies for localised reinforcement of components were investigated. Locally induced strain hardening by means of cold forging under a superimposed hydrostatic pressure could be realised. In addition, controlled martensitic zones could be created through forming induced phase conversion in metastable austenitic steels. Other research focused on the replacement of heavy steel parts with high-strength nonferrous alloys or hybrid material compounds. Several forging processes of magnesium, aluminium and titanium alloys for different aeronautical and automotive applications were developed. The whole process chain from material characterisation via simulation-based process design to the production of the parts has been considered. The feasibility of forging complex shaped geometries using these alloys was confirmed. In spite of the difficulties encountered due to machine noise and high temperature, the acoustic emission (AE) technique has been successfully applied for online monitoring of forging defects. A new AE analysis algorithm has been developed, so that different signal patterns due to various events such as product/die cracking or die wear could be detected and classified. Further, the feasibility of the mentioned forging technologies was proven by means of finite element analysis (FEA). For example, the integrity of forging dies with respect to crack initiation due to thermo-mechanical fatigue as well as the ductile damage of forgings was investigated with the help of cumulative damage models. In this paper some of the mentioned approaches are described.

  4. Single tube genotyping of sickle cell anaemia using PCR-based SNP analysis.

    PubMed

    Waterfall, C M; Cobb, B D

    2001-12-01

    Allele-specific amplification (ASA) is a generally applicable technique for the detection of known single nucleotide polymorphisms (SNPs), deletions, insertions and other sequence variations. Conventionally, two reactions are required to determine the zygosity of DNA in a two-allele system, along with significant upstream optimisation to define the specific test conditions. Here, we combine single tube bi-directional ASA with a 'matrix-based' optimisation strategy, speeding up the whole process in a reduced reaction set. We use sickle cell anaemia as our model SNP system, a genetic disease that is currently screened using ASA methods. Discriminatory conditions were rapidly optimised enabling the unambiguous identification of DNA from homozygous sickle cell patients (HbS/S), heterozygous carriers (HbA/S) or normal DNA in a single tube. Simple downstream mathematical analyses based on product yield across the optimisation set allow an insight into the important aspects of priming competition and component interactions in this competitive PCR. This strategy can be applied to any polymorphism, defining specific conditions using a multifactorial approach. The inherent simplicity and low cost of this PCR-based method validates bi-directional ASA as an effective tool in future clinical screening and pharmacogenomic research where more expensive fluorescence-based approaches may not be desirable.

  5. Solving multi-objective water management problems using evolutionary computation.

    PubMed

    Lewis, A; Randall, M

    2017-12-15

    Water as a resource is becoming increasingly more valuable given the changes in global climate. In an agricultural sense, the role of water is vital to ensuring food security. Therefore the management of it has become a subject of increasing attention and the development of effective tools to support participative decision-making in water management will be a valuable contribution. In this paper, evolutionary computation techniques and Pareto optimisation are incorporated in a model-based system for water management. An illustrative test case modelling optimal crop selection across dry, average and wet years based on data from the Murrumbidgee Irrigation Area in Australia is presented. It is shown that sets of trade-off solutions that provide large net revenues, or minimise environmental flow deficits can be produced rapidly, easily and automatically. The system is capable of providing detailed information on optimal solutions to achieve desired outcomes, responding to a variety of factors including climate conditions and economics. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. An implicit spatial and high-order temporal finite difference scheme for 2D acoustic modelling

    NASA Astrophysics Data System (ADS)

    Wang, Enjiang; Liu, Yang

    2018-01-01

    The finite difference (FD) method exhibits great superiority over other numerical methods due to its easy implementation and small computational requirements. We propose an effective FD method, characterised by implicit spatial and high-order temporal schemes, to reduce both the temporal and spatial dispersions simultaneously. For the temporal derivative, apart from the conventional second-order FD approximation, a special rhombus FD scheme is included to reach high-order accuracy in time. Compared with the Lax-Wendroff FD scheme, this scheme can achieve nearly the same temporal accuracy but requires fewer floating-point operations and thus less computational cost when the same operator length is adopted. For the spatial derivatives, we adopt the implicit FD scheme to improve the spatial accuracy. Apart from the existing Taylor series expansion-based FD coefficients, we derive least-squares optimisation-based implicit spatial FD coefficients. Dispersion analysis and modelling examples demonstrate that our proposed method can effectively decrease both the temporal and spatial dispersions and thus provide more accurate wavefields.
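
    As a point of reference for the coefficient derivation discussed above, the sketch below recovers the conventional Taylor-matched central FD coefficients by posing the matching conditions as a least-squares linear solve. It illustrates the idea of fitting coefficients numerically; the paper's implicit, dispersion-optimised coefficients require a more elaborate optimisation than this.

        import numpy as np
        from math import factorial

        def central_fd_coefficients(M, order=2):
            """Coefficients c_m so that d^order f/dx^order ~ sum_m c_m f(x + m h) / h^order
            on the symmetric stencil m = -M..M, obtained by matching Taylor terms."""
            offsets = np.arange(-M, M + 1)
            # matching conditions: sum_m c_m * m^k = order! if k == order else 0, for k = 0..2M
            A = np.vander(offsets, N=2 * M + 1, increasing=True).T.astype(float)
            b = np.zeros(2 * M + 1)
            b[order] = factorial(order)
            c, *_ = np.linalg.lstsq(A, b, rcond=None)
            return offsets, c

        offsets, c = central_fd_coefficients(M=2)   # 5-point stencil for the second derivative
        print(dict(zip(offsets.tolist(), np.round(c, 6))))   # ~ -1/12, 4/3, -5/2, 4/3, -1/12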

  7. Astrophysical Supercomputing with GPUs: Critical Decisions for Early Adopters

    NASA Astrophysics Data System (ADS)

    Fluke, Christopher J.; Barnes, David G.; Barsdell, Benjamin R.; Hassan, Amr H.

    2011-01-01

    General-purpose computing on graphics processing units (GPGPU) is dramatically changing the landscape of high performance computing in astronomy. In this paper, we identify and investigate several key decision areas, with a goal of simplifying the early adoption of GPGPU in astronomy. We consider the merits of OpenCL as an open standard in order to reduce risks associated with coding in a native, vendor-specific programming environment, and present a GPU programming philosophy based on using brute force solutions. We assert that effective use of new GPU-based supercomputing facilities will require a change in approach from astronomers. This will likely include improved programming training, an increased need for software development best practice through the use of profiling and related optimisation tools, and a greater reliance on third-party code libraries. As with any new technology, those willing to take the risks and make the investment of time and effort to become early adopters of GPGPU in astronomy, stand to reap great benefits.

  8. Polyethylene glycol-based ultrasound-assisted extraction of magnolol and honokiol from Cortex Magnoliae Officinalis.

    PubMed

    He, Lei; Fan, Tao; Hu, Jianguo; Zhang, Lijin

    2015-01-01

    In this study, a green solvent, polyethylene glycol (PEG), was used for the ultrasound-assisted extraction (UAE) of magnolol and honokiol from Cortex Magnoliae Officinalis. The effects of PEG molecular weight, PEG concentration, sample size, pH, ultrasonic power and extraction time on the extraction of magnolol and honokiol were investigated to optimise the extraction conditions. Under the optimal extraction conditions, the PEG-based UAE gave higher extraction efficiencies of magnolol and honokiol than the ethanol-based UAE and traditional ethanol-reflux extraction. Furthermore, the validity of the proposed extraction method was confirmed by the correlation coefficient (R², 0.9993-0.9996), repeatability (relative standard deviation, n = 6; 3.1-4.6%) and recovery (92.3-106.8%).

  9. Integrated optimisation technique based on computer-aided capacity and safety evaluation for managing downstream lane-drop merging area of signalised junctions

    NASA Astrophysics Data System (ADS)

    Chen, CHAI; Yiik Diew, WONG

    2017-02-01

    This study provides an integrated strategy, encompassing microscopic simulation, safety assessment, and multi-attribute decision-making, to optimize traffic performance at downstream merging area of signalized intersections. A Fuzzy Cellular Automata (FCA) model is developed to replicate microscopic movement and merging behavior. Based on simulation experiment, the proposed FCA approach is able to provide capacity and safety evaluation of different traffic scenarios. The results are then evaluated through data envelopment analysis (DEA) and analytic hierarchy process (AHP). Optimized geometric layout and control strategies are then suggested for various traffic conditions. An optimal lane-drop distance that is dependent on traffic volume and speed limit can thus be established at the downstream merging area.

  10. Time and Learning Efficiency in Internet-Based Learning: A Systematic Review and Meta-Analysis

    ERIC Educational Resources Information Center

    Cook, David A.; Levinson, Anthony J.; Garside, Sarah

    2010-01-01

    Authors have claimed that Internet-based instruction promotes greater learning efficiency than non-computer methods. Objectives Determine, through a systematic synthesis of evidence in health professions education, how Internet-based instruction compares with non-computer instruction in time spent learning, and what features of Internet-based…

  11. Stochastic optimisation of water allocation on a global scale

    NASA Astrophysics Data System (ADS)

    Schmitz, Oliver; Straatsma, Menno; Karssenberg, Derek; Bierkens, Marc F. P.

    2014-05-01

    Climate change, increasing population and further economic developments are expected to increase water scarcity for many regions of the world. Optimal water management strategies are required to minimise the water gap between water supply and domestic, industrial and agricultural water demand. A crucial aspect of water allocation is the spatial scale of optimisation. Blue water supply peaks at the upstream parts of large catchments, whereas demands are often largest at the industrialised downstream parts. Two extremes exist in water allocation: (i) 'First come, first served,' which allows the upstream water demands to be fulfilled without consideration of downstream demands, and (ii) 'All for one, one for all,' which satisfies water allocation over the whole catchment. In practice, water treaties govern intermediate solutions. The objective of this study is to determine the effect of these two end members on water allocation optimisation with respect to water scarcity. We conduct this study on a global scale with the year 2100 as the temporal horizon. Water supply is calculated using the hydrological model PCR-GLOBWB, operating at a 5 arcminute resolution and a daily time step. PCR-GLOBWB is forced with temperature and precipitation fields from the HadGEM2-ES global circulation model that participated in the latest coupled model intercomparison project (CMIP5). Water demands are calculated for representative concentration pathway 6.0 (RCP 6.0) and shared socio-economic pathway scenario 2 (SSP2). To enable the fast computation of the optimisation, we developed a hydrologically correct network of 1800 basin segments with an average size of 100 000 square kilometres. The maximum number of nodes in a network was 140 for the Amazon Basin. Water demands and supplies are aggregated to cubic kilometres per month per segment. A new open-source implementation is developed for the stochastic optimisation of the water allocation. We apply a Genetic Algorithm for each segment to estimate the set of parameters that distribute the water supply for each node. We use the Python programming language and a flexible software architecture that allows us to straightforwardly (1) exchange the process description for the nodes such that different water allocation schemes can be tested, (2) exchange the objective function, (3) apply the optimisation either to the whole catchment or to different sub-levels, and (4) use multi-core CPUs concurrently, thereby reducing computation time. We demonstrate the application of the scientific workflow to the model outputs of PCR-GLOBWB and present first results on how water scarcity depends on the choice between the two extremes in water allocation.
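
    To make the segment-level optimisation concrete, the sketch below runs a minimal genetic algorithm over allocation weights for a single segment, with invented demands and supply; fitness is simply total unmet demand. It only illustrates the mechanics of selection, crossover and mutation, not the coupled PCR-GLOBWB workflow described above.

        import numpy as np

        rng = np.random.default_rng(0)
        demand = np.array([30.0, 55.0, 15.0, 70.0])   # hypothetical node demands (km^3/month)
        supply = 120.0                                # water available to the segment

        def scarcity(weights):
            """Total unmet demand when supply is split in proportion to the weights."""
            alloc = supply * weights / weights.sum()
            return np.maximum(demand - alloc, 0.0).sum()

        pop = rng.random((40, demand.size)) + 1e-6    # initial population of weight vectors
        for _ in range(200):
            fitness = np.array([scarcity(w) for w in pop])
            parents = pop[np.argsort(fitness)[:10]]                     # keep the 10 best
            children = []
            for _ in range(30):
                a, b = parents[rng.integers(10)], parents[rng.integers(10)]
                child = np.where(rng.random(demand.size) < 0.5, a, b)   # uniform crossover
                child += rng.normal(0.0, 0.05, demand.size)             # Gaussian mutation
                children.append(np.clip(child, 1e-6, None))
            pop = np.vstack([parents, children])

        best = min(pop, key=scarcity)
        print("allocation:", np.round(supply * best / best.sum(), 1), "unmet:", round(scarcity(best), 1))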

  12. Person-centred medicines optimisation policy in England: an agenda for research on polypharmacy.

    PubMed

    Heaton, Janet; Britten, Nicky; Krska, Janet; Reeve, Joanne

    2017-01-01

    Aim: To examine how patient perspectives and person-centred care values have been represented in documents on medicines optimisation policy in England. There has been growing support in England for a policy of medicines optimisation as a response to the rise of problematic polypharmacy. Conceptually, medicines optimisation differs from the medicines management model of prescribing in being based around the patient rather than processes and systems. This critical examination of current official and independent policy documents questions how central the patient is in them and whether relevant evidence has been utilised in their development. A documentary analysis of reports on medicines optimisation published by the Royal Pharmaceutical Society (RPS), The King's Fund and National Institute for Health and Social Care Excellence since 2013. The analysis draws on a non-systematic review of research on patient experiences of using medicines. Findings: The reports varied in their inclusion of patient perspectives and person-centred care values, and in the extent to which they drew on evidence from research on patients' experiences of polypharmacy and medicines use. In the RPS report, medicines optimisation is represented as being a 'step change' from medicines management, in contrast to the other documents which suggest that it is facilitated by the systems and processes that comprise the latter model. Only The King's Fund report considered evidence from qualitative studies of people's use of medicines. However, these studies are not without their limitations. We suggest five ways in which researchers could improve this evidence base and so inform the development of future policy: by facilitating reviews of existing research; conducting studies of patient experiences of polypharmacy and multimorbidity; evaluating medicines optimisation interventions; making better use of relevant theories, concepts and tools; and improving patient and public involvement in research and in guideline development.

  13. FPGA implementation of sparse matrix algorithm for information retrieval

    NASA Astrophysics Data System (ADS)

    Bojanic, Slobodan; Jevtic, Ruzica; Nieto-Taladriz, Octavio

    2005-06-01

    Information text data retrieval requires a tremendous amount of processing time because of the size of the data and the complexity of information retrieval algorithms. In this paper the solution to this problem is proposed via hardware-supported information retrieval algorithms. Reconfigurable computing may adopt frequent hardware modifications through its tailorable hardware and exploits parallelism for a given application through reconfigurable and flexible hardware units. The degree of the parallelism can be tuned for data. In this work we implemented the standard BLAS (basic linear algebra subprogram) sparse matrix algorithm named Compressed Sparse Row (CSR), which is shown to be more efficient in terms of storage space requirements and query-processing time than other sparse matrix algorithms for information retrieval applications. Although the inverted index algorithm has been treated as the de facto standard for information retrieval for years, an alternative approach that stores the index of a text collection in a sparse matrix structure is gaining more attention. This approach performs query processing using sparse matrix-vector multiplication and, due to parallelisation, achieves substantial efficiency gains over the sequential inverted index. The parallel implementations of the information retrieval kernel presented in this work target the Virtex II Field Programmable Gate Array (FPGA) board from Xilinx. A recent development in scientific applications is the use of FPGAs to achieve high-performance results. Computational results are compared to implementations on other platforms. The design achieves a high level of parallelism for the overall function while retaining highly optimised hardware within the processing unit.
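
    The storage scheme and query-processing kernel described above are easy to state in software terms. The sketch below shows CSR storage of a toy term-document matrix and the matrix-vector product used for query scoring; it is a plain software analogue of what the FPGA design parallelises, with made-up term weights.

        import numpy as np

        # toy term-document matrix (rows = documents, columns = terms) in CSR form
        values  = np.array([2.0, 1.0, 3.0, 1.0, 1.0, 4.0])    # nonzero term weights
        col_idx = np.array([0, 2, 1, 2, 0, 3])                 # column (term) of each nonzero
        row_ptr = np.array([0, 2, 4, 6])                       # start of each document's nonzeros

        def csr_matvec(values, col_idx, row_ptr, x):
            """y = A @ x with A stored in Compressed Sparse Row format."""
            y = np.zeros(len(row_ptr) - 1)
            for row in range(len(y)):
                start, end = row_ptr[row], row_ptr[row + 1]
                y[row] = values[start:end] @ x[col_idx[start:end]]
            return y

        query = np.array([1.0, 0.0, 1.0, 0.0])                 # query over a 4-term vocabulary
        print(csr_matvec(values, col_idx, row_ptr, query))     # per-document scores: [3. 1. 1.]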

  14. PWHATSHAP: efficient haplotyping for future generation sequencing.

    PubMed

    Bracciali, Andrea; Aldinucci, Marco; Patterson, Murray; Marschall, Tobias; Pisanti, Nadia; Merelli, Ivan; Torquati, Massimo

    2016-09-22

    Haplotype phasing is an important problem in the analysis of genomics information. Given a set of DNA fragments of an individual, it consists of determining which one of the possible alleles (alternative forms of a gene) each fragment comes from. Haplotype information is relevant to gene regulation, epigenetics, genome-wide association studies, evolutionary and population studies, and the study of mutations. Haplotyping is currently addressed as an optimisation problem aiming at solutions that minimise, for instance, error correction costs, where costs are a measure of the confidence in the accuracy of the information acquired from DNA sequencing. Solutions have typically an exponential computational complexity. WHATSHAP is a recent optimal approach which moves computational complexity from DNA fragment length to fragment overlap, i.e., coverage, and is hence of particular interest when considering sequencing technology's current trends that are producing longer fragments. Given the potential relevance of efficient haplotyping in several analysis pipelines, we have designed and engineered PWHATSHAP, a parallel, high-performance version of WHATSHAP. PWHATSHAP is embedded in a toolkit developed in Python and supports genomics datasets in standard file formats. Building on WHATSHAP, PWHATSHAP exhibits the same complexity exploring a number of possible solutions which is exponential in the coverage of the dataset. The parallel implementation on multi-core architectures allows for a relevant reduction of the execution time for haplotyping, while the provided results enjoy the same high accuracy as that provided by WHATSHAP, which increases with coverage. Due to its structure and management of the large datasets, the parallelisation of WHATSHAP posed demanding technical challenges, which have been addressed exploiting a high-level parallel programming framework. The result, PWHATSHAP, is a freely available toolkit that improves the efficiency of the analysis of genomics information.

  15. Chromatic perception of non-invasive lighting of cave paintings

    NASA Astrophysics Data System (ADS)

    Zoido, Jesús; Vazquez, Daniel; Álvarez, Antonio; Bernabeu, Eusebio; García, Ángel; Herraez, Juán A.; del Egido, Marian

    2009-08-01

    This work is intended to deal with the problems which arise when illuminating Paleolithic cave paintings. We have carried out the spectral and colorimetric characterisation of some paintings located in the Murcielagos (bats) cave (Zuheros, Córdoba, Spain). From this characterisation, the chromatic changes produced under different lighting conditions are analysed. The damage function is also computed for the different illuminants used. From the results obtained, an illuminant is proposed whose spectral distribution diminishes the damage by minimising the absorption of radiation and optimises the colour perception of the paintings in this cave. The procedure followed in this study can be applied to optimise the lighting systems used when illuminating any other artwork.

  16. Consensus for multi-agent systems with time-varying input delays

    NASA Astrophysics Data System (ADS)

    Yuan, Chengzhi; Wu, Fen

    2017-10-01

    This paper addresses the consensus control problem for linear multi-agent systems subject to uniform time-varying input delays and external disturbance. A novel state-feedback consensus protocol is proposed under the integral quadratic constraint (IQC) framework, which utilises not only the relative state information from neighbouring agents but also the real-time information of delays by means of the dynamic IQC system states for feedback control. Based on this new consensus protocol, the associated IQC-based control synthesis conditions are established and fully characterised as linear matrix inequalities (LMIs), such that the consensus control solution with optimal H∞ disturbance attenuation performance can be synthesised efficiently via convex optimisation. A numerical example is used to demonstrate the proposed approach.

  17. Thermal energy and economic analysis of a PCM-enhanced household envelope considering different climate zones in Morocco

    NASA Astrophysics Data System (ADS)

    Kharbouch, Yassine; Mimet, Abdelaziz; El Ganaoui, Mohammed; Ouhsaine, Lahoucine

    2018-07-01

    This study investigates the thermal energy potential and economic feasibility of an air-conditioned family household integrated with phase change materials (PCM), considering different climate zones in Morocco. A simulation-based optimisation was carried out in order to define the optimal design of a PCM-enhanced household envelope for thermal energy effectiveness and cost-effectiveness of predefined candidate solutions. The optimisation methodology is based on coupling Energyplus® as a dynamic simulation tool and GenOpt® as an optimisation tool. Considering the obtained optimum design strategies, a thermal energy and economic analysis is carried out to investigate the feasibility of integrating PCMs in Moroccan constructions. The results show that the PCM-integrated household envelope reduces the cooling/heating thermal energy demand compared with a reference household without PCM. For the cost-effectiveness optimisation, however, it was found that economic feasibility is still insufficient under current PCM market conditions. The optimal design parameter results are also analysed.

  18. Evolving optimised decision rules for intrusion detection using particle swarm paradigm

    NASA Astrophysics Data System (ADS)

    Sivatha Sindhu, Siva S.; Geetha, S.; Kannan, A.

    2012-12-01

    The aim of this article is to construct a practical intrusion detection system (IDS) that properly analyses the statistics of network traffic patterns and classifies them as normal or anomalous. The objective of this article is to prove that the choice of effective network traffic features and a proficient machine-learning paradigm enhances the detection accuracy of IDS. In this article, a rule-based approach with a family of six decision tree classifiers, namely Decision Stump, C4.5, Naive Bayes Tree, Random Forest, Random Tree and Representative Tree models, is introduced to perform the detection of anomalous network patterns. In particular, the proposed swarm optimisation-based approach selects the instances that compose the training set, and the optimised decision trees operate over this training set, producing classification rules with improved coverage, classification capability and generalisation ability. Experiments with the Knowledge Discovery and Data Mining (KDD) data set, which contains information on traffic patterns during normal and intrusive behaviour, show that the proposed algorithm produces optimised decision rules and outperforms other machine-learning algorithms.

  19. Finite Element Study into the effect of footwear temperature on the forces transmitted to the foot during quasi-static compression loading

    NASA Astrophysics Data System (ADS)

    Shariatmadari, M. R.; English, R.; Rothwell, G.

    2010-06-01

    The determination of plantar stresses using computational footwear models which include temperature effects is crucial for predicting foam performance in service and for aiding material development and product design. The Finite Element Method (FEM) provides an efficient computational framework to investigate the foot-footwear interaction. The aim of this research is to use FEM to investigate the effect of varying footwear temperature on plantar stresses. The results obtained will provide data which can be used to help optimise shoe design in terms of minimising damaging stresses in the foot, particularly for individuals with diabetes, who are susceptible to lower extremity complications. The FE simulation results showed significant reductions in foot stresses with the modifications from FE model (1) without footwear to model (2) with midsole only and to model (3) with midsole and insole. In summary, insole and midsole layers made from various foam materials aim to reduce the ground reaction forces (GRFs) and foot stresses considerably, and temperature variation can affect their cushioning and consequently their shock attenuation properties. The loss of footwear cushioning effect can have important clinical implications for those individuals with a history of lower limb overuse injuries or diabetes.

  20. Least squares QR-based decomposition provides an efficient way of computing optimal regularization parameter in photoacoustic tomography.

    PubMed

    Shaw, Calvin B; Prakash, Jaya; Pramanik, Manojit; Yalavarthy, Phaneendra K

    2013-08-01

    A computationally efficient approach that computes the optimal regularization parameter for the Tikhonov-minimization scheme is developed for photoacoustic imaging. This approach is based on the least squares-QR decomposition which is a well-known dimensionality reduction technique for a large system of equations. It is shown that the proposed framework is effective in terms of quantitative and qualitative reconstructions of initial pressure distribution enabled via finding an optimal regularization parameter. The computational efficiency and performance of the proposed method are shown using a test case of numerical blood vessel phantom, where the initial pressure is exactly known for quantitative comparison.
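
    The idea of attaching the regularisation to an LSQR solve can be sketched with SciPy, whose lsqr routine accepts a damping term so that it minimises ||Ax - b||² + damp²||x||². The toy example below sweeps the damping parameter on an invented ill-conditioned system and scores each reconstruction against a known ground truth; the paper's contribution is to choose that parameter automatically, which this illustration does not reproduce.

        import numpy as np
        from scipy.sparse.linalg import lsqr

        rng = np.random.default_rng(1)
        A = rng.normal(size=(200, 100)) @ np.diag(1.0 / np.arange(1, 101))  # ill-conditioned forward model
        x_true = np.zeros(100)
        x_true[10:20] = 1.0                                                 # toy "initial pressure"
        b = A @ x_true + 0.01 * rng.normal(size=200)                        # noisy measurements

        best = None
        for lam in np.logspace(-6, 1, 30):
            x = lsqr(A, b, damp=lam)[0]               # Tikhonov-damped least-squares solution
            err = np.linalg.norm(x - x_true)          # oracle error, for illustration only
            if best is None or err < best[1]:
                best = (lam, err)
        print(f"best damping parameter ~ {best[0]:.2e} (reconstruction error {best[1]:.3f})")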

  1. Optimal integrated management of groundwater resources and irrigated agriculture in arid coastal regions

    NASA Astrophysics Data System (ADS)

    Grundmann, J.; Schütze, N.; Heck, V.

    2014-09-01

    Groundwater systems in arid coastal regions are particularly at risk due to limited potential for groundwater replenishment and increasing water demand, caused by a continuously growing population. To ensure sustainable management of those regions, we developed a new simulation-based integrated water management system. The management system unites process modelling with artificial intelligence tools and evolutionary optimisation techniques for managing both water quality and water quantity of a strongly coupled groundwater-agriculture system. Due to the large number of decision variables, a decomposition approach is applied to separate the original large optimisation problem into smaller, independent optimisation problems which finally allow for faster and more reliable solutions. It consists of an analytical inner optimisation loop to achieve the most profitable agricultural production for a given amount of water and an outer simulation-based optimisation loop to find the optimal groundwater abstraction pattern. Thereby, the behaviour of farms is described by crop-water-production functions, and the aquifer response, including the seawater interface, is simulated by an artificial neural network. The methodology is applied exemplarily for the south Batinah region of Oman, which is affected by saltwater intrusion into a coastal aquifer system due to excessive groundwater withdrawal for irrigated agriculture. Due to conflicting objectives, such as profit-oriented agriculture versus aquifer sustainability, a multi-objective optimisation is performed which can provide sustainable solutions for water and agricultural management over long-term periods at farm and regional scales with respect to water resources, the environment, and socio-economic development.

  2. Optimization of Process Parameters for High Efficiency Laser Forming of Advanced High Strength Steels within Metallurgical Constraints

    NASA Astrophysics Data System (ADS)

    Sheikholeslami, Ghazal; Griffiths, Jonathan; Dearden, Geoff; Edwardson, Stuart P.

    Laser forming (LF) has been shown to be a viable alternative for forming automotive-grade advanced high strength steels (AHSS). Due to their high strength, heat sensitivity and low conventional formability, these steels show early fractures, larger springback, batch-to-batch inconsistency and high tool wear. In this paper, optimisation of the LF process parameters has been conducted to further understand the impact of a surface heat treatment on DP1000. An FE numerical simulation has been developed to analyse the dynamic thermo-mechanical effects. This has been verified against empirical data. The goal of the optimisation has been to develop a usable process window for the LF of AHSS within strict metallurgical constraints. Results indicate that it is possible to laser form this material; however, a complex relationship has been found between the generation and maintenance of hardness values in the heated zone. A laser surface hardening effect has been observed that could be beneficial to the efficiency of the process.

  3. An approximate dynamic programming approach to resource management in multi-cloud scenarios

    NASA Astrophysics Data System (ADS)

    Pietrabissa, Antonio; Priscoli, Francesco Delli; Di Giorgio, Alessandro; Giuseppi, Alessandro; Panfili, Martina; Suraci, Vincenzo

    2017-03-01

    The programmability and the virtualisation of network resources are crucial for deploying scalable Information and Communications Technology (ICT) services. The increasing demand for cloud services, mainly devoted to storage and computing, requires a new functional element, the Cloud Management Broker (CMB), aimed at managing multiple cloud resources to meet the customers' requirements and, simultaneously, to optimise their usage. This paper proposes a multi-cloud resource allocation algorithm that manages the resource requests with the aim of maximising the CMB revenue over time. The algorithm is based on Markov decision process modelling and relies on reinforcement learning techniques to find an approximate solution online.
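
    As a rough illustration of casting broker decisions as a Markov decision process solved by reinforcement learning, the sketch below trains tabular Q-learning on a toy admission problem: accept or reject requests of two sizes against a small pool of resource units, rewarded by revenue. The capacities, tariffs and dynamics are invented, and the method is ordinary Q-learning rather than the approximate dynamic programming scheme of the paper.

        import numpy as np

        rng = np.random.default_rng(2)
        CAPACITY = 5                      # hypothetical resource units managed by the broker
        PRICES = {1: 1.0, 2: 3.5}         # toy revenue per request size

        Q = np.zeros((CAPACITY + 1, 2, 2))    # Q[free units, request type, action(0=reject,1=accept)]
        alpha, gamma, eps = 0.1, 0.95, 0.1

        free, req = CAPACITY, int(rng.integers(2))
        for _ in range(200_000):
            size = req + 1
            a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q[free, req]))
            if a == 1 and free >= size:
                reward, free_next = PRICES[size], free - size   # accept: earn revenue, use capacity
            else:
                reward, free_next = 0.0, free                   # reject, or request cannot fit
            if rng.random() < 0.3 and free_next < CAPACITY:     # a running request completes
                free_next += 1
            req_next = int(rng.integers(2))
            Q[free, req, a] += alpha * (reward + gamma * Q[free_next, req_next].max() - Q[free, req, a])
            free, req = free_next, req_next

        print("accept a large request with 2 units free?", bool(np.argmax(Q[2, 1])))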

  4. Boosting the FM-Index on the GPU: Effective Techniques to Mitigate Random Memory Access.

    PubMed

    Chacón, Alejandro; Marco-Sola, Santiago; Espinosa, Antonio; Ribeca, Paolo; Moure, Juan Carlos

    2015-01-01

    The recent advent of high-throughput sequencing machines producing large amounts of short reads has boosted the interest in efficient string searching techniques. As of today, many mainstream sequence alignment software tools rely on a special data structure, called the FM-index, which allows for fast exact searches in large genomic references. However, such searches translate into a pseudo-random memory access pattern, thus making memory access the limiting factor of all computation-efficient implementations, both on CPUs and GPUs. Here, we show that several strategies can be put in place to remove the memory bottleneck on the GPU: more compact indexes can be implemented by having more threads work cooperatively on larger memory blocks, and a k-step FM-index can be used to further reduce the number of memory accesses. The combination of these and other optimisations yields an implementation that is able to process about two Gbases of queries per second on our test platform, being about 8 × faster than a comparable multi-core CPU version, and about 3 × to 5 × faster than the FM-index implementation on the GPU provided by the recently announced Nvidia NVBIO bioinformatics library.
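
    The exact-search core of the FM-index is compact enough to sketch. The snippet below builds the index naively (full suffix sort, suitable only for small texts) and performs the standard backward search; it is a didactic illustration, not the memory-optimised GPU layout discussed above.

        def build_fm_index(text):
            """Naive FM-index: BWT via a full suffix sort (only suitable for small texts)."""
            text += "$"
            sa = sorted(range(len(text)), key=lambda i: text[i:])
            bwt = "".join(text[i - 1] for i in sa)
            alphabet = sorted(set(text))
            C, total = {}, 0                      # C[c] = characters in the text smaller than c
            for c in alphabet:
                C[c] = total
                total += text.count(c)
            Occ = {c: [0] * (len(bwt) + 1) for c in alphabet}   # Occ[c][i] = count of c in bwt[:i]
            for i, ch in enumerate(bwt):
                for c in alphabet:
                    Occ[c][i + 1] = Occ[c][i] + (ch == c)
            return C, Occ, len(bwt)

        def backward_search(pattern, C, Occ, n):
            """Count exact occurrences of pattern with O(1) work per pattern character."""
            lo, hi = 0, n
            for c in reversed(pattern):
                if c not in C:
                    return 0
                lo = C[c] + Occ[c][lo]
                hi = C[c] + Occ[c][hi]
                if lo >= hi:
                    return 0
            return hi - lo

        C, Occ, n = build_fm_index("ACGTACGTACGA")
        print(backward_search("ACG", C, Occ, n))   # -> 3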

  5. Performance optimisations for distributed analysis in ALICE

    NASA Astrophysics Data System (ADS)

    Betev, L.; Gheata, A.; Gheata, M.; Grigoras, C.; Hristov, P.

    2014-06-01

    Performance is a critical issue in a production system accommodating hundreds of analysis users. Compared to a local session, distributed analysis is exposed to services and network latencies, remote data access and heterogeneous computing infrastructure, creating a more complex performance and efficiency optimization matrix. During the last 2 years, ALICE analysis shifted from a fast development phase to more mature and stable code. At the same time, the frameworks and tools for deployment, monitoring and management of large productions have evolved considerably too. The ALICE Grid production system is currently used by a fair share of organized and individual user analysis, consuming up to 30% of the available resources and ranging from fully I/O-bound analysis code to CPU-intensive correlation or resonance studies. While the intrinsic analysis performance is unlikely to improve by a large factor during the LHC long shutdown (LS1), the overall efficiency of the system still has to be improved by a significant factor to satisfy analysis needs. We have instrumented all analysis jobs with "sensors" collecting comprehensive monitoring information on the job running conditions and performance in order to identify bottlenecks in the data processing flow. These data are collected by the MonALISA-based ALICE Grid monitoring system and are used to steer and improve the job submission and management policy, to identify operational problems in real time and to perform automatic corrective actions. In parallel with an upgrade of our production system we are aiming for low-level improvements related to data format, data management and merging of results to allow for better-performing ALICE analysis.

  6. Optimisation of wire-cut EDM process parameter by Grey-based response surface methodology

    NASA Astrophysics Data System (ADS)

    Kumar, Amit; Soota, Tarun; Kumar, Jitendra

    2018-03-01

    Wire electric discharge machining (WEDM) is one of the advanced machining processes. Response surface methodology coupled with the Grey relational analysis method has been proposed and used to optimise the machining parameters of WEDM. A face-centred cubic design is used for conducting experiments on high-speed steel (HSS) M2 grade workpiece material. The regression model of significant factors, such as pulse-on time, pulse-off time, peak current and wire feed, is considered for optimising the response variables: material removal rate (MRR), surface roughness and kerf width. The optimal machining parameter settings were obtained using the Grey relational grade. ANOVA is applied to determine the significance of the input parameters in optimising the Grey relational grade.
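
    The Grey relational grade used to rank the runs is a short numerical recipe: normalise each response towards its ideal, convert deviations into grey relational coefficients, and average them per run. The sketch below applies it to invented responses for six runs (MRR to be maximised, roughness and kerf width to be minimised); the numbers are purely illustrative.

        import numpy as np

        # hypothetical responses for six WEDM runs
        mrr  = np.array([4.1, 5.3, 6.0, 4.8, 5.7, 6.4])        # material removal rate (maximise)
        ra   = np.array([2.9, 3.4, 3.8, 2.6, 3.1, 4.0])        # surface roughness (minimise)
        kerf = np.array([0.30, 0.33, 0.36, 0.29, 0.32, 0.37])  # kerf width (minimise)

        def normalise(x, larger_is_better):
            return (x - x.min()) / (x.max() - x.min()) if larger_is_better \
                   else (x.max() - x) / (x.max() - x.min())

        def grey_coefficient(z, zeta=0.5):
            delta = np.abs(1.0 - z)                    # deviation from the ideal sequence
            return (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

        coeffs = np.column_stack([
            grey_coefficient(normalise(mrr, True)),
            grey_coefficient(normalise(ra, False)),
            grey_coefficient(normalise(kerf, False)),
        ])
        grade = coeffs.mean(axis=1)                    # grey relational grade per run
        print("grades:", np.round(grade, 3), "best run:", int(np.argmax(grade)) + 1)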

  7. Locating helicopter emergency medical service bases to optimise population coverage versus average response time.

    PubMed

    Garner, Alan A; van den Berg, Pieter L

    2017-10-16

    New South Wales (NSW), Australia has a network of multirole, retrieval-physician-staffed helicopter emergency medical services (HEMS) with seven bases servicing a jurisdiction with population concentrated along the eastern seaboard. The aim of this study was to estimate optimal HEMS base locations within NSW using advanced mathematical modelling techniques. We used high-resolution census population data for NSW from 2011, which divides the state into areas containing 200-800 people. Optimal HEMS base locations were estimated using the maximal covering location problem facility location optimization model and the average response time model, exploring the number of bases needed to cover various fractions of the population for a 45 min response time threshold or minimizing the overall average response time to all persons, both in greenfield scenarios and conditioning on the current base structure. We also developed a hybrid mathematical model where average response time was optimised based on minimum population coverage thresholds. Seven bases could cover 98% of the population within 45 min when optimised for coverage, or reach the entire population of the state within an average of 21 min if optimised for response time. Given the existing bases, adding two bases could either increase the 45 min coverage from 91% to 97% or decrease the average response time from 21 min to 19 min. Adding a single specialist prehospital rapid response HEMS to the area of greatest population concentration decreased the average state-wide response time by 4 min. The optimum seven-base hybrid model, which covered 97.75% of the population within 45 min and reached the entire population within an average response time of 18 min, included the rapid response HEMS model. HEMS base locations can be optimised based on either the percentage of the population covered or the average response time to the entire population. We have also demonstrated a hybrid technique that optimizes response time for a given number of bases and a minimum defined threshold of population coverage. Addition of specialized rapid response HEMS services to a system of multirole retrieval HEMS may reduce overall average response times by improving access in large urban areas.
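
    The maximal covering location problem referred to above is usually solved exactly with integer programming, but a greedy heuristic conveys the structure of the model. The sketch below picks bases one at a time to maximise newly covered population on a randomly generated coverage matrix; the data are synthetic and the greedy rule is an approximation, not the optimisation model used in the study.

        import numpy as np

        rng = np.random.default_rng(3)
        population = rng.integers(200, 800, size=300)        # hypothetical census cells
        covers = rng.random((12, 300)) < 0.25                # covers[j, i]: site j reaches cell i within 45 min

        def greedy_mclp(covers, population, n_bases):
            """Greedy approximation: repeatedly add the site covering the most uncovered population."""
            chosen, covered = [], np.zeros(covers.shape[1], dtype=bool)
            for _ in range(n_bases):
                gains = [population[covers[j] & ~covered].sum() if j not in chosen else -1
                         for j in range(covers.shape[0])]
                best = int(np.argmax(gains))
                chosen.append(best)
                covered |= covers[best]
                print(f"added base {best}: coverage {population[covered].sum() / population.sum():.1%}")
            return chosen

        greedy_mclp(covers, population, n_bases=7)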

  8. On an efficient multilevel inverter assembly: structural savings and design optimisations

    NASA Astrophysics Data System (ADS)

    Choupan, Reza; Nazarpour, Daryoush; Golshannavaz, Sajjad

    2018-01-01

    This study puts forward an efficient unit cell to be used in multilevel inverter assemblies. The proposed structure reduces the number of direct current (dc) voltage sources, insulated-gate bipolar transistors (IGBTs), gate driver circuits, the installation area and hence the implementation costs. These structural savings do not sacrifice the technical performance of the proposed design, in which an increased number of output voltage levels is attained. Targeting a techno-economic characteristic, the proposed structure is used as the key unit of cascaded multilevel inverters. Such extensions require the development of applicable design procedures. To this end, two efficient strategies are elaborated to determine the magnitudes of the input dc voltage sources. In addition, an optimisation process is developed to explore the optimal allocation of different parameters with respect to the overall performance of the proposed inverter. These parameters include the number of IGBTs, dc sources and diodes, and the overall blocked voltage on the switches. In light of these characteristics, a comprehensive analysis is established to compare the proposed design with conventional and recently developed structures. Detailed simulation and experimental studies are conducted to assess the performance of the proposed design, and the obtained results are discussed in depth.

  9. Efficient Genotyping of KRAS Mutant Non-Small Cell Lung Cancer Using a Multiplexed Droplet Digital PCR Approach.

    PubMed

    Pender, Alexandra; Garcia-Murillas, Isaac; Rana, Sareena; Cutts, Rosalind J; Kelly, Gavin; Fenwick, Kerry; Kozarewa, Iwanka; Gonzalez de Castro, David; Bhosle, Jaishree; O'Brien, Mary; Turner, Nicholas C; Popat, Sanjay; Downward, Julian

    2015-01-01

    Droplet digital PCR (ddPCR) can be used to detect low frequency mutations in oncogene-driven lung cancer. The range of KRAS point mutations observed in NSCLC necessitates a multiplex approach to efficient mutation detection in circulating DNA. Here we report the design and optimisation of three discriminatory ddPCR multiplex assays investigating nine different KRAS mutations using PrimePCR™ ddPCR™ Mutation Assays and the Bio-Rad QX100 system. Together these mutations account for 95% of the nucleotide changes found in KRAS in human cancer. Multiplex reactions were optimised on genomic DNA extracted from KRAS mutant cell lines and tested on DNA extracted from fixed tumour tissue from a cohort of lung cancer patients without prior knowledge of the specific KRAS genotype. The multiplex ddPCR assays had a limit of detection of better than 1 mutant KRAS molecule in 2,000 wild-type KRAS molecules, which compared favourably with a limit of detection of 1 in 50 for next generation sequencing and 1 in 10 for Sanger sequencing. Multiplex ddPCR assays thus provide a highly efficient methodology to identify KRAS mutations in lung adenocarcinoma.
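    ddPCR quantification rests on a standard Poisson correction of the droplet counts. The following sketch illustrates that general calculation with hypothetical droplet counts and an assumed droplet volume; it is not the specific multiplex assay described above.

    # Standard Poisson correction for ddPCR: copies per droplet and mutant allele fraction.
    import math

    def copies_per_droplet(positive: int, total: int) -> float:
        """lambda = -ln(fraction of negative droplets)."""
        return -math.log((total - positive) / total)

    total_droplets = 15000                       # hypothetical counts, not study data
    mutant_positive, wildtype_positive = 9, 13200

    lam_mut = copies_per_droplet(mutant_positive, total_droplets)
    lam_wt = copies_per_droplet(wildtype_positive, total_droplets)

    droplet_volume_nl = 0.85                     # assumed droplet volume
    conc_mut = lam_mut / (droplet_volume_nl * 1e-3)      # copies per microlitre
    mutant_fraction = lam_mut / (lam_mut + lam_wt)

    print(f"mutant copies/uL ~ {conc_mut:.1f}, mutant allele fraction ~ {mutant_fraction:.5f}")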

  10. Efficient Genotyping of KRAS Mutant Non-Small Cell Lung Cancer Using a Multiplexed Droplet Digital PCR Approach

    PubMed Central

    Pender, Alexandra; Garcia-Murillas, Isaac; Rana, Sareena; Cutts, Rosalind J.; Kelly, Gavin; Fenwick, Kerry; Kozarewa, Iwanka; Gonzalez de Castro, David; Bhosle, Jaishree; O’Brien, Mary; Turner, Nicholas C.; Popat, Sanjay; Downward, Julian

    2015-01-01

    Droplet digital PCR (ddPCR) can be used to detect low frequency mutations in oncogene-driven lung cancer. The range of KRAS point mutations observed in NSCLC necessitates a multiplex approach to efficient mutation detection in circulating DNA. Here we report the design and optimisation of three discriminatory ddPCR multiplex assays investigating nine different KRAS mutations using PrimePCR™ ddPCR™ Mutation Assays and the Bio-Rad QX100 system. Together these mutations account for 95% of the nucleotide changes found in KRAS in human cancer. Multiplex reactions were optimised on genomic DNA extracted from KRAS mutant cell lines and tested on DNA extracted from fixed tumour tissue from a cohort of lung cancer patients without prior knowledge of the specific KRAS genotype. The multiplex ddPCR assays had a limit of detection of better than 1 mutant KRAS molecule in 2,000 wild-type KRAS molecules, which compared favourably with a limit of detection of 1 in 50 for next generation sequencing and 1 in 10 for Sanger sequencing. Multiplex ddPCR assays thus provide a highly efficient methodology to identify KRAS mutations in lung adenocarcinoma. PMID:26413866

  11. Visual grading characteristics and ordinal regression analysis during optimisation of CT head examinations.

    PubMed

    Zarb, Francis; McEntee, Mark F; Rainford, Louise

    2015-06-01

    To evaluate visual grading characteristics (VGC) and ordinal regression analysis during head CT optimisation as a potential alternative to visual grading assessment (VGA), traditionally employed to score anatomical visualisation. Patient images (n = 66) were obtained using current and optimised imaging protocols from two CT suites: a 16-slice scanner at the national Maltese centre for trauma and a 64-slice scanner in a private centre. Local resident radiologists (n = 6) performed VGA, followed by VGC and ordinal regression analysis. VGC alone indicated that the optimised protocols had image quality similar to the current protocols. Ordinal logistic regression analysis provided an in-depth, criterion-by-criterion evaluation, allowing the selective implementation of the protocols. The local radiology review panel supported the implementation of optimised protocols for brain CT examinations (including trauma) in one centre, achieving radiation dose reductions ranging from 24% to 36%. In the second centre a 29% reduction in radiation dose was achieved for follow-up cases. The combined use of VGC and ordinal logistic regression analysis led to clinical decisions being taken on the implementation of the optimised protocols. This improved method of image quality analysis provided the evidence to support imaging protocol optimisation, resulting in significant radiation dose savings. • There is a need for scientifically based image quality evaluation during CT optimisation. • VGC and ordinal regression analysis in combination led to better informed clinical decisions. • VGC and ordinal regression analysis led to dose reductions without compromising diagnostic efficacy.
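    Ordinal (proportional-odds) logistic regression of visual grading scores can be sketched as follows, assuming statsmodels' OrderedModel and synthetic scores for current versus optimised protocols; the study's clinical data and grading criteria are not reproduced here.

    # Proportional-odds regression sketch on toy visual-grading scores.
    import numpy as np
    import pandas as pd
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    rng = np.random.default_rng(0)
    n = 120
    protocol = rng.integers(0, 2, n)                    # 0 = current, 1 = optimised
    # Ordinal image-quality score 1-4, loosely shifted by protocol purely for illustration
    score = np.clip(rng.integers(1, 5, n) + protocol * rng.integers(0, 2, n), 1, 4)

    df = pd.DataFrame({"score": pd.Categorical(score, categories=[1, 2, 3, 4], ordered=True),
                       "optimised": protocol})

    model = OrderedModel(df["score"], df[["optimised"]], distr="logit")
    result = model.fit(method="bfgs", disp=False)
    print(result.summary())
    print("odds ratio for optimised protocol:", float(np.exp(result.params["optimised"])))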

  12. Design of a compact antenna with flared groundplane for a wearable breast hyperthermia system.

    PubMed

    Curto, Sergio; Prakash, Punit

    2015-01-01

    Currently available microwave hyperthermia systems for breast cancer treatment do not conform to the intact breast and provide limited control of heating patterns, thereby hindering effective treatment. A compact patch antenna with a flared groundplane that may be integrated within a wearable hyperthermia system for the treatment of intact breast disease is proposed. A 3D simulation-based approach was employed to optimise the antenna design with the objective of maximising the hyperthermia treatment volume (41 °C isotherm) while maintaining good impedance matching. The optimised antenna design was fabricated and experimentally evaluated with ex vivo tissue measurements. The optimised compact antenna yielded a -10 dB bandwidth of 90 MHz centred at 915 MHz, and was capable of creating hyperthermia treatment volumes up to 14.4 cm³ (31 mm × 28 mm × 32 mm) with an input power of 15 W. Experimentally measured reflection coefficients and transient temperature profiles were in good agreement with simulated profiles. Variations of ±50% in blood perfusion yielded variations in the treatment volume of up to 11.5%. When compared to an antenna with a similar patch element employing a conventional rectangular groundplane, the antenna with the flared groundplane afforded a 22.3% reduction in the power required to reach the same temperature, and yielded 2.4 times larger treatment volumes. The proposed patch antenna with a flared groundplane may be integrated within a wearable applicator for hyperthermia treatment of intact breast targets and has the potential to improve efficiency, increase patient comfort and ultimately improve clinical outcomes.

  13. Brute force meets Bruno force in parameter optimisation: introduction of novel constraints for parameter accuracy improvement by symbolic computation.

    PubMed

    Nakatsui, M; Horimoto, K; Lemaire, F; Ürgüplü, A; Sedoglavic, A; Boulier, F

    2011-09-01

    Recent remarkable advances in computer performance have enabled us to estimate parameter values by the sheer power of numerical computation, the so-called 'Brute force', resulting in the high-speed simultaneous estimation of a large number of parameter values. However, these advancements have not been fully utilised to improve the accuracy of parameter estimation. Here the authors review a novel method for parameter estimation using the power of symbolic computation, 'Bruno force', named after Bruno Buchberger, who introduced the Gröbner basis. In the method, objective functions combining symbolic computation techniques are formulated. First, the authors utilise a symbolic computation technique, differential elimination, which symbolically reduces the system of differential equations in a given model to an equivalent system. Second, since this equivalent system is frequently composed of large equations, it is further simplified by another symbolic computation. The performance of the authors' method for improving parameter accuracy is illustrated by two representative models in biology, a simple cascade model and a negative feedback model, in comparison with previous numerical methods. Finally, the limits and extensions of the authors' method are discussed, in terms of the possible power of 'Bruno force' for the development of a new horizon in parameter estimation.

  14. Molecular materials for high performance OPV devices (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Jones, David J.

    2016-09-01

    We recently reported a high-performing molecular donor for OPV devices based on a benzodithiophene core, a terthiophene bridge and a rhodanine acceptor (BTR) [1]. In this work we optimized the side-chain placement of a known chromophore by ensuring the thiophene hexyl side-chains are regioregular, which should allow the chromophore to lie flat. The unexpected outcome was a nematic liquid crystalline material with significantly improved performance (now 9.6% PCE), excellent charge transport properties, reduced geminate recombination rates and excellent performance with active layers up to 400 nm. Three phase changes were indicated by DSC analysis, with a melt to a crystalline domain at 175 °C, a transition to a nematic liquid crystalline domain at 186 °C and an isotropic melt at 196 °C. In our desire to better understand the structure-property relationships of this class of p-type organic semiconductor we have synthesized a series of analogues in which the length of the chromophore has been altered through modification of the oligothiophene bridge to generate the monothiophene (BMR), the bisthiophene (BBR), the known terthiophene (BTR), the quaterthiophene (BQR) and the pentathiophene (BPR). BMR, BBR and BPR have clean melting points, while BQR, like BTR, shows a complicated series of phase transitions. Device efficiencies after solvent vapour annealing are BMR (3.5%), BBR (6.0%), BTR (9.3%), BQR (9.4%) and BPR (8.7%), all unoptimised. OPV devices with BTR in the active layer are not stable under thermal annealing; however, the bridge-extended BQR and BPR form thermally stable devices. We are currently optimising these devices, but initial results indicate PCEs >9% for thermally annealed devices containing BQR, while BPR devices have not yet been optimised and have PCEs >8%. In order to develop the device performance we have included BQR in ternary devices with the commercially available PTB7-Th and we report device efficiencies of over 10.5%. We are currently optimising device assembly and annealing conditions and relating these back to key materials properties. I will discuss the development of these new materials, their materials properties, structural data and optimised device performance. I will also examine the effect of chromophore length on the nematic liquid crystalline properties and on materials development and performance, resulting in materials with >9% PCE in OPV. [1] Sun, K.; Xiao, Z.; Lu, S.; Zajaczkowski, W.; Pisula, W.; Hanssen, E.; White, J. M.; Williamson, R. M.; Subbiah, J.; Ouyang, J.; Holmes, A. B.; Wong, W. W.; Jones, D. J., Nat. Commun. 2015, 6, 6013. DOI: 10.1038/ncomms7013

  15. Single tube genotyping of sickle cell anaemia using PCR-based SNP analysis

    PubMed Central

    Waterfall, Christy M.; Cobb, Benjamin D.

    2001-01-01

    Allele-specific amplification (ASA) is a generally applicable technique for the detection of known single nucleotide polymorphisms (SNPs), deletions, insertions and other sequence variations. Conventionally, two reactions are required to determine the zygosity of DNA in a two-allele system, along with significant upstream optimisation to define the specific test conditions. Here, we combine single tube bi-directional ASA with a ‘matrix-based’ optimisation strategy, speeding up the whole process in a reduced reaction set. We use sickle cell anaemia as our model SNP system, a genetic disease that is currently screened using ASA methods. Discriminatory conditions were rapidly optimised enabling the unambiguous identification of DNA from homozygous sickle cell patients (HbS/S), heterozygous carriers (HbA/S) or normal DNA in a single tube. Simple downstream mathematical analyses based on product yield across the optimisation set allow an insight into the important aspects of priming competition and component interactions in this competitive PCR. This strategy can be applied to any polymorphism, defining specific conditions using a multifactorial approach. The inherent simplicity and low cost of this PCR-based method validates bi-directional ASA as an effective tool in future clinical screening and pharmacogenomic research where more expensive fluorescence-based approaches may not be desirable. PMID:11726702

  16. Cognitive Load Theory vs. Constructivist Approaches: Which Best Leads to Efficient, Deep Learning?

    ERIC Educational Resources Information Center

    Vogel-Walcutt, J. J.; Gebrim, J. B.; Bowers, C.; Carper, T. M.; Nicholson, D.

    2011-01-01

    Computer-assisted learning, in the form of simulation-based training, is heavily focused upon by the military. Because computer-based learning offers highly portable, reusable, and cost-efficient training options, the military has dedicated significant resources to the investigation of instructional strategies that improve learning efficiency…

  17. Elitist Binary Wolf Search Algorithm for Heuristic Feature Selection in High-Dimensional Bioinformatics Datasets.

    PubMed

    Li, Jinyan; Fong, Simon; Wong, Raymond K; Millham, Richard; Wong, Kelvin K L

    2017-06-28

    Due to the high-dimensional characteristics of such datasets, we propose a new method based on the Wolf Search Algorithm (WSA) for optimising the feature selection problem. The proposed approach uses the natural strategy established by Charles Darwin; that is, 'It is not the strongest of the species that survives, but the most adaptable'. This means that, in the evolution of a swarm, the elitists are motivated to quickly obtain more and better resources. The memory function helps the proposed method avoid repeated searches of the worst positions in order to enhance the effectiveness of the search, while the binary strategy simplifies the feature selection problem into a problem similar to function optimisation. Furthermore, the wrapper strategy couples these strengthened wolves with an extreme learning machine classifier to find a sub-dataset with a reasonable number of features that offers the maximum correctness of global classification models. The experimental results from the six public high-dimensional bioinformatics datasets tested demonstrate that the proposed method can outperform some conventional feature selection methods by up to 29% in classification accuracy, and can reduce computational time by up to 99.81% compared with previous WSAs.
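    The wrapper idea, binary positions encoding feature subsets whose fitness is cross-validated accuracy, can be sketched generically as below. This is a simplified binary swarm with random bit moves and a k-NN classifier, not the authors' elitist Wolf Search Algorithm with an extreme learning machine; swarm size, move rates and dataset are assumptions.

    # Simplified binary swarm wrapper for feature selection (illustrative only).
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_breast_cancer(return_X_y=True)
    rng = np.random.default_rng(42)
    n_features, n_wolves, n_iters = X.shape[1], 12, 30


    def fitness(mask: np.ndarray) -> float:
        """Cross-validated accuracy of the feature subset encoded by the binary mask."""
        if not mask.any():
            return 0.0
        clf = KNeighborsClassifier(n_neighbors=5)
        return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()


    swarm = rng.integers(0, 2, size=(n_wolves, n_features))
    scores = np.array([fitness(w) for w in swarm])

    for _ in range(n_iters):
        best = swarm[scores.argmax()]
        for i in range(n_wolves):
            candidate = swarm[i].copy()
            copy_bits = rng.random(n_features) < 0.3      # move towards the elite
            candidate[copy_bits] = best[copy_bits]
            flip = rng.random(n_features) < 0.05          # random exploration flips
            candidate[flip] ^= 1
            cand_score = fitness(candidate)
            if cand_score > scores[i]:                    # greedy acceptance
                swarm[i], scores[i] = candidate, cand_score

    best = swarm[scores.argmax()]
    print(f"selected {int(best.sum())}/{n_features} features, CV accuracy {scores.max():.3f}")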

  18. An Integrated Qualitative and Quantitative Biochemical Model Learning Framework Using Evolutionary Strategy and Simulated Annealing.

    PubMed

    Wu, Zujian; Pang, Wei; Coghill, George M

    2015-01-01

    Both qualitative and quantitative model learning frameworks for biochemical systems have been studied in computational systems biology. In this research, after introducing two forms of pre-defined component patterns to represent biochemical models, we propose an integrative qualitative and quantitative modelling framework for inferring biochemical systems. In the proposed framework, interactions between reactants in the candidate models for a target biochemical system are evolved and eventually identified by the application of a qualitative model learning approach with an evolution strategy. The kinetic rates of the models generated from qualitative model learning are then further optimised by employing a quantitative approach with simulated annealing. Experimental results indicate that our proposed integrative framework is able to learn the relationships between biochemical reactants qualitatively and to make the model replicate the behaviours of the target system by optimising the kinetic rates quantitatively. Moreover, potential reactants of a target biochemical system can be discovered by hypothesising complex reactants in the synthetic models. Based on the biochemical models learned from the proposed framework, biologists can further perform experimental studies in the wet laboratory. In this way, natural biochemical systems can be better understood.
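    The quantitative step, tuning kinetic rates by simulated annealing, can be illustrated with a deliberately simple one-parameter model (A -> B with rate k); the model, target data and cooling schedule below are assumptions, not the paper's system.

    # Simulated-annealing sketch for fitting a kinetic rate to target time-course data.
    import numpy as np

    t = np.linspace(0.0, 10.0, 50)
    true_k = 0.7
    target = 1.0 - np.exp(-true_k * t)          # synthetic target behaviour of [B]


    def loss(k: float) -> float:
        model = 1.0 - np.exp(-k * t)
        return float(np.mean((model - target) ** 2))


    rng = np.random.default_rng(1)
    k, temperature, cooling = 2.0, 1.0, 0.95    # initial guess, temperature, cooling factor

    for step in range(500):
        proposal = abs(k + rng.normal(scale=0.1))
        delta = loss(proposal) - loss(k)
        # accept improvements always, worse moves with Boltzmann probability
        if delta < 0 or rng.random() < np.exp(-delta / temperature):
            k = proposal
        temperature *= cooling

    print(f"estimated rate k ~ {k:.3f} (true {true_k})")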

  19. Energy Efficiency Challenges of 5G Small Cell Networks.

    PubMed

    Ge, Xiaohu; Yang, Jing; Gharavi, Hamid; Sun, Yang

    2017-05-01

    The deployment of a large number of small cells poses new challenges to energy efficiency, which has often been ignored in fifth generation (5G) cellular networks. While massive multiple-input multiple-output (MIMO) will reduce the transmission power at the expense of higher computational cost, the question remains as to whether computation or transmission power is more important to the energy efficiency of 5G small cell networks. Thus, the main objective of this paper is to investigate the computation power based on the Landauer principle. Simulation results reveal that more than 50% of the energy is consumed by the computation power at 5G small cell base stations (BSs). Moreover, the computation power of a 5G small cell BS can approach 800 W when massive MIMO (e.g., 128 antennas) is deployed to transmit high-volume traffic. This clearly indicates that computation power optimization can play a major role in the energy efficiency of small cell networks.
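    The Landauer principle referenced above gives a lower bound of k_B*T*ln(2) joules per irreversible bit operation. The short sketch below only evaluates that bound for an assumed bit-operation rate; the paper's base-station power model is considerably more detailed.

    # Landauer lower bound on computation energy (illustrative rate, not study data).
    import math

    K_B = 1.380649e-23          # Boltzmann constant, J/K
    T = 300.0                   # ambient temperature, K
    bit_ops_per_second = 1e20   # hypothetical aggregate rate at a small-cell BS

    energy_per_bit = K_B * T * math.log(2)             # ~2.87e-21 J at 300 K
    landauer_power_w = energy_per_bit * bit_ops_per_second

    print(f"Landauer limit: {energy_per_bit:.3e} J/bit, "
          f"{landauer_power_w:.3f} W at {bit_ops_per_second:.0e} bit ops/s")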

  20. Energy Efficiency Challenges of 5G Small Cell Networks

    PubMed Central

    Ge, Xiaohu; Yang, Jing; Gharavi, Hamid; Sun, Yang

    2017-01-01

    The deployment of a large number of small cells poses new challenges to energy efficiency, which has often been ignored in fifth generation (5G) cellular networks. While massive multiple-input multiple-output (MIMO) will reduce the transmission power at the expense of higher computational cost, the question remains as to whether computation or transmission power is more important to the energy efficiency of 5G small cell networks. Thus, the main objective of this paper is to investigate the computation power based on the Landauer principle. Simulation results reveal that more than 50% of the energy is consumed by the computation power at 5G small cell base stations (BSs). Moreover, the computation power of a 5G small cell BS can approach 800 W when massive MIMO (e.g., 128 antennas) is deployed to transmit high-volume traffic. This clearly indicates that computation power optimization can play a major role in the energy efficiency of small cell networks. PMID:28757670

  1. Slicing Method for curved façade and window extraction from point clouds

    NASA Astrophysics Data System (ADS)

    Iman Zolanvari, S. M.; Laefer, Debra F.

    2016-09-01

    Laser scanning technology is a fast and reliable method to survey structures. However, the automatic conversion of such data into solid models for computation remains a major challenge, especially where non-rectilinear features are present. Since openings and the overall dimensions of buildings are the most critical elements in computational models for structural analysis, this article introduces the Slicing Method as a new, computationally efficient method for extracting overall façade and window boundary points and reconstructing a façade into a geometry compatible with computational modelling. After finding a principal plane, the technique slices a façade into limited portions, with each slice representing a unique, imaginary section passing through the building. This is done along a façade's principal axes to segregate window and door openings from structural portions of the load-bearing masonry walls. The method detects each opening area's boundaries, as well as the overall boundary of the façade, in part by using a one-dimensional projection to accelerate processing. The slicing density was optimised as 14.3 slices per vertical metre of building and 25 slices per horizontal metre of building, irrespective of building configuration or complexity. The proposed procedure was validated by its application to three highly decorative, historic brick buildings. Accuracy in excess of 93% was achieved with no manual intervention on highly complex buildings, and nearly 100% on simple ones. Furthermore, computational times were less than 3 s for data sets of up to 2.6 million points, while similar existing approaches required more than 16 h for such datasets.
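    The one-dimensional projection at the core of the Slicing Method can be illustrated, in much simplified form, by projecting the points of one horizontal slice onto the façade's horizontal axis and flagging low-density intervals as candidate openings; the synthetic façade, bin width and density threshold below are illustrative assumptions only.

    # One-dimensional projection sketch: openings appear as low point-density intervals.
    import numpy as np

    rng = np.random.default_rng(3)
    # Synthetic 10 m wide facade slice: dense masonry everywhere except two window openings
    x = rng.uniform(0.0, 10.0, 20000)
    wall = ~(((x > 2.0) & (x < 3.2)) | ((x > 6.5) & (x < 7.8)))
    x = x[wall | (rng.random(x.size) < 0.03)]        # keep a few stray returns inside openings

    bin_width = 0.1
    bins = np.arange(0.0, 10.0 + bin_width, bin_width)
    density, _ = np.histogram(x, bins=bins)

    is_opening = density < 0.2 * np.median(density)  # low point density => candidate opening
    edges = np.flatnonzero(np.diff(is_opening.astype(int)))
    print("candidate opening boundaries (m):", (bins[1:][edges]).round(2))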

  2. B and V photometry and analysis of the eclipsing binary RZ CAS

    NASA Astrophysics Data System (ADS)

    Riazi, N.; Bagheri, M. R.; Faghihi, F.

    1994-01-01

    Photoelectric light curves of the eclipsing binary RZ Cas are presented for B and V filters. The light curves are analyzed for light and geometrical elements, starting with a previously suggested preliminary method. The approximate results thus obtained are then optimised through the Wilson-Devinney computer programs.

  3. A review and exploration of sociotechnical ergonomics.

    PubMed

    Dirkse van Schalkwyk, Riaan; Steenkamp, Rigard J

    2017-09-01

    A holistic review of ergonomic history shows that science remains important for general occupational health and safety (OSH), the broad society, culture, politics and the design of everyday things. Science provides an unconventional and multifaceted viewpoint exploring ergonomics from a social, corporate and OSH perspective. Ergonomic solutions from this mindset may redefine the science, and it will change with companies that change within this socially hyper-connected world. Authentic corporate social responsibility will counter 'misleadership' by not approaching ergonomics with an afterthought. The review concludes that ergonomics will be stronger with social respect and ergonomic thinking based on the optimisation of anthropometric data, digital human models, computer-aided tools, self-empowerment, job enrichment, work enlargement, physiology, industrial psychology, cybernetic ergonomics, operations design, ergonomic-friendly process technologies, ergonomic empowerment, behaviour-based safety, outcome-based employee wellness and fatigue risk management solutions, to mention a few.

  4. Image restoration for three-dimensional fluorescence microscopy using an orthonormal basis for efficient representation of depth-variant point-spread functions

    PubMed Central

    Patwary, Nurmohammed; Preza, Chrysanthe

    2015-01-01

    A depth-variant (DV) image restoration algorithm for wide field fluorescence microscopy, using an orthonormal basis decomposition of DV point-spread functions (PSFs), is investigated in this study. The efficient PSF representation is based on a previously developed principal component analysis (PCA), which is computationally intensive. We present an approach developed to reduce the number of DV PSFs required for the PCA computation, thereby making the PCA-based approach computationally tractable for thick samples. Restoration results from both synthetic and experimental images are consistent and show that the proposed algorithm efficiently addresses depth-induced aberration using a small number of principal components. Comparison of the PCA-based algorithm with a previously developed strata-based DV restoration algorithm demonstrates that the proposed method improves accuracy by 50% and simultaneously reduces the processing time by 64% using comparable computational resources. PMID:26504634
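    The orthonormal-basis idea can be sketched with a stack of synthetic Gaussian PSFs that widen with depth: each depth's PSF is approximated as a mean PSF plus a weighted sum of a few principal-component images. The PSF model and component count below are assumptions, not the study's measured DV PSFs.

    # PCA/SVD decomposition of a depth-variant PSF stack (synthetic Gaussian PSFs).
    import numpy as np

    size, depths = 33, 40
    yy, xx = np.mgrid[-size // 2 + 1:size // 2 + 1, -size // 2 + 1:size // 2 + 1]
    sigmas = np.linspace(1.0, 4.0, depths)                    # aberration grows with depth
    psfs = np.stack([np.exp(-(xx**2 + yy**2) / (2 * s**2)) for s in sigmas])
    psfs /= psfs.sum(axis=(1, 2), keepdims=True)              # normalise each PSF

    flat = psfs.reshape(depths, -1)
    mean_psf = flat.mean(axis=0)
    u, s, vt = np.linalg.svd(flat - mean_psf, full_matrices=False)

    n_components = 3
    coeffs = u[:, :n_components] * s[:n_components]           # per-depth weights
    approx = mean_psf + coeffs @ vt[:n_components]            # reconstructed PSF stack

    explained = (s[:n_components] ** 2).sum() / (s ** 2).sum()
    err = np.abs(approx - flat).max()
    print(f"{n_components} components explain {explained:.1%} of variance, "
          f"max reconstruction error {err:.2e}")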

  5. Structural-electrical coupling optimisation for radiating and scattering performances of active phased array antenna

    NASA Astrophysics Data System (ADS)

    Wang, Congsi; Wang, Yan; Wang, Zhihai; Wang, Meng; Yuan, Shuai; Wang, Weifeng

    2018-04-01

    It is well known that calculating and reducing the radar cross section (RCS) of an active phased array antenna (APAA) are both difficult and complicated tasks, and balancing radiating and scattering performance while the RCS is reduced remains unresolved. Therefore, this paper develops a coupled structural-scattering array factor model of the APAA based on the phase errors of the radiating elements generated by structural distortion and installation errors of the array. To obtain the optimal radiating and scattering performance, an integrated optimisation model is built to optimise the installation height of all the radiating elements in the normal direction of the array, in which the particle swarm optimisation method is adopted and the gain loss and scattering array factor are selected as the fitness function. The simulation indicates that the proposed coupling model and integrated optimisation method can effectively decrease the RCS while the necessary radiating performance is simultaneously guaranteed, which demonstrates important application value in the engineering design and structural evaluation of APAAs.

  6. DryLab® optimised two-dimensional high performance liquid chromatography for differentiation of ephedrine and pseudoephedrine based methamphetamine samples.

    PubMed

    Andrighetto, Luke M; Stevenson, Paul G; Pearson, James R; Henderson, Luke C; Conlan, Xavier A

    2014-11-01

    In-silico optimised two-dimensional high performance liquid chromatographic (2D-HPLC) separations of a model methamphetamine seizure sample are described, where an excellent match between simulated and real separations was observed. Targeted separation of the model compounds was completed with significantly reduced method development time. This separation was carried out in the heart-cutting mode of 2D-HPLC, where C18 columns were used in both dimensions, taking advantage of the selectivity difference between methanol and acetonitrile as the mobile phases. This method development protocol is most significant when optimising the separation of chemically similar compounds, as it eliminates potentially hours of trial-and-error injections to identify the optimised experimental conditions. After only four screening injections the gradient profile for both 2D-HPLC dimensions could be optimised via simulations, ensuring the baseline resolution of the diastereomers (ephedrine and pseudoephedrine) in 9.7 min. Depending on which diastereomer is present, the potential synthetic pathway can be categorised.

  7. Improved packing of protein side chains with parallel ant colonies

    PubMed Central

    2014-01-01

    Introduction The accurate packing of protein side chains is important for many computational biology problems, such as ab initio protein structure prediction, homology modelling, and protein design and ligand docking applications. Many existing solutions model this task as a computational optimisation problem. Beyond the design of search algorithms, most solutions suffer from an inaccurate energy function for judging whether a prediction is good or bad. Even if the search has found the lowest energy, there is no certainty of obtaining the protein structures with correct side chains. Methods We present a side-chain modelling method, pacoPacker, which uses a parallel ant colony optimisation strategy based on sharing a single pheromone matrix. This parallel approach combines different sources of energy functions and generates protein side-chain conformations with the lowest energies jointly determined by the various energy functions. We further optimised the selected rotamers to construct subrotamers by rotamer minimisation, which reasonably improved the discreteness of the rotamer library. Results We focused on improving the accuracy of side-chain conformation prediction. For a testing set of 442 proteins, 87.19% of X1 and 77.11% of X12 angles were predicted correctly within 40° of the X-ray positions. We compared the accuracy of pacoPacker with state-of-the-art methods, such as CIS-RR and SCWRL4, and analysed the results from different perspectives, in terms of whole protein chains and individual residues. In this comprehensive benchmark, 51.5% of proteins within a length of 400 amino acids predicted by pacoPacker were superior to the results of both CIS-RR and SCWRL4 simultaneously. Finally, we also showed the advantage of using the subrotamer strategy. All results confirmed that our parallel approach is competitive with state-of-the-art solutions for packing side chains. Conclusions This parallel approach combines various sources of search intelligence and energy functions to pack protein side chains. It provides a framework for combining objective functions of differing accuracy and usefulness by designing parallel heuristic search algorithms. PMID:25474164

  8. Principal component of explained variance: An efficient and optimal data dimension reduction framework for association studies.

    PubMed

    Turgeon, Maxime; Oualkacha, Karim; Ciampi, Antonio; Miftah, Hanane; Dehghan, Golsa; Zanke, Brent W; Benedet, Andréa L; Rosa-Neto, Pedro; Greenwood, Celia Mt; Labbe, Aurélie

    2018-05-01

    The genomics era has led to an increase in the dimensionality of data collected in the investigation of biological questions. In this context, dimension-reduction techniques can be used to summarise high-dimensional signals into low-dimensional ones, to further test for association with one or more covariates of interest. This paper revisits one such approach, previously known as principal component of heritability and renamed here as principal component of explained variance (PCEV). As its name suggests, the PCEV seeks a linear combination of outcomes in an optimal manner, by maximising the proportion of variance explained by one or several covariates of interest. By construction, this method optimises power; however, due to its computational complexity, it has unfortunately received little attention in the past. Here, we propose a general analytical PCEV framework that builds on the assets of the original method, i.e. conceptually simple and free of tuning parameters. Moreover, our framework extends the range of applications of the original procedure by providing a computationally simple strategy for high-dimensional outcomes, along with exact and asymptotic testing procedures that drastically reduce its computational cost. We investigate the merits of the PCEV using an extensive set of simulations. Furthermore, the use of the PCEV approach is illustrated using three examples taken from the fields of epigenetics and brain imaging.

  9. Analysis of dynamic cerebral autoregulation using an ARX model based on arterial blood pressure and middle cerebral artery velocity simulation.

    PubMed

    Liu, Y; Allen, R

    2002-09-01

    The study aimed to model the cerebrovascular system, using a linear ARX model based on data simulated by a comprehensive physiological model, and to assess the range of applicability of linear parametric models. Arterial blood pressure (ABP) and middle cerebral arterial blood flow velocity (MCAV) were measured from 11 subjects non-invasively, following step changes in ABP, using the thigh cuff technique. By optimising parameters associated with autoregulation, using a non-linear optimisation technique, the physiological model showed a good performance (r = 0.83 ± 0.14) in fitting MCAV. An additional five sets of measured ABP of length 236 ± 154 s were acquired from a subject at rest. These were normalised and rescaled to coefficients of variation (CV = SD/mean) of 2% and 10% for model comparisons. Randomly generated Gaussian noise with standard deviation (SD) from 1% to 5% was added to both ABP and physiologically simulated MCAV (SMCAV), with 'normal' and 'impaired' cerebral autoregulation, to simulate the real measurement conditions. ABP and SMCAV were fitted by ARX modelling, and cerebral autoregulation was quantified by a 5 s recovery percentage R5% of the step responses of the ARX models. The study suggests that cerebral autoregulation can be assessed by computing the R5% of the step response of an ARX model of appropriate order, even when measurement noise is considerable.
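    A minimal sketch of the ARX approach, assuming low model orders, 10 Hz sampling, synthetic signals and one plausible definition of the 5 s recovery percentage (all assumptions, not the study's choices), is given below: fit the model by least squares with pressure as input and velocity as output, then read the recovery off the fitted model's step response.

    # Least-squares ARX fit and step-response recovery percentage (illustrative).
    import numpy as np

    fs, n = 10.0, 3000                          # 10 Hz sampling, 300 s of data
    rng = np.random.default_rng(7)
    u = rng.normal(size=n)                      # ABP fluctuations (input)
    y = np.zeros(n)                             # velocity (output)
    for k in range(1, n):                       # synthetic "true" system with regulation
        y[k] = 0.7 * y[k-1] + 1.0 * u[k] - 0.85 * u[k-1] + 0.02 * rng.normal()

    # ARX fit: y[k] = a1*y[k-1] + b0*u[k] + b1*u[k-1]
    phi = np.column_stack([y[:-1], u[1:], u[:-1]])
    a1, b0, b1 = np.linalg.lstsq(phi, y[1:], rcond=None)[0]

    # Step response of the fitted model (pressure step applied just after t = 0)
    steps = int(10 * fs)
    u_step = np.ones(steps); u_step[0] = 0.0
    y_hat = np.zeros(steps)
    for k in range(1, steps):
        y_hat[k] = a1 * y_hat[k-1] + b0 * u_step[k] + b1 * u_step[k-1]

    peak, final = y_hat.max(), y_hat[-1]
    r5 = 100.0 * (peak - y_hat[int(5 * fs)]) / (peak - final)   # recovery 5 s after the step
    print(f"a1={a1:.2f}, b0={b0:.2f}, b1={b1:.2f}, R5% = {r5:.1f}%")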

  10. Faster methods for estimating arc centre position during VAR and results from Ti-6Al-4V and INCONEL 718 alloys

    NASA Astrophysics Data System (ADS)

    Nair, B. G.; Winter, N.; Daniel, B.; Ward, R. M.

    2016-07-01

    Direct measurement of the flow of electric current during VAR is extremely difficult due to the aggressive environment, as the arc process itself controls the distribution of current. In previous studies the technique of "magnetic source tomography" was presented; this was shown to be effective but it used a computationally intensive iterative method to analyse the distribution of arc centre position. In this paper we present faster computational methods requiring less numerical optimisation to determine the centre position of a single distributed arc both numerically and experimentally. Numerical validation of the algorithms was performed on models, and experimental validation on measurements of titanium and nickel alloys (Ti-6Al-4V and INCONEL 718). The results are used to comment on the effects of process parameters on arc behaviour during VAR.

  11. Multi-Objectivising Combinatorial Optimisation Problems by Means of Elementary Landscape Decompositions.

    PubMed

    Ceberio, Josu; Calvo, Borja; Mendiburu, Alexander; Lozano, Jose A

    2018-02-15

    In the last decade, many works in combinatorial optimisation have shown that, due to the advances in multi-objective optimisation, the algorithms from this field could be used for solving single-objective problems as well. In this sense, a number of papers have proposed multi-objectivising single-objective problems in order to use multi-objective algorithms in their optimisation. In this article, we follow up this idea by presenting a methodology for multi-objectivising combinatorial optimisation problems based on elementary landscape decompositions of their objective function. Under this framework, each of the elementary landscapes obtained from the decomposition is considered as an independent objective function to optimise. In order to illustrate this general methodology, we consider four problems from different domains: the quadratic assignment problem and the linear ordering problem (permutation domain), the 0-1 unconstrained quadratic optimisation problem (binary domain), and the frequency assignment problem (integer domain). We implemented two widely known multi-objective algorithms, NSGA-II and SPEA2, and compared their performance with that of a single-objective GA. The experiments conducted on a large benchmark of instances of the four problems show that the multi-objective algorithms clearly outperform the single-objective approaches. Furthermore, a discussion on the results suggests that the multi-objective space generated by this decomposition enhances the exploration ability, thus permitting NSGA-II and SPEA2 to obtain better results in the majority of the tested instances.

  12. The optimal design of stepped wedge trials with equal allocation to sequences and a comparison to other trial designs.

    PubMed

    Thompson, Jennifer A; Fielding, Katherine; Hargreaves, James; Copas, Andrew

    2017-12-01

    Background/Aims We sought to optimise the design of stepped wedge trials with an equal allocation of clusters to sequences and explored sample size comparisons with alternative trial designs. Methods We developed a new expression for the design effect for a stepped wedge trial, assuming that observations are equally correlated within clusters and an equal number of observations in each period between sequences switching to the intervention. We minimised the design effect with respect to (1) the fraction of observations before the first and after the final sequence switches (the periods with all clusters in the control or intervention condition, respectively) and (2) the number of sequences. We compared the design effect of this optimised stepped wedge trial to the design effects of a parallel cluster-randomised trial, a cluster-randomised trial with baseline observations, and a hybrid trial design (a mixture of cluster-randomised trial and stepped wedge trial) with the same total cluster size for all designs. Results We found that a stepped wedge trial with an equal allocation to sequences is optimised by obtaining all observations after the first sequence switches and before the final sequence switches to the intervention; this means that the first sequence remains in the control condition and the last sequence remains in the intervention condition for the duration of the trial. With this design, the optimal number of sequences is [Formula: see text], where [Formula: see text] is the cluster-mean correlation, [Formula: see text] is the intracluster correlation coefficient, and m is the total cluster size. The optimal number of sequences is small when the intracluster correlation coefficient and cluster size are small and large when the intracluster correlation coefficient or cluster size is large. A cluster-randomised trial remains more efficient than the optimised stepped wedge trial when the intracluster correlation coefficient or cluster size is small. A cluster-randomised trial with baseline observations always requires a larger sample size than the optimised stepped wedge trial. The hybrid design can always give an equally or more efficient design, but will be at most 5% more efficient. We provide a strategy for selecting a design if the optimal number of sequences is unfeasible. For a non-optimal number of sequences, the sample size may be reduced by allowing a proportion of observations before the first or after the final sequence has switched. Conclusion The standard stepped wedge trial is inefficient. To reduce sample sizes when a hybrid design is unfeasible, stepped wedge trial designs should have no observations before the first sequence switches or after the final sequence switches.

  13. Automatic optimisation of gamma dose rate sensor networks: The DETECT Optimisation Tool

    NASA Astrophysics Data System (ADS)

    Helle, K. B.; Müller, T. O.; Astrup, P.; Dyve, J. E.

    2014-05-01

    Fast delivery of comprehensive information on the radiological situation is essential for decision-making in nuclear emergencies. Most national radiological agencies in Europe employ gamma dose rate sensor networks to monitor radioactive pollution of the atmosphere. Sensor locations were often chosen using regular grids or according to administrative constraints. Nowadays, however, the choice can be based on more realistic risk assessment, as it is possible to simulate potential radioactive plumes. To support sensor planning, we developed the DETECT Optimisation Tool (DOT) within the scope of the EU FP 7 project DETECT. It evaluates the gamma dose rates that a proposed set of sensors might measure in an emergency and uses this information to optimise the sensor locations. The gamma dose rates are taken from a comprehensive library of simulations of atmospheric radioactive plumes from 64 source locations. These simulations cover the whole European Union, so the DOT allows evaluation and optimisation of sensor networks for all EU countries, as well as evaluation of fencing sensors around possible sources. Users can choose from seven cost functions to evaluate the capability of a given monitoring network for early detection of radioactive plumes or for the creation of dose maps. The DOT is implemented as a stand-alone easy-to-use JAVA-based application with a graphical user interface and an R backend. Users can run evaluations and optimisations, and display, store and download the results. The DOT runs on a server and can be accessed via common web browsers; it can also be installed locally.

  14. Optimization of intrinsic layer thickness, dopant layer thickness and concentration for a-SiC/a-SiGe multilayer solar cell efficiency performance using Silvaco software

    NASA Astrophysics Data System (ADS)

    Yuan, Wong Wei; Natashah Norizan, Mohd; Salwani Mohamad, Ili; Jamalullail, Nurnaeimah; Hidayah Saad, Nor

    2017-11-01

    Solar cells are expanding as a green, renewable alternative to conventional fossil-fuel electricity generation, but compared with other land-based electricity generators they are relative newcomers. Solar cells cover many applications, from low-power mobile devices to terrestrial installations, satellites and many more. To date, the highest efficiency solar cells are GaAs-based multilayer cells. However, this material is very expensive in fabrication and material costs compared with silicon, which is cheaper due to its abundant supply. Thus, this research is devoted to developing a multilayer solar cell by combining two different P-I-N structures based on silicon carbide and silicon germanium. The research focused on optimising the intrinsic layer thickness, the p-doped layer thickness and concentration, and the n-doped layer thickness and concentration to achieve the highest efficiency. As a result, both the single-layer a-SiC and a-SiGe cells showed efficiency improvements via parametric optimization, reaching 27.19% and 9.07%, respectively. The optimized parameters were then applied to both the SiC and SiGe P-I-N layers, resulting in a convincing efficiency of 33.80%.

  15. Modelling of human walking to optimise the function of ankle-foot orthosis in Guillan-Barré patients with drop foot.

    PubMed

    Jamshidi, N; Rostami, M; Najarian, S; Menhaj, M B; Saadatnia, M; Firooz, S

    2009-04-01

    This paper deals with the dynamic modelling of human walking. The main focus of this research was to optimise the function of the orthosis in patients with neuropathic feet, based on kinematic data from different categories of neuropathic patients. The patient's body in the sagittal plane was modelled to calculate the torques generated in the joints. The kinematic data required for the mathematical modelling of the patients were obtained from films of the patients captured by a high-speed camera, which were then analysed using motion analysis software. An inverse dynamic model was used for estimating the spring coefficient. In our dynamic model, the role of the muscles was substituted by adding a spring-damper between the shank and the ankle, which could compensate for muscle weakness through ankle-foot orthoses designed on the basis of the kinematic data obtained from the patients. The torque generated in the ankle was varied by changing the spring constant. Therefore, it was possible to decrease the torque generated in the muscles, which could lead to the design of more comfortable and efficient orthoses. In this research, unlike previous research activities, instead of studying the abnormal gait or modelling the ankle-foot orthosis separately, the function of the ankle-foot orthosis during abnormal gait has been quantitatively improved through a correction of the torque.

  16. Optimisation of Critical Infrastructure Protection: The SiVe Project on Airport Security

    NASA Astrophysics Data System (ADS)

    Breiing, Marcus; Cole, Mara; D'Avanzo, John; Geiger, Gebhard; Goldner, Sascha; Kuhlmann, Andreas; Lorenz, Claudia; Papproth, Alf; Petzel, Erhard; Schwetje, Oliver

    This paper outlines the scientific goals, ongoing work and first results of the SiVe research project on critical infrastructure security. The methodology is generic, while the pilot studies are chosen from airport security. The outline proceeds in three major steps: (1) building a threat scenario, (2) development of simulation models as scenario refinements, and (3) assessment of alternatives. Advanced techniques of systems analysis and simulation are employed to model relevant airport structures and processes as well as offences. Computer experiments are carried out to compare and optimise alternative solutions. The optimality analyses draw on approaches to quantitative risk assessment recently developed in the operational sciences. To exploit the advantages of the various techniques, an integrated simulation workbench is built up within the project.

  17. Computer-based learning: interleaving whole and sectional representation of neuroanatomy.

    PubMed

    Pani, John R; Chariker, Julia H; Naaz, Farah

    2013-01-01

    The large volume of material to be learned in biomedical disciplines requires optimizing the efficiency of instruction. In prior work with computer-based instruction of neuroanatomy, it was relatively efficient for learners to master whole anatomy and then transfer to learning sectional anatomy. It may, however, be more efficient to continuously integrate learning of whole and sectional anatomy. A study of computer-based learning of neuroanatomy was conducted to compare a basic transfer paradigm for learning whole and sectional neuroanatomy with a method in which the two forms of representation were interleaved (alternated). For all experimental groups, interactive computer programs supported an approach to instruction called adaptive exploration. Each learning trial consisted of time-limited exploration of neuroanatomy, self-timed testing, and graphical feedback. The primary result of this study was that interleaved learning of whole and sectional neuroanatomy was more efficient than the basic transfer method, without cost to long-term retention or generalization of knowledge to recognizing new images (Visible Human and MRI). Copyright © 2012 American Association of Anatomists.

  18. Computer-Based Learning: Interleaving Whole and Sectional Representation of Neuroanatomy

    PubMed Central

    Pani, John R.; Chariker, Julia H.; Naaz, Farah

    2015-01-01

    The large volume of material to be learned in biomedical disciplines requires optimizing the efficiency of instruction. In prior work with computer-based instruction of neuroanatomy, it was relatively efficient for learners to master whole anatomy and then transfer to learning sectional anatomy. It may, however, be more efficient to continuously integrate learning of whole and sectional anatomy. A study of computer-based learning of neuroanatomy was conducted to compare a basic transfer paradigm for learning whole and sectional neuroanatomy with a method in which the two forms of representation were interleaved (alternated). For all experimental groups, interactive computer programs supported an approach to instruction called adaptive exploration. Each learning trial consisted of time-limited exploration of neuroanatomy, self-timed testing, and graphical feedback. The primary result of this study was that interleaved learning of whole and sectional neuroanatomy was more efficient than the basic transfer method, without cost to long-term retention or generalization of knowledge to recognizing new images (Visible Human and MRI). PMID:22761001

  19. Determination of optimal ultrasound planes for the initialisation of image registration during endoscopic ultrasound-guided procedures.

    PubMed

    Bonmati, Ester; Hu, Yipeng; Gibson, Eli; Uribarri, Laura; Keane, Geri; Gurusami, Kurinchi; Davidson, Brian; Pereira, Stephen P; Clarkson, Matthew J; Barratt, Dean C

    2018-06-01

    Navigation of endoscopic ultrasound (EUS)-guided procedures of the upper gastrointestinal (GI) system can be technically challenging due to the small fields-of-view of ultrasound and optical devices, as well as the anatomical variability and limited number of orienting landmarks during navigation. Co-registration of an EUS device and a pre-procedure 3D image can enhance the ability to navigate. However, the fidelity of this contextual information depends on the accuracy of registration. The purpose of this study was to develop and test the feasibility of a simulation-based planning method for pre-selecting patient-specific EUS-visible anatomical landmark locations to maximise the accuracy and robustness of a feature-based multimodality registration method. A registration approach was adopted in which landmarks are registered to anatomical structures segmented from the pre-procedure volume. The predicted target registration errors (TREs) of EUS-CT registration were estimated using simulated visible anatomical landmarks and a Monte Carlo simulation of landmark localisation error. The optimal planes were selected based on the 90th percentile of TREs, which provide a robust and more accurate EUS-CT registration initialisation. The method was evaluated by comparing the accuracy and robustness of registrations initialised using optimised planes versus non-optimised planes using manually segmented CT images and simulated ([Formula: see text]) or retrospective clinical ([Formula: see text]) EUS landmarks. The results show a lower 90th percentile TRE when registration is initialised using the optimised planes compared with a non-optimised initialisation approach (p value [Formula: see text]). The proposed simulation-based method to find optimised EUS planes and landmarks for EUS-guided procedures may have the potential to improve registration accuracy. Further work will investigate applying the technique in a clinical setting.
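    The Monte Carlo estimation of TREs can be sketched as follows: perturb candidate landmark positions with Gaussian localisation error, fit a rigid point-based registration each time, and report the 90th percentile of the error at a target point. The landmark geometry, error magnitude and target location below are hypothetical, and the Kabsch fit stands in for the study's feature-based registration.

    # Monte Carlo estimate of target registration error (TRE) from landmark localisation error.
    import numpy as np

    def rigid_fit(src: np.ndarray, dst: np.ndarray):
        """Least-squares rigid transform (Kabsch) mapping src points onto dst."""
        sc, dc = src.mean(axis=0), dst.mean(axis=0)
        u, _, vt = np.linalg.svd((src - sc).T @ (dst - dc))
        r = vt.T @ u.T
        if np.linalg.det(r) < 0:                       # avoid reflections
            vt[-1] *= -1
            r = vt.T @ u.T
        return r, dc - r @ sc

    rng = np.random.default_rng(11)
    landmarks = np.array([[0, 0, 0], [40, 5, 0], [10, 35, 5], [25, 20, 30]], float)  # mm
    target = np.array([60.0, 40.0, 20.0])              # point of clinical interest, mm
    sigma_mm = 2.0                                     # assumed localisation error

    tres = []
    for _ in range(2000):
        noisy = landmarks + rng.normal(scale=sigma_mm, size=landmarks.shape)
        r, t = rigid_fit(landmarks, noisy)             # registration driven by noisy picks
        tres.append(np.linalg.norm((r @ target + t) - target))

    print(f"median TRE {np.median(tres):.2f} mm, 90th percentile {np.percentile(tres, 90):.2f} mm")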

  20. Efficient 'Foton' electric-discharge KrCl laser

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Panchenko, Aleksei N; Tarasenko, Viktor F

    The design of the 'Foton' electric-discharge laser, optimised for operation on the basis of KrCl* molecules, and its energy parameters were investigated. At λ = 222 nm the radiation energy was up to 250 mJ per pulse. The specific output radiation energy was 2.5 J litre⁻¹ and the laser efficiency was in excess of 0.8%. The possibilities for further improvement of the characteristics of electric-discharge KrCl lasers are discussed.

  1. Efficient Mean Field Variational Algorithm for Data Assimilation (Invited)

    NASA Astrophysics Data System (ADS)

    Vrettas, M. D.; Cornford, D.; Opper, M.

    2013-12-01

    Data assimilation algorithms combine available observations of physical systems with the assumed model dynamics in a systematic manner, to produce better estimates of initial conditions for prediction. Broadly they can be categorized into three main approaches: (a) sequential algorithms, (b) sampling methods and (c) variational algorithms, which transform the density estimation problem into an optimization problem. However, given finite computational resources, only a handful of ensemble Kalman filters and 4DVar algorithms have been applied operationally to very high dimensional geophysical applications, such as weather forecasting. In this paper we present a recent extension to our variational Bayesian algorithm which seeks the 'optimal' posterior distribution over the continuous time states, within a family of non-stationary Gaussian processes. Our initial work on variational Bayesian approaches to data assimilation, unlike the well-known 4DVar method which seeks only the most probable solution, computes the best time-varying Gaussian process approximation to the posterior smoothing distribution for dynamical systems that can be represented by stochastic differential equations. This approach was based on minimising the Kullback-Leibler divergence, over paths, between the true posterior and our Gaussian process approximation. Whilst the observations were informative enough to keep the posterior smoothing density close to Gaussian, the algorithm proved very effective on low-dimensional systems (e.g. O(10)D). However, for higher dimensional systems the high computational demands make the algorithm prohibitively expensive. To overcome the difficulties presented in the original framework and make our approach more efficient in higher dimensional systems, we have been developing a new mean field version of the algorithm which treats the state variables at any given time as being independent in the posterior approximation, while still accounting for their relationships in the mean solution arising from the original system dynamics. Here we present this new mean field approach, illustrating its performance on a range of benchmark data assimilation problems whose dimensionality varies from O(10) to O(10^3)D. We emphasise that the variational Bayesian approach we adopt, unlike other variational approaches, provides a natural bound on the marginal likelihood of the observations given the model parameters, which also allows for inference of (hyper-)parameters such as observational errors, parameters in the dynamical model and the model error representation. We also stress that since our approach is intrinsically parallel it can be implemented very efficiently to address very long data assimilation time windows. Moreover, like most traditional variational approaches, our Bayesian variational method has the benefit of being posed as an optimisation problem; therefore its complexity can be tuned to the available computational resources. We finish with a sketch of possible future directions.

  2. Radiation dose optimisation for conventional imaging in infants and newborns using automatic dose management software: an application of the new 2013/59 EURATOM directive.

    PubMed

    Alejo, L; Corredoira, E; Sánchez-Muñoz, F; Huerga, C; Aza, Z; Plaza-Núñez, R; Serrada, A; Bret-Zurita, M; Parrón, M; Prieto-Areyano, C; Garzón-Moll, G; Madero, R; Guibelalde, E

    2018-04-09

    Objective: The new 2013/59 EURATOM Directive (ED) demands dosimetric optimisation procedures without undue delay. The aim of this study was to optimise paediatric conventional radiology examinations applying the ED without compromising the clinical diagnosis. Automatic dose management software (ADMS) was used to analyse 2678 studies of children from birth to 5 years of age, obtaining local diagnostic reference levels (DRLs) in terms of entrance surface air kerma. Given that the local DRL for infants and chest examinations exceeded the European Commission (EC) DRL, an optimisation was performed by decreasing the kVp and applying automatic exposure control. To assess the image quality, an analysis of high-contrast spatial resolution (HCSR), signal-to-noise ratio (SNR) and figure of merit (FOM) was performed, as well as a blind test based on the generalised estimating equations method. For newborns and chest examinations, the local DRL exceeded the EC DRL by 113%. After the optimisation, a reduction of 54% was obtained. No significant differences were found in the image quality blind test. A decrease in SNR (-37%) and HCSR (-68%), and an increase in FOM (42%), were observed. ADMS allows the fast calculation of local DRLs and the performance of optimisation procedures in babies without delay. However, physical and clinical analyses of image quality are still needed to ensure diagnostic integrity after the optimisation process. Advances in knowledge: ADMS are useful for detecting radiation protection problems and for performing optimisation procedures in paediatric conventional imaging without undue delay, as the ED requires.

  3. Design synthesis and optimization of permanent magnet synchronous machines based on computationally-efficient finite element analysis

    NASA Astrophysics Data System (ADS)

    Sizov, Gennadi Y.

    In this dissertation, a model-based multi-objective optimal design of permanent magnet ac machines, supplied by sine-wave current regulated drives, is developed and implemented. The design procedure uses an efficient electromagnetic finite element-based solver to accurately model nonlinear material properties and complex geometric shapes associated with magnetic circuit design. Application of an electromagnetic finite element-based solver allows for accurate computation of intricate performance parameters and characteristics. The first contribution of this dissertation is the development of a rapid computational method that allows accurate and efficient exploration of large multi-dimensional design spaces in search of optimum design(s). The computationally efficient finite element-based approach developed in this work provides a framework of tools that allow rapid analysis of synchronous electric machines operating under steady-state conditions. In the developed modeling approach, major steady-state performance parameters such as winding flux linkages and voltages, average, cogging and ripple torques, stator core flux densities, core losses, efficiencies and saturated machine winding inductances, are calculated with minimum computational effort. In addition, the method includes means for rapid estimation of distributed stator forces and three-dimensional effects of stator and/or rotor skew on the performance of the machine. The second contribution of this dissertation is the development of the design synthesis and optimization method based on a differential evolution algorithm. The approach relies on the developed finite element-based modeling method for electromagnetic analysis and is able to tackle large-scale multi-objective design problems using modest computational resources. Overall, computational time savings of up to two orders of magnitude are achievable when compared to current and prevalent state-of-the-art methods. These computational savings allow one to expand the optimization problem to achieve more complex and comprehensive design objectives. The method is used in the design process of several interior permanent magnet industrial motors. The presented case studies demonstrate that the developed finite element-based approach practically eliminates the need for using less accurate analytical and lumped parameter equivalent circuit models for electric machine design optimization. The design process and experimental validation of the case-study machines are detailed in the dissertation.
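
    The differential evolution component can be illustrated with a minimal DE/rand/1/bin loop; the analytic objective below merely stands in for the finite element-based machine evaluation, and all parameter values are illustrative assumptions rather than the dissertation's settings.

```python
import numpy as np

def differential_evolution(cost, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=100, seed=0):
    """Minimal DE/rand/1/bin loop; `cost` stands in for the FE-based
    machine evaluation (an illustrative placeholder)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fitness = np.array([cost(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # mutation
            cross = rng.random(dim) < CR                # binomial crossover
            cross[rng.integers(dim)] = True             # keep at least one gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = cost(trial)
            if f_trial <= fitness[i]:                   # greedy selection
                pop[i], fitness[i] = trial, f_trial
    best = np.argmin(fitness)
    return pop[best], fitness[best]

# Toy two-variable objective standing in for a machine design metric.
bounds = np.array([[-2.0, 2.0], [-2.0, 2.0]])
print(differential_evolution(lambda x: (x[0] - 1) ** 2 + 10 * x[1] ** 2, bounds))
```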

  4. Influence of some design parameters on the thermal performance of domestic refrigerator appliances

    NASA Astrophysics Data System (ADS)

    Rebora, Alessandro; Senarega, Maurizio; Tagliafico, Luca A.

    2006-07-01

    This paper presents a thermal study on chest-freezers, the small refrigerators used in domestic and supermarket applications. A thermal and energy model of a particular kind of these refrigerators, the “hot-wall” (or “skin condenser”) refrigerator, is developed and used to perform sensitivity and design optimisation analysis for given working temperatures and useful volume of the refrigerated cell. A finite-element heat transfer model of the refrigerator box is coupled to the complete thermodynamic model of the refrigerating plant, including real working conditions (compressor efficiency, friction pressure losses and so on). A sensitivity study of the main design parameters affecting the global refrigerator performance has been developed (for fixed working temperatures) with reference to the thickness of the metallic plates, to the evaporator and condenser tube diameters and to the evaporator tube pitch (with fixed evaporator-to-condenser tube pitch ratio). The results obtained show that the proposed sensitivity analysis can yield quite reliable results (in comparison with much more complex, albeit more accurate mathematical optimisation algorithms) using small computational resources. The great importance of 2-D heat conduction in the metallic plates is shown, evidencing how the plate thickness and the evaporator and condenser tube diameters affect the global performance of the system according to the well-known “fin efficiency” effect. The influence of the evaporator and condenser tube diameters on the friction pressure losses is also outlined. Some practical suggestions are made in conclusion, regarding the criteria which should be adopted in the thermal design of a hot-wall refrigerator.
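
    The "fin efficiency" effect invoked above is commonly summarised by the textbook straight-fin relation (quoted here for orientation, not taken from the paper):

```latex
\eta_f = \frac{\tanh(mL)}{mL},
\qquad
m = \sqrt{\frac{h\,P}{k\,A_c}},
```

    where h is the convective coefficient, P and A_c the fin perimeter and cross-section, k the plate conductivity and L the fin length; thicker plates and larger tube diameters reduce m and therefore raise the efficiency of the plate acting as a fin.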

  5. Evaluation of simulation alternatives for the brute-force ray-tracing approach used in backlight design

    NASA Astrophysics Data System (ADS)

    Desnijder, Karel; Hanselaer, Peter; Meuret, Youri

    2016-04-01

    A key requirement for obtaining a uniform luminance from a side-lit LED backlight is an optimised spatial pattern of structures on the light guide that extract the light. The generation of such a scatter pattern is usually performed with an iterative approach: in each iteration, the luminance distribution of the backlight with a particular scatter pattern is analysed. This is typically performed with a brute-force ray-tracing algorithm, although this approach results in a time-consuming optimisation process. In this study, the Adding-Doubling method is explored as an alternative way of evaluating the luminance of a backlight. Owing to the similarities between light propagating in a backlight with extraction structures and light scattering in a cloud of scatterers, the Adding-Doubling method, which is used to model the latter, can also be used to model the light distribution in a backlight. The backlight problem is translated into a form to which the Adding-Doubling method is directly applicable. The luminance calculated with the Adding-Doubling method for a simple uniform extraction pattern matches the luminance generated by a commercial ray tracer very well. Although successful, the approach yields no clear computational advantage over ray tracers. However, the description of light propagation in a light guide used by the Adding-Doubling method also makes it possible to enhance the efficiency of brute-force ray-tracing algorithms. The performance of this enhanced ray-tracing approach for the simulation of backlights is also evaluated against a typical brute-force ray-tracing approach.

  6. Effect of the energy of the assisting ion beam on the deposition of organic materials used to produce light-emitting diodes

    NASA Astrophysics Data System (ADS)

    Antony, R.; Moliton, A.; Ratier, B.

    1998-06-01

    Light-emitting diodes based on the structure ITO/Alq3/Ca-Al show enhanced quantum efficiency when the Alq3 active layer is obtained by IBAD (Ion Beam Assisted Deposition): with iodine ions, the optimisation (quantum efficiency multiplied by a factor of 10) is obtained for an ion energy of 100 eV. The fabrication of light-emitting diodes based on the ITO/Alq3/Ca-Al structure leads to improved performance when the Alq3 active layer is deposited with ion-beam assistance; the optimisation (internal quantum efficiency increased by an order of magnitude) corresponds to iodine ions with an energy of 100 eV.

  7. Computer simulation of electron flow in linear-beam microwave tubes

    NASA Astrophysics Data System (ADS)

    Kumar, Lalit

    1990-12-01

    The computer simulation of electron flow in linear-beam microwave tubes, such as a travelling-wave tube (TWT) and klystron, is used for designing and optimising the electron gun and collector and for analysing the large-signal beam-wave interaction phenomenon. Major aspects of simulation of electron flow in static and rf fields present in such tubes are discussed. Some advancements made in this respect and results obtained from computer programs developed by the research group at CEERI for a gridded electron gun, depressed collector, and large-signal analysis of TWT and klystron are presented.

  8. Multidetector CT radiation dose optimisation in adults: short- and long-term effects of a clinical audit.

    PubMed

    Tack, Denis; Jahnen, Andreas; Kohler, Sarah; Harpes, Nico; De Maertelaer, Viviane; Back, Carlo; Gevenois, Pierre Alain

    2014-01-01

    To report short- and long-term effects of an audit process intended to optimise the radiation dose from multidetector row computed tomography (MDCT). A survey of radiation dose from all eight MDCT departments in the state of Luxembourg performed in 2007 served as baseline, and involved the most frequently imaged regions (head, sinus, cervical spine, thorax, abdomen, and lumbar spine). CT dose index volume (CTDIvol), dose-length product per acquisition (DLP/acq), and DLP per examination (DLP/exa) were recorded, and their mean, median, 25th and 75th percentiles compared. In 2008, an audit conducted in each department helped to optimise doses. In 2009 and 2010, two further surveys evaluated the audit's impact on the dose delivered. Between 2007 and 2009, DLP/exa significantly decreased by 32-69 % for all regions (P < 0.001) except the lumbar spine (5 %, P = 0.455). Between 2009 and 2010, DLP/exa significantly decreased by 13-18 % for sinus, cervical and lumbar spine (P ranging from 0.016 to less than 0.001). Between 2007 and 2010, DLP/exa significantly decreased for all regions (18-75 %, P < 0.001). Collective dose decreased by 30 % and the 75th percentile (diagnostic reference level, DRL) by 20-78 %. The audit process resulted in long-lasting dose reduction, with DRLs reduced by 20-78 %, mean DLP/examination by 18-75 %, and collective dose by 30 %. • External support through clinical audit may optimise default parameters of routine CT. • Reduction of 75th percentiles used as reference diagnostic levels is 18-75 %. • The effect of this audit is sustainable over time. • Dose savings through optimisation can be added to those achievable through CT.
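
    Since the study uses the 75th percentile of the dose distribution as the diagnostic reference level, the survey statistics it compares can be reproduced with a few lines of code; the sketch below uses hypothetical DLP values, not the Luxembourg data.

```python
import numpy as np

def survey_statistics(dlp_per_exam):
    """Summary statistics of the kind compared between surveys: mean,
    25th percentile, median and 75th percentile (used as the DRL) of
    DLP per examination."""
    dlp = np.asarray(dlp_per_exam, dtype=float)
    return {
        "mean": dlp.mean(),
        "p25": np.percentile(dlp, 25),
        "median": np.percentile(dlp, 50),
        "drl_p75": np.percentile(dlp, 75),
    }

# Hypothetical DLP values (mGy*cm) for one anatomical region in one survey year.
print(survey_statistics([430, 510, 620, 380, 700, 455, 590, 665]))
```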

  9. Robust efficient video fingerprinting

    NASA Astrophysics Data System (ADS)

    Puri, Manika; Lubin, Jeffrey

    2009-02-01

    We have developed a video fingerprinting system with robustness and efficiency as the primary and secondary design criteria. In extensive testing, the system has shown robustness to cropping, letter-boxing, sub-titling, blur, drastic compression, frame rate changes, size changes and color changes, as well as to the geometric distortions often associated with camcorder capture in cinema settings. Efficiency is afforded by a novel two-stage detection process in which a fast matching process first computes a number of likely candidates, which are then passed to a second slower process that computes the overall best match with minimal false alarm probability. One key component of the algorithm is a maximally stable volume computation - a three-dimensional generalization of maximally stable extremal regions - that provides a content-centric coordinate system for subsequent hash function computation, independent of any affine transformation or extensive cropping. Other key features include an efficient bin-based polling strategy for initial candidate selection and a SIFT feature-based computation for final verification. We describe the algorithm and its performance, and then discuss additional modifications that can provide further improvement to efficiency and accuracy.

  10. Coagulation kinetics beyond mean field theory using an optimised Poisson representation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burnett, James; Ford, Ian J.

    Binary particle coagulation can be modelled as the repeated random process of the combination of two particles to form a third. The kinetics may be represented by population rate equations based on a mean field assumption, according to which the rate of aggregation is taken to be proportional to the product of the mean populations of the two participants, but this can be a poor approximation when the mean populations are small. However, using the Poisson representation, it is possible to derive a set of rate equations that go beyond mean field theory, describing pseudo-populations that are continuous, noisy, and complex, but where averaging over the noise and initial conditions gives the mean of the physical population. Such an approach is explored for the simple case of a size-independent rate of coagulation between particles. Analytical results are compared with numerical computations and with results derived by other means. In the numerical work, we encounter instabilities that can be eliminated using a suitable “gauge” transformation of the problem [P. D. Drummond, Eur. Phys. J. B 38, 617 (2004)] which we show to be equivalent to the application of the Cameron-Martin-Girsanov formula describing a shift in a probability measure. The cost of such a procedure is to introduce additional statistical noise into the numerical results, but we identify an optimised gauge transformation where this difficulty is minimal for the main properties of interest. For more complicated systems, such an approach is likely to be computationally cheaper than Monte Carlo simulation.
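
    For orientation, the mean field baseline referred to above reduces, for a size-independent coagulation rate K, to a single rate equation for the total cluster population, dn/dt = -(K/2) n^2, with closed-form solution n(t) = n0/(1 + K n0 t/2). The sketch below (our illustration of the mean field limit only, not of the Poisson-representation method) checks a crude numerical integration against that solution.

```python
import numpy as np

def mean_field_population(n0, K, t):
    """Total cluster count under mean-field, size-independent coagulation:
    dn/dt = -(K/2) n**2, with analytical solution n0 / (1 + K*n0*t/2)."""
    return n0 / (1.0 + 0.5 * K * n0 * t)

# Crude forward-Euler integration compared against the closed form.
n, K, dt = 1000.0, 1e-3, 1e-3
for _ in range(int(5.0 / dt)):
    n += -0.5 * K * n * n * dt
print(n, mean_field_population(1000.0, K, 5.0))
```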

  11. HAlign-II: efficient ultra-large multiple sequence alignment and phylogenetic tree reconstruction with distributed and parallel computing.

    PubMed

    Wan, Shixiang; Zou, Quan

    2017-01-01

    Multiple sequence alignment (MSA) plays a key role in biological sequence analyses, especially in phylogenetic tree construction. The extreme increase in next-generation sequencing output has resulted in a shortage of efficient alignment approaches for ultra-large sets of biological sequences of different types. Distributed and parallel computing represents a crucial technique for accelerating ultra-large (e.g. files of more than 1 GB) sequence analyses. Based on HAlign and the Spark distributed computing system, we implement a highly cost-efficient and time-efficient tool, HAlign-II, to address ultra-large multiple biological sequence alignment and phylogenetic tree construction. Experiments on large-scale DNA and protein data sets, with files of more than 1 GB, showed that HAlign-II saves both time and space and outperforms current software tools. HAlign-II can efficiently carry out MSA and construct phylogenetic trees with ultra-large numbers of biological sequences. HAlign-II shows extremely high memory efficiency and scales well with increases in computing resources. HAlign-II provides a user-friendly web server based on our distributed computing infrastructure. HAlign-II, with open-source code and datasets, is available at http://lab.malab.cn/soft/halign.

  12. Response surface methodology to optimise Accelerated Solvent Extraction of steviol glycosides from Stevia rebaudiana Bertoni leaves.

    PubMed

    Jentzer, Jean-Baptiste; Alignan, Marion; Vaca-Garcia, Carlos; Rigal, Luc; Vilarem, Gérard

    2015-01-01

    Following the approval of steviol glycosides as a food additive in Europe in December 2011, large-scale stevia cultivation will have to be developed within the EU. Thus there is a need to increase the efficiency of stevia evaluation through germplasm enhancement and agronomic improvement programs. To address the need for faster and reproducible sample throughput, conditions for automated extraction of dried stevia leaves using Accelerated Solvent Extraction were optimised. A response surface methodology was used to investigate the influence of three factors: extraction temperature, static time and cycle number on the stevioside and rebaudioside A extraction yields. The model showed that all the factors had an individual influence on the yield. Optimum extraction conditions were set at 100 °C, 4 min and 1 cycle, which yielded 91.8% ± 3.4% of total extractable steviol glycosides analysed. An additional optimisation was achieved by reducing the grind size of the leaves giving a final yield of 100.8% ± 3.3%. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. A review on simple assembly line balancing type-e problem

    NASA Astrophysics Data System (ADS)

    Jusop, M.; Rashid, M. F. F. Ab

    2015-12-01

    Simple assembly line balancing (SALB) is the problem of assigning tasks to the workstations along a line so that the precedence relations are satisfied and some performance measure is optimised. Advanced algorithms are necessary to solve large-scale problems, as SALB is NP-hard. Only a few studies focus on the Type-E variant of simple assembly line balancing (SALB-E), since it is a general and complex problem. SALB-E is the SALB variant that considers the number of workstations and the cycle time simultaneously, with the purpose of maximising the line efficiency. This paper reviews previous work on optimising the SALB-E problem, including the Genetic Algorithm approaches that have been used for this purpose. From the review, it was found that none of the existing works consider resource constraints in the SALB-E problem, especially machine and tool constraints. Research on SALB-E will contribute to improved productivity in real industrial applications.
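
    For reference, the standard definition of the line efficiency that SALB-E maximises (textbook notation, not taken from the reviewed papers) is

```latex
E = \frac{\sum_{i=1}^{n} t_i}{m \cdot c},
```

    where t_i are the task times, m the number of workstations and c the cycle time; SALB-E searches over m and c simultaneously, subject to the precedence relations.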

  14. Adjoint-Based Aerodynamic Design of Complex Aerospace Configurations

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2016-01-01

    An overview of twenty years of adjoint-based aerodynamic design research at NASA Langley Research Center is presented. Adjoint-based algorithms provide a powerful tool for efficient sensitivity analysis of complex large-scale computational fluid dynamics (CFD) simulations. Unlike alternative approaches for which computational expense generally scales with the number of design parameters, adjoint techniques yield sensitivity derivatives of a simulation output with respect to all input parameters at the cost of a single additional simulation. With modern large-scale CFD applications often requiring millions of compute hours for a single analysis, the efficiency afforded by adjoint methods is critical in realizing a computationally tractable design optimization capability for such applications.
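
    The scaling argument above can be made concrete on a toy discrete model R(u, a) = A(a)u - b = 0 with output J = c'u: a single adjoint solve yields the sensitivity with respect to any number of design parameters. The sketch below is a generic illustration of the adjoint identity, not NASA Langley's flow solver.

```python
import numpy as np

# Discrete model A(a) u = b with one design parameter a and output J = c @ u.
def A(a):
    return np.array([[4.0 + a, 1.0], [1.0, 3.0]])

dA_da = np.array([[1.0, 0.0], [0.0, 0.0]])    # exact derivative of A w.r.t. a
b = np.array([1.0, 2.0])
c = np.array([1.0, 1.0])
a0 = 0.5

u = np.linalg.solve(A(a0), b)                 # one forward (primal) solve
lam = np.linalg.solve(A(a0).T, c)             # one adjoint solve
dJ_da_adjoint = -lam @ dA_da @ u              # sensitivity from the adjoint

# Finite-difference check (would need one extra solve per design variable).
eps = 1e-6
dJ_da_fd = (c @ np.linalg.solve(A(a0 + eps), b) - c @ u) / eps
print(dJ_da_adjoint, dJ_da_fd)
```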

  15. Demystifying the GMAT: Computer-Based Testing Terms

    ERIC Educational Resources Information Center

    Rudner, Lawrence M.

    2012-01-01

    Computer-based testing can be a powerful means to make all aspects of test administration not only faster and more efficient, but also more accurate and more secure. While the Graduate Management Admission Test (GMAT) exam is a computer adaptive test, there are other approaches. This installment presents a primer of computer-based testing terms.

  16. A scattering approach to sea wave diffraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corradini, M. L., E-mail: letizia.corradini@unicam.it; Garbuglia, M., E-mail: milena.garbuglia@unicam.it; Maponi, P., E-mail: pierluigi.maponi@unicam.it

    This paper presents a model for the diffraction of sea waves approaching an OWC device, which converts the motion of sea waves into mechanical energy and then electrical energy. This is a preliminary study for the optimisation of the device: the computation of sea-wave diffraction around the device allows the estimation of the wave energy entering the device. The diffraction phenomenon is computed as a sea-wave scattering problem, solved with an integral equation method.

  17. Sustainable Mining Land Use for Lignite Based Energy Projects

    NASA Astrophysics Data System (ADS)

    Dudek, Michal; Krysa, Zbigniew

    2017-12-01

    This research discusses the economic viability of complex lignite-based energy projects and their impact on sustainable land use, with respect to project risk and uncertainty, economics, optimisation (e.g. Lerchs and Grossmann) and the importance of lignite as a fuel that may be expressed in situ as a deposit of energy. The sensitivity analysis and simulation consider estimated variable land acquisition costs, geostatistics, 3D deposit block modelling, the electricity price taken as the project product price, power station efficiency and power station lignite processing unit cost, CO2 allowance costs, mining unit cost, and lignite availability treated as the kriging estimation error of the lignite reserves. The investigated parameters have a nonlinear influence on the results, so that the economically viable amount of lignite in the optimal pit varies, with a correspondingly nonlinear impact on the land area required for the mining operation.

  18. Dynamic decision-making for reliability and maintenance analysis of manufacturing systems based on failure effects

    NASA Astrophysics Data System (ADS)

    Zhang, Ding; Zhang, Yingjie

    2017-09-01

    A framework for reliability and maintenance analysis of job shop manufacturing systems is proposed in this paper. An efficient preventive maintenance (PM) policy based on failure effects analysis (FEA) is proposed. Subsequently, reliability evaluation and component importance measures based on FEA are performed under the PM policy. A job shop manufacturing system is used to validate the reliability evaluation and the dynamic maintenance policy. The obtained results are compared with existing methods and the effectiveness of the approach is validated. Issues that are often treated vaguely in manufacturing system reliability analysis, such as network modelling, vulnerability identification, evaluation criteria for repairable systems and the PM policy, are elaborated. This framework can support reliability optimisation and the rational allocation of maintenance resources in job shop manufacturing systems.

  19. Biodegradation of free cyanide and subsequent utilisation of biodegradation by-products by Bacillus consortia: optimisation using response surface methodology.

    PubMed

    Mekuto, Lukhanyo; Ntwampe, Seteno Karabo Obed; Jackson, Vanessa Angela

    2015-07-01

    A mesophilic alkali-tolerant bacterial consortium belonging to the Bacillus genus was evaluated for its ability to biodegrade high free cyanide (CN(-)) concentration (up to 500 mg CN(-)/L), subsequent to the oxidation of the formed ammonium and nitrates in a continuous bioreactor system solely supplemented with whey waste. Furthermore, an optimisation study for successful cyanide biodegradation by this consortium was evaluated in batch bioreactors (BBs) using response surface methodology (RSM). The input variables, that is, pH, temperature and whey-waste concentration, were optimised using a numerical optimisation technique where the optimum conditions were found to be as follows: pH 9.88, temperature 33.60 °C and whey-waste concentration of 14.27 g/L, under which 206.53 mg CN(-)/L in 96 h can be biodegraded by the microbial species from an initial cyanide concentration of 500 mg CN(-)/L. Furthermore, using the optimised data, cyanide biodegradation in a continuous mode was evaluated in a dual-stage packed-bed bioreactor (PBB) connected in series to a pneumatic bioreactor system (PBS) used for simultaneous nitrification, including aerobic denitrification. The whey-supported Bacillus sp. culture was not inhibited by the free cyanide concentration of up to 500 mg CN(-)/L, with an overall degradation efficiency of ≥ 99 % with subsequent nitrification and aerobic denitrification of the formed ammonium and nitrates over a period of 80 days. This is the first study to report free cyanide biodegradation at concentrations of up to 500 mg CN(-)/L in a continuous system using whey waste as a microbial feedstock. The results showed that the process has the potential for the bioremediation of cyanide-containing wastewaters.

  20. Intelligent optimisation algorithm for large-scale structural design

    NASA Astrophysics Data System (ADS)

    Dominique, Stephane

    The implementation of an automated decision support system in the field of design and structural optimisation can give a significant advantage to any industry working on mechanical designs. Indeed, by providing solution ideas to a designer, or by upgrading existing design solutions while the designer is not at work, the system may reduce the project cycle time or allow more time to produce a better design. This thesis presents a new approach to automating a design process based on Case-Based Reasoning (CBR), in combination with a new genetic algorithm named Genetic Algorithm with Territorial core Evolution (GATE). This approach was developed in order to reduce the operating cost of the process. However, as the system implementation cost is high, the approach is better suited to large-scale design problems, and particularly to design problems that the designer plans to solve for many different specification sets. First, the CBR process uses a databank filled with every known solution to similar design problems. Then, the closest solutions to the current problem in terms of specifications are selected. After this, during the adaptation phase, an artificial neural network (ANN) interpolates amongst known solutions to produce an additional solution to the current problem, using the current specifications as inputs. Each solution produced and selected by the CBR is then used to initialise the population of an island of the genetic algorithm, which optimises the solution further during the refinement phase. Using progressive refinement, the algorithm starts with only the most important variables for the problem; then, as the optimisation progresses, the remaining variables are gradually introduced, layer by layer. The genetic algorithm used is a new algorithm specifically created during this thesis to solve optimisation problems from the field of mechanical structural design. The algorithm, named GATE, is essentially a real-number genetic algorithm that prevents new individuals from being born too close to previously evaluated solutions. The restricted area becomes smaller or larger during the optimisation to allow global or local search when necessary. Also, a new search operator named the Substitution Operator is incorporated in GATE. This operator allows an ANN surrogate model to guide the algorithm toward the most promising areas of the design space. The suggested CBR approach and GATE were tested on several simple test problems, as well as on the industrial problem of designing a gas turbine engine rotor disc. These results are compared with results obtained for the same problems by many other popular optimisation algorithms, such as (depending on the problem) gradient algorithms, a binary genetic algorithm, a real-number genetic algorithm, a genetic algorithm using multiple-parent crossovers, a differential evolution algorithm, the Hooke & Jeeves generalised pattern search method, and POINTER from the software I-SIGHT 3.5. Results show that GATE is quite competitive, giving the best results for 5 of the 6 constrained optimisation problems. GATE also provided the best results of all on problems produced by a Maximum Set Gaussian landscape generator. Finally, for the gas turbine engine rotor disc problem, GATE provided a disc 4.3% lighter than that of the best other tested algorithm (POINTER). One drawback of GATE is its lower efficiency on highly multimodal unconstrained problems, for which it gave quite poor results relative to its implementation cost. To conclude, according to the preliminary results obtained during this thesis, the suggested CBR process, combined with GATE, seems to be a very good candidate to automate and accelerate the structural design of mechanical devices, potentially reducing the cost of industrial preliminary design processes significantly.
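
    The territorial rule described above, stripped of the adaptive radius schedule and of the CBR/ANN machinery, amounts to rejecting offspring born too close to previously evaluated points; the sketch below is our simplified reading of that mechanism, not the thesis code.

```python
import numpy as np

def territorial_filter(candidates, evaluated, radius):
    """Keep only candidate solutions born farther than `radius` from every
    previously evaluated point (a simplified version of GATE's territorial
    rule; the adaptive radius schedule is omitted)."""
    kept = []
    for x in candidates:
        d = np.linalg.norm(evaluated - x, axis=1)
        if evaluated.size == 0 or d.min() > radius:
            kept.append(x)
    return np.array(kept)

rng = np.random.default_rng(1)
evaluated = rng.random((50, 3))          # solutions already evaluated
offspring = rng.random((20, 3))          # newly generated individuals
print(len(territorial_filter(offspring, evaluated, radius=0.15)))
```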

  1. Spin-neurons: A possible path to energy-efficient neuromorphic computers

    NASA Astrophysics Data System (ADS)

    Sharad, Mrigank; Fan, Deliang; Roy, Kaushik

    2013-12-01

    Recent years have witnessed growing interest in the field of brain-inspired computing based on neural-network architectures. In order to translate the related algorithmic models into powerful, yet energy-efficient cognitive-computing hardware, computing-devices beyond CMOS may need to be explored. The suitability of such devices to this field of computing would strongly depend upon how closely their physical characteristics match with the essential computing primitives employed in such models. In this work, we discuss the rationale of applying emerging spin-torque devices for bio-inspired computing. Recent spin-torque experiments have shown the path to low-current, low-voltage, and high-speed magnetization switching in nano-scale magnetic devices. Such magneto-metallic, current-mode spin-torque switches can mimic the analog summing and "thresholding" operation of an artificial neuron with high energy-efficiency. Comparison with CMOS-based analog circuit-model of a neuron shows that "spin-neurons" (spin based circuit model of neurons) can achieve more than two orders of magnitude lower energy and beyond three orders of magnitude reduction in energy-delay product. The application of spin-neurons can therefore be an attractive option for neuromorphic computers of future.

  2. Spin-neurons: A possible path to energy-efficient neuromorphic computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharad, Mrigank; Fan, Deliang; Roy, Kaushik

    Recent years have witnessed growing interest in the field of brain-inspired computing based on neural-network architectures. In order to translate the related algorithmic models into powerful, yet energy-efficient cognitive-computing hardware, computing-devices beyond CMOS may need to be explored. The suitability of such devices to this field of computing would strongly depend upon how closely their physical characteristics match with the essential computing primitives employed in such models. In this work, we discuss the rationale of applying emerging spin-torque devices for bio-inspired computing. Recent spin-torque experiments have shown the path to low-current, low-voltage, and high-speed magnetization switching in nano-scale magnetic devices. Such magneto-metallic, current-mode spin-torque switches can mimic the analog summing and “thresholding” operation of an artificial neuron with high energy-efficiency. Comparison with CMOS-based analog circuit-model of a neuron shows that “spin-neurons” (spin based circuit model of neurons) can achieve more than two orders of magnitude lower energy and beyond three orders of magnitude reduction in energy-delay product. The application of spin-neurons can therefore be an attractive option for neuromorphic computers of future.

  3. Probabilistic methods for rotordynamics analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.; Torng, T. Y.; Millwater, H. R.; Fossum, A. F.; Rheinfurth, M. H.

    1991-01-01

    This paper summarizes the development of the methods and a computer program to compute the probability of instability of dynamic systems that can be represented by a system of second-order ordinary linear differential equations. Two instability criteria based upon the eigenvalues or Routh-Hurwitz test functions are investigated. Computational methods based on a fast probability integration concept and an efficient adaptive importance sampling method are proposed to perform efficient probabilistic analysis. A numerical example is provided to demonstrate the methods.
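
    As a reminder of the first criterion, a second-order system M x'' + C x' + K x = 0 is asymptotically stable when all eigenvalues of its first-order companion matrix have negative real parts; the sketch below performs that deterministic check (the fast probability integration and importance sampling layers of the paper are not shown).

```python
import numpy as np

def is_asymptotically_stable(M, C, K):
    """Eigenvalue criterion for M x'' + C x' + K x = 0: stable when every
    eigenvalue of the first-order companion matrix has a negative real part."""
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-Minv @ K, -Minv @ C]])
    return np.all(np.linalg.eigvals(A).real < 0)

M = np.eye(2)
C = np.array([[0.4, 0.0], [0.0, 0.1]])      # light damping
K = np.array([[2.0, -0.5], [-0.5, 1.0]])    # positive definite stiffness
print(is_asymptotically_stable(M, C, K))
```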

  4. Dataset on Investigating the role of onsite learning in the optimisation of craft gang's productivity in the construction industry.

    PubMed

    Ugulu, Rex Asibuodu; Allen, Stephen

    2017-12-01

    This article presents original data on "Investigating the role of onsite learning in the optimisation of craft gang's productivity in the construction industry". It describes the constraints influencing craft gang productivity and the influence of onsite learning on the blockwork craft gang's productivity. It also presents the method of data collection, which used semi-structured interviews and observation to collect data from construction organisations. We provide statistics on the most important constraints affecting craft gang productivity using 3-D bar charts. In addition, we computed the correlation coefficients and the regression model for the influence of onsite learning on craft gang productivity, using man-hours as the dependent variable. The relationship between blockwork inputs and cycle numbers was determined at the 5% significance level. Finally, we present information on the application of learning curve theory using the unit straight-line model equations and compute the learning rate of the observed craft gang's repetitive blockwork.
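
    The unit straight-line learning curve model mentioned above takes the form y = a·x^b in the cycle number x, so the learning rate is 2^b. The sketch below fits the model in log-log space to hypothetical man-hour observations (the article's actual data are not reproduced here).

```python
import numpy as np

def learning_rate(cycle_numbers, man_hours):
    """Fit the unit straight-line model y = a * x**b in log-log space and
    return (a, b, learning_rate), where learning_rate = 2**b is the factor
    applied to unit man-hours each time output doubles."""
    x, y = np.log(cycle_numbers), np.log(man_hours)
    b, log_a = np.polyfit(x, y, 1)
    return np.exp(log_a), b, 2.0 ** b

# Hypothetical man-hours per blockwork cycle for one craft gang.
cycles = np.array([1, 2, 3, 4, 5, 6])
hours = np.array([10.0, 8.6, 7.9, 7.4, 7.1, 6.8])
print(learning_rate(cycles, hours))
```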

  5. The development and optimisation of a primary care-based whole system complex intervention (CARE Plus) for patients with multimorbidity living in areas of high socioeconomic deprivation

    PubMed Central

    O'Brien, Rosaleen; Fitzpatrick, Bridie; Higgins, Maria; Guthrie, Bruce; Watt, Graham; Wyke, Sally

    2016-01-01

    Objectives To develop and optimise a primary care-based complex intervention (CARE Plus) to enhance the quality of life of patients with multimorbidity in deprived areas. Methods Six co-design discussion groups involving 32 participants were held separately with multimorbid patients from deprived areas, voluntary organisations, and general practitioners and practice nurses working in deprived areas. This was followed by piloting in two practices and further optimisation based on interviews with 11 general practitioners, 2 practice nurses and 6 participating multimorbid patients. Results Participants endorsed the need for longer consultations, relational continuity and a holistic approach. All felt that training and support of the health care staff was important. Most participants welcomed the idea of additional self-management support, though some practitioners were dubious about whether patients would use it. The pilot study led to changes including a revised care plan, the inclusion of mindfulness-based stress reduction techniques in the support of practitioners and patients, and the streamlining of the written self-management support material for patients. Discussion We have co-designed and optimised an augmented primary care intervention involving a whole-system approach to enhance quality of life in multimorbid patients living in deprived areas. CARE Plus will next be tested in a phase 2 cluster randomised controlled trial. PMID:27068113

  6. Remote care of a patient with stroke in rural Trinidad: use of telemedicine to optimise global neurological care.

    PubMed

    Reyes, Antonio Jose; Ramcharan, Kanterpersad

    2016-08-02

    We report a patient-driven home care system that successfully assisted 24/7 with the management of a 68-year-old woman after a stroke, a global illness. The patient's caregiver and physician used computer devices, smartphones and internet access for information exchange. Patient, caregiver, family and physician satisfaction, coupled with outcome and cost, were indicators of quality of care. The novelty of this basic model of teleneurology lies in implementing a patient/caregiver-driven system designed to improve access to cost-efficient neurological care, which has potential for use at the primary, secondary and tertiary levels of healthcare in rural and underserved regions of the world. We suggest involvement of healthcare stakeholders in teleneurology to address this global problem of limited access to neurological care. This model can facilitate the management of neurological diseases, improve outcomes, reduce the frequency of consultations and hospitalisations, facilitate teaching of healthcare workers and promote research. 2016 BMJ Publishing Group Ltd.

  7. Integration of vehicle yaw stabilisation and rollover prevention through nonlinear hierarchical control allocation

    NASA Astrophysics Data System (ADS)

    Alberding, Matthäus B.; Tjønnås, Johannes; Johansen, Tor A.

    2014-12-01

    This work presents an approach to rollover prevention that takes advantage of the modular structure and optimisation properties of the control allocation paradigm. It eliminates the need for a stabilising roll controller by introducing rollover prevention as a constraint on the control allocation problem. The major advantage of this approach is the control authority margin that remains with a high-level controller even during interventions for rollover prevention. In this work, the high-level control is assigned to a yaw stabilising controller. It could be replaced by any other controller. The constraint for rollover prevention could be replaced by or extended to different control objectives. This work uses differential braking for actuation. The use of additional or different actuators is possible. The developed control algorithm is computationally efficient and suitable for low-cost automotive electronic control units. The predictive design of the rollover prevention constraint does not require any sensor equipment in addition to the yaw controller. The method is validated using an industrial multi-body vehicle simulation environment.

  8. The Goal Specificity Effect on Strategy Use and Instructional Efficiency during Computer-Based Scientific Discovery Learning

    ERIC Educational Resources Information Center

    Kunsting, Josef; Wirth, Joachim; Paas, Fred

    2011-01-01

    Using a computer-based scientific discovery learning environment on buoyancy in fluids we investigated the "effects of goal specificity" (nonspecific goals vs. specific goals) for two goal types (problem solving goals vs. learning goals) on "strategy use" and "instructional efficiency". Our empirical findings close an important research gap,…

  9. Microfluidic biosensing systems. Part I. Development and optimisation of enzymatic chemiluminescent micro-biosensors based on silicon microchips.

    PubMed

    Davidsson, Richard; Genin, Frédéric; Bengtsson, Martin; Laurell, Thomas; Emnéus, Jenny

    2004-10-01

    Chemiluminescent (CL) enzyme-based flow-through microchip biosensors (micro-biosensors) for the detection of glucose and ethanol were developed for the purpose of monitoring real-time production and release of glucose and ethanol from microchip-immobilised yeast cells. Part I of this study focuses on the development and optimisation of the micro-biosensors in a microfluidic sequential injection analysis (microSIA) system. Glucose oxidase (GOX) or alcohol oxidase (AOX) was co-immobilised with horseradish peroxidase (HRP) on porous silicon flow-through microchips. The hydrogen peroxide produced by oxidation of the corresponding analyte (glucose or ethanol) took part in the chemiluminescent (CL) oxidation of luminol catalysed by HRP, enhanced by the addition of p-iodophenol (PIP). All steps in the microSIA system, including control of the syringe pump, the multiposition valve (MPV) and data readout, were computer controlled. The influence of flow rate and of luminol and PIP concentrations was investigated in a 2³ factorial experiment using the GOX-HRP sensor. It was found that all estimated single factors and the highest-order interaction were significant. The optimum was found at 250 microM luminol and 150 microM PIP at a flow rate of 18 microl min⁻¹, the latter as a compromise between signal intensity and analysis time. Using the optimised system settings, one sample was processed within 5 min. Two different immobilisation chemistries were investigated for both micro-biosensors, based on 3-aminopropyltriethoxysilane (APTS) or polyethylenimine (PEI) functionalisation followed by glutaraldehyde (GA) activation. GOX-HRP micro-biosensors responded linearly in a log-log format within the range 10-1000 microM glucose. Both had an operational stability of at least 8 days, but the PEI-GOX-HRP sensor was more sensitive. The AOX-HRP micro-biosensors responded linearly (log-log) in the range between 1 and 10 mM ethanol, but the PEI-AOX-HRP sensor was in general more sensitive. Both sensors had an operational stability of at least 8 h, but with a half-life of 2-3 days.

  10. A fast indirect method to compute functions of genomic relationships concerning genotyped and ungenotyped individuals, for diversity management.

    PubMed

    Colleau, Jean-Jacques; Palhière, Isabelle; Rodríguez-Ramilo, Silvia T; Legarra, Andres

    2017-12-01

    Pedigree-based management of genetic diversity in populations, e.g., using optimal contributions, involves computation of the [Formula: see text] type yielding elements (relationships) or functions (usually averages) of relationship matrices. For pedigree-based relationships [Formula: see text], a very efficient method exists. When all the individuals of interest are genotyped, genomic management can be addressed using the genomic relationship matrix [Formula: see text]; however, to date, the computational problem of efficiently computing [Formula: see text] has not been well studied. When some individuals of interest are not genotyped, genomic management should consider the relationship matrix [Formula: see text] that combines genotyped and ungenotyped individuals; however, direct computation of [Formula: see text] is computationally very demanding, because construction of a possibly huge matrix is required. Our work presents efficient ways of computing [Formula: see text] and [Formula: see text], with applications on real data from dairy sheep and dairy goat breeding schemes. For genomic relationships, an efficient indirect computation with quadratic instead of cubic cost is [Formula: see text], where Z is a matrix relating animals to genotypes. For the relationship matrix [Formula: see text], we propose an indirect method based on the difference between vectors [Formula: see text], which involves computation of [Formula: see text] and of products such as [Formula: see text] and [Formula: see text], where [Formula: see text] is a working vector derived from [Formula: see text]. The latter computation is the most demanding but can be done using sparse Cholesky decompositions of matrix [Formula: see text], which allows handling very large genomic and pedigree data files. Studies based on simulations reported in the literature show that the trends of average relationships in [Formula: see text] and [Formula: see text] differ as genomic selection proceeds. When selection is based on genomic relationships but management is based on pedigree data, the true genetic diversity is overestimated. However, our tests on real data from sheep and goat obtained before genomic selection started do not show this. We present efficient methods to compute elements and statistics of the genomic relationships [Formula: see text] and of matrix [Formula: see text] that combines ungenotyped and genotyped individuals. These methods should be useful to monitor and handle genomic diversity.

  11. hp-Adaptive time integration based on the BDF for viscous flows

    NASA Astrophysics Data System (ADS)

    Hay, A.; Etienne, S.; Pelletier, D.; Garon, A.

    2015-06-01

    This paper presents a procedure based on the Backward Differentiation Formulas of order 1 to 5 to obtain efficient time integration of the incompressible Navier-Stokes equations. The adaptive algorithm performs both stepsize and order selections to control respectively the solution accuracy and the computational efficiency of the time integration process. The stepsize selection (h-adaptivity) is based on a local error estimate and an error controller to guarantee that the numerical solution accuracy is within a user prescribed tolerance. The order selection (p-adaptivity) relies on the idea that low-accuracy solutions can be computed efficiently by low order time integrators while accurate solutions require high order time integrators to keep computational time low. The selection is based on a stability test that detects growing numerical noise and deems a method of order p stable if there is no method of lower order that delivers the same solution accuracy for a larger stepsize. Hence, it guarantees both that (1) the used method of integration operates inside of its stability region and (2) the time integration procedure is computationally efficient. The proposed time integration procedure also features a time-step rejection and quarantine mechanisms, a modified Newton method with a predictor and dense output techniques to compute solution at off-step points.
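
    The h-adaptive part of such a scheme can be reduced to the classical local-error controller sketched below; the specific error estimator, the BDF formulas and the order (p) selection test described above are not shown, and the constants are common defaults rather than the paper's values.

```python
def propose_stepsize(h, error_estimate, tol, order, safety=0.9,
                     grow_max=2.0, shrink_min=0.2):
    """Classical local-error controller: accept the step if the estimate is
    within tolerance and scale h by (tol/err)**(1/(order+1)), with a safety
    factor and growth/shrink limits. Only the h-adaptive part is sketched;
    the BDF order selection (p-adaptivity) is not shown."""
    accept = error_estimate <= tol
    factor = safety * (tol / max(error_estimate, 1e-16)) ** (1.0 / (order + 1))
    factor = min(grow_max, max(shrink_min, factor))
    return accept, h * factor

print(propose_stepsize(h=1e-2, error_estimate=5e-7, tol=1e-6, order=2))
```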

  12. Optimisation of quantitative lung SPECT applied to mild COPD: a software phantom simulation study.

    PubMed

    Norberg, Pernilla; Olsson, Anna; Alm Carlsson, Gudrun; Sandborg, Michael; Gustafsson, Agnetha

    2015-01-01

    The amount of inhomogeneities in a (99m)Tc Technegas single-photon emission computed tomography (SPECT) lung image, caused by reduced ventilation in lung regions affected by chronic obstructive pulmonary disease (COPD), is correlated with disease advancement. A quantitative analysis method, the CVT method, measuring these inhomogeneities was proposed in earlier work. To detect mild COPD, which is a difficult task, optimised parameter values are needed. In this work, the CVT method was optimised with respect to the parameter values of acquisition, reconstruction and analysis. The ordered subset expectation maximisation (OSEM) algorithm was used for reconstructing the lung SPECT images. As a first step towards clinical application of the CVT method in detecting mild COPD, this study was based on simulated SPECT images of an advanced anthropomorphic lung software phantom including respiratory and cardiac motion, where the mild COPD lung had an overall ventilation reduction of 5%. The best separation between healthy and mild COPD lung images, as determined using the CVT measure of ventilation inhomogeneity and 125 MBq (99m)Tc, was obtained using a low-energy high-resolution collimator (LEHR) and a power 6 Butterworth post-filter with a cutoff frequency of 0.6 to 0.7 cm⁻¹. Sixty-four reconstruction updates and a small kernel size should be used when the whole lung is analysed, and for the reduced lung a greater number of updates and a larger kernel size are needed. A LEHR collimator and 125 MBq (99m)Tc, together with an optimal combination of cutoff frequency, number of updates and kernel size, gave the best result. Suboptimal selection of the cutoff frequency, number of updates or kernel size will reduce the imaging system's ability to detect mild COPD in the lung phantom.

  13. Kernel learning at the first level of inference.

    PubMed

    Cawley, Gavin C; Talbot, Nicola L C

    2014-05-01

    Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. hydroPSO: A Versatile Particle Swarm Optimisation R Package for Calibration of Environmental Models

    NASA Astrophysics Data System (ADS)

    Zambrano-Bigiarini, M.; Rojas, R.

    2012-04-01

    Particle Swarm Optimisation (PSO) is a recent and powerful population-based stochastic optimisation technique inspired by social behaviour of bird flocking, which shares similarities with other evolutionary techniques such as Genetic Algorithms (GA). In PSO, however, each individual of the population, known as particle in PSO terminology, adjusts its flying trajectory on the multi-dimensional search-space according to its own experience (best-known personal position) and the one of its neighbours in the swarm (best-known local position). PSO has recently received a surge of attention given its flexibility, ease of programming, low memory and CPU requirements, and efficiency. Despite these advantages, PSO may still get trapped into sub-optimal solutions, suffer from swarm explosion or premature convergence. Thus, the development of enhancements to the "canonical" PSO is an active area of research. To date, several modifications to the canonical PSO have been proposed in the literature, resulting into a large and dispersed collection of codes and algorithms which might well be used for similar if not identical purposes. In this work we present hydroPSO, a platform-independent R package implementing several enhancements to the canonical PSO that we consider of utmost importance to bring this technique to the attention of a broader community of scientists and practitioners. hydroPSO is model-independent, allowing the user to interface any model code with the calibration engine without having to invest considerable effort in customizing PSO to a new calibration problem. Some of the controlling options to fine-tune hydroPSO are: four alternative topologies, several types of inertia weight, time-variant acceleration coefficients, time-variant maximum velocity, regrouping of particles when premature convergence is detected, different types of boundary conditions and many others. Additionally, hydroPSO implements recent PSO variants such as: Improved Particle Swarm Optimisation (IPSO), Fully Informed Particle Swarm (FIPS), and weighted FIPS (wFIPS). Finally, an advanced sensitivity analysis using the Latin Hypercube One-At-a-Time (LH-OAT) method and user-friendly plotting summaries facilitate the interpretation and assessment of the calibration/optimisation results. We validate hydroPSO against the standard PSO algorithm (SPSO-2007) employing five test functions commonly used to assess the performance of optimisation algorithms. Additionally, we illustrate how the performance of the optimization/calibration engine is boosted by using several of the fine-tune options included in hydroPSO. Finally, we show how to interface SWAT-2005 with hydroPSO to calibrate a semi-distributed hydrological model for the Ega River basin in Spain, and how to interface MODFLOW-2000 and hydroPSO to calibrate a groundwater flow model for the regional aquifer of the Pampa del Tamarugal in Chile. We limit the applications of hydroPSO to study cases dealing with surface water and groundwater models as these two are the authors' areas of expertise. However, based on the flexibility of hydroPSO we believe this package can be implemented to any model code requiring some form of parameter estimation.
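
    For readers unfamiliar with the method, the canonical velocity and position update that packages such as hydroPSO build on is sketched below (plain global-best PSO with a constant inertia weight; none of the package's enhancements, topologies or the LH-OAT sensitivity analysis are represented).

```python
import numpy as np

def pso_minimise(cost, bounds, n_particles=30, iters=200,
                 w=0.72, c1=1.49, c2=1.49, seed=0):
    """Canonical PSO: each particle moves according to its own best-known
    position and the swarm's best-known position (global topology)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    x = lo + rng.random((n_particles, dim)) * (hi - lo)
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update
        x = np.clip(x + v, lo, hi)                              # position update
        f = np.array([cost(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# Toy objective standing in for a model-calibration misfit.
bounds = np.array([[-5.0, 5.0]] * 2)
print(pso_minimise(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2, bounds))
```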

  15. Minimum Requirements for Accurate and Efficient Real-Time On-Chip Spike Sorting

    PubMed Central

    Navajas, Joaquin; Barsakcioglu, Deren Y.; Eftekhar, Amir; Jackson, Andrew; Constandinou, Timothy G.; Quiroga, Rodrigo Quian

    2014-01-01

    Background Extracellular recordings are performed by inserting electrodes in the brain, relaying the signals to external power-demanding devices, where spikes are detected and sorted in order to identify the firing activity of different putative neurons. A main caveat of these recordings is the necessity of wires passing through the scalp and skin in order to connect intracortical electrodes to external amplifiers. The aim of this paper is to evaluate the feasibility of an implantable platform (i.e. a chip) with the capability to wirelessly transmit the neural signals and perform real-time on-site spike sorting. New Method We computationally modelled a two-stage implementation for online, robust, and efficient spike sorting. In the first stage, spikes are detected on-chip and streamed to an external computer where mean templates are created and sent back to the chip. In the second stage, spikes are sorted in real-time through template matching. Results We evaluated this procedure using realistic simulations of extracellular recordings and describe a set of specifications that optimise performance while keeping to a minimum the signal requirements and the complexity of the calculations. Comparison with Existing Methods A key bottleneck for the development of long-term BMIs is to find an inexpensive method for real-time spike sorting. Here, we simulated a solution to this problem that uses both offline and online processing of the data. Conclusions Hardware implementations of this method therefore enable low-power long-term wireless transmission of multiple site extracellular recordings, with application to wireless BMIs or closed-loop stimulation designs. PMID:24769170
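
    The second, on-chip stage reduces to nearest-template assignment; the sketch below shows that step only, on synthetic waveforms, with detection and template construction assumed to have been done offline as described above.

```python
import numpy as np

def sort_spikes(spike_waveforms, templates):
    """Second-stage sorting by template matching: assign every detected spike
    to the mean template with the smallest Euclidean distance (a simplified
    sketch; spike detection and template building are assumed done already)."""
    spikes = np.asarray(spike_waveforms)        # shape (n_spikes, n_samples)
    temps = np.asarray(templates)               # shape (n_units,  n_samples)
    dists = np.linalg.norm(spikes[:, None, :] - temps[None, :, :], axis=2)
    return dists.argmin(axis=1)                 # unit label per spike

rng = np.random.default_rng(0)
templates = np.stack([np.sin(np.linspace(0, np.pi, 32)),   # two synthetic units
                      -np.hanning(32)])
spikes = templates[rng.integers(2, size=100)] + 0.1 * rng.standard_normal((100, 32))
print(np.bincount(sort_spikes(spikes, templates)))
```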

  16. An efficient formulation of robot arm dynamics for control and computer simulation

    NASA Astrophysics Data System (ADS)

    Lee, C. S. G.; Nigam, R.

    This paper describes an efficient formulation of the dynamic equations of motion of industrial robots based on the Lagrange formulation of d'Alembert's principle. This formulation, as applied to a PUMA robot arm, results in a set of closed form second order differential equations with cross product terms. They are not as efficient in computation as those formulated by the Newton-Euler method, but provide a better analytical model for control analysis and computer simulation. Computational complexities of this dynamic model together with other models are tabulated for discussion.

  17. BioFed: federated query processing over life sciences linked open data.

    PubMed

    Hasnain, Ali; Mehmood, Qaiser; Sana E Zainab, Syeda; Saleem, Muhammad; Warren, Claude; Zehra, Durre; Decker, Stefan; Rebholz-Schuhmann, Dietrich

    2017-03-15

    Biomedical data, e.g. from knowledge bases and ontologies, is increasingly made available following open linked data principles, at best as RDF triple data. This is a necessary step towards unified access to biological data sets, but it still requires solutions to query multiple endpoints for their heterogeneous data to eventually retrieve all the meaningful information. Suggested solutions are based on query federation approaches, which require the submission of SPARQL queries to endpoints. Due to the size and complexity of available data, these solutions have to be optimised for efficient retrieval times and for users in life sciences research. Last but not least, over time, the reliability of data resources in terms of access and quality has to be monitored. Our solution (BioFed) federates data over 130 SPARQL endpoints in life sciences and tailors query submission according to the provenance information. BioFed has been evaluated against the state-of-the-art solution FedX and forms an important benchmark for the life science domain. The efficient cataloguing approach of the federated query processing system 'BioFed', the triple-pattern-wise source selection and the semantic source normalisation form the core of our solution. It gathers and integrates data from newly identified public endpoints for federated access. Basic provenance information is linked to the retrieved data. Finally, BioFed makes use of the latest SPARQL standard (i.e., 1.1) to leverage the full benefits for query federation. The evaluation is based on 10 simple and 10 complex queries, which address data in 10 major and very popular data sources (e.g., DrugBank, SIDER). BioFed provides a single point of access to a large number of SPARQL endpoints providing life science data. It facilitates efficient query generation for data access and provides basic provenance information in combination with the retrieved data. BioFed fully supports SPARQL 1.1 and gives access to the endpoint's availability based on the EndpointData graph. Our evaluation of BioFed against FedX is based on 20 heterogeneous federated SPARQL queries and shows competitive execution performance in comparison to FedX, which can be attributed to the provision of provenance information for the source selection. Developing and testing federated query engines for life sciences data is still a challenging task. According to our findings, it is advantageous to optimise the source selection. The cataloguing of SPARQL endpoints, including type and property indexing, leads to efficient querying of data resources over the Web of Data. This could even be further improved through the use of ontologies, e.g., for abstract normalisation of query terms.

  18. Kinematic models of the upper limb joints for multibody kinematics optimisation: An overview.

    PubMed

    Duprey, Sonia; Naaim, Alexandre; Moissenet, Florent; Begon, Mickaël; Chèze, Laurence

    2017-09-06

    Soft tissue artefact (STA), i.e. the motion of the skin, fat and muscles gliding over the underlying bone, may lead to a marker position error reaching up to 8.7 cm in the particular case of the scapula. Multibody kinematics optimisation (MKO) is one of the most efficient approaches used to reduce STA. It consists in minimising the distance between the positions of experimental markers on a subject's skin and the simulated positions of the same markers embedded in a kinematic model. However, the efficiency of MKO relies directly on the chosen kinematic model. This paper proposes an overview of the different upper limb models available in the literature and a discussion of their applicability to MKO. The advantages of each joint model with respect to its biofidelity to functional anatomy are detailed for both the shoulder and the forearm areas. The models' capabilities for personalisation and for adaptation to pathological cases are also discussed. Concerning model efficiency in terms of STA reduction in MKO algorithms, a lack of quantitative assessment in the literature is noted. As a priority, future studies should evaluate and quantify STA reduction as a function of upper limb joint constraints. Copyright © 2016 Elsevier Ltd. All rights reserved.
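
    In essence, MKO is a constrained least-squares fit of joint coordinates to measured marker trajectories. The sketch below illustrates that idea on a deliberately toy planar two-segment chain, not on any of the shoulder or forearm models reviewed above; the segment lengths and "measured" marker positions are invented.

        import numpy as np
        from scipy.optimize import least_squares

        # Toy planar two-segment "arm": joint angles q = [q1, q2], segment lengths L1, L2.
        L1, L2 = 0.30, 0.25  # metres (illustrative values)

        def model_markers(q):
            """Forward kinematics: positions of two model-embedded markers."""
            q1, q2 = q
            elbow = np.array([L1 * np.cos(q1), L1 * np.sin(q1)])
            wrist = elbow + np.array([L2 * np.cos(q1 + q2), L2 * np.sin(q1 + q2)])
            return np.concatenate([elbow, wrist])

        def residuals(q, measured):
            # MKO objective: distance between skin-marker measurements and model markers
            return model_markers(q) - measured

        measured = np.array([0.28, 0.12, 0.50, 0.25])  # invented "experimental" markers
        sol = least_squares(residuals, x0=np.zeros(2), args=(measured,))
        print("estimated joint angles (rad):", sol.x)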

  19. Improving Computational Efficiency of Prediction in Model-Based Prognostics Using the Unscented Transform

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew John; Goebel, Kai Frank

    2010-01-01

    Model-based prognostics captures system knowledge in the form of physics-based models of components, and how they fail, in order to obtain accurate predictions of end of life (EOL). EOL is predicted based on the estimated current state distribution of a component and expected profiles of future usage. In general, this requires simulations of the component using the underlying models. In this paper, we develop a simulation-based prediction methodology that achieves computational efficiency by performing only the minimal number of simulations needed in order to accurately approximate the mean and variance of the complete EOL distribution. This is performed through the use of the unscented transform, which predicts the means and covariances of a distribution passed through a nonlinear transformation. In this case, the EOL simulation acts as that nonlinear transformation. In this paper, we review the unscented transform, and describe how this concept is applied to efficient EOL prediction. As a case study, we develop a physics-based model of a solenoid valve, and perform simulation experiments to demonstrate improved computational efficiency without sacrificing prediction accuracy.
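
    For reference, the unscented transform deterministically chooses 2n+1 sigma points from the current state distribution, propagates each through the nonlinear mapping (here the EOL simulation), and recombines them into a predicted mean and covariance. The sketch below implements the basic Julier-Uhlmann form with a toy nonlinearity standing in for the EOL simulation; the scaling parameter and example numbers are illustrative only.

        import numpy as np

        def unscented_transform(g, mean, cov, kappa=1.0):
            """Approximate mean/covariance of y = g(x) for x ~ N(mean, cov)
            using 2n+1 sigma points (basic Julier/Uhlmann form)."""
            n = len(mean)
            S = np.linalg.cholesky((n + kappa) * cov)      # matrix square root
            sigma = [mean] + [mean + S[:, i] for i in range(n)] \
                           + [mean - S[:, i] for i in range(n)]
            w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
            w[0] = kappa / (n + kappa)
            ys = np.array([np.atleast_1d(g(x)) for x in sigma])
            y_mean = w @ ys
            diffs = ys - y_mean
            y_cov = (w[:, None] * diffs).T @ diffs
            return y_mean, y_cov

        # Toy nonlinearity standing in for the EOL simulation of a 2-D state.
        g = lambda x: np.array([x[0] ** 2 + np.sin(x[1])])
        m, P = np.array([1.0, 0.5]), np.diag([0.01, 0.04])
        print(unscented_transform(g, m, P, kappa=1.0))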

  20. Optimising Service Delivery of AAC AT Devices and Compensating AT for Dyslexia.

    PubMed

    Roentgen, Uta R; Hagedoren, Edith A V; Horions, Katrien D L; Dalemans, Ruth J P

    2017-01-01

    To promote successful use of Assistive Technology (AT) supporting Augmentative and Alternative Communication (AAC) and compensating for dyslexia, the last steps of their provision (delivery and instruction, use, maintenance and evaluation) were optimised. An integral method and supporting tools were developed in co-creation with all stakeholders, based on a list of requirements.

  1. Optimising the Blended Learning Environment: The Arab Open University Experience

    ERIC Educational Resources Information Center

    Hamdi, Tahrir; Abu Qudais, Mohammed

    2018-01-01

    This paper will offer some insights into possible ways to optimise the blended learning environment based on experience with this modality of teaching at Arab Open University/Jordan branch and also by reflecting upon the results of several meta-analytical studies, which have shown blended learning environments to be more effective than their face…

  2. MIND performance and prototyping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cervera-Villanueva, A.

    2008-02-21

    The performance of MIND (Magnetised Iron Neutrino Detector) at a neutrino factory has been revisited in a new analysis. In particular, the low neutrino energy region is studied, obtaining an efficiency plateau around 5 GeV for a background level below 10^{-3}. A first look has been given into the detector optimisation and prototyping.

  3. Total centralisation and optimisation of an oncology management suite via Citrix®

    NASA Astrophysics Data System (ADS)

    James, C.; Frantzis, J.; Ripps, L.; Fenton, P.

    2014-03-01

    The management of patient information and treatment planning is traditionally an intra-departmental requirement of a radiation oncology service. Epworth Radiation Oncology's systems must support the transient nature of Visiting Medical Officers (VMOs). This unique work practice created challenges when implementing the vision of a completely paperless solution that allows for responsive and efficient service delivery. ARIA® and Eclipse™ (Varian Medical Systems, Palo Alto, CA, USA) have been deployed across four dedicated Citrix® (Citrix Systems, Santa Clara, CA, USA) servers, allowing VMOs to access these applications remotely. A range of paperless solutions was developed within ARIA® to facilitate clinical and organisational management whilst optimising efficient work practices. The IT infrastructure and paperless workflow have enabled VMOs to securely access the Varian™ (Varian Medical Systems, Palo Alto, CA, USA) oncology software and experience full functionality from any location on multiple devices. This has enhanced access to patient information and improved the responsiveness of the service. Epworth HealthCare has developed a unique solution to enable remote access to a centralised oncology management suite, while maintaining a secure and paperless working environment.

  4. Applying e-procurement system in the healthcare: the EPOS paradigm

    NASA Astrophysics Data System (ADS)

    Ketikidis, Panayiotis H.; Kontogeorgis, Apostolos; Stalidis, George; Kaggelides, Kostis

    2010-03-01

    One of the goals of procurement is to establish a competitive price, while e-procurement utilises electronic commerce to identify potential sources of supply, to purchase goods and services, to exchange contractual information and to interact with suppliers. Extensive academic work has been devoted to e-procurement in diverse industries. However, applying e-procurement in the healthcare sector remains largely unexplored. The sector lacks an efficient e-procurement mechanism that would enable hospitals and healthcare suppliers to electronically exchange contractual information, aided by the technologies of optimisation and business rules. The development and deployment of e-procurement requires a major effort in the coordination of complex interorganisational business processes. This article presents an e-procurement optimised system (EPOS) for the healthcare marketplace and a complete methodological approach for deploying and operating such a system, as piloted in public and private hospitals in three European countries (Greece, Spain and Belgium) and with suppliers of healthcare items in a fourth country (Italy). The efficient e-procurement mechanism that EPOS provides enables hospitals and pharmaceutical and medical equipment suppliers to electronically exchange contractual information.

  5. Optimisation and Characterisation of Lipase-Catalysed Synthesis of a Kojic Monooleate Ester in a Solvent-Free System by Response Surface Methodology.

    PubMed

    Jumbri, Khairulazhar; Al-Haniff Rozy, Mohd Fahruddin; Ashari, Siti Efliza; Mohamad, Rosfarizan; Basri, Mahiran; Fard Masoumi, Hamid Reza

    2015-01-01

    Kojic acid is widely used to inhibit the browning effect of tyrosinase in the cosmetic and food industries. In this work, synthesis of kojic monooleate ester (KMO) was carried out using lipase-catalysed esterification of kojic acid and oleic acid in a solvent-free system. Response Surface Methodology (RSM) based on a central composite rotatable design (CCRD) was used to optimise the main reaction variables, namely enzyme amount, reaction temperature, substrate molar ratio and reaction time, with immobilised lipase from Candida antarctica (Novozym 435) as the biocatalyst. The RSM data indicated that the reaction temperature was less significant than the other factors for the production of the KMO ester. Using this statistical analysis, a quadratic model was developed to correlate the preparation variables with the response (reaction yield). The optimum conditions for the enzymatic synthesis of KMO were as follows: an enzyme amount of 2.0 wt%, a reaction temperature of 83.69°C, a substrate molar ratio of 1:2.37 (mmole kojic acid:oleic acid) and a reaction time of 300.0 min. Under these conditions, the actual yield obtained was 42.09%, which compares well with the maximum predicted value of 44.46%. Under the optimal conditions, Novozym 435 could be reused for five cycles while maintaining a KMO yield of at least 40%. The results demonstrate that statistical analysis using RSM can be used to efficiently optimise the production of a KMO ester. Moreover, the optimum conditions obtained can be applied to scale up the process and minimise the cost.
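
    To make the RSM workflow concrete, the sketch below fits a second-order (quadratic) response-surface model to a CCRD-style design in two coded factors and then locates the predicted optimum; the design points and yields are synthetic, not the measurements reported above.

        import numpy as np
        from scipy.optimize import minimize

        # CCRD-style design points in two coded factors and synthetic yields (%).
        X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                      [-1.41, 0], [1.41, 0], [0, -1.41], [0, 1.41],
                      [0, 0], [0, 0], [0, 0]], dtype=float)
        y = np.array([30, 35, 33, 41, 29, 38, 31, 36, 42, 43, 41.5])

        def design_matrix(X):
            x1, x2 = X[:, 0], X[:, 1]
            return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

        # Least-squares fit of the quadratic response surface.
        beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)

        predict = lambda x: (design_matrix(np.atleast_2d(x)) @ beta)[0]
        opt = minimize(lambda x: -predict(x), x0=[0.0, 0.0],
                       bounds=[(-1.41, 1.41)] * 2, method="L-BFGS-B")
        print("fitted coefficients:", np.round(beta, 3))
        print("predicted optimum (coded units):", opt.x, "yield ≈", -opt.fun)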

  7. Automation of Silica Bead-based Nucleic Acid Extraction on a Centrifugal Lab-on-a-Disc Platform

    NASA Astrophysics Data System (ADS)

    Kinahan, David J.; Mangwanya, Faith; Garvey, Robert; Chung, Danielle WY; Lipinski, Artur; Julius, Lourdes AN; King, Damien; Mohammadi, Mehdi; Mishra, Rohit; Al-Ofi, May; Miyazaki, Celina; Ducrée, Jens

    2016-10-01

    We describe a centrifugal microfluidic ‘Lab-on-a-Disc’ (LoaD) technology for DNA purification towards eventual integration into a Sample-to-Answer platform for detection of the pathogen Escherichia coli O157:H7 from food samples. For this application, we use a novel microfluidic architecture which combines ‘event-triggered’ dissolvable film (DF) valves with a reaction chamber gated by a centrifugo-pneumatic siphon valve (CPSV). This architecture permits comprehensive flow control through simple changes in the speed of the platform's innate spindle motor. Even before method optimisation, characterisation by DNA fluorescence reveals an extraction efficiency of 58%, which is close to that of commercial spin columns.

  8. Toward the optimization of PC-based training

    NASA Astrophysics Data System (ADS)

    Cho, Kohei; Murai, Shunji

    Since 1992, the National Space Development Agency of Japan (NASDA) and the Economic and Social Commission for Asia and the Pacific (ESCAP) have been co-organising the Regional Remote Sensing Seminar on Tropical Ecosystem Management (Program Chairman: Prof. Shunji Murai) each year in a country in Asia. In these seminars, the members of ISPRS Working Group VI/2 'Computer Assisted Teaching' have been delivering PC-based hands-on training on remote sensing and GIS for beginners. The main objective of the training was to transfer not only knowledge but also the technology of remote sensing and GIS to the beginners. The software and CD-ROM data set provided at the training were well designed not only for training but also for practical data analysis. This paper presents an outline of the training and discusses the optimisation of PC-based training for remote sensing and GIS.

  9. Irregular large-scale computed tomography on multiple graphics processors improves energy-efficiency metrics for industrial applications

    NASA Astrophysics Data System (ADS)

    Jimenez, Edward S.; Goodman, Eric L.; Park, Ryeojin; Orr, Laurel J.; Thompson, Kyle R.

    2014-09-01

    This paper investigates energy efficiency for various real-world industrial computed-tomography reconstruction algorithms, covering both CPU- and GPU-based implementations. This work shows that the energy required for a given reconstruction depends on performance and problem size. There are many ways to describe performance and energy efficiency, so this work investigates multiple metrics, including performance-per-watt, energy-delay product and energy consumption. This work found that irregular GPU-based approaches realized tremendous savings in energy consumption when compared to CPU implementations, while also significantly improving the performance-per-watt and energy-delay product metrics. Additional energy savings and improvements in the other metrics were realized on the GPU-based reconstructions by improving storage I/O through a parallel, MIMD-like modularization of the compute and I/O tasks.
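
    The metrics named above are simple functions of runtime, average power and problem size; the sketch below computes them for two hypothetical measurement records (all numbers invented, not results from the paper).

        # Energy = average power * runtime, energy-delay product (EDP) = energy * runtime,
        # performance-per-watt = work / runtime / power. All values below are made up.
        measurements = {
            "cpu_baseline": {"runtime_s": 1200.0, "avg_power_w": 250.0},
            "gpu_irregular": {"runtime_s": 90.0,  "avg_power_w": 400.0},
        }
        voxels_reconstructed = 2.0e9   # hypothetical problem size ("work")

        for name, m in measurements.items():
            energy_j = m["avg_power_w"] * m["runtime_s"]
            edp = energy_j * m["runtime_s"]                      # J*s, lower is better
            perf_per_watt = voxels_reconstructed / m["runtime_s"] / m["avg_power_w"]
            print(f"{name}: energy={energy_j:.3e} J  EDP={edp:.3e} J*s  "
                  f"perf/W={perf_per_watt:.3e} voxels/s/W")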

  10. A LSQR-type method provides a computationally efficient automated optimal choice of regularization parameter in diffuse optical tomography.

    PubMed

    Prakash, Jaya; Yalavarthy, Phaneendra K

    2013-03-01

    The aim is to develop a computationally efficient automated method for the optimal choice of the regularization parameter in diffuse optical tomography. The least-squares QR (LSQR)-type method, which uses Lanczos bidiagonalization, is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. Here it is deployed within an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV) and the recently proposed minimal residual method (MRM)-based choice of regularization parameter, using numerical and experimental phantom data. The results indicate that the proposed LSQR-type and MRM-based methods perform similarly in terms of reconstructed image quality, and both are superior to the L-curve and GCV-based methods. The computational complexity of the proposed method is at least five times lower than that of the MRM-based method, making it an attractive technique. The LSQR-type method thus overcomes the computationally expensive nature of the MRM-based automated way of finding the optimal regularization parameter in diffuse optical tomographic imaging, making it more suitable for real-time deployment.
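
    As a rough illustration of the pairing described above, the sketch below damps an LSQR solve (SciPy's lsqr implements Lanczos bidiagonalization with a Tikhonov-style damping term) and wraps it in a Nelder-Mead (simplex) search over the regularization parameter. The toy problem, noise level and discrepancy-style selection criterion are assumptions for illustration; they are not the criterion used in the paper.

        import numpy as np
        from scipy.sparse.linalg import lsqr
        from scipy.optimize import minimize

        # Toy ill-conditioned linear problem Ax = b with noise (not DOT data).
        rng = np.random.default_rng(0)
        A = rng.standard_normal((80, 60)) @ np.diag(1.0 / np.arange(1, 61))
        x_true = rng.standard_normal(60)
        noise_level = 0.05
        b = A @ x_true + noise_level * rng.standard_normal(80)

        def solve(lam):
            # LSQR with damping: min ||Ax - b||^2 + lam^2 ||x||^2 via Lanczos bidiagonalization
            return lsqr(A, b, damp=lam)[0]

        def cost(log_lam):
            # Stand-in selection criterion (discrepancy-principle style), assumed for illustration.
            lam = np.exp(log_lam[0])
            r = np.linalg.norm(A @ solve(lam) - b)
            return abs(r - noise_level * np.sqrt(len(b)))

        # Simplex (Nelder-Mead) search over the regularization parameter.
        res = minimize(cost, x0=[np.log(1e-2)], method="Nelder-Mead")
        print("selected regularization parameter:", float(np.exp(res.x[0])))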

  11. High performance computing enabling exhaustive analysis of higher order single nucleotide polymorphism interaction in Genome Wide Association Studies.

    PubMed

    Goudey, Benjamin; Abedini, Mani; Hopper, John L; Inouye, Michael; Makalic, Enes; Schmidt, Daniel F; Wagner, John; Zhou, Zeyu; Zobel, Justin; Reumann, Matthias

    2015-01-01

    Genome-wide association studies (GWAS) are a common approach for systematic discovery of single nucleotide polymorphisms (SNPs) which are associated with a given disease. Univariate analysis approaches commonly employed may miss important SNP associations that only appear through multivariate analysis in complex diseases. However, multivariate SNP analysis is currently limited by its inherent computational complexity. In this work, we present a computational framework that harnesses supercomputers. Based on our results, we estimate that an exhaustive three-way interaction analysis of 1.1 million-SNP GWAS data would require over 5.8 years on the full "Avoca" IBM Blue Gene/Q installation at the Victorian Life Sciences Computation Initiative. This is hundreds of times faster than estimates for other CPU-based methods and four times faster than runtimes estimated for GPU methods, indicating how the level of hardware applied to interaction analysis may alter the types of analysis that can be performed. Furthermore, the same analysis would take under 3 months on the currently largest IBM Blue Gene/Q supercomputer, "Sequoia", at the Lawrence Livermore National Laboratory, assuming linear scaling is maintained, as our results suggest. Given that the implementation used in this study can be further optimised, this runtime means it is becoming feasible to carry out exhaustive higher-order interaction analyses on large modern GWAS.
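
    The combinatorial arithmetic behind these runtime estimates is straightforward to reproduce in outline; the throughput figure below is an arbitrary illustrative assumption, not a number from the study.

        import math

        # Scale of exhaustive k-way SNP interaction analysis for ~1.1 million SNPs.
        n_snps = 1_100_000
        for k in (2, 3):
            print(f"{k}-way interactions: {float(math.comb(n_snps, k)):.3e} combinations to test")

        # At an assumed 1e9 combinations evaluated per second (illustrative only),
        # the exhaustive 3-way analysis would still take on the order of years:
        seconds = math.comb(n_snps, 3) / 1e9
        print(f"approx. {seconds / (3600 * 24 * 365):.1f} years at 1e9 tests/s")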

  12. Construction and application of Red5 cluster based on OpenStack

    NASA Astrophysics Data System (ADS)

    Wang, Jiaqing; Song, Jianxin

    2017-08-01

    With the application and development of cloud computing technology in various fields, data-center resource utilization has improved markedly, and systems built on cloud computing platforms have gained scalability and stability. Deployed in the traditional way, a Red5 cluster suffers from low resource utilization and poor stability. This paper exploits the efficient resource-allocation capability of cloud computing and builds a Red5 server cluster based on OpenStack. Multimedia applications can be published to the Red5 cloud server cluster. The system not only enables flexible provisioning of computing resources, but also greatly improves cluster stability and service efficiency.

  13. Performance benchmark of LHCb code on state-of-the-art x86 architectures

    NASA Astrophysics Data System (ADS)

    Campora Perez, D. H.; Neufeld, N.; Schwemmer, R.

    2015-12-01

    For Run 2 of the LHC, LHCb is replacing a significant part of its event filter farm with new compute nodes. For the evaluation of the best performing solution, we have developed a method to convert our high level trigger application into a stand-alone, bootable benchmark image. With additional instrumentation we turned it into a self-optimising benchmark which explores techniques such as late forking, NUMA balancing and optimal number of threads, i.e. it automatically optimises box-level performance. We have run this procedure on a wide range of Haswell-E CPUs and numerous other architectures from both Intel and AMD, including also the latest Intel micro-blade servers. We present results in terms of performance, power consumption, overheads and relative cost.

  14. Influence of denture improvement on the nutritional status and quality of life of geriatric patients.

    PubMed

    Wöstmann, Bernd; Michel, Karin; Brinkert, Bernd; Melchheier-Weskott, Andrea; Rehmann, Peter; Balkenhol, Markus

    2008-10-01

    Recent research suggests that there is a correlation between nutrition, oral health, dietary habits, patients' satisfaction and their socio-economic status. However, the dependent and independent variables have remained unclear. This exploratory interventional study aimed to identify the impact of denture improvement on the nutritional status as well as the oral health-related quality of life in geriatric patients. Forty-seven patients who were capable of feeding themselves (minimum age: 60 years) and with dentures requiring repair or replacement were selected from a random sample of 100 residents of two nursing homes. Before and 6 months after the dentures were optimised, a Mini Nutritional Assessment (MNA) and a masticatory function test were carried out. Nutritional markers (pre-albumin, serum albumin, zinc) were determined and an OHIP-G14 (Oral Health Impact Profile, German version) was recorded in order to determine the effect of the optimised oral situation on the patients' nutritional status and oral health-related quality of life. Despite the highly significant improvement in masticatory ability after the optimisation of the dentures, no general improvement in nutritional status was observed: the albumin, zinc and MNA values remained unchanged and pre-albumin even decreased. Since masticatory ability and masticatory efficiency are not the only factors involved, prosthetic measures alone apparently cannot effect a lasting improvement in nutritional status. Nutrition is not only a matter of masticatory function, but also depends on other influencing factors (e.g. habits, taste and cultural customs, as well as financial and organisational aspects).

  15. Selecting a climate model subset to optimise key ensemble properties

    NASA Astrophysics Data System (ADS)

    Herger, Nadja; Abramowitz, Gab; Knutti, Reto; Angélil, Oliver; Lehmann, Karsten; Sanderson, Benjamin M.

    2018-02-01

    End users studying impacts and risks caused by human-induced climate change are often presented with large multi-model ensembles of climate projections whose composition and size are arbitrarily determined. An efficient and versatile method that finds a subset which maintains certain key properties from the full ensemble is needed, but very little work has been done in this area. Therefore, users typically make their own somewhat subjective subset choices and commonly use the equally weighted model mean as a best estimate. However, different climate model simulations cannot necessarily be regarded as independent estimates due to the presence of duplicated code and shared development history. Here, we present an efficient and flexible tool that makes better use of the ensemble as a whole by finding a subset with improved mean performance compared to the multi-model mean while at the same time maintaining the spread and addressing the problem of model interdependence. Out-of-sample skill and reliability are demonstrated using model-as-truth experiments. This approach is illustrated with one set of optimisation criteria but we also highlight the flexibility of cost functions, depending on the focus of different users. The technique is useful for a range of applications that, for example, minimise present-day bias to obtain an accurate ensemble mean, reduce dependence in ensemble spread, maximise future spread, ensure good performance of individual models in an ensemble, reduce the ensemble size while maintaining important ensemble characteristics, or optimise several of these at the same time. As in any calibration exercise, the final ensemble is sensitive to the metric, observational product, and pre-processing steps used.
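
    A minimal way to picture such a subset search is a brute-force scan over candidate subsets with a cost that trades off mean bias against a change in spread. The sketch below does exactly that for synthetic scalar "models" with an arbitrary weighting; it is far simpler than the optimisation criteria and independence handling described above.

        import itertools
        import numpy as np

        # Toy subset selection: pick k scalar "models" whose mean best matches an
        # observation while roughly preserving the full-ensemble spread. Synthetic numbers.
        rng = np.random.default_rng(1)
        models = rng.normal(loc=15.0, scale=1.2, size=20)   # e.g. global-mean temperature (°C)
        obs = 14.5
        full_spread = models.std()
        k = 5

        def cost(subset):
            subset = np.asarray(subset)
            bias = abs(subset.mean() - obs)
            spread_penalty = abs(subset.std() - full_spread)
            return bias + 0.5 * spread_penalty               # weighting chosen arbitrarily here

        best = min(itertools.combinations(models, k), key=cost)
        print("selected subset:", np.round(best, 2), "cost:", round(cost(best), 3))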

  16. Underworld results as a triple (shopping list, posterior, priors)

    NASA Astrophysics Data System (ADS)

    Quenette, S. M.; Moresi, L. N.; Abramson, D.

    2013-12-01

    When studying long-term lithosphere deformation and other such large-scale, spatially distinct and behaviour-rich problems, there is a natural trade-off between the meaning of a model, the observations used to validate the model and the ability to compute over this space. For example, many models of varying lithologies, rheological properties and underlying physics may reasonably match (or not match) observables. To compound this problem, each realisation is computationally intensive, requiring high resolution, algorithm tuning and code tuning to contemporary computer hardware. It is often intractable to use sampling-based assimilation methods, but with better optimisation, the window of tractability becomes wider. The ultimate goal is to find a sweet spot where a formal assimilation method is used and where a model aligns with the observations. It is natural to think of this as an inverse problem, in which the underlying physics may be fixed and the rheological properties and possibly the lithologies themselves are unknown. What happens when we push this approach and treat some portion of the underlying physics as an unknown? At its extreme this is an intractable problem. However, there is an analogy here with how we develop software for these scientific problems. What happens when we treat the changing part of a largely complete code as an unknown, where the changes are working towards this sweet spot? When posed as a Bayesian inverse problem, the result is a triple: the model changes, the real priors and the real posterior. Not only does this give meaning to the process by which a code changes, it forms a mathematical bridge from an inverse problem to compiler optimisations given such changes. As a stepping-stone example we show a regional-scale heat flow model with constraining observations, and the inverse process with increasing complexity in the software. The implementation uses Underworld-GT (Underworld plus research extras to import geology and export geothermic measures, etc.). Underworld uses StGermain, an early (partial) implementation of the theories described here.

  17. An Effective Belt Conveyor for Underground Ore Transportation Systems

    NASA Astrophysics Data System (ADS)

    Krol, Robert; Kawalec, Witold; Gladysiewicz, Lech

    2017-12-01

    Raw material transportation generates a substantial share of costs in the mining industry. Mining companies are therefore determined to improve the effectiveness of their transportation systems, focusing on solutions that increase both energy efficiency and reliability while keeping maintenance costs low. In the underground copper ore operations in Poland’s KGHM mines, vast and complex belt conveyor systems have been used for horizontal haulage of the run-of-mine ore from mining departments to shafts. Based on long-term experience in analysing, testing, designing and computing belt conveyor equipment for specific operational conditions, improvements to the standard design of an underground belt conveyor for ore transportation have been proposed. As the key elements of a belt conveyor, an energy-efficient conveyor belt and optimised carrying idlers have been developed for the new generation of underground conveyors. The proposed solutions were tested individually on specially constructed test stands in the laboratory and in an experimental belt conveyor that was built with prototype parts and commissioned for regular ore haulage in a mining department of the KGHM underground mine “Lubin”. Its operation was monitored and the recorded operational parameters (loadings, stresses and strains, energy dissipation, belt tracking) were compared with those previously collected on a reference (standard) conveyor. These in-situ measurements have proved that the proposed solutions will yield significant energy savings and lower maintenance costs. Calculations based on the measurement results, performed in specialised belt conveyor design software, allow the possible savings to be estimated if the modernised conveyors supersede the standard ones in a large belt conveying system.

  18. A new global approach to obtain three-dimensional displacement maps by integrating GPS and DInSAR data

    NASA Astrophysics Data System (ADS)

    Guglielmino, F.; Nunnari, G.; Puglisi, G.; Spata, A.

    2009-04-01

    We propose a new technique, based on elastic theory, to efficiently produce an estimate of three-dimensional surface displacement maps by integrating sparse Global Positioning System (GPS) measurements of deformation and Differential Interferometric Synthetic Aperture Radar (DInSAR) maps of movements of the Earth's surface. Previous methodologies known in the literature for combining data from GPS and DInSAR surveys require two steps: the first, in which sparse GPS measurements are interpolated in order to provide GPS displacements on the DInSAR grid, and the second, in which the three-dimensional surface displacement maps are estimated using a suitable optimisation technique. One of the advantages of the proposed approach is that both steps are unified. We propose a linear matrix equation that accounts for both GPS and DInSAR data and whose solution simultaneously provides the strain tensor, the displacement field and the rigid-body rotation tensor throughout the entire investigated area. This linear matrix equation is solved using Weighted Least Squares (WLS), thus ensuring both numerical robustness and high computational efficiency. The proposed methodology was tested on both synthetic and experimental data, the latter from GPS and DInSAR measurements carried out on Mt. Etna. The goodness of the results has been evaluated using standard errors. These tests also allowed the choice of specific algorithm parameters to be optimised. The "open" structure of the method will allow other available data sets, such as additional interferograms or other geodetic data (e.g. levelling, tilt), to be taken into account in the near future, in order to achieve even higher accuracy.
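
    At the core of the approach is a weighted least-squares solve of an overdetermined linear system. The generic sketch below shows that step only, with a synthetic design matrix and weights; the paper's actual matrix couples 3-D GPS displacements with the DInSAR line-of-sight projection, which is not reproduced here.

        import numpy as np

        # Generic weighted least squares: x_hat = (A^T W A)^{-1} A^T W b.
        rng = np.random.default_rng(2)
        A = rng.standard_normal((100, 6))        # unknowns, e.g. strain/rotation/translation terms
        x_true = rng.standard_normal(6)
        sigma = np.where(np.arange(100) < 20, 0.005, 0.02)   # "GPS" rows more precise than "InSAR"
        b = A @ x_true + sigma * rng.standard_normal(100)

        W = np.diag(1.0 / sigma**2)              # weights = inverse data variances
        x_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
        print("WLS estimate:", np.round(x_hat, 3))
        print("true values :", np.round(x_true, 3))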

  19. The use of genomic coancestry matrices in the optimisation of contributions to maintain genetic diversity at specific regions of the genome.

    PubMed

    Gómez-Romano, Fernando; Villanueva, Beatriz; Fernández, Jesús; Woolliams, John A; Pong-Wong, Ricardo

    2016-01-13

    Optimal contribution methods have proved to be very efficient for controlling the rates at which coancestry and inbreeding increase and therefore, for maintaining genetic diversity. These methods have usually relied on pedigree information for estimating genetic relationships between animals. However, with the large amount of genomic information now available such as high-density single nucleotide polymorphism (SNP) chips that contain thousands of SNPs, it becomes possible to calculate more accurate estimates of relationships and to target specific regions in the genome where there is a particular interest in maximising genetic diversity. The objective of this study was to investigate the effectiveness of using genomic coancestry matrices for: (1) minimising the loss of genetic variability at specific genomic regions while restricting the overall loss in the rest of the genome; or (2) maximising the overall genetic diversity while restricting the loss of diversity at specific genomic regions. Our study shows that the use of genomic coancestry was very successful at minimising the loss of diversity and outperformed the use of pedigree-based coancestry (genetic diversity even increased in some scenarios). The results also show that genomic information allows a targeted optimisation to maintain diversity at specific genomic regions, whether they are linked or not. The level of variability maintained increased when the targeted regions were closely linked. However, such targeted management leads to an important loss of diversity in the rest of the genome and, thus, it is necessary to take further actions to constrain this loss. Optimal contribution methods also proved to be effective at restricting the loss of diversity in the rest of the genome, although the resulting rate of coancestry was higher than the constraint imposed. The use of genomic matrices when optimising contributions permits the control of genetic diversity and inbreeding at specific regions of the genome through the minimisation of partial genomic coancestry matrices. The formula used to predict coancestry in the next generation produces biased results and therefore it is necessary to refine the theory of genetic contributions when genomic matrices are used to optimise contributions.
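
    In its simplest form, the optimal-contribution problem described above is a small quadratic programme: minimise the group coancestry c'Gc/2 over non-negative contributions c that sum to one. The sketch below solves that stripped-down version for a synthetic genomic relationship matrix; it omits the genetic-merit and region-specific (partial coancestry) constraints the study works with.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(3)
        n = 12
        M = rng.standard_normal((n, 200))                  # stand-in marker matrix
        G = (M @ M.T) / 200 + np.eye(n) * 0.05             # synthetic positive-definite "G"

        objective = lambda c: 0.5 * c @ G @ c              # group coancestry c'Gc/2
        constraints = [{"type": "eq", "fun": lambda c: c.sum() - 1.0}]
        bounds = [(0.0, 1.0)] * n
        res = minimize(objective, x0=np.full(n, 1.0 / n),
                       bounds=bounds, constraints=constraints, method="SLSQP")
        print("optimal contributions:", np.round(res.x, 3))
        print("group coancestry     :", round(res.fun, 4))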

  20. Simulation of geothermal water extraction in heterogeneous reservoirs using dynamic unstructured mesh optimisation

    NASA Astrophysics Data System (ADS)

    Salinas, P.; Pavlidis, D.; Jacquemyn, C.; Lei, Q.; Xie, Z.; Pain, C.; Jackson, M.

    2017-12-01

    It is well known that the pressure gradient into a production well increases with decreasing distance to the well. To properly capture the local pressure drawdown into the well, a high grid or mesh resolution is required; moreover, the location of the well must be captured accurately. In conventional simulation models, the user must interact with the model to modify grid resolution around wells of interest, and the well location is approximated on a grid defined early in the modelling process. We report a new approach for improved simulation of near-wellbore flow in reservoir-scale models through the use of dynamic mesh optimisation and the recently presented double control volume finite element method. Time is discretised using an adaptive, implicit approach. Heterogeneous geologic features are represented as volumes bounded by surfaces. Within these volumes, termed geologic domains, the material properties are constant. Up-, cross- or down-scaling of material properties during dynamic mesh optimisation is not required, as the properties are uniform within each geologic domain. A given model typically contains numerous such geologic domains. Wells are implicitly coupled with the domain, and the fluid flow is modelled inside the wells. The method is novel for two reasons. First, a fully unstructured tetrahedral mesh is used to discretise space, and the spatial location of the well is specified via a line vector, ensuring its location is preserved even if the mesh is modified during the simulation. The well location is therefore accurately captured, and the approach allows complex well trajectories and wells with many laterals to be modelled. Second, computational efficiency is increased by the use of dynamic mesh optimisation, in which an unstructured mesh adapts in space and time to key solution fields such as pressure, velocity or temperature, while preserving the geometry of the geologic domains. This also increases the quality of the solutions by placing higher resolution where it is required to reduce an error metric based on the Hessian of the field. This allows the local pressure drawdown to be captured without user-driven modification of the mesh. We demonstrate that the method has wide application in reservoir-scale models of geothermal fields and regional models of groundwater resources.
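
    As a pointer to what a Hessian-based error metric measures, the sketch below computes a Frobenius-norm Hessian indicator for a stand-in pressure field with a sharp drawdown, on a structured grid for simplicity; the actual method adapts an unstructured tetrahedral mesh, which is not attempted here.

        import numpy as np

        # Stand-in "pressure" field with a sharp local drawdown near a well at (0.5, 0.5).
        nx = ny = 101
        x = np.linspace(0.0, 1.0, nx)
        y = np.linspace(0.0, 1.0, ny)
        X, Y = np.meshgrid(x, y, indexing="ij")
        p = -np.log(np.hypot(X - 0.5, Y - 0.5) + 1e-2)

        dx = x[1] - x[0]
        p_x, p_y = np.gradient(p, dx, dx)
        p_xx, p_xy = np.gradient(p_x, dx, dx)
        _,    p_yy = np.gradient(p_y, dx, dx)

        # Indicator = Frobenius norm of the Hessian; flag the cells where it is largest.
        indicator = np.sqrt(p_xx**2 + 2.0 * p_xy**2 + p_yy**2)
        threshold = np.quantile(indicator, 0.95)
        print("cells flagged for refinement:", int((indicator > threshold).sum()),
              "of", indicator.size)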
