Sample records for robust optimisation strategy

  1. Optimisation of lateral car dynamics taking into account parameter uncertainties

    NASA Astrophysics Data System (ADS)

    Busch, Jochen; Bestle, Dieter

    2014-02-01

    Simulation studies on an active all-wheel-steering car show that disturbances of vehicle parameters have a high influence on lateral car dynamics. This motivates the need for a design that is robust against such parameter uncertainties. A specific parametrisation is established combining deterministic, velocity-dependent steering control parameters with partly uncertain, velocity-independent vehicle parameters for simultaneous use in a numerical optimisation process. Model-based objectives are formulated and summarised in a multi-objective optimisation problem in which especially the lateral steady-state behaviour is improved by an adaptation strategy based on measurable uncertainties. The normally distributed uncertainties are generated by optimal Latin hypercube sampling, and a response-surface-based strategy helps to cut down time-consuming model evaluations, which makes it possible to use a genetic optimisation algorithm. Optimisation results are discussed in different criterion spaces, and the achieved improvements confirm the validity of the proposed procedure.
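    The sampling step described in this record can be sketched with SciPy's quasi-Monte Carlo module; this is a minimal illustration in which the parameter count, means and standard deviations are invented placeholders, not values from the paper:

```python
import numpy as np
from scipy.stats import norm, qmc

# Illustrative (invented) means and standard deviations for three
# normally distributed vehicle parameters.
means = np.array([1500.0, 2.7, 80000.0])
stds = np.array([75.0, 0.05, 8000.0])

# Optimised Latin hypercube in [0, 1)^3 (the "random-cd" option improves the
# centred discrepancy), then mapped through the normal quantile function to
# N(mean, std) per dimension.
sampler = qmc.LatinHypercube(d=3, optimization="random-cd", seed=0)
u = sampler.random(n=100)
samples = norm.ppf(u) * stds + means
```

    Each column of `samples` is stratified: exactly one draw falls in each of the 100 equal-probability bins per dimension, which is what makes the design efficient for building a response surface.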

  2. Tail mean and related robust solution concepts

    NASA Astrophysics Data System (ADS)

    Ogryczak, Włodzimierz

    2014-01-01

    Robust optimisation might be viewed as a multicriteria optimisation problem where objectives correspond to the scenarios, although their probabilities are unknown or imprecise. The simplest robust solution concept represents a conservative approach focused on optimisation of the worst-case scenario results. A softer concept allows one to optimise the tail mean, thus combining performances under multiple worst scenarios. We show that, when considering robust models that allow the probabilities to vary only within given intervals, the tail mean represents the robust solution only for upper-bounded probabilities. For arbitrary intervals of probabilities, the corresponding robust solution may be expressed by the optimisation of appropriately combined mean and tail mean criteria, thus remaining easily implementable with auxiliary linear inequalities. Moreover, we use the tail mean concept to develop linear-programming-implementable robust solution concepts related to risk-averse optimisation criteria.
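    The "auxiliary linear inequalities" implementation of the tail mean can be sketched as a small linear programme (the Rockafellar–Uryasev-style reformulation): maximise t − (1/(βm))·Σ dᵢ subject to dᵢ ≥ t − yᵢ(x), dᵢ ≥ 0, where yᵢ(x) is the outcome under scenario i. The scenario data below are invented for illustration:

```python
import numpy as np
from scipy.optimize import linprog

# Scenario outcomes: rows = m scenarios, columns = n decisions (invented data).
R = np.array([[0.05, 0.02],
              [0.01, 0.03],
              [-0.04, 0.02],
              [0.06, 0.01]])
m, n = R.shape
beta = 0.5  # tail size: mean over the worst beta*m scenarios

# Variables z = [x (n), t (1), d (m)]; linprog minimises, so we minimise
# -t + (1/(beta*m)) * sum(d), i.e. maximise the tail mean.
c = np.concatenate([np.zeros(n), [-1.0], np.full(m, 1.0 / (beta * m))])

# d_i >= t - R_i @ x   <=>   -R_i @ x + t - d_i <= 0
A_ub = np.hstack([-R, np.ones((m, 1)), -np.eye(m)])
b_ub = np.zeros(m)

# Decision weights form a simplex: sum(x) = 1, x >= 0; t is free, d >= 0.
A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(m)]).reshape(1, -1)
b_eq = [1.0]
bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * m

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x = res.x[:n]  # tail-mean-optimal decision weights
```

    For β = 1/m this reduces to the worst-case (max-min) criterion; for β = 1 it is the plain mean.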

  3. Group search optimiser-based optimal bidding strategies with no Karush-Kuhn-Tucker optimality conditions

    NASA Astrophysics Data System (ADS)

    Yadav, Naresh Kumar; Kumar, Mukesh; Gupta, S. K.

    2017-03-01

    The general strategic bidding procedure has been formulated in the literature as a bi-level search problem, in which the offer curve tends to minimise the market clearing function and to maximise the profit. Computationally, this is complex, and hence researchers have adopted Karush-Kuhn-Tucker (KKT) optimality conditions to transform the model into a single-level maximisation problem. However, the profit maximisation problem with KKT optimality conditions poses a great challenge to classical optimisation algorithms. The problem becomes even more complex after the inclusion of transmission constraints. This paper simplifies the profit maximisation problem to a minimisation function, in which the transmission constraints, the operating limits and the ISO market clearing functions are considered with no KKT optimality conditions. The derived function is solved using the group search optimiser (GSO), a robust population-based optimisation algorithm. Experimental investigation is carried out on IEEE 14- as well as IEEE 30-bus systems, and the performance is compared against differential evolution-based, genetic algorithm-based and particle swarm optimisation-based strategic bidding methods. The simulation results demonstrate that the profit obtained through GSO-based bidding strategies is higher than with the other three methods.

  4. A Bayesian Approach for Sensor Optimisation in Impact Identification

    PubMed Central

    Mallardo, Vincenzo; Sharif Khodaei, Zahra; Aliabadi, Ferri M. H.

    2016-01-01

    This paper presents a Bayesian approach for optimising the position of sensors aimed at impact identification in composite structures under operational conditions. The uncertainty in the sensor data is represented by statistical distributions of the recorded signals. An optimisation strategy based on the genetic algorithm is proposed to find the best sensor combination for locating impacts on composite structures. A Bayesian-based objective function is adopted in the optimisation procedure as an indicator of the performance of meta-models developed for different sensor combinations to locate various impact events. To represent a real structure under operational load and to increase the reliability of the Structural Health Monitoring (SHM) system, the probability of malfunctioning sensors is included in the optimisation. The reliability and the robustness of the procedure are tested with experimental and numerical examples. Finally, the proposed optimisation algorithm is applied to a composite stiffened panel for both uniform and non-uniform probabilities of impact occurrence. PMID:28774064
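    The paper's search uses a genetic algorithm with a Bayesian meta-model objective; as a hedged stand-in, the sketch below enumerates sensor subsets exhaustively and scores them with a toy distance-based objective that includes random sensor failures. Everything here (positions, failure probability, objective) is invented to illustrate the structure of the problem only:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
candidates = rng.uniform(0.0, 1.0, size=(8, 2))  # candidate sensor positions (toy)
impacts = rng.uniform(0.0, 1.0, size=(50, 2))    # sampled impact locations (toy)
p_fail = 0.1                                     # probability a sensor malfunctions

def expected_coverage(subset, n_mc=200):
    """Toy objective: mean distance from each impact to its nearest *working*
    sensor, averaged over Monte Carlo draws of sensor failures (lower is better)."""
    pos = candidates[list(subset)]
    total = 0.0
    for _ in range(n_mc):
        working = pos[rng.random(len(pos)) > p_fail]
        if len(working) == 0:
            total += 1.0  # penalty when no sensor survives
            continue
        d = np.linalg.norm(impacts[:, None, :] - working[None, :, :], axis=2)
        total += d.min(axis=1).mean()
    return total / n_mc

# Exhaustive search over all 4-of-8 subsets; a GA would explore this
# combinatorial space heuristically for realistic candidate counts.
best = min(itertools.combinations(range(8), 4), key=expected_coverage)
```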

  5. H2/H∞ control for grid-feeding converter considering system uncertainty

    NASA Astrophysics Data System (ADS)

    Li, Zhongwen; Zang, Chuanzhi; Zeng, Peng; Yu, Haibin; Li, Shuhui; Fu, Xingang

    2017-05-01

    Three-phase grid-feeding converters are key components for integrating distributed generation and renewable power sources into the power utility. Conventionally, proportional-integral and proportional-resonant control strategies are applied to control the output power or current of a grid-feeding converter (GFC). However, those control strategies have poor transient performance and are not robust against uncertainties and volatilities in the system. This paper proposes an H2/H∞-based control strategy which can mitigate the above restrictions. Uncertainty and disturbance are included in the formulation of the GFC state-space model, making it reflect practical system conditions more accurately. The paper uses a convex optimisation method to design the H2/H∞-based optimal controller. Instead of using a guess-and-check method, particle swarm optimisation is used to search for an H2/H∞ optimal controller. Several case studies, implemented in both simulation and experiment, verify the superiority of the proposed control strategy over traditional PI control methods, especially under dynamic and variable system conditions.
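    The PSO search over controller parameters can be illustrated with a generic particle swarm skeleton. The closed-loop H2/H∞ cost evaluation from the paper is replaced here by a simple convex surrogate (an invented quadratic with minimum at gains (2, −1)) so the optimiser mechanics stand alone:

```python
import numpy as np

rng = np.random.default_rng(1)

def cost(k):
    # Placeholder for the closed-loop H2/H-infinity norm evaluation; a convex
    # surrogate lets the PSO mechanics be checked in isolation.
    return np.sum((k - np.array([2.0, -1.0])) ** 2)

n_particles, dim, iters = 20, 2, 100
x = rng.uniform(-5.0, 5.0, (n_particles, dim))  # candidate gain vectors
v = np.zeros_like(x)
pbest = x.copy()
pbest_f = np.array([cost(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    # Standard inertia + cognitive + social velocity update.
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    f = np.array([cost(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()
```

    In the paper's setting, `cost` would synthesise the closed-loop system for the candidate gains and return the weighted H2/H∞ norm instead.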

  6. Adding flexibility to the search for robust portfolios in non-linear water resource planning

    NASA Astrophysics Data System (ADS)

    Tomlinson, James; Harou, Julien

    2017-04-01

    To date, robust optimisation of water supply systems has sought to find portfolios or strategies that are robust to a range of uncertainties or scenarios. The search for a single portfolio that is robust in all scenarios is necessarily suboptimal compared to a portfolio optimised for a single deterministic future scenario. By contrast, establishing a separate portfolio for each future scenario is unhelpful to the planner, who must make a single decision today under deep uncertainty. In this work we show that a middle ground is possible by allowing a small number of different portfolios to be found that are each robust to a different subset of the global scenarios. We use evolutionary algorithms and a simple water resource system model to demonstrate this approach. The primary contribution is to demonstrate that flexibility can be added to the search for portfolios, in complex non-linear systems, at the expense of complete robustness across all future scenarios. In this context we define flexibility as the ability to design a portfolio in which some decisions are delayed, but those decisions that are not delayed are themselves shown to be robust to the future. We recognise that some decisions in our portfolio are more important than others. An adaptive portfolio is found by allowing no flexibility for these near-term "important" decisions, but maintaining flexibility in the remaining longer-term decisions. In this sense we create an effective two-stage decision process for a non-linear water resource supply system. We show how this reduces a measure of regret relative to the inflexible robust solution for the same system.

  7. Robustness analysis of bogie suspension components Pareto optimised values

    NASA Astrophysics Data System (ADS)

    Mousavi Bideleh, Seyed Milad

    2017-08-01

    The bogie suspension system of high-speed trains can significantly affect vehicle performance. Multiobjective optimisation problems are often formulated and solved to find Pareto-optimised values of the suspension components and improve cost efficiency in railway operations from different perspectives. Uncertainties in the design parameters of the suspension system can negatively influence the dynamic behaviour of railway vehicles. In this regard, robustness analysis of the bogie dynamic response with respect to uncertainties in the suspension design parameters is considered. A one-car railway vehicle model with 50 degrees of freedom and wear/comfort Pareto-optimised values of bogie suspension components is chosen for the analysis. The longitudinal and lateral primary stiffnesses, the longitudinal and vertical secondary stiffnesses, and the yaw damping are considered as the five design parameters. The effects of parameter uncertainties on wear, ride comfort, track shift force, stability, and risk of derailment are studied by varying the design parameters around their respective Pareto-optimised values according to a lognormal distribution with different coefficients of variation (COVs). The robustness analysis is carried out based on the maximum entropy concept. The multiplicative dimensional reduction method is utilised to simplify the calculation of fractional moments and improve computational efficiency. The results show that the dynamic response of the vehicle with wear/comfort Pareto-optimised values of bogie suspension is robust against uncertainties in the design parameters, and that the probability of failure is small for parameter uncertainties with COV up to 0.1.
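    Sampling a design parameter around its nominal value from a lognormal distribution with a prescribed COV requires converting (mean, COV) into the parameters of the underlying normal; the nominal value below is an invented placeholder:

```python
import numpy as np

rng = np.random.default_rng(42)

def lognormal_around(nominal, cov, size):
    """Sample a lognormal with mean `nominal` and coefficient of variation `cov`.

    For X ~ Lognormal(mu, sigma): Var(ln X) = ln(1 + cov^2) and
    mu = ln(nominal) - sigma^2 / 2 so that E[X] = nominal exactly.
    """
    sigma2 = np.log(1.0 + cov ** 2)
    mu = np.log(nominal) - 0.5 * sigma2
    return rng.lognormal(mean=mu, sigma=np.sqrt(sigma2), size=size)

# e.g. perturb a nominal yaw-damping value (invented number) with COV = 0.1
samples = lognormal_around(20000.0, 0.1, size=10000)
```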

  8. Spatial distribution, sampling precision and survey design optimisation with non-normal variables: The case of anchovy (Engraulis encrasicolus) recruitment in Spanish Mediterranean waters

    NASA Astrophysics Data System (ADS)

    Tugores, M. Pilar; Iglesias, Magdalena; Oñate, Dolores; Miquel, Joan

    2016-02-01

    In the Mediterranean Sea, the European anchovy (Engraulis encrasicolus) plays a key role in ecological and economic terms. Ensuring stock sustainability requires the provision of crucial information, such as species spatial distribution or unbiased abundance and precision estimates, so that management strategies can be defined (e.g. fishing quotas, temporal closure areas or marine protected areas, MPAs). Furthermore, the estimation of the precision of global abundance at different sampling intensities can be used for survey design optimisation. Geostatistics provides a priori unbiased estimations of the spatial structure, global abundance and precision for autocorrelated data. However, its application to non-Gaussian data introduces difficulties in the analysis and can compromise robustness and unbiasedness. The present study applied intrinsic geostatistics in two dimensions in order to (i) analyse the spatial distribution of anchovy in Spanish Western Mediterranean waters during the species' recruitment season, (ii) produce distribution maps, (iii) estimate global abundance and its precision, (iv) analyse the effect of changing the sampling intensity on the precision of global abundance estimates and (v) evaluate the effects of several methodological options on the robustness of all the analysed parameters. The results suggested that while the spatial structure was usually non-robust to the tested methodological options when working with the original dataset, it became more robust for the transformed datasets (especially for the log-backtransformed dataset). The global abundance was always highly robust, and the global precision was highly or moderately robust to most of the methodological options, except for data transformation.
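    The spatial-structure step of such an analysis rests on the empirical semivariogram, γ(h) = ½·E[(z(s) − z(s+h))²] estimated over distance bins; a minimal sketch on invented survey data (positions in km and log-transformed densities) is:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy survey data: 300 sample positions in a 100 x 100 km area and a
# spatially structured, log-transformed density with a small nugget.
xy = rng.uniform(0.0, 100.0, size=(300, 2))
z = np.sin(xy[:, 0] / 15.0) + 0.1 * rng.standard_normal(300)

# Pairwise distances and squared half-differences.
d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
sq = 0.5 * (z[:, None] - z[None, :]) ** 2

# Empirical semivariogram: average 0.5*(z_i - z_j)^2 over pairs whose
# separation falls in each of 10 distance bins up to 50 km.
i, j = np.triu_indices(len(z), k=1)
bins = np.linspace(0.0, 50.0, 11)
which = np.digitize(d[i, j], bins)
gamma = np.array([sq[i, j][which == b].mean() for b in range(1, len(bins))])
```

    Fitting a variogram model to `gamma` then underpins kriging of the distribution map and the global-abundance precision estimate.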

  9. Review of sample preparation strategies for MS-based metabolomic studies in industrial biotechnology.

    PubMed

    Causon, Tim J; Hann, Stephan

    2016-09-28

    Fermentation and cell culture biotechnology in the form of so-called "cell factories" now play an increasingly significant role in the production of both large (e.g. proteins, biopharmaceuticals) and small organic molecules for a wide variety of applications. However, the associated metabolic engineering optimisation processes, relying on genetic modification of the organisms used in cell factories or alteration of production conditions, remain a challenging undertaking for improving the final yield and quality of cell factory products. In addition to genomic, transcriptomic and proteomic workflows, analytical metabolomics continues to play a critical role in studying detailed aspects of critical pathways (e.g. via targeted quantification of metabolites), identification of biosynthetic intermediates, and also in phenotype differentiation and the elucidation of previously unknown pathways (e.g. via non-targeted strategies). However, the diversity of primary and secondary metabolites and the broad concentration ranges encompassed during typical biotechnological processes mean that simultaneous extraction and robust analytical determination of all parts of interest of the metabolome is effectively impossible. As the integration of metabolome data with transcriptome and proteome data is an essential goal of both targeted and non-targeted methods addressing production optimisation goals, additional sample preparation steps beyond the necessary sampling, quenching and extraction protocols, including clean-up, analyte enrichment, and derivatisation, are important considerations for some classes of metabolites, especially those present in low concentrations or exhibiting poor stability. This contribution critically assesses the potential of current sample preparation strategies applied in metabolomic studies of industrially relevant cell factory organisms using mass spectrometry-based platforms primarily coupled to liquid-phase sample introduction (i.e. flow injection, liquid chromatography, or capillary electrophoresis). Particular focus is placed on the selectivity and degree of enrichment attainable, as well as demands of speed, absolute quantification, robustness and, ultimately, consideration of fully integrated bioanalytical solutions to optimise sample handling and throughput. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Design and experimental validation of linear and nonlinear vehicle steering control strategies

    NASA Astrophysics Data System (ADS)

    Menhour, Lghani; Lechner, Daniel; Charara, Ali

    2012-06-01

    This paper proposes the design of three control laws dedicated to vehicle steering control, two based on robust linear control strategies and one based on a nonlinear control strategy, and presents a comparison between them. The two robust linear control laws (indirect and direct methods) are built around M linear bicycle models; each of these control laws is composed of two sets of M proportional-integral-derivative (PID) controllers: one set of M PID controllers to control the lateral deviation and the other to control the vehicle yaw angle. The indirect control law is designed using an oscillation method and a nonlinear optimisation subject to an H∞ constraint. The direct control law is designed using a linear matrix inequality optimisation in order to achieve H∞ performance. The nonlinear control method used for the correction of the lateral deviation is based on a continuous first-order sliding-mode controller. The different methods are designed using a linear bicycle vehicle model with varying parameters, but the aim is to simulate the nonlinear vehicle behaviour under high dynamic demands with a four-wheel vehicle model. These steering vehicle controls are validated experimentally using data acquired with a laboratory vehicle, a Peugeot 307, developed by the National Institute for Transport and Safety Research - Accident Mechanism Analysis Laboratory (INRETS-MA), and their performance results are compared. Moreover, an unknown-input sliding-mode observer is introduced to estimate the road bank angle.

  11. Development of perspective-based water management strategies for the Rhine and Meuse basins.

    PubMed

    van Deursen, W P A; Middelkoop, H

    2005-01-01

    Water management is surrounded by uncertainties. It thus has to answer the question: given the uncertainties, what is the best management strategy? This paper describes the application of the perspectives method to water management in the Rhine and Meuse basins. The perspectives method provides a structured framework for analysing water management strategies under uncertainty. Various strategies are clustered into perspectives according to their underlying assumptions. This framework allows for an analysis of current water management strategies, but also for an evaluation of the robustness of proposed future strategies. It becomes clear that no water management strategy is superior to the others, but that inherent choices on risk acceptance and costs create a real political dilemma that will not be solved by further optimisation.

  12. Exploring critical pathways for urban water management to identify robust strategies under deep uncertainties.

    PubMed

    Urich, Christian; Rauch, Wolfgang

    2014-12-01

    Long-term projections for key drivers needed in urban water infrastructure planning, such as climate change, population growth, and socio-economic changes, are deeply uncertain. Traditional planning approaches rely heavily on these projections, which, if a projection stays unfulfilled, can lead to problematic infrastructure decisions causing high operational costs and/or lock-in effects. New approaches based on exploratory modelling take a fundamentally different view. Their aim is to identify an adaptation strategy that performs well under many future scenarios, instead of optimising a strategy for a handful. However, a modelling tool to support strategic planning by testing the implications of adaptation strategies under deeply uncertain conditions for urban water management does not yet exist. This paper presents a first step towards a new generation of such strategic planning tools by combining innovative modelling tools, which co-evolve the urban environment and urban water infrastructure under many different future scenarios, with robust decision making. The developed approach is applied to the city of Innsbruck, Austria, which is evolved spatially explicitly 20 years into the future under 1000 scenarios to test the robustness of different adaptation strategies. Key findings of this paper show that: (1) such an approach can be used to successfully identify parameter ranges of key drivers in which a desired performance criterion is not fulfilled, which is an important indicator of the robustness of an adaptation strategy; and (2) analysis of the rich dataset gives new insights into the adaptive responses of agents to key drivers in the urban system by modifying a strategy. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Treatment planning optimisation in proton therapy

    PubMed Central

    McGowan, S E; Burnet, N G; Lomax, A J

    2013-01-01

    The goal of radiotherapy is to achieve uniform target coverage while sparing normal tissue. In proton therapy, the same sources of geometric uncertainty are present as in conventional radiotherapy. However, an important and fundamental difference in proton therapy is that protons have a finite range, highly dependent on the electron density of the material they are traversing, resulting in a steep dose gradient at the distal edge of the Bragg peak. Therefore, an accurate knowledge of the sources and magnitudes of the uncertainties affecting the proton range is essential for producing plans which are robust to these uncertainties. This review describes the current knowledge of the geometric uncertainties and discusses their impact on proton dose plans. The need for patient-specific validation is essential, and in cases of complex intensity-modulated proton therapy plans the use of a planning target volume (PTV) may fail to ensure coverage of the target. In cases where a PTV cannot be used, other methods of quantifying plan quality have been investigated. A promising option is to incorporate uncertainties directly into the optimisation algorithm. A further development is the inclusion of robustness in a multicriteria optimisation framework, allowing a multi-objective Pareto optimisation function to balance robustness and conformity. The question remains as to whether adaptive therapy can become an integral part of proton therapy, allowing re-optimisation during the course of a patient's treatment. The challenge of ensuring that plans are robust to range uncertainties in proton therapy remains, although these methods can provide practical solutions. PMID:23255545

  14. Stability and optimised H∞ control of tripped and untripped vehicle rollover

    NASA Astrophysics Data System (ADS)

    Jin, Zhilin; Zhang, Lei; Zhang, Jiale; Khajepour, Amir

    2016-10-01

    Vehicle rollover is a serious type of traffic accident. In order to accurately evaluate the possibility of untripped and some special tripped vehicle rollovers, and to prevent vehicle rollover under unpredictable parameter variations and harsh driving conditions, a new rollover index and an anti-roll control strategy are proposed in this paper. Taking the roll-induced deflections of steering and suspension at the axles into consideration, a six-degrees-of-freedom dynamic model is established, including lateral, yaw, roll, and vertical motions of the sprung and unsprung masses. From vehicle dynamics theory, a new rollover index is developed to predict vehicle rollover risk under both untripped and special tripped situations. This new rollover index is validated by Carsim simulations. In addition, an H-infinity controller with an electro-hydraulic brake system is optimised by a genetic algorithm to improve the anti-rollover performance of the vehicle. The stability and robustness of the active rollover prevention control system are analysed through numerical simulations. The results show that the control system can significantly improve the critical speed of vehicle rollover and is robust to variations in the number of passengers and the longitudinal position of the centre of gravity.

  15. Robust distributed model predictive control of linear systems with structured time-varying uncertainties

    NASA Astrophysics Data System (ADS)

    Zhang, Langwen; Xie, Wei; Wang, Jingcheng

    2017-11-01

    In this work, the synthesis of robust distributed model predictive control (MPC) is presented for a class of linear systems subject to structured time-varying uncertainties. By decomposing a global system into smaller-dimensional subsystems, a set of distributed MPC controllers, instead of a centralised controller, is designed. To ensure the robust stability of the closed-loop system with respect to model uncertainties, distributed state feedback laws are obtained by solving a min-max optimisation problem. The design of robust distributed MPC is then transformed into solving a minimisation problem with linear matrix inequality constraints. An iterative online algorithm with an adjustable maximum number of iterations is proposed to coordinate the distributed controllers to achieve a global performance. The simulation results show the effectiveness of the proposed robust distributed MPC algorithm.

  16. Optimising qualitative longitudinal analysis: Insights from a study of traumatic brain injury recovery and adaptation.

    PubMed

    Fadyl, Joanna K; Channon, Alexis; Theadom, Alice; McPherson, Kathryn M

    2017-04-01

    Knowledge about aspects that influence recovery and adaptation in the postacute phase of disabling health events is key to understanding how best to provide appropriate rehabilitation and health services. Qualitative longitudinal research makes it possible to look for patterns, key time points and critical moments that could be vital for interventions and supports. However, strategies that support robust data management and analysis for longitudinal qualitative research in healthcare are not well documented in the literature. This article reviews three challenges encountered in a large longitudinal qualitative descriptive study of experiences of recovery and adaptation after traumatic brain injury in New Zealand, and the strategies and technologies used to address them. These were (i) tracking coding and analysis decisions during an extended analysis period; (ii) navigating interpretations over time and in response to new data; and (iii) exploiting data volume and complexity. Concept mapping during coding review, a considered combination of information technologies, employing both cross-sectional and narrative analysis, and an expectation that subanalyses would be required for key topics helped us manage the study in a way that facilitated useful and novel insights. These strategies could be applied in other qualitative longitudinal studies in healthcare inquiry to optimise data analysis and stimulate important insights. © 2016 John Wiley & Sons Ltd.

  17. Testing the robustness of optimal access vessel fleet selection for operation and maintenance of offshore wind farms

    DOE PAGES

    Sperstad, Iver Bakken; Stålhane, Magnus; Dinwoodie, Iain; ...

    2017-09-23

    Optimising the operation and maintenance (O&M) and logistics strategy of offshore wind farms implies the decision problem of selecting the vessel fleet for O&M. Different strategic decision support tools can be applied to this problem, but much uncertainty remains regarding both input data and modelling assumptions. Our paper aims to investigate and ultimately reduce this uncertainty by comparing four simulation tools, one mathematical optimisation tool and one analytic spreadsheet-based tool applied to select the O&M access vessel fleet that minimises the total O&M cost of a reference wind farm. The comparison shows that the tools generally agree on the optimal vessel fleet, but only partially agree on the relative ranking of the different vessel fleets in terms of total O&M cost. The robustness of the vessel fleet selection to various input data assumptions was tested, and the ranking was found to be particularly sensitive to the vessels' limiting significant wave height for turbine access. This is also the parameter with the greatest discrepancy between the tools, which implies that accurate quantification and modelling of this parameter is crucial. The ranking is moderately sensitive to turbine failure rates and vessel day rates, but less sensitive to electricity price and vessel transit speed.

  19. A robust optimisation approach to the problem of supplier selection and allocation in outsourcing

    NASA Astrophysics Data System (ADS)

    Fu, Yelin; Keung Lai, Kin; Liang, Liang

    2016-03-01

    We formulate the supplier selection and allocation problem in outsourcing under an uncertain environment as a stochastic programming problem. Both the decision-maker's attitude towards risk and the penalty parameters for demand deviation are considered in the objective function. A service level agreement, upper bound for each selected supplier's allocation and the number of selected suppliers are considered as constraints. A novel robust optimisation approach is employed to solve this problem under different economic situations. Illustrative examples are presented with managerial implications highlighted to support decision-making.
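    A deterministic-equivalent sketch of such a scenario-based selection-and-allocation model can be written as a linear programme: choose allocations within supplier capacities to minimise purchase cost plus an expected linear penalty for demand deviation. The costs, capacities, demand scenarios and penalty below are all invented for illustration:

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance: 3 suppliers with unit costs and capacities; 3 equally likely
# demand scenarios; per-unit penalty for over/under-supply (invented numbers).
c = np.array([4.0, 5.0, 6.0])
cap = np.array([40.0, 50.0, 60.0])
demands = np.array([70.0, 90.0, 110.0])
prob = np.full(3, 1.0 / 3.0)
penalty = 8.0

n, m = len(c), len(demands)
# Variables: [x (n allocations), dev (m deviations)] with dev_s >= |sum(x) - d_s|.
obj = np.concatenate([c, penalty * prob])

A_ub, b_ub = [], []
for s, d in enumerate(demands):
    e = np.zeros(m)
    e[s] = 1.0
    A_ub.append(np.concatenate([np.ones(n), -e]))   # sum(x) - dev_s <= d_s
    b_ub.append(d)
    A_ub.append(np.concatenate([-np.ones(n), -e]))  # d_s - sum(x) <= dev_s
    b_ub.append(-d)

bounds = [(0.0, cap_i) for cap_i in cap] + [(0.0, None)] * m
res = linprog(obj, A_ub=np.array(A_ub), b_ub=b_ub, bounds=bounds)
```

    The paper's model adds further structure (risk attitude in the objective, a service-level agreement, and a cap on the number of selected suppliers); the latter would require binary selection variables and a mixed-integer solver.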

  20. Boundary element based multiresolution shape optimisation in electrostatics

    NASA Astrophysics Data System (ADS)

    Bandara, Kosala; Cirak, Fehmi; Of, Günther; Steinbach, Olaf; Zapletal, Jan

    2015-09-01

    We consider the shape optimisation of high-voltage devices subject to electrostatic field equations by combining fast boundary elements with multiresolution subdivision surfaces. The geometry of the domain is described with subdivision surfaces and different resolutions of the same geometry are used for optimisation and analysis. The primal and adjoint problems are discretised with the boundary element method using a sufficiently fine control mesh. For shape optimisation the geometry is updated starting from the coarsest control mesh with increasingly finer control meshes. The multiresolution approach effectively prevents the appearance of non-physical geometry oscillations in the optimised shapes. Moreover, there is no need for mesh regeneration or smoothing during the optimisation due to the absence of a volume mesh. We present several numerical experiments and one industrial application to demonstrate the robustness and versatility of the developed approach.

  1. Multi-Optimisation Consensus Clustering

    NASA Astrophysics Data System (ADS)

    Li, Jian; Swift, Stephen; Liu, Xiaohui

    Ensemble Clustering has been developed to provide an alternative way of obtaining more stable and accurate clustering results. It aims to avoid the biases of individual clustering algorithms. However, it is still a challenge to develop an efficient and robust method for Ensemble Clustering. Based on an existing ensemble clustering method, Consensus Clustering (CC), this paper introduces an advanced Consensus Clustering algorithm called Multi-Optimisation Consensus Clustering (MOCC), which utilises an optimised Agreement Separation criterion and a Multi-Optimisation framework to improve the performance of CC. Fifteen different data sets are used for evaluating the performance of MOCC. The results reveal that MOCC can generate more accurate clustering results than the original CC algorithm.
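    The co-association idea at the core of many consensus clustering schemes can be sketched as follows: run a base clusterer many times, record how often each pair of points lands in the same cluster, and link pairs that agree in a majority of runs. The tiny k-means, the toy data and the 50% agreement threshold are illustrative choices, not MOCC's actual criteria:

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    """Tiny Lloyd's-algorithm k-means used to build the ensemble."""
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(X[:, None] - centres[None], axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return labels

# Two well-separated toy clusters of 20 points each.
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(5.0, 0.3, (20, 2))])

# Co-association matrix: fraction of ensemble runs in which two points
# share a cluster (label permutations between runs do not matter).
runs = [kmeans(X, 2) for _ in range(20)]
coassoc = np.mean([l[:, None] == l[None, :] for l in runs], axis=0)

# Consensus links: pairs that co-cluster in more than half of the runs.
consensus = coassoc > 0.5
```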

  2. Determination of optimal ultrasound planes for the initialisation of image registration during endoscopic ultrasound-guided procedures.

    PubMed

    Bonmati, Ester; Hu, Yipeng; Gibson, Eli; Uribarri, Laura; Keane, Geri; Gurusami, Kurinchi; Davidson, Brian; Pereira, Stephen P; Clarkson, Matthew J; Barratt, Dean C

    2018-06-01

    Navigation of endoscopic ultrasound (EUS)-guided procedures of the upper gastrointestinal (GI) system can be technically challenging due to the small fields-of-view of ultrasound and optical devices, as well as the anatomical variability and limited number of orienting landmarks during navigation. Co-registration of an EUS device and a pre-procedure 3D image can enhance the ability to navigate. However, the fidelity of this contextual information depends on the accuracy of registration. The purpose of this study was to develop and test the feasibility of a simulation-based planning method for pre-selecting patient-specific EUS-visible anatomical landmark locations to maximise the accuracy and robustness of a feature-based multimodality registration method. A registration approach was adopted in which landmarks are registered to anatomical structures segmented from the pre-procedure volume. The predicted target registration errors (TREs) of EUS-CT registration were estimated using simulated visible anatomical landmarks and a Monte Carlo simulation of landmark localisation error. The optimal planes were selected based on the 90th percentile of TREs, which provide a robust and more accurate EUS-CT registration initialisation. The method was evaluated by comparing the accuracy and robustness of registrations initialised using optimised planes versus non-optimised planes using manually segmented CT images and simulated ([Formula: see text]) or retrospective clinical ([Formula: see text]) EUS landmarks. The results show a lower 90th percentile TRE when registration is initialised using the optimised planes compared with a non-optimised initialisation approach (p value [Formula: see text]). The proposed simulation-based method to find optimised EUS planes and landmarks for EUS-guided procedures may have the potential to improve registration accuracy. Further work will investigate applying the technique in a clinical setting.
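
    As a simplified sketch of the simulation idea (not the authors' pipeline): perturb candidate landmarks with Gaussian localisation error, re-fit a rigid point-based registration each time, and report a high percentile of the resulting target registration error. The landmark coordinates and error magnitude below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def rigid_register(src, dst):
    """Least-squares rigid (rotation + translation) alignment of src to dst,
    via the Kabsch/Procrustes SVD solution."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def tre_percentile(landmarks, target, sigma=1.0, n_trials=1000, q=90):
    """Monte Carlo estimate of the q-th percentile target registration error
    when each landmark is localised with isotropic Gaussian error (std sigma)."""
    tres = []
    for _ in range(n_trials):
        noisy = landmarks + rng.normal(0, sigma, landmarks.shape)
        R, t = rigid_register(noisy, landmarks)
        tres.append(np.linalg.norm((R @ target + t) - target))
    return np.percentile(tres, q)

landmarks = np.array([[0., 0, 0], [50, 0, 0], [0, 50, 0], [0, 0, 50]])
target = np.array([25., 25, 25])
print(tre_percentile(landmarks, target, sigma=1.0))  # 90th-percentile TRE, same units as the inputs
```

    Plane selection then amounts to repeating this estimate for each candidate landmark configuration and choosing the one with the lowest 90th percentile.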

  3. Robust imaging and gene delivery to study human lymphoblastoid cell lines.

    PubMed

    Jolly, Lachlan A; Sun, Ying; Carroll, Renée; Homan, Claire C; Gecz, Jozef

    2018-06-20

    Lymphoblastoid cell lines (LCLs) have been by far the most prevalent cell type used to study the genetics underlying normal and disease-relevant human phenotypic variation, across personal to epidemiological scales. In contrast, only a few studies have explored the use of LCLs in functional genomics and mechanistic studies. Two major reasons are technical: (1) interrogating the sub-cellular spatial information of LCLs is challenging because of their non-adherent nature, and (2) LCLs are refractory to gene transfection. Methodological details relating to techniques that overcome these limitations are scarce and largely inadequate without additional knowledge and expertise, and their optimisation has never been described. Here we compare, optimise, and convey such methods in depth. We provide a robust method to adhere LCLs to coverslips, which maintained cellular integrity and morphology and permitted visualisation of sub-cellular structures and protein localisation. Next, we developed the use of lentiviral-based gene delivery to LCLs. Through empirical and combinatorial testing of multiple transduction conditions, we improved transduction efficiency from 3% up to 48%. Furthermore, we established strategies to purify transduced cells and achieve sustainable cultures containing >85% transduced cells. Collectively, our methodologies provide a vital resource that enables the use of LCLs in functional cell and molecular biology experiments. Potential applications include the characterisation of genetic variants of unknown significance, the interrogation of cellular disease pathways and mechanisms, and high-throughput discovery of genetic modifiers of disease states, among others.

  4. Optimising import risk mitigation: anticipating the unintended consequences and competing risks of informal trade.

    PubMed

    Hueston, W; Travis, D; van Klink, E

    2011-04-01

    The effectiveness of risk mitigation may be compromised by informal trade, including illegal activities, parallel markets and extra-legal activities. While no regulatory system is 100% effective in eliminating the risk of disease transmission through animal and animal product trade, extreme risk aversion in formal import health regulations may increase informal trade, with the unintended consequence of creating additional risks outside regulatory purview. Optimal risk mitigation on a national scale requires scientifically sound yet flexible mitigation strategies that can address the competing risks of formal and informal trade. More robust risk analysis and creative engagement of nontraditional partners provide avenues for addressing informal trade.

  5. A robust algorithm for optimisation and customisation of fractal dimensions of time series modified by nonlinearly scaling their time derivatives: mathematical theory and practical applications.

    PubMed

    Fuss, Franz Konstantin

    2013-01-01

    Standard methods for computing the fractal dimensions of time series are usually tested with continuous nowhere differentiable functions, but not benchmarked with actual signals. Therefore they can produce opposite results in extreme signals. These methods also use different scaling methods, that is, different amplitude multipliers, which makes it difficult to compare fractal dimensions obtained from different methods. The purpose of this research was to develop an optimisation method that computes the fractal dimension of a normalised (dimensionless) and modified time series signal with a robust algorithm and a running average method, and that maximises the difference between two fractal dimensions, for example, a minimum and a maximum one. The signal is modified by transforming its amplitude by a multiplier, which has a non-linear effect on the signal's time derivative. The optimisation method identifies the optimal multiplier of the normalised amplitude for targeted decision making based on fractal dimensions. The optimisation method provides an additional filter effect and makes the fractal dimensions less noisy. The method is exemplified by, and explained with, different signals, such as human movement, EEG, and acoustic signals.
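
    The paper's exact estimator is not given in the abstract. The sketch below uses a crude box-counting dimension of the signal graph (which, unlike pure slope-based estimators, is sensitive to the amplitude multiplier) and grid-searches the multiplier that maximises the difference between two signals' fractal dimensions; all signals, scales, and multipliers are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def boxcount_fd(y, scales=(4, 8, 16, 32)):
    """Crude box-counting estimate of the fractal dimension of the graph of a
    signal sampled on [0, 1] (sample points only, so it is only approximate)."""
    x = np.linspace(0, 1, len(y))
    counts = []
    for s in scales:
        eps = 1.0 / s
        boxes = {(int(xi / eps), int(yi / eps)) for xi, yi in zip(x, y)}
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return slope

def best_multiplier(sig_a, sig_b, multipliers):
    """Grid-search the amplitude multiplier that maximises the difference
    between the two signals' fractal dimensions, echoing the paper's goal."""
    return max(multipliers,
               key=lambda m: abs(boxcount_fd(m * sig_a) - boxcount_fd(m * sig_b)))

t = np.linspace(0, 1, 4096)
smooth = np.sin(2 * np.pi * t)            # nearly one-dimensional graph
noisy = rng.standard_normal(4096)         # rough, higher-dimensional graph
fd_smooth, fd_noisy = boxcount_fd(smooth), boxcount_fd(noisy)
print(fd_smooth < fd_noisy)               # → True
print(best_multiplier(smooth, noisy, [0.25, 0.5, 1.0, 2.0]))
```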

  7. The robust model predictive control based on mixed H2/H∞ approach with separated performance formulations and its ISpS analysis

    NASA Astrophysics Data System (ADS)

    Li, Dewei; Li, Jiwei; Xi, Yugeng; Gao, Furong

    2017-12-01

    In practical applications, systems are influenced by parameter uncertainties and external disturbances, and both the H2 and the H∞ performance measures matter. For a constrained system, previous designs of mixed H2/H∞ robust model predictive control (RMPC) optimise one performance with the other as a constraint, so the two performances cannot be optimised at the same time. In this paper, an improved design of mixed H2/H∞ RMPC for polytopic uncertain systems with external disturbances is proposed to optimise them simultaneously. In the proposed design, the original uncertain system is decomposed into two subsystems using the additive property of linear systems, and two different Lyapunov functions are used to formulate the two performance indices separately for the two subsystems. The proposed RMPC then optimises both performances via the weighting method while satisfying the H∞ performance requirement. To make the design more practical, a simplified design is also developed. The recursive feasibility conditions of the proposed RMPC are discussed, and input-to-state practical stability (ISpS) of the closed loop is proven. Numerical examples illustrate the enlarged feasible region and the improved performance of the proposed design.

  8. Multi-objective optimisation and decision-making of space station logistics strategies

    NASA Astrophysics Data System (ADS)

    Zhu, Yue-he; Luo, Ya-zhong

    2016-10-01

    Space station logistics strategy optimisation is a complex engineering problem with multiple objectives. Finding a decision-maker-preferred compromise solution becomes more significant when solving such a problem. However, the designer-preferred solution is not easy to determine using the traditional method. Thus, a hybrid approach that combines the multi-objective evolutionary algorithm, physical programming, and differential evolution (DE) algorithm is proposed to deal with the optimisation and decision-making of space station logistics strategies. A multi-objective evolutionary algorithm is used to acquire a Pareto frontier and help determine the range parameters of the physical programming. Physical programming is employed to convert the four-objective problem into a single-objective problem, and a DE algorithm is applied to solve the resulting physical programming-based optimisation problem. Five kinds of objective preference are simulated and compared. The simulation results indicate that the proposed approach can produce good compromise solutions corresponding to different decision-makers' preferences.
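
    The specific DE variant and physical-programming objective are not detailed in the abstract; a minimal DE/rand/1/bin sketch on a stand-in scalar objective looks like this (all parameter values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, iters=200):
    """Classic DE/rand/1/bin minimising f over box constraints."""
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True      # guarantee at least one mutated gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fit[i]:                # greedy one-to-one selection
                pop[i], fit[i] = trial, f_trial
    return pop[fit.argmin()], fit.min()

# Stand-in for the scalarised logistics objective: a 4-dimensional sphere.
best_x, best_f = differential_evolution(lambda x: float(np.sum(x ** 2)), [(-5, 5)] * 4)
print(best_f)   # very close to 0
```

    In the paper's hybrid scheme, f would be the single physical-programming objective obtained from the four original objectives.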

  9. Economic impact of optimising antiretroviral treatment in human immunodeficiency virus-infected adults with suppressed viral load in Spain, by implementing the grade A-1 evidence recommendations of the 2015 GESIDA/National AIDS Plan.

    PubMed

    Ribera, Esteban; Martínez-Sesmero, José Manuel; Sánchez-Rubio, Javier; Rubio, Rafael; Pasquau, Juan; Poveda, José Luis; Pérez-Mitru, Alejandro; Roldán, Celia; Hernández-Novoa, Beatriz

    2018-03-01

    The objective of this study is to estimate the economic impact of optimising triple antiretroviral treatment (ART) in patients with undetectable viral load according to the recommendations of the GeSIDA/PNS (2015) Consensus, and its applicability in Spanish clinical practice. A pharmacoeconomic model was developed based on data from a National Hospital Prescription Survey on ART (2014) and the A-I evidence recommendations for the optimisation of ART in the GeSIDA/PNS (2015) consensus. The optimisation model took into account the willingness to optimise a particular regimen and other assumptions, and the results were validated by an expert panel in HIV infection (infectious disease specialists and hospital pharmacists). The analysis was conducted from the NHS perspective, considering the annual wholesale price and accounting for the deductions stated in RD-Law 8/2010 and the VAT. The expert panel selected six optimisation strategies and estimated that 10,863 (13.4%) of the 80,859 patients in Spain currently on triple ART would be candidates for ART optimisation, leading to savings of €15.9M/year (2.4% of total triple ART drug cost). The most feasible strategies (>40% of the candidate patients, n=4,556) would be optimisations to ATV/r+3TC therapy. These would produce savings between €653 and €4,797 per patient per year, depending on the baseline triple ART. Implementation of the main optimisation strategies recommended in the GeSIDA/PNS (2015) Consensus into Spanish clinical practice would lead to considerable savings, especially those based on dual therapy with ATV/r+3TC, thus contributing to the control of pharmaceutical expenditure and NHS sustainability. Copyright © 2016 Elsevier España, S.L.U. and Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica. All rights reserved.

  10. Pharmacokinetic studies in children: recommendations for practice and research.

    PubMed

    Barker, Charlotte I S; Standing, Joseph F; Kelly, Lauren E; Hanly Faught, Lauren; Needham, Allison C; Rieder, Michael J; de Wildt, Saskia N; Offringa, Martin

    2018-04-19

    Optimising the dosing of medicines for neonates and children remains a challenge. The importance of pharmacokinetic (PK) and pharmacodynamic (PD) research is recognised both in medicines regulation and paediatric clinical pharmacology, yet there remain barriers to undertaking high-quality PK and PD studies. While these studies are essential in understanding the dose-concentration-effect relationship and should underpin dosing recommendations, this review examines how challenges affecting the design and conduct of paediatric pharmacological studies can be overcome using targeted pharmacometric strategies. Model-based approaches confer benefits at all stages of the drug life-cycle, from identifying the first dose to be used in children, to clinical trial design, and optimising the dosing regimens of older, off-patent medications. To benefit patients, strategies to ensure that new PK, PD and trial data are incorporated into evidence-based dosing recommendations are needed. This review summarises practical strategies to address current challenges, particularly the use of model-based (pharmacometric) approaches in study design and analysis. Recommendations for practice and directions for future paediatric pharmacological research are given, based on current literature and our joint international experience. Success of PK research in children requires a robust infrastructure, with sustainable funding mechanisms at its core, supported by political and regulatory initiatives, and international collaborations. There is a unique opportunity to advance paediatric medicines research at an unprecedented pace, bringing the age of evidence-based paediatric pharmacotherapy into sight. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  11. Optimising operational amplifiers by evolutionary algorithms and gm/Id method

    NASA Astrophysics Data System (ADS)

    Tlelo-Cuautle, E.; Sanabria-Borbon, A. C.

    2016-10-01

    The evolutionary algorithm known as the non-dominated sorting genetic algorithm (NSGA-II) is applied herein to the optimisation of operational transconductance amplifiers. NSGA-II is accelerated by applying the gm/Id method to estimate reduced search spaces associated with the widths (W) and lengths (L) of the metal-oxide-semiconductor field-effect transistors (MOSFETs), and to guarantee appropriate bias conditions. In addition, we introduce an integer encoding for the W/L sizes of the MOSFETs to avoid a post-processing step of rounding their values to multiples of the integrated-circuit fabrication technology. Finally, from the feasible solutions generated by NSGA-II, we introduce a second optimisation stage to guarantee that the final feasible W/L sizes support process, voltage and temperature (PVT) variations. The optimisation results lead us to conclude that the gm/Id method and integer encoding are quite useful for accelerating the convergence of NSGA-II, while the second optimisation stage guarantees robustness of the feasible solutions to PVT variations.
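
    The core of NSGA-II referenced above is fast non-dominated sorting, which ranks candidates into successive Pareto fronts. A compact sketch (the amplifier objective values are invented for illustration):

```python
def non_dominated_sort(objs):
    """Rank solutions into Pareto fronts (all objectives minimised), as in NSGA-II."""
    n = len(objs)
    dominates = lambda p, q: (all(a <= b for a, b in zip(p, q))
                              and any(a < b for a, b in zip(p, q)))
    S = [[] for _ in range(n)]    # S[p]: solutions dominated by p
    counts = [0] * n              # counts[p]: how many solutions dominate p
    fronts = [[]]
    for p in range(n):
        for q in range(n):
            if dominates(objs[p], objs[q]):
                S[p].append(q)
            elif dominates(objs[q], objs[p]):
                counts[p] += 1
        if counts[p] == 0:
            fronts[0].append(p)
    i = 0
    while fronts[i]:
        nxt = []
        for p in fronts[i]:
            for q in S[p]:
                counts[q] -= 1
                if counts[q] == 0:
                    nxt.append(q)
        fronts.append(nxt)
        i += 1
    return fronts[:-1]

# Four hypothetical amplifier sizings scored on (power, -gain), both minimised
objs = [(1.0, -10), (2.0, -12), (1.5, -8), (3.0, -5)]
print(non_dominated_sort(objs))   # → [[0, 1], [2], [3]]
```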

  12. Design optimisation of powers-of-two FIR filter using self-organising random immigrants GA

    NASA Astrophysics Data System (ADS)

    Chandra, Abhijit; Chattopadhyay, Sudipta

    2015-01-01

    In this communication, we propose a novel design strategy for multiplier-less low-pass finite impulse response (FIR) filters with the aid of a recent evolutionary optimisation technique known as the self-organising random immigrants genetic algorithm. Individual impulse response coefficients of the proposed filter have been encoded as sums of signed powers-of-two. During the formulation of the cost function for the optimisation algorithm, both the frequency response characteristic and the hardware cost of the discrete-coefficient FIR filter have been considered. The role of the crossover probability of the optimisation technique has been evaluated with respect to the overall performance of the proposed strategy. For this purpose, the convergence characteristic of the optimisation technique has been included in the simulation results. In our analysis, two design examples with different specifications have been taken into account. In order to substantiate the efficiency of our proposed structure, a number of state-of-the-art design strategies for multiplier-less FIR filters have also been included in this article for the purpose of comparison. Critical analysis of the results unambiguously establishes the usefulness of our proposed approach for the hardware-efficient design of digital filters.
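
    A coefficient encoded as a sum of signed powers-of-two needs only shifts and adds instead of a multiplier. The greedy encoder below illustrates the representation only; the paper instead searches over such encodings with the genetic algorithm, and the term budget and exponent range are assumptions:

```python
import math

def spt_encode(value, n_terms=4, min_exp=-8):
    """Greedy approximation of value by a sum of signed powers of two,
    value ≈ sum(s_k * 2**e_k); each term costs one shift-and-add in hardware."""
    terms, residual = [], value
    for _ in range(n_terms):
        if residual == 0:
            break
        e = max(min(round(math.log2(abs(residual))), 0), min_exp)
        s = 1 if residual > 0 else -1
        terms.append((s, e))
        residual -= s * 2.0 ** e
    return terms, value - residual   # the encoding and the value it achieves

terms, approx = spt_encode(0.7265625)
print(terms)    # → [(1, 0), (-1, -2), (-1, -5), (1, -7)]
print(approx)   # → 0.7265625 (exact with four terms)
```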

  13. Modelling and strategy optimisation for a kind of networked evolutionary games with memories under the bankruptcy mechanism

    NASA Astrophysics Data System (ADS)

    Fu, Shihua; Li, Haitao; Zhao, Guodong

    2018-05-01

    This paper investigates the evolutionary dynamics and strategy optimisation of a class of networked evolutionary games whose strategy-updating rules incorporate a 'bankruptcy' mechanism, considering the situation in which a player goes bankrupt after a run of consecutively low profits from the game. First, using the semi-tensor product of matrices, the evolutionary dynamics of this class of games is expressed as a higher-order logical dynamic system and then converted into its algebraic form, on the basis of which the evolutionary dynamics of the given games can be analysed. Second, the strategy optimisation problem is investigated, and free-type control sequences are designed to maximise the total payoff of the whole game. Finally, an illustrative example shows that the new results are effective.
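
    The semi-tensor product of matrices generalises ordinary matrix multiplication to mismatched dimensions, which is what lets logical game dynamics be written in algebraic form. A minimal sketch using its standard Kronecker-product definition (the example matrices are illustrative):

```python
import numpy as np
from math import lcm

def stp(A, B):
    """Semi-tensor product A ⋉ B: with t = lcm(cols(A), rows(B)),
    A ⋉ B = (A ⊗ I_{t/cols(A)}) @ (B ⊗ I_{t/rows(B)}).
    It reduces to the ordinary product when the dimensions already match."""
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

A = np.array([[1., 2.]])           # 1x2
B = np.array([[1.], [0.]])         # 2x1 (a logical value in vector form)
print(stp(A, B))                   # → [[1.]] (same as the ordinary product here)

C = np.array([[1., 0., 2., 1.]])   # 1x4: dimensions mismatch, ordinary product fails
print(stp(C, B).shape)             # → (1, 2)
```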

  14. Disease activity-guided dose optimisation of adalimumab and etanercept is a cost-effective strategy compared with non-tapering tight control rheumatoid arthritis care: analyses of the DRESS study.

    PubMed

    Kievit, Wietske; van Herwaarden, Noortje; van den Hoogen, Frank Hj; van Vollenhoven, Ronald F; Bijlsma, Johannes Wj; van den Bemt, Bart Jf; van der Maas, Aatke; den Broeder, Alfons A

    2016-11-01

    A disease activity-guided dose optimisation strategy for adalimumab or etanercept (TNFi, tumour necrosis factor inhibitors) has been shown to be non-inferior in maintaining disease control in patients with rheumatoid arthritis (RA) compared with usual care. However, the cost-effectiveness of this strategy was still unknown. This is a preplanned cost-effectiveness analysis of the Dose REduction Strategy of Subcutaneous TNF inhibitors (DRESS) study, a randomised controlled, open-label, non-inferiority trial performed in two Dutch rheumatology outpatient clinics. Patients with low disease activity using TNF inhibitors were included. Total healthcare costs were measured, and quality-adjusted life years (QALYs) were based on EQ5D utility scores. Decremental cost-effectiveness analyses were performed using bootstrap analyses; the incremental net monetary benefit (iNMB) was used to express cost-effectiveness. 180 patients were included: 121 were allocated to the dose optimisation strategy and 59 to control. The dose optimisation strategy resulted in a mean cost saving of -€12 280 (95th percentile interval -€10 502 to -€14 104) per patient per 18 months. There is an 84% chance that the dose optimisation strategy results in a QALY loss, with a mean QALY loss of -0.02 (-0.07 to 0.02). The decremental cost-effectiveness ratio (DCER) was €390 493 (€5 085 184; dominant) of savings per QALY lost. The mean iNMB was €10 467 (€6553 to €14 037). Sensitivity analyses using 30% and 50% lower TNFi prices still showed cost-effectiveness. Disease activity-guided dose optimisation of TNFi results in considerable cost savings with no relevant loss of quality of life. When the minimal QALY loss is compensated at the upper limit of what society is willing to pay or accept in the Netherlands, the net savings are still high. NTR3216; Post-results. Published by the BMJ Publishing Group Limited. 
For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
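
    The iNMB figures above follow the standard definition iNMB = λ·ΔQALY − ΔCost, where λ is the willingness-to-accept threshold. A one-line sketch; the €80,000/QALY threshold is an assumed value for illustration only, chosen because it roughly reproduces the order of the reported mean iNMB:

```python
def inmb(delta_qaly, delta_cost, wtp):
    """Incremental net monetary benefit: positive values favour the new strategy."""
    return wtp * delta_qaly - delta_cost

# Mean deltas from the DRESS analysis; wtp = €80,000/QALY is an assumption.
print(round(inmb(delta_qaly=-0.02, delta_cost=-12280, wtp=80000)))  # → 10680
```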

  15. Is ICRP guidance on the use of reference levels consistent?

    PubMed

    Hedemann-Jensen, Per; McEwan, Andrew C

    2011-12-01

    ICRP 103, which replaced ICRP 60, states that no fundamental changes have been introduced compared with ICRP 60. This is true except that reference levels in emergency and existing exposure situations seem to be applied inconsistently, both in ICRP 103 itself and in the related publications ICRP 109 and ICRP 111. ICRP 103 emphasises that focus should be on the residual doses after the implementation of protection strategies in emergency and existing exposure situations. If possible, the result of an optimised protection strategy should bring the residual dose below the reference level. Thus the reference level represents the maximum acceptable residual dose after an optimised protection strategy has been implemented. It is not an 'off-the-shelf item' that can be set independently of the prevailing situation; it should be determined as part of the process of optimising the protection strategy, otherwise protection would be sub-optimised. However, ICRP 103 introduces some inconsistent concepts, e.g. in paragraph 279, which states: 'All exposures above or below the reference level should be subject to optimisation of protection, and particular attention should be given to exposures above the reference level'. If, in fact, all exposures above and below reference levels are subject to the process of optimisation, reference levels appear superfluous. It could also be argued that if optimisation of protection below a fixed reference level is necessary, then the reference level was set too high at the outset. Up until the last phase of the preparation of ICRP 103, the concept of a dose constraint was recommended to constrain the optimisation of protection in all types of exposure situations. In the final phase, the term 'dose constraint' was changed to 'reference level' for emergency and existing exposure situations. However, it seems that ICRP 103 did not fully recognise that dose constraints and reference levels are conceptually different. 
The use of reference levels in radiological protection is reviewed. It is concluded that the recommendations in ICRP 103 and related ICRP publications seem to be inconsistent regarding the use of reference levels in existing and emergency exposure situations.

  16. Optimisation in radiotherapy. III: Stochastic optimisation algorithms and conclusions.

    PubMed

    Ebert, M

    1997-12-01

    This is the final article in a three-part examination of optimisation in radiotherapy. The previous articles established the bases and form of the radiotherapy optimisation problem and examined certain types of optimisation algorithm, namely those which perform some form of ordered search of the solution space (mathematical programming) and those which attempt to find the closest feasible solution to the inverse planning problem (deterministic inversion). The current paper examines algorithms which search the space of possible irradiation strategies by stochastic methods. The resulting iterative search methods move about the solution space by sampling random variates, which gradually become more constricted as the algorithm converges upon the optimal solution. This paper also discusses the implementation of optimisation in radiotherapy practice.
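
    As a sketch of the stochastic-search idea on a toy beam-weight problem (not any specific clinical algorithm; the dose matrix and prescription are invented): simulated annealing samples random perturbations whose acceptance is gradually constricted by a cooling temperature.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulated_annealing(cost, x0, step=0.1, T0=1.0, cooling=0.995, iters=3000):
    """Generic stochastic search: a perturbation is always accepted if it
    improves the cost, and accepted with probability exp(-dE/T) otherwise,
    with the temperature T shrinking each iteration."""
    x, e = x0.copy(), cost(x0)
    best_x, best_e = x.copy(), e
    T = T0
    for _ in range(iters):
        cand = np.clip(x + rng.normal(0, step, x.shape), 0, None)  # weights >= 0
        ec = cost(cand)
        if ec < e or rng.random() < np.exp((e - ec) / T):
            x, e = cand, ec
            if e < best_e:
                best_x, best_e = x.copy(), e
        T *= cooling
    return best_x, best_e

# Toy inverse-planning problem (all numbers hypothetical): choose non-negative
# beam weights so the delivered dose A @ w matches the prescription.
A = rng.uniform(0.0, 1.0, (8, 3))      # dose deposited per unit beam weight
prescribed = np.full(8, 2.0)
cost = lambda w: float(np.sum((A @ w - prescribed) ** 2))
w_opt, e_opt = simulated_annealing(cost, x0=np.ones(3))
print(e_opt <= cost(np.ones(3)))       # → True (best-so-far is never worse than the start)
```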

  17. Optimisation in the Design of Environmental Sensor Networks with Robustness Consideration

    PubMed Central

    Budi, Setia; de Souza, Paulo; Timms, Greg; Malhotra, Vishv; Turner, Paul

    2015-01-01

    This work proposes the design of Environmental Sensor Networks (ESN) through balancing robustness and redundancy. An Evolutionary Algorithm (EA) is employed to find the optimal placement of sensor nodes in the Region of Interest (RoI). Data quality issues are introduced to simulate their impact on the performance of the ESN. Spatial Regression Test (SRT) is also utilised to promote robustness in data quality of the designed ESN. The proposed method provides high network representativeness (fit for purpose) with minimum sensor redundancy (cost), and ensures robustness by enabling the network to continue to achieve its objectives when some sensors fail. PMID:26633392

  18. Advanced treatment planning using direct 4D optimisation for pencil-beam scanned particle therapy

    NASA Astrophysics Data System (ADS)

    Bernatowicz, Kinga; Zhang, Ye; Perrin, Rosalind; Weber, Damien C.; Lomax, Antony J.

    2017-08-01

    We report on the development of a new four-dimensional (4D) optimisation approach for scanned proton beams, which incorporates both irregular motion patterns and the delivery dynamics of the treatment machine into the plan optimiser. Furthermore, we assess the effectiveness of this technique in reducing dose to critical structures in proximity to moving targets, while maintaining effective target dose homogeneity and coverage. The proposed approach has been tested using both a simulated phantom and a clinical liver cancer case, and allows for realistic 4D calculations and optimisation using irregular breathing patterns extracted from e.g. 4DCT-MRI (4D computed tomography-magnetic resonance imaging). 4D dose distributions resulting from our 4D optimisation can achieve almost the same quality as static plans, independent of the studied geometry/anatomy or selected motion (regular and irregular). Additionally, the current implementation of the 4D optimisation approach requires less than 3 min to find the solution for a single field planned on the 4DCT of a liver cancer patient. Although 4D optimisation allows for realistic calculations using irregular breathing patterns, it is very sensitive to variations from the planned motion. Based on a sensitivity analysis, target dose homogeneity comparable to static plans (D5-D95 < 5%) was found only for differences in amplitude of up to 1 mm, for changes in respiratory phase of <200 ms, and for changes in the breathing period of <20 ms in comparison to the motions used during optimisation. As such, methods to robustly deliver 4D optimised plans employing 4D intensity-modulated delivery are discussed.

  19. Sequential Insertion Heuristic with Adaptive Bee Colony Optimisation Algorithm for Vehicle Routing Problem with Time Windows

    PubMed Central

    Jawarneh, Sana; Abdullah, Salwani

    2015-01-01

    This paper presents a bee colony optimisation (BCO) algorithm to tackle the vehicle routing problem with time windows (VRPTW). The VRPTW involves finding an ideal set of routes for a fleet of vehicles serving a defined number of customers. The BCO algorithm is a population-based algorithm that mimics the social communication patterns of honeybees in solving problems. The performance of the BCO algorithm depends on its parameters, so an online (self-adaptive) parameter-tuning strategy is used to improve its effectiveness and robustness. Compared with the basic BCO, the adaptive BCO performs better. Diversification is crucial to the performance of a population-based algorithm, but the initial population in the BCO algorithm is generated using a greedy heuristic, which provides insufficient diversification. Therefore, the ways in which a sequential insertion heuristic (SIH) for the initial population drives the population toward improved solutions are examined. Experimental comparisons indicate that the proposed adaptive BCO-SIH algorithm works well across all instances and obtains 11 best results in comparison with the best-known results in the literature when tested on Solomon's 56 VRPTW 100-customer instances. A statistical test also shows that there is a significant difference between the results.

  20. Characterization, optimisation and process robustness of a co-processed mannitol for the development of orally disintegrating tablets.

    PubMed

    Soh, Josephine Lay Peng; Grachet, Maud; Whitlock, Mark; Lukas, Timothy

    2013-02-01

    This study fully assesses a commercially available co-processed mannitol for its usefulness as an off-the-shelf excipient for developing orally disintegrating tablets (ODTs) by direct compression on a pilot scale (up to 4 kg). The work encompassed material characterization, formulation optimisation and process robustness. Overall, this co-processed mannitol possessed favourable physical attributes, including low hygroscopicity and good compactibility. Two design-of-experiments (DoEs) were used to screen and optimise the placebo formulation. Xylitol and crospovidone concentrations were found to have the most significant impact on disintegration time (p < 0.05). Higher xylitol concentrations retarded disintegration. Avicel PH102 promoted faster disintegration than PH101 at higher levels of xylitol. Without xylitol, higher crospovidone concentrations yielded faster disintegration and reduced tablet friability. Lubrication sensitivity studies were later conducted at two fill loads and three levels each of lubricant concentration and number of blend revolutions. Even at 75% fill load, the design-space plot showed that 1.5% lubricant and 300 blend revolutions were sufficient to manufacture ODTs with ≤0.1% friability that disintegrated within 15 s. This study also describes results from a modified disintegration method based on the texture analyzer as an alternative to the USP method.
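
    The screening-DoE logic can be sketched with a two-level full factorial and main-effect estimates. The factor levels and disintegration-time responses below are invented solely to mirror the reported trends (higher xylitol slows disintegration, more crospovidone speeds it up):

```python
import itertools
import numpy as np

# Hypothetical two-level screening design (coded -1/+1) for two of the factors
# named in the study -- xylitol and crospovidone concentration -- with invented
# disintegration-time responses in seconds.
runs = list(itertools.product([-1, 1], repeat=2))
response = {(-1, -1): 18, (-1, 1): 12, (1, -1): 25, (1, 1): 16}

def main_effect(factor):
    """Average response at the factor's high level minus at its low level."""
    hi = np.mean([response[r] for r in runs if r[factor] == 1])
    lo = np.mean([response[r] for r in runs if r[factor] == -1])
    return hi - lo

print(main_effect(0))   # → 5.5  (more xylitol retards disintegration)
print(main_effect(1))   # → -7.5 (more crospovidone speeds it up)
```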

  1. FMR1 CGG repeat expansion mutation detection and linked haplotype analysis for reliable and accurate preimplantation genetic diagnosis of fragile X syndrome.

    PubMed

    Rajan-Babu, Indhu-Shree; Lian, Mulias; Cheah, Felicia S H; Chen, Min; Tan, Arnold S C; Prasath, Ethiraj B; Loh, Seong Feei; Chong, Samuel S

    2017-07-19

    Fragile X mental retardation 1 (FMR1) full-mutation expansion causes fragile X syndrome. Trans-generational fragile X syndrome transmission can be avoided by preimplantation genetic diagnosis (PGD). We describe a robust PGD strategy that can be applied to virtually any couple at risk of transmitting fragile X syndrome. This novel strategy utilises whole-genome amplification followed by triplet-primed polymerase chain reaction (TP-PCR) for robust detection of expanded FMR1 alleles, in parallel with linked multi-marker haplotype analysis of 13 highly polymorphic microsatellite markers located within 1 Mb of the FMR1 CGG repeat, and the AMELX/Y dimorphism for gender identification. The assay was optimised and validated on single lymphoblasts isolated from fragile X reference cell lines, and applied to a simulated PGD case and a clinical in vitro fertilisation (IVF)-PGD case. In the simulated PGD case, a definitive diagnosis matching the expected results was achieved for all 'embryos'. In the clinical IVF-PGD case, delivery of a healthy baby girl was achieved after transfer of an expansion-negative blastocyst. FMR1 TP-PCR reliably detects the presence of expansion mutations and obviates reliance on informative normal alleles for determining expansion status in female embryos. Together with multi-marker haplotyping and gender determination, misdiagnosis and diagnostic ambiguity due to allele dropout are minimised, and couple-specific assay customisation can be avoided.

  2. Multi-objective optimisation of wastewater treatment plant control to reduce greenhouse gas emissions.

    PubMed

    Sweetapple, Christine; Fu, Guangtao; Butler, David

    2014-05-15

    This study investigates the potential of control strategy optimisation for the reduction of operational greenhouse gas emissions from wastewater treatment in a cost-effective manner, and demonstrates that significant improvements can be realised. A multi-objective evolutionary algorithm, NSGA-II, is used to derive sets of Pareto optimal operational and control parameter values for an activated sludge wastewater treatment plant, with objectives including minimisation of greenhouse gas emissions, operational costs and effluent pollutant concentrations, subject to legislative compliance. Different problem formulations are explored, to identify the most effective approach to emissions reduction, and the sets of optimal solutions enable identification of trade-offs between conflicting objectives. It is found that multi-objective optimisation can facilitate a significant reduction in greenhouse gas emissions without the need for plant redesign or modification of the control strategy layout, but there are trade-offs to consider: most importantly, if operational costs are not to be increased, reduction of greenhouse gas emissions is likely to incur an increase in effluent ammonia and total nitrogen concentrations. Design of control strategies for a high effluent quality and low costs alone is likely to result in an inadvertent increase in greenhouse gas emissions, so it is of key importance that effects on emissions are considered in control strategy development and optimisation. Copyright © 2014 Elsevier Ltd. All rights reserved.
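
    The Pareto-optimal sets that NSGA-II derives are defined by non-domination; a minimal sketch of that core test (all objectives minimised, with made-up emissions/cost/ammonia triples, not the study's data) might look like:

```python
def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b` (all objectives minimised):
    a is no worse everywhere and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Toy trade-off triples: (greenhouse-gas emissions, operational cost, effluent ammonia)
solutions = [(5.0, 3.0, 1.0), (4.0, 4.0, 1.2), (6.0, 2.5, 0.9), (6.5, 3.5, 1.5)]
print(pareto_front(solutions))  # the last point is dominated by the first
```

The surviving set makes the trade-offs explicit: lowering one objective (e.g. emissions) forces another (e.g. effluent ammonia) upward, exactly the kind of conflict the study reports.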

  3. Nurse strategies for optimising patient participation in nursing care.

    PubMed

    Sahlsten, Monika J M; Larsson, Inga E; Sjöström, Björn; Plos, Kaety A E

    2009-09-01

    THE STUDY'S RATIONALE: Patient participation is an essential factor in nursing care and medical treatment and a legal right in many countries. Despite this, patients have experienced insufficient participation, inattention and neglect regarding their problems and may respond with dependence, passivity or taciturnity. Accordingly, nurses' strategies for optimising patient participation in nursing care are an important question for the nursing profession. The aim was to explore Registered Nurses' strategies to stimulate and optimise patient participation in nursing care. The objective was to identify ward nurses' supporting practices. A qualitative research approach was applied. Three focus groups with experienced Registered Nurses providing inpatient somatic care (n = 16) were carried out. These nurses were recruited from three hospitals in West Sweden. The data were analysed using a content analysis technique. The ethics of scientific work were adhered to. According to national Swedish legislation, no formal permit from an ethics committee was required. The participants gave informed consent after verbal and written information. Nurse strategies for optimising patient participation in nursing care were identified as three categories: 'Building close co-operation', 'Getting to know the person' and 'Reinforcing self-care capacity', together with their 10 subcategories. The strategies point to a process of emancipating the patient's potential by finding his/her own inherent knowledge, values, motivation and goals and linking these to actions. Nurses need to strive to guide the patient towards attaining meaningful experiences, discoveries, learning and development. The strategies are important and useful for balancing the asymmetry in the nurse-patient relationship in daily nursing practice, and also in quality assurance to evaluate and improve patient participation and in education. 
However, further verification of the findings is recommended by means of replication or other studies in different clinical settings. © 2009 The Authors. Journal compilation © 2009 Nordic College of Caring Science.

  4. Determination of somatropin charged variants by capillary zone electrophoresis - optimisation, verification and implementation of the European pharmacopoeia method.

    PubMed

    Storms, S M; Feltus, A; Barker, A R; Joly, M-A; Girard, M

    2009-03-01

    Measurement of somatropin charged variants by isoelectric focusing was replaced with capillary zone electrophoresis in the January 2006 European Pharmacopoeia Supplement 5.3, based on results from an interlaboratory collaborative study. Due to incompatibilities and method-robustness issues encountered prior to verification, a number of method parameters required optimisation. As the use of a diode array detector at 195 nm or 200 nm led to a loss of resolution, a variable wavelength detector using a 200 nm filter was employed. Improved injection repeatability was obtained by increasing the injection time and pressure, and changing the sample diluent from water to running buffer. Finally, definition of capillary pre-treatment and rinse procedures resulted in more consistent separations over time. Method verification data are presented demonstrating linearity, specificity, repeatability, intermediate precision, limit of quantitation, sample stability, solution stability, and robustness. Based on these experiments, several modifications to the current method have been recommended and incorporated into the European Pharmacopoeia to help improve method performance across laboratories globally.

  5. A support vector machine approach for classification of welding defects from ultrasonic signals

    NASA Astrophysics Data System (ADS)

    Chen, Yuan; Ma, Hong-Wei; Zhang, Guang-Ming

    2014-07-01

    Defect classification is an important issue in ultrasonic non-destructive evaluation. A layered multi-class support vector machine (LMSVM) classification system, which combines multiple SVM classifiers through a layered architecture, is proposed in this paper. The proposed LMSVM classification system is applied to the classification of welding defects from ultrasonic test signals. The measured ultrasonic defect echo signals are first decomposed into wavelet coefficients by the wavelet packet transform. The energy of the wavelet coefficients at different frequency channels is used to construct the feature vectors. The bees algorithm (BA) is then used for feature selection and SVM parameter optimisation for the LMSVM classification system. The BA-based feature selection optimises the energy feature vectors. The optimised feature vectors are input to the LMSVM classification system for training and testing. Experimental results of classifying welding defects demonstrate that the proposed technique is highly robust, precise and reliable for ultrasonic defect classification.
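
    The energy-per-frequency-band feature construction can be illustrated with a hand-rolled Haar decomposition. The paper's actual wavelet packet basis and decomposition depth are not stated here, so this is only a schematic sketch of the idea:

```python
def haar_step(signal):
    """One level of an (unnormalised) Haar wavelet transform:
    pairwise averages (approximation) and pairwise differences (detail)."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def band_energies(signal, levels=3):
    """Energy of the coefficients in each frequency band, as a feature vector:
    one detail-band energy per level plus the final approximation energy."""
    feats = []
    approx = list(signal)
    for _ in range(levels):
        approx, detail = haar_step(approx)
        feats.append(sum(c * c for c in detail))
    feats.append(sum(c * c for c in approx))
    return feats

# A toy "echo signal" (signal length must be divisible by 2**levels).
echo = [0.0, 0.2, 0.9, -0.7, 0.1, 0.0, -0.1, 0.05]
print(band_energies(echo, levels=2))
```

In the paper's pipeline, vectors like these would then be pruned by the bees algorithm before being fed to the SVM layers.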

  6. Smart strategies for doctors and doctors-in-training: heuristics in medicine.

    PubMed

    Wegwarth, Odette; Gaissmaier, Wolfgang; Gigerenzer, Gerd

    2009-08-01

    How do doctors make sound decisions when confronted with probabilistic data, time pressures and a heavy workload? One theory that has been embraced by many researchers is based on optimisation, which emphasises the need to integrate all information in order to arrive at sound decisions. This notion makes heuristics, which use less than complete information, appear as second-best strategies. In this article, we challenge this pessimistic view of heuristics. We introduce two medical problems that involve decision making to the reader: one concerns coronary care issues and the other macrolide prescriptions. In both settings, decision-making tools grounded in the principles of optimisation and heuristics, respectively, have been developed to assist doctors in making decisions. We explain the structure of each of these tools and compare their performance in terms of their facilitation of correct predictions. For decisions concerning both the coronary care unit and the prescribing of macrolides, we demonstrate that sacrificing information does not necessarily imply a forfeiting of predictive accuracy, but can sometimes even lead to better decisions. Subsequently, we discuss common misconceptions about heuristics and explain when and why ignoring parts of the available information can lead to the making of more robust predictions. Heuristics are neither good nor bad per se, but, if applied in situations to which they have been adapted, can be helpful companions for doctors and doctors-in-training. This, however, requires that heuristics in medicine be openly discussed, criticised, refined and then taught to doctors-in-training rather than being simply dismissed as harmful or irrelevant. A more uniform use of explicit and accepted heuristics has the potential to reduce variations in diagnoses and to improve medical care for patients.
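
    A classic example of such a heuristic is a fast-and-frugal decision tree, in which each cue is checked in order and can trigger a decision on its own. The sketch below is modelled loosely on published coronary care triage trees; the cue names are hypothetical and it is not the exact tool discussed in the article:

```python
def triage(st_change, chest_pain_chief, any_other_factor):
    """Fast-and-frugal tree: each cue either decides immediately or
    passes the case down to the next cue. No weighting or integration."""
    if st_change:                 # cue 1: ST-segment change -> decide at once
        return "coronary care unit"
    if not chest_pain_chief:      # cue 2: chest pain is not the chief complaint
        return "regular nursing bed"
    if any_other_factor:          # cue 3: any additional risk factor present
        return "coronary care unit"
    return "regular nursing bed"

print(triage(st_change=False, chest_pain_chief=True, any_other_factor=True))
```

Note how the tree ignores most of the available information for most patients: a single positive first cue settles the case, which is precisely the "less can be more" property the article defends.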

  7. Engaging Homeless Individuals in Discussion about Their Food Experiences to Optimise Wellbeing: A Pilot Study

    ERIC Educational Resources Information Center

    Pettinger, Clare; Parsons, Julie M.; Cunningham, Miranda; Withers, Lyndsey; D'Aprano, Gia; Letherby, Gayle; Sutton, Carole; Whiteford, Andrew; Ayres, Richard

    2017-01-01

    Objective: High levels of social and economic deprivation are apparent in many UK cities, where there is evidence of certain "marginalised" communities suffering disproportionately from poor nutrition, threatening health. Finding ways to engage with these communities is essential to identify strategies to optimise wellbeing and life…

  8. Escalated convergent artificial bee colony

    NASA Astrophysics Data System (ADS)

    Jadon, Shimpi Singh; Bansal, Jagdish Chand; Tiwari, Ritu

    2016-03-01

    Artificial bee colony (ABC) optimisation algorithm is a recent, fast and easy-to-implement population-based metaheuristic for optimisation. ABC has proved to be competitive with some popular swarm intelligence-based algorithms such as particle swarm optimisation, firefly algorithm and ant colony optimisation. The solution search equation of ABC is influenced by a random quantity which helps its search process in exploration at the cost of exploitation. In order to obtain fast convergent behaviour of ABC while maintaining exploitation capability, in this paper the basic ABC is modified in two ways. First, to improve exploitation capability, two local search strategies, namely classical unidimensional local search and Lévy flight random walk-based local search, are incorporated into ABC. Furthermore, a new solution search strategy, namely stochastic diffusion scout search, is proposed and incorporated into the scout bee phase to give abandoned solutions more chance to improve. Efficiency of the proposed algorithm is tested on 20 benchmark test functions of different complexities and characteristics. Results are very promising and show it to be a competitive algorithm in the field of swarm intelligence-based algorithms.
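
    The ABC solution search equation referred to above, with its uniform random factor, can be sketched in its generic textbook form (this is the baseline update, not the paper's modified variants):

```python
import random

def abc_candidate(x, neighbour, j, bounds):
    """ABC solution search equation: v_j = x_j + phi * (x_j - neighbour_j),
    with phi drawn uniformly from [-1, 1]. Only dimension j is perturbed;
    the random factor phi is what drives exploration."""
    v = list(x)
    phi = random.uniform(-1.0, 1.0)
    v[j] = x[j] + phi * (x[j] - neighbour[j])
    lo, hi = bounds
    v[j] = max(lo, min(hi, v[j]))  # clamp to the search box
    return v

def greedy_select(x, v, f):
    """Keep the candidate only if it improves the objective (minimisation)."""
    return v if f(v) < f(x) else x
```

A local search phase, as added in this paper, would simply apply further (smaller or directed) perturbations to the current best solutions before the next cycle.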

  9. Power law-based local search in spider monkey optimisation for lower order system modelling

    NASA Astrophysics Data System (ADS)

    Sharma, Ajay; Sharma, Harish; Bhargava, Annapurna; Sharma, Nirmala

    2017-01-01

    The nature-inspired algorithms (NIAs) have shown efficiency in solving many complex real-world optimisation problems. The efficiency of NIAs is measured by their ability to find adequate results within a reasonable amount of time, rather than an ability to guarantee the optimal solution. This paper presents a solution for lower order system modelling using the spider monkey optimisation (SMO) algorithm to obtain a better approximation for lower order systems that preserves almost all of the original higher order system's characteristics. Further, a local search strategy, namely power law-based local search, is incorporated with SMO. The proposed strategy is named power law-based local search in SMO (PLSMO). The efficiency, accuracy and reliability of the proposed algorithm are tested over 20 well-known benchmark functions. Then, the PLSMO algorithm is applied to solve the lower order system modelling problem.

  10. Evaluation and optimisation of current milrinone prescribing for the treatment and prevention of low cardiac output syndrome in paediatric patients after open heart surgery using a physiology-based pharmacokinetic drug-disease model.

    PubMed

    Vogt, Winnie

    2014-01-01

    Milrinone is the drug of choice for the treatment and prevention of low cardiac output syndrome (LCOS) in paediatric patients after open heart surgery across Europe. Discrepancies among prescribing guidance, clinical studies and practice patterns, however, require clarification to ensure safe and effective prescribing. The clearance prediction equations derived from classical pharmacokinetic modelling provide limited support, as they have recently failed a clinical practice evaluation. Therefore, the objective of this study was to evaluate current milrinone dosing using physiology-based pharmacokinetic (PBPK) modelling and simulation to complement the existing pharmacokinetic knowledge and propose optimised dosing regimens as a basis for improving the standard of care for paediatric patients. A PBPK drug-disease model using a population approach was developed in three steps, from healthy young adults to adult patients and then paediatric patients with and without LCOS after open heart surgery. Pre- and postoperative organ function values from adult and paediatric patients were collected from the literature and integrated into a disease model as factorial changes from the reference values in healthy adults aged 20-40 years. The disease model was combined with the PBPK drug model and evaluated against existing pharmacokinetic data. Model robustness was assessed by parametric sensitivity analysis. In the next step, virtual patient populations were created, each with 1,000 subjects reflecting the average adult and paediatric patient characteristics with regard to age, sex, bodyweight and height. They were integrated into the PBPK drug-disease model to evaluate the effectiveness of current milrinone dosing in achieving the therapeutic target range of 100-300 ng/mL milrinone in plasma. Optimised dosing regimens were subsequently developed. 
The pharmacokinetics of milrinone in healthy young adults as well as adult and paediatric patients were accurately described with an average fold error of 1.1 ± 0.1 (mean ± standard deviation) and mean relative deviation of 1.5 ± 0.3 as measures of bias and precision, respectively. Normalised maximum sensitivity coefficients for model input parameters ranged from -0.84 to 0.71, which indicated model robustness. The evaluation of milrinone dosing across different paediatric age groups showed a non-linear age dependence of total plasma clearance and exposure differences of a factor 1.4 between patients with and without LCOS for a fixed dosing regimen. None of the currently used dosing regimens for milrinone achieved the therapeutic target range across all paediatric age groups and adult patients, so optimised dosing regimens were developed that considered the age-dependent and pathophysiological differences. The PBPK drug-disease model for milrinone in paediatric patients with and without LCOS after open heart surgery highlights that age, disease and surgery differently impact the pharmacokinetics of milrinone, and that current milrinone dosing for LCOS is suboptimal to maintain the therapeutic target range across the entire paediatric age range. Thus, optimised dosing strategies are proposed to ensure safe and effective prescribing.
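
    To illustrate how an infusion regimen maps onto the 100-300 ng/mL target window, a one-compartment constant-infusion calculation is sketched below. The rate, clearance and volume values are hypothetical placeholders for illustration only, not the study's PBPK parameters:

```python
import math

def conc_infusion(t_h, rate_ug_kg_min, cl_l_h_kg, v_l_kg):
    """Plasma concentration (ng/mL) during a constant-rate infusion in a
    one-compartment model: C(t) = (R / CL) * (1 - exp(-(CL / V) * t))."""
    r_ug_h_kg = rate_ug_kg_min * 60.0          # infusion rate in ug/kg/h
    css = r_ug_h_kg / cl_l_h_kg                # steady state; ug/L == ng/mL
    k = cl_l_h_kg / v_l_kg                     # elimination rate constant, 1/h
    return css * (1.0 - math.exp(-k * t_h))

# Hypothetical adult-like parameters (rate 0.5 ug/kg/min, CL 0.11 L/h/kg, V 0.45 L/kg).
css = conc_infusion(24.0, rate_ug_kg_min=0.5, cl_l_h_kg=0.11, v_l_kg=0.45)
print(100.0 <= css <= 300.0)
```

The same relationship shows why clearance differences between age groups and LCOS states shift steady-state exposure for a fixed regimen, which is the core finding above.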

  11. Effectiveness of an implementation optimisation intervention aimed at increasing parent engagement in HENRY, a childhood obesity prevention programme - the Optimising Family Engagement in HENRY (OFTEN) trial: study protocol for a randomised controlled trial.

    PubMed

    Bryant, Maria; Burton, Wendy; Cundill, Bonnie; Farrin, Amanda J; Nixon, Jane; Stevens, June; Roberts, Kim; Foy, Robbie; Rutter, Harry; Hartley, Suzanne; Tubeuf, Sandy; Collinson, Michelle; Brown, Julia

    2017-01-24

    Family-based interventions to prevent childhood obesity depend upon parents' taking action to improve diet and other lifestyle behaviours in their families. Programmes that attract and retain high numbers of parents provide an enhanced opportunity to improve public health and are also likely to be more cost-effective than those that do not. We have developed a theory-informed optimisation intervention to promote parent engagement within an existing childhood obesity prevention group programme, HENRY (Health Exercise Nutrition for the Really Young). Here, we describe a proposal to evaluate the effectiveness of this optimisation intervention in regard to the engagement of parents and cost-effectiveness. The Optimising Family Engagement in HENRY (OFTEN) trial is a cluster randomised controlled trial being conducted across 24 local authorities (approximately 144 children's centres) which currently deliver HENRY programmes. The primary outcome will be parental enrolment and attendance at the HENRY programme, assessed using routinely collected process data. Cost-effectiveness will be presented in terms of primary outcomes using acceptability curves and through eliciting the willingness to pay for the optimisation from HENRY commissioners. Secondary outcomes include the longitudinal impact of the optimisation, parent-reported infant intake of fruits and vegetables (as a proxy to compliance) and other parent-reported family habits and lifestyle. This innovative trial will provide evidence on the implementation of a theory-informed optimisation intervention to promote parent engagement in HENRY, a community-based childhood obesity prevention programme. The findings will be generalisable to other interventions delivered to parents in other community-based environments. This research meets the expressed needs of commissioners, children's centres and parents to optimise the potential impact that HENRY has on obesity prevention. 
A subsequent cluster randomised controlled pilot trial is planned to determine the practicality of undertaking a definitive trial to robustly evaluate the effectiveness and cost-effectiveness of the optimised intervention on childhood obesity prevention. ClinicalTrials.gov identifier: NCT02675699 . Registered on 4 February 2016.

  12. SLA-based optimisation of virtualised resource for multi-tier web applications in cloud data centres

    NASA Astrophysics Data System (ADS)

    Bi, Jing; Yuan, Haitao; Tie, Ming; Tan, Wei

    2015-10-01

    Dynamic virtualised resource allocation is the key to quality-of-service assurance for multi-tier web application services in cloud data centres. In this paper, we develop a self-management architecture of cloud data centres with a virtualisation mechanism for multi-tier web application services. Based on this architecture, we establish a flexible hybrid queueing model to determine the number of virtual machines for each tier of the virtualised application service environment. In addition, we propose a non-linear constrained optimisation problem with restrictions defined in the service level agreement. Furthermore, we develop a heuristic mixed optimisation algorithm to maximise the profit of cloud infrastructure providers while meeting performance requirements from different clients. Finally, we compare the effectiveness of our dynamic allocation strategy with two other allocation strategies. The simulation results show that the proposed resource allocation method is efficient in improving overall performance and reducing resource energy cost.
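
    One classical way to size the number of virtual machines in a tier is an M/M/m queueing calculation with the Erlang C formula. This is a generic sketch of that idea, not the paper's hybrid queueing model:

```python
from math import factorial

def erlang_c(m, a):
    """Probability that an arrival must queue in an M/M/m system,
    where a = lambda / mu is the offered load (requires a < m)."""
    top = a**m / factorial(m) * (m / (m - a))
    bottom = sum(a**k / factorial(k) for k in range(m)) + top
    return top / bottom

def mean_response_time(lam, mu, m):
    """Mean response time W of an M/M/m queue (requires m * mu > lam)."""
    a = lam / mu
    return erlang_c(m, a) / (m * mu - lam) + 1.0 / mu

def vms_needed(lam, mu, target_w):
    """Smallest number of VMs (servers) meeting a mean response time target."""
    m = int(lam / mu) + 1          # minimum stable number of servers
    while mean_response_time(lam, mu, m) > target_w:
        m += 1
    return m

# 8 requests/s, each VM serves 1 request/s, SLA target: 1.5 s mean response time.
print(vms_needed(8.0, 1.0, 1.5))
```

An SLA-driven optimiser would then trade the cost of these extra VMs against the revenue and penalty terms in the service level agreement.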

  13. Biomass supply chain optimisation for Organosolv-based biorefineries.

    PubMed

    Giarola, Sara; Patel, Mayank; Shah, Nilay

    2014-05-01

    This work aims at providing a Mixed Integer Linear Programming modelling framework to help define planning strategies for the development of sustainable biorefineries. The up-scaling of an Organosolv biorefinery was addressed via optimisation of the whole system economics. Three real world case studies were addressed to show the high-level flexibility and wide applicability of the tool to model different biomass typologies (i.e. forest fellings, cereal residues and energy crops) and supply strategies. Model outcomes have revealed how supply chain optimisation techniques could help shed light on the development of sustainable biorefineries. Feedstock quality, quantity, temporal and geographical availability are crucial to determine biorefinery location and the cost-efficient way to supply the feedstock to the plant. Storage costs are relevant for biorefineries based on cereal stubble, while wood supply chains present dominant pretreatment operations costs. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Honeybee economics: optimisation of foraging in a variable world.

    PubMed

    Stabentheiner, Anton; Kovac, Helmut

    2016-06-20

    In honeybees, fast and efficient exploitation of nectar and pollen sources is achieved by persistent endothermy throughout the foraging cycle, which entails extremely high energy costs. The need for food promotes maximisation of the intake rate, while the high costs call for energetic optimisation. Experiments on how honeybees resolve this conflict have to consider that foraging takes place in a variable environment with respect to microclimate and food quality and availability. Here we report, from simultaneous measurements of energy costs, gains, intake rate and efficiency, how honeybee foragers manage this challenge in their highly variable environment. If possible, during unlimited sucrose flow, they follow an 'investment-guided' ('time is honey') economic strategy promising increased returns. They maximise the net intake rate by investing both their own heat production and solar heat to raise body temperature to a level which guarantees a high suction velocity. They switch to an 'economising' ('save the honey') optimisation of energetic efficiency if the intake rate is restricted by the food source, when an increased body temperature would not guarantee a high intake rate. With this flexible and graded change between economic strategies, honeybees can both maximise colony intake rate and optimise foraging efficiency in reaction to environmental variation.
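
    The two criteria, net intake rate ('time is honey') and energetic efficiency ('save the honey'), can be contrasted with toy numbers (illustrative values only, not measurements from the study):

```python
def net_rate(gain_j, cost_j, time_s):
    """Net energy intake rate in J/s: the 'time is honey' criterion."""
    return (gain_j - cost_j) / time_s

def efficiency(gain_j, cost_j):
    """Energetic efficiency (J gained per J spent): the 'save the honey' criterion."""
    return gain_j / cost_j

# Illustrative foragers: a hot, fast one versus a cooler, thriftier one.
fast    = dict(gain_j=60.0, cost_j=20.0, time_s=100.0)  # high body temperature
thrifty = dict(gain_j=55.0, cost_j=10.0, time_s=140.0)  # lower heating costs

print(net_rate(**fast),    efficiency(fast["gain_j"], fast["cost_j"]))
print(net_rate(**thrifty), efficiency(thrifty["gain_j"], thrifty["cost_j"]))
```

With these numbers the hot forager wins on net rate while the thrifty one wins on efficiency, so which strategy is "optimal" flips with the criterion, mirroring the graded switch the abstract describes.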

  15. A hybrid neural learning algorithm using evolutionary learning and derivative free local search method.

    PubMed

    Ghosh, Ranadhir; Yearwood, John; Ghosh, Moumita; Bagirov, Adil

    2006-06-01

    In this paper we investigate a hybrid model based on the Discrete Gradient method and an evolutionary strategy for determining the weights in a feed forward artificial neural network, and we discuss different variants of such hybrid models. The Discrete Gradient method has the advantage of being able to jump over many local minima and find very deep local minima. However, earlier research has shown that a good starting point for the Discrete Gradient method can improve the quality of the solution point. Evolutionary algorithms are well suited to global optimisation problems; nevertheless, they suffer from long training times and are often unsuitable for real-world applications. For optimisation problems such as weight optimisation for ANNs in real-world applications, the dimensions are large and time complexity is critical. Hence a hybrid model can be a suitable option. In this paper we propose different fusion strategies for hybrid models combining the evolutionary strategy with the Discrete Gradient method to obtain an optimal solution much more quickly. Three fusion strategies are discussed: a linear hybrid model, an iterative hybrid model and a restricted local search hybrid model. Comparative results on a range of standard datasets are provided for the different hybrid models.

  16. Automated Sperm Head Detection Using Intersecting Cortical Model Optimised by Particle Swarm Optimization.

    PubMed

    Tan, Weng Chun; Mat Isa, Nor Ashidi

    2016-01-01

    In human sperm motility analysis, sperm segmentation plays an important role to determine the location of multiple sperms. To ensure an improved segmentation result, the Laplacian of Gaussian filter is implemented as a kernel in a pre-processing step before applying the image segmentation process to automatically segment and detect human spermatozoa. This study proposes an intersecting cortical model (ICM), which was derived from several visual cortex models, to segment the sperm head region. However, the proposed method suffered from parameter selection; thus, the ICM network is optimised using particle swarm optimization where feature mutual information is introduced as the new fitness function. The final results showed that the proposed method is more accurate and robust than four state-of-the-art segmentation methods. The proposed method resulted in rates of 98.14%, 98.82%, 86.46% and 99.81% in accuracy, sensitivity, specificity and precision, respectively, after testing with 1200 sperms. The proposed algorithm is expected to be implemented in analysing sperm motility because of the robustness and capability of this algorithm.
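
    A minimal particle swarm optimiser of the kind used to tune the ICM parameters might be sketched as follows. This is a generic global-best PSO on a toy objective, not the paper's mutual-information fitness function:

```python
import random

def pso(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO minimising f over box bounds [(lo, hi), ...]."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                # personal best positions
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

best, best_f = pso(lambda p: sum(x * x for x in p), [(-5, 5)] * 2)
print(best_f)  # close to 0 for the sphere function
```

In the paper's setting, each particle position would encode a vector of ICM parameters and f would score the resulting segmentation via feature mutual information.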

  17. Assessment of energy-saving strategies and operational costs in full-scale membrane bioreactors.

    PubMed

    Gabarrón, S; Ferrero, G; Dalmau, M; Comas, J; Rodriguez-Roda, I

    2014-02-15

    The energy-saving strategies and operational costs of stand-alone, hybrid, and dual stream full-scale membrane bioreactors (MBRs) with capacities ranging from 1100 to 35,000 m(3) day(-1) have been assessed for seven municipal facilities located in Northeast Spain. Although hydraulic load was found to be the main determinant of energy consumption rates, several optimisation strategies proved effective for energy reduction as well as for minimising or preventing fouling. Specifically, modifications of the biological process (installation of control systems for biological aeration) and of the filtration process (reduction of the flux or mixed liquor suspended solids concentration and installation of control systems for membrane air scouring) were applied in two stand-alone MBRs. After implementing these strategies, the yearly specific energy demand (SED) in flat-sheet (FS) and hollow-fibre (HF) stand-alone MBRs was reduced from 1.12 to 0.71 and from 1.54 to 1.12 kWh m(-3), respectively, despite their similar yearly averaged hydraulic loads. The strategies applied in the hybrid MBR, namely buffering the influent flow and optimising both biological aeration and membrane air scouring, reduced the SED values by 14%. These results illustrate that energy-saving strategies can significantly reduce MBR operational costs, highlighting the need to optimise MBR facilities so that they can be reconsidered as an energy-competitive option. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Computational aero-acoustics for fan duct propagation and radiation. Current status and application to turbofan liner optimisation

    NASA Astrophysics Data System (ADS)

    Astley, R. J.; Sugimoto, R.; Mustafi, P.

    2011-08-01

    Novel techniques are presented to reduce noise from turbofan aircraft engines by optimising the acoustic treatment in engine ducts. The application of Computational Aero-Acoustics (CAA) to predict acoustic propagation and absorption in turbofan ducts is reviewed, and a critical assessment of performance indicates that validated and accurate techniques are now available for realistic engine predictions. A procedure for integrating CAA methods with state-of-the-art optimisation techniques is proposed in the remainder of the article. This is achieved by embedding advanced computational methods for noise prediction within automated and semi-automated optimisation schemes. Two different strategies are described and applied to realistic nacelle geometries and fan sources to demonstrate the feasibility of this approach for industry-scale problems.

  19. Vehicle trajectory linearisation to enable efficient optimisation of the constant speed racing line

    NASA Astrophysics Data System (ADS)

    Timings, Julian P.; Cole, David J.

    2012-06-01

    A driver model is presented capable of optimising the trajectory of a simple dynamic nonlinear vehicle, at constant forward speed, so that progression along a predefined track is maximised as a function of time. In doing so, the model is able to continually operate a vehicle at its lateral-handling limit, maximising vehicle performance. The technique used forms part of the solution to the motor racing objective of minimising lap time. A new approach to formulating the minimum lap time problem is motivated by the need for a more computationally efficient and robust tool-set for understanding on-the-limit driving behaviour. This has been achieved through set point-dependent linearisation of the vehicle model and coupling of the vehicle-track system using an intrinsic coordinate description. Through this, the geometric vehicle trajectory has been linearised relative to the track reference, leading to a new path optimisation algorithm that can be formulated as a computationally efficient convex quadratic programming problem.

  20. Fault-tolerant optimised tracking control for unknown discrete-time linear systems using a combined reinforcement learning and residual compensation methodology

    NASA Astrophysics Data System (ADS)

    Han, Ke-Zhen; Feng, Jian; Cui, Xiaohong

    2017-10-01

    This paper considers the fault-tolerant optimised tracking control (FTOTC) problem for an unknown discrete-time linear system. A research scheme is proposed on the basis of data-based parity space identification, reinforcement learning and residual compensation techniques. The main characteristic of this scheme lies in simultaneous tracking control and residual compensation based on parity space identification. The technical approach consists of four main components: a subspace-aided method to design an observer-based residual generator; a reinforcement Q-learning approach to solve the optimised tracking control policy; robust H∞ theory to achieve noise attenuation; and fault estimation triggered by the residual generator to perform fault compensation. To clarify the design and implementation procedures, an integrated algorithm is further constructed to link these four functional units. Detailed analysis and proofs are subsequently given to explain the guaranteed FTOTC performance of the proposed scheme. Finally, a case simulation is provided to verify its effectiveness.

  1. Torque coordinating robust control of shifting process for dry dual clutch transmission equipped in a hybrid car

    NASA Astrophysics Data System (ADS)

    Zhao, Z.-G.; Chen, H.-J.; Yang, Y.-Y.; He, L.

    2015-09-01

    For a hybrid car equipped with dual clutch transmission (DCT), the coordination control problems of clutches and power sources are investigated while taking full advantage of the integrated starter generator motor's fast response speed and high accuracy (speed and torque). First, a dynamic model of the shifting process is established, the vehicle acceleration is quantified according to the intentions of the driver, and the torque transmitted by clutches is calculated based on the designed disengaging principle during the torque phase. Next, a robust H∞ controller is designed to ensure speed synchronisation despite the existence of model uncertainties, measurement noise, and engine torque lag. The engine torque lag and measurement noise are used as external disturbances to initially modify the output torque of the power source. Additionally, during the torque switch phase, the torque of the power sources is smoothly transitioned to the driver's demanded torque. Finally, the torque of the power sources is further distributed based on the optimisation of system efficiency, and the throttle opening of the engine is constrained to avoid sharp torque variations. The simulation results verify that the proposed control strategies effectively address the problem of coordinating control of clutches and power sources, establishing a foundation for the application of DCT in hybrid cars.

  2. Lévy flight artificial bee colony algorithm

    NASA Astrophysics Data System (ADS)

    Sharma, Harish; Bansal, Jagdish Chand; Arya, K. V.; Yang, Xin-She

    2016-08-01

    Artificial bee colony (ABC) is a relatively simple and recent population-based probabilistic algorithm for global optimisation. The solution search equation of ABC is significantly influenced by a random quantity, which helps exploration at the cost of exploitation of the search space. Due to its large step sizes, ABC has a high chance of skipping the true solution. In order to balance diversity and convergence in ABC, a Lévy flight inspired search strategy is proposed and integrated with ABC. The proposed strategy, named Lévy Flight ABC (LFABC), provides both local and global search capability simultaneously, achieved by tuning the Lévy flight parameters and thus automatically tuning the step sizes. In LFABC, new solutions are generated around the best solution, which helps to enhance the exploitation capability of ABC. Furthermore, to improve the exploration capability, the number of scout bees is increased. Experiments on 20 test problems of different complexities and five real-world engineering optimisation problems show that the proposed strategy outperforms the basic ABC and recent ABC variants, namely Gbest-guided ABC, best-so-far ABC and modified ABC, in most of the experiments.
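The variable step sizes behind such a Lévy-flight search strategy are commonly generated with Mantegna's algorithm. The sketch below is a minimal, generic illustration of that idea; the function names and the `scale` parameter are illustrative placeholders, not details taken from the LFABC paper.

```python
import math
import random

def levy_step(beta=1.5):
    """Draw one Lévy-flight step using Mantegna's algorithm.

    beta in (1, 2] controls tail heaviness: smaller beta yields
    occasional long jumps (exploration), values near 2 approach
    a Gaussian walk (exploitation).
    """
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta
                  * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma_u)   # numerator ~ N(0, sigma_u^2)
    v = random.gauss(0, 1)         # denominator ~ N(0, 1)
    return u / abs(v) ** (1 / beta)

def levy_candidate(best, current, beta=1.5, scale=0.01):
    """Generate a candidate solution around the best-so-far solution,
    in the spirit of the abstract's description (names illustrative)."""
    return [b + scale * levy_step(beta) * (b - c)
            for b, c in zip(best, current)]
```

Tuning `beta` directly trades off long exploratory jumps against small exploitative moves, which is the balance the abstract refers to.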

  3. Ant colony optimisation inversion of surface and borehole magnetic data under lithological constraints

    NASA Astrophysics Data System (ADS)

    Liu, Shuang; Hu, Xiangyun; Liu, Tianyou; Xi, Yufei; Cai, Jianchao; Zhang, Henglei

    2015-01-01

    The ant colony optimisation algorithm has successfully been used to invert surface magnetic data. However, the resolution of the recovered physical-property distributions for deeply buried magnetic sources is generally not very high because of geophysical ambiguities. We use three approaches to deal with this problem. First, the observed surface magnetic data are taken together with the three-component borehole magnetic anomalies to recover the distributions of the physical properties. This cooperative inversion strategy improves the resolution of the inversion results in the vertical direction. Additionally, as the ant colony tours the discrete nodes, we force it to visit the nodes with physical properties that agree with the drilled lithologies. These lithological constraints reduce the non-uniqueness of the inversion problem. Finally, we also implement a K-means cluster analysis of the magnetic cell distributions after each iteration, in order to separate the distributions of magnetisation intensity rather than concentrating them in a single area. We tested our method using synthetic data and found that all tests returned favourable results. In the case study of the Mengku iron-ore deposit in northwest China, the recovered distributions of magnetisation are in good agreement with the locations and shapes of the magnetite orebodies as inferred from drillholes. Uncertainty analysis shows that the ant colony algorithm is robust in the presence of noise and that the proposed approaches significantly improve the quality of the inversion results.
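The per-iteration clustering step mentioned in the abstract can be illustrated with a plain one-dimensional K-means (Lloyd's algorithm) over scalar cell values such as recovered magnetisation intensities. This is a generic sketch under that assumption, not the authors' implementation.

```python
def kmeans_1d(values, k=2, iters=50):
    """Lloyd's algorithm on scalar cell values (e.g. magnetisation
    intensities): an illustrative stand-in for the per-iteration
    cluster-analysis step described in the abstract."""
    # Initialise centroids evenly across the observed value range.
    lo, hi = min(values), max(values)
    centroids = [lo + (hi - lo) * (i + 0.5) / k for i in range(k)]
    labels = [0] * len(values)
    for _ in range(iters):
        # Assignment step: each value joins its nearest centroid.
        labels = [min(range(k), key=lambda j: (v - centroids[j]) ** 2)
                  for v in values]
        # Update step: centroid becomes the mean of its members
        # (an empty cluster keeps its previous centroid).
        for j in range(k):
            members = [v for v, lab in zip(values, labels) if lab == j]
            if members:
                centroids[j] = sum(members) / len(members)
    return labels, centroids
```

Separating low- and high-intensity cells into distinct clusters in this way prevents the recovered magnetisation from collapsing into a single diffuse region.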

  4. Mammalian cell culture monitoring using in situ spectroscopy: Is your method really optimised?

    PubMed

    André, Silvère; Lagresle, Sylvain; Hannas, Zahia; Calvosa, Éric; Duponchel, Ludovic

    2017-03-01

    In recent years, as a result of the process analytical technology initiative of the US Food and Drug Administration, many different works have been carried out on direct and in situ monitoring of critical parameters for mammalian cell cultures by Raman spectroscopy and multivariate regression techniques. However, despite interesting results, it cannot be said that the proposed monitoring strategies, which will reduce errors of the regression models and thus confidence limits of the predictions, are really optimized. Hence, the aim of this article is to optimize some critical steps of spectroscopic acquisition and data treatment in order to reach a higher level of accuracy and robustness of bioprocess monitoring. In this way, we propose first an original strategy to assess the most suited Raman acquisition time for the processes involved. In a second part, we demonstrate the importance of the interbatch variability on the accuracy of the predictive models with a particular focus on the optical probes adjustment. Finally, we propose a methodology for the optimization of the spectral variables selection in order to decrease prediction errors of multivariate regressions. © 2017 American Institute of Chemical Engineers Biotechnol. Prog., 33:308-316, 2017. © 2017 American Institute of Chemical Engineers.

  5. Single tube genotyping of sickle cell anaemia using PCR-based SNP analysis

    PubMed Central

    Waterfall, Christy M.; Cobb, Benjamin D.

    2001-01-01

    Allele-specific amplification (ASA) is a generally applicable technique for the detection of known single nucleotide polymorphisms (SNPs), deletions, insertions and other sequence variations. Conventionally, two reactions are required to determine the zygosity of DNA in a two-allele system, along with significant upstream optimisation to define the specific test conditions. Here, we combine single tube bi-directional ASA with a ‘matrix-based’ optimisation strategy, speeding up the whole process in a reduced reaction set. We use sickle cell anaemia as our model SNP system, a genetic disease that is currently screened using ASA methods. Discriminatory conditions were rapidly optimised enabling the unambiguous identification of DNA from homozygous sickle cell patients (HbS/S), heterozygous carriers (HbA/S) or normal DNA in a single tube. Simple downstream mathematical analyses based on product yield across the optimisation set allow an insight into the important aspects of priming competition and component interactions in this competitive PCR. This strategy can be applied to any polymorphism, defining specific conditions using a multifactorial approach. The inherent simplicity and low cost of this PCR-based method validates bi-directional ASA as an effective tool in future clinical screening and pharmacogenomic research where more expensive fluorescence-based approaches may not be desirable. PMID:11726702

  6. Single tube genotyping of sickle cell anaemia using PCR-based SNP analysis.

    PubMed

    Waterfall, C M; Cobb, B D

    2001-12-01

    Allele-specific amplification (ASA) is a generally applicable technique for the detection of known single nucleotide polymorphisms (SNPs), deletions, insertions and other sequence variations. Conventionally, two reactions are required to determine the zygosity of DNA in a two-allele system, along with significant upstream optimisation to define the specific test conditions. Here, we combine single tube bi-directional ASA with a 'matrix-based' optimisation strategy, speeding up the whole process in a reduced reaction set. We use sickle cell anaemia as our model SNP system, a genetic disease that is currently screened using ASA methods. Discriminatory conditions were rapidly optimised enabling the unambiguous identification of DNA from homozygous sickle cell patients (HbS/S), heterozygous carriers (HbA/S) or normal DNA in a single tube. Simple downstream mathematical analyses based on product yield across the optimisation set allow an insight into the important aspects of priming competition and component interactions in this competitive PCR. This strategy can be applied to any polymorphism, defining specific conditions using a multifactorial approach. The inherent simplicity and low cost of this PCR-based method validates bi-directional ASA as an effective tool in future clinical screening and pharmacogenomic research where more expensive fluorescence-based approaches may not be desirable.

  7. Potential of biogenic hydrogen production for hydrogen driven remediation strategies in marine environments.

    PubMed

    Hosseinkhani, Baharak; Hennebel, Tom; Boon, Nico

    2014-09-25

    Fermentative production of bio-hydrogen (bio-H2) from organic residues has emerged as a promising alternative for providing the required electron source for hydrogen driven remediation strategies. Unlike the widely used production of H2 by bacteria in fresh water systems, few reports are available regarding the generation of biogenic H2 and optimisation processes in marine systems. The present research aims to optimise the capability of an indigenous marine bacterium for the production of bio-H2 in marine environments and subsequently develop this process for hydrogen driven remediation strategies. Fermentative conversion of organics in marine media to H2 using a marine isolate, Pseudoalteromonas sp. BH11, was determined. A Taguchi design of experimental methodology was employed to evaluate the optimal nutritional composition in batch tests to improve bio-H2 yields. Further optimisation experiments showed that alginate-immobilised bacterial cells were able to produce bio-H2 at the same rate as suspended cells over a period of several weeks. Finally, bio-H2 was used as electron donor to successfully dehalogenate trichloroethylene (TCE) using biogenic palladium nanoparticles as a catalyst. Fermentative production of bio-H2 can be a promising technique for concomitant generation of an electron source for hydrogen driven remediation strategies and treatment of organic residue in marine ecosystems. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Microfluidic converging/diverging channels optimised for homogeneous extensional deformation.

    PubMed

    Zografos, K; Pimenta, F; Alves, M A; Oliveira, M S N

    2016-07-01

    In this work, we optimise microfluidic converging/diverging geometries in order to produce constant strain-rates along the centreline of the flow, for performing studies under homogeneous extension. The design is examined for both two-dimensional and three-dimensional flows where the effects of aspect ratio and dimensionless contraction length are investigated. Initially, pressure driven flows of Newtonian fluids under creeping flow conditions are considered, which is a reasonable approximation in microfluidics, and the limits of the applicability of the design in terms of Reynolds numbers are investigated. The optimised geometry is then used for studying the flow of viscoelastic fluids and the practical limitations in terms of Weissenberg number are reported. Furthermore, the optimisation strategy is also applied for electro-osmotic driven flows, where the development of a plug-like velocity profile allows for a wider region of homogeneous extensional deformation in the flow field.

  9. Strategic optimisation of microgrid by evolving a unitised regenerative fuel cell system operational criterion

    NASA Astrophysics Data System (ADS)

    Bhansali, Gaurav; Singh, Bhanu Pratap; Kumar, Rajesh

    2016-09-01

    In this paper, the problem of microgrid optimisation with storage is addressed holistically rather than being confined to loss minimisation. Unitised regenerative fuel cell (URFC) systems are studied and employed in microgrids to store energy and feed it back into the system when required. A value function dependent on line losses, URFC system operational cost and stored energy at the end of the day is defined. The function is highly complex, nonlinear and multidimensional in nature. Therefore, heuristic optimisation techniques combined with load flow analysis are used to resolve the network and time-domain complexity of the problem. Particle swarm optimisation with the forward/backward sweep algorithm ensures optimal operation of the microgrid, thereby minimising its operational cost. Results are shown and found to improve consistently as the solution strategy evolves.
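A minimal particle swarm optimiser of the kind referred to above can be sketched as follows. The cost function, bounds and hyperparameters are generic placeholders; the paper's actual objective couples the swarm to a forward/backward-sweep load flow, which is not reproduced here.

```python
import random

def pso(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimise scalar cost f over box bounds [(lo, hi), ...] with a
    basic particle swarm; a generic sketch, not the paper's
    microgrid-specific value function."""
    dim = len(bounds)
    rnd = random.random
    pos = [[lo + rnd() * (hi - lo) for lo, hi in bounds]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * rnd() * (pbest[i][d] - pos[i][d])
                             + c2 * rnd() * (gbest[d] - pos[i][d]))
                # move and clamp to the feasible box
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the paper's setting, evaluating `f` would itself run a forward/backward sweep load flow to obtain line losses for the candidate dispatch.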

  10. Microfluidic converging/diverging channels optimised for homogeneous extensional deformation

    PubMed Central

    Zografos, K.; Oliveira, M. S. N.

    2016-01-01

    In this work, we optimise microfluidic converging/diverging geometries in order to produce constant strain-rates along the centreline of the flow, for performing studies under homogeneous extension. The design is examined for both two-dimensional and three-dimensional flows where the effects of aspect ratio and dimensionless contraction length are investigated. Initially, pressure driven flows of Newtonian fluids under creeping flow conditions are considered, which is a reasonable approximation in microfluidics, and the limits of the applicability of the design in terms of Reynolds numbers are investigated. The optimised geometry is then used for studying the flow of viscoelastic fluids and the practical limitations in terms of Weissenberg number are reported. Furthermore, the optimisation strategy is also applied for electro-osmotic driven flows, where the development of a plug-like velocity profile allows for a wider region of homogeneous extensional deformation in the flow field. PMID:27478523

  11. Development of a robust chromatographic method for the detection of chlorophenols in cork oak forest soils.

    PubMed

    McLellan, Iain; Hursthouse, Andrew; Morrison, Calum; Varela, Adélia; Pereira, Cristina Silva

    2014-02-01

    A major concern for the cork and wine industry is 'cork taint' which is associated with chloroanisoles, the microbial degradation metabolites of chlorophenols. The use of chlorophenolic compounds as pesticides within cork forests was prohibited in 1993 in the European Union (EU) following the introduction of industry guidance. However, cork produced outside the EU is still thought to be affected and simple, robust methods for chlorophenol analysis are required for wider environmental assessment by industry and local environmental regulators. Soil samples were collected from three common-use forests in Tunisia and from one privately owned forest in Sardinia, providing examples of varied management practice and degree of human intervention. These provided challenge samples for the optimisation of a HPLC-UV detection method. It produced recoveries consistently >75% against a soil CRM (ERM-CC008) for pentachlorophenol. The optimised method, with ultraviolet (diode array) detection is able to separate and quantify 16 different chlorophenols at field concentrations greater than the limits of detection ranging from 6.5 to 191.3 μg/kg (dry weight). Application to a range of field samples demonstrated the absence of widespread contamination in forest soils at sites sampled in Sardinia and Tunisia.

  12. Cost effectiveness of surveillance for GI cancers.

    PubMed

    Omidvari, Amir-Houshang; Meester, Reinier G S; Lansdorp-Vogelaar, Iris

    2016-12-01

    Gastrointestinal (GI) diseases are among the leading causes of death in the world. To reduce the burden of GI diseases, surveillance is recommended for some diseases, including for patients with inflammatory bowel diseases, Barrett's oesophagus, precancerous gastric lesions, colorectal adenoma, and pancreatic neoplasms. This review aims to provide an overview of the evidence on cost-effectiveness of surveillance of individuals with GI conditions predisposing them to cancer, specifically focussing on the aforementioned conditions. We searched the literature and reviewed 21 studies. Despite heterogeneity of studies in terms of settings, study populations, surveillance strategies and outcomes, most reviewed studies suggested at least some surveillance of patients with these GI conditions to be cost-effective. For some high-risk conditions frequent surveillance with 3-month intervals was warranted, while for other conditions, surveillance may only be cost-effective every 10 years. Further studies based on more robust effectiveness evidence are needed to inform and optimise surveillance programmes for GI cancers. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Alternative Zoning Scenarios for Regional Sustainable Land Use Controls in China: A Knowledge-Based Multiobjective Optimisation Model

    PubMed Central

    Xia, Yin; Liu, Dianfeng; Liu, Yaolin; He, Jianhua; Hong, Xiaofeng

    2014-01-01

    Alternative land use zoning scenarios provide guidance for sustainable land use controls. This study focused on an ecologically vulnerable catchment on the Loess Plateau in China, proposed a novel land use zoning model, and generated alternative zoning solutions to satisfy the various requirements of land use stakeholders and managers. This model combined multiple zoning objectives, i.e., maximum zoning suitability, maximum planning compatibility and maximum spatial compactness, with land use constraints by using goal programming technique, and employed a modified simulated annealing algorithm to search for the optimal zoning solutions. The land use zoning knowledge was incorporated into the initialisation operator and neighbourhood selection strategy of the simulated annealing algorithm to improve its efficiency. The case study indicates that the model is both effective and robust. Five optimal zoning scenarios of the study area were helpful for satisfying the requirements of land use controls in loess hilly regions, e.g., land use intensification, agricultural protection and environmental conservation. PMID:25170679

  14. Biochemical methane potential (BMP) tests: Reducing test time by early parameter estimation.

    PubMed

    Da Silva, C; Astals, S; Peces, M; Campos, J L; Guerrero, L

    2018-01-01

    Biochemical methane potential (BMP) testing is a key analytical technique to assess the implementation and optimisation of anaerobic biotechnologies. However, the technique is characterised by long testing times (from 20 to >100 days), which is not suitable for waste utilities, consulting companies or plant operators whose decision-making processes cannot be held up for such a long time. This study develops a statistically robust mathematical strategy using sensitivity functions for early prediction of the BMP first-order model parameters, i.e. methane yield (B0) and kinetic rate constant (k). The minimum testing time for early parameter estimation showed a potential correlation with the k value, where (i) slowly biodegradable substrates (k ≤ 0.1 d⁻¹) have minimum testing times of ≥15 days, (ii) moderately biodegradable substrates (0.1
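The first-order model underlying this estimation is B(t) = B0·(1 − e^(−k·t)). As a deliberately simple illustration of recovering (B0, k) from early measurements, the sketch below fits the model by exhaustive grid search on synthetic data; it is a stand-in for the paper's sensitivity-function strategy, and all grids and values are assumed for the example.

```python
import math

def bmp_model(t, b0, k):
    """Cumulative methane yield under the first-order BMP model."""
    return b0 * (1.0 - math.exp(-k * t))

def fit_bmp(times, yields, b0_grid, k_grid):
    """Least-squares fit of (B0, k) by exhaustive grid search -- a
    deliberately simple stand-in for the paper's statistical
    estimation procedure."""
    best = None
    for b0 in b0_grid:
        for k in k_grid:
            sse = sum((bmp_model(t, b0, k) - y) ** 2
                      for t, y in zip(times, yields))
            if best is None or sse < best[0]:
                best = (sse, b0, k)
    return best[1], best[2]
```

With only the first days of a curve available, the precision of k (and hence the required testing time) depends strongly on how fast the substrate degrades, which is the correlation the abstract reports.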

  15. Optimised cross-layer synchronisation schemes for wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Nasri, Nejah; Ben Fradj, Awatef; Kachouri, Abdennaceur

    2017-07-01

    This paper addresses synchronisation between sensor nodes. In the context of wireless sensor networks, the energy cost induced by synchronisation must be taken into account, as it can represent the majority of the energy consumed. A known hard point in communication is designing a fine-grained synchronisation protocol that is sufficiently robust to intermittent energy availability in the sensors. Hence, this paper addresses performance and energy saving, in particular the optimisation of the synchronisation protocol using a cross-layer design method, i.e. synchronisation between layers. Our approach consists in balancing the energy consumption between the sensors and choosing as cluster head the node with the highest residual energy, in order to guarantee the reliability, integrity and continuity of communication (i.e. maximising the network lifetime).

  16. Set-membership fault detection under noisy environment with application to the detection of abnormal aircraft control surface positions

    NASA Astrophysics Data System (ADS)

    El Houda Thabet, Rihab; Combastel, Christophe; Raïssi, Tarek; Zolghadri, Ali

    2015-09-01

    The paper develops a set membership detection methodology which is applied to the detection of abnormal positions of aircraft control surfaces. Robust and early detection of such abnormal positions is an important issue for early system reconfiguration and overall optimisation of aircraft design. In order to improve fault sensitivity while ensuring a high level of robustness, the method combines a data-driven characterisation of noise and a model-driven approach based on interval prediction. The efficiency of the proposed methodology is illustrated through simulation results obtained based on data recorded in several flight scenarios of a highly representative aircraft benchmark.

  17. Optimisation of follow-up after metabolic surgery.

    PubMed

    Mingrone, Geltrude; Bornstein, Stefan; Le Roux, Carel W

    2018-06-01

    Bariatric surgery has many benefits beyond weight loss, including improved control of glycaemia, blood pressure, and dyslipidaemia; hence, such surgery has been rebranded as metabolic surgery. The operations are, unfortunately, also associated with major surgical and medical complications. The medical complications include gastro-oesophageal reflux disease, malnutrition, and metabolic complications deriving from vitamin and mineral malabsorption. The benefits of surgery can be optimised by implementing specific protocols before and after surgery. In this Review, we discuss the assessment of the risk of major cardiac complications and severe obstructive sleep apnoea before surgery, and the provision of adequate lifelong postsurgery nutritional, vitamin, and mineral supplementation to reduce complications. Additionally, we examine the best antidiabetic medications to reduce the risk of hypoglycaemia after gastric bypass and sleeve gastrectomy, and the strategies to improve weight loss or reduce weight regain. Although optimising clinical pathways is possible to maximise metabolic benefits and reduce the risks of complications and micronutrient deficiencies, evolution of these strategies can further improve the risk-to-benefit ratio of metabolic surgery. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. Class-modelling in food analytical chemistry: Development, sampling, optimisation and validation issues - A tutorial.

    PubMed

    Oliveri, Paolo

    2017-08-22

    Qualitative data modelling is a fundamental branch of pattern recognition, with many applications in analytical chemistry, and embraces two main families: discriminant and class-modelling methods. The first strategy is appropriate when at least two classes are meaningfully defined in the problem under study, while the second strategy is the right choice when the focus is on a single class. For this reason, class-modelling methods are also referred to as one-class classifiers. Although, in the food analytical field, most of the issues would be properly addressed by class-modelling strategies, the use of such techniques is rather limited and, in many cases, discriminant methods are forcedly used for one-class problems, introducing a bias in the outcomes. Key aspects related to the development, optimisation and validation of suitable class models for the characterisation of food products are critically analysed and discussed. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Optimisation of oxygen ion transport in materials for ceramic membrane devices.

    PubMed

    Kilner, J A

    2007-01-01

    Oxygen transport in ceramic oxide materials has received much attention over the past few decades. Much of this interest has stemmed from the desire to construct high temperature electrochemical devices for energy conversion, an example being the solid oxide fuel cell. In order to achieve high performance for these devices, insights are needed in how to achieve optimum performance from the functional components such as the electrolytes and electrodes. This includes the optimisation of oxygen transport through the crystal lattice of electrode and electrolyte materials and across the homogeneous (grain boundary) and heterogeneous interfaces that exist in real devices. Strategies are discussed for the optimisation of these quantities and current problems in the characterisation of interfacial transport are explored.

  20. Accelerating clinical development of HIV vaccine strategies: methodological challenges and considerations in constructing an optimised multi-arm phase I/II trial design.

    PubMed

    Richert, Laura; Doussau, Adélaïde; Lelièvre, Jean-Daniel; Arnold, Vincent; Rieux, Véronique; Bouakane, Amel; Lévy, Yves; Chêne, Geneviève; Thiébaut, Rodolphe

    2014-02-26

    Many candidate vaccine strategies against human immunodeficiency virus (HIV) infection are under study, but their clinical development is lengthy and iterative. To accelerate HIV vaccine development optimised trial designs are needed. We propose a randomised multi-arm phase I/II design for early stage development of several vaccine strategies, aiming at rapidly discarding those that are unsafe or non-immunogenic. We explored early stage designs to evaluate both the safety and the immunogenicity of four heterologous prime-boost HIV vaccine strategies in parallel. One of the vaccines used as a prime and boost in the different strategies (vaccine 1) has yet to be tested in humans, thus requiring a phase I safety evaluation. However, its toxicity risk is considered minimal based on data from similar vaccines. We newly adapted a randomised phase II trial by integrating an early safety decision rule, emulating that of a phase I study. We evaluated the operating characteristics of the proposed design in simulation studies with either a fixed-sample frequentist or a continuous Bayesian safety decision rule and projected timelines for the trial. We propose a randomised four-arm phase I/II design with two independent binary endpoints for safety and immunogenicity. Immunogenicity evaluation at trial end is based on a single-stage Fleming design per arm, comparing the observed proportion of responders in an immunogenicity screening assay to an unacceptably low proportion, without direct comparisons between arms. Randomisation limits heterogeneity in volunteer characteristics between arms. To avoid exposure of additional participants to an unsafe vaccine during the vaccine boost phase, an early safety decision rule is imposed on the arm starting with vaccine 1 injections. In simulations of the design with either decision rule, the risks of erroneous conclusions were controlled <15%. Flexibility in trial conduct is greater with the continuous Bayesian rule. A 12-month gain in timelines is expected by this optimised design. Other existing designs such as bivariate or seamless phase I/II designs did not offer a clear-cut alternative. By combining phase I and phase II evaluations in a multi-arm trial, the proposed optimised design allows for accelerating early stage clinical development of HIV vaccine strategies.
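The operating characteristics of a single-stage Fleming rule per arm ("declare immunogenic if at least r of n volunteers respond") follow directly from binomial tail probabilities. The sketch below computes type I error and power for such a rule; the design values used in the test are illustrative, not those of the trial.

```python
from math import comb

def binom_tail(n, r, p):
    """P(X >= r) for X ~ Binomial(n, p)."""
    return sum(comb(n, x) * p ** x * (1 - p) ** (n - x)
               for x in range(r, n + 1))

def fleming_oc(n, r, p0, p1):
    """Operating characteristics of the per-arm rule 'declare
    immunogenic if >= r responders out of n': type I error under
    the unacceptably low response rate p0, and power under the
    target rate p1. Values here are illustrative assumptions."""
    type1 = binom_tail(n, r, p0)  # wrongly accept a poor vaccine
    power = binom_tail(n, r, p1)  # correctly accept a good one
    return type1, power
```

Choosing (n, r) per arm then amounts to finding the smallest n for which both error rates fall below the desired limits.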

  1. Diversity and carbon storage across the tropical forest biome

    NASA Astrophysics Data System (ADS)

    Sullivan, Martin J. P.; Talbot, Joey; Lewis, Simon L.; Phillips, Oliver L.; Qie, Lan; Begne, Serge K.; Chave, Jerôme; Cuni-Sanchez, Aida; Hubau, Wannes; Lopez-Gonzalez, Gabriela; Miles, Lera; Monteagudo-Mendoza, Abel; Sonké, Bonaventure; Sunderland, Terry; Ter Steege, Hans; White, Lee J. T.; Affum-Baffoe, Kofi; Aiba, Shin-Ichiro; de Almeida, Everton Cristo; de Oliveira, Edmar Almeida; Alvarez-Loayza, Patricia; Dávila, Esteban Álvarez; Andrade, Ana; Aragão, Luiz E. O. C.; Ashton, Peter; Aymard C., Gerardo A.; Baker, Timothy R.; Balinga, Michael; Banin, Lindsay F.; Baraloto, Christopher; Bastin, Jean-Francois; Berry, Nicholas; Bogaert, Jan; Bonal, Damien; Bongers, Frans; Brienen, Roel; Camargo, José Luís C.; Cerón, Carlos; Moscoso, Victor Chama; Chezeaux, Eric; Clark, Connie J.; Pacheco, Álvaro Cogollo; Comiskey, James A.; Valverde, Fernando Cornejo; Coronado, Eurídice N. Honorio; Dargie, Greta; Davies, Stuart J.; de Canniere, Charles; Djuikouo K., Marie Noel; Doucet, Jean-Louis; Erwin, Terry L.; Espejo, Javier Silva; Ewango, Corneille E. N.; Fauset, Sophie; Feldpausch, Ted R.; Herrera, Rafael; Gilpin, Martin; Gloor, Emanuel; Hall, Jefferson S.; Harris, David J.; Hart, Terese B.; Kartawinata, Kuswata; Kho, Lip Khoon; Kitayama, Kanehiro; Laurance, Susan G. W.; Laurance, William F.; Leal, Miguel E.; Lovejoy, Thomas; Lovett, Jon C.; Lukasu, Faustin Mpanya; Makana, Jean-Remy; Malhi, Yadvinder; Maracahipes, Leandro; Marimon, Beatriz S.; Junior, Ben Hur Marimon; Marshall, Andrew R.; Morandi, Paulo S.; Mukendi, John Tshibamba; Mukinzi, Jaques; Nilus, Reuben; Vargas, Percy Núñez; Camacho, Nadir C. Pallqui; Pardo, Guido; Peña-Claros, Marielos; Pétronelli, Pascal; Pickavance, Georgia C.; Poulsen, Axel Dalberg; Poulsen, John R.; Primack, Richard B.; Priyadi, Hari; Quesada, Carlos A.; Reitsma, Jan; Réjou-Méchain, Maxime; Restrepo, Zorayda; Rutishauser, Ervan; Salim, Kamariah Abu; Salomão, Rafael P.; Samsoedin, Ismayadi; Sheil, Douglas; Sierra, Rodrigo; Silveira, Marcos; Slik, J. W. Ferry; Steel, Lisa; Taedoumg, Hermann; Tan, Sylvester; Terborgh, John W.; Thomas, Sean C.; Toledo, Marisol; Umunay, Peter M.; Gamarra, Luis Valenzuela; Vieira, Ima Célia Guimarães; Vos, Vincent A.; Wang, Ophelia; Willcock, Simon; Zemagho, Lise

    2017-01-01

    Tropical forests are global centres of biodiversity and carbon storage. Many tropical countries aspire to protect forest to fulfil biodiversity and climate mitigation policy targets, but the conservation strategies needed to achieve these two functions depend critically on the tropical forest tree diversity-carbon storage relationship. Assessing this relationship is challenging due to the scarcity of inventories where carbon stocks in aboveground biomass and species identifications have been simultaneously and robustly quantified. Here, we compile a unique pan-tropical dataset of 360 plots located in structurally intact old-growth closed-canopy forest, surveyed using standardised methods, allowing a multi-scale evaluation of diversity-carbon relationships in tropical forests. Diversity-carbon relationships among all plots at 1 ha scale across the tropics are absent, and within continents are either weak (Asia) or absent (Amazonia, Africa). A weak positive relationship is detectable within 1 ha plots, indicating that diversity effects in tropical forests may be scale dependent. The absence of clear diversity-carbon relationships at scales relevant to conservation planning means that carbon-centred conservation strategies will inevitably miss many high diversity ecosystems. As tropical forests can have any combination of tree diversity and carbon stocks both require explicit consideration when optimising policies to manage tropical carbon and biodiversity.

  2. Diversity and carbon storage across the tropical forest biome.

    PubMed

    Sullivan, Martin J P; Talbot, Joey; Lewis, Simon L; Phillips, Oliver L; Qie, Lan; Begne, Serge K; Chave, Jerôme; Cuni-Sanchez, Aida; Hubau, Wannes; Lopez-Gonzalez, Gabriela; Miles, Lera; Monteagudo-Mendoza, Abel; Sonké, Bonaventure; Sunderland, Terry; Ter Steege, Hans; White, Lee J T; Affum-Baffoe, Kofi; Aiba, Shin-Ichiro; de Almeida, Everton Cristo; de Oliveira, Edmar Almeida; Alvarez-Loayza, Patricia; Dávila, Esteban Álvarez; Andrade, Ana; Aragão, Luiz E O C; Ashton, Peter; Aymard C, Gerardo A; Baker, Timothy R; Balinga, Michael; Banin, Lindsay F; Baraloto, Christopher; Bastin, Jean-Francois; Berry, Nicholas; Bogaert, Jan; Bonal, Damien; Bongers, Frans; Brienen, Roel; Camargo, José Luís C; Cerón, Carlos; Moscoso, Victor Chama; Chezeaux, Eric; Clark, Connie J; Pacheco, Álvaro Cogollo; Comiskey, James A; Valverde, Fernando Cornejo; Coronado, Eurídice N Honorio; Dargie, Greta; Davies, Stuart J; De Canniere, Charles; Djuikouo K, Marie Noel; Doucet, Jean-Louis; Erwin, Terry L; Espejo, Javier Silva; Ewango, Corneille E N; Fauset, Sophie; Feldpausch, Ted R; Herrera, Rafael; Gilpin, Martin; Gloor, Emanuel; Hall, Jefferson S; Harris, David J; Hart, Terese B; Kartawinata, Kuswata; Kho, Lip Khoon; Kitayama, Kanehiro; Laurance, Susan G W; Laurance, William F; Leal, Miguel E; Lovejoy, Thomas; Lovett, Jon C; Lukasu, Faustin Mpanya; Makana, Jean-Remy; Malhi, Yadvinder; Maracahipes, Leandro; Marimon, Beatriz S; Junior, Ben Hur Marimon; Marshall, Andrew R; Morandi, Paulo S; Mukendi, John Tshibamba; Mukinzi, Jaques; Nilus, Reuben; Vargas, Percy Núñez; Camacho, Nadir C Pallqui; Pardo, Guido; Peña-Claros, Marielos; Pétronelli, Pascal; Pickavance, Georgia C; Poulsen, Axel Dalberg; Poulsen, John R; Primack, Richard B; Priyadi, Hari; Quesada, Carlos A; Reitsma, Jan; Réjou-Méchain, Maxime; Restrepo, Zorayda; Rutishauser, Ervan; Salim, Kamariah Abu; Salomão, Rafael P; Samsoedin, Ismayadi; Sheil, Douglas; Sierra, Rodrigo; Silveira, Marcos; Slik, J W Ferry; Steel, Lisa; Taedoumg, Hermann; Tan, Sylvester; Terborgh, John W; Thomas, Sean C; Toledo, Marisol; Umunay, Peter M; Gamarra, Luis Valenzuela; Vieira, Ima Célia Guimarães; Vos, Vincent A; Wang, Ophelia; Willcock, Simon; Zemagho, Lise

    2017-01-17

Tropical forests are global centres of biodiversity and carbon storage. Many tropical countries aspire to protect forest to fulfil biodiversity and climate mitigation policy targets, but the conservation strategies needed to achieve these two functions depend critically on the tropical forest tree diversity-carbon storage relationship. Assessing this relationship is challenging due to the scarcity of inventories where carbon stocks in aboveground biomass and species identifications have been simultaneously and robustly quantified. Here, we compile a unique pan-tropical dataset of 360 plots located in structurally intact old-growth closed-canopy forest, surveyed using standardised methods, allowing a multi-scale evaluation of diversity-carbon relationships in tropical forests. Diversity-carbon relationships among all plots at 1 ha scale across the tropics are absent, and within continents are either weak (Asia) or absent (Amazonia, Africa). A weak positive relationship is detectable within 1 ha plots, indicating that diversity effects in tropical forests may be scale dependent. The absence of clear diversity-carbon relationships at scales relevant to conservation planning means that carbon-centred conservation strategies will inevitably miss many high diversity ecosystems. As tropical forests can have any combination of tree diversity and carbon stocks, both require explicit consideration when optimising policies to manage tropical carbon and biodiversity.

  3. Diversity and carbon storage across the tropical forest biome

    PubMed Central

    Sullivan, Martin J. P.; Talbot, Joey; Lewis, Simon L.; Phillips, Oliver L.; Qie, Lan; Begne, Serge K.; Chave, Jerôme; Cuni-Sanchez, Aida; Hubau, Wannes; Lopez-Gonzalez, Gabriela; Miles, Lera; Monteagudo-Mendoza, Abel; Sonké, Bonaventure; Sunderland, Terry; ter Steege, Hans; White, Lee J. T.; Affum-Baffoe, Kofi; Aiba, Shin-ichiro; de Almeida, Everton Cristo; de Oliveira, Edmar Almeida; Alvarez-Loayza, Patricia; Dávila, Esteban Álvarez; Andrade, Ana; Aragão, Luiz E. O. C.; Ashton, Peter; Aymard C., Gerardo A.; Baker, Timothy R.; Balinga, Michael; Banin, Lindsay F.; Baraloto, Christopher; Bastin, Jean-Francois; Berry, Nicholas; Bogaert, Jan; Bonal, Damien; Bongers, Frans; Brienen, Roel; Camargo, José Luís C.; Cerón, Carlos; Moscoso, Victor Chama; Chezeaux, Eric; Clark, Connie J.; Pacheco, Álvaro Cogollo; Comiskey, James A.; Valverde, Fernando Cornejo; Coronado, Eurídice N. Honorio; Dargie, Greta; Davies, Stuart J.; De Canniere, Charles; Djuikouo K., Marie Noel; Doucet, Jean-Louis; Erwin, Terry L.; Espejo, Javier Silva; Ewango, Corneille E. N.; Fauset, Sophie; Feldpausch, Ted R.; Herrera, Rafael; Gilpin, Martin; Gloor, Emanuel; Hall, Jefferson S.; Harris, David J.; Hart, Terese B.; Kartawinata, Kuswata; Kho, Lip Khoon; Kitayama, Kanehiro; Laurance, Susan G. W.; Laurance, William F.; Leal, Miguel E.; Lovejoy, Thomas; Lovett, Jon C.; Lukasu, Faustin Mpanya; Makana, Jean-Remy; Malhi, Yadvinder; Maracahipes, Leandro; Marimon, Beatriz S.; Junior, Ben Hur Marimon; Marshall, Andrew R.; Morandi, Paulo S.; Mukendi, John Tshibamba; Mukinzi, Jaques; Nilus, Reuben; Vargas, Percy Núñez; Camacho, Nadir C. 
Pallqui; Pardo, Guido; Peña-Claros, Marielos; Pétronelli, Pascal; Pickavance, Georgia C.; Poulsen, Axel Dalberg; Poulsen, John R.; Primack, Richard B.; Priyadi, Hari; Quesada, Carlos A.; Reitsma, Jan; Réjou-Méchain, Maxime; Restrepo, Zorayda; Rutishauser, Ervan; Salim, Kamariah Abu; Salomão, Rafael P.; Samsoedin, Ismayadi; Sheil, Douglas; Sierra, Rodrigo; Silveira, Marcos; Slik, J. W. Ferry; Steel, Lisa; Taedoumg, Hermann; Tan, Sylvester; Terborgh, John W.; Thomas, Sean C.; Toledo, Marisol; Umunay, Peter M.; Gamarra, Luis Valenzuela; Vieira, Ima Célia Guimarães; Vos, Vincent A.; Wang, Ophelia; Willcock, Simon; Zemagho, Lise

    2017-01-01

Tropical forests are global centres of biodiversity and carbon storage. Many tropical countries aspire to protect forest to fulfil biodiversity and climate mitigation policy targets, but the conservation strategies needed to achieve these two functions depend critically on the tropical forest tree diversity-carbon storage relationship. Assessing this relationship is challenging due to the scarcity of inventories where carbon stocks in aboveground biomass and species identifications have been simultaneously and robustly quantified. Here, we compile a unique pan-tropical dataset of 360 plots located in structurally intact old-growth closed-canopy forest, surveyed using standardised methods, allowing a multi-scale evaluation of diversity-carbon relationships in tropical forests. Diversity-carbon relationships among all plots at 1 ha scale across the tropics are absent, and within continents are either weak (Asia) or absent (Amazonia, Africa). A weak positive relationship is detectable within 1 ha plots, indicating that diversity effects in tropical forests may be scale dependent. The absence of clear diversity-carbon relationships at scales relevant to conservation planning means that carbon-centred conservation strategies will inevitably miss many high diversity ecosystems. As tropical forests can have any combination of tree diversity and carbon stocks, both require explicit consideration when optimising policies to manage tropical carbon and biodiversity. PMID:28094794

  4. Thermal energy and economic analysis of a PCM-enhanced household envelope considering different climate zones in Morocco

    NASA Astrophysics Data System (ADS)

    Kharbouch, Yassine; Mimet, Abdelaziz; El Ganaoui, Mohammed; Ouhsaine, Lahoucine

    2018-07-01

This study investigates the thermal energy potential and economic feasibility of an air-conditioned family household integrating phase change materials (PCMs), considering different climate zones in Morocco. A simulation-based optimisation was carried out to define the optimal design of a PCM-enhanced household envelope for the thermal energy effectiveness and cost-effectiveness of predefined candidate solutions. The optimisation methodology couples Energyplus® as a dynamic simulation tool with GenOpt® as an optimisation tool. Considering the obtained optimum design strategies, a thermal energy and economic analysis is carried out to investigate the feasibility of PCM integration in Moroccan construction. The results show that the PCM-integrated household envelope reduces the cooling/heating thermal energy demand relative to a reference household without PCM. For the cost-effectiveness optimisation, however, it is concluded that economic feasibility remains insufficient under current PCM market conditions. The optimal design parameter results are also analysed.
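The simulation-based optimisation loop described above can be sketched in miniature. Here a hypothetical annual-cost curve (energy cost falling with PCM thickness, material cost rising) stands in for the EnergyPlus building model, and a one-dimensional golden-section search stands in for GenOpt's optimisation drivers; the cost model and all its constants are illustrative assumptions, not values from the paper.

```python
import math

def annual_cost(t_mm):
    # Hypothetical stand-in for a full building simulation:
    # heating/cooling cost decays with PCM thickness, material cost grows.
    energy = 120.0 * math.exp(-t_mm / 15.0)   # energy cost, currency/yr
    material = 2.5 * t_mm                     # annualised PCM cost, currency/yr
    return energy + material

def golden_section_min(f, a, b, iters=80):
    """Minimise a unimodal function on [a, b] without derivatives."""
    phi = (math.sqrt(5) - 1) / 2
    c, d = b - phi * (b - a), a + phi * (b - a)
    for _ in range(iters):
        if f(c) < f(d):
            b, d = d, c
            c = b - phi * (b - a)
        else:
            a, c = c, d
            d = a + phi * (b - a)
    return 0.5 * (a + b)

t_opt = golden_section_min(annual_cost, 0.0, 60.0)   # optimal thickness, mm
```

The same loop structure applies when `annual_cost` is an expensive external simulation call rather than a closed-form expression; only the evaluation budget changes.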

  5. Optimisation of sensing time and transmission time in cognitive radio-based smart grid networks

    NASA Astrophysics Data System (ADS)

    Yang, Chao; Fu, Yuli; Yang, Junjie

    2016-07-01

Cognitive radio (CR)-based smart grid (SG) networks have been widely recognised as emerging communication paradigms in power grids. However, sufficient spectrum resources and reliability are two major challenges for real-time applications in CR-based SG networks. In this article, we study the traffic data collection problem. Based on the two-stage power pricing model, the power price is associated with the effectively received traffic data in a meter data management system (MDMS). In order to minimise the system power price, a wideband hybrid access strategy is proposed and analysed to share the spectrum between the SG nodes and CR networks. The sensing time and transmission time are jointly optimised, while both the interference to primary users and the spectrum opportunity loss of secondary users are considered. Two algorithms are proposed to solve the joint optimisation problem. Simulation results show that the proposed joint optimisation algorithms outperform fixed-parameter (sensing time and transmission time) algorithms, and that the power cost is reduced effectively.
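The sensing-time trade-off at the heart of this kind of joint optimisation can be illustrated with the classical energy-detector model (in the style of Liang et al.'s sensing-throughput trade-off): longer sensing lowers the false-alarm probability but leaves less of the frame for transmission. All parameter values below are illustrative assumptions, not the paper's settings.

```python
import math

def Q(x):
    """Gaussian tail probability."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def Q_inv(p, lo=-10.0, hi=10.0):
    """Inverse of Q via bisection (Q is strictly decreasing)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if Q(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

fs   = 6e6                              # sensing bandwidth, Hz (assumed)
T    = 0.100                            # frame duration, s (assumed)
gam  = 10 ** (-15 / 10)                 # primary-user SNR at detector, -15 dB
Pd   = 0.9                              # target detection probability
P_h0 = 0.8                              # probability the band is idle
C0   = math.log2(1 + 10 ** (20 / 10))   # secondary-link capacity when idle

def false_alarm(ts):
    # Energy-detector false-alarm probability for sensing time ts
    return Q(math.sqrt(2 * gam + 1) * Q_inv(Pd) + math.sqrt(ts * fs) * gam)

def throughput(ts):
    # Longer sensing -> fewer false alarms, but less time left to transmit
    return (T - ts) / T * P_h0 * C0 * (1 - false_alarm(ts))

grid = [i * 1e-4 for i in range(1, 990)]   # candidate ts, 0.1 ms steps
best_ts = max(grid, key=throughput)        # interior optimum of the trade-off
```

The optimum lands strictly inside the frame: sensing for too short a time wastes capacity on false alarms, sensing too long wastes it on idle listening.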

  6. A generic methodology for the optimisation of sewer systems using stochastic programming and self-optimizing control.

    PubMed

    Mauricio-Iglesias, Miguel; Montero-Castro, Ignacio; Mollerup, Ane L; Sin, Gürkan

    2015-05-15

The design of sewer system control is a complex task given the large size of sewer networks, the transient dynamics of the water flow and the stochastic nature of rainfall. This contribution presents a generic methodology for the design of a self-optimising controller for sewer systems. Such a controller aims to keep the system close to optimal performance through an optimal selection of controlled variables. Optimal performance was defined by a two-stage optimisation (stochastic and deterministic) taking into account both the overflow during the current rain event and the expected overflow given the probability of a future rain event. The methodology is successfully applied to design an optimising control strategy for a subcatchment area in Copenhagen. The results are promising and are expected to contribute to advances in the operation and control of sewer systems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. PGA/MOEAD: a preference-guided evolutionary algorithm for multi-objective decision-making problems with interval-valued fuzzy preferences

    NASA Astrophysics Data System (ADS)

    Luo, Bin; Lin, Lin; Zhong, ShiSheng

    2018-02-01

In this research, we propose a preference-guided optimisation algorithm for multi-criteria decision-making (MCDM) problems with interval-valued fuzzy preferences. First, the interval-valued fuzzy preferences are decomposed into a series of precise and evenly distributed preference-vectors (reference directions) over the objectives to be optimised, on the basis of a uniform design strategy. Then the preference information is further incorporated into the preference-vectors using the boundary intersection approach; meanwhile, the MCDM problem with interval-valued fuzzy preferences is reformulated into a series of single-objective optimisation sub-problems (each sub-problem corresponding to a decomposed preference-vector). Finally, a preference-guided optimisation algorithm based on MOEA/D (multi-objective evolutionary algorithm based on decomposition) is proposed to solve the sub-problems in a single run. The proposed algorithm incorporates the preference-vectors within the optimisation process to guide the search towards a more promising subset of the efficient solutions matching the interval-valued fuzzy preferences. Numerous test instances and an engineering application are employed to validate the performance of the proposed algorithm, and the results demonstrate its effectiveness and feasibility.
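The decomposition idea behind MOEA/D can be sketched on a toy bi-objective problem, min(x², (x−2)²). The weight interval [0.3, 0.7] below stands in for the paper's decomposed preference-vectors (restricting the search to a preferred region of the Pareto front); the problem, weights and all parameters are illustrative assumptions, not the authors' algorithm.

```python
import random

random.seed(0)

def f(x):
    # Toy bi-objective problem; its Pareto set is x in [0, 2]
    return (x * x, (x - 2) ** 2)

N = 11                                              # number of sub-problems
w1 = [0.3 + 0.4 * i / (N - 1) for i in range(N)]    # preference-guided weights
weights = [(w, 1 - w) for w in w1]
TN = 3                                              # neighbourhood size

def neighbours(i):
    return sorted(range(N), key=lambda j: abs(w1[j] - w1[i]))[:TN]

pop = [random.uniform(0, 2) for _ in range(N)]      # one solution per weight
z = [min(f(x)[0] for x in pop), min(f(x)[1] for x in pop)]   # ideal point

def tcheby(x, lam):
    # Weighted Tchebycheff scalarisation of one sub-problem
    fx = f(x)
    return max(lam[0] * abs(fx[0] - z[0]), lam[1] * abs(fx[1] - z[1]))

for gen in range(300):
    for i in range(N):
        parent = pop[random.choice(neighbours(i))]
        child = min(2.0, max(0.0, parent + random.gauss(0, 0.1)))
        z = [min(z[0], f(child)[0]), min(z[1], f(child)[1])]  # update ideal
        for k in neighbours(i):        # child replaces neighbours it improves
            if tcheby(child, weights[k]) < tcheby(pop[k], weights[k]):
                pop[k] = child
```

Each sub-problem converges to a different point of the preferred Pareto region: the heavier the weight on the first objective, the closer its solution sits to x = 0.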

  8. A brief understanding of process optimisation in microwave-assisted extraction of botanical materials: options and opportunities with chemometric tools.

    PubMed

    Das, Anup Kumar; Mandal, Vivekananda; Mandal, Subhash C

    2014-01-01

Extraction forms the very basic step in research on natural products for drug discovery, and a poorly optimised and planned extraction methodology can jeopardise the entire mission. The objective is to provide a vivid picture of the different chemometric tools and planning involved in process optimisation and method development for the extraction of botanical material, with emphasis on microwave-assisted extraction (MAE). A review of studies applying chemometric tools in combination with MAE of botanical materials was undertaken to identify the significant extraction factors. To optimise a response by fine-tuning those factors, experimental design, or statistical design of experiments (DoE), a core area of chemometrics, was then used for statistical analysis and interpretation. This review briefly explains the different aspects and methodologies of MAE of botanical materials that have been subjected to experimental design, along with some general chemometric tools and the steps involved in the practice of MAE. A detailed study of the various factors and responses involved in the optimisation is also presented. The article will assist in obtaining better insight into chemometric strategies for process optimisation and method development, which will in turn improve decision-making when selecting influential extraction parameters. Copyright © 2013 John Wiley & Sons, Ltd.

  9. A radiobiology-based inverse treatment planning method for optimisation of permanent l-125 prostate implants in focal brachytherapy.

    PubMed

    Haworth, Annette; Mears, Christopher; Betts, John M; Reynolds, Hayley M; Tack, Guido; Leo, Kevin; Williams, Scott; Ebert, Martin A

    2016-01-07

    Treatment plans for ten patients, initially treated with a conventional approach to low dose-rate brachytherapy (LDR, 145 Gy to entire prostate), were compared with plans for the same patients created with an inverse-optimisation planning process utilising a biologically-based objective. The 'biological optimisation' considered a non-uniform distribution of tumour cell density through the prostate based on known and expected locations of the tumour. Using dose planning-objectives derived from our previous biological-model validation study, the volume of the urethra receiving 125% of the conventional prescription (145 Gy) was reduced from a median value of 64% to less than 8% whilst maintaining high values of TCP. On average, the number of planned seeds was reduced from 85 to less than 75. The robustness of plans to random seed displacements needs to be carefully considered when using contemporary seed placement techniques. We conclude that an inverse planning approach to LDR treatments, based on a biological objective, has the potential to maintain high rates of tumour control whilst minimising dose to healthy tissue. In future, the radiobiological model will be informed using multi-parametric MRI to provide a personalised medicine approach.

  10. Conventional laboratory methods for cyanotoxins.

    PubMed

    Lawton, Linda A; Edwards, C

    2008-01-01

It is clear from the literature that numerous methods are available for most cyanotoxins, although many publications on monitoring data indicate that the favoured approach is the use of proven, robust methods for individual toxins. The most effective approach is a robust rapid screen, where positive samples are followed up by qualitative and quantitative analysis to provide the essential decision-making data needed for successful management strategies (Fig. 2). Currently, rapid screens are available for microcystins, saxitoxins and anatoxin-a(s); whilst further optimisation and validation are needed, many publications report good correlation with the mouse bioassay and HPLC. There is an urgent need for rapid, simple and inexpensive assays for cylindrospermopsins, anatoxin-a and BMAA. Although methods exist for the analysis of BMAA, a recent study showing 95% of cyanobacteria producing it, some at levels > 6,000 microg g(-1) dry wt, is of concern, and rapid screening followed by robust analysis is needed. An ideal approach would be a single method capable of extracting and detecting all cyanotoxins. Several publications describe such approaches using LC-MS, but as expected for a group of compounds with diverse chemistry, there are obvious limitations in recoveries during sample processing, chromatographic performance and sensitivity (Dahlmann et al. 2003, Dell'Aversano et al. 2004, Pietsch et al. 2001). Selection of methods must be based on the application requirements, the equipment available and cost, and for many organisations it may be more cost effective to out-source occasional analyses. However, as the incidence of blooms appears to be increasing and more rigorous monitoring is required, sensible investment is needed to meet recommended guidelines. Most of the methods discussed in this paper are suitable for achieving this goal, although clean-up and concentration are usually necessary for physicochemical methods.

  11. The use of surrogates for an optimal management of coupled groundwater-agriculture hydrosystems

    NASA Astrophysics Data System (ADS)

    Grundmann, J.; Schütze, N.; Brettschneider, M.; Schmitz, G. H.; Lennartz, F.

    2012-04-01

To ensure optimal sustainable water resources management in arid coastal environments, we develop a new simulation-based integrated water management system. It aims at achieving the best possible solutions for groundwater withdrawals for agricultural and municipal water use, including saline water management, together with a substantial increase in water use efficiency in irrigated agriculture. To achieve robust and fast operation of the management system with respect to water quality and quantity, we develop appropriate surrogate models by combining physically based process modelling with methods of artificial intelligence. We use an artificial neural network, trained on a scenario database generated by a numerical density-dependent groundwater flow model, to model the aquifer response including the seawater interface. To simulate the behaviour of highly productive agricultural farms, crop water production functions are generated by means of soil-vegetation-atmosphere-transport (SVAT) models adapted to the regional climate conditions, together with a novel evolutionary optimisation algorithm for optimal irrigation scheduling and control. We apply both surrogates exemplarily within a simulation-based optimisation environment using the characteristics of the south Batinah region in the Sultanate of Oman, which is affected by saltwater intrusion into the coastal aquifer due to excessive groundwater withdrawal for irrigated agriculture. We demonstrate the effectiveness of our methodology for the evaluation and optimisation of different irrigation practices, cropping patterns and the resulting abstraction scenarios. Owing to contradicting objectives, such as profit-oriented agriculture vs. aquifer sustainability, a multi-criteria optimisation is performed.
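The surrogate idea is simple to sketch: sample an expensive physically based model a few times, fit a cheap approximation, and optimise the approximation instead. Below, a hypothetical "net benefit vs. abstraction rate" curve stands in for the groundwater/SVAT simulations, and a quadratic least-squares fit stands in for the neural-network surrogate; all functions and numbers are illustrative assumptions.

```python
def expensive_model(q):
    # Stand-in for a costly simulation: revenue from abstraction rate q
    # minus a drawdown/salinity penalty (purely illustrative).
    return 10 * q - q ** 2 - 0.05 * q ** 3

samples = [(q, expensive_model(q)) for q in (0, 2, 4, 6, 8)]   # design points

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            fct = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= fct * M[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

# Quadratic surrogate y = c0 + c1*q + c2*q^2 via the normal equations
S = [sum(q ** k for q, _ in samples) for k in range(5)]
b = [sum(y * q ** k for q, y in samples) for k in range(3)]
A = [[S[i + j] for j in range(3)] for i in range(3)]
c0, c1, c2 = solve3(A, b)

q_opt = -c1 / (2 * c2)   # optimise the cheap surrogate: vertex of the parabola
```

Five samples of the "expensive" model are enough here for the surrogate optimum to land close to the true optimum, which is the whole point of the approach when each model run takes hours.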

  12. Statistical optimisation techniques in fatigue signal editing problem

    NASA Astrophysics Data System (ADS)

    Nopiah, Z. M.; Osman, M. H.; Baharin, N.; Abdullah, S.

    2015-02-01

Success in fatigue signal editing is determined by the level of length reduction achieved without compromising statistical constraints. A great reduction rate can be achieved by removing small-amplitude cycles from the recorded signal. A long recorded signal, however, can make the cycle-to-cycle editing process daunting, which has encouraged researchers to focus on the segment-based approach. This paper discusses the joint application of the Running Damage Extraction (RDE) technique and a single-constrained Genetic Algorithm (GA) in fatigue signal editing optimisation. In the first section, the RDE technique is used to restructure and summarise the fatigue strain. This technique combines the overlapping window and fatigue strain-life models. It is designed to identify and isolate the fatigue events that exist in the variable-amplitude strain data into different segments, whereby the retention of statistical parameters and the vibration energy are considered. In the second section, the fatigue data editing problem is formulated as a constrained single-objective optimisation problem that can be solved using the GA method. The GA produces the shortest edited fatigue signal by selecting appropriate segments from a pool of labelled segments. Challenges arise due to constraints on segment selection by deviation level over three signal properties, namely cumulative fatigue damage, root mean square and kurtosis values. Experimental results over several case studies show that the idea of solving fatigue signal editing within an optimisation framework is effective and automatic, and that the GA is robust for constrained segment selection.
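The constrained segment-selection idea can be sketched with a penalty-based GA: a binary chromosome marks which pre-labelled segments to keep, the objective is the retained signal length, and a damage-retention constraint is enforced through a large penalty. The synthetic segment data, the single damage constraint and all GA parameters below are illustrative assumptions, not the paper's settings.

```python
import random

random.seed(1)

NSEG = 20
lengths = [random.uniform(1.0, 5.0) for _ in range(NSEG)]
# High-amplitude events carry most of the damage, as in variable-amplitude data
damage = [L * random.choice((0.01, 0.05, 2.0)) for L in lengths]
total_len, total_dmg = sum(lengths), sum(damage)
REQUIRED = 0.95 * total_dmg          # keep at least 95% of cumulative damage

def fitness(bits):
    kept_len = sum(L for L, b in zip(lengths, bits) if b)
    kept_dmg = sum(d for d, b in zip(damage, bits) if b)
    # Penalty method: any damage shortfall dominates the length objective
    return kept_len + 1e6 * max(0.0, REQUIRED - kept_dmg)

def tournament(pop):
    a, b = random.sample(pop, 2)
    return a if fitness(a) < fitness(b) else b

# Seed the population with the all-kept chromosome so a feasible point exists
pop = [[1] * NSEG] + [[random.randint(0, 1) for _ in range(NSEG)]
                      for _ in range(39)]
for gen in range(150):
    nxt = [min(pop, key=fitness)]                        # elitism
    while len(nxt) < len(pop):
        p1, p2 = tournament(pop), tournament(pop)
        cut = random.randrange(1, NSEG)
        child = p1[:cut] + p2[cut:]                      # one-point crossover
        child = [b ^ (random.random() < 0.05) for b in child]  # bit-flip
        nxt.append(child)
    pop = nxt

best = min(pop, key=fitness)
best_len = sum(L for L, b in zip(lengths, best) if b)
best_dmg = sum(d for d, b in zip(damage, best) if b)
```

The GA converges on dropping the low-damage segments, shortening the edited signal while the damage constraint stays satisfied; the paper's full formulation additionally constrains root mean square and kurtosis deviations in the same way.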

  13. Statistical optimisation techniques in fatigue signal editing problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nopiah, Z. M.; Osman, M. H.; Baharin, N.

Success in fatigue signal editing is determined by the level of length reduction achieved without compromising statistical constraints. A great reduction rate can be achieved by removing small-amplitude cycles from the recorded signal. A long recorded signal, however, can make the cycle-to-cycle editing process daunting, which has encouraged researchers to focus on the segment-based approach. This paper discusses the joint application of the Running Damage Extraction (RDE) technique and a single-constrained Genetic Algorithm (GA) in fatigue signal editing optimisation. In the first section, the RDE technique is used to restructure and summarise the fatigue strain. This technique combines the overlapping window and fatigue strain-life models. It is designed to identify and isolate the fatigue events that exist in the variable-amplitude strain data into different segments, whereby the retention of statistical parameters and the vibration energy are considered. In the second section, the fatigue data editing problem is formulated as a constrained single-objective optimisation problem that can be solved using the GA method. The GA produces the shortest edited fatigue signal by selecting appropriate segments from a pool of labelled segments. Challenges arise due to constraints on segment selection by deviation level over three signal properties, namely cumulative fatigue damage, root mean square and kurtosis values. Experimental results over several case studies show that the idea of solving fatigue signal editing within an optimisation framework is effective and automatic, and that the GA is robust for constrained segment selection.

  14. The optimisation, design and verification of feed horn structures for future Cosmic Microwave Background missions

    NASA Astrophysics Data System (ADS)

    McCarthy, Darragh; Trappe, Neil; Murphy, J. Anthony; O'Sullivan, Créidhe; Gradziel, Marcin; Doherty, Stephen; Huggard, Peter G.; Polegro, Arturo; van der Vorst, Maarten

    2016-05-01

    In order to investigate the origins of the Universe, it is necessary to carry out full sky surveys of the temperature and polarisation of the Cosmic Microwave Background (CMB) radiation, the remnant of the Big Bang. Missions such as COBE and Planck have previously mapped the CMB temperature, however in order to further constrain evolutionary and inflationary models, it is necessary to measure the polarisation of the CMB with greater accuracy and sensitivity than before. Missions undertaking such observations require large arrays of feed horn antennas to feed the detector arrays. Corrugated horns provide the best performance, however owing to the large number required (circa 5000 in the case of the proposed COrE+ mission), such horns are prohibitive in terms of thermal, mechanical and cost limitations. In this paper we consider the optimisation of an alternative smooth-walled piecewise conical profiled horn, using the mode-matching technique alongside a genetic algorithm. The technique is optimised to return a suitable design using efficient modelling software and standard desktop computing power. A design is presented showing a directional beam pattern and low levels of return loss, cross-polar power and sidelobes, as required by future CMB missions. This design is manufactured and the measured results compared with simulation, showing excellent agreement and meeting the required performance criteria. The optimisation process described here is robust and can be applied to many other applications where specific performance characteristics are required, with the user simply defining the beam requirements.

  15. Global Corporate Priorities and Demand-Led Learning Strategies

    ERIC Educational Resources Information Center

    Dealtry, Richard

    2008-01-01

    Purpose: The purpose of this article is to start the process of exploring how to optimise connections between the strategic needs of an organisation as directed by top management and its learning management structures and strategies. Design/methodology/approach: The article takes a broad brush approach to a complex and large subject area that is…

  16. Gait as solution, but what is the problem? Exploring cost, economy and compromise in locomotion.

    PubMed

    Bertram, John E A

    2013-12-01

Many studies have examined how legged mammals move, defining 'what' happens in locomotion. However, few ask 'why' those motions occur as they do. The energetic and functional constraints acting on an animal require that locomotion be metabolically 'cost effective', and this in large part determines the strategies available to accomplish the task. Understanding the gaits utilised, within the spectrum of gaits possible, and determining the value of specific relationships among speed, stride length, stride frequency and morphology depend on identifying the fundamental costs involved and the effects of different movement strategies on those costs. It is argued here that a fundamental loss associated with moving on limbs (centre-of-mass momentum and energy loss) and two costs involved in controlling and replacing that loss (muscular work of the supporting limb during stance and muscular work of repositioning the limbs during swing) interact to determine the cost trade-offs involved and the optimisation strategies available for each species and speed. These optimisation strategies are what has been observed and characterised as gait. Copyright © 2013 Elsevier Ltd. All rights reserved.

Design of robust flight control laws and gain scheduling via the linear parameter-varying systems approach

    NASA Astrophysics Data System (ADS)

    Hentabli, Kamel

This research is part of the Active Control Technology research project between the Ecole de Technologie Superieure and the aircraft manufacturer Bombardier Aeronautique. The goal is to design robust multivariable control strategies for aircraft dynamic models. These control strategies should give the aircraft high performance and satisfy desired handling qualities, namely good manoeuvrability, good stability margins and damping of the phugoid and short-period motions of the aircraft. We first focused on LTI synthesis methods, specifically the H-infinity approach and mu-synthesis, and subsequently gave particular attention to LPV control techniques. To carry out this work, we adopted a frequency-domain approach, typically H-infinity. This approach is particularly attractive insofar as the synthesis model is built directly from the various design specifications: these are translated into frequency-domain templates corresponding to the input and output weightings found in classical H-infinity synthesis. Furthermore, we used a linear fractional transformation (LFT) representation, judged better suited to accounting for the different types of uncertainty that can act on the system; this representation also proves very appropriate for robustness analysis via the tools of mu-analysis. In addition, to optimise the trade-off between robustness and performance specifications, we opted for a two-degree-of-freedom control structure with a reference model. Finally, these techniques are illustrated on realistic applications, demonstrating the relevance and applicability of each. Keywords: flight control, handling qualities and manoeuvrability, robust control, H-infinity approach, mu-synthesis, linear parameter-varying systems, gain scheduling, linear fractional transformation, linear matrix inequality.

  18. Retrieval medicine: a review and guide for UK practitioners. Part 2: safety in patient retrieval systems

    PubMed Central

    Hearns, S; Shirley, P J

    2006-01-01

    Retrieval and transfer of critically ill and injured patients is a high risk activity. Risk can be minimised with robust safety and clinical governance systems in place. This article describes the various governance systems that can be employed to optimise safety and efficiency in retrieval services. These include operating procedure development, equipment management, communications procedures, crew resource management, significant event analysis, audit and training. PMID:17130608

  19. Optimisation of assembly scheduling in VCIM systems using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Dao, Son Duy; Abhary, Kazem; Marian, Romeo

    2017-09-01

Assembly plays an important role in any production system, as it constitutes a significant portion of the lead time and cost of a product. The virtual computer-integrated manufacturing (VCIM) system is a modern production system being conceptually developed to extend the application of the traditional computer-integrated manufacturing (CIM) system to a global level. Assembly scheduling in VCIM systems is quite different from that in traditional production systems because of the difference in the working principles of the two systems. In this article, the assembly scheduling problem in VCIM systems is modelled, and an integrated approach based on a genetic algorithm (GA) is proposed to search for a globally optimised solution to the problem. Because of the dynamic nature of the scheduling problem, a novel GA with a unique chromosome representation and modified genetic operations is developed herein. The robustness of the proposed approach is verified by a numerical example.

  20. Towards optimal experimental tests on the reality of the quantum state

    NASA Astrophysics Data System (ADS)

    Knee, George C.

    2017-02-01

    The Barrett-Cavalcanti-Lal-Maroney (BCLM) argument stands as the most effective means of demonstrating the reality of the quantum state. Its advantages include being derived from very few assumptions, and a robustness to experimental error. Finding the best way to implement the argument experimentally is an open problem, however, and involves cleverly choosing sets of states and measurements. I show that techniques from convex optimisation theory can be leveraged to numerically search for these sets, which then form a recipe for experiments that allow for the strongest statements about the ontology of the wavefunction to be made. The optimisation approach presented is versatile, efficient and can take account of the finite errors present in any real experiment. I find significantly improved low-cardinality sets which are guaranteed partially optimal for a BCLM test in low Hilbert space dimension. I further show that mixed states can be more optimal than pure states.

  1. More of the same? Comment on "An integrated framework for the optimisation of sport and athlete development: a practitioner approach".

    PubMed

    MacNamara, Aine; Collins, Dave

    2014-01-01

    Gulbin and colleagues (Gulbin, J. P., Croser, M. J., Morley, E. J., & Weissensteiner, J. R. (2013). An integrated framework for the optimisation of sport and athlete development: A practitioner approach. Journal of Sports Sciences) present a new sport and athlete development framework that evolved from empirical observations from working with the Australian Institute of Sport. The FTEM (Foundations, Talent, Elite, Mastery) framework is proposed to integrate general and specialised phases of development for participants within the active lifestyle, sport participation and sport excellence pathways. A number of issues concerning the FTEM framework are presented. We also propose the need to move beyond prescriptive models of talent identification and development towards a consideration of features of best practice and process markers of development together with robust guidelines about the implementation of these in applied practice.

  2. Hardware Design of the Energy Efficient Fall Detection Device

    NASA Astrophysics Data System (ADS)

    Skorodumovs, A.; Avots, E.; Hofmanis, J.; Korāts, G.

    2016-04-01

    Health issues for elderly people may lead to various injuries sustained during simple activities of daily living. Potentially the most dangerous are unintentional falls, which may be critical or even lethal to some patients due to the risk of serious injury. In the project "Wireless Sensor Systems in Telecare Application for Elderly People", we have developed a robust fall detection algorithm for a wearable wireless sensor. To optimise the algorithm for hardware performance and test it in the field, we have designed an accelerometer-based wireless fall detector. Our main considerations were: a) functionality, so that the algorithm can be applied to the chosen hardware, and b) power efficiency, so that it can run for a very long time. We selected and tested the components, built a prototype, optimised the firmware for lowest consumption, tested the performance and measured the consumption parameters. In this paper, we discuss our design choices and present the results of our work.

  3. Developing a spinal cord injury research strategy using a structured process of evidence review and stakeholder dialogue. Part III: outcomes.

    PubMed

    Middleton, J W; Piccenna, L; Lindsay Gruen, R; Williams, S; Creasey, G; Dunlop, S; Brown, D; Batchelor, P E; Berlowitz, D J; Coates, S; Dunn, J A; Furness, J B; Galea, M P; Geraghty, T; Kwon, B K; Urquhart, S; Yates, D; Bragge, P

    2015-10-01

    Focus Group. To develop a unified, regional spinal cord injury (SCI) research strategy for Australia and New Zealand. Australia. A 1-day structured stakeholder dialogue was convened in 2013 in Melbourne, Australia, by the National Trauma Research Institute in collaboration with the SCI Network of Australia and New Zealand. Twenty-three experts participated, representing local and international research, clinical, consumer, advocacy, government policy and funding perspectives. Preparatory work synthesised evidence and articulated draft principles and options as a starting point for discussion. A regional SCI research strategy was proposed, whose objectives can be summarised under four themes. (1) Collaborative networks and strategic partnerships to increase efficiency, reduce duplication, build capacity and optimise research funding. (2) Research priority setting and coordination to manage competing studies. (3) Mechanisms for greater consumer engagement in research. (4) Resources and infrastructure to further develop SCI data registries, evaluate research translation and assess alignment of research strategy with stakeholder interests. These are consistent with contemporary international SCI research strategy development activities. This first step in a regional SCI research strategy has articulated objectives for further development by the wider SCI research community. The initiative has also reinforced the importance of coordinated, collective action in optimising outcomes following SCI.

  4. Non-fragile observer-based output feedback control for polytopic uncertain system under distributed model predictive control approach

    NASA Astrophysics Data System (ADS)

    Zhu, Kaiqun; Song, Yan; Zhang, Sunjie; Zhong, Zhaozhun

    2017-07-01

    In this paper, a non-fragile observer-based output feedback control problem for the polytopic uncertain system under a distributed model predictive control (MPC) approach is discussed. By decomposing the global system into subsystems, the computational complexity is reduced, so online design time can be saved. Moreover, an observer-based output feedback control algorithm is proposed in the framework of distributed MPC to deal with the difficulty of obtaining state measurements. In this way, the presented observer-based output-feedback MPC strategy is more flexible and applicable in practice than the traditional state-feedback one. What is more, the non-fragility of the controller has been taken into consideration in order to increase the robustness of the polytopic uncertain system. After that, a sufficient stability criterion is presented using a Lyapunov-like functional approach; meanwhile, the corresponding control law and the upper bound of the quadratic cost function are derived by solving an optimisation problem subject to convex constraints. Finally, simulation examples are employed to show the effectiveness of the method.

  5. Geographical traceability based on 87Sr/86Sr indicator: a first approach for PDO Lambrusco wines from Modena.

    PubMed

    Durante, Caterina; Baschieri, Carlo; Bertacchini, Lucia; Cocchi, Marina; Sighinolfi, Simona; Silvestri, Michele; Marchetti, Andrea

    2013-12-01

    The main goal of this study was to evaluate the 87Sr/86Sr ratio in different matrices, namely soils, branches and grape juices, of an oenological food chain in order to develop a robust analytical strategy able to link the investigated food to its territory of origin. The 87Sr/86Sr ratio has been used as a traceability marker, and several aspects affecting its variability, i.e. the geological features of the investigated area, the bio-available fraction of elements in the soils and uptake by the plant, have been taken into account. Optimisation of an analytical procedure for the separation of Sr from its interferences, and investigation of the analytical performance in terms of the precision of the methodology used, have been carried out as well. This work highlighted a good match between the isotopic values monitored in the bio-available fraction of soils and their respective grape juices for almost all the investigated areas. The correlation with the food improves satisfyingly when isotopic relative abundance values of vine branches are considered. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. Easy-Going On-Spectrometer Optimisation of Phase Modulated Homonuclear Decoupling Sequences in Solid-State NMR

    NASA Astrophysics Data System (ADS)

    Grimminck, Dennis L. A. G.; Vasa, Suresh K.; Meerts, W. Leo; Kentgens, P. M.

    2011-06-01

    A global optimisation scheme for phase modulated proton homonuclear decoupling sequences in solid-state NMR is presented. Phase modulations, parameterised by DUMBO Fourier coefficients, were optimised using a Covariance Matrix Adaptation Evolution Strategy algorithm. Our method, denoted EASY-GOING homonuclear decoupling, starts with featureless spectra and optimises proton-proton decoupling during either proton or carbon signal detection. On the one hand, our solutions closely resemble (e)DUMBO for moderate sample spinning frequencies and medium radio-frequency (rf) field strengths. On the other hand, the EASY-GOING approach resulted in a superior solution, achieving significantly better resolved proton spectra at a very high rf field strength of 680 kHz. References: N. Hansen and A. Ostermeier, Evol. Comput. 9 (2001) 159-195; B. Elena, G. de Paepe and L. Emsley, Chem. Phys. Lett. 398 (2004) 532-538.

  7. Biomimetic routes to nanoscale-toughened oxide ceramics

    NASA Astrophysics Data System (ADS)

    Deschaume, Olivier

    In this work, a novel anion exchange technique has been developed and optimised in order to prepare extra-pure, hydroxide-free solutions of aluminium polyoxocations (Al13 and Al30) as well as for the preparation of nanosized, highly monodisperse aluminium hydroxide particles in the particle size range 20-200 nm. In order for the evolution and composition of the resulting systems to be monitored, an array of characterisation techniques including 27Al NMR, dynamic light scattering, potentiometry, conductometry and UV-Vis spectroscopy has been implemented and complemented with successful data treatment strategies. The quantitative data obtained indicate that the static anion exchange method is a soft, environmentally friendly, low-cost, energy-saving and convenient procedure for the preparation of Al-containing model systems. The Al species obtained can be used for high-precision model studies on Al speciation, and serve as nanosize precursors to a variety of Al-containing materials. The use of these pure Al precursors has a clear advantage in materials synthesis arising from an improved understanding and better control of Al speciation. In a second development of the project, the model systems have been used in a nanotectonic approach to biomimetic materials synthesis, with possible applications to the optimisation of Al-containing materials such as ceramics or composite films. Bearing this aim in mind, the interactions of the prepared aluminium species with the model protein BSA and a bioelastomer, elastin, were monitored and the resulting composite materials characterised. The methodology developed for the synthesis and characterisation of pure Al species and Al species/biomolecule systems is a robust base for further studies spanning research fields such as chemistry, biology and environmental sciences, and possesses large potential for application to industrial products and processes.

  8. Joint optimisation of arbitrage profits and battery life degradation for grid storage application of battery electric vehicles

    NASA Astrophysics Data System (ADS)

    Kies, Alexander

    2018-02-01

    To meet European decarbonisation targets by 2050, the electrification of the transport sector is mandatory. Most electric vehicles rely on lithium-ion batteries, because they have a higher energy/power density and longer life span compared to other practical batteries such as zinc-carbon batteries. Electric vehicles can thus provide energy storage to support the system integration of generation from highly variable renewable sources, such as wind and photovoltaics (PV). However, charging/discharging causes batteries to degrade progressively, with reduced capacity. In this study, we investigate the impact of the joint optimisation of arbitrage revenue and battery degradation of electric vehicle batteries in a simplified setting, where historical prices allow for market participation of battery electric vehicle owners. It is shown that the joint optimisation of both leads to stronger gains than the sum of both optimisation strategies, and that including battery degradation in the model avoids states of charge close to the maximum at times. It can be concluded that degradation is an important aspect to consider in power system models that incorporate any kind of lithium-ion battery storage.
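    The trade-off described above can be sketched with a toy calculation: arbitrage profit from a single charge/discharge cycle, net of a linear degradation cost per unit of energy cycled. The prices, the degradation figure and the function name are hypothetical and not taken from the study:

    ```python
    def arbitrage_profit(prices, degradation_cost):
        """Greedy single-cycle arbitrage: buy 1 MWh at some hour, sell it at a
        later hour, and pay a degradation cost per MWh cycled. Returns the best
        achievable net profit (0.0 if cycling never pays off)."""
        best = 0.0
        for i in range(len(prices)):             # candidate buy hour
            for j in range(i + 1, len(prices)):  # candidate later sell hour
                best = max(best, prices[j] - prices[i] - degradation_cost)
        return best

    prices = [30.0, 22.0, 55.0, 48.0]  # hypothetical EUR/MWh over four hours

    print(arbitrage_profit(prices, degradation_cost=0.0))   # 33.0: ignores battery wear
    print(arbitrage_profit(prices, degradation_cost=10.0))  # 23.0: wear-aware net profit
    ```

    With a high enough degradation cost the best net profit drops to zero, i.e. the wear-aware owner chooses not to cycle at all, mirroring the study's point that degradation changes the optimal dispatch.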

  9. An integrated portfolio optimisation procedure based on data envelopment analysis, artificial bee colony algorithm and genetic programming

    NASA Astrophysics Data System (ADS)

    Hsu, Chih-Ming

    2014-12-01

    Portfolio optimisation is an important issue in the field of investment/financial decision-making and has received considerable attention from both researchers and practitioners. However, besides portfolio optimisation, a complete investment procedure should also include the selection of profitable investment targets and the determination of the optimal timing for buying/selling the investment targets. In this study, an integrated procedure using data envelopment analysis (DEA), artificial bee colony (ABC) and genetic programming (GP) is proposed to resolve a portfolio optimisation problem. The proposed procedure is evaluated through a case study on investing in stocks in the semiconductor sub-section of the Taiwan stock market for 4 years. The potential average 6-month return on investment of 9.31% from 1 November 2007 to 31 October 2011 indicates that the proposed procedure can be considered a feasible and effective tool for making outstanding investment plans, and thus making profits in the Taiwan stock market. Moreover, it is a strategy that can help investors to make profits even when the overall stock market suffers a loss.

  10. Comparing Student Learning Experiences of In-Text Commentary and Rubric-Articulated Feedback: Strategies for Formative Assessment

    ERIC Educational Resources Information Center

    Nordrum, Lene; Evans, Katherine; Gustafsson, Magnus

    2013-01-01

    This study compares students' experiences of two types of criteria-based assessment: in-text commentary and rubric-articulated feedback, in an assessment design combining the two feedback channels. The main aim is to use students' responses to shed light on how feedback strategies for formative assessment can be optimised. Following action…

  11. Need for Optimisation of Immunisation Strategies Targeting Invasive Meningococcal Disease in the Netherlands.

    PubMed

    Bousema, Josefien Cornelie Minthe; Ruitenberg, Joost

    2015-09-13

    Invasive meningococcal disease (IMD) is a severe bacterial infectious disease with high mortality and morbidity rates worldwide. In recent years, industrialised countries have implemented vaccines targeting IMD in their National Immunisation Programmes (NIPs). In 2002, the Netherlands successfully implemented a single dose of meningococcal serogroup C conjugate vaccine at the age of 14 months and performed a single catch-up for children ≤18 years of age. Since then, the disease has disappeared in vaccinated individuals. Furthermore, herd protection was induced, leading to a significant IMD reduction in non-vaccinated individuals. However, previous studies revealed that the current programmatic immunisation strategy was insufficient to protect the population in the foreseeable future. In addition, vaccines that provide protection against additional serogroups are now available. This paper describes to what extent the current strategy to prevent IMD in the Netherlands is still sufficient, taking into account the burden of disease and the latest scientific knowledge related to IMD and its prevention. In particular, primary MenC immunisation seems not to provide long-term protection, indicating a risk of possible recurrence of the disease. This can be combatted by implementing a MenC or MenACWY adolescent booster vaccine. Additional health benefits can be achieved by replacing the primary MenC vaccine with a MenACWY vaccine. By implementing a recently licensed MenB vaccine for infants in the NIP, the greatest burden of disease would be targeted. This paper shows that optimisation of the immunisation strategy targeting IMD in the Netherlands should be considered, and contributes to creating awareness of prevention optimisation in other countries. © 2015 by Kerman University of Medical Sciences.

  12. Robust optimisation-based microgrid scheduling with islanding constraints

    DOE PAGES

    Liu, Guodong; Starke, Michael; Xiao, Bailu; ...

    2017-02-17

    This paper proposes a robust optimization based optimal scheduling model for microgrid operation considering constraints of islanding capability. Our objective is to minimize the total operation cost, including generation cost and spinning reserve cost of local resources as well as purchasing cost of energy from the main grid. In order to ensure the resiliency of a microgrid and improve the reliability of the local electricity supply, the microgrid is required to maintain enough spinning reserve (both up and down) to meet local demand and accommodate local renewable generation when the supply of power from the main grid is interrupted suddenly, i.e., when the microgrid transitions from grid-connected into islanded mode. Prevailing operational uncertainties in renewable energy resources and load are considered and captured using a robust optimization method. With a proper robustness level, the solution of the proposed scheduling model ensures successful islanding of the microgrid with minimum load curtailment and guarantees robustness against all possible realizations of the modeled operational uncertainties. Numerical simulations on a microgrid consisting of a wind turbine, a PV panel, a fuel cell, a micro-turbine, a diesel generator and a battery demonstrate the effectiveness of the proposed scheduling model.
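    The islanding requirement can be illustrated with a minimal feasibility check, assuming a single aggregated local unit: in every hour, the up-reserve of local generation must cover the scheduled grid import and the down-reserve must cover the scheduled export, so a sudden loss of the main grid can be absorbed. The function and all figures are hypothetical, not from the paper's optimisation model:

    ```python
    def islanding_feasible(hours, gen_cap, gen_min=0.0):
        """hours: list of (local_gen, grid_import) in MW; negative import means export.
        True if local spinning reserve can replace a suddenly lost grid connection
        in every hour (up-reserve covers imports, down-reserve covers exports)."""
        for local_gen, grid_import in hours:
            up_reserve = gen_cap - local_gen    # headroom to ramp local units up
            down_reserve = local_gen - gen_min  # room to ramp local units down
            if max(grid_import, 0.0) > up_reserve:
                return False
            if max(-grid_import, 0.0) > down_reserve:
                return False
        return True

    # Three-hour toy schedule for a 5 MW aggregated local unit:
    # (local generation, grid import); the last hour exports 2 MW.
    print(islanding_feasible([(2.0, 1.5), (3.0, 1.0), (4.0, -2.0)], gen_cap=5.0))  # True
    ```

    In the paper's full model this constraint would hold for every realisation of the uncertain load and renewable output, not just a single schedule.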

  13. A fast stir bar sorptive extraction method for the analysis of geosmin and 2-methylisoborneol in source and drinking water.

    PubMed

    Bauld, T; Teasdale, P; Stratton, H; Uwins, H

    2007-01-01

    The presence of unpleasant taste and odour in drinking water is an ongoing aesthetic concern for water providers worldwide. The need for a sensitive and robust method capable of analysis in both natural and treated waters is essential for early detection of taste and odour events. The purpose of this study was to develop and optimise a fast stir bar sorptive extraction (SBSE) method for the analysis of geosmin and 2-methylisoborneol (MIB) in both natural water and drinking water. Limits of detection with the optimised fast method (45 min extraction time at 60 °C using 24 µL stir bars) were 1.1 ng/L for geosmin and 4.2 ng/L for MIB. Relative standard deviations at the detection limits were under 17% for both compounds. Multiple stir bars can be used to decrease the detection limits further. The use of 25% NaCl and 5% methanol sample modifiers decreased the experimental recoveries. Likewise, addition of 1 mg/L and 1.5 mg/L NaOCl decreased the recoveries, and this effect was not reversed by addition of 10% thiosulphate. The optimised method was used to measure geosmin concentrations in treated and untreated drinking water. MIB concentrations were below the detection limits in these waters.

  14. Demonstrating the suitability of genetic algorithms for driving microbial ecosystems in desirable directions.

    PubMed

    Vandecasteele, Frederik P J; Hess, Thomas F; Crawford, Ronald L

    2007-07-01

    The functioning of natural microbial ecosystems is determined by biotic interactions, which are in turn influenced by abiotic environmental conditions. Direct experimental manipulation of such conditions can be used to purposefully drive ecosystems toward exhibiting desirable functions. When a set of environmental conditions can be manipulated to be present at a discrete number of levels, finding the right combination of conditions to obtain the optimal desired effect becomes a typical combinatorial optimisation problem. Genetic algorithms are a class of robust and flexible search and optimisation techniques from the field of computer science that may be very suitable for such a task. To verify this idea, datasets containing growth levels of the total microbial community of four different natural microbial ecosystems in response to all possible combinations of a set of five chemical supplements were obtained. Subsequently, the ability of a genetic algorithm to search this parameter space for combinations of supplements driving the microbial communities to high levels of growth was compared to that of a random search, a local search, and a hill-climbing algorithm, three intuitive alternative optimisation approaches. The results indicate that a genetic algorithm is very suitable for driving microbial ecosystems in desirable directions, which opens opportunities for both fundamental ecological research and industrial applications.
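    The combinatorial search described above can be sketched with a minimal genetic algorithm over binary supplement combinations. The fitness function below is a hypothetical stand-in for the measured growth level of a microbial community (with interaction effects between supplements), not data from the study:

    ```python
    import random

    random.seed(42)

    N_SUPPLEMENTS = 5  # each individual is a 5-bit on/off combination of supplements

    def fitness(combo):
        # Hypothetical growth response: pairwise interactions between supplements
        # 0/2 and 1/3 stand in for a measured community growth level.
        return sum(combo) + 2 * (combo[0] and combo[2]) + 3 * (combo[1] and combo[3])

    def crossover(a, b):
        cut = random.randrange(1, N_SUPPLEMENTS)  # one-point crossover
        return a[:cut] + b[cut:]

    def mutate(combo, rate=0.1):
        return [bit ^ (random.random() < rate) for bit in combo]  # flip bits

    def genetic_algorithm(pop_size=8, generations=20):
        pop = [[random.randint(0, 1) for _ in range(N_SUPPLEMENTS)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            elite = pop[: pop_size // 2]  # truncation selection
            children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                        for _ in range(pop_size - len(elite))]
            pop = elite + children
        return max(pop, key=fitness)

    best = genetic_algorithm()
    print(best, fitness(best))
    ```

    With only five binary supplements the 32 combinations could be enumerated outright; the GA's value, as the abstract argues, shows at larger numbers of conditions where exhaustive testing of a live ecosystem is infeasible.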

  15. Comparing approaches for using climate projections in assessing water resources investments for systems with multiple stakeholder groups

    NASA Astrophysics Data System (ADS)

    Hurford, Anthony; Harou, Julien

    2015-04-01

    Climate change has challenged conventional methods of planning water resources infrastructure investment, which rely on the stationarity of time-series data. It is not clear how best to use projections of future climatic conditions. Many-objective simulation-optimisation and trade-off analysis using evolutionary algorithms has been proposed as an approach to addressing complex planning problems with multiple conflicting objectives. The search for promising assets and policies can be carried out across a range of climate projections, to identify the configurations of infrastructure investment shown by model simulation to be robust under diverse future conditions. Climate projections can be used in different ways within a simulation model to represent the range of possible future conditions and to understand how optimal investments vary according to the different hydrological conditions. We compare two approaches: optimising over an ensemble of different 20-year flow and PET time-series projections, and optimising separately for individual future scenarios built synthetically from the original ensemble. Comparing the trade-off curves and surfaces generated by the two approaches helps understand the limits and benefits of optimising under different sets of conditions. The comparison is made for the Tana Basin in Kenya, where climate change combined with multiple conflicting objectives of water management and infrastructure investment makes decision-making particularly challenging.

  16. The bloody mess of red blood cell transfusion.

    PubMed

    Chandra, Susilo; Kulkarni, Hrishikesh; Westphal, Martin

    2017-12-28

    Red blood cell (RBC) transfusion might be life-saving in settings with acute blood loss, especially uncontrolled haemorrhagic shock. However, there appears to be a catch-22 situation reflected by the facts that preoperative anaemia represents an independent risk factor for postoperative morbidity and mortality, and that RBC transfusion might also contribute to adverse clinical outcomes. This dilemma is further complicated by the difficulty to define the "best" transfusion trigger and strategy. Since one size does obviously not fit all, a personalised approach is merited. Attempts should thus be made to critically reflect on the pros and cons of RBC transfusion in each individual patient. Patient blood management concepts including preoperative, intraoperative and postoperative optimisation strategies involving the intensive care unit are warranted and are likely to provide benefits for the patients and the healthcare system. In this context, it is important to consider that "simply" increasing the haemoglobin content, and in proportion oxygen delivery, may not necessarily contribute to a better outcome but potentially the contrary in the long term. The difficulty lies in identification of the patients who might eventually profit from RBC transfusion and to determine in whom a transfusion might be withheld without inducing harm. More robust clinical data providing long-term outcome data are needed to better understand in which patients RBC transfusion might be life-saving vs life-limiting.

  17. The Need for European Surveillance of CDI.

    PubMed

    Wiuff, Camilla; Banks, A-Lan; Fitzpatrick, Fidelma; Cottom, Laura

    2018-01-01

    Since the turn of the millennium, the epidemiology of Clostridium difficile infection (CDI) has continued to challenge. Over the last decade there has been a growing awareness that improvements to surveillance are needed. The increasing rate of CDI and the emergence of ribotype 027 precipitated the implementation of mandatory national surveillance of CDI in the UK. Changes in clinical presentation, severity of disease, descriptions of new risk factors and the occurrence of outbreaks all emphasised the importance of early diagnosis and surveillance. However, a lack of consensus on case definitions, clinical guidelines and optimal laboratory diagnostics across Europe has led to the underestimation of CDI and impeded comparison between countries. These inconsistencies have prevented the true burden of disease from being appreciated. Acceptance that a multi-country surveillance programme and optimised diagnostic strategies are required, not only to detect and control CDI in Europe but for a better understanding of the epidemiology, has built the foundations for a more robust, unified surveillance. The concerted efforts of the European Centre for Disease Prevention and Control (ECDC) CDI networks have led to the development of an over-arching long-term CDI surveillance strategy for 2014-2020. Fulfilment of the ECDC priorities and targets will no doubt be challenging and will require significant investment; however, the hope is that both a national and a Europe-wide picture of CDI will finally be realised.

  18. Gender differences in visuospatial planning: an eye movements study.

    PubMed

    Cazzato, Valentina; Basso, Demis; Cutini, Simone; Bisiacchi, Patrizia

    2010-01-20

    Gender studies report a male advantage in several visuospatial abilities. Only a few studies, however, have evaluated differences in visuospatial planning behaviour with regard to gender. This study was aimed at exploring whether gender may affect the choice of cognitive strategies in a visuospatial planning task and whether oculomotor measures could assist in disentangling the cognitive processes involved. A computerised task based on the travelling salesperson problem paradigm, the Maps test, was used to investigate these issues. Participants were required to optimise the time and space of a path travelling among a set of sub-goals in a spatially constrained environment. Behavioural results suggest that there are no gender differences in the initial visual processing of the stimuli, but rather during the execution of the plan, with males showing a shorter execution time and higher path length optimisation than females. Males often changed heuristics during execution, while females seemed to prefer a constant strategy. Moreover, better performance in behavioural and oculomotor measures seemed to suggest that males are more able than females in either the optimisation of spatial features or the realisation of the planned scheme. Despite inconclusive findings, the results support previous research and provide insight into the level of cognitive processing involved in navigation and planning tasks, with regard to the influence of gender.

  19. An improved PSO-SVM model for online recognition defects in eddy current testing

    NASA Astrophysics Data System (ADS)

    Liu, Baoling; Hou, Dibo; Huang, Pingjie; Liu, Banteng; Tang, Huayi; Zhang, Wubo; Chen, Peihua; Zhang, Guangxin

    2013-12-01

    Accurate and rapid recognition of defects is essential for the structural integrity and health monitoring of in-service devices using eddy current (EC) non-destructive testing. This paper introduces a novel model-free method that includes three main modules: a signal pre-processing module, a classifier module and an optimisation module. In the signal pre-processing module, a two-stage differential structure is proposed to suppress the lift-off fluctuation that can contaminate the EC signal. In the classifier module, a multi-class support vector machine (SVM) based on the one-against-one strategy is utilised for its good accuracy. In the optimisation module, the optimal parameters of the classifier are obtained by an improved particle swarm optimisation (IPSO) algorithm. The proposed IPSO technique improves the convergence performance of the primary PSO through the following strategies: nonlinear processing of the inertia weight, and the introduction of black hole and simulated annealing models with extremum disturbance. The good generalisation ability of the IPSO-SVM model has been validated by adding additional specimens to the testing set. Experiments show that the proposed algorithm can achieve higher recognition accuracy and efficiency than other well-known classifiers, and the superiority is more obvious with a smaller training set, which contributes to online application.
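    As an illustration of one IPSO ingredient named above, the sketch below implements a particle swarm with a nonlinearly (quadratically) decaying inertia weight. The quadratic objective is a hypothetical stand-in for the SVM cross-validation error a real IPSO-SVM would evaluate at each candidate (C, gamma); the paper's black hole and simulated annealing modifications are omitted:

    ```python
    import random

    random.seed(0)

    def objective(c, gamma):
        # Stand-in for SVM cross-validation error over (C, gamma);
        # a real IPSO-SVM would train and score a classifier here.
        return (c - 3.0) ** 2 + (gamma - 0.5) ** 2

    def pso(n_particles=12, iters=60, bounds=((0.1, 10.0), (0.01, 2.0))):
        dim = len(bounds)
        pos = [[random.uniform(*bounds[d]) for d in range(dim)]
               for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]
        gbest = min(pbest, key=lambda p: objective(*p))
        for t in range(iters):
            # Nonlinear (quadratic) decay of the inertia weight from 0.9 to 0.4,
            # mimicking the "nonlinear processing of inertia weight" strategy.
            w = 0.4 + 0.5 * (1 - t / iters) ** 2
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    vel[i][d] = (w * vel[i][d]
                                 + 2.0 * r1 * (pbest[i][d] - pos[i][d])
                                 + 2.0 * r2 * (gbest[d] - pos[i][d]))
                    # keep particles inside the search bounds
                    pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                    bounds[d][1])
                if objective(*pos[i]) < objective(*pbest[i]):
                    pbest[i] = pos[i][:]
            gbest = min(pbest + [gbest], key=lambda p: objective(*p))
        return gbest

    c_opt, gamma_opt = pso()
    print(round(c_opt, 2), round(gamma_opt, 2))
    ```

    On this smooth toy objective the swarm closes in on the minimum at (3.0, 0.5); in the SVM setting each objective call is expensive, which is why convergence-accelerating tweaks like the nonlinear inertia weight matter.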

  20. Phenotype heterogeneity in cancer cell populations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Almeida, Luis; Chisholm, Rebecca; Clairambault, Jean

    2016-06-08

    Phenotype heterogeneity in cancer cell populations, be it of genetic, epigenetic or stochastic origin, has been identified as a main source of resistance to drug treatments and a major source of therapeutic failures in cancers. The molecular mechanisms of drug resistance are partly understood at the single cell level (e.g., overexpression of ABC transporters or of detoxication enzymes), but poorly predictable in tumours, where they are hypothesised to rely on heterogeneity at the cell population scale, which is thus the right level to describe cancer growth and optimise its control by therapeutic strategies in the clinic. We review a few results from the biological literature on the subject, and from mathematical models that have been published to predict and control evolution towards drug resistance in cancer cell populations. We propose, based on the latter, optimisation strategies of combined treatments to limit emergence of drug resistance to cytotoxic drugs in cancer cell populations, in the monoclonal situation, which, limited as it is, still retains consistent features of cell population heterogeneity. The polyclonal situation, that may be understood as "bet hedging" of the tumour, thus protecting itself from different sources of drug insults, may lie beyond such strategies and will need further developments. In the monoclonal situation, we have designed an optimised therapeutic strategy relying on a scheduled combination of cytotoxic and cytostatic treatments that can be adapted to different situations of cancer treatments.
    Finally, we review arguments for biological theoretical frameworks proposed at different time and development scales, the so-called atavistic model (a diachronic view relying on Darwinian genotype selection in the course of billions of years) and the Waddington-like epigenetic landscape endowed with evolutionary quasi-potential (a synchronic view relying on Lamarckian phenotype instruction of a given genome by reversible mechanisms), to represent evolution towards heterogeneity, possibly polyclonal, in cancer cell populations and propose innovative directions for therapeutic strategies based on such frameworks.

  1. Phenotype heterogeneity in cancer cell populations

    NASA Astrophysics Data System (ADS)

    Almeida, Luis; Chisholm, Rebecca; Clairambault, Jean; Escargueil, Alexandre; Lorenzi, Tommaso; Lorz, Alexander; Trélat, Emmanuel

    2016-06-01

    Phenotype heterogeneity in cancer cell populations, be it of genetic, epigenetic or stochastic origin, has been identified as a main source of resistance to drug treatments and a major source of therapeutic failures in cancers. The molecular mechanisms of drug resistance are partly understood at the single cell level (e.g., overexpression of ABC transporters or of detoxication enzymes), but poorly predictable in tumours, where they are hypothesised to rely on heterogeneity at the cell population scale, which is thus the right level to describe cancer growth and optimise its control by therapeutic strategies in the clinic. We review a few results from the biological literature on the subject, and from mathematical models that have been published to predict and control evolution towards drug resistance in cancer cell populations. We propose, based on the latter, optimisation strategies of combined treatments to limit emergence of drug resistance to cytotoxic drugs in cancer cell populations, in the monoclonal situation, which limited as it is still retains consistent features of cell population heterogeneity. The polyclonal situation, that may be understood as "bet hedging" of the tumour, thus protecting itself from different sources of drug insults, may lie beyond such strategies and will need further developments. In the monoclonal situation, we have designed an optimised therapeutic strategy relying on a scheduled combination of cytotoxic and cytostatic treatments that can be adapted to different situations of cancer treatments. 
    Finally, we review arguments for biological theoretical frameworks proposed at different time and development scales, the so-called atavistic model (a diachronic view relying on Darwinian genotype selection in the course of billions of years) and the Waddington-like epigenetic landscape endowed with evolutionary quasi-potential (a synchronic view relying on Lamarckian phenotype instruction of a given genome by reversible mechanisms), to represent evolution towards heterogeneity, possibly polyclonal, in cancer cell populations and propose innovative directions for therapeutic strategies based on such frameworks.

  2. [Inappropriate ICD therapies: All problems solved with MADIT-RIT?].

    PubMed

    Kolb, Christof

    2015-06-01

    The MADIT-RIT study represents a major, recently published trial in implantable cardioverter-defibrillator (ICD) therapy. It highlights that different programming strategies (high rate cut-off or delayed therapy versus conventional programming) reduce inappropriate ICD therapies, leave syncope rates unaltered and can improve patient survival. The study should motivate cardiologists and electrophysiologists to reconsider their individual programming strategies. However, as the study largely represents patients with ischemic or dilated cardiomyopathy supplied with a dual-chamber or cardiac resynchronisation therapy ICD for primary prevention of sudden cardiac death, the results may not be easily transferable to other entities or other device types. Despite the success of the MADIT-RIT study, efforts still need to be taken to further optimise device algorithms to avert inappropriate therapies. Optimised ICD therapy also includes the avoidance of unnecessary ICD shocks as well as the treatment of all aspects of the underlying cardiac disease.

  3. Incorporating palaeoclimate data into water security planning and decision making - a case study from southeast Queensland, Australia

    NASA Astrophysics Data System (ADS)

    Kiem, Anthony; Vance, Tessa; Tozer, Carly; Roberts, Jason

    2017-04-01

    Decision makers in the water sector need to deal with existing hydroclimatic variability and uncertainty about future changes to climate and catchment conditions. Identifying solutions for hydroclimatic risk adaptation strategies that are both optimal and robust in the presence of variability and uncertainty presents a difficult challenge. A major reason for this challenge is the fact that the instrumental record in Australia is short (~60-130 years) and fails to encompass enough climate variability to allow the calculation of robust statistics around the baseline risk of extreme events (e.g. multi-year droughts, decadal periods with clustering of major flood events). This climate variability is documented pre-1900 in palaeoclimate records from sources such as corals, tree rings, and freshwater and marine sediments. Despite being remote from Queensland, a high-resolution and highly correlated palaeoclimate record from the Law Dome ice cores in Antarctica (Vance et al. 2015) is also now available and has identified eight mega-droughts (lasting from 5-39 years) during 1000-2009 AD. Most importantly, the palaeoclimate information confirms that the post-1900 instrumental period (i.e. the period on which all water resources infrastructure, policy, operating rules and strategies are based) does not capture the full range of variability that has occurred. Other work also clearly shows that, out to 2050 at least, impacts associated with natural variability significantly exceed even the worst-case climate change scenarios (i.e. those obtained from Global Climate Models run under the highest emission scenarios). This presentation will demonstrate how the Law Dome ice cores from Antarctica have been used to produce a highly accurate, 1000-year, annual- and seasonal-resolution hydroclimate reconstruction (i.e. precipitation and streamflow) for the southeast Queensland region of Australia.
We will then show how the palaeoclimate data has been incorporated into the South East Queensland Regional Stochastic Model (SEQRSM) of catchment hydrology to (a) demonstrate the utility of a palaeoclimate proxy approach in producing more robust estimates of hydroclimatic risk under climate variability and change; (b) gain improved insights into the characteristics (e.g. location, duration, frequency, magnitude, spatial extent, sequencing) of hydroclimate extremes for water security planning and (c) deliver optimised solutions for hydroclimatic risk adaptation strategies to water managers (e.g. optimal and sustainable supply of water to meet current and future urban requirements and also to nearby catchments to support irrigation for dairy, vegetable and forage crops).

  4. Design of distributed PID-type dynamic matrix controller for fractional-order systems

    NASA Astrophysics Data System (ADS)

    Wang, Dawei; Zhang, Ridong

    2018-01-01

    With increasingly stringent requirements on product quality and safe operation in industrial production, complex large-scale processes are difficult to describe with integer-order differential equations; fractional differential equations, however, may precisely represent the intrinsic characteristics of such systems. In this paper, a distributed PID-type dynamic matrix control method for fractional-order systems is proposed. First, a high-order integer-order approximate model is obtained by utilising the Oustaloup method. Then, the step response model vectors of the plant are obtained on the basis of the high-order model, and the online optimisation of the multivariable process is decomposed into the optimisation of each small-scale subsystem, regarded as a sub-plant controlled in the distributed framework. Furthermore, the PID operator is introduced into the performance index of each subsystem and a fractional-order PID-type dynamic matrix controller is designed based on a Nash optimisation strategy. Information exchange among the subsystems is realised through the distributed control structure so as to complete the optimisation task of the whole large-scale system. Finally, the control performance of the designed controller is verified by an example.
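The first step named above, the Oustaloup method, has a standard closed form: a fractional operator s^α is approximated over a frequency band [ωb, ωh] by 2N+1 interlaced real poles and zeros. The sketch below uses the textbook pole/zero formulas with illustrative band and order, not values from the paper, and checks the frequency response against the ideal (jω)^α inside the band.

```python
import numpy as np

# Oustaloup recursive approximation of s^alpha over [wb, wh]:
#   zeros: w'_k = wb*(wh/wb)^((k+N+0.5*(1-alpha))/(2N+1)),
#   poles: w_k  = wb*(wh/wb)^((k+N+0.5*(1+alpha))/(2N+1)),  k = -N..N,  gain wh^alpha.
# Band and order below are illustrative, not taken from the paper.

def oustaloup_response(alpha, w, wb=1e-2, wh=1e2, N=4):
    """Frequency response of the (2N+1)-section Oustaloup filter at frequencies w."""
    k = np.arange(-N, N + 1)
    wz = wb * (wh / wb) ** ((k + N + 0.5 * (1 - alpha)) / (2 * N + 1))  # zeros
    wp = wb * (wh / wb) ** ((k + N + 0.5 * (1 + alpha)) / (2 * N + 1))  # poles
    jw = 1j * np.asarray(w, dtype=complex)
    H = wh ** alpha * np.ones_like(jw)
    for z, p in zip(wz, wp):
        H *= (jw + z) / (jw + p)
    return H

alpha = 0.5
w = np.logspace(-1, 1, 9)            # evaluate well inside the fitted band
H = oustaloup_response(alpha, w)
exact = (1j * w) ** alpha            # ideal fractional differentiator
err = np.max(np.abs(H - exact) / np.abs(exact))
print(err)                           # small relative error inside the band
```

The resulting rational transfer function is integer-order, so its step response can be sampled directly to build the dynamic-matrix model the controller needs.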

  5. The rheology, microstructure and sensory characteristics of a gluten-free bread formulation enhanced with orange pomace.

    PubMed

    O'Shea, Norah; Doran, Linda; Auty, Mark; Arendt, Elke; Gallagher, Eimear

    2013-12-01

    The present manuscript studied a previously optimised gluten-free bread formulation containing 5.5% orange pomace (OP) in relation to the batter characteristics (i.e. pre-baking), microstructure (of the flours, batter and bread) and sensory characteristics of the bread. Rheology, RVA and mixolab results illustrated that orange pomace improved the robustness of the gluten-free batter and decreased the occurrence of starch gelatinisation. This was confirmed from the confocal laser scanning microscopy (CLSM) images, which showed potato starch granules to be more expanded in the control batter when compared to the sample containing orange pomace. Starch granules were also observed to be more enlarged and swollen in the CLSM bread images, suggesting a higher level of gelatinisation occurred in the control sample. Sensory analysis was carried out on the optimised and control bread; panellists scored the flavour, crumb appearance and overall acceptability of the OP-containing breads comparable to the control.

  6. Optimising Ambient Setting Bayer Derived Fly Ash Geopolymers

    PubMed Central

    Jamieson, Evan; Kealley, Catherine S.; van Riessen, Arie; Hart, Robert D.

    2016-01-01

    The Bayer process utilises high concentrations of caustic and elevated temperature to liberate alumina from bauxite, for the production of aluminium and other chemicals. Within Australia, this process results in 40 million tonnes of mineral residues (Red mud) each year. Over the same period, the energy production sector will produce 14 million tonnes of coal combustion products (Fly ash). Both industrial residues require impoundment storage, yet combining some of these components can produce geopolymers, an alternative to cement. Geopolymers derived from Bayer liquor and fly ash have been made successfully with a compressive strength in excess of 40 MPa after oven curing. However, any product from these industries would require large volume applications with robust operational conditions to maximise utilisation. To facilitate potential unconfined large-scale production, Bayer derived fly ash geopolymers have been optimised to achieve ambient curing. Fly ash from two different power stations have been successfully trialled showing the versatility of the Bayer liquor-ash combination for making geopolymers. PMID:28773513

  8. Global optimisation methods for poroelastic material characterisation using a clamped sample in a Kundt tube setup

    NASA Astrophysics Data System (ADS)

    Vanhuyse, Johan; Deckers, Elke; Jonckheere, Stijn; Pluymers, Bert; Desmet, Wim

    2016-02-01

    The Biot theory is commonly used for the simulation of the vibro-acoustic behaviour of poroelastic materials. However, it relies on a number of material parameters. These can be hard to characterise and require dedicated measurement setups, yielding a time-consuming and costly characterisation. This paper presents a characterisation method which is able to identify all material parameters using only an impedance tube. The method relies on the assumptions that the sample is clamped within the tube, that the shear wave is excited and that the acoustic field is no longer one-dimensional. This paper numerically shows the potential of the developed method. To this end, it performs a sensitivity analysis of the quantification parameters, i.e. reflection coefficients and relative pressures, and a parameter estimation using global optimisation methods. A 3-step procedure is developed and validated. It is shown that even in the presence of numerically simulated noise this procedure leads to a robust parameter estimation.
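The parameter-estimation step can be illustrated generically: fit model parameters to numerically noisy "measurements" with a global optimiser. The forward model below is a simple damped oscillation standing in for the Biot model of the clamped sample; all names, bounds and values are invented for illustration.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Generic global-optimisation parameter estimation with simulated noise.
# The forward model is a placeholder, not the Biot model of the paper.

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 200)

def forward(params, t):
    A, d, w = params                     # amplitude, damping, angular frequency
    return A * np.exp(-d * t) * np.cos(w * t)

true = (2.0, 0.5, 5.0)
measured = forward(true, t) + 0.01 * rng.standard_normal(t.size)  # simulated noise

def cost(params):
    return np.sum((forward(params, t) - measured) ** 2)

bounds = [(0.1, 5.0), (0.01, 2.0), (1.0, 20.0)]   # global search box per parameter
result = differential_evolution(cost, bounds, seed=1, tol=1e-8)
print(result.x)                          # close to the true (2.0, 0.5, 5.0) despite noise
```

Differential evolution needs only parameter bounds, no gradients or starting guess, which is what makes such global methods attractive when the sensitivity of the quantification parameters varies strongly across the parameter space.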

  9. Catalysis with Gold Complexes Immobilised on Carbon Nanotubes by π-π Stacking Interactions: Heterogeneous Catalysis versus the Boomerang Effect.

    PubMed

    Vriamont, Charles; Devillers, Michel; Riant, Olivier; Hermans, Sophie

    2013-09-02

    A new pyrene-tagged gold(I) complex has been synthesised and tested as a homogeneous catalyst. First, a simple 1,6-enyne was chosen as a model substrate for cyclisation, using different solvents to optimise the reaction conditions. The non-covalent immobilisation of our pyrene-tagged gold complex onto multi-walled carbon nanotubes through π-π stacking interactions was then explored to obtain a supported homogeneous catalyst. The heterogenised catalyst and its homogeneous counterpart exhibited similar activity in a range of enyne cyclisation reactions. Bearing in mind that π-π interactions are affected by temperature and solvent polarity, the reuse and robustness of the supported homogeneous catalyst were tested to explore the scope and limitations of its recyclability. Under the optimised conditions, recyclability was observed by using the concept of the boomerang effect. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Optimisation of strain selection in evolutionary continuous culture

    NASA Astrophysics Data System (ADS)

    Bayen, T.; Mairet, F.

    2017-12-01

    In this work, we study a minimal time control problem for a perfectly mixed continuous culture with n ≥ 2 species and one limiting resource. The model that we consider includes a mutation factor for the microorganisms. Our aim is to provide optimal feedback control laws to optimise the selection of the species of interest. Thanks to Pontryagin's Principle, we derive optimality conditions on the optimal controls and introduce a sub-optimal control law based on a most rapid approach to a singular arc that depends on the initial condition. Using adaptive dynamics theory, we also study a simplified version of this model which allows us to introduce a near-optimal strategy.
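The "most rapid approach" idea can be illustrated on a stripped-down chemostat: drive the substrate concentration to a target value s* by bang-bang dilution so that the favoured species outgrows its competitor. This sketch omits mutation and uses invented Monod parameters; it is not the paper's model.

```python
import numpy as np

# Two-species chemostat with bang-bang dilution steering s towards s*.
# All parameter values are illustrative; mutation is omitted for brevity.

def mu(s, mu_max, K):
    return mu_max * s / (K + s)          # Monod growth rate

dt, s_in, s_star = 0.01, 10.0, 1.0       # time step, feed concentration, target
D_min, D_max = 0.0, 1.5                  # admissible dilution rates
x = np.array([0.5, 0.5])                 # biomasses: species 0 is the one selected for
mu_max = np.array([1.2, 0.8])            # species 0 grows faster at any substrate level
K = np.array([1.0, 1.0])
s = 5.0

for _ in range(5000):                    # simulate 50 time units
    # Most rapid approach: high D replenishes substrate, low D lets it be consumed
    D = D_min if s > s_star else D_max
    growth = mu(s, mu_max, K)
    s += dt * (D * (s_in - s) - growth @ x)
    x += dt * (growth - D) * x

fraction = x[0] / x.sum()
print(fraction)                          # species of interest dominates the culture
```

Once the substrate chatters around s*, the trajectory mimics the singular arc: the per-capita growth advantage of species 0 is applied continuously, so its fraction tends to one.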

  11. BIANCA (Brain Intensity AbNormality Classification Algorithm): A new tool for automated segmentation of white matter hyperintensities.

    PubMed

    Griffanti, Ludovica; Zamboni, Giovanna; Khan, Aamira; Li, Linxin; Bonifacio, Guendalina; Sundaresan, Vaanathi; Schulz, Ursula G; Kuker, Wilhelm; Battaglini, Marco; Rothwell, Peter M; Jenkinson, Mark

    2016-11-01

    Reliable quantification of white matter hyperintensities of presumed vascular origin (WMHs) is increasingly needed, given the presence of these MRI findings in patients with several neurological and vascular disorders, as well as in healthy elderly subjects. We present BIANCA (Brain Intensity AbNormality Classification Algorithm), a fully automated, supervised method for WMH detection, based on the k-nearest neighbour (k-NN) algorithm. Relative to previous k-NN based segmentation methods, BIANCA offers different options for weighting the spatial information, for local spatial intensity averaging, and for the choice of the number and location of the training points. BIANCA is multimodal and highly flexible, so that users can adapt the tool to their protocol and specific needs. We optimised and validated BIANCA on two datasets with different MRI protocols and patient populations (a "predominantly neurodegenerative" and a "predominantly vascular" cohort). BIANCA was first optimised on a subset of images for each dataset in terms of overlap and volumetric agreement with a manually segmented WMH mask. The correlation between the volumes extracted with BIANCA (using the optimised set of options), the volumes extracted from the manual masks, and visual ratings showed that BIANCA is a valid alternative to manual segmentation. The optimised set of options was then applied to the whole cohorts and the resulting WMH volume estimates showed good correlations with visual ratings and with age. Finally, we performed a reproducibility test to evaluate the robustness of BIANCA, and compared BIANCA's performance against existing methods. Our findings suggest that BIANCA, which will be freely available as part of the FSL package, is a reliable method for automated WMH segmentation in large cross-sectional cohort studies. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
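The core mechanism, k-NN classification of voxels on feature vectors that combine intensity with spatially weighted coordinates, can be sketched with synthetic data. This is a numpy-only illustration of the idea, not BIANCA's implementation; the features, cluster geometry and weighting factor are invented.

```python
import numpy as np

# k-NN voxel classification with a spatial weighting factor (illustrative sketch).
rng = np.random.default_rng(42)

def make_voxels(n, lesion):
    """Synthetic voxels as (intensity, x, y); 'lesions' are bright and clustered."""
    if lesion:
        inten = rng.normal(1.5, 0.2, n)
        xy = rng.normal(0.7, 0.1, (n, 2))
    else:
        inten = rng.normal(1.0, 0.2, n)
        xy = rng.uniform(0.0, 1.0, (n, 2))
    return np.column_stack([inten, xy])

sw = 2.0                                     # spatial weighting: how much location counts
train = np.vstack([make_voxels(200, True), make_voxels(200, False)])
labels = np.array([1] * 200 + [0] * 200)
train[:, 1:] *= sw                           # scale spatial features before distances

def knn_predict(voxel, k=9):
    v = voxel.copy()
    v[1:] *= sw
    d = np.linalg.norm(train - v, axis=1)    # Euclidean distance in weighted feature space
    nearest = labels[np.argsort(d)[:k]]
    return int(nearest.sum() > k / 2)        # majority vote among the k neighbours

test = np.vstack([make_voxels(50, True), make_voxels(50, False)])
truth = np.array([1] * 50 + [0] * 50)
pred = np.array([knn_predict(v) for v in test])
acc = (pred == truth).mean()
print(acc)                                   # high accuracy on the synthetic voxels
```

Increasing `sw` makes location count more than intensity in the vote, which is the kind of trade-off the spatial-weighting option exposes.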

  12. A support vector machine for predicting defibrillation outcomes from waveform metrics.

    PubMed

    Howe, Andrew; Escalona, Omar J; Di Maio, Rebecca; Massot, Bertrand; Cromie, Nick A; Darragh, Karen M; Adgey, Jennifer; McEneaney, David J

    2014-03-01

    Algorithms to predict shock success based on VF waveform metrics could significantly enhance resuscitation by optimising the timing of defibrillation. Robust methods of predicting defibrillation success in VF cardiac arrest patients were investigated using a support vector machine (SVM) optimisation approach. Frequency-domain (AMSA, dominant frequency and median frequency) and time-domain (slope and RMS amplitude) VF waveform metrics were calculated in a 4.1 s window prior to defibrillation. Conventional prediction test validation of each waveform parameter was conducted, with AUC>0.6 as the criterion for inclusion as a corroborative attribute processed by the SVM classification model. The latter used a Gaussian radial-basis-function (RBF) kernel and the error penalty factor C was fixed at 1. A two-fold cross-validation resampling technique was employed. A total of 41 patients had 115 defibrillation instances. The AMSA, slope and RMS waveform metrics passed test validation with AUC>0.6 for predicting termination of VF and return to organised rhythm. Predictive accuracy of the optimised SVM design for termination of VF was 81.9% (± 1.24 SD); positive and negative predictivity were respectively 84.3% (± 1.98 SD) and 77.4% (± 1.24 SD); sensitivity and specificity were 87.6% (± 2.69 SD) and 71.6% (± 9.38 SD) respectively. AMSA, slope and RMS were the best frequency- and time-domain VF waveform predictors of termination of VF according to the test validity assessment. This a priori knowledge can be used in a simplified optimised SVM design that combines the predictive attributes of these VF waveform metrics for improved prediction accuracy and generalisation performance, without requiring the definition of any threshold value on the waveform metrics. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
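The waveform metrics named above can be computed on a synthetic VF-like signal. The definitions below follow the common forms in the defibrillation literature (e.g. AMSA as the frequency-weighted sum of FFT amplitudes); the exact preprocessing, band limits and sampling rate of the study are not reproduced and are assumptions here.

```python
import numpy as np

# Waveform metrics on a synthetic VF-like signal (illustrative preprocessing).
fs = 250.0                                   # sampling rate, Hz (assumed)
t = np.arange(0, 4.1, 1 / fs)                # analysis window before the shock
rng = np.random.default_rng(7)
ecg = 0.8 * np.sin(2 * np.pi * 5.2 * t) + 0.1 * rng.standard_normal(t.size)

# Time-domain metrics
rms = np.sqrt(np.mean(ecg ** 2))             # RMS amplitude
slope = np.mean(np.abs(np.diff(ecg)) * fs)   # mean absolute slope, amplitude units/s

# Frequency-domain metrics
spec = np.abs(np.fft.rfft(ecg))
freqs = np.fft.rfftfreq(ecg.size, 1 / fs)
band = (freqs >= 2) & (freqs <= 48)          # typical VF analysis band (assumed)
amsa = np.sum(spec[band] * freqs[band])      # amplitude spectrum area
dominant = freqs[band][np.argmax(spec[band])]
print(dominant)                              # near the 5.2 Hz of the synthetic signal
```

In the study's pipeline, vectors of such metrics, rather than single thresholds on them, are what the RBF-kernel SVM classifies.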

  13. Energetic Optimisation of Foraging Honeybees: Flexible Change of Strategies in Response to Environmental Challenges

    PubMed Central

    Stabentheiner, Anton; Kovac, Helmut

    2014-01-01

    Heterothermic insects like honeybees, foraging in a variable environment, face the challenge of keeping their body temperature high to enable immediate flight and to promote fast exploitation of resources. Because of their small size they have to cope with an enormous heat loss and, therefore, high costs of thermoregulation. This calls for energetic optimisation which may be achieved by different strategies. An ‘economizing’ strategy would be to reduce energetic investment whenever possible, for example by using external heat from the sun for thermoregulation. An ‘investment-guided’ strategy, by contrast, would be to invest additional heat production or external heat gain to optimize physiological parameters like body temperature which promise increased energetic returns. Here we show how honeybees balance these strategies in response to changes of their local microclimate. In a novel approach of simultaneous measurement of respiration and body temperature foragers displayed a flexible strategy of thermoregulatory and energetic management. While foraging in shade on an artificial flower they did not save energy with increasing ambient temperature as expected but acted according to an ‘investment-guided’ strategy, keeping the energy turnover at a high level (∼56–69 mW). This increased thorax temperature and speeded up foraging as ambient temperature increased. Solar heat was invested to increase thorax temperature at low ambient temperature (‘investment-guided’ strategy) but to save energy at high temperature (‘economizing’ strategy), leading to energy savings per stay of ∼18–76% in sunshine. This flexible economic strategy minimized costs of foraging, and optimized energetic efficiency in response to broad variation of environmental conditions. PMID:25162211

  14. A simplified protocol for molecular identification of Eimeria species in field samples.

    PubMed

    Haug, Anita; Thebo, Per; Mattsson, Jens G

    2007-05-15

    This study aimed to find a fast, sensitive and efficient protocol for molecular identification of chicken Eimeria spp. in field samples. Various methods for each of the three steps of the protocol were evaluated: oocyst wall rupturing methods, DNA extraction methods, and identification of species-specific DNA sequences by PCR. We then compared and evaluated five complete protocols. Three series of oocyst suspensions with known numbers of oocysts from Eimeria mitis, Eimeria praecox, Eimeria maxima and Eimeria tenella were prepared and ground using glass beads or a mini-pestle. DNA was extracted from ruptured oocysts using commercial systems (GeneReleaser, Qiagen Stool kit and PrepMan) or phenol-chloroform DNA extraction, followed by identification of species-specific ITS-1 sequences by optimised single-species PCR assays. The Stool kit and PrepMan protocols showed insufficient repeatability, and the former was also expensive and relatively time-consuming. In contrast, both the GeneReleaser and phenol-chloroform protocols were robust and sensitive, detecting less than 0.4 oocysts of each species per PCR. Finally, we evaluated our new protocol on 68 coccidia-positive field samples. Our data suggest that rupturing the oocysts by mini-pestle grinding and preparing the DNA with GeneReleaser, followed by optimised single-species PCR assays, yields a robust and sensitive procedure for identifying chicken Eimeria species in field samples. Importantly, it also provides minimal hands-on time in the pre-PCR process, lower contamination risk and no handling of toxic chemicals.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knight, Stephen P, E-mail: stephen.knight@health.qld.gov.au

    The aim of this review was to develop a radiographic optimisation strategy to make use of digital radiography (DR) and needle-phosphor computerised radiography (CR) detectors, in order to lower radiation dose and improve image quality for paediatrics. This review was based on evidence-based practice, one component of which was a review of the relevant literature. The resulting exposure chart was developed with two distinct groups of exposure optimisation strategies – body exposures (for head, trunk, humerus, femur) and distal extremity exposures (elbow to finger, knee to toe). Exposure variables manipulated included kilovoltage peak (kVp), target detector exposure and milli-ampere-seconds (mAs), automatic exposure control (AEC), additional beam filtration, and use of an antiscatter grid. Mean dose area product (DAP) reductions of up to 83% for anterior–posterior (AP)/posterior–anterior (PA) abdomen projections were recorded post-optimisation due to manipulation of multiple exposure variables. For body exposures, the target exposure index (EI) and detector exposure, and thus the required mAs, were typically 20% less post-optimisation. Image quality for some distal extremity exposures was improved by lowering kVp and increasing mAs around a constant entrance skin dose. It is recommended that purchasing digital X-ray equipment with high detective quantum efficiency detectors, and then optimising the exposure chart for use with these detectors, is of high importance for sites performing paediatric imaging. Multiple exposure variables may need to be manipulated to achieve optimal outcomes.

  16. Integrating professionalism teaching into undergraduate medical education in the UK setting.

    PubMed

    Goldie, John

    2008-06-01

    This paper examines how professionalism teaching might be integrated into undergraduate medical education in the United Kingdom setting. It advocates adopting an outcome-based approach to curriculum planning, using the Scottish Deans' Medical Curriculum Group's (SDMCG) outcomes as a starting point. In discussing the curricular content, potential learning methods and strategies, theoretical considerations are explored. Student selection, assessment and strategies for optimising the educational environment are also considered.

  17. How do field of view and resolution affect the information content of panoramic scenes for visual navigation? A computational investigation.

    PubMed

    Wystrach, Antoine; Dewar, Alex; Philippides, Andrew; Graham, Paul

    2016-02-01

    The visual systems of animals have to provide information to guide behaviour and the informational requirements of an animal's behavioural repertoire are often reflected in its sensory system. For insects, this is often evident in the optical array of the compound eye. One behaviour that insects share with many animals is the use of learnt visual information for navigation. As ants are expert visual navigators it may be that their vision is optimised for navigation. Here we take a computational approach in asking how the details of the optical array influence the informational content of scenes used in simple view matching strategies for orientation. We find that robust orientation is best achieved with low-resolution visual information and a large field of view, similar to the optical properties seen for many ant species. A lower resolution allows for a trade-off between specificity and generalisation for stored views. Additionally, our simulations show that orientation performance increases if different portions of the visual field are considered as discrete visual sensors, each giving an independent directional estimate. This suggests that ants might benefit by processing information from their two eyes independently.
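The view-matching computation probed in this paper can be sketched directly: store a low-resolution panoramic snapshot, then recover heading by rotating the current view and minimising the pixel-wise difference (a rotational image difference function). The "world" below is a random 1-D panorama and the resolutions are illustrative.

```python
import numpy as np

# Rotational image difference function (IDF) on a synthetic 1-D panorama.
rng = np.random.default_rng(3)

def downsample(pano, n_pixels):
    """Crude low-resolution eye: average the panorama into n_pixels bins."""
    return pano.reshape(n_pixels, -1).mean(axis=1)

world = rng.random(360)                        # one column per degree of azimuth
snapshot = downsample(world, 36)               # stored view at 10 deg/pixel

true_shift = 50                                # the agent has turned 50 degrees
current = downsample(np.roll(world, -true_shift), 36)

# Try every 10-degree rotation of the current view against the snapshot
idf = [np.sum((np.roll(current, k) - snapshot) ** 2) for k in range(36)]
best = int(np.argmin(idf)) * 10                # recovered heading offset, degrees
print(best)                                    # -> 50
```

Lowering the number of pixels broadens the IDF minimum, which is the resolution/generalisation trade-off the paper quantifies; treating sectors of the field of view as independent sensors amounts to computing several such IDFs and combining their directional estimates.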

  18. Ethanol generation, oxidation and energy production in a cooperative bioelectrochemical system.

    PubMed

    Pagnoncelli, Kamila C; Pereira, Andressa R; Sedenho, Graziela C; Bertaglia, Thiago; Crespilho, Frank N

    2018-08-01

    Integrating in situ biofuel production and energy conversion into a single system enables more robust networks as well as more renewable technologies. For this purpose, identifying and developing new biocatalysts is crucial. Herein, a bioelectrochemical system consisting of alcohol dehydrogenase (ADH) and Saccharomyces cerevisiae is reported, wherein both function cooperatively for ethanol production and its bioelectrochemical oxidation. It is shown that it is possible to produce ethanol and use it as a biofuel in tandem. The strategy is to employ a flexible carbon fibre (FCF) electrode that can adsorb both the enzyme and the yeast cells. Glucose is used as a substrate for the yeast for the production of ethanol, while the enzyme is used to catalyse the oxidation of ethanol to acetaldehyde. Regarding the generation of reliable electricity by electrochemical systems, the biosystem proposed in this study operates at a low temperature, and ethanol production is proportional to the generated current. With further optimisation of the electrode design, we envision the use of this cooperative biofuel cell for energy conversion and the management of organic compounds. Copyright © 2018 Elsevier B.V. All rights reserved.

  19. A Two-Locus Model of the Evolution of Insecticide Resistance to Inform and Optimise Public Health Insecticide Deployment Strategies

    PubMed Central

    2017-01-01

    We develop a flexible, two-locus model for the spread of insecticide resistance applicable to mosquito species that transmit human diseases such as malaria. The model allows differential exposure of males and females, allows them to encounter high or low concentrations of insecticide, and allows selection pressures and dominance values to differ depending on the concentration of insecticide encountered. We demonstrate its application by investigating the relative merits of sequential use of insecticides versus their deployment as a mixture to minimise the spread of resistance. We recover previously published results as subsets of this model and conduct a sensitivity analysis over an extensive parameter space to identify what circumstances favour mixtures over sequences. Both strategies lasted more than 500 mosquito generations (or about 40 years) in 24% of runs, while in those runs where resistance had spread to high levels by 500 generations, 56% favoured sequential use and 44% favoured mixtures. Mixtures are favoured when insecticide effectiveness (their ability to kill homozygous susceptible mosquitoes) is high and exposure (the proportion of mosquitoes that encounter the insecticide) is low. If insecticides do not reliably kill homozygous sensitive genotypes, it is likely that sequential deployment will be a more robust strategy. Resistance to an insecticide always spreads slower if that insecticide is used in a mixture although this may be insufficient to outperform sequential use: for example, a mixture may last 5 years while the two insecticides deployed individually may last 3 and 4 years giving an overall ‘lifespan’ of 7 years for sequential use. We emphasise that this paper is primarily about designing and implementing a flexible modelling strategy to investigate the spread of insecticide resistance in vector populations and demonstrate how our model can identify vector control strategies most likely to minimise the spread of insecticide resistance. 
PMID:28095406
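The rate at which resistance spreads under the effectiveness and exposure parameters discussed above can be illustrated with a single-locus simplification (the paper's model is two-locus). The genotype fitnesses, dominance value and frequencies below are invented for illustration.

```python
import numpy as np

# Single-locus recursion for the spread of a resistance allele R.
# Unexposed mosquitoes always survive; exposed survival depends on genotype.

def generations_to_spread(exposure, effectiveness, dominance=0.5,
                          p0=0.01, threshold=0.5, max_gen=10000):
    """Generations until the resistance allele frequency p exceeds threshold."""
    w_ss = 1 - exposure * effectiveness                     # susceptible homozygote
    w_rs = 1 - exposure * effectiveness * (1 - dominance)   # heterozygote
    w_rr = 1.0                                              # resistant homozygote
    p = p0
    for gen in range(1, max_gen + 1):
        q = 1 - p
        w_bar = p * p * w_rr + 2 * p * q * w_rs + q * q * w_ss
        p = (p * p * w_rr + p * q * w_rs) / w_bar           # standard selection recursion
        if p > threshold:
            return gen
    return max_gen

fast = generations_to_spread(exposure=0.8, effectiveness=0.9)
slow = generations_to_spread(exposure=0.2, effectiveness=0.9)
print(fast, slow)   # resistance spreads in far fewer generations under high exposure
```

In the paper's two-locus setting, one such recursion per locus is coupled through genotype-dependent survival of both insecticides, which is what allows sequences and mixtures to be compared.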

  20. Nonlinear predictive control of a boiler-turbine unit: A state-space approach with successive on-line model linearisation and quadratic optimisation.

    PubMed

    Ławryńczuk, Maciej

    2017-03-01

    This paper details development of a Model Predictive Control (MPC) algorithm for a boiler-turbine unit, which is a nonlinear multiple-input multiple-output process. The control objective is to follow set-point changes imposed on two state (output) variables and to satisfy constraints imposed on three inputs and one output. In order to obtain a computationally efficient control scheme, the state-space model is successively linearised on-line for the current operating point and used for prediction. In consequence, the future control policy is easily calculated from a quadratic optimisation problem. For state estimation the extended Kalman filter is used. It is demonstrated that the MPC strategy based on constant linear models does not work satisfactorily for the boiler-turbine unit whereas the discussed algorithm with on-line successive model linearisation gives practically the same trajectories as the truly nonlinear MPC controller with nonlinear optimisation repeated at each sampling instant. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
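The successive-linearisation idea can be shown on a toy scalar plant: at each sampling instant, linearise a nonlinear model at the current state, then obtain the control in closed form from an (here unconstrained, one-step-horizon) quadratic problem. The plant and all weights below are stand-ins, not the boiler-turbine model.

```python
import numpy as np

# Successive linearisation MPC, one-step horizon, on the toy plant x' = -x^3 + u.
dt, lam, ref = 0.05, 1e-3, 1.0   # step, control weight, set-point (illustrative)
x = 0.0
for _ in range(200):
    # Linearise f(x,u) = -x^3 + u about (x, 0): x_next ~ x + dt*(-x^3) + dt*u
    free = x + dt * (-x ** 3)                 # predicted free response (u = 0)
    # min_u (free + dt*u - ref)^2 + lam*u^2  -> closed-form quadratic optimum
    u = dt * (ref - free) / (dt ** 2 + lam)
    x = x + dt * (-x ** 3 + u)                # apply the control to the nonlinear plant

print(x)                                      # settles close to the set-point
```

Because the model is re-linearised at every instant, the quadratic problem tracks the nonlinear dynamics around the current operating point, which is why the scheme can approach the behaviour of fully nonlinear MPC at a fraction of the on-line cost; in the paper the quadratic problem also carries the input and output constraints.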

  1. A hybrid credibility-based fuzzy multiple objective optimisation to differential pricing and inventory policies with arbitrage consideration

    NASA Astrophysics Data System (ADS)

    Ghasemy Yaghin, R.; Fatemi Ghomi, S. M. T.; Torabi, S. A.

    2015-10-01

    In most markets, price differentiation mechanisms enable manufacturers to offer different prices for their products or services in different customer segments; however, perfect price discrimination is usually impossible for manufacturers. The importance of accounting for uncertainty in such environments motivates the development of appropriate decision-making tools to deal with uncertain and ill-defined parameters in joint pricing and lot-sizing problems. This paper proposes a hybrid bi-objective credibility-based fuzzy optimisation model, including both quantitative and qualitative objectives, to cope with these issues. Taking marketing and lot-sizing decisions into account simultaneously, the model aims to maximise the total profit of the manufacturer and to improve the service aspects of retailing while setting different prices with arbitrage consideration. After applying appropriate strategies to defuzzify the original model, the resulting non-linear multi-objective crisp model is solved by a fuzzy goal programming method. An efficient stochastic search procedure using particle swarm optimisation is also proposed to solve the non-linear crisp model.

  2. Optimisation of substrate blends in anaerobic co-digestion using adaptive linear programming.

    PubMed

    García-Gen, Santiago; Rodríguez, Jorge; Lema, Juan M

    2014-12-01

    Anaerobic co-digestion of multiple substrates has the potential to enhance biogas productivity by making use of the complementary characteristics of different substrates. A blending strategy based on a linear programming optimisation method is proposed, aiming at maximising COD conversion into methane while simultaneously maintaining digestate and biogas quality. The method incorporates experimental and heuristic information to define the objective function and the linear restrictions. The active constraints are continuously adapted (by relaxing the restriction boundaries) such that further optimisation in terms of methane productivity can be achieved. The feasibility of the blends calculated with this methodology was previously tested and accurately predicted with an ADM1-based co-digestion model. This was validated in a continuously operated pilot plant, treating for several months different mixtures of glycerine, gelatine and pig manure at organic loading rates from 1.50 to 4.93 gCOD/L·d and hydraulic retention times between 32 and 40 days under mesophilic conditions. Copyright © 2014 Elsevier Ltd. All rights reserved.
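The blending step can be sketched as a small linear programme: choose substrate fractions that maximise expected methane potential subject to a linear quality restriction, then relax the active boundary slightly and re-optimise (the "adaptive" step). The three substrates and all coefficients below are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# LP blend optimisation with one adaptive constraint relaxation (illustrative).
methane = np.array([0.60, 0.45, 0.30])   # CH4 potential per unit COD of each substrate
nitrogen = np.array([0.10, 0.04, 0.02])  # N content: too much risks ammonia inhibition

def best_blend(n_limit):
    res = linprog(
        c=-methane,                              # linprog minimises, so negate the yield
        A_ub=[nitrogen], b_ub=[n_limit],         # digestate quality restriction
        A_eq=[[1.0, 1.0, 1.0]], b_eq=[1.0],      # blend fractions sum to one
        bounds=[(0.0, 1.0)] * 3,
        method="highs",
    )
    return res.x, -res.fun

blend, yield_now = best_blend(n_limit=0.05)
blend2, yield_relaxed = best_blend(n_limit=0.06)  # relax the active restriction boundary
print(blend, yield_relaxed - yield_now)           # richer blend, higher methane yield
```

The nitrogen constraint is active at the optimum, so relaxing its boundary shifts the blend towards the methane-rich substrate and raises the objective — the mechanism behind the continuously adapted constraints described above.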

  3. Cost optimisation and minimisation of the environmental impact through life cycle analysis of the waste water treatment plant of Bree (Belgium).

    PubMed

    De Gussem, K; Wambecq, T; Roels, J; Fenu, A; De Gueldre, G; Van De Steene, B

    2011-01-01

    An ASM2da model of the full-scale waste water treatment plant of Bree (Belgium) was built and showed very good agreement with reference operational data. This basic model was extended with an accurate calculation of the environmental footprint and operational costs (energy consumption, dosing of chemicals and sludge treatment). Two optimisation strategies were compared: lowest cost meeting the effluent consent versus lowest environmental footprint. Six optimisation scenarios were studied, namely (i) implementation of an online control system based on ammonium and nitrate sensors, (ii) implementation of a control on MLSS concentration, (iii) evaluation of the internal recirculation flow, (iv) the oxygen set point, (v) installation of mixing in the aeration tank, and (vi) evaluation of the nitrate set point for post-denitrification. Both the cost-based and the environmental impact or Life Cycle Assessment (LCA) based optimisation approaches are able to significantly lower the cost and environmental footprint. However, the LCA approach has some advantages over cost minimisation for an existing full-scale plant: it tends to choose control settings that are more logical, resulting in safer operation of the plant with fewer risks of breaching the consents, and it yields a better effluent at only a slightly increased cost.

  4. Optimising design, operation and energy consumption of biological aerated filters (BAF) for nitrogen removal of municipal wastewater.

    PubMed

    Rother, E; Cornel, P

    2004-01-01

    The biofiltration process in wastewater treatment combines filtration and biological processes in one reactor. In Europe it has become an accepted technology in advanced wastewater treatment wherever space is scarce and a virtually suspended-solids-free effluent is demanded. Although more than 500 plants are in operation worldwide, there is still a lack of published operational experience to help planners and operators identify potential for optimisation, e.g. in energy consumption or vulnerability to peak loads. Examples from pilot trials show how nitrification and denitrification can be optimised. Nitrification can be quickly increased by adjusting the DO content of the water; furthermore, carrier materials like zeolites can store surplus ammonia during peak loads and release it afterwards. Pre-denitrification in biofilters is normally limited by the amount of easily degradable organic substrate, resulting in relatively high requirements for external carbon. The combination of pre-DN, N and post-DN filters is much more advisable for most municipal wastewaters, because the recycle rate can be reduced and external carbon can be saved. For a full-scale preanoxic-DN/N/postanoxic-DN plant of 130,000 p.e., it is shown how 15% of the energy could be saved by optimising the internal recycling and some control strategies.

  5. Application of particle swarm optimisation for solving deteriorating inventory model with fluctuating demand and controllable deterioration rate

    NASA Astrophysics Data System (ADS)

    Chen, Yu-Ren; Dye, Chung-Yuan

    2013-06-01

    In most inventory models in the literature, the deterioration rate of goods is viewed as an exogenous variable that is not subject to control. In the real market, however, the retailer can reduce the deterioration rate of a product by making effective capital investment in storehouse equipment. In this study, we formulate a deteriorating inventory model with time-varying demand by allowing the preservation technology cost to be a decision variable in conjunction with the replenishment policy. The objective is to find the optimal replenishment and preservation technology investment strategies that minimise the total cost over the planning horizon. For any given feasible replenishment scheme, we first prove that the optimal preservation technology investment strategy not only exists but is also unique. A particle swarm optimisation is then coded and used to solve the resulting nonlinear programming problem by exploiting the properties derived in this article. Some numerical examples are used to illustrate the features of the proposed model.
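
    As a generic illustration of the search procedure named in the abstract (not the authors' implementation, and with a toy convex cost standing in for the inventory model's total cost), a bare-bones global-best particle swarm optimiser looks like this:

```python
import random

def pso(cost, dim, bounds, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimiser (illustrative sketch)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    pbest_val = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Clamp positions to the feasible box.
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            v = cost(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val
```

    For the actual model, `cost` would evaluate the total cost of a candidate replenishment and preservation-investment scheme over the planning horizon, using the structural properties derived in the paper to keep each evaluation cheap.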

  6. Elitist Binary Wolf Search Algorithm for Heuristic Feature Selection in High-Dimensional Bioinformatics Datasets.

    PubMed

    Li, Jinyan; Fong, Simon; Wong, Raymond K; Millham, Richard; Wong, Kelvin K L

    2017-06-28

    Due to the high-dimensional characteristics of such datasets, we propose a new method based on the Wolf Search Algorithm (WSA) for optimising the feature selection problem. The proposed approach follows the natural principle attributed to Charles Darwin: 'It is not the strongest of the species that survives, but the most adaptable.' This means that, in the evolution of a swarm, the elitists are motivated to quickly obtain more and better resources. A memory function helps the proposed method avoid repeat searches of the worst positions, enhancing the effectiveness of the search, while a binary strategy reduces the feature selection problem to an analogous function optimisation problem. Furthermore, a wrapper strategy couples these strengthened wolves with an extreme learning machine classifier to find a sub-dataset with a reasonable number of features that offers the maximum correctness of global classification models. Experimental results on six public high-dimensional bioinformatics datasets demonstrate that the proposed method can beat some conventional feature selection methods by up to 29% in classification accuracy, and outperform previous WSAs by up to 99.81% in computational time.

  7. Automatic trajectory planning for low-thrust active removal mission in low-earth orbit

    NASA Astrophysics Data System (ADS)

    Di Carlo, Marilena; Romero Martin, Juan Manuel; Vasile, Massimiliano

    2017-03-01

    In this paper two strategies are proposed to de-orbit up to 10 non-cooperative objects per year from the region between 800 and 1400 km altitude in Low Earth Orbit (LEO). The underlying idea is to use a single servicing spacecraft to de-orbit several objects via two different approaches. The first strategy is analogous to the Traveling Salesman Problem: the servicing spacecraft rendezvouses with multiple objects in order to physically attach a de-orbiting kit that reduces the perigee of the orbit. The second strategy is analogous to the Vehicle Routing Problem: the servicing spacecraft rendezvouses and docks with an object, spirals it down to a lower-altitude orbit, undocks, and then spirals up to the next target. In order to maximise the number of de-orbited objects with minimum propellant consumption, an optimal sequence of targets is identified using a bio-inspired incremental automatic planning and scheduling discrete optimisation algorithm. The optimisation of the resulting sequence is realised using a direct transcription method based on an asymptotic analytical solution of the perturbed Keplerian motion. The analytical model takes into account the perturbations deriving from the J2 gravitational effect and atmospheric drag.

  8. A new way to improve the robustness of complex communication networks by allocating redundancy links

    NASA Astrophysics Data System (ADS)

    Shi, Chunhui; Peng, Yunfeng; Zhuo, Yue; Tang, Jieying; Long, Keping

    2012-03-01

    We investigate the robustness of complex communication networks when allocating redundant links. A protecting key nodes (PKN) strategy is proposed to improve the robustness of complex communication networks against intentional attack. Our numerical simulations show that allocating a few redundant links among key nodes using the PKN strategy significantly increases the robustness of scale-free complex networks. We also theoretically prove and demonstrate the effectiveness of the PKN strategy. We expect this work to help achieve a better understanding of communication networks.
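
    A toy version of the idea can be sketched as follows. The graph, the choice of "key nodes" as the highest-degree hubs, and the full mesh among them are illustrative assumptions, not the paper's exact allocation rule:

```python
from collections import deque
from itertools import combinations

def largest_component(adj, removed):
    """Size of the largest connected component after removing some nodes
    (a standard robustness measure under intentional attack)."""
    alive = set(adj) - set(removed)
    seen, best = set(), 0
    for s in alive:
        if s in seen:
            continue
        comp, q = 0, deque([s])
        seen.add(s)
        while q:
            u = q.popleft()
            comp += 1
            for v in adj[u]:
                if v in alive and v not in seen:
                    seen.add(v)
                    q.append(v)
        best = max(best, comp)
    return best

def protect_key_nodes(adj, k=3):
    """PKN-style sketch: add redundant links fully meshing the k
    highest-degree (key) nodes, leaving the rest of the graph untouched."""
    new = {u: set(vs) for u, vs in adj.items()}
    key = sorted(adj, key=lambda u: len(adj[u]), reverse=True)[:k]
    for u, v in combinations(key, 2):
        new[u].add(v)
        new[v].add(u)
    return new
```

    On a small hub-and-spoke network, meshing the hubs keeps the network connected when a hub is removed by an intentional attack, which is the effect the PKN strategy exploits.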

  9. Identifying different learning styles to enhance the learning experience.

    PubMed

    Anderson, Irene

    2016-10-12

    Identifying your preferred learning style can be a useful way to optimise learning opportunities, and can help learners to recognise their strengths and areas for development in the way that learning takes place. It can also help teachers (educators) to recognise where additional activities are required to ensure the learning experience is robust and effective. There are several models available that may be used to identify learning styles. This article discusses these models and considers their usefulness in healthcare education. Models of teaching styles are also considered.

  10. Flow chemistry meets advanced functional materials.

    PubMed

    Myers, Rebecca M; Fitzpatrick, Daniel E; Turner, Richard M; Ley, Steven V

    2014-09-22

    Flow chemistry and continuous processing techniques are beginning to have a profound impact on the production of functional materials ranging from quantum dots, nanoparticles and metal-organic frameworks to polymers and dyes. These techniques provide robust procedures which not only enable accurate control of the product material's properties but are also ideally suited to conducting experiments at scale. The modular nature of flow and continuous processing equipment facilitates rapid reaction optimisation and variation in the function of the products. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Adaptive optimisation-offline cyber attack on remote state estimator

    NASA Astrophysics Data System (ADS)

    Huang, Xin; Dong, Jiuxiang

    2017-10-01

    Security issues of cyber-physical systems have received increasing attention in recent years. In this paper, deception attacks on a remote state estimator equipped with a chi-squared failure detector are considered, and it is assumed that the attacker can monitor and modify all the sensor data. A novel adaptive optimisation-offline cyber attack strategy is proposed in which, using the current and previous sensor data, the attack yields the largest estimation error covariance while remaining undetected by the chi-squared monitor. From the attacker's perspective, the attack degrades system performance more effectively than existing linear deception attacks. Finally, some numerical examples are provided to demonstrate the theoretical results.

  12. The Translation of Knowledge into Practice in the Management of Atrial Fibrillation in Singapore.

    PubMed

    Woo, Fong Yeong Brigitte; Lim, Toon Wei; Tam, Wai San Wilson

    2018-03-12

    Atrial fibrillation (AF) is a clinically significant cardiac arrhythmia known to increase the risk of stroke by at least four times. Stroke-risk assessment and thromboprophylaxis are vital components of AF management. Guidelines are available to standardise AF management, but physicians' adherence to the recommended guidelines has been low. The aims were to: 1. examine and compare the level of knowledge and current practice in AF management between cardiologists and non-cardiologist physicians in Singapore; 2. identify physicians' perceived barriers to prescribing oral anticoagulants (OACs) when indicated; 3. identify strategies to optimise AF management. From June 2017 to August 2017, a cross-sectional online survey of physicians was conducted in Singapore. The survey instrument was adapted from a previously developed instrument and validated locally by five cardiologists. It explored the physicians' stroke-risk assessment practices, estimation of stroke risk and benefits of anticoagulation, likelihood of prescribing anticoagulation when indicated, perceived barriers to anticoagulation, and strategies to optimise AF management. Sixty-three physicians completed the survey (14 cardiologists and 49 non-cardiologist physicians). No significant difference was found between cardiologists and non-cardiologist physicians in their assessment and estimation of stroke risk for stable AF patients. However, when presented with an AF patient with stroke risk, cardiologists were more likely than non-cardiologist physicians to prescribe novel OACs (93% vs. 51%; χ²=7.933, p=0.004). Compared to cardiologists, the majority of non-cardiologist physicians thought the risk of falls was usually or always a barrier to prescribing OACs (29% vs. 69%; χ²=7.579, p=0.006).
Among the suggested strategies to support them in AF management, physicians overwhelmingly rated two as "quite useful" or "very useful": the establishment of clinics for monitoring anticoagulated patients (100%), and the involvement of pharmacists in managing patients on warfarin (98.4%). Physicians possess good knowledge about stroke-risk assessment in AF patients, yet this is not translated into effective measures for stroke prevention: physicians, especially non-cardiologists, were not anticoagulating AF patients when indicated. Although novel OACs are safer alternatives to warfarin, non-cardiologist physicians were less inclined to use them for stroke prevention. All physicians opined that establishing anticoagulation clinics and collaborating with pharmacists were useful strategies to optimise AF management. Existing barriers to anticoagulation impede the translation of knowledge into practice in the management of AF patients in Singapore; the strategies rated here indicate how AF management could be optimised. Copyright © 2018 Australian and New Zealand Society of Cardiac and Thoracic Surgeons (ANZSCTS) and the Cardiac Society of Australia and New Zealand (CSANZ). Published by Elsevier B.V. All rights reserved.

  13. Optimising predictor domains for spatially coherent precipitation downscaling

    NASA Astrophysics Data System (ADS)

    Radanovics, S.; Vidal, J.-P.; Sauquet, E.; Ben Daoud, A.; Bontron, G.

    2013-10-01

    Statistical downscaling is widely used to overcome the scale gap between predictors from numerical weather prediction models or global circulation models and predictands like local precipitation, required for example for medium-term operational forecasts or climate change impact studies. The predictors are considered over a given spatial domain, which is rarely optimised with respect to the target predictand location. In this study, an extended version of the growing rectangular domain algorithm is proposed to provide an ensemble of near-optimum predictor domains for a statistical downscaling method. This algorithm is applied to find five-member ensembles of near-optimum geopotential predictor domains for an analogue downscaling method for 608 individual target zones covering France. Results first show that very similar downscaling performances based on the continuous ranked probability score (CRPS) can be achieved by different predictor domains for any specific target zone, demonstrating the need to consider alternative domains in this context of high equifinality. A second result is the large diversity of optimised predictor domains over the country, which questions the commonly made hypothesis of a common predictor domain for large areas. The domain centres mainly follow the geographical location of the target zone, but there are apparent differences between the windward and lee sides of mountain ridges. Moreover, domains for target zones located in southeastern France are centred further east and south than those for target zones at the same longitude. The size of the optimised domains tends to be larger in the southeastern part of the country, while domains with a very small meridional extent can be found in an east-west band around 47° N.
Sensitivity experiments finally show that results are rather insensitive to the starting point of the optimisation algorithm except for zones located in the transition area north of this east-west band. Results also appear generally robust with respect to the archive length considered for the analogue method, except for zones with high interannual variability like in the Cévennes area. This study paves the way for defining regions with homogeneous geopotential predictor domains for precipitation downscaling over France, and therefore de facto ensuring the spatial coherence required for hydrological applications.

  14. An evolution of image source camera attribution approaches.

    PubMed

    Jahanirad, Mehdi; Wahab, Ainuddin Wahid Abdul; Anuar, Nor Badrul

    2016-05-01

    Camera attribution plays an important role in digital image forensics by providing evidence of, and distinguishing characteristics of, the origin of a digital image. It allows the forensic analyser to find the possible source camera that captured the image under investigation. In real-world applications, however, these approaches face many challenges due to the large sets of multimedia data publicly available through photo-sharing and social network sites, captured under uncontrolled conditions and having undergone a variety of hardware and software post-processing operations. Moreover, the legal system only accepts forensic analysis of digital image evidence if the applied camera attribution techniques are unbiased, reliable, non-destructive and widely accepted by experts in the field. The aim of this paper is to investigate the evolutionary trend of image source camera attribution approaches from fundamentals to practice, in particular with the application of image processing and data mining techniques. Extracting implicit knowledge from images using intrinsic image artifacts for source camera attribution requires a structured image mining process. In this paper, we attempt to provide an introductory tutorial on the image processing pipeline, to determine the general classification of the features corresponding to different components for source camera attribution. The article also reviews techniques of source camera attribution more comprehensively in the domain of image forensics, in conjunction with a presentation classifying ongoing developments within the specified area. The classification of the existing source camera attribution approaches is presented based on specific parameters, such as the colour image processing pipeline, hardware- and software-related artifacts, and the methods used to extract such artifacts.
The more recent source camera attribution approaches, which have not yet gained sufficient attention among image forensics researchers, are also critically analysed and categorised into four classes, namely optical-aberration-based, sensor-camera-fingerprint-based, processing-statistics-based and processing-regularity-based approaches. Furthermore, this paper investigates the challenging problems and proposed strategies of such schemes based on the suggested taxonomy, to plot an evolution of the source camera attribution approaches with respect to the subjective optimisation criteria over the last decade. The optimisation criteria were determined based on the strategies proposed to increase the detection accuracy, robustness and computational efficiency of source camera brand, model or device attribution. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  15. Efficient Robust Optimization of Metal Forming Processes using a Sequential Metamodel Based Strategy

    NASA Astrophysics Data System (ADS)

    Wiebenga, J. H.; Klaseboer, G.; van den Boogaard, A. H.

    2011-08-01

    The coupling of Finite Element (FE) simulations to mathematical optimization techniques has contributed significantly to product improvements and cost reductions in the metal forming industries. The next challenge is to bridge the gap between deterministic optimization techniques and the industrial need for robustness. This paper introduces a new and generally applicable structured methodology for modeling and solving robust optimization problems. Stochastic design variables or noise variables are taken into account explicitly in the optimization procedure. The metamodel-based strategy is combined with a sequential improvement algorithm to efficiently increase the accuracy of the objective function prediction. This is only done at regions of interest containing the optimal robust design. Application of the methodology to an industrial V-bending process resulted in valuable process insights and an improved robust process design. Moreover, a significant improvement of the robustness (>2σ) was obtained by minimizing the deteriorating effects of several noise variables. The robust optimization results demonstrate the general applicability of the robust optimization strategy and underline the importance of including uncertainty and robustness explicitly in the numerical optimization procedure.

  16. Optimising the Use of Note-Taking as an External Cognitive Aid for Increasing Learning

    ERIC Educational Resources Information Center

    Makany, Tamas; Kemp, Jonathan; Dror, Itiel E.

    2009-01-01

    Taking notes is of utmost importance for academic and commercial success. Different note-taking techniques utilise different cognitive processes and strategies. This experimental study examined ways to enhance cognitive performance via different note-taking techniques. By comparing performances of traditional, linear style…

  17. Optimising the manufacture, formulation, and dose of antiretroviral drugs for more cost-efficient delivery in resource-limited settings: a consensus statement.

    PubMed

    Crawford, Keith W; Ripin, David H Brown; Levin, Andrew D; Campbell, Jennifer R; Flexner, Charles

    2012-07-01

    It is expected that funding limitations for worldwide HIV treatment and prevention in resource-limited settings will continue, and, because the need for treatment scale-up is urgent, the emphasis on value for money has become an increasing priority. The Conference on Antiretroviral Drug Optimization, a collaborative project between the Clinton Health Access Initiative, the Johns Hopkins University School of Medicine, and the Bill & Melinda Gates Foundation, brought together process chemists, clinical pharmacologists, pharmaceutical scientists, physicians, pharmacists, and regulatory specialists to explore strategies for the reduction of antiretroviral drug costs. The antiretroviral drugs discussed were prioritised for consideration on the basis of their market impact, and the objectives of the conference were framed as discussion questions generated to guide scientific assessment of potential strategies. These strategies included modifications to the synthesis of the active pharmaceutical ingredient (API) and the use of cheaper sources of raw materials in its synthesis. Innovations in product formulation could improve bioavailability, thus requiring less API. For several antiretroviral drugs, studies show that efficacy is maintained at doses below the approved dose (e.g., efavirenz, lopinavir plus ritonavir, atazanavir, and darunavir). Optimising pharmacoenhancement and extending shelf life are additional strategies. The conference highlighted a range of interventions; optimum cost savings could be achieved by combining approaches. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. Incompressible SPH (ISPH) with fast Poisson solver on a GPU

    NASA Astrophysics Data System (ADS)

    Chow, Alex D.; Rogers, Benedict D.; Lind, Steven J.; Stansby, Peter K.

    2018-05-01

    This paper presents a fast incompressible SPH (ISPH) solver implemented to run entirely on a graphics processing unit (GPU), capable of simulating several million particles in three dimensions on a single GPU. The ISPH algorithm is implemented by converting the highly optimised open-source weakly compressible SPH (WCSPH) code DualSPHysics to run ISPH on the GPU, combining it with the open-source linear algebra library ViennaCL for fast solution of the pressure Poisson equation (PPE). Several challenges are addressed by this research: constructing a PPE matrix every timestep on the GPU for moving particles, optimising the limited GPU memory, and exploiting fast matrix solvers. The ISPH pressure projection algorithm is implemented as four separate stages, each with a particle sweep, including an algorithm for the population of the PPE matrix suitable for the GPU, and mixed-precision storage methods. An accurate and robust ISPH boundary condition, ideal for parallel processing, is also established by adapting an existing WCSPH boundary condition to ISPH. A variety of validation cases are presented: an impulsively started plate, incompressible flow around a moving square in a box, and dambreaks (2-D and 3-D), which demonstrate the accuracy, flexibility, and speed of the methodology. Fragmentation of the free surface is shown to influence the performance of matrix preconditioners and therefore the PPE matrix solution time; the Jacobi preconditioner demonstrates robustness and reliability in the presence of fragmented flows. For a dambreak simulation, GPU speed-ups of 10-18 times and 1.1-4.5 times are demonstrated relative to single-threaded and 16-threaded CPU run times, respectively.

  19. Optimal design for robust control of uncertain flexible joint manipulators: a fuzzy dynamical system approach

    NASA Astrophysics Data System (ADS)

    Han, Jiang; Chen, Ye-Hwa; Zhao, Xiaomin; Dong, Fangfang

    2018-04-01

    A novel fuzzy dynamical system approach to the control design of flexible joint manipulators with mismatched uncertainty is proposed. Uncertainties of the system are assumed to lie within prescribed fuzzy sets. The desired system performance includes a deterministic phase and a fuzzy phase. First, by implanting a fictitious control, a robust control scheme is constructed to render the system uniformly bounded and uniformly ultimately bounded. Both the manipulator model and the control scheme are deterministic and not based on heuristic IF-THEN rules. Next, a fuzzy-based performance index is proposed, and the optimal design of a control design parameter is formulated as a constrained optimisation problem. The global solution to this problem can be obtained by solving two quartic equations. The fuzzy dynamical system approach is systematic and is able to assure the deterministic performance as well as to minimise the fuzzy performance index.

  20. Improved head direction command classification using an optimised Bayesian neural network.

    PubMed

    Nguyen, Son T; Nguyen, Hung T; Taylor, Philip B; Middleton, James

    2006-01-01

    Assistive technologies have recently emerged to improve the quality of life of severely disabled people by enhancing their independence in daily activities. Since many of these individuals have limited or non-existent control from the neck down, alternative hands-free input modalities have become very important for accessing assistive devices. In hands-free control, head movement has proved to be a very effective user interface, as it can provide a comfortable, reliable and natural way to access a device. Recently, neural networks have been shown to be useful not only for real-time pattern recognition but also for creating user-adaptive models. Since multi-layer perceptron neural networks trained using standard back-propagation may generalise poorly, the Bayesian technique has been proposed to improve the generalisation and robustness of these networks. This paper describes the use of Bayesian neural networks in developing a hands-free wheelchair control system. The experimental results show that, with the optimised architecture, classification Bayesian neural networks can detect the head commands of wheelchair users accurately, irrespective of their level of injury.

  1. Statistical methods for convergence detection of multi-objective evolutionary algorithms.

    PubMed

    Trautmann, H; Wagner, T; Naujoks, B; Preuss, M; Mehnen, J

    2009-01-01

    In this paper, two approaches for estimating the generation in which a multi-objective evolutionary algorithm (MOEA) shows statistically significant signs of convergence are introduced. A set-based perspective is taken, where convergence is measured by performance indicators. The proposed techniques fulfil the requirements of proper statistical assessment on the one hand and efficient optimisation for real-world problems on the other. The first approach accounts for the stochastic nature of the MOEA by repeating the optimisation runs for increasing generation numbers and analysing the performance indicators using statistical tools; this results in a very robust offline procedure. Moreover, an online convergence detection method is introduced that automatically stops the MOEA when either the variance of the performance indicators falls below a specified threshold or a stagnation of their overall trend is detected. Both methods are analysed and compared for two MOEAs and on different classes of benchmark functions. It is shown that the methods successfully operate on all stated problems, requiring fewer function evaluations while preserving good approximation quality.
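
    The online stopping rule can be caricatured in a few lines. This is a hedged sketch, assuming hypervolume-like indicator values arriving once per generation; the window size and variance threshold are arbitrary illustrative choices, and the paper's trend-stagnation test is omitted:

```python
from statistics import variance

def online_convergence_stop(indicator_stream, window=10, var_tol=1e-6):
    """Return the generation at which the performance indicator (e.g. the
    hypervolume per generation) has stabilised, or None if it never does."""
    history = []
    for gen, value in enumerate(indicator_stream):
        history.append(value)
        # Stop once the indicator's variance over a sliding window is tiny.
        if len(history) >= window and variance(history[-window:]) < var_tol:
            return gen
    return None
```

    Feeding it a converging indicator series such as `1 - 0.5**g` produces a finite stopping generation, while a series that keeps fluctuating returns `None`.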

  2. The role of predictive uncertainty in the operational management of reservoirs

    NASA Astrophysics Data System (ADS)

    Todini, E.

    2014-09-01

    The present work deals with the operational management of multi-purpose reservoirs, whose optimisation-based rules are derived, in the planning phase, via deterministic (linear and nonlinear programming, dynamic programming, etc.) or via stochastic (generally stochastic dynamic programming) approaches. In operation, the resulting deterministic or stochastic optimised operating rules are then triggered based on inflow predictions. In order to fully benefit from predictions, one must avoid using them as direct inputs to the reservoirs, but rather assess the "predictive knowledge" in terms of a predictive probability density to be operationally used in the decision making process for the estimation of expected benefits and/or expected losses. Using a theoretical and extremely simplified case, it will be shown why directly using model forecasts instead of the full predictive density leads to less robust reservoir management decisions. Moreover, the effectiveness and the tangible benefits for using the entire predictive probability density instead of the model predicted values will be demonstrated on the basis of the Lake Como management system, operational since 1997, as well as on the basis of a case study on the lake of Aswan.

  3. Combining two serological assays optimises sensitivity and specificity for the identification of Streptococcus equi subsp. equi exposure.

    PubMed

    Robinson, Carl; Steward, Karen F; Potts, Nicola; Barker, Colin; Hammond, Toni-ann; Pierce, Karen; Gunnarsson, Eggert; Svansson, Vilhjálmur; Slater, Josh; Newton, J Richard; Waller, Andrew S

    2013-08-01

    The detection of anti-Streptococcus equi antibodies in the blood serum of horses can assist with the identification of apparently healthy, persistently infected carriers and the prevention of strangles outbreaks. The aim of the current study was to use genome sequencing data to develop an indirect enzyme-linked immunosorbent assay (iELISA) that targets two S. equi-specific protein fragments. The sensitivity and specificity of the antigen A and antigen C iELISAs were compared to an SeM-based iELISA marketed by IDvet - diagnostic Vétérinaire (IDvet). Individually, each assay either compromised specificity to achieve sufficient sensitivity (the SeM iELISA had a sensitivity of 89.9% but a specificity of only 77.0%) or compromised sensitivity to achieve high specificity. However, combining the results of the antigen A and antigen C iELISAs permitted optimisation of both sensitivity (93.3%) and specificity (99.3%), providing a robust assay for the identification of horses exposed to S. equi. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Midbond basis functions for weakly bound complexes

    NASA Astrophysics Data System (ADS)

    Shaw, Robert A.; Hill, J. Grant

    2018-06-01

    Weakly bound systems present a difficult problem for conventional atom-centred basis sets due to their large intermolecular separations, necessitating the use of large, computationally expensive bases. This can be remedied by placing a small number of functions in the region between molecules in the complex. We present compact sets of optimised midbond functions for a range of complexes involving noble gases, alkali metals and small molecules for use in high-accuracy coupled-cluster calculations, along with a more robust procedure for their optimisation. It is shown that excellent results are possible with double-zeta quality orbital basis sets when a few midbond functions are added, improving both the interaction energy and the equilibrium bond lengths of a series of noble gas dimers by 47% and 8%, respectively. When used in conjunction with explicitly correlated methods, near complete basis set limit accuracy is readily achievable at a fraction of the cost that using a large basis would entail. General purpose auxiliary sets are developed to allow explicitly correlated midbond function studies to be carried out, making it feasible to perform very high accuracy calculations on weakly bound complexes.

  5. Selective robust optimization: A new intensity-modulated proton therapy optimization strategy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yupeng; Niemela, Perttu; Siljamaki, Sami

    2015-08-15

    Purpose: To develop a new robust optimization strategy for intensity-modulated proton therapy as an important step in translating robust proton treatment planning from research to clinical applications. Methods: In selective robust optimization, a worst-case-based robust optimization algorithm is extended, and terms of the objective function are selectively computed from either the worst-case dose or the nominal dose. Two lung cancer cases and one head and neck cancer case were used to demonstrate the practical significance of the proposed robust planning strategy. The lung cancer cases had minimal tumor motion of less than 5 mm and, for the demonstration of the methodology, are assumed to be static. Results: Selective robust optimization achieved robust clinical target volume (CTV) coverage and at the same time increased nominal planning target volume coverage to 95.8%, compared to the 84.6% coverage achieved with CTV-based robust optimization in one of the lung cases. In the other lung case, the maximum dose in selective robust optimization was lowered from 131.3% in the CTV-based robust optimization to 113.6%. Selective robust optimization provided robust CTV coverage in the head and neck case, and at the same time improved control over the isodose distribution so that clinical requirements may be readily met. Conclusions: Selective robust optimization may provide the flexibility and capability necessary for meeting various clinical requirements in addition to achieving the required plan robustness in practical proton treatment planning settings.
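
    The selection mechanism described in the Methods can be sketched in a few lines. Everything below (the data layout, the quadratic underdose penalty, and treating the voxel-wise minimum over scenarios as the worst case for coverage) is an illustrative assumption, not the paper's clinical objective function.

```python
def selective_robust_objective(scenario_doses, terms):
    """Toy selective robust objective.

    scenario_doses : list of per-scenario dose lists; row 0 is the nominal plan.
    terms          : list of (voxel_indices, prescribed_dose, use_worst_case).

    Each term penalises underdose quadratically, evaluated either on the
    voxel-wise worst-case dose (minimum over scenarios) or on the nominal dose,
    depending on its use_worst_case flag.
    """
    nominal = scenario_doses[0]
    worst = [min(col) for col in zip(*scenario_doses)]
    total = 0.0
    for voxels, prescribed, use_worst in terms:
        src = worst if use_worst else nominal
        total += sum(max(prescribed - src[v], 0.0) ** 2 for v in voxels)
    return total

# Two scenarios, two voxels; a CTV term flagged robust sees the worst case:
doses = [[60.0, 60.0], [58.0, 59.0]]
print(selective_robust_objective(doses, [([0, 1], 60.0, True)]))   # 5.0
print(selective_robust_objective(doses, [([0, 1], 60.0, False)]))  # 0.0
```

    Optimising such an objective over beam weights, with CTV terms flagged robust and other terms evaluated nominally, mirrors the selective computation the abstract describes.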

  6. Optimality Principles in the Regulation of Metabolic Networks

    PubMed Central

    Berkhout, Jan; Bruggeman, Frank J.; Teusink, Bas

    2012-01-01

    One of the challenging tasks in systems biology is to understand how molecular networks give rise to emergent functionality and whether universal design principles apply to molecular networks. To achieve this, the biophysical, evolutionary and physiological constraints that act on those networks need to be identified in addition to the characterisation of the molecular components and interactions. Then, the cellular “task” of the network—its function—should be identified. A network contributes to organismal fitness through its function. The premise is that the same functions are often implemented in different organisms by the same type of network; hence, the concept of design principles. In biology, due to the strong forces of selective pressure and natural selection, network functions can often be understood as the outcome of fitness optimisation. The hypothesis of fitness optimisation to understand the design of a network has proven to be a powerful strategy. Here, we outline the use of several optimisation principles applied to biological networks, with an emphasis on metabolic regulatory networks. We discuss the different objective functions and constraints that are considered and the kind of understanding that they provide. PMID:24957646

  7. Optimality principles in the regulation of metabolic networks.

    PubMed

    Berkhout, Jan; Bruggeman, Frank J; Teusink, Bas

    2012-08-29

    One of the challenging tasks in systems biology is to understand how molecular networks give rise to emergent functionality and whether universal design principles apply to molecular networks. To achieve this, the biophysical, evolutionary and physiological constraints that act on those networks need to be identified in addition to the characterisation of the molecular components and interactions. Then, the cellular "task" of the network-its function-should be identified. A network contributes to organismal fitness through its function. The premise is that the same functions are often implemented in different organisms by the same type of network; hence, the concept of design principles. In biology, due to the strong forces of selective pressure and natural selection, network functions can often be understood as the outcome of fitness optimisation. The hypothesis of fitness optimisation to understand the design of a network has proven to be a powerful strategy. Here, we outline the use of several optimisation principles applied to biological networks, with an emphasis on metabolic regulatory networks. We discuss the different objective functions and constraints that are considered and the kind of understanding that they provide.

  8. A new bio-inspired optimisation algorithm: Bird Swarm Algorithm

    NASA Astrophysics Data System (ADS)

    Meng, Xian-Bing; Gao, X. Z.; Lu, Lihua; Liu, Yu; Zhang, Hengzhen

    2016-07-01

    A new bio-inspired algorithm, namely Bird Swarm Algorithm (BSA), is proposed for solving optimisation applications. BSA is based on the swarm intelligence extracted from the social behaviours and social interactions in bird swarms. Birds mainly have three kinds of behaviours: foraging behaviour, vigilance behaviour and flight behaviour. Birds may forage for food and escape from the predators by the social interactions to obtain a high chance of survival. By modelling these social behaviours, social interactions and the related swarm intelligence, four search strategies associated with five simplified rules are formulated in BSA. Simulations and comparisons based on eighteen benchmark problems demonstrate the effectiveness, superiority and stability of BSA. Some proposals for future research about BSA are also discussed.
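
    Of the behaviours listed above, the foraging rule is the easiest to sketch: each bird moves towards its own best-known position and towards the swarm's best, much like a particle swarm update. The function below is a simplified illustration only; the weights C and S and all names are assumptions, not the published BSA formulation or constants.

```python
import random

def foraging_update(x, p_best, g_best, C=1.5, S=1.5):
    """Simplified foraging-behaviour step: blend a cognitive pull towards the
    bird's personal best position (weight C) with a social pull towards the
    swarm's best position (weight S), each scaled by a uniform random factor."""
    return [xi + (pi - xi) * C * random.random() + (gi - xi) * S * random.random()
            for xi, pi, gi in zip(x, p_best, g_best)]
```

    In a full implementation this rule would alternate with the vigilance and flight updates according to the algorithm's switching rules.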

  9. Optimization of robustness of interdependent network controllability by redundant design

    PubMed Central

    2018-01-01

    Controllability of complex networks has been a hot topic in recent years. Real networks are often coupled together as interdependent networks. The cascading process in interdependent networks, including interdependent failure and overload failure, destroys the robustness of controllability for the whole network. Therefore, optimizing the robustness of interdependent network controllability is of great importance in complex network research. In this paper, we first construct a model of interdependent networks and determine the cascading process under different proportions of node attacks. Then, the structural controllability of the interdependent networks is measured by the minimum set of driver nodes. Furthermore, we propose a parameter, obtained from the structure and minimum driver set of the interdependent networks under different proportions of node attacks, and analyze the robustness of interdependent network controllability. Finally, we optimize the robustness of interdependent network controllability by redundant design, including node backup and redundancy edge backup, and improve the redundant design by proposing different strategies according to their cost. Comparative strategies of redundant design are conducted to find the best strategy. Results show that node backup and redundancy edge backup can indeed reduce the number of nodes suffering from failure and improve the robustness of controllability. Considering the cost of redundant design, we should choose BBS (betweenness-based strategy) or DBS (degree-based strategy) for node backup and HDF (high-degree-first) for redundancy edge backup. Above all, our proposed strategies are feasible and effective at improving the robustness of interdependent network controllability. PMID:29438426

  10. Application of snakes and dynamic programming optimisation technique in modeling of buildings in informal settlement areas

    NASA Astrophysics Data System (ADS)

    Rüther, Heinz; Martine, Hagai M.; Mtalo, E. G.

    This paper presents a novel approach to semiautomatic building extraction in informal settlement areas from aerial photographs. The proposed approach uses a strategy of delineating buildings by optimising their approximate building contour position. Approximate building contours are derived automatically by locating elevation blobs in digital surface models. Building extraction is then effected by means of the snakes algorithm and the dynamic programming optimisation technique. With dynamic programming, the building contour optimisation problem is realized through a discrete multistage process and solved by the "time-delayed" algorithm, as developed in this work. The proposed building extraction approach is a semiautomatic process, with user-controlled operations linking fully automated subprocesses. Inputs into the proposed building extraction system are ortho-images and digital surface models, the latter being generated through image matching techniques. Buildings are modeled as "lumps" or elevation blobs in digital surface models, which are derived by altimetric thresholding of digital surface models. Initial windows for building extraction are provided by projecting the elevation blobs' centre points onto an ortho-image. In the next step, approximate building contours are extracted from the ortho-image by region growing constrained by edges. Approximate building contours thus derived are inputs into the dynamic programming optimisation process in which final building contours are established. The proposed system is tested on two study areas: Marconi Beam in Cape Town, South Africa, and Manzese in Dar es Salaam, Tanzania. Sixty percent of buildings in the study areas have been extracted and verified, and it is concluded that the proposed approach contributes meaningfully to the extraction of buildings in moderately complex and crowded informal settlement areas.
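
    The discrete multistage optimisation at the heart of the approach can be sketched as a Viterbi-style dynamic programme: one candidate position is chosen per contour point so that the sum of image (unary) costs and smoothness (pairwise) costs between consecutive points is minimised. This generic open-contour sketch is an assumption for illustration; it omits the paper's "time-delayed" handling of closed contours.

```python
def dp_contour(unary, pairwise):
    """Multistage DP over an open contour.

    unary[t][k]    : cost of candidate position k at contour point t
    pairwise(a, b) : smoothness cost between consecutive candidates a and b
    Returns the minimum-cost sequence of candidate indices.
    """
    T, K = len(unary), len(unary[0])
    cost = [list(unary[0])] + [[0.0] * K for _ in range(T - 1)]
    back = [[0] * K for _ in range(T)]
    for t in range(1, T):
        for k in range(K):
            best_j = min(range(K), key=lambda j: cost[t - 1][j] + pairwise(j, k))
            back[t][k] = best_j
            cost[t][k] = cost[t - 1][best_j] + pairwise(best_j, k) + unary[t][k]
    # Backtrack from the cheapest final candidate.
    k = min(range(K), key=lambda j: cost[-1][j])
    path = [k]
    for t in range(T - 1, 0, -1):
        k = back[t][k]
        path.append(k)
    return path[::-1]
```

    In the building-extraction setting, the unary cost would come from image evidence (e.g. edge strength) along the approximate contour and the pairwise cost would enforce a smooth outline.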

  11. Sequential projection pursuit for optimised vibration-based damage detection in an experimental wind turbine blade

    NASA Astrophysics Data System (ADS)

    Hoell, Simon; Omenzetter, Piotr

    2018-02-01

    To advance the concept of smart structures in large systems, such as wind turbines (WTs), it is desirable to be able to detect structural damage early while using minimal instrumentation. Data-driven vibration-based damage detection methods can be competitive in that respect because global vibrational responses encompass the entire structure. Multivariate damage-sensitive features (DSFs) extracted from acceleration responses enable the detection of changes in a structure via statistical methods. However, even though such DSFs contain information about the structural state, they may not be optimised for the damage detection task. This paper addresses this shortcoming by exploring a DSF projection technique specialised for statistical structural damage detection. High-dimensional initial DSFs are projected onto a low-dimensional space for improved damage detection performance and simultaneous computational burden reduction. The technique is based on sequential projection pursuit where the projection vectors are optimised one by one using an advanced evolutionary strategy. The approach is applied to laboratory experiments with a small-scale WT blade under wind-like excitations. Autocorrelation function coefficients calculated from acceleration signals are employed as DSFs. The optimal numbers of projection vectors are identified with the help of a fast forward selection procedure. To benchmark the proposed method, selections of original DSFs as well as principal component analysis scores from these features are additionally investigated. The optimised DSFs are tested for damage detection on previously unseen data from the healthy state and a wide range of damage scenarios. It is demonstrated that using selected subsets of the initial and transformed DSFs improves damage detectability compared to the full set of features. Furthermore, superior results can be achieved by projecting autocorrelation coefficients onto just a single optimised projection vector.
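
    The core idea of optimising a single projection vector for damage detection can be sketched with a crude random search in place of the paper's evolutionary strategy. Everything below (the Fisher-like separation criterion, the random-search optimiser, all names) is an illustrative assumption, not the published method.

```python
import random

def best_projection(healthy, damaged, dim, n_trials=2000, seed=1):
    """Random-search a unit projection vector maximising a Fisher-like
    separation between projected healthy and damaged feature vectors;
    a crude stand-in for the paper's evolutionary optimisation."""
    rng = random.Random(seed)

    def project(w, xs):
        return [sum(wi * xi for wi, xi in zip(w, x)) for x in xs]

    def fisher(w):
        h, d = project(w, healthy), project(w, damaged)
        mh, md = sum(h) / len(h), sum(d) / len(d)
        vh = sum((x - mh) ** 2 for x in h) / len(h)
        vd = sum((x - md) ** 2 for x in d) / len(d)
        return (mh - md) ** 2 / (vh + vd + 1e-12)

    best_w, best_s = None, -1.0
    for _ in range(n_trials):
        w = [rng.gauss(0.0, 1.0) for _ in range(dim)]
        norm = sum(wi * wi for wi in w) ** 0.5
        w = [wi / norm for wi in w]
        s = fisher(w)
        if s > best_s:
            best_w, best_s = w, s
    return best_w, best_s
```

    With autocorrelation coefficients as the feature vectors, the scalar projection plays the role of a damage index whose distribution shift is then assessed statistically.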

  12. Developing Intervention Strategies to Optimise Body Composition in Early Childhood in South Africa

    PubMed Central

    Tomaz, Simone A.; Stone, Matthew; Hinkley, Trina; Jones, Rachel A.; Louw, Johann; Twine, Rhian; Kahn, Kathleen; Norris, Shane A.

    2017-01-01

    Purpose. The purpose of this research was to collect data to inform intervention strategies to optimise body composition in South African preschool children. Methods. Data were collected in urban and rural settings. Weight status, physical activity, and gross motor skill assessments were conducted with 341 3–6-year-old children, and 55 teachers and parents/caregivers participated in focus groups. Results. Overweight and obesity were a concern in low-income urban settings (14%), but levels of physical activity and gross motor skills were adequate across all settings. Focus group findings from urban and rural settings indicated that teachers would welcome input on leading activities to promote physical activity and gross motor skill development. Teachers and parents/caregivers were also positive about young children being physically active. Recommendations for potential intervention strategies include a teacher-training component, parent/child activity mornings, and a home-based component for parents/caregivers. Conclusion. The findings suggest that an intervention focussed on increasing physical activity and improving gross motor skills per se is largely not required, but that contextually relevant physical activity and gross motor skill promotion may still be useful for supporting healthy weight and may serve as a vehicle for engaging with teachers and parents/caregivers to promote other child outcomes, such as cognitive development. PMID:28194417

  13. Optimising Laser Tattoo Removal

    PubMed Central

    Sardana, Kabir; Ranjan, Rashmi; Ghunawat, Sneha

    2015-01-01

    Lasers are the standard modality for tattoo removal. Though there are various factors that determine the results, we have divided them into three logical headings, laser dependant factors such as type of laser and beam modifications, tattoo dependent factors like size and depth, colour of pigment and lastly host dependent factors, which includes primarily the presence of a robust immune response. Modifications in the existing techniques may help in better clinical outcome with minimal risk of complications. This article provides an insight into some of these techniques along with a detailed account of the factors involved in tattoo removal. PMID:25949018

  14. A Strategy-Based Approach towards Optimising Research Output

    ERIC Educational Resources Information Center

    Lues, L.

    2013-01-01

    The South African higher education fraternity has experienced an outflow of senior research capacity during the past decade, resulting in a large influx of younger and less-published academics. More emphasis is therefore placed on the role of the central institution in ensuring research output. The Faculty of Economic and Management Sciences at a…

  15. "Talent Circulators" in Shanghai: Return Migrants and Their Strategies for Success

    ERIC Educational Resources Information Center

    Huang, Yedan; Kuah-Pearce, Khun Eng

    2015-01-01

    This paper argues for a flexible identity and citizenship framework to explore how return migrants, "haigui," have readapted and re-established themselves back into Shanghai society, and how they have used their talents, knowledge and "guanxi" networks to optimise their chances of success. It argues that these return migrants,…

  16. Autonomy, Professionalism and Management Structure in the German University System

    ERIC Educational Resources Information Center

    Bayer, Ingo

    2011-01-01

    Declining public finances and ever increasing national and international competition force state-owned German universities to adapt to an increasingly competitive environment. In a first phase the universities have concentrated their efforts on the optimisation of budgeting processes and on the development of strategies and goals to come to a more…

  17. Modelling of transitions between L- and H-mode in JET high plasma current plasmas and application to ITER scenarios including tungsten behaviour

    NASA Astrophysics Data System (ADS)

    Koechl, F.; Loarte, A.; Parail, V.; Belo, P.; Brix, M.; Corrigan, G.; Harting, D.; Koskela, T.; Kukushkin, A. S.; Polevoi, A. R.; Romanelli, M.; Saibene, G.; Sartori, R.; Eich, T.; Contributors, JET

    2017-08-01

    The dynamics of the transition from L-mode to a stationary high Q DT H-mode regime in ITER are expected to be qualitatively different from those in present experiments. Differences may be caused, on the one hand, by the low fuelling efficiency of recycling neutrals, which influences the post-transition plasma density evolution; on the other hand, the effect of the plasma density evolution itself both on the alpha heating power and on the edge power flow required to sustain H-mode confinement needs to be considered. This paper presents results of modelling studies of the transition to a stationary high Q DT H-mode regime in ITER with the JINTRAC suite of codes, which include optimisation of the plasma density evolution to ensure robust achievement of high Q DT regimes in ITER on the one hand and avoidance of tungsten accumulation in this transient phase on the other. As a first step, the JINTRAC integrated models have been validated in fully predictive simulations (excluding core momentum transport, which is prescribed) against core, pedestal and divertor plasma measurements in JET C-wall experiments for the transition from L-mode to stationary H-mode in partially ITER-relevant conditions (highest achievable current and power, H 98,y ~ 1.0, low collisionality, comparable evolution in P net/P L-H, but different ρ *, T i/T e, Mach number and plasma composition compared to ITER expectations). The selection of transport models (core: NCLASS + Bohm/gyroBohm in L-mode, GLF23 in H-mode) was determined by a trade-off between model complexity and efficiency.
Good agreement between code predictions and measured plasma parameters is obtained if anomalous heat and particle transport in the edge transport barrier are assumed to be reduced at different rates with increasing edge power flow normalised to the H-mode threshold; in particular the increase in edge plasma density is dominated by this edge transport reduction as the calculated neutral influx across the separatrix remains unchanged (or even slightly decreases) following the H-mode transition. JINTRAC modelling of H-mode transitions for the ITER 15 MA / 5.3 T high Q DT scenarios with the same modelling assumptions as those being derived from JET experiments has been carried out. The modelling finds that it is possible to access high Q DT conditions robustly for additional heating power levels of P AUX  ⩾  53 MW by optimising core and edge plasma fuelling in the transition from L-mode to high Q DT H-mode. An initial period of low plasma density, in which the plasma accesses the H-mode regime and the alpha heating power increases, needs to be considered after the start of the additional heating, which is then followed by a slow density ramp. Both the duration of the low density phase and the density ramp-rate depend on boundary and operational conditions and can be optimised to minimise the resistive flux consumption in this transition phase. The modelling also shows that fuelling schemes optimised for a robust access to high Q DT H-mode in ITER are also optimum for the prevention of the contamination of the core plasma by tungsten during this phase.

  18. Global reaction mechanism for the auto-ignition of full boiling range gasoline and kerosene fuels

    NASA Astrophysics Data System (ADS)

    Vandersickel, A.; Wright, Y. M.; Boulouchos, K.

    2013-12-01

    Compact reaction schemes capable of predicting auto-ignition are a prerequisite for the development of strategies to control and optimise homogeneous charge compression ignition (HCCI) engines. In particular for full boiling range fuels exhibiting two-stage ignition, a tremendous demand exists in the engine development community. The present paper therefore meticulously assesses a previous 7-step reaction scheme developed to predict auto-ignition for four hydrocarbon blends and proposes an important extension of the model constant optimisation procedure, allowing the model to capture not only ignition delays, but also the evolutions of representative intermediates and heat release rates for a variety of full boiling range fuels. Additionally, an extensive validation of the latter evolutions against various detailed n-heptane reaction mechanisms from the literature is presented, both for perfectly homogeneous and for non-premixed/stratified HCCI conditions. Finally, the model's potential to simulate the auto-ignition of various full boiling range fuels is demonstrated by means of experimental shock tube data for six strongly differing fuels, containing e.g. up to 46.7% cyclo-alkanes, 20% naphthalenes or complex branched aromatics such as methyl- or ethyl-naphthalene. The good predictive capability observed for each of the validation cases, as well as the successful parameterisation for each of the six fuels, indicates that the model could, in principle, be applied to any hydrocarbon fuel, provided suitable adjustments to the model parameters are carried out. Combined with the optimisation strategy presented, the model therefore constitutes a major step towards the inclusion of real fuel kinetics into full scale HCCI engine simulations.

  19. Wind energy resource modelling in Portugal and its future large-scale alteration due to anthropogenic induced climate changes

    NASA Astrophysics Data System (ADS)

    Carvalho, David Joao da Silva

    Portugal's high dependence on foreign energy sources (mainly fossil fuels), together with the international commitments it has assumed, its national energy policy strategy, and resource sustainability and climate change concerns, inevitably forces the country to invest in its energetic self-sufficiency. The 20/20/20 Strategy defined by the European Union stipulates that, by 2020, 60% of total electricity consumption must come from renewable energy sources. Wind energy is currently a major source of electricity generation in Portugal, producing about 23% of the national total electricity consumption in 2013. The National Energy Strategy 2020 (ENE2020), which aims to ensure national compliance with the European 20/20/20 Strategy, states that about half of this 60% target will be provided by wind energy. This work aims to implement and optimise a numerical weather prediction model for the simulation and modelling of the wind energy resource in Portugal, in both offshore and onshore areas. The numerical model optimisation consisted in determining which initial and boundary conditions and which planetary boundary layer physical parameterizations provide wind power flux (or energy density), wind speed and direction simulations closest to in situ measured wind data. Specifically for offshore areas, it is also intended to evaluate whether the numerical model, once optimised, is able to produce power flux, wind speed and direction simulations more consistent with in situ measured data than wind measurements collected by satellites. This work also aims to study and analyse possible impacts that anthropogenic climate change may have on the future wind energy resource in Europe.
    The results show that, among all the forcing databases currently available to drive numerical weather prediction models, the ECMWF ERA-Interim reanalysis allows the wind power flux, wind speed and direction simulations most consistent with in situ wind measurements. It was also found that the Pleim-Xiu and ACM2 planetary boundary layer parameterizations showed the best performance in terms of wind power flux, wind speed and direction simulations. This model optimisation allowed a significant reduction of the errors in the simulated wind power flux, wind speed and direction and, specifically for offshore areas, yielded simulations more consistent with in situ wind measurements than data obtained from satellites, which is a very valuable achievement. This work also revealed that future anthropogenic climate changes can negatively impact the future European wind energy resource, owing to a tendency towards reduced wind speeds, especially by the end of the current century and under stronger radiative forcing conditions.

  20. Optimal strategy analysis based on robust predictive control for inventory system with random demand

    NASA Astrophysics Data System (ADS)

    Saputra, Aditya; Widowati, Sutrisno

    2017-12-01

    In this paper, the optimal strategy for a single-product, single-supplier inventory system with random demand is analyzed using robust predictive control with an additive random parameter. We formulate the dynamics of this system as a linear state space model with an additive random parameter. To determine and analyze the optimal strategy for the given inventory system, we use a robust predictive control approach, which gives the optimal strategy, i.e. the optimal product volume that should be purchased from the supplier in each time period so that the expected cost is minimal. A numerical simulation is performed with generated random inventory data in MATLAB, where the inventory level must be controlled as close as possible to a chosen set point. The results show that the robust predictive control model provides the optimal strategy, i.e. the optimal product volume that should be purchased, and that the inventory level followed the given set point.
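
    As a much simpler stand-in for the robust predictive controller, a one-step certainty-equivalent rule already exhibits the set-point-tracking behaviour the abstract describes: each period, order just enough that the expected post-demand inventory returns to the set point. All names and parameters below are illustrative assumptions.

```python
import random

def simulate_inventory(setpoint, horizon, demand_mean, demand_std, seed=0):
    """Order-up-to rule: each period, purchase enough to bring the expected
    inventory (after mean demand) back to the set point; demand is then a
    random draw and the inventory level is updated accordingly."""
    rng = random.Random(seed)
    inventory, levels = setpoint, []
    for _ in range(horizon):
        order = max(0.0, setpoint - inventory + demand_mean)
        demand = max(0.0, rng.gauss(demand_mean, demand_std))
        inventory += order - demand
        levels.append(inventory)
    return levels
```

    A predictive controller generalises this by optimising orders over a receding horizon and, in the robust variant, by accounting for the spread of the random demand rather than only its mean.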

  1. Integration of PGD-virtual charts into an engineering design process

    NASA Astrophysics Data System (ADS)

    Courard, Amaury; Néron, David; Ladevèze, Pierre; Ballere, Ludovic

    2016-04-01

    This article deals with the efficient construction of approximations of fields and quantities of interest used in the geometric optimisation of complex shapes encountered in engineering structures. The strategy developed herein is based on the construction of virtual charts that, once computed offline, allow the structure to be optimised at negligible online CPU cost. These virtual charts can be used as a powerful numerical decision support tool during the design of industrial structures. They are built using the proper generalized decomposition (PGD), which offers a very convenient framework for solving parametrised problems. In this paper, particular attention has been paid to the integration of the procedure into a genuine engineering design process. In particular, a dedicated methodology is proposed to interface the PGD approach with commercial software.

  2. An analytical fuzzy-based approach to L2-gain optimal control of input-affine nonlinear systems using Newton-type algorithm

    NASA Astrophysics Data System (ADS)

    Milic, Vladimir; Kasac, Josip; Novakovic, Branko

    2015-10-01

    This paper is concerned with L2-gain optimisation of input-affine nonlinear systems controlled by an analytic fuzzy logic system. Unlike conventional fuzzy-based strategies, the non-conventional analytic fuzzy control method does not require an explicit fuzzy rule base. As the first contribution of this paper, we prove, using the Stone-Weierstrass theorem, that the proposed fuzzy system without a rule base is a universal approximator. The second contribution is an algorithm for solving a finite-horizon minimax problem for L2-gain optimisation. The proposed algorithm combines a recursive chain rule for first- and second-order derivatives, Newton's method, the multi-step Adams method and automatic differentiation. Finally, the results of this paper are evaluated on a second-order nonlinear system.

  3. Radiation-Induced Noncancer Risks in Interventional Cardiology: Optimisation of Procedures and Staff and Patient Dose Reduction

    PubMed Central

    Khairuddin Md Yusof, Ahmad

    2013-01-01

    Concerns about ionizing radiation during interventional cardiology have increased in recent years as a result of rapid growth in interventional procedure volumes and the high radiation doses associated with some procedures. Noncancer radiation risks, in terms of radiation-induced cataracts for cardiologists and medical staff and skin injuries for patients, are clear potential consequences of interventional cardiology procedures, while the potential risk of radiation-induced cardiovascular effects remains less clear. This paper provides an overview of evidence-based reviews of concerns about noncancer risks of radiation exposure in interventional cardiology. Strategies commonly undertaken to reduce radiation doses to both medical staff and patients during interventional cardiology procedures are discussed; optimisation of interventional cardiology procedures is highlighted. PMID:24027768

  4. An intelligent factory-wide optimal operation system for continuous production process

    NASA Astrophysics Data System (ADS)

    Ding, Jinliang; Chai, Tianyou; Wang, Hongfeng; Wang, Junwei; Zheng, Xiuping

    2016-03-01

    In this study, a novel intelligent factory-wide operation system for a continuous production process is designed to optimise the entire production process, which consists of multiple units; furthermore, this system is developed using process operational data to avoid the complexity of mathematical modelling of the continuous production process. The data-driven approach aims to specify the structure of the optimal operation system; in particular, the operational data of the process are used to formulate each part of the system. In this context, the domain knowledge of process engineers is utilised, and a closed-loop dynamic optimisation strategy, which combines feedback, performance prediction, feed-forward, and dynamic tuning schemes into a framework, is employed. The effectiveness of the proposed system has been verified using industrial experimental results.

  5. Optimal tyre usage for a Formula One car

    NASA Astrophysics Data System (ADS)

    Tremlett, A. J.; Limebeer, D. J. N.

    2016-10-01

    Variations in track temperature, surface conditions and layout have led tyre manufacturers to produce a range of rubber compounds for race events. Each compound has unique friction and durability characteristics. Efficient tyre management over a full race distance is a crucial component of a competitive race strategy. A minimum lap time optimal control calculation and a thermodynamic tyre wear model are used to establish optimal tyre warming and tyre usage strategies. Lap time sensitivities demonstrate that relatively small changes in control strategy can lead to significant reductions in the associated wear metrics. The illustrated methodology shows how vehicle setup parameters can be optimised for minimum tyre usage.

  6. Behavioural surveillance: the value of national coordination

    PubMed Central

    McGarrigle, C; Fenton, K; Gill, O; Hughes, G; Morgan, D; Evans, B

    2002-01-01

    Behavioural surveillance programmes have enabled the description of population patterns of risk behaviours for STI and HIV transmission and aid in the understanding of how epidemics of STI are generated. They have been instrumental in helping to refine public health interventions and inform the targeting of sexual health promotion and disease control strategies. The formalisation and coordination of behavioural surveillance in England and Wales could optimise our ability to measure the impact of interventions and health promotion strategies on behaviour. This will be particularly useful for monitoring the progress towards specific disease control targets set in the Department of Health's new Sexual Health and HIV Strategy. PMID:12473798

  7. Creating a Culture: How School Leaders Can Optimise Behaviour

    ERIC Educational Resources Information Center

    Bennett, Tom

    2017-01-01

    The national picture of school behaviour is complex, but numerous indicators suggest that it can be better in a great number of schools and contexts. Every leader should consciously aspire to the very best behaviour possible in their schools as a matter of priority. There are a number of strategies that schools with outstanding behaviour use…

  8. Red flags: a case series of clinician-family communication challenges in the context of CHD.

    PubMed

    Sekar, Priya; Marcus, Katie L; Williams, Erin P; Boss, Renee D

    2017-07-01

    We describe three cases of newborns with complex CHD characterised by communication challenges. These communication challenges were categorised as patient-, family-, or system-related red flags. Strategies for addressing these red flags were proposed, with the goal of optimising care and improving quality of life in this vulnerable population.

  9. Optimising the Efficacy of Hybrid Academic Teams: Lessons from a Systematic Review Process

    ERIC Educational Resources Information Center

    Lake, Warren; Wallin, Margie; Boyd, Bill; Woolcott, Geoff; Markopoulos, Christos; Boyd, Wendy; Foster, Alan

    2018-01-01

    Undertaking a systematic review can have many benefits, beyond any theoretical or conceptual discoveries pertaining to the underlying research question. This paper explores the value of utilising a hybrid academic team when undertaking the systematic review process, and shares a range of practical strategies. The paper also comments on how such a…

  10. Multi-point objective-oriented sequential sampling strategy for constrained robust design

    NASA Astrophysics Data System (ADS)

    Zhu, Ping; Zhang, Siliang; Chen, Wei

    2015-03-01

    Metamodelling techniques are widely used to approximate system responses of expensive simulation models. In association with the use of metamodels, objective-oriented sequential sampling methods have been demonstrated to be effective in balancing the need for searching an optimal solution versus reducing the metamodelling uncertainty. However, existing infilling criteria are developed for deterministic problems and restricted to one sampling point in one iteration. To exploit the use of multiple samples and identify the true robust solution in fewer iterations, a multi-point objective-oriented sequential sampling strategy is proposed for constrained robust design problems. In this article, earlier development of objective-oriented sequential sampling strategy for unconstrained robust design is first extended to constrained problems. Next, a double-loop multi-point sequential sampling strategy is developed. The proposed methods are validated using two mathematical examples followed by a highly nonlinear automotive crashworthiness design example. The results show that the proposed method can mitigate the effect of both metamodelling uncertainty and design uncertainty, and identify the robust design solution more efficiently than the single-point sequential sampling approach.

  11. Multi-objective optimisation of aircraft flight trajectories in the ATM and avionics context

    NASA Astrophysics Data System (ADS)

    Gardi, Alessandro; Sabatini, Roberto; Ramasamy, Subramanian

    2016-05-01

    The continuous increase of air transport demand worldwide and the push for a more economically viable and environmentally sustainable aviation are driving significant evolutions of aircraft, airspace and airport systems design and operations. Although extensive research has been performed on the optimisation of aircraft trajectories and very efficient algorithms were widely adopted for the optimisation of vertical flight profiles, it is only in the last few years that higher levels of automation were proposed for integrated flight planning and re-routing functionalities of innovative Communication Navigation and Surveillance/Air Traffic Management (CNS/ATM) and Avionics (CNS+A) systems. In this context, the implementation of additional environmental targets and of multiple operational constraints introduces the need to efficiently deal with multiple objectives as part of the trajectory optimisation algorithm. This article provides a comprehensive review of Multi-Objective Trajectory Optimisation (MOTO) techniques for transport aircraft flight operations, with a special focus on the recent advances introduced in the CNS+A research context. In the first section, a brief introduction is given, together with an overview of the main international research initiatives where this topic has been studied, and the problem statement is provided. The second section introduces the mathematical formulation and the third section reviews the numerical solution techniques, including discretisation and optimisation methods for the specific problem formulated. The fourth section summarises the strategies to articulate the preferences and to select optimal trajectories when multiple conflicting objectives are introduced. 
The fifth section introduces a number of models defining the optimality criteria and constraints typically adopted in MOTO studies, including fuel consumption, air pollutant and noise emissions, operational costs, condensation trails, airspace and airport operations. A brief overview of atmospheric and weather modelling is also included. Key equations describing the optimality criteria are presented, with a focus on the latest advancements in the respective application areas. In the sixth section, a number of MOTO implementations in the CNS+A systems context are mentioned with relevant simulation case studies addressing different operational tasks. The final section draws some conclusions and outlines guidelines for future research on MOTO and associated CNS+A system implementations.
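The trajectory-selection step with multiple conflicting objectives ultimately reduces to identifying non-dominated solutions. As a generic illustration (not code from any MOTO system reviewed in the article), a minimal Pareto-dominance filter might look like the sketch below; the (fuel, time) values are invented for illustration:

```python
def pareto_front(points):
    """Keep the non-dominated points, assuming every objective is minimised.

    A point p is dominated if some other point q is no worse in all
    objectives and strictly better in at least one.
    """
    front = []
    for p in points:
        dominated = any(
            all(q[i] <= p[i] for i in range(len(p))) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# hypothetical (fuel burn, flight time) values for five candidate trajectories
candidates = [(1.0, 5.0), (2.0, 2.0), (5.0, 1.0), (4.0, 4.0), (3.0, 3.0)]
front = pareto_front(candidates)  # (4,4) and (3,3) are dominated by (2,2)
```

A decision-maker would then articulate preferences only over the returned front rather than over all candidates.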

  12. A novel swarm intelligence algorithm for finding DNA motifs.

    PubMed

    Lei, Chengwei; Ruan, Jianhua

    2009-01-01

    Discovering DNA motifs from co-expressed or co-regulated genes is an important step towards deciphering complex gene regulatory networks and understanding gene functions. Despite significant improvement in the last decade, it still remains one of the most challenging problems in computational molecular biology. In this work, we propose a novel motif finding algorithm that finds consensus patterns using a population-based stochastic optimisation technique called Particle Swarm Optimisation (PSO), which has been shown to be effective in optimising difficult multidimensional problems in continuous domains. We propose to use a word dissimilarity graph to remap the neighborhood structure of the solution space of DNA motifs, and propose a modification of the naive PSO algorithm to accommodate discrete variables. In order to improve efficiency, we also propose several strategies for escaping from local optima and for automatically determining the termination criteria. Experimental results on simulated challenge problems show that our method is both more efficient and more accurate than several existing algorithms. Applications to several sets of real promoter sequences also show that our approach is able to detect known transcription factor binding sites, and outperforms two of the most popular existing algorithms.
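The core technique here, particle swarm optimisation, is easy to sketch in its standard continuous form; the paper's actual contribution is a discrete adaptation over a word dissimilarity graph, which the toy below does not attempt. All parameter values are conventional PSO defaults, not the authors':

```python
import random

def pso(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0):
    """Minimise f over the box [lo, hi]^dim with a basic particle swarm."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # velocity update: inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# usage: minimise the 3-D sphere function
# best, val = pso(lambda x: sum(xi * xi for xi in x), dim=3)
```

Adapting this to motif finding requires replacing the continuous positions with discrete consensus words and redefining the "pull" moves on the dissimilarity graph, as the paper describes.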

  13. Efficient processing of multiple nested event pattern queries over multi-dimensional event streams based on a triaxial hierarchical model.

    PubMed

    Xiao, Fuyuan; Aritsugi, Masayoshi; Wang, Qing; Zhang, Rong

    2016-09-01

    For efficient and sophisticated analysis of complex event patterns that appear in streams of big data from health care information systems, and to support decision-making, a triaxial hierarchical model is proposed in this paper. Our triaxial hierarchical model is developed by focusing on hierarchies among nested event pattern queries with an event concept hierarchy, thereby allowing us to identify the relationships among the expressions and sub-expressions of the queries extensively. We devise a cost-based heuristic by means of the triaxial hierarchical model to find an optimised query execution plan in terms of the costs of both the operators and the communications between them. According to the triaxial hierarchical model, we can also calculate how to reuse the results of the common sub-expressions in multiple queries. By integrating the optimised query execution plan with the reuse schemes, a multi-query optimisation strategy is developed to accomplish efficient processing of multiple nested event pattern queries. We present empirical studies in which the performance of the multi-query optimisation strategy was examined under various stream input rates and workloads. Specifically, the workloads of pattern queries can be used to support the monitoring of patients' conditions. On the other hand, experiments with varying stream input rates correspond to changes in the number of patients that a system must manage, whereas burst input rates correspond to sudden rushes of patients to be taken care of. The experimental results show that our proposal achieved about 4 and 2 times the throughput of the related works in Workload 1, about 3 and 2 times their throughput in Workload 2, and about 6 times the throughput of the related work in Workload 3. 
The experimental results demonstrated that our proposal was able to process complex queries efficiently, which can support health information systems and further decision-making. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. A world without bacterial meningitis: how genomic epidemiology can inform vaccination strategy.

    PubMed

    Rodrigues, Charlene M C; Maiden, Martin C J

    2018-01-01

    Bacterial meningitis remains an important cause of global morbidity and mortality. Although effective vaccinations exist and are being increasingly used worldwide, bacterial diversity threatens their impact and the ultimate goal of eliminating the disease. Through genomic epidemiology, we can appreciate bacterial population structure and its consequences for transmission dynamics, virulence, antimicrobial resistance, and development of new vaccines. Here, we review what we have learned through genomic epidemiological studies, following the rapid implementation of whole genome sequencing that can help to optimise preventative strategies for bacterial meningitis.

  15. An Integrated Environmental Assessment of Green and Gray Infrastructure Strategies for Robust Decision Making.

    PubMed

    Casal-Campos, Arturo; Fu, Guangtao; Butler, David; Moore, Andrew

    2015-07-21

    The robustness of a range of watershed-scale "green" and "gray" drainage strategies in the future is explored through comprehensive modeling of a fully integrated urban wastewater system case. Four socio-economic future scenarios, defined by parameters affecting the environmental performance of the system, are proposed to account for the uncertain variability of conditions in the year 2050. A regret-based approach is applied to assess the relative performance of strategies in multiple impact categories (environmental, economic, and social) as well as to evaluate their robustness across future scenarios. The concept of regret proves useful in identifying performance trade-offs and recognizing states of the world most critical to decisions. The study highlights the robustness of green strategies (particularly rain gardens, resulting in half the regret of most options) over end-of-pipe gray alternatives (surface water separation or sewer and storage rehabilitation), which may be costly (on average, 25% of the total regret of these options) and tend to focus on sewer flooding and CSO alleviation while compromising on downstream system performance (this accounts for around 50% of their total regret). Trade-offs and scenario regrets observed in the analysis suggest that the combination of green and gray strategies may still offer further potential for robustness.
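The regret mechanics behind this kind of robust comparison can be sketched generically: compute each strategy's regret in each scenario (its cost minus the best cost achievable in that scenario), then pick the strategy with the smallest worst-case regret. The strategies and scenario costs below are invented placeholders, not the study's modelled impacts:

```python
# Hypothetical lifetime costs of each drainage strategy under four future
# scenarios (illustrative numbers only; lower is better).
costs = {
    "rain_gardens":     {"markets": 3, "austerity": 4, "green": 2, "status_quo": 3},
    "sewer_separation": {"markets": 5, "austerity": 7, "green": 6, "status_quo": 4},
    "storage_rehab":    {"markets": 4, "austerity": 6, "green": 5, "status_quo": 3},
}

def minimax_regret(costs):
    """Return the strategy with the smallest worst-case regret, plus the
    worst-case regret of every strategy."""
    scenarios = next(iter(costs.values())).keys()
    best = {s: min(c[s] for c in costs.values()) for s in scenarios}
    regret = {name: max(c[s] - best[s] for s in scenarios)
              for name, c in costs.items()}
    return min(regret, key=regret.get), regret

choice, regret = minimax_regret(costs)  # which strategy is most robust?
```

The per-scenario regrets also reveal which states of the world drive the decision, which is the diagnostic use the study emphasises.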

  16. The current situation in education and training of health-care professionals across Africa to optimise the delivery of palliative care for cancer patients

    PubMed Central

    Rawlinson, FM; Gwyther, L; Kiyange, F; Luyirika, E; Meiring, M; Downing, J

    2014-01-01

    The need for palliative care education remains vital to contribute to the quality of life of patients, both adults and children, with cancer in Africa. The number of patients with cancer continues to rise, and with them the burden of palliative care needs. Palliative care has been present in Africa for nearly four decades, and a number of services are developing in response to the HIV/AIDS epidemic. However, the needs of cancer patients remain a challenge. Education and training initiatives have developed throughout this time, using a combination of educational methods, including, more recently, e-learning initiatives. The role of international and national organisations in supporting education has been pivotal in developing models of education and training that are robust, sustainable, and affordable. The development of materials for education and professional development needs to continue in close collaboration with those already in production in order to optimise available resources. Seeking ways to evaluate programmes in terms of their impact on patient care remains an important part of programme delivery. This article reviews the current situation. PMID:25624873

  17. Model of head-neck joint fast movements in the frontal plane.

    PubMed

    Pedrocchi, A; Ferrigno, G

    2004-06-01

    The objective of this work is to develop a model representing the physiological systems driving fast head movements in the frontal plane. All the contributions occurring mechanically in the head movement are considered: damping, stiffness, physiological limit of range of motion, gravitational field, and muscular torques due to voluntary activation as well as to stretch reflex depending on fusal afferences. Model parameters are partly derived from the literature, when possible, whereas undetermined block parameters are determined by optimising the model output, fitting it to real kinematics data acquired by a motion capture system in specific experimental set-ups. The optimisation for parameter identification is performed by genetic algorithms. Results show that the model represents fast head movements very well over the whole range of inclination in the frontal plane. Such a model could be proposed as a tool for transforming kinematics data on head movements into 'neural equivalent data', especially for assessing head control disease and properly planning the rehabilitation process. In addition, the use of genetic algorithms seems to fit the problem of parameter identification well, allowing for the use of a very simple experimental set-up and granting model robustness.
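Genetic-algorithm parameter identification of the kind used here can be sketched on a toy problem: fit the parameters of a model so that its output matches recorded data. The sketch below fits a linear model rather than the paper's head-neck dynamics, and all GA settings (population size, elitism fraction, mutation width) are illustrative:

```python
import random

def sse(params, xs, ys):
    """Sum of squared errors of the linear model y = a*x + b against data."""
    a, b = params
    return sum((a * x + b - y) ** 2 for x, y in zip(xs, ys))

def ga_fit(xs, ys, pop_size=40, gens=80, sigma=0.2):
    """Identify (a, b) with a simple elitist genetic algorithm."""
    pop = [(random.uniform(-5, 5), random.uniform(-5, 5))
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: sse(p, xs, ys))
        elite = pop[: pop_size // 2]              # keep the fitter half
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = random.sample(elite, 2)      # blend crossover + mutation
            children.append(tuple((g1 + g2) / 2 + random.gauss(0, sigma)
                                  for g1, g2 in zip(p1, p2)))
        pop = elite + children
    return min(pop, key=lambda p: sse(p, xs, ys))
```

In the paper's setting, `sse` would instead run the head-movement model forward and compare its simulated kinematics to the motion-capture data.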

  18. Capacity-optimized mp2 audio watermarking

    NASA Astrophysics Data System (ADS)

    Steinebach, Martin; Dittmann, Jana

    2003-06-01

    Today a number of audio watermarking algorithms have been proposed, some of them at a quality making them suitable for commercial applications. The focus of most of these algorithms is copyright protection. Therefore, transparency and robustness are the most discussed and optimised parameters. But other applications for audio watermarking can also be identified, stressing other parameters like complexity or payload. In our paper, we introduce a new mp2 audio watermarking algorithm optimised for high payload. Our algorithm uses the scale factors of an mp2 file for watermark embedding. They are grouped and masked based on a pseudo-random pattern generated from a secret key. In each group, we embed one bit. Depending on the bit to embed, we add 1 to scale factors where necessary until the group contains either more odd or more even scale factors. A group with an odd majority carries a 1, one with an even majority a 0. The same rule is later applied to detect the watermark. The group size can be increased or decreased for a transparency/payload trade-off. We embed 160 bits or more per second in an mp2 file without reducing perceived quality. As an application example, we introduce a prototypic karaoke system displaying song lyrics embedded as a watermark.
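The parity-based embedding rule described in the abstract can be illustrated directly. The sketch below operates on a plain list of integers standing in for scale factors; a real implementation would parse and rewrite the mp2 scale-factor tables, and the grouping would be key-driven rather than sequential:

```python
def embed(scale_factors, bits, group_size=8):
    """Embed bits by nudging values (+1 at most) until each group of
    scale factors has an odd-value majority (bit 1) or even-value
    majority (bit 0)."""
    sf = list(scale_factors)
    for k, bit in enumerate(bits):
        group = range(k * group_size, (k + 1) * group_size)
        want_odd = (bit == 1)
        for i in group:
            odd = sum(sf[j] % 2 for j in group)
            even = group_size - odd
            if (odd > even) == want_odd and odd != even:
                break                       # desired majority reached
            if (sf[i] % 2 == 1) != want_odd:
                sf[i] += 1                  # flip this value's parity
    return sf

def extract(sf, n_bits, group_size=8):
    """Recover bits by re-applying the majority rule."""
    bits = []
    for k in range(n_bits):
        group = range(k * group_size, (k + 1) * group_size)
        odd = sum(sf[j] % 2 for j in group)
        bits.append(1 if odd > group_size - odd else 0)
    return bits
```

Each scale factor changes by at most 1, which is the transparency argument; larger groups need fewer relative changes but carry fewer bits per second.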

  19. A radiobiology-based inverse treatment planning method for optimisation of permanent l-125 prostate implants in focal brachytherapy

    NASA Astrophysics Data System (ADS)

    Haworth, Annette; Mears, Christopher; Betts, John M.; Reynolds, Hayley M.; Tack, Guido; Leo, Kevin; Williams, Scott; Ebert, Martin A.

    2016-01-01

    Treatment plans for ten patients, initially treated with a conventional approach to low dose-rate brachytherapy (LDR, 145 Gy to entire prostate), were compared with plans for the same patients created with an inverse-optimisation planning process utilising a biologically-based objective. The ‘biological optimisation’ considered a non-uniform distribution of tumour cell density through the prostate based on known and expected locations of the tumour. Using dose planning-objectives derived from our previous biological-model validation study, the volume of the urethra receiving 125% of the conventional prescription (145 Gy) was reduced from a median value of 64% to less than 8% whilst maintaining high values of tumour control probability (TCP). On average, the number of planned seeds was reduced from 85 to less than 75. The robustness of plans to random seed displacements needs to be carefully considered when using contemporary seed placement techniques. We conclude that an inverse planning approach to LDR treatments, based on a biological objective, has the potential to maintain high rates of tumour control whilst minimising dose to healthy tissue. In future, the radiobiological model will be informed using multi-parametric MRI to provide a personalised medicine approach.

  20. The tensor network theory library

    NASA Astrophysics Data System (ADS)

    Al-Assam, S.; Clark, S. R.; Jaksch, D.

    2017-09-01

    In this technical paper we introduce the tensor network theory (TNT) library—an open-source software project aimed at providing a platform for rapidly developing robust, easy to use and highly optimised code for TNT calculations. The objectives of this paper are (i) to give an overview of the structure of the TNT library, and (ii) to help scientists decide whether to use the TNT library in their research. We show how to employ the TNT routines by giving examples of ground-state and dynamical calculations of a one-dimensional bosonic lattice system. We also discuss different options for gaining access to the software available at www.tensornetworktheory.org.

  1. The evolution of acute burn care - retiring the split skin graft.

    PubMed

    Greenwood, J E

    2017-07-01

    The skin graft was born in 1869 and since then, surgeons have been using split skin grafts for wound repair. Nevertheless, this asset fails the big burn patient, who deserves an elastic, mobile and robust outcome but who receives the poorest possible outcome based on donor site paucity. Negating the need for the skin graft requires an autologous composite cultured skin and a material capable of temporising the burn wound for four weeks until the composite is produced. A novel, biodegradable polyurethane chemistry has been used to create two such products. This paper describes the design, production, optimisation and evaluation of several iterations of these products. The evaluation has occurred in a variety of models, both in vitro and in vivo, employing Hunterian scientific principles, and embracing Hunter's love and appreciation of comparative anatomy. The process has culminated in significant human experience in complex wounds and extensive burn injury. Used serially, the products offer robust and elastic healing in deep burns of any size within 6 weeks of injury.

  2. Virtual tryout planning in automotive industry based on simulation metamodels

    NASA Astrophysics Data System (ADS)

    Harsch, D.; Heingärtner, J.; Hortig, D.; Hora, P.

    2016-11-01

    Deep drawn sheet metal parts are increasingly designed to the feasibility limit, so achieving robust manufacturing is often challenging. The fluctuation of process and material properties often leads to robustness problems. Therefore, numerical simulations are used to detect the critical regions. To enhance the agreement with the real process conditions, the material data are acquired through a variety of experiments. Furthermore, the force distribution is taken into account. The simulation metamodel contains the virtual knowledge of a particular forming process, which is determined based on a series of finite element simulations with variable input parameters. Based on the metamodels, virtual process windows can be displayed for different configurations. This helps to improve the operating point as well as to adjust process settings in case the process becomes unstable. Furthermore, the time of tool tryout can be shortened by transferring the virtual knowledge contained in the metamodels to the optimisation of the drawbeads. This allows the tool manufacturer to focus on the essentials, save time, and recognise complex relationships.

  3. Recruitment and retention of young women into nutrition research studies: practical considerations.

    PubMed

    Leonard, Alecia; Hutchesson, Melinda; Patterson, Amanda; Chalmers, Kerry; Collins, Clare

    2014-01-16

    Successful recruitment and retention of participants into research studies is critical for optimising internal and external validity. Research into the diet and lifestyle of young women is important due to the physiological transitions experienced at this life stage. This paper aims to evaluate data related to recruitment and retention across three research studies with young women, and present practical advice related to recruiting and retaining young women in order to optimise study quality within nutrition research. Recruitment and retention strategies used in three nutrition studies that targeted young women (18 to 35 years) were critiqued. A randomised controlled trial (RCT), a crossover validation study and a cross-sectional survey were conducted at the University of Newcastle, Australia between 2010 and 2013. Successful recruitment was defined as maximum recruitment relative to time. Retention was assessed as maximum participants remaining enrolled at study completion. Recruitment approaches included notice boards, web and social network sites (Facebook and Twitter), with social media most successful in recruitment. The online survey had the highest recruitment in the shortest time-frame (751 participants in one month). Email, phone and text message contact was used in study one (RCT) and study two (crossover validation) and assisted in achieving low attrition rates, with 93% and 75.7% completing the RCT and crossover validation study respectively. Of those who did not complete the RCT, the reported reasons were being too busy and having an unrelated illness. Recruiting young women into nutrition research is challenging. Use of social media enhances recruitment, while email, phone and text message contact improves retention within interventions. Further research comparing strategies to optimise recruitment and retention in young women, including flexible testing times, reminders and incentives, is warranted.

  4. Cell population heterogeneity and evolution towards drug resistance in cancer: Biological and mathematical assessment, theoretical treatment optimisation.

    PubMed

    Chisholm, Rebecca H; Lorenzi, Tommaso; Clairambault, Jean

    2016-11-01

    Drug-induced drug resistance in cancer has been attributed to diverse biological mechanisms at the individual cell or cell population scale, relying on stochastically or epigenetically varying expression of phenotypes at the single cell level, and on the adaptability of tumours at the cell population level. We focus on intra-tumour heterogeneity, namely between-cell variability within cancer cell populations, to account for drug resistance. To shed light on such heterogeneity, we review evolutionary mechanisms that encompass the great evolution that has designed multicellular organisms, as well as smaller windows of evolution on the time scale of human disease. We also present mathematical models used to predict drug resistance in cancer and optimal control methods that can circumvent it in combined therapeutic strategies. Plasticity in cancer cells, i.e., partial reversal to a stem-like status in individual cells and resulting adaptability of cancer cell populations, may be viewed as backward evolution making cancer cell populations resistant to drug insult. This reversible plasticity is captured by mathematical models that incorporate between-cell heterogeneity through continuous phenotypic variables. Such models have the benefit of being compatible with optimal control methods for the design of optimised therapeutic protocols involving combinations of cytotoxic and cytostatic treatments with epigenetic drugs and immunotherapies. Gathering knowledge from cancer and evolutionary biology with physiologically based mathematical models of cell population dynamics should provide oncologists with a rationale to design optimised therapeutic strategies to circumvent drug resistance, that still remains a major pitfall of cancer therapeutics. This article is part of a Special Issue entitled "System Genetics" Guest Editor: Dr. Yudong Cai and Dr. Tao Huang. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. The Influence of Organisational Commitment, Job Involvement and Utility Perceptions on Trainees' Motivation to Improve Work through Learning

    ERIC Educational Resources Information Center

    von Treuer, Kathryn; McHardy, Katherine; Earl, Celisha

    2013-01-01

    Workplace training is a key strategy often used by organisations to optimise performance. Further, trainee motivation is a key determinant of the degree to which the material learned in a training programme will be transferred to the workplace, enhancing the performance of the trainee. This study investigates the relationship between several…

  6. An Integrated Qualitative and Quantitative Biochemical Model Learning Framework Using Evolutionary Strategy and Simulated Annealing.

    PubMed

    Wu, Zujian; Pang, Wei; Coghill, George M

    2015-01-01

    Both qualitative and quantitative model learning frameworks for biochemical systems have been studied in computational systems biology. In this research, after introducing two forms of pre-defined component patterns to represent biochemical models, we propose an integrative qualitative and quantitative modelling framework for inferring biochemical systems. In the proposed framework, interactions between reactants in the candidate models for a target biochemical system are evolved and eventually identified by the application of a qualitative model learning approach with an evolution strategy. Kinetic rates of the models generated from qualitative model learning are then further optimised by employing a quantitative approach with simulated annealing. Experimental results indicate that our proposed integrative framework can learn the relationships between biochemical reactants qualitatively and make the models replicate the behaviours of the target system by optimising the kinetic rates quantitatively. Moreover, potential reactants of a target biochemical system can be discovered by hypothesising complex reactants in the synthetic models. Based on the biochemical models learned from the proposed framework, biologists can further perform experimental study in wet laboratory. In this way, natural biochemical systems can be better understood.
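The quantitative half of such a framework, simulated annealing over kinetic rates, can be sketched on a toy one-parameter system. The "model" below is a single exponential decay rather than one of the paper's biochemical networks, and the cooling schedule, step size, and starting rate are all illustrative:

```python
import math
import random

def simulate(k, t_points):
    """Toy kinetics: concentration of a reactant decaying at rate k."""
    return [math.exp(-k * t) for t in t_points]

def anneal_rate(target, t_points, k0=3.0, t_init=1.0, alpha=0.95, steps=300):
    """Simulated annealing over a single kinetic rate constant."""
    def cost(k):
        return sum((a - b) ** 2 for a, b in zip(simulate(k, t_points), target))
    k, c = k0, cost(k0)
    best_k, best_c = k, c
    temp = t_init
    for _ in range(steps):
        cand = abs(k + random.gauss(0.0, 0.2))   # propose a nearby positive rate
        cc = cost(cand)
        # accept improvements always; accept worse moves with Boltzmann probability
        if cc < c or random.random() < math.exp((c - cc) / temp):
            k, c = cand, cc
            if cc < best_c:
                best_k, best_c = cand, cc
        temp *= alpha                            # geometric cooling
    return best_k, best_c
```

With several rates, `cand` becomes a vector proposal and `simulate` an ODE integration, but the accept/cool loop is unchanged.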

  7. Use of game-theoretical methods in biochemistry and biophysics.

    PubMed

    Schuster, Stefan; Kreft, Jan-Ulrich; Schroeter, Anja; Pfeiffer, Thomas

    2008-04-01

    Evolutionary game theory can be considered as an extension of the theory of evolutionary optimisation in that two or more organisms (or more generally, units of replication) tend to optimise their properties in an interdependent way. Thus, the outcome of the strategy adopted by one species (e.g., as a result of mutation and selection) depends on the strategy adopted by the other species. In this review, the use of evolutionary game theory for analysing biochemical and biophysical systems is discussed. The presentation is illustrated by a number of instructive examples such as the competition between microorganisms using different metabolic pathways for adenosine triphosphate production, the secretion of extracellular enzymes, the growth of trees and photosynthesis. These examples show that, due to conflicts of interest, the global optimum (in the sense of being the best solution for the whole system) is not always obtained. For example, some yeast species use metabolic pathways that waste nutrients, and in a dense tree canopy, trees grow taller than would be optimal for biomass productivity. From the viewpoint of game theory, the examples considered can be described by the Prisoner's Dilemma, snowdrift game, Tragedy of the Commons and rock-scissors-paper game.
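The ATP-production example maps onto the Prisoner's Dilemma. With an illustrative payoff matrix (the numbers below are invented, not measured yields), one can check mechanically that the wasteful strategy is a dominant best response even though mutual restraint would pay every player more:

```python
# Strategies: "C" = efficient but slow pathway (cooperate);
#             "D" = fast but wasteful pathway (defect).
# payoff[(own, other)] = (own payoff, other's payoff); numbers are illustrative.
payoff = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

def best_response(opponent):
    """The strategy maximising one's own payoff against a fixed opponent."""
    return max("CD", key=lambda s: payoff[(s, opponent)][0])

# Defecting is the best response to either strategy, so (D, D) is the Nash
# equilibrium, although (C, C) would give both players a higher payoff.
```

This is exactly the sense in which the global optimum is not reached: individual selection drives the population to the (D, D) corner.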

  8. Vibration isolation design for periodically stiffened shells by the wave finite element method

    NASA Astrophysics Data System (ADS)

    Hong, Jie; He, Xueqing; Zhang, Dayi; Zhang, Bing; Ma, Yanhong

    2018-04-01

    Periodically stiffened shell structures are widely used due to their excellent specific strength, in particular for aeronautical and astronautical components. This paper presents an improved Wave Finite Element Method (FEM) that can be employed to predict the band-gap characteristics of stiffened shell structures efficiently. An aero-engine casing, which is a typical periodically stiffened shell structure, was employed to verify the validity and efficiency of the Wave FEM. Good agreement has been found between the Wave FEM and the classical FEM for different boundary conditions. One effective wave selection method based on the Wave FEM has thus been put forward to filter the radial modes of a shell structure. Furthermore, an optimisation strategy combining the Wave FEM and a genetic algorithm is presented for periodically stiffened shell structures. The optimal out-of-plane band gap and the mass of the whole structure can be achieved by the optimisation strategy under an aerodynamic load. Results also indicate that geometric parameters of stiffeners can be selected such that the out-of-plane vibration attenuates significantly in the frequency band of interest. This study can provide valuable references for designing the band gaps of vibration isolation.

  9. Facial Expression Training Optimises Viewing Strategy in Children and Adults

    PubMed Central

    Pollux, Petra M. J.; Hall, Sophie; Guo, Kun

    2014-01-01

    This study investigated whether training-related improvements in facial expression categorization are facilitated by spontaneous changes in gaze behaviour in adults and nine-year-old children. Four sessions of a self-paced, free-viewing training task required participants to categorize happy, sad and fear expressions with varying intensities. No instructions about eye movements were given. Eye movements were recorded in the first and fourth training sessions. New faces were introduced in session four to establish transfer effects of learning. Adults focused most on the eyes in all sessions, and the increase in expression categorization accuracy after training coincided with a strengthening of this eye bias in gaze allocation. In children, training-related behavioural improvements coincided with an overall shift in gaze focus towards the eyes (resulting in more adult-like gaze distributions) and towards the mouth for happy faces in the second fixation. Gaze distributions were not influenced by expression intensity or by the introduction of new faces. It was proposed that training enhanced the use of a uniform, predominantly eyes-biased gaze strategy in children in order to optimise the extraction of relevant cues for discrimination between subtle facial expressions. PMID:25144680

  10. TU-H-CAMPUS-JeP3-01: Towards Robust Adaptive Radiation Therapy Strategies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boeck, M; KTH Royal Institute of Technology, Stockholm; Eriksson, K

    Purpose: To set up a framework combining robust treatment planning with adaptive reoptimization in order to maintain high treatment quality, to respond to interfractional variations and to identify those patients who will benefit the most from an adaptive fractionation schedule. Methods: We propose adaptive strategies based on stochastic minimax optimization for a series of simulated treatments on a one-dimensional patient phantom. The plan should be able to handle anticipated systematic and random errors and is applied during the first fractions. Information on the individual geometric variations is gathered at each fraction. At scheduled fractions, the impact of the measured errors on the delivered dose distribution is evaluated. For a patient that receives a dose that does not satisfy specified plan quality criteria, the plan is reoptimized based on these individual measurements using one of three different adaptive strategies. The reoptimized plan is then applied during future fractions until a new scheduled adaptation becomes necessary. In the first adaptive strategy, the measured systematic and random error scenarios and their assigned probabilities are updated to guide the robust reoptimization. The focus of the second strategy lies on varying the fraction of the worst scenarios taken into account during robust reoptimization. In the third strategy, the uncertainty margins around the target are recalculated with the measured errors. Results: By studying the effect of the three adaptive strategies combined with various adaptation schedules on the same patient population, the group which benefits from adaptation is identified together with the most suitable strategy and schedule. Preliminary computational results indicate when and how best to adapt for the three different strategies. 
Conclusion: A workflow is presented that provides robust adaptation of the treatment plan throughout the course of treatment and useful measures to identify patients in need of an adaptive treatment strategy.
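
The stochastic minimax idea, optimise the plan against the worst anticipated error scenario and then reoptimise once per-patient measurements narrow the scenario set, can be sketched in one dimension. All margins, weights and error values below are invented for illustration, not taken from the study.

```python
def worst_case_cost(margin, scenarios, miss_weight=10.0, dose_weight=1.0):
    """Worst-case (minimax) cost of a margin over geometric-error scenarios:
    target misses are penalised heavily, plus a price for a wide margin."""
    worst_miss = max(max(0.0, abs(e) - margin) for e in scenarios)
    return miss_weight * worst_miss + dose_weight * margin

def minimax_margin(scenarios, grid=None):
    """Pick the margin minimising the worst-case cost (grid search, 0-5 mm)."""
    grid = grid or [i * 0.1 for i in range(0, 51)]
    return min(grid, key=lambda m: worst_case_cost(m, scenarios))

# Plan robustly against anticipated errors of up to +/- 3 mm ...
planned = minimax_margin([-3.0, -1.0, 0.0, 1.0, 3.0])
# ... then adapt once measurements show this patient's errors are smaller.
adapted = minimax_margin([-0.8, 0.2, 0.5])
print(planned, adapted)  # the adapted margin is tighter than the planned one
```

The adaptive step here mirrors the first strategy in the abstract: the scenario set itself is replaced by the measured errors before reoptimising.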

  11. Explicit reference governor for linear systems

    NASA Astrophysics Data System (ADS)

    Garone, Emanuele; Nicotra, Marco; Ntogramatzidis, Lorenzo

    2018-06-01

    The explicit reference governor is a constrained control scheme that was originally introduced for generic nonlinear systems. This paper presents two explicit reference governor strategies that are specifically tailored for the constrained control of linear time-invariant systems subject to linear constraints. Both strategies are based on the idea of maintaining the system states within an invariant set which is entirely contained in the constraints. This invariant set can be constructed by exploiting either the Lyapunov inequality or modal decomposition. To improve the performance, we show that the two strategies can be combined by choosing at each time instant the least restrictive set. Numerical simulations illustrate that the proposed scheme achieves performances that are comparable to optimisation-based reference governors.
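
The invariant-set idea behind the explicit reference governor can be illustrated on a toy first-order plant (a sketch with made-up dynamics and constraint, not the paper's formulation): the applied reference v is advanced toward the desired reference r only while a Lyapunov-style distance margin to the constraint stays positive.

```python
def erg_step(x, v, r, dt=0.01, k=1.0):
    """One explicit-reference-governor step for the lag plant x' = -x + v
    with state constraint x <= x_max. For this plant |x - v| shrinks, so the
    set {|x - v| <= c} is invariant and lies in the constraint iff v + c <= x_max."""
    x_max = 1.0
    margin = (x_max - v) - abs(x - v)            # distance to constraint violation
    direction = 0.0 if r == v else (1.0 if r > v else -1.0)
    v = v + dt * k * max(0.0, margin) * direction  # move v toward r, speed ~ margin
    x = x + dt * (-x + v)                          # Euler step of the plant
    return x, v

# Ask for an infeasible reference r = 1.5; x must still never exceed x_max = 1.
xs, x, v = [], 0.0, 0.0
for _ in range(5000):
    x, v = erg_step(x, v, r=1.5)
    xs.append(x)
print(max(xs), xs[-1])  # constraint respected; state converges toward the boundary
```

The governor saturates the reference at the constraint-admissible value instead of clipping the state, which is the essence of the scheme described in the abstract.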

  12. Climate change adaptation and Integrated Water Resource Management in the water sector

    NASA Astrophysics Data System (ADS)

    Ludwig, Fulco; van Slobbe, Erik; Cofino, Wim

    2014-10-01

    Integrated Water Resources Management (IWRM) was introduced in the 1980s to better optimise water uses between different water-demanding sectors. However, since its introduction, water systems have become more complicated due to changes in the global water cycle as a result of climate change. The realization that climate change will have a significant impact on water availability and flood risks has driven research and policy making on adaptation. This paper discusses the main similarities and differences between climate change adaptation and IWRM. The main difference between the two is the focus on current and historic issues of IWRM compared to the (long-term) future focus of adaptation. One of the main problems of implementing climate change adaptation is the large uncertainties in future projections. Two completely different approaches to adaptation have been developed in response to these large uncertainties. A top-down approach based on large-scale biophysical impact analyses focusses on quantifying and minimizing uncertainty by using a large range of scenarios and different climate and impact models. The main problem with this approach is the propagation of uncertainties within the modelling chain. The opposite is the bottom-up approach, which basically ignores uncertainty. It focusses on reducing vulnerabilities, often at local scale, by developing resilient water systems. Both of these approaches, however, are difficult to integrate into water management. The bottom-up approach focuses too much on socio-economic vulnerability and too little on developing (technical) solutions. The top-down approach often results in an “explosion” of uncertainty and therefore complicates decision making. A more promising direction of adaptation would be a risk-based approach. Future research should further develop and test an approach which starts with developing adaptation strategies based on current and future risks. 
These strategies should then be evaluated using a range of future scenarios in order to develop robust adaptation measures and strategies.

  13. Breaking through the uncertainty ceiling in LA-ICP-MS U-Pb geochronology

    NASA Astrophysics Data System (ADS)

    Horstwood, M.

    2016-12-01

    Sources of systematic uncertainty associated with session-to-session bias are the dominant contributor to the 2% (2s) uncertainty ceiling that currently limits the accuracy of LA-ICP-MS U-Pb geochronology. Sources include differential downhole fractionation (LIEF), `matrix effects' and ablation volume differences, which result in irreproducibility of the same reference material across sessions. Current mitigation methods include correcting for LIEF mathematically, using matrix-matched reference materials, annealing material to reduce or eliminate radiation damage effects and tuning for robust plasma conditions. Reducing the depth and volume of ablation can also mitigate these problems and should contribute to the reduction of the uncertainty ceiling. Reducing analysed volume leads to increased detection efficiency, reduced matrix-effects, eliminates LIEF, obviates ablation rate differences and reduces the likelihood of intercepting complex growth zones with depth, thereby apparently improving material homogeneity. High detection efficiencies (% level) and low sampling volumes (20um box, 1-2um deep) can now be achieved using MC-ICP-MS such that low volume ablations should be considered part of the toolbox of methods targeted at improving the reproducibility of LA-ICP-MS U-Pb geochronology. In combination with other strategies these improvements should be feasible on any ICP platform. However, reducing the volume of analysis reduces detected counts and requires a change of analytical approach in order to mitigate this. Appropriate strategies may include the use of high efficiency cell and torch technologies and the optimisation of acquisition protocols and data handling techniques such as condensing signal peaks, using log ratios and total signal integration. 
The tools required to break the 2% (2s) uncertainty ceiling in LA-ICP-MS U-Pb geochronology are likely now known but require a coherent strategy and change of approach to combine their implementation and realise this goal. This study will highlight these changes and efforts towards reducing the uncertainty contribution for LA-ICP-MS U-Pb geochronology.

  14. Panaceas, uncertainty, and the robust control framework in sustainability science

    PubMed Central

    Anderies, John M.; Rodriguez, Armando A.; Janssen, Marco A.; Cifdaloz, Oguzhan

    2007-01-01

    A critical challenge faced by sustainability science is to develop strategies to cope with highly uncertain social and ecological dynamics. This article explores the use of the robust control framework toward this end. After briefly outlining the robust control framework, we apply it to the traditional Gordon–Schaefer fishery model to explore fundamental performance–robustness and robustness–vulnerability trade-offs in natural resource management. We find that the classic optimal control policy can be very sensitive to parametric uncertainty. By exploring a large class of alternative strategies, we show that there are no panaceas: even mild robustness properties are difficult to achieve, and increasing robustness to some parameters (e.g., biological parameters) results in decreased robustness with respect to others (e.g., economic parameters). On the basis of this example, we extract some broader themes for better management of resources under uncertainty and for sustainability science in general. Specifically, we focus attention on the importance of a continual learning process and the use of robust control to inform this process. PMID:17881574
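
The sensitivity of the classic optimal policy can be reproduced with the Gordon–Schaefer model itself; the numbers below are illustrative, not taken from the article. An effort level tuned to a nominal growth rate underperforms a deliberately cautious effort once the true growth rate turns out lower:

```python
def equilibrium_yield(effort, r, K=1.0, q=1.0):
    """Gordon-Schaefer equilibrium yield Y = qE * x*, with equilibrium stock
    x* = K(1 - qE/r), clipped at zero if the stock is fished to collapse."""
    x_eq = max(0.0, K * (1.0 - q * effort / r))
    return q * effort * x_eq

r_nominal = 1.0
e_optimal = r_nominal / 2.0    # maximum-sustainable-yield effort for the ASSUMED r
e_cautious = 0.3 * r_nominal   # a deliberately conservative (robust) effort

r_true = 0.6                   # true growth rate is 40% below the assumption
y_opt = equilibrium_yield(e_optimal, r_true)
y_cautious = equilibrium_yield(e_cautious, r_true)
print(y_opt, y_cautious)       # the "optimal" policy now yields less than the cautious one
```

Under the nominal parameters the optimal effort is better, so this is exactly the performance-robustness trade-off the abstract describes: buying robustness to the biological parameter costs nominal performance.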

  15. Designing synthetic networks in silico: a generalised evolutionary algorithm approach.

    PubMed

    Smith, Robert W; van Sluijs, Bob; Fleck, Christian

    2017-12-02

    Evolution has led to the development of biological networks that are shaped by environmental signals. Elucidating, understanding and then reconstructing important network motifs is one of the principal aims of Systems & Synthetic Biology. Consequently, previous research has focused on finding optimal network structures and reaction rates that respond to pulses or produce stable oscillations. In this work we present a generalised in silico evolutionary algorithm that simultaneously finds network structures and reaction rates (genotypes) that can satisfy multiple defined objectives (phenotypes). The key step in our approach is to translate a schema/binary-based description of biological networks into systems of ordinary differential equations (ODEs). The ODEs can then be solved numerically to provide dynamic information about an evolved network's functionality. Initially we benchmark algorithm performance by finding optimal networks that can recapitulate concentration time-series data and by performing parameter optimisation on the oscillatory dynamics of the Repressilator. We go on to show the utility of our algorithm by finding new designs for robust synthetic oscillators, and by performing multi-objective optimisation to find a set of oscillators and feed-forward loops that are optimal at balancing different system properties. In sum, our results not only confirm and build on previous observations but also provide new designs of synthetic oscillators for experimental construction. In this work we have presented and tested an evolutionary algorithm that can design a biological network to produce a desired output. Given that previous designs of synthetic networks have been limited to subregions of network- and parameter-space, the use of our evolutionary optimisation algorithm will enable Synthetic Biologists to construct new systems with the potential to display a wider range of complex responses.
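
A minimal (mu + lambda) evolutionary sketch in the spirit of that benchmark: evolve a single rate constant so a toy model output (here an exponential decay, solved analytically rather than as an ODE) recapitulates a "measured" concentration time series. The model and every parameter are invented for illustration.

```python
import math
import random

random.seed(1)

def simulate(k, times):
    """Toy one-parameter model: concentration x(t) = exp(-k t)."""
    return [math.exp(-k * t) for t in times]

def fitness(k, times, target):
    """Negative sum of squared errors against the target time series."""
    return -sum((a - b) ** 2 for a, b in zip(simulate(k, times), target))

times = [0.0, 0.5, 1.0, 2.0, 4.0]
target = simulate(0.7, times)            # "measured" data generated with k = 0.7

# (mu + lambda) evolution: Gaussian mutation, keep the 20 fittest of parents + children
pop = [random.uniform(0.0, 5.0) for _ in range(20)]
for _ in range(200):
    children = [max(1e-6, k + random.gauss(0.0, 0.1)) for k in pop]
    pop = sorted(pop + children, key=lambda k: fitness(k, times, target))[-20:]

best = pop[-1]
print(best)  # close to the true rate 0.7
```

The genotype here is a single number; the article's algorithm additionally evolves the network structure, which this sketch deliberately omits.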

  16. A comparison of optimisation methods and knee joint degrees of freedom on muscle force predictions during single-leg hop landings.

    PubMed

    Mokhtarzadeh, Hossein; Perraton, Luke; Fok, Laurence; Muñoz, Mario A; Clark, Ross; Pivonka, Peter; Bryant, Adam L

    2014-09-22

    The aim of this paper was to compare the effect of different optimisation methods and different knee joint degrees of freedom (DOF) on muscle force predictions during a single-legged hop. Nineteen subjects performed single-legged hopping manoeuvres and subject-specific musculoskeletal models were developed to predict muscle forces during the movement. Muscle forces were predicted using static optimisation (SO) and computed muscle control (CMC) methods using either 1 or 3 DOF knee joint models. All sagittal and transverse plane joint angles calculated using inverse kinematics or CMC in a 1 DOF or 3 DOF knee were well matched (RMS error < 3°). Biarticular muscles (hamstrings, rectus femoris and gastrocnemius) showed more differences in muscle force profiles when comparing between the different muscle prediction approaches, with larger time delays in many of the comparisons. The muscle force magnitudes of vasti, gluteus maximus and gluteus medius were not greatly influenced by the choice of muscle force prediction method, with low normalised root mean squared errors (<48%) observed in most comparisons. We conclude that SO and CMC can be used to predict lower-limb muscle co-contraction during hopping movements. However, care must be taken in interpreting the magnitude of force predicted in the biarticular muscles and the soleus, especially when using a 1 DOF knee. Despite this limitation, given that SO is a more robust and computationally efficient method for predicting muscle forces than CMC, we suggest that SO can be used in conjunction with musculoskeletal models that have a 1 or 3 DOF knee joint to study the relative differences and the role of muscles during hopping activities in future studies. Copyright © 2014 Elsevier Ltd. All rights reserved.
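
For a single joint, the static-optimisation step has a closed form when the cost is the sum of squared activations and the activation bounds are inactive: by Lagrange multipliers, each activation is proportional to the muscle's moment capacity. The muscle numbers below are hypothetical, not the paper's subject-specific models.

```python
def static_optimisation(torque, fmax, moment_arms):
    """Minimise sum(a_i^2) subject to sum(a_i * Fmax_i * r_i) == torque.
    Lagrange closed form (0 <= a_i <= 1 bounds assumed inactive)."""
    c = [f * r for f, r in zip(fmax, moment_arms)]   # moment capacity per unit activation
    lam = torque / sum(ci ** 2 for ci in c)
    return [lam * ci for ci in c]

# hypothetical knee extensors: max forces in N, moment arms in m
fmax, arms = [3000.0, 1500.0], [0.04, 0.05]
acts = static_optimisation(torque=60.0, fmax=fmax, moment_arms=arms)
moment = sum(a * f * r for a, f, r in zip(acts, fmax, arms))
print(acts, moment)  # the joint-moment constraint is met exactly
```

CMC, by contrast, solves this distribution problem inside a forward-dynamic tracking loop at every time step, which is why it is slower and why the two methods can diverge for biarticular muscles.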

  17. Effect of intermittent feedback control on robustness of human-like postural control system

    NASA Astrophysics Data System (ADS)

    Tanabe, Hiroko; Fujii, Keisuke; Suzuki, Yasuyuki; Kouzaki, Motoki

    2016-03-01

    Humans have to acquire postural robustness to maintain stability against internal and external perturbations. Human standing has been recently modelled using an intermittent feedback control. However, the causality inside of the closed-loop postural control system associated with the neural control strategy is still unknown. Here, we examined the effect of intermittent feedback control on postural robustness and of changes in active/passive components on joint coordinative structure. We implemented computer simulation of a quadruple inverted pendulum that is mechanically close to human tiptoe standing. We simulated three pairs of joint viscoelasticity and three choices of neural control strategies for each joint: intermittent, continuous, or passive control. We examined postural robustness for each parameter set by analysing the region of active feedback gain. We found intermittent control at the hip joint was necessary for model stabilisation and model parameters affected the robustness of the pendulum. Joint sways of the pendulum model were partially smaller than or similar to those of experimental data. In conclusion, intermittent feedback control was necessary for the stabilisation of the quadruple inverted pendulum. Also, postural robustness of human-like multi-link standing would be achieved by both passive joint viscoelasticity and neural joint control strategies.
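
The idea of analysing a region of stabilising active feedback gains can be illustrated on a scalar stand-in for the pendulum model (a sketch, not the quadruple-pendulum simulation): an unstable plant under delayed feedback is stabilised only for gains inside a bounded region.

```python
def simulate_delayed_feedback(gain, delay=0.1, dt=0.001, t_end=10.0):
    """Unstable plant x' = x + u with delayed feedback u = -gain * x(t - delay).
    Euler simulation from x(0) = 0.1; returns the final |x|."""
    n_delay = int(delay / dt)
    history = [0.1] * (n_delay + 1)   # x(0) and its (constant) past
    x = 0.1
    for _ in range(int(t_end / dt)):
        u = -gain * history[0]        # oldest stored sample = x(t - delay)
        x = x + dt * (x + u)
        history.pop(0)
        history.append(x)
    return abs(x)

print(simulate_delayed_feedback(0.5))  # gain too low: the state diverges
print(simulate_delayed_feedback(2.0))  # gain inside the stabilising region: it decays
```

Scanning `gain` and recording which values keep the state bounded traces out the kind of gain region the authors analyse, here for one state instead of four coupled joints.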

  18. Effect of intermittent feedback control on robustness of human-like postural control system.

    PubMed

    Tanabe, Hiroko; Fujii, Keisuke; Suzuki, Yasuyuki; Kouzaki, Motoki

    2016-03-02

    Humans have to acquire postural robustness to maintain stability against internal and external perturbations. Human standing has been recently modelled using an intermittent feedback control. However, the causality inside of the closed-loop postural control system associated with the neural control strategy is still unknown. Here, we examined the effect of intermittent feedback control on postural robustness and of changes in active/passive components on joint coordinative structure. We implemented computer simulation of a quadruple inverted pendulum that is mechanically close to human tiptoe standing. We simulated three pairs of joint viscoelasticity and three choices of neural control strategies for each joint: intermittent, continuous, or passive control. We examined postural robustness for each parameter set by analysing the region of active feedback gain. We found intermittent control at the hip joint was necessary for model stabilisation and model parameters affected the robustness of the pendulum. Joint sways of the pendulum model were partially smaller than or similar to those of experimental data. In conclusion, intermittent feedback control was necessary for the stabilisation of the quadruple inverted pendulum. Also, postural robustness of human-like multi-link standing would be achieved by both passive joint viscoelasticity and neural joint control strategies.

  19. Effect of intermittent feedback control on robustness of human-like postural control system

    PubMed Central

    Tanabe, Hiroko; Fujii, Keisuke; Suzuki, Yasuyuki; Kouzaki, Motoki

    2016-01-01

    Humans have to acquire postural robustness to maintain stability against internal and external perturbations. Human standing has been recently modelled using an intermittent feedback control. However, the causality inside of the closed-loop postural control system associated with the neural control strategy is still unknown. Here, we examined the effect of intermittent feedback control on postural robustness and of changes in active/passive components on joint coordinative structure. We implemented computer simulation of a quadruple inverted pendulum that is mechanically close to human tiptoe standing. We simulated three pairs of joint viscoelasticity and three choices of neural control strategies for each joint: intermittent, continuous, or passive control. We examined postural robustness for each parameter set by analysing the region of active feedback gain. We found intermittent control at the hip joint was necessary for model stabilisation and model parameters affected the robustness of the pendulum. Joint sways of the pendulum model were partially smaller than or similar to those of experimental data. In conclusion, intermittent feedback control was necessary for the stabilisation of the quadruple inverted pendulum. Also, postural robustness of human-like multi-link standing would be achieved by both passive joint viscoelasticity and neural joint control strategies. PMID:26931281

  20. Optimal robust control strategy of a solid oxide fuel cell system

    NASA Astrophysics Data System (ADS)

    Wu, Xiaojuan; Gao, Danhui

    2018-01-01

    Optimal control can ensure safe system operation with high efficiency. However, only a few papers discuss optimal control strategies for solid oxide fuel cell (SOFC) systems. Moreover, existing methods ignore the impact of parameter uncertainty on instantaneous system performance. In real SOFC systems, several parameters may vary with the operating conditions and cannot be identified exactly, such as the load current. Therefore, a robust optimal control strategy is proposed, which involves three parts: a SOFC model with parameter uncertainty, a robust optimizer and robust controllers. During the model-building process, boundaries of the uncertain parameter are extracted based on a Monte Carlo algorithm. To achieve the maximum efficiency, a two-space particle swarm optimization approach is employed to obtain optimal operating points, which are used as the set points of the controllers. To ensure safe SOFC operation, two feed-forward controllers and a higher-order robust sliding mode controller are then presented to control the fuel utilization ratio, air excess ratio and stack temperature. The results show the proposed robust optimal control method can maintain safe operation of the SOFC system with maximum efficiency under load and uncertainty variations.

  1. Robustness analysis of interdependent networks under multiple-attacking strategies

    NASA Astrophysics Data System (ADS)

    Gao, Yan-Li; Chen, Shi-Ming; Nie, Sen; Ma, Fei; Guan, Jun-Jie

    2018-04-01

    The robustness of complex networks under attack largely depends on the structure of the network and the nature of the attack. Previous research on interdependent networks has focused on two types of initial attack: random attack and degree-based targeted attack. In this paper, a deliberate attack function is proposed, from which six kinds of deliberate attacking strategies can be derived by adjusting the tunable parameters. Moreover, the robustness of four types of interdependent networks (BA-BA, ER-ER, BA-ER and ER-BA) with different coupling modes (random, positive and negative correlation) is evaluated under the different attacking strategies. It is found that the positive coupling mode makes the vulnerability of the interdependent network depend entirely on the most vulnerable sub-network under deliberate attacks, whereas the random and negative coupling modes make the vulnerability depend mainly on the sub-network under attack. The robustness of the interdependent network is enhanced as the degree-degree correlation coefficient varies from positive to negative. The negative coupling mode is therefore relatively optimal, as it can substantially improve the robustness of the ER-ER and ER-BA networks. In terms of attacking strategies on interdependent networks, the degree information of a node is more valuable than its betweenness. In addition, we found a more efficient attacking strategy for each coupled interdependent network and propose corresponding protection strategies for suppressing cascading failure. Our results can be very useful for the safety design and protection of interdependent networks.
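
The contrast between degree-based targeted attack and random attack can be sketched on a toy hub-dominated network (a single network, not the interdependent case), measuring the giant component that survives each attack:

```python
import random

def giant_component(nodes, edges):
    """Size of the largest connected component (iterative DFS on an adjacency map)."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        if a in adj and b in adj:       # keep only edges between surviving nodes
            adj[a].add(b)
            adj[b].add(a)
    seen, best = set(), 0
    for n in adj:
        if n in seen:
            continue
        stack, size = [n], 0
        seen.add(n)
        while stack:
            u = stack.pop()
            size += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        best = max(best, size)
    return best

random.seed(0)
# hub-dominated network: two linked hubs (0 and 1), each wired to 50 leaves
nodes = list(range(102))
edges = [(0, 1)] + [(0, i) for i in range(2, 52)] + [(1, i) for i in range(52, 102)]

# degree-based targeted attack: delete the two hubs
targeted = giant_component([n for n in nodes if n not in (0, 1)], edges)
# random attack of the same size: here it can only hit leaves
removed = random.sample(range(2, 102), 2)
rand = giant_component([n for n in nodes if n not in removed], edges)
print(targeted, rand)  # the targeted attack shatters the network; the random one barely dents it
```

In the paper's setting the same comparison is run on coupled network pairs, where a shattered sub-network additionally triggers cascading failures in its partner.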

  2. Robust PI and PID design for first- and second-order processes with zeros, time-delay and structured uncertainties

    NASA Astrophysics Data System (ADS)

    Parada, M.; Sbarbaro, D.; Borges, R. A.; Peres, P. L. D.

    2017-01-01

    The use of robust design techniques such as those based on H2 and H∞ for tuning proportional integral (PI) and proportional integral derivative (PID) controllers has been limited to a small set of processes. This work addresses the problem by considering a wide set of possible plants, both first- and second-order continuous-time systems with time delays and zeros, leading to PI and PID controllers. The use of structured uncertainties to handle neglected dynamics makes it possible to expand the range of processes considered. The proposed approach takes into account the robustness of the controller with respect to these structured uncertainties by using the small-gain theorem. In addition, improved performance is sought through the minimisation of an upper bound on the closed-loop system H∞ norm. A Lyapunov-Krasovskii-type functional is used to obtain delay-dependent design conditions. The controller design is accomplished by means of a convex optimisation procedure formulated using linear matrix inequalities. In order to illustrate the flexibility of the approach, several examples considering recycle compensation, reduced-order controller design and a practical implementation are addressed. Numerical experiments are provided in each case to highlight the main characteristics of the proposed design method.

  3. The multiple roles of computational chemistry in fragment-based drug design

    NASA Astrophysics Data System (ADS)

    Law, Richard; Barker, Oliver; Barker, John J.; Hesterkamp, Thomas; Godemann, Robert; Andersen, Ole; Fryatt, Tara; Courtney, Steve; Hallett, Dave; Whittaker, Mark

    2009-08-01

    Fragment-based drug discovery (FBDD) represents a change in strategy from the screening of molecules with higher molecular weights and physical properties more akin to fully drug-like compounds, to the screening of smaller, less complex molecules. This is because it has been recognised that fragment hit molecules can be efficiently grown and optimised into leads, particularly after the binding mode to the target protein has been first determined by 3D structural elucidation, e.g. by NMR or X-ray crystallography. Several studies have shown that medicinal chemistry optimisation of an already drug-like hit or lead compound can result in a final compound with too high molecular weight and lipophilicity. The evolution of a lower molecular weight fragment hit therefore represents an attractive alternative approach to optimisation as it allows better control of compound properties. Computational chemistry can play an important role both prior to a fragment screen, in producing a target focussed fragment library, and post-screening in the evolution of a drug-like molecule from a fragment hit, both with and without the available fragment-target co-complex structure. We will review many of the current developments in the area and illustrate with some recent examples from successful FBDD discovery projects that we have conducted.

  4. Optimisation of composite metallic fuel for minor actinide transmutation in an accelerator-driven system

    NASA Astrophysics Data System (ADS)

    Uyttenhove, W.; Sobolev, V.; Maschek, W.

    2011-09-01

    A potential option for neutralization of minor actinides (MA) accumulated in spent nuclear fuel of light water reactors (LWRs) is their transmutation in dedicated accelerator-driven systems (ADS). A promising fuel candidate dedicated to MA transmutation is a CERMET composite with a Mo metal matrix and (Pu,Np,Am,Cm)O2-x fuel particles. Results of optimisation studies of the CERMET fuel aimed at increasing the MA transmutation efficiency of the EFIT (European Facility for Industrial Transmutation) core are presented. In the adopted strategy of MA burning, the plutonium (Pu) balance of the core is minimized, allowing a reduction in the reactivity swing and the peak power form-factor deviation and an extension of the cycle duration. The MA/Pu ratio is used as a variable for the fuel optimisation studies. The efficiency of MA transmutation is close to the foreseen theoretical value of 42 kg TW⁻¹ h⁻¹ when the level of Pu in the actinide mixture is about 40 wt.%. The obtained results are compared with the reference case of the EFIT core loaded with the composite CERCER fuel, where fuel particles are incorporated in a ceramic magnesia matrix. The results of this study offer additional information for the EFIT fuel selection.

  5. Multi-terminal pipe routing by Steiner minimal tree and particle swarm optimisation

    NASA Astrophysics Data System (ADS)

    Liu, Qiang; Wang, Chengen

    2012-08-01

    Computer-aided design of pipe routing is of fundamental importance for the development of complex equipment. In this article, non-rectilinear branch pipe routing with multiple terminals, which can be formulated as a Euclidean Steiner Minimal Tree with Obstacles (ESMTO) problem, is studied in the context of aeroengine integrated design engineering. Unlike traditional methods that connect pipe terminals sequentially, this article presents a new branch-pipe routing algorithm based on Steiner tree theory. The article begins with a new algorithm for solving the ESMTO problem using particle swarm optimisation (PSO), and then extends the method to surfaces by using geodesics, to meet the requirements of routing non-rectilinear pipes on the surfaces of aeroengines. Subsequently, an adaptive region strategy and the basic visibility graph method are adopted to increase computational efficiency. Numerical computations show that the proposed routing algorithm can find satisfactory routing layouts while running in polynomial time.
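
The core ESMTO step, placing a Steiner point by PSO, can be sketched for three terminals in the plane with no obstacles. The swarm parameters below are typical textbook values, not those of the article.

```python
import math
import random

random.seed(0)

terminals = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]

def tree_length(p):
    """Total length of the star tree connecting one Steiner point to all terminals."""
    return sum(math.dist(p, t) for t in terminals)

# minimal PSO: global-best topology, inertia + cognitive + social pulls
n, iters = 15, 300
pos = [(random.uniform(0, 4), random.uniform(0, 3)) for _ in range(n)]
vel = [(0.0, 0.0)] * n
pbest = list(pos)
gbest = min(pos, key=tree_length)
for _ in range(iters):
    for i in range(n):
        r1, r2 = random.random(), random.random()
        vel[i] = tuple(0.7 * v + 1.5 * r1 * (pb - x) + 1.5 * r2 * (gb - x)
                       for v, x, pb, gb in zip(vel[i], pos[i], pbest[i], gbest))
        pos[i] = tuple(x + v for x, v in zip(pos[i], vel[i]))
        if tree_length(pos[i]) < tree_length(pbest[i]):
            pbest[i] = pos[i]
            if tree_length(pos[i]) < tree_length(gbest):
                gbest = pos[i]

print(gbest, tree_length(gbest))  # near the Fermat point of the triangle
```

For this triangle the optimum is the Fermat point at (2, 2/√3), with tree length about 6.464; the article's method additionally handles many terminals, obstacles and curved surfaces.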

  6. Hypertrophic scarring: the greatest unmet challenge following burn injury

    PubMed Central

    Finnerty, Celeste C; Jeschke, Marc G; Branski, Ludwik K; Barret, Juan P.; Dziewulski, Peter; Herndon, David N

    2017-01-01

    Summary Improvements in acute burn care have enabled patients to survive massive burns which would have once been fatal. Now up to 70% of patients develop hypertrophic scars following burns. The functional and psychosocial sequelae remain a major rehabilitative challenge, decreasing quality of life and delaying reintegration into society. The current approach is to optimise the healing potential of the burn wound using targeted wound care and surgery in order to minimise the development of hypertrophic scarring. This approach often fails, and modulation of established scar is continued although the optimal indication, timing, and combination of therapies have yet to be established. The need for novel treatments is paramount, and future efforts to improve outcomes and quality of life should include optimisation of wound healing to attenuate or prevent hypertrophic scarring, well-designed trials to confirm treatment efficacy, and further elucidation of molecular mechanisms to allow development of new preventative and therapeutic strategies. PMID:27707499

  7. Battery Cell Balancing Optimisation for Battery Management System

    NASA Astrophysics Data System (ADS)

    Yusof, M. S.; Toha, S. F.; Kamisan, N. A.; Hashim, N. N. W. N.; Abdullah, M. A.

    2017-03-01

    Battery cell balancing in electrical systems such as home electronic equipment and electric vehicles is very important for extending battery run time, commonly known as battery life. The underlying solution is to equalise the cell voltages and states of charge (SOC) between the cells when they are fully charged. In order to control and extend battery life, cell balancing is designed and manipulated so as to shorten the charging process as well. Active and passive cell balancing strategies enable balancing of the battery with excellent performance so that the charging process is faster. The experiments and simulations cover an analysis of how fast the battery can balance within a given time. The simulation-based analysis is conducted to verify the use of optimisation in active or passive cell balancing to extend battery life for long periods of time.
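
A passive balancing scheme of the kind compared in the paper can be sketched as bleeding every cell that sits above the weakest cell through a resistor until the pack is equalised. The voltages, bleed step and tolerance below are illustrative values, not the paper's hardware parameters.

```python
def passive_balance(voltages, step=0.001, tol=0.005):
    """Passive balancing: each iteration, bleed every cell more than `tol` volts
    above the pack minimum by `step` volts. Returns (balanced voltages, steps)."""
    v = list(voltages)
    steps = 0
    while max(v) - min(v) > tol:
        low = min(v)
        v = [x - step if x - low > tol else x for x in v]
        steps += 1
    return v, steps

cells = [4.15, 4.05, 4.20, 4.10]
balanced, steps = passive_balance(cells)
print(balanced, steps)  # all cells pulled down to roughly the weakest cell
```

The number of steps is set by the most overcharged cell, which is why passive balancing is slow and dissipative; active balancing instead moves charge between cells, motivating the optimisation comparison in the abstract.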

  8. 3D Reconstruction of human bones based on dictionary learning.

    PubMed

    Zhang, Binkai; Wang, Xiang; Liang, Xiao; Zheng, Jinjin

    2017-11-01

    An effective method for reconstructing a 3D model of human bones from computed tomography (CT) image data based on dictionary learning is proposed. In this study, the dictionary comprises the vertices of triangular meshes, and the sparse coefficient matrix indicates the connectivity information. For better reconstruction performance, we proposed a balance coefficient between the approximation and regularisation terms and a method for optimisation. Moreover, we applied a local updating strategy and a mesh-optimisation method to update the dictionary and the sparse matrix, respectively. The two updating steps are iterated alternately until the objective function converges. Thus, a reconstructed mesh could be obtained with high accuracy and regularisation. The experimental results show that the proposed method has the potential to obtain high precision and high-quality triangular meshes for rapid prototyping, medical diagnosis, and tissue engineering. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.

  9. [Strategy and collaboration between medicinal chemists and pharmaceutical scientists for drug delivery systems].

    PubMed

    Mano, Takashi

    2013-01-01

    In order to successfully apply drug delivery systems (DDS) to new chemical entities (NCEs), collaboration between medicinal chemists and formulation scientists is critical for efficient drug discovery. Formulation scientists have to use 'language' that medicinal chemists understand to help promote mutual understanding, and medicinal chemists and formulation scientists have to set up strategies to use suitable DDS technologies at the discovery phase of the programmes to ensure successful transfer into the development phase. In this review, strategies of solubilisation formulation for oral delivery, inhalation delivery, nasal delivery and bioconjugation are all discussed. For example, for oral drug delivery, multiple initiatives can be proposed to improve the process to select an optimal delivery option for an NCE. From a technical perspective, formulation scientists have to explain the scope and limitations of formulations, as some DDS technologies might be applicable only to limited chemical spaces. Other limitations could be the administered dose and the cost, time and resources for formulation development and manufacturing. Since DDS selection is best placed as part of lead optimisation, formulation scientists need to be involved in discovery projects at the lead selection and optimisation stages. The key to success in their collaboration is to facilitate communication between these two areas of expertise at both a strategic and scientific level. Also, it would be beneficial for medicinal chemists and formulation scientists to set common goals to improve the process of collaboration and build long-term partnerships to improve DDS.

  10. Improving Vector Evaluated Particle Swarm Optimisation by Incorporating Nondominated Solutions

    PubMed Central

    Lim, Kian Sheng; Ibrahim, Zuwairie; Buyamin, Salinda; Ahmad, Anita; Naim, Faradila; Ghazali, Kamarul Hawari; Mokhtar, Norrima

    2013-01-01

    The Vector Evaluated Particle Swarm Optimisation algorithm is widely used to solve multiobjective optimisation problems. This algorithm optimises one objective using a swarm of particles whose movements are guided by the best solution found by another swarm. However, the best solution of a swarm is only updated when a newly generated solution has better fitness than the best solution at the objective function optimised by that swarm, yielding poor solutions for multiobjective optimisation problems. Thus, an improved Vector Evaluated Particle Swarm Optimisation algorithm is introduced by incorporating the nondominated solutions as the guidance for a swarm rather than using the best solution from another swarm. In this paper, the performance of the improved Vector Evaluated Particle Swarm Optimisation algorithm is investigated using performance measures such as the number of nondominated solutions found, the generational distance, the spread, and the hypervolume. The results suggest that the improved Vector Evaluated Particle Swarm Optimisation algorithm has impressive performance compared with the conventional Vector Evaluated Particle Swarm Optimisation algorithm. PMID:23737718
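
    The guidance scheme described above can be sketched in a few lines. The toy example below is an illustration, not the authors' implementation: two swarms minimise Schaffer's two-objective test function f1(x) = x², f2(x) = (x-2)², and each particle's social guide is drawn from a shared archive of nondominated solutions; all PSO coefficients are illustrative.

```python
import random

def f(x):
    return (x * x, (x - 2.0) ** 2)          # two objectives to minimise

def dominates(a, b):
    return all(p <= q for p, q in zip(a, b)) and any(p < q for p, q in zip(a, b))

def vepso(n=20, iters=100, seed=1):
    rng = random.Random(seed)
    # one swarm per objective; each particle keeps a per-objective personal best
    swarms = [[{"x": rng.uniform(-4, 4), "v": 0.0} for _ in range(n)]
              for _ in range(2)]
    for s in swarms:
        for p in s:
            p["bx"], p["bf"] = p["x"], f(p["x"])
    archive = []
    for _ in range(iters):
        # refresh the shared archive of nondominated personal bests
        pts = [(p["bx"], p["bf"]) for s in swarms for p in s] + archive
        archive = [a for a in pts
                   if not any(dominates(b[1], a[1]) for b in pts)]
        for k, s in enumerate(swarms):
            for p in s:
                guide = rng.choice(archive)[0]   # nondominated social guide
                v = (0.6 * p["v"]
                     + 1.5 * rng.random() * (p["bx"] - p["x"])
                     + 1.5 * rng.random() * (guide - p["x"]))
                p["v"] = max(-2.0, min(2.0, v))  # clamp to keep the search stable
                p["x"] += p["v"]
                if f(p["x"])[k] < p["bf"][k]:    # improve objective k only
                    p["bx"], p["bf"] = p["x"], f(p["x"])
    return archive

front = vepso()
```

In the conventional algorithm the guide would instead be the single best solution of the other swarm; sampling from the archive is what spreads the particles along the Pareto front.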

  11. Improving Vector Evaluated Particle Swarm Optimisation by incorporating nondominated solutions.

    PubMed

    Lim, Kian Sheng; Ibrahim, Zuwairie; Buyamin, Salinda; Ahmad, Anita; Naim, Faradila; Ghazali, Kamarul Hawari; Mokhtar, Norrima

    2013-01-01

    The Vector Evaluated Particle Swarm Optimisation algorithm is widely used to solve multiobjective optimisation problems. This algorithm optimises one objective using a swarm of particles whose movements are guided by the best solution found by another swarm. However, the best solution of a swarm is only updated when a newly generated solution has better fitness than the best solution at the objective function optimised by that swarm, yielding poor solutions for multiobjective optimisation problems. Thus, an improved Vector Evaluated Particle Swarm Optimisation algorithm is introduced by incorporating the nondominated solutions as the guidance for a swarm rather than using the best solution from another swarm. In this paper, the performance of the improved Vector Evaluated Particle Swarm Optimisation algorithm is investigated using performance measures such as the number of nondominated solutions found, the generational distance, the spread, and the hypervolume. The results suggest that the improved Vector Evaluated Particle Swarm Optimisation algorithm has impressive performance compared with the conventional Vector Evaluated Particle Swarm Optimisation algorithm.

  12. Fallback options for airgap sensor fault of an electromagnetic suspension system

    NASA Astrophysics Data System (ADS)

    Michail, Konstantinos; Zolotas, Argyrios C.; Goodall, Roger M.

    2013-06-01

    The paper presents a method to recover the performance of an electromagnetic suspension under an airgap sensor fault. The proposed control scheme combines classical control loops, a Kalman estimator and analytical redundancy (for the airgap signal). In this way, redundant airgap sensors are not essential for reliable operation of the system. When the airgap sensor fails, the required signal is recovered using a combination of a Kalman estimator and analytical redundancy. The performance of the suspension is optimised using genetic algorithms, and some preliminary robustness issues relating to load and operating airgap variations are discussed. Simulations on a realistic model of this type of suspension illustrate the efficacy of the proposed sensor-tolerant control method.
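
    The fall-back idea generalises beyond suspensions. The scalar sketch below is illustrative only (the electromagnetic suspension dynamics are not modelled): a one-state Kalman filter tracks a noisy signal, and once the sensor is flagged faulty the measurement update is simply skipped, so the model prediction stands in for the lost signal.

```python
import numpy as np

def kalman_track(meas, faulty, q=1e-4, r=0.04):
    """Scalar Kalman filter; the update step is skipped while the sensor is faulty."""
    x, p = meas[0], 1.0
    out = []
    for z, bad in zip(meas, faulty):
        p += q                       # predict (random-walk state model)
        if not bad:                  # correct only with a healthy sensor
            k = p / (p + r)
            x += k * (z - x)
            p *= 1.0 - k
        out.append(x)
    return out

rng = np.random.default_rng(0)
meas = 5.0 + rng.normal(0.0, 0.2, 200)       # noisy constant signal
faulty = [i >= 100 for i in range(200)]      # sensor fails halfway through
est = kalman_track(meas, faulty)
```

After the fault, the estimate is held by the prediction step alone; in the paper the analytical-redundancy signal plays this role for the missing airgap measurement.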

  13. Discovery and process development of a novel TACE inhibitor for the topical treatment of psoriasis.

    PubMed

    Boiteau, Jean-Guy; Ouvry, Gilles; Arlabosse, Jean-Marie; Astri, Stéphanie; Beillard, Audrey; Bhurruth-Alcor, Yushma; Bonnary, Laetitia; Bouix-Peter, Claire; Bouquet, Karine; Bourotte, Marilyne; Cardinaud, Isabelle; Comino, Catherine; Deprez, Benoît; Duvert, Denis; Féret, Angélique; Hacini-Rachinel, Feriel; Harris, Craig S; Luzy, Anne-Pascale; Mathieu, Arnaud; Millois, Corinne; Orsini, Nicolas; Pascau, Jonathan; Pinto, Artur; Piwnica, David; Polge, Gaëlle; Reitz, Arnaud; Reversé, Kevin; Rodeville, Nicolas; Rossio, Patricia; Spiesse, Delphine; Tabet, Samuel; Taquet, Nathalie; Tomas, Loïc; Vial, Emmanuel; Hennequin, Laurent F

    2018-02-15

    Targeting the TNFα pathway is a validated approach to the treatment of psoriasis. In this pathway, TACE stands out as a druggable target and has been the focus of in-house research programs. In this article, we present the discovery of clinical candidate 26a. Starting from hits plagued with poor solubility or genotoxicity, 26a was identified through thorough multiparameter optimisation. Showing robust in vivo activity in an oxazolone-mediated inflammation model, the compound was selected for development. Following a polymorph screen, the hydrochloride salt was selected and the synthesis was efficiently developed to yield the API in 47% overall yield. Copyright © 2017. Published by Elsevier Ltd.

  14. Practice under pressure: what neurology can learn from anaesthesia

    PubMed Central

    Stacey, Mark

    2017-01-01

    Performing a stressful task under pressure is challenging. Strategies to optimise our training must focus on learning a skill correctly, and then practising that skill sufficiently to avoid compromising that performance in the cauldron of the clinical environment. This article discusses ways of doing things better, based on practical strategies employed in anaesthesia, but developed primarily in elite sport and the military. It involves taking a skill, practising it until it becomes a habit and over time making it part of normal behaviour. The philosophy is simple (but difficult to apply): control what you can control and always do your best. The best summary of this strategy is: learn it right, practise it right, perform it right. PMID:28972035

  15. Targeted flock/herd and individual ruminant treatment approaches.

    PubMed

    Kenyon, F; Jackson, F

    2012-05-04

    In Europe, most nematodoses are subclinical, involving morbid rather than mortal effects, and control is largely achieved using anthelmintics. In cattle, the genera most associated with sub-optimal performance are Ostertagia and Cooperia, whereas in sheep and goats, subclinical losses are most often caused by Teladorsagia and Trichostrongylus. In some regions, at certain times, other species such as Nematodirus and Haemonchus also cause disease in sheep and goats. Unfortunately, anthelmintic resistance has now become an issue for European small ruminant producers. One of the key aims of the EU-funded PARASOL project was to identify low-input and sustainable approaches to control nematode parasites in ruminants using refugia-based strategies. Two approaches to optimise anthelmintic treatments in sheep and cattle were studied: targeted treatment (TT), whole-group treatments optimised on the basis of a marker of infection, e.g. faecal egg count (FEC); and targeted selected treatment (TST), treatments given to identified individuals to provide epidemiological and/or production benefits. A number of indicators for TT and TST were assessed to define parasitological and production-system-specific indicators for treatment that best suited the regions where the PARASOL studies were conducted. These included liveweight gain, production efficiency, FEC, body condition score and diarrhoea score in small ruminants, and pepsinogen levels and Ostertagia bulk milk tank ELISA in cattle. The PARASOL studies confirmed the value of monitoring FEC as a means of targeting whole-flock treatments in small ruminants. In cattle, bulk milk tank ELISA and serum pepsinogen assays could be used retrospectively to determine the levels of exposure and hence to optimise anthelmintic usage in the next season.
TST approaches in sheep and goats examined production efficiency and liveweight gain as indicators for treatment and confirmed the value of this approach in maintaining performance and anthelmintic susceptibility in the predominant gastrointestinal nematodes. There is good evidence that the TST approach selected less heavily for the development of resistance in comparison to routine monthly treatments. Further research is required to optimise markers for TT and TST but it is also crucial to encourage producers/advisors to adapt these refugia-based strategies to maintain drug susceptible parasites in order to provide sustainable control. Copyright © 2011 Elsevier B.V. All rights reserved.
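
    The TT/TST decision rules above reduce to simple threshold tests. The sketch below is a hypothetical illustration; the FEC thresholds and counts are invented and not taken from the PARASOL studies.

```python
# Hypothetical illustration of the two treatment strategies: a whole-flock
# targeted treatment (TT) is triggered when the mean faecal egg count (FEC)
# crosses a threshold, while targeted selected treatment (TST) picks out
# only the individuals above it. Thresholds are invented.

def tt_decision(fecs, threshold=300):
    """Treat the whole flock if the mean FEC exceeds the threshold."""
    return sum(fecs) / len(fecs) > threshold

def tst_selection(fecs, threshold=500):
    """Return indices of the individuals selected for treatment."""
    return [i for i, fec in enumerate(fecs) if fec > threshold]

flock = [120, 850, 90, 600, 300]
print(tt_decision(flock))      # mean FEC is 392, above the TT threshold
print(tst_selection(flock))    # only the two high-FEC animals are treated
```

Leaving the untreated individuals as a refugium of drug-susceptible parasites is what slows the selection for anthelmintic resistance.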

  16. Laser surface texturing of cast iron steel: dramatic edge burr reduction and high speed process optimisation for industrial production using DPSS picosecond lasers

    NASA Astrophysics Data System (ADS)

    Bruneel, David; Kearsley, Andrew; Karnakis, Dimitris

    2015-07-01

    In this work we present picosecond DPSS laser surface texturing optimisation of automotive-grade cast iron steel. This application attracts great interest, particularly in the automotive industry, for reducing friction between moving piston parts in car engines in order to decrease fuel consumption. This is accomplished by partially covering the inner surface of a piston liner with shallow microgrooves, and is currently a production process adopting much longer pulse (microsecond) DPSS lasers. Lubricated interface conditions of moving parts require the laser process to produce a very strictly controlled surface topography around the laser-formed grooves, whose edge burr height must be lower than 100 nm. To achieve such a strict tolerance, laser machining of cast iron steel was investigated using an infrared DPSS picosecond laser (10 ps pulse duration) with an output power of 16 W and a repetition rate of 200 kHz. The ultrashort laser is believed to provide much better thermal management of the etching process. All studies presented here were performed on flat samples in ambient air, but the process is transferable to cylindrical-geometry engine liners. We will show that reducing the edge burr significantly, below an acceptable limit for lubricated engine production, is possible using such lasers and that, remarkably, the process window lies at irradiated fluences much higher than the single-pulse ablation threshold. This detailed experimental work highlights the close relationship between the optimised laser irradiation conditions, the process strategy, and the final size of the undesirable edge burrs. The optimised process conditions are compatible with an industrial production process and show the potential for removing extra post-processing steps (honing, etc.) of cylinder liners on the manufacturing line, saving time and cost.

  17. A study of lateral fall-off (penumbra) optimisation for pencil beam scanning (PBS) proton therapy

    NASA Astrophysics Data System (ADS)

    Winterhalter, C.; Lomax, A.; Oxley, D.; Weber, D. C.; Safai, S.

    2018-01-01

    The lateral fall-off is crucial for sparing organs at risk in proton therapy. It is therefore of high importance to minimise the penumbra for pencil beam scanning (PBS). Three optimisation approaches are investigated: edge-collimated uniformly weighted spots (collimation), pencil beam optimisation of uncollimated pencil beams (edge-enhancement) and the optimisation of edge-collimated pencil beams (collimated edge-enhancement). To deliver energies below 70 MeV, these strategies are evaluated in combination with the following pre-absorber methods: field-specific, fixed-thickness pre-absorption (fixed); range-specific, fixed-thickness pre-absorption (automatic); and range-specific, variable-thickness pre-absorption (variable). All techniques are evaluated by Monte Carlo simulated square fields in a water tank. For a typical air gap of 10 cm without a pre-absorber, collimation reduces the penumbra only for water-equivalent ranges between 4 and 11 cm, by up to 2.2 mm. The sharpest lateral fall-off is achieved through collimated edge-enhancement, which lowers the penumbra down to 2.8 mm. When using a pre-absorber, the sharpest fall-offs are obtained when combining collimated edge-enhancement with a variable pre-absorber. For edge-enhancement and large air gaps, it is crucial to minimise the amount of material in the beam. For small air gaps, however, the superior phase space of higher-energy beams can be employed when more material is used. In conclusion, collimated edge-enhancement combined with the variable pre-absorber is the recommended setting to minimise the lateral penumbra for PBS. Without a collimator, it would be favourable to use a variable pre-absorber for large air gaps and an automatic pre-absorber for small air gaps.

  18. Detailed systematic analysis of recruitment strategies in randomised controlled trials in patients with an unscheduled admission to hospital

    PubMed Central

    Rooshenas, Leila; Fairhurst, Katherine; Rees, Jonathan; Gamble, Carrol; Blazeby, Jane M

    2018-01-01

    Objectives To examine the design and findings of recruitment studies in randomised controlled trials (RCTs) involving patients with an unscheduled hospital admission (UHA), to consider how to optimise recruitment in future RCTs of this nature. Design Studies within the ORRCA database (Online Resource for Recruitment Research in Clinical TriAls; www.orrca.org.uk) that reported on recruitment to RCTs involving UHAs in patients >18 years were included. Extracted data included trial clinical details, and the rationale and main findings of the recruitment study. Results Of 3114 articles populating ORRCA, 39 recruitment studies were eligible, focusing on 68 real and 13 hypothetical host RCTs. Four studies were prospectively planned investigations of recruitment interventions, one of which was a nested RCT. Most recruitment papers were reports of recruitment experiences from one or more ‘real’ RCTs (n=24) or studies using hypothetical RCTs (n=11). Rationales for conducting recruitment studies included limited time for informed consent (IC) and patients being too unwell to provide IC. Methods to optimise recruitment included providing patients with trial information in the prehospital setting, technology to allow recruiters to cover multiple sites, screening logs to uncover recruitment barriers, and verbal rather than written information and consent. Conclusion There is a paucity of high-quality research into recruitment in RCTs involving UHAs with only one nested randomised study evaluating a recruitment intervention. Among the remaining studies, methods to optimise recruitment focused on how to improve information provision in the prehospital setting and use of screening logs. Future research in this setting should focus on the prospective evaluation of the well-developed interventions to optimise recruitment. PMID:29420230

  19. Detailed systematic analysis of recruitment strategies in randomised controlled trials in patients with an unscheduled admission to hospital.

    PubMed

    Rowlands, Ceri; Rooshenas, Leila; Fairhurst, Katherine; Rees, Jonathan; Gamble, Carrol; Blazeby, Jane M

    2018-02-02

    To examine the design and findings of recruitment studies in randomised controlled trials (RCTs) involving patients with an unscheduled hospital admission (UHA), to consider how to optimise recruitment in future RCTs of this nature. Studies within the ORRCA database (Online Resource for Recruitment Research in Clinical TriAls; www.orrca.org.uk) that reported on recruitment to RCTs involving UHAs in patients >18 years were included. Extracted data included trial clinical details, and the rationale and main findings of the recruitment study. Of 3114 articles populating ORRCA, 39 recruitment studies were eligible, focusing on 68 real and 13 hypothetical host RCTs. Four studies were prospectively planned investigations of recruitment interventions, one of which was a nested RCT. Most recruitment papers were reports of recruitment experiences from one or more 'real' RCTs (n=24) or studies using hypothetical RCTs (n=11). Rationales for conducting recruitment studies included limited time for informed consent (IC) and patients being too unwell to provide IC. Methods to optimise recruitment included providing patients with trial information in the prehospital setting, technology to allow recruiters to cover multiple sites, screening logs to uncover recruitment barriers, and verbal rather than written information and consent. There is a paucity of high-quality research into recruitment in RCTs involving UHAs with only one nested randomised study evaluating a recruitment intervention. Among the remaining studies, methods to optimise recruitment focused on how to improve information provision in the prehospital setting and use of screening logs. Future research in this setting should focus on the prospective evaluation of the well-developed interventions to optimise recruitment. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  20. Optimal and robust control of a class of nonlinear systems using dynamically re-optimised single network adaptive critic design

    NASA Astrophysics Data System (ADS)

    Tiwari, Shivendra N.; Padhi, Radhakant

    2018-01-01

    Following the philosophy of adaptive optimal control, a neural network-based state feedback optimal control synthesis approach is presented in this paper. First, accounting for a nominal system model, a single network adaptive critic (SNAC) based multi-layered neural network (called NN1) is synthesised offline. Next, another linear-in-weight neural network (called NN2) is trained online and augmented to NN1 in such a manner that their combined output represents the desired optimal costate for the actual plant. To do this, the nominal model needs to be updated online to adapt to the actual plant, which is done by synthesising yet another linear-in-weight neural network (called NN3) online. Training of NN3 is done by utilising the error information between the nominal and actual states and carrying out the necessary Lyapunov stability analysis using a Sobolev-norm-based Lyapunov function. This helps in training NN2 successfully to capture the required optimal relationship. The overall architecture is named 'Dynamically Re-optimised Single Network Adaptive Critic (DR-SNAC)'. Numerical results for two motivating illustrative problems are presented, including a comparison study with the closed-form solution for one problem, which clearly demonstrates the effectiveness and benefit of the proposed approach.

  1. Automatic disease diagnosis using optimised weightless neural networks for low-power wearable devices

    PubMed Central

    Edla, Damodar Reddy; Kuppili, Venkatanareshbabu; Dharavath, Ramesh; Beechu, Nareshkumar Reddy

    2017-01-01

    Low-power wearable devices for disease diagnosis can be used anytime and anywhere. They are non-invasive and pain-free, improving quality of life. However, these devices are resource constrained in terms of memory and processing capability. The memory constraint limits the number of patterns the devices can store, and the processing constraint delays their response. It is a challenging task to design a robust classification system under these constraints with high accuracy. In this Letter, to resolve this problem, a novel architecture for weightless neural networks (WNNs) has been proposed. It uses variable-sized random access memories to optimise the memory usage and a modified binary TRIE data structure to reduce the test time. In addition, a bio-inspired genetic algorithm has been employed to improve the accuracy. The proposed architecture is evaluated on various disease datasets using its software and hardware realisations. The experimental results prove that the proposed architecture achieves better performance in terms of accuracy, memory saving and test time compared to standard WNNs. It also outperforms conventional neural network-based classifiers in terms of accuracy. The proposed architecture is thus well suited to low-power wearable devices, addressing their memory, accuracy and response-time requirements. PMID:28868148
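
    The RAM-based principle behind WNNs can be shown compactly. The sketch below is a plain WiSARD-style discriminator, not the Letter's architecture (its variable-sized RAMs, binary TRIE and genetic tuning are not reproduced): the binary input is cut into tuples, each tuple addresses its own RAM, training writes 1s, and a class score counts how many RAMs recognise the probe.

```python
from collections import defaultdict

class Discriminator:
    """One RAM per n-bit tuple of the binary input; one discriminator per class."""

    def __init__(self, n_bits, tuple_size):
        self.tuple_size = tuple_size
        self.rams = [defaultdict(int) for _ in range(n_bits // tuple_size)]

    def _addresses(self, bits):
        t = self.tuple_size
        return [tuple(bits[i * t:(i + 1) * t]) for i in range(len(self.rams))]

    def train(self, bits):
        # training simply writes a 1 at each addressed RAM location
        for ram, addr in zip(self.rams, self._addresses(bits)):
            ram[addr] = 1

    def score(self, bits):
        # the score counts how many RAMs recognise the probe's tuples
        return sum(ram[addr] for ram, addr in zip(self.rams, self._addresses(bits)))

d0, d1 = Discriminator(8, 2), Discriminator(8, 2)
d0.train([0, 0, 0, 0, 1, 1, 1, 1])
d1.train([1, 1, 1, 1, 0, 0, 0, 0])
probe = [0, 0, 0, 0, 1, 1, 1, 0]   # noisy version of the class-0 pattern
print(d0.score(probe), d1.score(probe))   # class 0 scores higher
```

Because training and classification are pure memory lookups with no arithmetic on weights, this style of network maps naturally onto the memory-constrained hardware the Letter targets.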

  2. Multi-Objective and Multidisciplinary Design Optimisation (MDO) of UAV Systems using Hierarchical Asynchronous Parallel Evolutionary Algorithms

    DTIC Science & Technology

    2007-09-17

    been proposed; these include a combination of variable fidelity models, parallelisation strategies and hybridisation techniques (Coello, Veldhuizen et...Coello et al (Coello, Veldhuizen et al. 2002). 4.4.2 HIERARCHICAL POPULATION TOPOLOGY A hierarchical population topology, when integrated into...to hybrid parallel Multi-Objective Evolutionary Algorithms (pMOEA) (Cantu-Paz 2000; Veldhuizen , Zydallis et al. 2003); it uses a master slave

  3. [Treatment of pubic osteomyelitis secondary to pressure sores].

    PubMed

    Brunel, Anne-Sophie; Téot, Luc; Lamy, Brigitte; Masson, Raphaël; Morquin, David; Reynes, Jacques; Le Moing, Vincent

    2014-01-01

    There is no consensus regarding the diagnostic and therapeutic strategy for pubic osteomyelitis secondary to pelvic pressure sores. Diagnosis is often difficult and bone biopsies with microbiological and anatomical-pathological examination remain the gold standard. The rate of cicatrisation of pressure sores is low. Cleansing and negative pressure treatment are key elements of the treatment. Optimising the care management with medical-surgical collaboration is being studied in the Ostear protocol.

  4. Optimisation potential for a SBR plant based upon integrated modelling for dry and wet weather conditions.

    PubMed

    Rönner-Holm, S G E; Kaufmann Alves, I; Steinmetz, H; Holm, N C

    2009-01-01

    Integrated dynamic simulation analysis of a full-scale municipal sequential batch reactor (SBR) wastewater treatment plant (WWTP) was performed using the KOSMO pollution load simulation model for the combined sewer system (CSS) and the ASM3 + EAWAG-BioP model for the WWTP. Various optimisation strategies for dry and storm weather conditions were developed to raise the purification and hydraulic performance and to reduce operation costs, based on simulation studies with the calibrated WWTP model. The implementation of some strategies on the plant led to lower effluent values and an average annual saving of 49,000 euro including sewage tax, which is 22% of the total running costs. Dynamic simulation analysis of the CSS for an increased WWTP influent over a period of one year showed high potential for reducing combined sewer overflow (CSO) volume by 18-27% and CSO loads for COD by 22%, and for NH(4)-N and P(total) by 33%. In addition, the SBR WWTP could easily handle much higher influents without exceeding the monitoring values. During the integrated simulation of representative storm events, the total emission load for COD dropped to 90% of its former level and the sewer system emitted 47% less, whereas the pollution load in the WWTP effluent increased by only 14%, with 2% higher running costs.

  5. Optimising the combination dosing strategy of abemaciclib and vemurafenib in BRAF-mutated melanoma xenograft tumours.

    PubMed

    Tate, Sonya C; Burke, Teresa F; Hartman, Daisy; Kulanthaivel, Palaniappan; Beckmann, Richard P; Cronier, Damien M

    2016-03-15

    Resistance to BRAF inhibition is a major cause of treatment failure for BRAF-mutated metastatic melanoma patients. Abemaciclib, a cyclin-dependent kinase 4 and 6 inhibitor, overcomes this resistance in xenograft tumours and offers a promising drug combination. The present work aims to characterise the quantitative pharmacology of the abemaciclib/vemurafenib combination using a semimechanistic pharmacokinetic/pharmacodynamic modelling approach and to identify an optimum dosing regimen for potential clinical evaluation. A PK/biomarker model was developed to connect abemaciclib/vemurafenib concentrations to changes in MAPK and cell cycle pathway biomarkers in A375 BRAF-mutated melanoma xenografts. Resultant tumour growth inhibition was described by relating (i) MAPK pathway inhibition to apoptosis, (ii) mitotic cell density to tumour growth and, under resistant conditions, (iii) retinoblastoma protein inhibition to cell survival. The model successfully described vemurafenib/abemaciclib-mediated changes in MAPK pathway and cell cycle biomarkers. Initial tumour shrinkage by vemurafenib, acquisition of resistance and subsequent abemaciclib-mediated efficacy were successfully captured and externally validated. Model simulations illustrate the benefit of intermittent vemurafenib therapy over continuous treatment, and indicate that continuous abemaciclib in combination with intermittent vemurafenib offers the potential for considerable tumour regression. The quantitative pharmacology of the abemaciclib/vemurafenib combination was successfully characterised and an optimised, clinically-relevant dosing strategy was identified.

  6. Addressing Climate Change in Long-Term Water Planning Using Robust Decisionmaking

    NASA Astrophysics Data System (ADS)

    Groves, D. G.; Lempert, R.

    2008-12-01

    Addressing climate change in long-term natural resource planning is difficult because future management conditions are deeply uncertain and the range of possible adaptation options is so extensive. These conditions pose challenges to standard optimization decision-support techniques. This talk will describe a methodology called Robust Decisionmaking (RDM) that can complement more traditional analytic approaches by utilizing screening-level water management models to evaluate large numbers of strategies against a wide range of plausible future scenarios. The presentation will describe a recent application of the methodology to evaluate climate adaptation strategies for the Inland Empire Utilities Agency in Southern California. This project found that RDM can provide a useful way to address climate change uncertainty and identify robust adaptation strategies.
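
    The screening step of RDM can be illustrated with a minimax-regret table: each candidate strategy is scored across many plausible scenarios, and the strategy with the smallest worst-case regret is flagged as robust. The costs below are invented for illustration and are not from the Inland Empire Utilities Agency analysis.

```python
# Toy robust-decision screen: minimise the worst-case regret of each
# strategy over a set of plausible scenarios (all numbers are made up).

costs = {                      # cost of each strategy in each scenario
    "status quo":  {"wet": 10, "median": 30, "dry": 90},
    "recycling":   {"wet": 25, "median": 30, "dry": 45},
    "new supply":  {"wet": 40, "median": 42, "dry": 44},
}
scenarios = ["wet", "median", "dry"]

# regret = how much worse a strategy does than the best choice per scenario
best_in = {s: min(c[s] for c in costs.values()) for s in scenarios}
regret = {name: max(c[s] - best_in[s] for s in scenarios)
          for name, c in costs.items()}
robust = min(regret, key=regret.get)
print(robust, regret[robust])
```

Note that the robust choice need not be optimal in any single scenario; it is the one whose performance degrades least across all of them.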

  7. Enhanced Attitude Control Experiment for SSTI Lewis Spacecraft

    NASA Technical Reports Server (NTRS)

    Maghami, Peoman G.

    1997-01-01

    The enhanced attitude control system experiment is a technology demonstration experiment on NASA's small spacecraft technology initiative program's Lewis spacecraft to evaluate advanced attitude control strategies. The purpose of the enhanced attitude control system experiment is to evaluate the feasibility of designing and implementing robust multi-input/multi-output attitude control strategies for enhanced pointing performance of spacecraft, to improve the quality of the measurements of the science instruments. Different control design strategies based on modern and robust control theories are being considered for the enhanced attitude control system experiment. This paper describes the experiment as well as the design and synthesis of a mixed H(sub 2)/H(sub infinity) controller for attitude control. The control synthesis uses a nonlinear programming technique to tune the controller parameters and impose robustness and performance constraints. Simulations are carried out to demonstrate the feasibility of the proposed attitude control design strategy.

  8. Design of optimised backstepping controller for the synchronisation of chaotic Colpitts oscillator using shark smell algorithm

    NASA Astrophysics Data System (ADS)

    Fouladi, Ehsan; Mojallali, Hamed

    2018-01-01

    In this paper, an adaptive backstepping controller has been tuned to synchronise two chaotic Colpitts oscillators in a master-slave configuration. The parameters of the controller are determined using the shark smell optimisation (SSO) algorithm. Numerical results are presented and compared with those of the particle swarm optimisation (PSO) algorithm. Simulation results show better performance in terms of accuracy and convergence for the proposed optimised method compared to a PSO-optimised controller or a non-optimised backstepping controller.

  9. An assessment of multimodal imaging of subsurface text in mummy cartonnage using surrogate papyrus phantoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gibson, Adam; Piquette, Kathryn E.; Bergmann, Uwe

    Ancient Egyptian mummies were often covered with an outer casing, panels and masks made from cartonnage: a lightweight material made from linen, plaster, and recycled papyrus held together with adhesive. Egyptologists, papyrologists, and historians aim to recover and read extant text on the papyrus contained within cartonnage layers, but some methods, such as dissolving mummy casings, are destructive. The use of an advanced range of different imaging modalities was investigated to test the feasibility of non-destructive approaches applied to multi-layered papyrus found in ancient Egyptian mummy cartonnage. Eight different techniques were compared by imaging four synthetic phantoms designed to provide robust, well-understood, yet relevant sample standards using modern papyrus and replica inks. The techniques include optical (multispectral imaging with reflection and transillumination, and optical coherence tomography), X-ray (X-ray fluorescence imaging, X-ray fluorescence spectroscopy, X-ray micro computed tomography and phase contrast X-ray) and terahertz-based approaches. Optical imaging techniques were able to detect inks on all four phantoms, but were unable to significantly penetrate papyrus. X-ray-based techniques were sensitive to iron-based inks with excellent penetration but were not able to detect carbon-based inks. However, using terahertz imaging, it was possible to detect carbon-based inks with good penetration but with less sensitivity to iron-based inks. The phantoms allowed reliable and repeatable tests to be made at multiple sites on three continents. Finally, the tests demonstrated that each imaging modality needs to be optimised for this particular application: it is, in general, not sufficient to repurpose an existing device without modification. Furthermore, it is likely that no single imaging technique will be able to robustly detect and enable the reading of text within ancient Egyptian mummy cartonnage.
However, by carefully selecting, optimising and combining techniques, text contained within these fragile and rare artefacts may eventually be open to non-destructive imaging, identification, and interpretation.

  10. An assessment of multimodal imaging of subsurface text in mummy cartonnage using surrogate papyrus phantoms

    DOE PAGES

    Gibson, Adam; Piquette, Kathryn E.; Bergmann, Uwe; ...

    2018-02-26

    Ancient Egyptian mummies were often covered with an outer casing, panels and masks made from cartonnage: a lightweight material made from linen, plaster, and recycled papyrus held together with adhesive. Egyptologists, papyrologists, and historians aim to recover and read extant text on the papyrus contained within cartonnage layers, but some methods, such as dissolving mummy casings, are destructive. The use of an advanced range of different imaging modalities was investigated to test the feasibility of non-destructive approaches applied to multi-layered papyrus found in ancient Egyptian mummy cartonnage. Eight different techniques were compared by imaging four synthetic phantoms designed to provide robust, well-understood, yet relevant sample standards using modern papyrus and replica inks. The techniques include optical (multispectral imaging with reflection and transillumination, and optical coherence tomography), X-ray (X-ray fluorescence imaging, X-ray fluorescence spectroscopy, X-ray micro computed tomography and phase contrast X-ray) and terahertz-based approaches. Optical imaging techniques were able to detect inks on all four phantoms, but were unable to significantly penetrate papyrus. X-ray-based techniques were sensitive to iron-based inks with excellent penetration but were not able to detect carbon-based inks. However, using terahertz imaging, it was possible to detect carbon-based inks with good penetration but with less sensitivity to iron-based inks. The phantoms allowed reliable and repeatable tests to be made at multiple sites on three continents. Finally, the tests demonstrated that each imaging modality needs to be optimised for this particular application: it is, in general, not sufficient to repurpose an existing device without modification. Furthermore, it is likely that no single imaging technique will be able to robustly detect and enable the reading of text within ancient Egyptian mummy cartonnage. However, by carefully selecting, optimising and combining techniques, text contained within these fragile and rare artefacts may eventually be open to non-destructive imaging, identification, and interpretation.

  11. New machine-learning algorithms for prediction of Parkinson's disease

    NASA Astrophysics Data System (ADS)

    Mandal, Indrajit; Sairam, N.

    2014-03-01

    This article presents enhanced prediction accuracy for the diagnosis of Parkinson's disease (PD), to prevent delay and misdiagnosis of patients, using the proposed robust inference system. New machine-learning methods are proposed, and performance comparisons are based on specificity, sensitivity, accuracy and other measurable parameters. The robust methods of treating PD include sparse multinomial logistic regression, a rotation forest ensemble with support vector machines and principal components analysis, artificial neural networks, and boosting methods. A new ensemble method, comprising a Bayesian network optimised by a Tabu search algorithm as the classifier and Haar wavelets as the projection filter, is used for relevant feature selection and ranking. The highest accuracy, obtained by linear logistic regression and sparse multinomial logistic regression, is 100%, with sensitivity and specificity of 0.983 and 0.996, respectively. All experiments are conducted at 95% and 99% confidence levels and the results are established with corrected t-tests. This work shows a high degree of advancement in the software reliability and quality of the computer-aided diagnosis system and experimentally shows the best results with supportive statistical inference.

  12. The road map towards providing a robust Raman spectroscopy-based cancer diagnostic platform and integration into clinic

    NASA Astrophysics Data System (ADS)

    Lau, Katherine; Isabelle, Martin; Lloyd, Gavin R.; Old, Oliver; Shepherd, Neil; Bell, Ian M.; Dorney, Jennifer; Lewis, Aaran; Gaifulina, Riana; Rodriguez-Justo, Manuel; Kendall, Catherine; Stone, Nicolas; Thomas, Geraint; Reece, David

    2016-03-01

    Despite its demonstrated potential as an accurate cancer diagnostic tool, Raman spectroscopy (RS) is yet to be adopted by the clinic for histopathology reviews. The Stratified Medicine through Advanced Raman Technologies (SMART) consortium has begun to address some of the hurdles to its adoption for cancer diagnosis. These hurdles include awareness and acceptance of the technology, practicality of integration into the histopathology workflow, data reproducibility and availability of transferable models. We have formed a consortium to develop, in a joint effort, optimised protocols for tissue sample preparation, data collection and analysis. These protocols will be supported by the provision of suitable hardware and software tools to allow statistically sound classification models to be built and transferred for use on different systems. In addition, we are building a validated gastrointestinal (GI) cancers model, which can be trialled as part of the histopathology workflow at hospitals, and a classification tool. At the end of the project, we aim to deliver a robust Raman-based diagnostic platform to enable clinical researchers to stage cancer, define tumour margins, build cancer diagnostic models and discover novel disease biomarkers.

  13. Mixed H2/H∞ distributed robust model predictive control for polytopic uncertain systems subject to actuator saturation and missing measurements

    NASA Astrophysics Data System (ADS)

    Song, Yan; Fang, Xiaosheng; Diao, Qingda

    2016-03-01

    In this paper, we discuss the mixed H2/H∞ distributed robust model predictive control problem for polytopic uncertain systems subject to randomly occurring actuator saturation and packet loss. The global system is decomposed into several subsystems connected by a fixed-topology network, which defines the packet loss among the subsystems. To make better use of the information successfully transmitted over the network, both the phenomena of actuator saturation and packet loss resulting from the limitation of the communication bandwidth are taken into consideration. A novel distributed controller model is established to account for actuator saturation and packet loss in a unified representation by using two sets of Bernoulli distributed white sequences with known conditional probabilities. With the nonlinear feedback control law represented by the convex hull of a group of linear feedback laws, the distributed controllers for subsystems are obtained by solving a linear matrix inequality (LMI) optimisation problem. Finally, numerical studies demonstrate the effectiveness of the proposed techniques.
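    The unified treatment of actuator saturation and randomly occurring packet loss described above can be illustrated with a minimal simulation sketch; the controller signal, saturation limit and loss probability below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def saturate(u, u_max):
    """Actuator saturation: clip the control input to its limits."""
    return float(np.clip(u, -u_max, u_max))

def network_step(u, u_prev, p_loss, u_max, rng):
    """One step of the actuator channel: a Bernoulli draw decides whether
    the packet arrives; on loss the actuator holds the last applied input."""
    received = rng.random() >= p_loss          # Bernoulli(1 - p_loss)
    u_applied = saturate(u, u_max) if received else u_prev
    return u_applied, received

rng = np.random.default_rng(0)
u_prev, losses, applied = 0.0, 0, []
for k in range(1000):
    u = 2.0 * np.sin(0.1 * k)                  # hypothetical controller output
    u_prev, ok = network_step(u, u_prev, p_loss=0.2, u_max=1.0, rng=rng)
    losses += not ok
    applied.append(u_prev)
```

    In the paper the saturation and loss indicators enter the closed-loop analysis as Bernoulli white sequences with known probabilities; here they simply drive a simulation of the applied input.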

  14. Turning science on robust cattle into improved genetic selection decisions.

    PubMed

    Amer, P R

    2012-04-01

    More robust cattle have the potential to increase farm profitability, improve animal welfare, reduce the contribution of ruminant livestock to greenhouse gas emissions and decrease the risk of food shortages in the face of increased variability in the farm environment. Breeding is a powerful tool for changing the robustness of cattle; however, insufficient recording of breeding goal traits and selection of animals at younger ages tend to favour genetic change in productivity traits relative to robustness traits. This paper extends a previously proposed theory of artificial evolution to demonstrate, using deterministic simulation, how the choice of breeding scheme design can be used as a tool to manipulate the direction of genetic progress, while the breeding goal remains focussed on the factors motivating individual farm decision makers. Particular focus was placed on the transition from progeny testing or mass selection to genomic selection breeding strategies. Transition to genomic selection from a breeding strategy where candidates are selected before progeny records become available was shown to be highly likely to favour genetic progress in robustness traits relative to productivity traits. This was shown even with modest numbers of animals available for training and when heritability for robustness traits was only slightly lower than that for productivity traits. When transitioning from progeny testing to a genomic selection strategy without progeny testing, it was shown that there is a significant risk that robustness traits could become less influential in selection relative to productivity traits. Augmentation of training populations using genotyped cows, and support for industry-wide improvements in phenotypic recording of robustness traits, were put forward as investment opportunities for stakeholders wishing to facilitate the application of science on robust cattle in improved genetic selection schemes.

  15. Quantum Communications Systems

    DTIC Science & Technology

    2012-09-21

    metrology practical. The strategy was to develop robust photonic quantum states and sensors serving as an archetype for loss-tolerant information...communications and metrology. Our strategy consisted of developing robust photonic quantum states and sensors serving as an archetype for loss-tolerant...developed atomic memories in caesium vapour, based on a stimulated Raman transition, that have demonstrated a TBP greater than 1000 and are uniquely suited

  16. Vaccine strategies: Optimising outcomes.

    PubMed

    Hardt, Karin; Bonanni, Paolo; King, Susan; Santos, Jose Ignacio; El-Hodhod, Mostafa; Zimet, Gregory D; Preiss, Scott

    2016-12-20

    Successful immunisation programmes generally result from high vaccine effectiveness and adequate uptake of vaccines. In the development of new vaccination strategies, the structure and strength of the local healthcare system is a key consideration. In high income countries, existing infrastructures are usually used, while in less developed countries, the capacity for introducing new vaccines may need to be strengthened, particularly for vaccines administered beyond early childhood, such as the measles or human papillomavirus (HPV) vaccine. Reliable immunisation service funding is another important factor and low income countries often need external supplementary sources of finance. Many regions also obtain support in generating an evidence base for vaccination via initiatives created by organisations including World Health Organization (WHO), the Pan American Health Organization (PAHO), the Agence de Médecine Préventive and the Sabin Vaccine Institute. Strong monitoring and surveillance mechanisms are also required. An example is the efficient and low-cost approaches for measuring the impact of the hepatitis B control initiative and evaluating achievement of goals that have been established in the WHO Western Pacific region. A review of implementation strategies reveals differing degrees of success. For example, in the Americas, PAHO advanced a measles-mumps-rubella vaccine strategy, targeting different population groups in mass, catch-up and follow-up vaccination campaigns. This has had much success but coverage data from some parts of the region suggest that children are still not receiving all appropriate vaccines, highlighting problems with local service infrastructures. Stark differences in coverage levels are also observed among high income countries, as is the case with HPV vaccine implementation in the USA versus the UK and Australia, reflecting differences in delivery settings. 
Experience and research have shown which vaccine strategies work well and the factors that encourage success, which often include strong support from government and healthcare organisations, as well as tailored, culturally-appropriate local approaches to optimise outcomes. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  17. Feedback control methods for drug dosage optimisation. Concepts, classification and clinical application.

    PubMed

    Vozeh, S; Steimer, J L

    1985-01-01

    The concept of feedback control methods for drug dosage optimisation is described from the viewpoint of control theory. The control system consists of 5 parts: (a) patient (the controlled process); (b) response (the measured feedback); (c) model (the mathematical description of the process); (d) adaptor (to update the parameters); and (e) controller (to determine the optimum dosing strategy). In addition to the conventional distinction between open-loop and closed-loop control systems, a classification is proposed for dosage optimisation techniques which distinguishes between tight-loop and loose-loop methods depending on whether the physician's interaction is absent or included as part of the control step. Unlike engineering problems where the process can usually be controlled by fully automated devices, therapeutic situations often require that the physician be included in the decision-making process to determine the 'optimal' dosing strategy. Tight-loop and loose-loop methods can be further divided into adaptive and non-adaptive, depending on the presence of the adaptor. The main application areas of tight-loop feedback control methods are general anaesthesia, control of blood pressure, and insulin delivery devices. Loose-loop feedback methods have been used for oral anticoagulation and in therapeutic drug monitoring. The methodology, advantages and limitations of the different approaches are reviewed. A general feature common to all application areas could be observed: to perform well under routine clinical conditions, which are characterised by large interpatient variability and sometimes also intrapatient changes, control systems should be adaptive. Apart from application in routine drug treatment, feedback control methods represent an important research tool. They can be applied for the investigation of pathophysiological and pharmacodynamic processes. A most promising application is the evaluation of the relationship between an intermediate response (e.g. 
drug level), which is often used as feedback for dosage adjustment, and the final therapeutic goal.
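    The adapt-then-control loop described above can be sketched with a one-compartment steady-state model (Css = dose rate / clearance); the clearance values, target concentration and blending weight below are illustrative assumptions, not taken from the article.

```python
def update_clearance(cl_prior, dose_rate, measured_css, weight=0.5):
    """Adaptor: blend the prior clearance estimate with the value implied
    by the measured steady-state concentration (Css = dose_rate / CL)."""
    cl_measured = dose_rate / measured_css
    return (1 - weight) * cl_prior + weight * cl_measured

def next_dose_rate(cl_est, target_css):
    """Controller: choose the dose rate that hits the target under the
    current model."""
    return cl_est * target_css

# Simulated patient with true clearance 3 L/h; the clinician targets 10 mg/L.
true_cl, target, cl_est = 3.0, 10.0, 5.0
for cycle in range(10):
    dose = next_dose_rate(cl_est, target)              # prescribe
    measured = dose / true_cl                          # observe steady state
    cl_est = update_clearance(cl_est, dose, measured)  # adapt
```

    A non-adaptive method would skip the update step and keep the initial clearance estimate; the adaptor is what lets the controller cope with interpatient variability.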

  18. Improved packing of protein side chains with parallel ant colonies.

    PubMed

    Quan, Lijun; Lü, Qiang; Li, Haiou; Xia, Xiaoyan; Wu, Hongjie

    2014-01-01

    The accurate packing of protein side chains is important for many computational biology problems, such as ab initio protein structure prediction, homology modelling, protein design and ligand docking applications. Many existing solutions model it as a computational optimisation problem. Besides the design of the search algorithm, most solutions suffer from an inaccurate energy function for judging whether a prediction is good or bad. Even if the search has found the lowest energy, there is no certainty of obtaining the protein structures with correct side chains. We present a side-chain modelling method, pacoPacker, which uses a parallel ant colony optimisation strategy based on sharing a single pheromone matrix. This parallel approach combines different sources of energy functions and generates protein side-chain conformations with the lowest energies jointly determined by the various energy functions. We further optimised the selected rotamers to construct subrotamers by rotamer minimisation, which reasonably improved the discreteness of the rotamer library. We focused on improving the accuracy of side-chain conformation prediction. For a testing set of 442 proteins, 87.19% of X1 and 77.11% of X12 angles were predicted correctly within 40° of the X-ray positions. We compared the accuracy of pacoPacker with state-of-the-art methods, such as CIS-RR and SCWRL4, and analysed the results from different perspectives, in terms of whole protein chains and individual residues. In this comprehensive benchmark testing, 51.5% of proteins within a length of 400 amino acids predicted by pacoPacker were superior to the results of CIS-RR and SCWRL4 simultaneously. Finally, we also showed the advantage of using the subrotamers strategy. All results confirmed that our parallel approach is competitive with state-of-the-art solutions for packing side chains. This parallel approach combines various sources of searching intelligence and energy functions to pack protein side chains. It provides a framework for combining objective functions of differing accuracy and usefulness by designing parallel heuristic search algorithms.
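    A minimal sketch of the shared-pheromone idea, assuming a toy discrete assignment problem with placeholder energy functions (pacoPacker's actual energy terms and rotamer libraries are not reproduced here):

```python
import numpy as np

def aco_pack(energy_fns, n_residues, n_rotamers, n_ants=20, n_iters=50,
             rho=0.1, seed=0):
    """Toy ant-colony search over discrete rotamer choices.

    All ants share one pheromone matrix (residues x rotamers), and the
    deposit is driven by a combination of several energy functions,
    mirroring the idea of jointly using multiple energy sources."""
    rng = np.random.default_rng(seed)
    tau = np.ones((n_residues, n_rotamers))      # shared pheromone matrix
    best_choice, best_energy = None, np.inf
    for _ in range(n_iters):
        for _ in range(n_ants):
            probs = tau / tau.sum(axis=1, keepdims=True)
            choice = np.array([rng.choice(n_rotamers, p=probs[i])
                               for i in range(n_residues)])
            energy = sum(f(choice) for f in energy_fns)   # joint score
            if energy < best_energy:
                best_choice, best_energy = choice, energy
            tau *= (1.0 - rho)                   # evaporation
            tau[np.arange(n_residues), choice] += 1.0 / (1.0 + energy)
    return best_choice, best_energy

# Toy problem: the optimum is rotamer 0 at every residue.
fns = [lambda c: float(np.sum(c)), lambda c: float(np.sum(c ** 2))]
choice, energy = aco_pack(fns, n_residues=8, n_rotamers=4)
```

    In the parallel setting of the paper, several colonies would update this single matrix concurrently; the sequential loop above keeps the sketch self-contained.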

  19. Local pursuit strategy-inspired cooperative trajectory planning algorithm for a class of nonlinear constrained dynamical systems

    NASA Astrophysics Data System (ADS)

    Xu, Yunjun; Remeikas, Charles; Pham, Khanh

    2014-03-01

    Cooperative trajectory planning is crucial for networked vehicles to respond rapidly in cluttered environments and has a significant impact on many applications such as air traffic or border security monitoring and assessment. One of the challenges in cooperative planning is to find a computationally efficient algorithm that can accommodate both the complexity of the environment and real hardware and configuration constraints of vehicles in the formation. Inspired by a local pursuit strategy observed in foraging ants, feasible and optimal trajectory planning algorithms are proposed in this paper for a class of nonlinear constrained cooperative vehicles in environments with densely populated obstacles. In an iterative hierarchical approach, the local behaviours, such as the formation stability, obstacle avoidance, and individual vehicle's constraints, are considered in each vehicle's (i.e. follower's) decentralised optimisation. The cooperative-level behaviours, such as the inter-vehicle collision avoidance, are considered in the virtual leader's centralised optimisation. Early termination conditions are derived to reduce the computational cost by not wasting time in the local-level optimisation if the virtual leader trajectory does not satisfy those conditions. 
The expected advantages of the proposed algorithms are (1) the formation can be globally asymptotically maintained in a decentralised manner; (2) each vehicle decides its local trajectory using only the virtual leader and its own information; (3) the formation convergence speed is controlled by one single parameter, which makes it attractive for many practical applications; (4) nonlinear dynamics and many realistic constraints, such as the speed limitation and obstacle avoidance, can be easily considered; (5) inter-vehicle collision avoidance can be guaranteed in both the formation transient stage and the formation steady stage; and (6) the computational cost in finding both the feasible and optimal solutions is low. In particular, the feasible solution can be computed in a very quick fashion. The minimum energy trajectory planning for a group of robots in an obstacle-laden environment is simulated to showcase the advantages of the proposed algorithms.

  20. Optimising resource management in neurorehabilitation.

    PubMed

    Wood, Richard M; Griffiths, Jeff D; Williams, Janet E; Brouwers, Jakko

    2014-01-01

    To date, little research has been published regarding the effective and efficient management of resources (beds and staff) in neurorehabilitation, despite neurorehabilitation being an expensive service in limited supply. This paper demonstrates how mathematical modelling can be used to optimise service delivery, by way of a case study at a major 21-bed neurorehabilitation unit in the UK. An automated computer program for assigning weekly treatment sessions is developed. Queueing modelling is used to construct a mathematical model of the hospital in terms of referral submissions to a waiting list, admission and treatment, and ultimately discharge. This is used to analyse the impact of hypothetical strategic decisions on a variety of performance measures and costs. The project culminates in a hybridised model of these two approaches, since a relationship is found between the number of therapy hours received each week (scheduling output) and length of stay (queueing model input). The introduction of the treatment scheduling program has substantially improved timetable quality (meaning a better and fairer service to patients) and has reduced employee time expended in its creation by approximately six hours each week (freeing up time for clinical work). The queueing model has been used to assess the effect of potential strategies, such as increasing the number of beds or employing more therapists. The use of mathematical modelling has not only optimised resources in the short term, but has allowed the optimality of longer-term strategic decisions to be assessed.
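    The queueing side of such a model can be sketched with the classic M/M/c (Erlang C) waiting probability, treating beds as servers; the referral rate and mean length of stay below are illustrative assumptions, not the unit's actual figures.

```python
import math

def erlang_c(arrival_rate, service_rate, servers):
    """Probability an arriving patient must wait for a bed (M/M/c queue)."""
    a = arrival_rate / service_rate          # offered load in Erlangs
    if a >= servers:
        return 1.0                           # unstable queue: everyone waits
    s = sum(a ** k / math.factorial(k) for k in range(servers))
    top = a ** servers / math.factorial(servers) * servers / (servers - a)
    return top / (s + top)

# e.g. 2 referrals/week with a mean stay of 8 weeks -> offered load of 16 beds
p_wait_21 = erlang_c(2.0, 1 / 8.0, 21)
p_wait_25 = erlang_c(2.0, 1 / 8.0, 25)
```

    Comparing p_wait_21 with p_wait_25 mirrors the kind of what-if question the paper's model answers, such as the effect of adding beds.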

  1. An optimisation methodology of artificial neural network models for predicting solar radiation: a case study

    NASA Astrophysics Data System (ADS)

    Rezrazi, Ahmed; Hanini, Salah; Laidi, Maamar

    2016-02-01

    The right design and high efficiency of solar energy systems require accurate information on the availability of solar radiation. Due to the cost of purchase and maintenance of radiometers, these data are not readily available, so there is a need to develop alternative ways of generating them. Artificial neural networks (ANNs) are excellent and effective tools for learning, pinpointing or generalising data regularities, as they have the ability to model nonlinear functions; they can also cope with complex `noisy' data. The main objective of this paper is to show how to reach an optimal ANN model for application in the prediction of solar radiation. The measured data of the year 2007 in Ghardaïa city (Algeria) are used to demonstrate the optimisation methodology. The performance evaluation and the comparison of results of ANN models with measured data are made on the basis of the mean absolute percentage error (MAPE). It is found that the MAPE of the optimal ANN model reaches 1.17 %. This model also yields a root mean square error (RMSE) of 14.06 % and an MBE of 0.12. The accuracy of the outputs exceeded 97 % and reached up to 99.29 %. The results obtained indicate that the optimisation strategy satisfies practical requirements. It can successfully be generalised for any location in the world and be used in fields other than solar radiation estimation.
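    The error measures quoted above (MAPE, RMSE, MBE) are straightforward to compute; the sample values below are illustrative, not the Ghardaïa measurements.

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error (%)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(100.0 * np.mean(np.abs((y_true - y_pred) / y_true)))

def rmse(y_true, y_pred):
    """Root mean square error."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mbe(y_true, y_pred):
    """Mean bias error (positive = systematic over-prediction)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(y_pred - y_true))

measured  = [520.0, 610.0, 480.0, 700.0]   # illustrative irradiance values
predicted = [510.0, 620.0, 490.0, 690.0]
```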

  2. Lung protective ventilation strategies in paediatrics-A review.

    PubMed

    Jauncey-Cooke, Jacqui I; Bogossian, Fiona; East, Chris E

    2010-05-01

    Ventilator Associated Lung Injury (VALI) is an iatrogenic phenomenon that significantly impacts on the morbidity and mortality of critically ill patients. The hazards associated with mechanical ventilation are becoming increasingly well understood courtesy of a large body of research. Barotrauma, volutrauma and biotrauma all play a role in VALI. Concomitant with this growth in understanding is the development of strategies to reduce the deleterious impact of mechanical ventilation. The majority of the research is based upon adult populations, but with careful extrapolation this review will focus on paediatrics. This review article describes the physiological basis of VALI and discusses the various lung protective strategies that clinicians can employ to minimise its incidence and optimise outcomes for paediatric patients. Copyright 2009 Australian College of Critical Care Nurses Ltd. All rights reserved.

  3. Optimisation of groundwater level monitoring networks using geostatistical modelling based on the Spartan family variogram and a genetic algorithm method

    NASA Astrophysics Data System (ADS)

    Parasyris, Antonios E.; Spanoudaki, Katerina; Kampanis, Nikolaos A.

    2016-04-01

    Groundwater level monitoring networks provide essential information for water resources management, especially in areas with significant groundwater exploitation for agricultural and domestic use. Given the high maintenance costs of these networks, development of tools which can be used by regulators for efficient network design is essential. In this work, a monitoring network optimisation tool is presented. The network optimisation tool couples geostatistical modelling based on the Spartan family variogram with a genetic algorithm method and is applied to Mires basin in Crete, Greece, an area of high socioeconomic and agricultural interest which suffers from groundwater overexploitation, leading to a dramatic decrease of groundwater levels. The purpose of the optimisation tool is to determine which wells to exclude from the monitoring network because they add little or no beneficial information to groundwater level mapping of the area. Unlike previous relevant investigations, the network optimisation tool presented here uses Ordinary Kriging with the recently-established non-differentiable Spartan variogram for groundwater level mapping, which, based on a previous geostatistical study in the area, leads to optimal groundwater level mapping. Seventy boreholes operate in the area for groundwater abstraction and water level monitoring. The Spartan variogram gives overall the most accurate groundwater level estimates, followed closely by the power-law model. The geostatistical model is coupled to an integer genetic algorithm method programmed in MATLAB 2015a. The algorithm is used to find the set of wells whose removal leads to the minimum error between the original water level mapping using all the available wells in the network and the groundwater level mapping using the reduced well network (error is defined as the 2-norm of the difference between the original mapping matrix with 70 wells and the mapping matrix of the reduced well network). The solution to the optimisation problem (the best wells to retain in the monitoring network) depends on the total number of wells removed; this number is a management decision. The water level monitoring network of Mires basin has been optimised six times, by removing 5, 8, 12, 15, 20 and 25 wells from the original network. In order to achieve the optimum solution in the minimum possible computational time, a stall-generations criterion was set for each optimisation scenario. An improvement made to the classic genetic algorithm was to vary the mutation and crossover fractions with the change of the mean fitness value: when the solution converges, reproduction becomes more random to avoid local minima, while a larger change in the mean fitness value favours more educated reproduction (a higher crossover ratio). The integer genetic algorithm in MATLAB 2015a restricts the addition of custom selection and crossover-mutation functions. Therefore, custom population and crossover-mutation-selection functions were created to set the initial population type to custom and to allow the mutation and crossover probabilities to change with the convergence of the genetic algorithm, thus achieving higher accuracy. The application of the network optimisation tool to Mires basin indicates that 25 wells can be removed with a relatively small deterioration of the groundwater level map. The results indicate the robustness of the network optimisation tool: wells were removed from high well-density areas while preserving the spatial pattern of the original groundwater level map. Varouchakis, E. A. and D. T. Hristopulos (2013). "Improvement of groundwater level prediction in sparsely gauged basins using physical laws and local geographic features as auxiliary variables." Advances in Water Resources 52: 34-49.
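    The fitness driving the genetic algorithm, the 2-norm between the full-network map and the reduced-network map, can be sketched as follows; inverse-distance weighting stands in for Ordinary Kriging with the Spartan variogram, and the synthetic well field is an illustrative assumption.

```python
import numpy as np

def idw_map(wells_xy, levels, grid, power=2.0):
    """Inverse-distance interpolation of water levels onto a grid, used
    here as a simple stand-in for Ordinary Kriging."""
    d = np.linalg.norm(grid[:, None, :] - wells_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return (w * levels).sum(axis=1) / w.sum(axis=1)

def fitness(keep_idx, wells_xy, levels, grid, full_map):
    """2-norm between the full-network map and the reduced-network map,
    matching the error definition quoted in the abstract."""
    reduced = idw_map(wells_xy[keep_idx], levels[keep_idx], grid)
    return float(np.linalg.norm(full_map - reduced))

rng = np.random.default_rng(1)
wells = rng.uniform(0, 10, size=(70, 2))                 # 70 synthetic wells
levels = 50 + 0.5 * wells[:, 0] - 0.3 * wells[:, 1]      # smooth water table
grid = np.stack(np.meshgrid(np.linspace(0, 10, 20),
                            np.linspace(0, 10, 20)), -1).reshape(-1, 2)
full = idw_map(wells, levels, grid)
err = fitness(np.arange(45), wells, levels, grid, full)  # remove 25 wells
```

    A genetic algorithm would evolve the candidate keep_idx sets, minimising err for a fixed number of removed wells.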

  4. Principles and Applications of Liquid Chromatography-Mass Spectrometry in Clinical Biochemistry

    PubMed Central

    Pitt, James J

    2009-01-01

    Liquid chromatography-mass spectrometry (LC-MS) is now a routine technique with the development of electrospray ionisation (ESI) providing a simple and robust interface. It can be applied to a wide range of biological molecules and the use of tandem MS and stable isotope internal standards allows highly sensitive and accurate assays to be developed although some method optimisation is required to minimise ion suppression effects. Fast scanning speeds allow a high degree of multiplexing and many compounds can be measured in a single analytical run. With the development of more affordable and reliable instruments, LC-MS is starting to play an important role in several areas of clinical biochemistry and compete with conventional liquid chromatography and other techniques such as immunoassay. PMID:19224008

  5. High-fidelity meshes from tissue samples for diffusion MRI simulations.

    PubMed

    Panagiotaki, Eleftheria; Hall, Matt G; Zhang, Hui; Siow, Bernard; Lythgoe, Mark F; Alexander, Daniel C

    2010-01-01

    This paper presents a method for constructing detailed geometric models of tissue microstructure for synthesising realistic diffusion MRI data. We construct three-dimensional mesh models from confocal microscopy image stacks using the marching cubes algorithm. Random-walk simulations within the resulting meshes provide synthetic diffusion MRI measurements. Experiments optimise the simulation parameters and the complexity of the meshes to achieve accuracy and reproducibility while minimising computation time. Finally, we assess the quality of the data synthesised from the mesh models by comparison with scanner data, as well as with synthetic data from simple geometric models and from simplified meshes that vary only in two dimensions. The results support the extra complexity of the three-dimensional mesh compared to simpler models, although they are quite robust to the mesh resolution.

  6. Trans-dimensional Bayesian inversion of airborne electromagnetic data for 2D conductivity profiles

    NASA Astrophysics Data System (ADS)

    Hawkins, Rhys; Brodie, Ross C.; Sambridge, Malcolm

    2018-02-01

    This paper presents the application of a novel trans-dimensional sampling approach to a time domain airborne electromagnetic (AEM) inverse problem to solve for plausible conductivities of the subsurface. Geophysical inverse field problems, such as time domain AEM, are well known to have a large degree of non-uniqueness. Common least-squares optimisation approaches fail to take this into account and provide a single solution with linearised estimates of uncertainty that can result in overly optimistic appraisal of the conductivity of the subsurface. In this new non-linear approach, the spatial complexity of a 2D profile is controlled directly by the data. By examining an ensemble of proposed conductivity profiles it accommodates non-uniqueness and provides more robust estimates of uncertainties.

  7. A robust detector for rolling element bearing condition monitoring based on the modulation signal bispectrum and its performance evaluation against the Kurtogram

    NASA Astrophysics Data System (ADS)

    Tian, Xiange; Xi Gu, James; Rehab, Ibrahim; Abdalla, Gaballa M.; Gu, Fengshou; Ball, A. D.

    2018-02-01

    Envelope analysis is a widely used method for rolling element bearing fault detection. To obtain high detection accuracy, it is critical to determine an optimal frequency narrowband for the envelope demodulation. However, many of the schemes which are used for the narrowband selection, such as the Kurtogram, can produce poor detection results because they are sensitive to random noise and aperiodic impulses which normally occur in practical applications. To achieve the purposes of denoising and frequency band optimisation, this paper proposes a novel modulation signal bispectrum (MSB) based robust detector for bearing fault detection. Because of its inherent noise suppression capability, the MSB allows effective suppression of both stationary random noise and discrete aperiodic noise. The high magnitude features that result from the use of the MSB also enhance the modulation effects of a bearing fault and can be used to provide optimal frequency bands for fault detection. The Kurtogram is generally accepted as a powerful means of selecting the most appropriate frequency band for envelope analysis, and as such it has been used as the benchmark comparator for performance evaluation in this paper. Both simulated and experimental data analysis results show that the proposed method produces more accurate and robust detection results than Kurtogram based approaches for common bearing faults under a range of representative scenarios.
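    The envelope analysis step itself (whose demodulation band the Kurtogram or the proposed MSB detector selects) can be sketched with an FFT-based Hilbert transform; the MSB is not implemented here, and the carrier, modulation rate and sampling values are illustrative assumptions.

```python
import numpy as np

def envelope_spectrum(x, fs):
    """Envelope analysis: build the analytic signal via an FFT-based
    Hilbert transform, then take the spectrum of the envelope, where
    bearing fault repetition rates appear as discrete lines."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)                      # one-sided spectrum weights
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    analytic = np.fft.ifft(X * h)
    env = np.abs(analytic) - np.mean(np.abs(analytic))
    spec = np.abs(np.fft.rfft(env)) / n
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    return freqs, spec

# Synthetic bearing signature: a 3 kHz resonance amplitude-modulated at a
# 100 Hz fault repetition rate (values are illustrative).
fs = 20000
t = np.arange(fs) / fs                   # 1 s of signal
x = (1.0 + 0.8 * np.cos(2 * np.pi * 100 * t)) * np.sin(2 * np.pi * 3000 * t)
freqs, spec = envelope_spectrum(x, fs)
fault_line = freqs[np.argmax(spec[1:]) + 1]   # strongest nonzero line
```

    In practice the raw vibration signal is band-pass filtered around the band chosen by the Kurtogram or MSB before this demodulation step; noise and aperiodic impulses are what make that band choice hard.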

  8. Resource Planning for Massive Number of Process Instances

    NASA Astrophysics Data System (ADS)

    Xu, Jiajie; Liu, Chengfei; Zhao, Xiaohui

    Resource allocation has been recognised as an important topic for business process execution. In this paper, we focus on planning resources for a massive number of process instances, to meet process requirements and ensure rational utilisation of resources before execution. After a motivating example, we present a model for planning resources for process instances. We then design a set of heuristic rules that take into account both optimised planning at build time and instance dependencies at run time. Based on these rules we propose two resource-planning strategies, one called holistic and the other batched. Both strategies target a lower cost; however, the holistic strategy can meet an earlier deadline, while the batched strategy aims at rational use of resources. We discuss how to strike a balance between the two, supported by a comprehensive experimental study of both approaches.

  9. A Robust Design Methodology for Optimal Microscale Secondary Flow Control in Compact Inlet Diffusers

    NASA Technical Reports Server (NTRS)

    Anderson, Bernhard H.; Keller, Dennis J.

    2001-01-01

    The purpose of this study is to develop an economical Robust Design methodology for microscale secondary flow control in compact inlet diffusers. To illustrate its potential, two different mission strategies were considered for the subject inlet, namely Maximum Performance and Maximum HCF Life Expectancy. The Maximum Performance mission maximized total pressure recovery, while the Maximum HCF Life Expectancy mission minimized the mean of the first five Fourier harmonic amplitudes, i.e., 'collectively' reduced all the harmonic half-amplitudes of engine face distortion. Each mission strategy was subject to a low engine face distortion constraint, i.e., DC60<0.10, which is a level acceptable for commercial engines. For each of these mission strategies, an 'Optimal Robust' (open-loop control) and an 'Optimal Adaptive' (closed-loop control) installation was designed over a twenty-degree angle-of-incidence range. The Optimal Robust installation used economical Robust Design methodology to arrive at a single design which operated over the entire angle-of-incidence range (open-loop control). The Optimal Adaptive installation optimized all the design parameters at each angle-of-incidence, and would therefore require a closed-loop control system to sense a proper signal for each effector and modify that effector device, whether mechanical or fluidic, for optimal inlet performance. In general, the performance differences between the Optimal Adaptive and Optimal Robust installation designs were found to be marginal. This suggests that Optimal Robust open-loop installation designs can be very competitive with Optimal Adaptive closed-loop designs: secondary flow control in inlets is inherently robust, provided it is optimally designed.
Therefore, the new methodology presented in this paper, a combined-array 'Lower Order' approach to Robust DOE, offers the aerodynamicist a very viable and economical way of exploring the concept of Robust inlet design, where the mission variables are brought directly into the inlet design process and insensitivity, or robustness, to the mission variables becomes a design objective.

  10. Warpage optimisation on the moulded part with straight-drilled and conformal cooling channels using response surface methodology (RSM) and glowworm swarm optimisation (GSO)

    NASA Astrophysics Data System (ADS)

    Hazwan, M. H. M.; Shayfull, Z.; Sharif, S.; Nasir, S. M.; Zainal, N.

    2017-09-01

    In the injection moulding process, quality and productivity are notably important and must be controlled for each product type produced. Quality is measured as the extent of warpage of the moulded parts, while productivity is measured as the duration of the moulding cycle time. To control quality, many researchers have introduced various optimisation approaches which have been proven to enhance the quality of the moulded part produced. To improve the productivity of the injection moulding process, some researchers have proposed the application of conformal cooling channels, which have been proven to reduce the duration of the moulding cycle time. Therefore, this paper presents an alternative optimisation approach, Response Surface Methodology (RSM) with Glowworm Swarm Optimisation (GSO), applied to a moulded part with straight-drilled and conformal cooling channel moulds. This study examined the warpage of the moulded parts before and after the optimisation work for both cooling channel types. A front panel housing was selected as the specimen, and the performance of the proposed optimisation approach was analysed for conventional straight-drilled cooling channels compared to Milled Groove Square Shape (MGSS) conformal cooling channels, by simulation analysis using Autodesk Moldflow Insight (AMI) 2013. Based on the results, melt temperature is the most significant factor contributing to warpage for the straight-drilled cooling channels, with warpage improved by 39.1% after optimisation, while cooling time is the most significant factor for the MGSS conformal cooling channels, with warpage improved by 38.7% after optimisation. In addition, the findings show that applying the optimisation work to the conformal cooling channels offers better quality and productivity of the moulded part produced.

  11. Development of a core outcome set for effectiveness trials aimed at optimising prescribing in older adults in care homes.

    PubMed

    Millar, Anna N; Daffu-O'Reilly, Amrit; Hughes, Carmel M; Alldred, David P; Barton, Garry; Bond, Christine M; Desborough, James A; Myint, Phyo K; Holland, Richard; Poland, Fiona M; Wright, David

    2017-04-12

    Prescribing medicines for older adults in care homes is known to be sub-optimal. Whilst trials testing interventions to optimise prescribing in this setting have been published, heterogeneity in outcome reporting has hindered comparison of interventions, thus limiting evidence synthesis. The aim of this study was to develop a core outcome set (COS), a list of outcomes which should be measured and reported, as a minimum, for all effectiveness trials involving optimising prescribing in care homes. The COS was developed as part of the Care Homes Independent Pharmacist Prescribing Study (CHIPPS). A long-list of outcomes was identified through a review of published literature and stakeholder input. Outcomes were reviewed and refined prior to entering a two-round online Delphi exercise and then distributed via a web link to the CHIPPS Management Team, a multidisciplinary team including pharmacists, doctors and Patient and Public Involvement representatives (amongst others), who comprised the Delphi panel. The Delphi panellists (n = 19) rated the importance of outcomes on a 9-point Likert scale from 1 (not important) to 9 (critically important). Consensus for an outcome being included in the COS was defined as ≥70% of participants scoring 7-9 and <15% scoring 1-3. Exclusion was defined as ≥70% scoring 1-3 and <15% scoring 7-9. Individual and group scores were fed back to participants alongside the second questionnaire round, which included outcomes for which no consensus had been achieved. A long-list of 63 potential outcomes was identified. Refinement of this long-list resulted in 29 outcomes, which were included in the Delphi questionnaire (round 1). Following both rounds of the Delphi exercise, 13 outcomes (organised into seven overarching domains: medication appropriateness, adverse drug events, prescribing errors, falls, quality of life, all-cause mortality and admissions to hospital (and associated costs)) met the criteria for inclusion in the final COS.
We have developed a COS for effectiveness trials aimed at optimising prescribing in older adults in care homes using robust methodology. Widespread adoption of this COS will facilitate evidence synthesis between trials. Future work should focus on evaluating appropriate tools for these key outcomes to further reduce heterogeneity in outcome measurement in this context.
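The Delphi consensus rules quoted above are mechanical and easy to mis-read in prose; a minimal sketch, with invented panel ratings, is:

```python
def classify_outcome(ratings):
    """Apply the study's Delphi consensus rules to one outcome's ratings
    (1-9 Likert scale).
    Include:  >= 70% of panellists score 7-9 and < 15% score 1-3.
    Exclude:  >= 70% score 1-3 and < 15% score 7-9.
    Anything else goes to the next round as 'no consensus'."""
    n = len(ratings)
    hi = sum(7 <= r <= 9 for r in ratings) / n
    lo = sum(1 <= r <= 3 for r in ratings) / n
    if hi >= 0.70 and lo < 0.15:
        return "include"
    if lo >= 0.70 and hi < 0.15:
        return "exclude"
    return "no consensus"

print(classify_outcome([9, 8, 7, 9, 8, 7, 7, 8, 5, 2]))  # include (80% high, 10% low)
print(classify_outcome([5, 6, 4, 7, 3, 8, 2, 9, 1, 5]))  # no consensus
```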

  12. Stochastic optimisation of water allocation on a global scale

    NASA Astrophysics Data System (ADS)

    Schmitz, Oliver; Straatsma, Menno; Karssenberg, Derek; Bierkens, Marc F. P.

    2014-05-01

    Climate change, increasing population and further economic developments are expected to increase water scarcity for many regions of the world. Optimal water management strategies are required to minimise the gap between water supply and domestic, industrial and agricultural water demand. A crucial aspect of water allocation is the spatial scale of optimisation. Blue water supply peaks in the upstream parts of large catchments, whereas demands are often largest in the industrialised downstream parts. Two extremes exist in water allocation: (i) 'first come, first served', which allows the upstream water demands to be fulfilled without consideration of downstream demands, and (ii) 'all for one, one for all', which optimises water allocation over the whole catchment. In practice, water treaties govern intermediate solutions. The objective of this study is to determine the effect of these two end members on water allocation optimisation with respect to water scarcity. We conduct this study on a global scale with the year 2100 as the temporal horizon. Water supply is calculated using the hydrological model PCR-GLOBWB, operating at a 5 arc-minute resolution and a daily time step. PCR-GLOBWB is forced with temperature and precipitation fields from the HadGEM2-ES global circulation model that participated in the latest Coupled Model Intercomparison Project (CMIP5). Water demands are calculated for representative concentration pathway 6.0 (RCP 6.0) and shared socio-economic pathway scenario 2 (SSP2). To enable fast computation of the optimisation, we developed a hydrologically correct network of 1800 basin segments with an average size of 100 000 square kilometres; the maximum number of nodes in a network was 140, for the Amazon Basin. Water demands and supplies are aggregated to cubic kilometres per month per segment. A new open-source implementation is developed for the stochastic optimisation of the water allocation.
We apply a genetic algorithm for each segment to estimate the set of parameters that distribute the water supply for each node. We use the Python programming language and a flexible software architecture that allows us to straightforwardly (1) exchange the process description for the nodes, so that different water allocation schemes can be tested, (2) exchange the objective function, (3) apply the optimisation either to the whole catchment or to different sub-levels, and (4) use multi-core CPUs concurrently, thereby reducing computation time. We demonstrate the application of the scientific workflow to the model outputs of PCR-GLOBWB and present first results on how water scarcity depends on the choice between the two extremes in water allocation.
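As an illustration of the per-segment optimisation step, the sketch below runs a toy genetic algorithm that evolves the fractions of one segment's supply assigned to its nodes so as to minimise unmet demand. The supply and demand figures and the GA operators are illustrative assumptions, not the paper's implementation.

```python
import random

def ga_allocate(supply, demands, pop=40, gens=120, seed=0):
    """Toy genetic algorithm: evolve fractions of one segment's supply for
    each node, minimising total unmet demand (a stand-in for the paper's
    per-segment scarcity objective)."""
    rng = random.Random(seed)
    n = len(demands)

    def normalise(w):                 # fractions must sum to one
        s = sum(w)
        return [x / s for x in w]

    def scarcity(w):                  # total unmet demand under allocation w
        return sum(max(d - supply * f, 0.0) for d, f in zip(demands, w))

    population = [normalise([rng.random() for _ in range(n)]) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=scarcity)
        parents = population[: pop // 2]           # elitist selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]          # crossover
            i = rng.randrange(n)
            child[i] = max(child[i] + rng.gauss(0, 0.05), 1e-9)  # mutation
            children.append(normalise(child))
        population = parents + children
    return min(population, key=scarcity)

w = ga_allocate(supply=10.0, demands=[2.0, 5.0, 3.0])
print([round(f, 2) for f in w])  # fractions close to the demand shares [0.2, 0.5, 0.3]
```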

  13. Using Optimisation Techniques to Granulise Rough Set Partitions

    NASA Astrophysics Data System (ADS)

    Crossingham, Bodie; Marwala, Tshilidzi

    2007-11-01

    This paper presents an approach to optimising rough set partition sizes using various optimisation techniques. Three optimisation techniques are implemented to perform the granularisation process, namely the genetic algorithm (GA), hill climbing (HC) and simulated annealing (SA). These optimisation methods maximise the classification accuracy of the rough sets. The proposed rough set partition method is tested on a set of demographic properties of individuals obtained from the South African antenatal survey. The three techniques are compared in terms of their computational time, accuracy and number of rules produced when applied to the Human Immunodeficiency Virus (HIV) data set. The optimised methods' results are compared to a well-known non-optimised discretisation method, equal-width-bin partitioning (EWB). The accuracies achieved after optimising the partitions using GA, HC and SA are 66.89%, 65.84% and 65.48% respectively, compared to 59.86% for EWB. In addition to providing the plausibilities of the estimated HIV status, the rough sets also provide linguistic rules describing how the demographic parameters drive the risk of HIV.
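A one-dimensional sketch of the idea, using hill climbing (one of the three techniques compared) to place a single partition boundary that maximises classification accuracy on invented data:

```python
import random

def accuracy(cut, xs, ys):
    """Accuracy of a one-boundary partition rule: predict class 1 if x >= cut."""
    return sum((x >= cut) == y for x, y in zip(xs, ys)) / len(xs)

def hill_climb(xs, ys, cut=0.5, step=0.05, iters=300, seed=0):
    """Hill climbing over a single partition boundary, maximising
    classification accuracy (a one-dimensional stand-in for the paper's
    granulisation of rough set partitions)."""
    rng = random.Random(seed)
    best = accuracy(cut, xs, ys)
    for _ in range(iters):
        cand = cut + rng.uniform(-step, step)     # perturb the boundary
        a = accuracy(cand, xs, ys)
        if a >= best:                             # keep non-worsening moves
            cut, best = cand, a
    return cut, best

rng = random.Random(1)
xs = [rng.random() for _ in range(300)]
ys = [int(x > 0.7) for x in xs]                   # true boundary at 0.7
cut, acc = hill_climb(xs, ys)
print(round(cut, 2), round(acc, 3))               # boundary found near 0.7
```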

  14. Compromise-based Robust Prioritization of Climate Change Adaptation Strategies for Watershed Management

    NASA Astrophysics Data System (ADS)

    Kim, Y.; Chung, E. S.

    2014-12-01

    This study suggests a robust prioritization framework for climate change adaptation strategies under multiple climate change scenarios, with a case study of selecting sites for reusing treated wastewater (TWW) in a Korean urban watershed. The framework utilizes various multi-criteria decision making techniques, including the VIKOR method and Shannon entropy-based weights. In this case study, the sustainability of TWW use is quantified with indicator-based approaches within the DPSIR framework, which considers both hydro-environmental and socio-economic aspects of watershed management. Under the various climate change scenarios, the hydro-environmental responses to reusing TWW in potential alternative sub-watersheds are determined using the Hydrological Simulation Program-FORTRAN (HSPF). The socio-economic indicators are obtained from statistical databases. Sustainability scores for the multiple scenarios are estimated individually and then integrated with the proposed approach. Finally, the suggested framework allows us to prioritize adaptation strategies in a robust manner, with varying levels of compromise between utility-based and regret-based strategies.
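The Shannon entropy-based weighting used in such frameworks can be sketched compactly: criteria whose values differ little across alternatives carry low information and receive small weights. The decision matrix below is invented for illustration.

```python
import math

def entropy_weights(matrix):
    """Shannon entropy-based objective weights for a decision matrix
    (rows = alternatives, columns = criteria, positive benefit-type values)."""
    m, n = len(matrix), len(matrix[0])
    k = 1 / math.log(m)
    divergence = []
    for j in range(n):
        col = [row[j] for row in matrix]
        s = sum(col)
        p = [v / s for v in col]                       # normalised column
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)  # entropy in [0, 1]
        divergence.append(1 - e)                       # degree of divergence
    total = sum(divergence)
    return [d / total for d in divergence]

# Criterion 2 differentiates the alternatives far more than criterion 1,
# so it receives the larger entropy weight
M = [[0.50, 0.9],
     [0.51, 0.1],
     [0.49, 0.5]]
w = entropy_weights(M)
print([round(x, 3) for x in w])
```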

  15. Optimisation of driver actions in RWD race car including tyre thermodynamics

    NASA Astrophysics Data System (ADS)

    Maniowski, Michal

    2016-04-01

    The paper presents an innovative method for lap time minimisation using genetic algorithms for multi-objective optimisation of a race driver-vehicle model. The decision variables consist of 16 parameters responsible for the actions of a professional driver (e.g. time traces for the brake, accelerator and steering wheel) on a race track section with a right-hand (RH) corner. A purpose-built, high-fidelity multibody vehicle model (called 'miMa') is described by 30 generalised coordinates and 440 parameters that are crucial in motorsport. Focus is placed on modelling the tyre tread thermodynamics and its influence on race vehicle dynamics. The numerical example considers a rear-wheel-drive BMW E36 prepared for track day events. In order to improve the section lap time (by 5%) and corner exit velocity (by 4%), a few different driving strategies are found depending on the thermal conditions of the semi-slick tyres. The process of race driver adaptation to initially cold or hot tyres is explained.

  16. Geo-information processing service composition for concurrent tasks: A QoS-aware game theory approach

    NASA Astrophysics Data System (ADS)

    Li, Haifeng; Zhu, Qing; Yang, Xiaoxia; Xu, Linrong

    2012-10-01

    Concurrent tasks, such as those found in disaster rapid response, are typical of remote sensing applications. Existing composition approaches for geographical information processing service chains search for an optimal solution in what can be deemed a "selfish" way, which leads to conflicts amongst concurrent tasks and decreases the performance of all service chains. In this study, a non-cooperative game-based mathematical model is proposed to analyse the competitive relationships between tasks. A best-response function is used to ensure that each task maintains utility optimisation by considering the composition strategies of other tasks and quantifying the conflicts between tasks. Based on this, an iterative algorithm that converges to a Nash equilibrium is presented, the aim being to provide good convergence and maximise the utility of all tasks under concurrent-task conditions. Theoretical analyses and experiments show that the newly proposed method, when compared to existing service composition methods, has better practical utility for all tasks.
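The best-response iteration can be illustrated with a deliberately tiny congestion-style game, in which each task picks one of two services and a service's cost grows with its load. The cost model is an illustrative assumption, not the paper's QoS model.

```python
def best_response_dynamics(costs, n_tasks=4, iters=50):
    """Iterated best response for a toy service-selection game: each task
    picks one of two services; a service's cost grows with its load, so the
    tasks' choices interact. Iteration stops when no task wants to deviate,
    i.e. at a Nash equilibrium."""
    choice = [0] * n_tasks               # all tasks start on service 0
    for _ in range(iters):
        changed = False
        for i in range(n_tasks):
            load = [choice.count(s) for s in (0, 1)]

            def cost(s):                 # cost if task i (re)chooses service s
                l = load[s] + (0 if choice[i] == s else 1)
                return costs[s] * l

            best = min((0, 1), key=cost)
            if best != choice[i]:
                choice[i] = best
                changed = True
        if not changed:                  # Nash equilibrium reached
            break
    return choice

# Service 0 is cheaper per unit load, so it carries more tasks at equilibrium
print(best_response_dynamics(costs=[1.0, 2.0]))  # → [1, 0, 0, 0]
```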

  17. Design and development of vitamin C-encapsulated proliposome with improved in-vitro and ex-vivo antioxidant efficacy.

    PubMed

    Parhizkar, Elahehnaz; Rashedinia, Marzieh; Karimi, Maryam; Alipour, Shohreh

    2018-06-06

    Vitamin C, as an antioxidant additive in pharmaceutical and food products, is susceptible to environmental conditions, and new design strategies are needed to enhance its stability. The aim of this study was to prepare a vitamin C proliposome using film deposition on a carrier, applying different factors, and to optimise the characteristics of the obtained powder using Design-Expert® software. The optimised formulation demonstrated acceptable flowability with 20% vitamin C loading. This formulation released about 90% of the vitamin C within 2 h and showed higher (1.7-fold) in-vitro antioxidant activity. Ex-vivo antioxidant activity was 1.9 and 1.6 times higher in brain and liver cells, respectively. A 27% reduction in the malondialdehyde (MDA) level of liver cells was obtained compared with free vitamin C. These results therefore suggest that the vitamin C-encapsulated proliposome powder may be an appropriate carrier for oral delivery of vitamin C with better antioxidant efficacy.

  18. Social responsibility: a new paradigm of hospital governance?

    PubMed

    Brandão, Cristina; Rego, Guilhermina; Duarte, Ivone; Nunes, Rui

    2013-12-01

    Changes in modern societies have given rise to the perception that ethical behaviour is essential in organisations' practices, especially in the way they deal with aspects such as human rights. These issues usually fall under the umbrella concept of social responsibility. Recently, the Report of the International Bioethics Committee of UNESCO on Social Responsibility and Health addressed this concept in the context of health care delivery, suggesting a new paradigm in hospital governance. The objective of this paper is to address the issue of corporate social responsibility in health care, namely in the hospital setting, emphasising the special governance arrangements of such complex organisations, and to evaluate whether new models of hospital management (entrepreneurism) will need robust mechanisms of corporate governance to fulfil their social responsiveness. The scope of this responsible behaviour requires hospitals to fulfil their social and market objectives, in accordance with the law and general ethical standards. Social responsibility includes aspects such as abstention from harm to the environment and the protection of the interests of all stakeholders involved in the delivery of health care. In conclusion, adequate corporate governance and corporate strategy are the gold standard of social responsibility. In a competitive market, hospital governance will be optimised if the organisational culture is reframed to meet stakeholders' demands for unequivocal assurances on ethical behaviour. Health care organisations should abide by this new governance approach, which is to create organisational value through performance, conformance and responsibility.

  19. Electrophysiological evidence for altered visual, but not auditory, selective attention in adolescent cochlear implant users.

    PubMed

    Harris, Jill; Kamke, Marc R

    2014-11-01

    Selective attention fundamentally alters sensory perception, but little is known about the functioning of attention in individuals who use a cochlear implant. This study aimed to investigate visual and auditory attention in adolescent cochlear implant users. Event related potentials were used to investigate the influence of attention on visual and auditory evoked potentials in six cochlear implant users and age-matched normally-hearing children. Participants were presented with streams of alternating visual and auditory stimuli in an oddball paradigm: each modality contained frequently presented 'standard' and infrequent 'deviant' stimuli. Across different blocks attention was directed to either the visual or auditory modality. For the visual stimuli attention boosted the early N1 potential, but this effect was larger for cochlear implant users. Attention was also associated with a later P3 component for the visual deviant stimulus, but there was no difference between groups in the later attention effects. For the auditory stimuli, attention was associated with a decrease in N1 latency as well as a robust P3 for the deviant tone. Importantly, there was no difference between groups in these auditory attention effects. The results suggest that basic mechanisms of auditory attention are largely normal in children who are proficient cochlear implant users, but that visual attention may be altered. Ultimately, a better understanding of how selective attention influences sensory perception in cochlear implant users will be important for optimising habilitation strategies. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  20. Twenty-five years of breast-feeding research in Midwifery.

    PubMed

    Dykes, Fiona

    2011-02-01

    This paper explores some of the significant changes that have taken place with regard to the protection, promotion and support of breast feeding during the past three decades. The period covered since the first issue of Midwifery in 1985, has been marked by some dramatic reversals of harmful discourses and detrimental practices with regard to infant and young child feeding and more specifically breast feeding. Midwifery has spanned this period with the publication of 80 papers on breast feeding. This collection of papers has both influenced and reflected upon changes in international and national breast-feeding strategies and practices. Six papers have been selected for a special virtual edition of Midwifery to reflect the diversity of breast-feeding research in terms of issues explored, methodology and country of origin (www.midwiferyjournal.com). Considerable progress is reflected in these papers. However, there are still enormous challenges ahead in working towards the optimisation of infant and young child feeding. In addition to continuing to conduct and collate robust scientific and epidemiological research we need further studies that explore the political, economic, socio-cultural and psychological factors influencing women's infant feeding practices. Our professional practice needs to continue to improve in order to provide women and families with appropriate support, encouragement and resources to enable them to breastfeed effectively. Finally, we need to continue to challenge the systems and approaches at organisational and community levels that impede women in their endeavours to feed their infants in optimum ways. Copyright © 2010 Elsevier Ltd. All rights reserved.

  1. Evaluation of a novel semi-automated HPLC procedure for whole blood cyclosporin A confirms equivalence to adjusted monoclonal values from Abbott TDx.

    PubMed

    Roberts, Norman B; Dutton, John; Higgins, Gerald; Allars, Lesley

    2005-01-01

    The problem in the measurement of cyclosporin (CyA) is that the widely used immuno-based assays suffer from interference by metabolites present in unpredictable excess. To resolve this, the consensus view has been to develop more specific and robust procedures for the measurement of CyA alone in order to give values similar to those obtained by HPLC. We developed an alternative strategy based on Abbott poly- and monoclonal assays to derive an adjusted monoclonal value as an equivalent measurement to HPLC. We have now evaluated a recently developed semi-automated HPLC procedure and used it to test the validity of the adjusted monoclonal value. The automated HPLC procedure with online clean-up was optimised for the separation of CyA and internal standard CyD. The assay was simple to use, precise and gave good recovery of cyclosporin from whole blood. Comparisons with the more specific immunoassays Abbott AxSym and EMIT showed close agreement, whereas Abbott monoclonal values indicated up to 20% positive bias. In contrast, the adjusted monoclonal values gave good agreement with HPLC. Data obtained from HPLC linked to tandem mass spectrometry (MS) indicated closer agreement with Abbott monoclonal values than expected, suggesting some positive bias with MS. The benefit of using an adjusted monoclonal value is that a result equivalent to HPLC is obtained, as well as an indication of the concentration of metabolites from the Abbott polyclonal measurement.

  2. Scalar production and decay to top quarks including interference effects at NLO in QCD in an EFT approach

    DOE PAGES

    Franzosi, Diogo Buarque; Vryonidou, Eleni; Zhang, Cen

    2017-10-13

    Scalar and pseudo-scalar resonances decaying to top quarks are common predictions in several scenarios beyond the standard model (SM) and are extensively searched for by LHC experiments. Challenges on the experimental side require optimising the strategy based on accurate predictions. Firstly, QCD corrections are known to be large both for the SM QCD background and for the pure signal scalar production. Secondly, leading order and approximate next-to-leading order (NLO) calculations indicate that the interference between signal and background is large and drastically changes the lineshape of the signal, from a simple peak to a peak-dip structure. Therefore, a robust prediction of this interference at NLO accuracy in QCD is necessary to ensure that higher-order corrections do not alter the lineshapes. We compute the exact NLO corrections, assuming a point-like coupling between the scalar and the gluons and consistently embedding the calculation in an effective field theory within an automated framework, and present results for a representative set of beyond the SM benchmarks. The results can be further matched to parton shower simulation, providing more realistic predictions. We find that NLO corrections are important and lead to a significant reduction of the uncertainties. We also discuss how our computation can be used to improve the predictions for physics scenarios where the gluon-scalar loop is resolved and the effective approach is less applicable.

  3. Miniature high-resolution guided-wave spectrometer for atmospheric remote sensing

    NASA Astrophysics Data System (ADS)

    Sloan, James; Kruzelecky, Roman; Wong, Brian; Zou, Jing; Jamroz, Wes; Haddad, Emile; Poirier, Michel

    This paper describes the design and application of an innovative spectrometer in which a guided-wave integrated optical spectrometer (IOSPEC) has been coupled with a Fabry-Perot (FP) interferometer. This miniature spectrometer has a net mass under 3 kg, but is capable of broadband operation at spectral resolutions below 0.03 nm full width half maximum (FWHM). The tuneable FP filter provides very high spectral resolution combined with a large input aperture. The solid state guided-wave spectrometer is currently configured for a 512-channel array detector, which provides sub-nm coarse resolution. The ultimate resolution is determined by the FP filter, which is tuned across the desired spectral bands, thereby providing a signal-to-noise ratio (SNR) advantage over scanned spectrometer systems of the square root of the number of detector channels. The guided-wave optics provides robust, long-term optical alignment, while minimising the mechanical complexity. The miniaturisation of the FP-IOSPEC spectrometer allows multiple spectrometers to be accommodated on a single MicroSat. Each of these can be optimised for selected measurement tasks and views, thereby enabling more flexible data acquisition strategies with enhanced information content, while minimising the mission cost. The application of this innovative technology in the proposed Miniature Earth Observation Satellite (MEOS) mission will also be discussed. The MEOS mission, which is designed for the investigation of the carbon and water cycles, relies on multiple IOSPEC instruments for the simultaneous measurement of a range of atmospheric and surface properties important to climate change.

  4. Scalar production and decay to top quarks including interference effects at NLO in QCD in an EFT approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franzosi, Diogo Buarque; Vryonidou, Eleni; Zhang, Cen

    Scalar and pseudo-scalar resonances decaying to top quarks are common predictions in several scenarios beyond the standard model (SM) and are extensively searched for by LHC experiments. Challenges on the experimental side require optimising the strategy based on accurate predictions. Firstly, QCD corrections are known to be large both for the SM QCD background and for the pure signal scalar production. Secondly, leading order and approximate next-to-leading order (NLO) calculations indicate that the interference between signal and background is large and drastically changes the lineshape of the signal, from a simple peak to a peak-dip structure. Therefore, a robust prediction of this interference at NLO accuracy in QCD is necessary to ensure that higher-order corrections do not alter the lineshapes. We compute the exact NLO corrections, assuming a point-like coupling between the scalar and the gluons and consistently embedding the calculation in an effective field theory within an automated framework, and present results for a representative set of beyond the SM benchmarks. The results can be further matched to parton shower simulation, providing more realistic predictions. We find that NLO corrections are important and lead to a significant reduction of the uncertainties. We also discuss how our computation can be used to improve the predictions for physics scenarios where the gluon-scalar loop is resolved and the effective approach is less applicable.

  5. Analysis of optimisation method for a two-stroke piston ring using the Finite Element Method and the Simulated Annealing Method

    NASA Astrophysics Data System (ADS)

    Kaliszewski, M.; Mazuro, P.

    2016-09-01

    A Simulated Annealing optimisation method for the sealing piston ring geometry is tested. The aim of the optimisation is to develop a ring geometry which would exert the demanded pressure on the cylinder simply by being bent to fit it. A method for FEM analysis of an arbitrary piston ring geometry is implemented in ANSYS. The demanded pressure function (based on formulae presented by A. Iskra) and the objective function are introduced. A geometry definition constructed from polynomials in a radial coordinate system is presented and discussed. A possible application of the Simulated Annealing Method to the piston ring optimisation task is proposed and visualised. Difficulties leading to a possible lack of convergence of the optimisation are presented, and an example of an unsuccessful optimisation performed in APDL is discussed. A possible line of further improvement of the optimisation is proposed.
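For readers unfamiliar with the method itself, a generic Simulated Annealing loop of the kind applied here can be sketched on a one-dimensional multimodal test function. The objective below is invented; the paper's objective measures deviation from the demanded pressure distribution over the ring geometry parameters.

```python
import math, random

def simulated_annealing(f, x0, step=0.5, t0=2.0, cooling=0.999, iters=5000, seed=0):
    """Plain simulated annealing: always accept improving moves, accept
    worsening moves with probability exp(-delta/T) so the search can escape
    local minima, and cool T geometrically each step."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:                 # track the best point ever visited
                best, fbest = x, fx
        t *= cooling
    return best, fbest

# Multimodal objective: global minimum near x = 2.2, local minima elsewhere
f = lambda x: (x - 2) ** 2 + 1.5 * math.sin(5 * x) + 1.5
x, fx = simulated_annealing(f, x0=-3.0)
print(round(x, 2), round(fx, 2))           # should land near x = 2.2
```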

  6. Development of a method of robust rain gauge network optimization based on intensity-duration-frequency results

    NASA Astrophysics Data System (ADS)

    Chebbi, A.; Bargaoui, Z. K.; da Conceição Cunha, M.

    2012-12-01

    Based on rainfall intensity-duration-frequency (IDF) curves, a robust optimization approach is proposed to identify the best locations to install new rain gauges. The advantage of robust optimization is that the resulting design solutions yield networks which behave acceptably under hydrological variability. Robust optimisation can overcome the problem of selecting representative rainfall events when building the optimization process. This paper reports an original approach based on Montana IDF model parameters. The latter are assumed to be geostatistical variables and their spatial interdependence is taken into account through the adoption of cross-variograms in the kriging process. The problem of optimally locating a fixed number of new monitoring stations based on an existing rain gauge network is addressed. The objective function is based on the mean spatial kriging variance and rainfall variogram structure using a variance-reduction method. Hydrological variability was taken into account by considering and implementing several return periods to define the robust objective function. Variance minimization is performed using a simulated annealing algorithm. In addition, knowledge of the time horizon is needed for the computation of the robust objective function. A short and a long term horizon were studied, and optimal networks are identified for each. The method developed is applied to north Tunisia (area = 21 000 km2). Data inputs for the variogram analysis were IDF curves provided by the hydrological bureau and available for 14 tipping bucket type rain gauges. The recording period was from 1962 to 2001, depending on the station. The study concerns an imaginary network augmentation based on the network configuration in 1973, which is a very significant year in Tunisia because there was an exceptional regional flood event in March 1973. 
This network consisted of 13 stations and did not meet World Meteorological Organization (WMO) recommendations for minimum spatial density. It is therefore proposed to virtually augment it by 25, 50, 100 and 160%, the last figure being the rate that would meet WMO requirements. Results suggest that, for a given augmentation, the robust networks remain stable overall across the two time horizons.
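
    The search loop in record 6 can be sketched as simulated annealing over candidate gauge locations. This is a toy sketch, not the authors' implementation: the exponential variogram, the nearest-gauge proxy for kriging variance, and all numeric values are assumptions (the paper fits cross-variograms of Montana IDF parameters and evaluates the full variance-reduction objective over several return periods).

```python
import math
import random

def gamma(h, sill=1.0, rng=30.0):
    """Exponential variogram (an assumed model, standing in for the fitted one)."""
    return sill * (1.0 - math.exp(-h / rng))

def mean_kriging_variance(gauges, grid):
    """Crude proxy for mean spatial kriging variance: the variogram value at
    each grid point's distance to its nearest gauge."""
    return sum(min(gamma(math.dist(p, g)) for g in gauges) for p in grid) / len(grid)

def anneal(existing, candidates, n_new, grid, steps=2000, t0=0.5, seed=0):
    """Simulated annealing over the locations of the n_new added gauges."""
    rnd = random.Random(seed)
    current = rnd.sample(candidates, n_new)
    f_cur = mean_kriging_variance(existing + current, grid)
    best, f_best = current[:], f_cur
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-9              # linear cooling schedule
        trial = current[:]
        trial[rnd.randrange(n_new)] = rnd.choice(candidates)   # relocate one gauge
        f_trial = mean_kriging_variance(existing + trial, grid)
        if f_trial < f_cur or rnd.random() < math.exp(-(f_trial - f_cur) / t):
            current, f_cur = trial, f_trial
            if f_cur < f_best:
                best, f_best = current[:], f_cur
    return best, f_best
```

    A robust variant would sum this objective over the return periods and time horizons considered in the paper.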

  7. Experimental Test Rig for Optimal Control of Flexible Space Robotic Arms

    DTIC Science & Technology

    2016-12-01

    ...was used to refine the test bed design and the experimental workflow. Three concepts incorporated various strategies to design a robust flexible link...designed to perform the experimentation. The first and second concepts use traditional elastic springs in varying configurations while a third uses a...

  8. Nonlinear robust controller design for multi-robot systems with unknown payloads

    NASA Technical Reports Server (NTRS)

    Song, Y. D.; Anderson, J. N.; Homaifar, A.; Lai, H. Y.

    1992-01-01

    This work is concerned with the control problem of a multi-robot system handling a payload with unknown mass properties. Force constraints at the grasp points are considered. Robust control schemes are proposed that cope with the model uncertainty and achieve asymptotic path tracking. To deal with the force constraints, a strategy for optimally sharing the task is suggested. This strategy basically consists of two steps. The first detects the robots that need help and the second arranges that help. It is shown that the overall system is not only robust to uncertain payload parameters, but also satisfies the force constraints.
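
    The two-step task-sharing strategy of record 8 (first detect the robots that need help, then arrange that help) can be illustrated with a scalar-force sketch. This is a hypothetical simplification: real grasp forces are vectors with friction-cone constraints, and the proportional redistribution rule is an assumption, not the paper's scheme.

```python
def share_load(total_force, limits):
    """Two-step sharing: equal split, then shift the excess of overloaded
    robots onto those with spare capacity (assumes sum(limits) >= total_force)."""
    n = len(limits)
    loads = [total_force / n] * n
    # Step 1: detect robots that need help (load above their force limit).
    over = [i for i in range(n) if loads[i] > limits[i]]
    excess = sum(loads[i] - limits[i] for i in over)
    for i in over:
        loads[i] = limits[i]
    # Step 2: arrange help, distributing the excess in proportion to spare capacity.
    slack = [limits[i] - loads[i] for i in range(n)]
    total_slack = sum(slack)
    if excess > 0:
        for i in range(n):
            loads[i] += excess * slack[i] / total_slack
    return loads
```

    Because the excess never exceeds the total slack when the payload is feasible, no helper is pushed past its own limit.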

  9. Generalizing Automated Detection of the Robustness of Student Learning in an Intelligent Tutor for Genetics

    ERIC Educational Resources Information Center

    Baker, Ryan S. J. d.; Corbett, Albert T.; Gowda, Sujith M.

    2013-01-01

    Recently, there has been growing emphasis on supporting robust learning within intelligent tutoring systems, assessed by measures such as transfer to related skills, preparation for future learning, and longer term retention. It has been shown that different pedagogical strategies promote robust learning to different degrees. However, the student…

  10. Robust Airfoil Optimization in High Resolution Design Space

    NASA Technical Reports Server (NTRS)

    Li, Wu; Padula, Sharon L.

    2003-01-01

    The robust airfoil shape optimization is a direct method for drag reduction over a given range of operating conditions and has three advantages: (1) it prevents severe degradation in the off-design performance by using a smart descent direction in each optimization iteration, (2) it uses a large number of B-spline control points as design variables yet the resulting airfoil shape is fairly smooth, and (3) it allows the user to make a trade-off between the level of optimization and the amount of computing time consumed. The robust optimization method is demonstrated by solving a lift-constrained drag minimization problem for a two-dimensional airfoil in viscous flow with a large number of geometric design variables. Our experience with robust optimization indicates that our strategy produces reasonable airfoil shapes that are similar to the original airfoils, but these new shapes provide drag reduction over the specified range of Mach numbers. We have tested this strategy on a number of advanced airfoil models produced by knowledgeable aerodynamic design team members and found that our strategy produces airfoils better or equal to any designs produced by traditional design methods.
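
    The multi-point idea behind record 10 (optimise a weighted objective over the whole Mach range rather than a single design point) can be sketched with a toy one-variable stand-in for the CFD evaluation. The function `toy_drag` and all numbers are hypothetical; the paper's method uses B-spline shape variables, a lift constraint, and a smart descent direction, none of which are reproduced here.

```python
def multipoint_drag(drag_fn, shape, machs, weights=None):
    """Multi-point objective: weighted average drag over the operating range,
    guarding against a single-point optimum that degrades off-design."""
    if weights is None:
        weights = [1.0 / len(machs)] * len(machs)
    return sum(w * drag_fn(shape, m) for w, m in zip(weights, machs))

def toy_drag(shape, mach):
    # Hypothetical stand-in for a viscous CFD evaluation: a bowl whose
    # minimiser drifts with Mach number, so single-point optima conflict.
    return (shape - 0.1 * mach) ** 2 + 0.01 * mach

def descend(x0, machs, lr=0.2, iters=200, eps=1e-4):
    """Crude finite-difference descent on the robust objective (illustration only)."""
    x = x0
    for _ in range(iters):
        g = (multipoint_drag(toy_drag, x + eps, machs)
             - multipoint_drag(toy_drag, x - eps, machs)) / (2 * eps)
        x -= lr * g
    return x
```

    For this toy model the robust optimum sits at the weighted mean of the single-point optima, which is exactly the trade-off the abstract describes.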

  11. [Improving pre- and perioperative hospital care : Major elective surgery].

    PubMed

    Punt, Ilona M; van der Most, Roel; Bongers, Bart C; Didden, Anouk; Hulzebos, Erik H J; Dronkers, Jaap J; van Meeteren, Nico L U

    2017-04-01

    Surgery is aimed at improving a patient's health. However, surgery carries a risk of negative consequences, such as perioperative complications and prolonged hospitalisation, and the return to preoperative levels of physical functioning may be delayed. Above all, the waiting period before the operation and the period of hospitalisation endanger the state of health, especially in frail patients. The Better in Better out™ (BiBo™) strategy is aimed at reducing the risk of a complicated postoperative course through the optimisation and professionalisation of perioperative treatment strategies within an activating physiotherapy context. BiBo™ includes four steps towards optimising personalised health care in patients scheduled for elective surgery: 1) preoperative risk assessment, 2) preoperative patient education, 3) preoperative exercise therapy for high-risk patients (prehabilitation) and 4) postoperative mobilisation and functional exercise therapy. Preoperative screening is aimed at identifying frail, high-risk patients at an early stage, and advising these high-risk patients to participate in outpatient exercise training (prehabilitation) as soon as possible. By improving preoperative physical fitness, a patient is better able to withstand the impact of major surgery, which leads to both a reduced risk of negative side effects and better short-term outcomes. Besides prehabilitation, treatment culture and infrastructure should change so that patients stay as active as they can, socially, mentally and physically, after discharge.

  12. Optimising the combination dosing strategy of abemaciclib and vemurafenib in BRAF-mutated melanoma xenograft tumours

    PubMed Central

    Tate, Sonya C; Burke, Teresa F; Hartman, Daisy; Kulanthaivel, Palaniappan; Beckmann, Richard P; Cronier, Damien M

    2016-01-01

    Background: Resistance to BRAF inhibition is a major cause of treatment failure for BRAF-mutated metastatic melanoma patients. Abemaciclib, a cyclin-dependent kinase 4 and 6 inhibitor, overcomes this resistance in xenograft tumours and offers a promising drug combination. The present work aims to characterise the quantitative pharmacology of the abemaciclib/vemurafenib combination using a semimechanistic pharmacokinetic/pharmacodynamic modelling approach and to identify an optimum dosing regimen for potential clinical evaluation. Methods: A PK/biomarker model was developed to connect abemaciclib/vemurafenib concentrations to changes in MAPK and cell cycle pathway biomarkers in A375 BRAF-mutated melanoma xenografts. Resultant tumour growth inhibition was described by relating (i) MAPK pathway inhibition to apoptosis, (ii) mitotic cell density to tumour growth and, under resistant conditions, (iii) retinoblastoma protein inhibition to cell survival. Results: The model successfully described vemurafenib/abemaciclib-mediated changes in MAPK pathway and cell cycle biomarkers. Initial tumour shrinkage by vemurafenib, acquisition of resistance and subsequent abemaciclib-mediated efficacy were successfully captured and externally validated. Model simulations illustrate the benefit of intermittent vemurafenib therapy over continuous treatment, and indicate that continuous abemaciclib in combination with intermittent vemurafenib offers the potential for considerable tumour regression. Conclusions: The quantitative pharmacology of the abemaciclib/vemurafenib combination was successfully characterised and an optimised, clinically-relevant dosing strategy was identified. PMID:26978007

  13. Optimising the design and operation of semi-continuous affinity chromatography for clinical and commercial manufacture.

    PubMed

    Pollock, James; Bolton, Glen; Coffman, Jon; Ho, Sa V; Bracewell, Daniel G; Farid, Suzanne S

    2013-04-05

    This paper presents an integrated experimental and modelling approach to evaluate the potential of semi-continuous chromatography for the capture of monoclonal antibodies (mAb) in clinical and commercial manufacture. Small-scale single-column experimental breakthrough studies were used to derive design equations for the semi-continuous affinity chromatography system. Verification runs with the semi-continuous 3-column and 4-column periodic counter current (PCC) chromatography system indicated the robustness of the design approach. The product quality profiles and step yields (after wash step optimisation) achieved were comparable to the standard batch process. The experimentally-derived design equations were incorporated into a decisional tool comprising dynamic simulation, process economics and sizing optimisation. The decisional tool was used to evaluate the economic and operational feasibility of whole mAb bioprocesses employing PCC affinity capture chromatography versus standard batch chromatography across a product's lifecycle from clinical to commercial manufacture. The tool predicted that PCC capture chromatography would offer more significant savings in direct costs for early-stage clinical manufacture (proof-of-concept) (∼30%) than for late-stage clinical (∼10-15%) or commercial (∼5%) manufacture. The evaluation also highlighted the potential facility fit issues that could arise with a capture resin (MabSelect) that experiences losses in binding capacity when operated in continuous mode over lengthy commercial campaigns. Consequently, the analysis explored the scenario of adopting the PCC system for clinical manufacture and switching to the standard batch process following product launch. The tool determined the PCC system design required to operate at commercial scale without facility fit issues and with similar costs to the standard batch process whilst pursuing a process change application. 
A retrofitting analysis established that the direct cost savings obtained by 8 proof-of-concept batches would be sufficient to pay back the investment cost of the pilot-scale semi-continuous chromatography system. Copyright © 2013 Elsevier B.V. All rights reserved.
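
    The retrofitting payback logic in record 13 is simple undiscounted arithmetic: cumulative per-batch direct-cost savings versus the up-front equipment investment. The figures below are illustrative assumptions chosen to echo the reported ~8-batch payback; the paper's actual costs live inside its decisional tool.

```python
def batches_to_payback(investment, batch_cost, saving_fraction):
    """Number of batches needed before cumulative direct-cost savings cover
    the up-front investment (simple, undiscounted payback; assumes the
    per-batch saving is positive)."""
    saving_per_batch = batch_cost * saving_fraction
    batches = 0
    cumulative = 0.0
    while cumulative < investment:
        batches += 1
        cumulative += saving_per_batch
    return batches
```

    With a hypothetical $0.8M pilot-scale system, a $0.4M batch-process cost and a 25% direct-cost saving, payback lands at 8 proof-of-concept batches.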

  14. A meta-model based approach for rapid formability estimation of continuous fibre reinforced components

    NASA Astrophysics Data System (ADS)

    Zimmerling, Clemens; Dörr, Dominik; Henning, Frank; Kärger, Luise

    2018-05-01

    Due to their high mechanical performance, continuous fibre reinforced plastics (CoFRP) are becoming increasingly important for load-bearing structures. In many cases, manufacturing CoFRPs comprises a forming process of textiles. To predict and optimise the forming behaviour of a component, numerical simulations are applied. However, for maximum part quality, the geometry and the process parameters must match in mutual regard, which in turn requires numerous numerically expensive optimisation iterations. In both textile and metal forming, much research has focused on determining optimum process parameters whilst regarding the geometry as invariable. In this work, a meta-model based approach on component level is proposed that provides a rapid estimation of the formability for variable geometries based on pre-sampled, physics-based draping data. Initially, a geometry recognition algorithm scans the geometry and extracts a set of doubly-curved regions with relevant geometry parameters. If the relevant parameter space is not part of the underlying database, additional samples are drawn via finite-element draping simulations according to a suitable design table for computer experiments. Time-saving parallel runs of the physical simulations accelerate the data acquisition. Ultimately, a Gaussian regression meta-model is built from the database. The method is demonstrated on a box-shaped generic structure. The predicted results are in good agreement with physics-based draping simulations. Since evaluations of the established meta-model are numerically inexpensive, any further design exploration (e.g. robustness analysis or design optimisation) can be performed in short time. It is expected that the proposed method also offers great potential for future applications along virtual process chains: for each process step along the chain, a meta-model can be set up to predict the impact of design variations on manufacturability and part performance.
The method is thus expected to facilitate lean and economical part and process design that takes manufacturing effects into account.
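
    The meta-model step in record 14 can be sketched as a minimal Gaussian-process regressor trained on pre-sampled (geometry parameters, formability measure) pairs. This is an assumed, hand-rolled stand-in: the kernel choice, fixed hyperparameters, and the synthetic data in the usage example are not from the paper.

```python
import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential kernel between rows of a (n, d) and b (m, d)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length ** 2)

class GPMetaModel:
    """Minimal Gaussian-process regressor standing in for the paper's
    meta-model; hyperparameters are fixed rather than fitted."""
    def __init__(self, length=1.0, noise=1e-6):
        self.length, self.noise = length, noise

    def fit(self, X, y):
        self.X = np.asarray(X, dtype=float)
        self.y = np.asarray(y, dtype=float)
        K = rbf(self.X, self.X, self.length) + self.noise * np.eye(len(self.X))
        self.alpha = np.linalg.solve(K, self.y)   # cache weights for fast queries
        return self

    def predict(self, Xq):
        """Posterior mean at query geometries; cheap enough for design sweeps."""
        Xq = np.asarray(Xq, dtype=float)
        return rbf(Xq, self.X, self.length) @ self.alpha
```

    Because each prediction is a single kernel-vector product, robustness analyses and design loops can query the surrogate thousands of times at negligible cost, which is the point the abstract makes.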

  15. An adaptive discontinuous Galerkin solver for aerodynamic flows

    NASA Astrophysics Data System (ADS)

    Burgess, Nicholas K.

    This work considers the accuracy, efficiency, and robustness of an unstructured high-order accurate discontinuous Galerkin (DG) solver for computational fluid dynamics (CFD). Recently, there has been a drive to reduce the discretization error of CFD simulations using high-order methods on unstructured grids. However, high-order methods are often criticized for lacking robustness and having high computational cost. The goal of this work is to investigate methods that enhance the robustness of high-order discontinuous Galerkin (DG) methods on unstructured meshes, while maintaining low computational cost and high accuracy of the numerical solutions. This work investigates robustness enhancement of high-order methods by examining effective non-linear solvers, shock capturing methods, turbulence model discretizations and adaptive refinement techniques. The goal is to develop an all-encompassing solver that can simulate a large range of physical phenomena, where all aspects of the solver work together to achieve a robust, efficient and accurate solution strategy. The components and framework for a robust high-order accurate solver that is capable of solving viscous, Reynolds Averaged Navier-Stokes (RANS) and shocked flows are presented. In particular, this work discusses robust discretizations of the turbulence model equation used to close the RANS equations, as well as stable shock capturing strategies that are applicable across a wide range of discretization orders and applicable to very strong shock waves. Furthermore, refinement techniques are considered as both efficiency and robustness enhancement strategies. Additionally, efficient non-linear solvers based on multigrid and Krylov subspace methods are presented. The accuracy, efficiency, and robustness of the solver are demonstrated using a variety of challenging aerodynamic test problems, which include turbulent high-lift and viscous hypersonic flows.
Adaptive mesh refinement was found to play a critical role in obtaining a robust and efficient high-order accurate flow solver. A goal-oriented error estimation technique has been developed to estimate the discretization error of simulation outputs. For high-order discretizations, it is shown that functional output error super-convergence can be obtained, provided the discretization satisfies a property known as dual consistency. The dual consistency of the DG methods developed in this work is shown via mathematical analysis and numerical experimentation. Goal-oriented error estimation is also used to drive an hp-adaptive mesh refinement strategy, where a combination of mesh or h-refinement and order or p-enrichment is employed based on the smoothness of the solution. The results demonstrate that the combination of goal-oriented error estimation and hp-adaptation yields superior accuracy, as well as enhanced robustness and efficiency, for a variety of aerodynamic flows including flows with strong shock waves. This work demonstrates that DG discretizations can be the basis of an accurate, efficient, and robust CFD solver. Furthermore, enhancing the robustness of DG methods does not adversely impact the accuracy or efficiency of the solver for challenging and complex flow problems. In particular, when considering the computation of shocked flows, this work demonstrates that the available shock capturing techniques are sufficiently accurate and robust, particularly when used in conjunction with adaptive mesh refinement. This work also demonstrates that robust solutions of the Reynolds Averaged Navier-Stokes (RANS) and turbulence model equations can be obtained for complex and challenging aerodynamic flows. In this context, the most robust strategy was determined to be a low-order turbulence model discretization coupled to a high-order discretization of the RANS equations.
Although RANS solutions using high-order accurate discretizations of the turbulence model were obtained, the behavior of current-day RANS turbulence models discretized to high order was found to be problematic, leading to solver robustness issues. This suggests that future work is warranted in the area of turbulence model formulation for use with high-order discretizations. Alternatively, the use of Large-Eddy Simulation (LES) subgrid scale models with high-order DG methods offers the potential to leverage the high accuracy of these methods for very high fidelity turbulent simulations. This thesis has developed the algorithmic improvements that lay the foundation for a three-dimensional high-order flow solution strategy that can serve as the basis for future LES simulations.
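
    The hp-marking decision described in record 15 (refine cells flagged by the goal-oriented error estimate; p-enrich where the solution is smooth, h-refine where it is not, e.g. near shocks) reduces to a small per-cell rule. The thresholds and the scalar smoothness indicator below are assumptions for illustration.

```python
def hp_adapt(cells, err_tol, smooth_tol=0.5):
    """Toy goal-oriented hp-marking. Each cell is (id, adjoint-weighted error
    estimate, smoothness indicator in [0, 1]). Cells over the error tolerance
    are refined: smooth solutions get p-enrichment, non-smooth get h-refinement."""
    plan = []
    for cell_id, err_est, smoothness in cells:
        if err_est <= err_tol:
            plan.append((cell_id, "keep"))
        elif smoothness >= smooth_tol:
            plan.append((cell_id, "p-enrich"))
        else:
            plan.append((cell_id, "h-refine"))
    return plan
```

    In a real solver the error estimates come from the dual (adjoint) solution and the smoothness indicator from, for example, modal decay of the DG polynomial coefficients.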

  16. Modelling of photodetectors based on single-photon avalanche diode arrays for positron emission tomography

    NASA Astrophysics Data System (ADS)

    Corbeil Therrien, Audrey

    Positron emission tomography (PET) is a valuable tool in preclinical research and medical diagnosis. This technique produces a quantitative image of specific metabolic functions through the detection of annihilation photons. Detecting these photons relies on two components. First, a scintillator converts the energy of the 511 keV photon into photons in the visible spectrum. Then, a photodetector converts the light energy into an electrical signal. Recently, single-photon avalanche photodiodes (SPADs) arranged in arrays have attracted considerable interest for PET. These arrays form sensitive, robust, compact detectors with outstanding timing resolution. These qualities make them a promising photodetector for PET, but the parameters of the array and its readout electronics must be optimised to reach optimal performance for PET. Optimising the array quickly becomes difficult, because the various parameters interact in complex ways with the avalanche and noise-generation processes. Moreover, readout electronics for SPAD arrays remain rudimentary, and it would be worthwhile to analyse different readout strategies. The most economical way to answer this question is to use a simulator to converge on the configuration giving the best performance. This thesis presents the development of such a simulator. It models the behaviour of a SPAD array based on semiconductor physics equations and probabilistic models. It includes the three main noise sources: thermal noise, correlated afterpulsing and optical crosstalk. The simulator also makes it possible to test and compare new readout approaches better suited to this type of detector. 
    Ultimately, the simulator aims to quantify the impact of the photodetector parameters on energy resolution and timing resolution, and thereby optimise the performance of the SPAD array. For example, increasing the active-area ratio improves performance, but only up to a point: other phenomena tied to the active area, such as thermal noise, degrade the result. The simulator makes it possible to find a compromise between these two extremes. Simulations with the initial parameters show a detection efficiency of 16.7%, an energy resolution of 14.2% FWHM and a timing resolution of 0.478 ns FWHM. Finally, although the proposed simulator targets a PET application, it can be adapted to other applications by changing the photon source and adjusting the performance targets. Keywords: photodetectors, single-photon avalanche photodiodes, semiconductors, positron emission tomography, simulation, modelling, single-photon detection, scintillators, quenching circuit, SPAD, SiPM, Geiger-mode avalanche photodiodes
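
    The trade-off that the thesis's simulator explores, a larger active-area (fill) ratio boosts detected signal but also dark-count noise, can be caricatured with a tiny Monte Carlo. Every parameter and the Poisson dark-count model below are assumptions for illustration; the real simulator is physics-based and far more detailed.

```python
import math
import random

def poisson(lam, rng):
    """Knuth's Poisson sampler (adequate for the small means used here)."""
    if lam <= 0.0:
        return 0
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= L:
            return k - 1

def event_counts(n_photons, fill, p_avalanche, dark_mean_per_unit_fill, n_events, seed=0):
    """Fired-cell count per scintillation event: true detections scale with the
    active-area (fill) ratio, but so does the dark-count contribution."""
    rng = random.Random(seed)
    counts = []
    for _ in range(n_events):
        sig = sum(1 for _ in range(n_photons) if rng.random() < fill * p_avalanche)
        counts.append(sig + poisson(dark_mean_per_unit_fill * fill, rng))
    return counts
```

    Sweeping `fill` and comparing the relative spread of the counts gives a crude analogue of the energy-resolution compromise the abstract describes.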

  17. Isolation of 236U and 239,240Pu from seawater samples and its determination by Accelerator Mass Spectrometry.

    PubMed

    López-Lora, Mercedes; Chamizo, Elena; Villa-Alfageme, María; Hurtado-Bermúdez, Santiago; Casacuberta, Núria; García-León, Manuel

    2018-02-01

    In this work we present and evaluate a radiochemical procedure optimised for the analysis of 236U and 239,240Pu in seawater samples by Accelerator Mass Spectrometry (AMS). The method is based on Fe(OH)3 co-precipitation of actinides and uses TEVA® and UTEVA® extraction chromatography resins in a simplified way for the final U and Pu purification. In order to improve the performance of the method, the radiochemical yields are analysed in 1 to 10 L seawater volumes using alpha spectrometry (AS) and Inductively Coupled Plasma Mass Spectrometry (ICP-MS). Robust 80% plutonium recoveries are obtained; however, it is found that the Fe(III) concentration in the precipitation solution and the sample volume are the two critical and correlated parameters influencing the initial uranium extraction through Fe(OH)3 co-precipitation. Therefore, we propose an expression that optimises the sample volume and Fe(III) amounts according to both the 236U and 239,240Pu concentrations in the samples and the performance parameters of the AMS facility. The method is validated for the current setup of the 1MV AMS system (CNA, Sevilla, Spain), where He gas is used as a stripper, by analysing a set of intercomparison seawater samples, together with the Laboratory of Ion Beam Physics (ETH, Zürich, Switzerland). Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Towards unbiased benchmarking of evolutionary and hybrid algorithms for real-valued optimisation

    NASA Astrophysics Data System (ADS)

    MacNish, Cara

    2007-12-01

    Randomised population-based algorithms, such as evolutionary, genetic and swarm-based algorithms, and their hybrids with traditional search techniques, have proven successful and robust on many difficult real-valued optimisation problems. This success, along with the readily applicable nature of these techniques, has led to an explosion in the number of algorithms and variants proposed. In order for the field to advance it is necessary to carry out effective comparative evaluations of these algorithms, and thereby better identify and understand those properties that lead to better performance. This paper discusses the difficulties of providing benchmarking of evolutionary and allied algorithms that is both meaningful and logistically viable. To be meaningful the benchmarking test must give a fair comparison that is free, as far as possible, from biases that favour one style of algorithm over another. To be logistically viable it must overcome the need for pairwise comparison between all the proposed algorithms. To address the first problem, we begin by attempting to identify the biases that are inherent in commonly used benchmarking functions. We then describe a suite of test problems, generated recursively as self-similar or fractal landscapes, designed to overcome these biases. For the second, we describe a server that uses web services to allow researchers to 'plug in' their algorithms, running on their local machines, to a central benchmarking repository.
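
    The recursively generated, self-similar benchmark landscapes of record 18 can be sketched in one dimension: superimpose piecewise-constant bump fields at successively finer scales with geometrically shrinking amplitudes, each keyed to a seeded RNG so any point re-evaluates identically. This is a hypothetical construction in the spirit of the paper, not MacNish's actual generator.

```python
import random

def fractal_landscape(x, depth=8, roughness=0.5, seed=42):
    """Deterministic self-similar 1-D test function on [0, 1): structure
    recurs at every zoom level, so no single step size is favoured."""
    value, amp = 0.0, 1.0
    for level in range(depth):
        cells = 2 ** level
        i = min(int(x * cells), cells - 1)
        # Per-cell RNG keyed by (seed, level, cell) so x re-evaluates identically.
        rng = random.Random(seed * 1_000_003 + level * 8191 + i)
        value += amp * rng.uniform(-1.0, 1.0)
        amp *= roughness
    return value
```

    Because the contribution at each scale is bounded by its amplitude, the whole function is bounded by the geometric series of amplitudes, and changing the seed yields a fresh landscape with the same statistical character, which is what makes such suites useful for unbiased benchmarking.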

  19. Use of anti-TNF drug levels to optimise patient management

    PubMed Central

    Papamichael, Konstantinos; Cheifetz, Adam S

    2016-01-01

    Anti-tumour necrosis factor (TNF) therapies, such as infliximab, adalimumab, certolizumab pegol and golimumab, have been proven to be effective for the treatment of patients with Crohn's disease and ulcerative colitis. However, 10%–30% of patients with inflammatory bowel disease (IBD) show no initial clinical benefit to anti-TNF therapy (primary non-response), and over 50% after an initial favourable outcome will lose response over time (secondary loss of response (SLR)). Numerous recent studies in IBD have revealed an exposure–response relationship suggesting a positive correlation between high serum anti-TNF concentrations and favourable therapeutic outcomes including clinical, biomarker and endoscopic remission, whereas antidrug antibodies have been associated with SLR and infusion reactions. Currently, therapeutic drug monitoring (TDM) is typically performed when treatment failure occurs either for SLR, drug intolerance (potential immune-mediated reaction) or infusion reaction (reactive TDM). Nevertheless, recent data demonstrate that proactive TDM and a treat-to-target (trough) therapeutic approach may more effectively optimise anti-TNF therapy efficacy, safety and cost. However, implementing TDM in real-life clinical practice is currently limited by the diversity in study design, therapeutic outcomes and assays used, which have hindered the identification of robust clinically relevant concentration thresholds. This review will focus mainly on the pharmacodynamic properties of anti-TNF therapy and the role of TDM in guiding therapeutic decisions in IBD. PMID:28839870

  20. Optimising the use of linked administrative data for infectious diseases research in Australia.

    PubMed

    Moore, Hannah C; Blyth, Christopher C

    2018-06-14

    Infectious diseases remain a major cause of morbidity in Australia. A wealth of data exists in administrative datasets, which are linked through established data-linkage infrastructure in most Australian states and territories. These linkages can support robust studies to investigate the burden of disease, the relative contribution of various aetiological agents to disease, and the effectiveness of population-based prevention policies - research that is critical to the success of current and future vaccination programs. At a recent symposium in Perth, epidemiologists, clinicians and policy makers in the infectious diseases field discussed the various benefits of, and barriers to, data-linkage research, with a focus on respiratory infection research. A number of issues and recommendations emerged. The demand for data-linkage projects is starting to outweigh the capabilities of existing data-linkage infrastructure. There is a need to further streamline processes relating to data access, increase data sharing and conduct nationally collaborative projects. Concerns about data security and sharing across jurisdictional borders can be addressed through multiple safe data solutions. Researchers need to do more to ensure that the benefits of linking datasets to answer policy-relevant questions are being realised for the benefit of community groups, government authorities, funding bodies and policy makers. Increased collaboration and engagement across all sectors can optimise the use of linked data to help reduce the burden of infectious diseases.

  1. Multi-response optimisation of ultrasound-assisted extraction for recovery of flavonoids from red grape skins using response surface methodology.

    PubMed

    Tomaz, Ivana; Maslov, Luna; Stupić, Domagoj; Preiner, Darko; Ašperger, Danijela; Karoglan Kontić, Jasminka

    2016-01-01

    For the characterisation of grape cultivars, the profile and content of flavonoids are important because these compounds impact grape and wine quality. To determine the correct profile and content of flavonoids, the use of robust, sensitive and reliable methods is necessary. The objective of this research is to develop a new ultrasound-assisted extraction (UAE) method for the recovery of flavonoids from grape skins using response surface methodology. Optimisation of UAE was performed using a complementary study combining a Box-Behnken experimental design with qualitative analysis by high-performance liquid chromatography. Optimal extraction conditions were obtained using an extraction solvent composed of acetonitrile:water:formic acid (26:73:1, v/v/v) at an extraction temperature of 50 °C, an extraction time of 15 min in a single-extraction step and with a solid-to-solvent ratio of 1:80 g/mL. The calculated relative standard deviations for the optimal extraction method were very low, measuring less than 5%. This study demonstrates that numerous factors have strong effects on the extraction efficiency, including the type of organic modifier and its percentage in the extraction solvent, the number of extraction steps, the solid-to-solvent ratio, the extraction time and temperature and, finally, the particular nature of the analytes and their position within the grape skin cells. Copyright © 2015 John Wiley & Sons, Ltd.
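
    The Box-Behnken design used in record 1 can be generated mechanically in coded units: for each pair of factors, all four ±1 combinations with the remaining factors held at 0, plus centre runs. The mapping of coded levels onto the paper's actual factors (solvent composition, temperature, time, solid-to-solvent ratio) is done separately and is not shown here.

```python
from itertools import combinations

def box_behnken(k, center_points=3):
    """Coded-level (-1, 0, +1) Box-Behnken design for k factors."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0] * k
                row[i], row[j] = a, b   # edge-midpoint run for this factor pair
                runs.append(row)
    runs += [[0] * k for _ in range(center_points)]   # replicated centre runs
    return runs
```

    For three factors this gives the classic 12 edge runs plus centre replicates, enough to fit the quadratic response-surface model.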

  2. Multidisciplinary design optimisation of a recurve bow based on applications of the autogenetic design theory and distributed computing

    NASA Astrophysics Data System (ADS)

    Fritzsche, Matthias; Kittel, Konstantin; Blankenburg, Alexander; Vajna, Sándor

    2012-08-01

    The focus of this paper is to present a method of multidisciplinary design optimisation based on the autogenetic design theory (ADT), whose methods are partially implemented in the optimisation software described here. The main thesis of the ADT is that biological evolution and the process of developing products are essentially similar, i.e. procedures from biological evolution can be transferred into product development. In order to fulfil requirements and boundary conditions of any kind (which may change at any time), both biological evolution and product development look for appropriate solution possibilities in a certain area and try to optimise those that are actually promising by varying parameters and combinations of these solutions. As the time needed for multidisciplinary design optimisation is a critical aspect of product development, distributing the optimisation process to make effective use of idle computing capacity can reduce optimisation time drastically. Finally, a practical example shows how ADT methods and distributed optimisation are applied to improve a product.
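
    The distributed-evaluation idea in record 2 can be sketched with a tiny evolutionary loop whose fitness evaluations are farmed out to a worker pool. Everything here is a hypothetical stand-in for the paper's software: a one-dimensional search, truncation selection and Gaussian mutation, with a thread pool playing the role of the distributed machines.

```python
from concurrent.futures import ThreadPoolExecutor
import random

def evolve(fitness, bounds, pop_size=20, gens=30, workers=4, seed=0):
    """Tiny evolutionary loop: the expensive fitness calls are distributed
    across a worker pool; selection and mutation stay in the main process."""
    rnd = random.Random(seed)
    lo, hi = bounds
    pop = [rnd.uniform(lo, hi) for _ in range(pop_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(gens):
            scores = list(pool.map(fitness, pop))          # distributed evaluation
            ranked = [x for _, x in sorted(zip(scores, pop))]
            parents = ranked[: pop_size // 2]              # truncation selection
            children = [min(hi, max(lo, p + rnd.gauss(0.0, 0.1))) for p in parents]
            pop = parents + children
    scores = [fitness(x) for x in pop]
    return min(zip(scores, pop))
```

    Because population members are evaluated independently, the speed-up from adding workers is close to linear whenever the fitness function (here a placeholder, in the paper a full simulation) dominates the runtime.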

  3. Modelling the endothelial blood-CNS barriers: a method for the production of robust in vitro models of the rat blood-brain barrier and blood-spinal cord barrier

    PubMed Central

    2013-01-01

    Background Modelling the blood-CNS barriers of the brain and spinal cord in vitro continues to provide a considerable challenge for research studying the passage of large and small molecules in and out of the central nervous system, both within the context of basic biology and for pharmaceutical drug discovery. Although there has been considerable success over the previous two decades in establishing useful in vitro primary endothelial cell cultures from the blood-CNS barriers, no model fully mimics the high electrical resistance, low paracellular permeability and selective influx/efflux characteristics of the in vivo situation. Furthermore, such primary-derived cultures are typically labour-intensive and generate low yields of cells, limiting scope for experimental work. We thus aimed to establish protocols for the high yield isolation and culture of endothelial cells from both rat brain and spinal cord. Our aim was to optimise in vitro conditions for inducing phenotypic characteristics in these cells that were reminiscent of the in vivo situation, such that they developed into tight endothelial barriers suitable for performing investigative biology and permeability studies. Methods Brain and spinal cord tissue was taken from the same rats and used to specifically isolate endothelial cells to reconstitute as in vitro blood-CNS barrier models. Isolated endothelial cells were cultured to expand the cellular yield and then passaged onto cell culture inserts for further investigation. Cell culture conditions were optimised using commercially available reagents and the resulting barrier-forming endothelial monolayers were characterised by functional permeability experiments and in vitro phenotyping by immunocytochemistry and western blotting. 
Results: Using a combination of modified handling techniques and cell culture conditions, we have established and optimised a protocol for the in vitro culture of brain and, for the first time in rat, spinal cord endothelial cells. High yields of both CNS endothelial cell types can be obtained, and these can be passaged onto large numbers of cell culture inserts for in vitro permeability studies. The passaged brain and spinal cord endothelial cells are pure and express endothelial markers, tight junction proteins and intracellular transport machinery. Further, both models exhibit tight, functional barrier characteristics that are discriminating against large and small molecules in permeability assays and show functional expression of the pharmaceutically important P-gp efflux transporter. Conclusions: Our techniques allow the provision of high yields of robust sister cultures of endothelial cells that accurately model the blood-CNS barriers in vitro. These models are ideally suited for use in studying the biology of the blood-brain barrier and blood-spinal cord barrier in vitro and for pre-clinical drug discovery. PMID:23773766

  4. Modelling the endothelial blood-CNS barriers: a method for the production of robust in vitro models of the rat blood-brain barrier and blood-spinal cord barrier.

    PubMed

    Watson, P Marc D; Paterson, Judy C; Thom, George; Ginman, Ulrika; Lundquist, Stefan; Webster, Carl I

    2013-06-18

    Modelling the blood-CNS barriers of the brain and spinal cord in vitro continues to provide a considerable challenge for research studying the passage of large and small molecules in and out of the central nervous system, both within the context of basic biology and for pharmaceutical drug discovery. Although there has been considerable success over the previous two decades in establishing useful in vitro primary endothelial cell cultures from the blood-CNS barriers, no model fully mimics the high electrical resistance, low paracellular permeability and selective influx/efflux characteristics of the in vivo situation. Furthermore, such primary-derived cultures are typically labour-intensive and generate low yields of cells, limiting scope for experimental work. We thus aimed to establish protocols for the high yield isolation and culture of endothelial cells from both rat brain and spinal cord. Our aim was to optimise in vitro conditions for inducing phenotypic characteristics in these cells that were reminiscent of the in vivo situation, such that they developed into tight endothelial barriers suitable for performing investigative biology and permeability studies. Brain and spinal cord tissue was taken from the same rats and used to specifically isolate endothelial cells to reconstitute as in vitro blood-CNS barrier models. Isolated endothelial cells were cultured to expand the cellular yield and then passaged onto cell culture inserts for further investigation. Cell culture conditions were optimised using commercially available reagents and the resulting barrier-forming endothelial monolayers were characterised by functional permeability experiments and in vitro phenotyping by immunocytochemistry and western blotting. Using a combination of modified handling techniques and cell culture conditions, we have established and optimised a protocol for the in vitro culture of brain and, for the first time in rat, spinal cord endothelial cells. 
High yields of both CNS endothelial cell types can be obtained, and these can be passaged onto large numbers of cell culture inserts for in vitro permeability studies. The passaged brain and spinal cord endothelial cells are pure and express endothelial markers, tight junction proteins and intracellular transport machinery. Further, both models exhibit tight, functional barrier characteristics that are discriminating against large and small molecules in permeability assays and show functional expression of the pharmaceutically important P-gp efflux transporter. Our techniques allow the provision of high yields of robust sister cultures of endothelial cells that accurately model the blood-CNS barriers in vitro. These models are ideally suited for use in studying the biology of the blood-brain barrier and blood-spinal cord barrier in vitro and for pre-clinical drug discovery.

  5. Robust optimization based energy dispatch in smart grids considering demand uncertainty

    NASA Astrophysics Data System (ADS)

    Nassourou, M.; Puig, V.; Blesa, J.

    2017-01-01

    In this study, we discuss the application of robust optimization to the problem of economic energy dispatch in smart grids. Robust-optimization-based MPC strategies for tackling uncertain load demands are developed. Unexpected additive disturbances are modelled by defining an affine dependence between the control inputs and the uncertain load demands. The developed strategies were applied to a hybrid power system connected to an electrical power grid. Furthermore, to demonstrate the superiority of standard Economic MPC over tracking MPC, a comparison (e.g. average daily cost) between standard tracking MPC, standard Economic MPC, and the integration of both in one-layer and two-layer approaches was carried out. The goal of this research is to design a controller based on Economic MPC strategies that tackles uncertainties, in order to minimise economic costs and guarantee the service reliability of the system.
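    The min-max flavour of robust dispatch described above can be illustrated with a deliberately simplified sketch (not the paper's MPC formulation): choose a generation setpoint that minimises the worst-case cost over a set of demand scenarios, with any shortfall bought from the grid at an assumed premium price. All names and prices are illustrative assumptions.

```python
def worst_case_cost(g, demand_scenarios, c_gen=0.05, c_grid=0.12):
    """Cost of setpoint g under the worst of the demand scenarios."""
    costs = []
    for d in demand_scenarios:
        shortfall = max(d - g, 0.0)        # energy bought from the grid
        costs.append(c_gen * g + c_grid * shortfall)
    return max(costs)

def robust_dispatch(demand_scenarios, g_max=100.0, steps=1000):
    """Grid search for the min-max (robust) generation setpoint."""
    best_g, best_cost = 0.0, float("inf")
    for i in range(steps + 1):
        g = g_max * i / steps
        cost = worst_case_cost(g, demand_scenarios)
        if cost < best_cost:
            best_g, best_cost = g, cost
    return best_g, best_cost

scenarios = [60.0, 75.0, 90.0]   # uncertain demand sampled at three points
g_star, cost_star = robust_dispatch(scenarios)
```

    Because grid energy is assumed dearer than local generation here, the robust setpoint ends up covering the largest demand scenario exactly.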

  6. New Trends in Forging Technologies

    NASA Astrophysics Data System (ADS)

    Behrens, B.-A.; Hagen, T.; Knigge, J.; Elgaly, I.; Hadifi, T.; Bouguecha, A.

    2011-05-01

    Limited natural resources increase the demand for highly efficient machinery and means of transportation. New energy-saving mobility concepts call for design optimisation through downsizing of components and the choice of corrosion-resistant materials possessing high strength-to-density ratios. Component downsizing can be performed either by constructive structural optimisation or by substituting heavy materials with lighter high-strength ones. In this context, forging plays an important role in manufacturing load-optimised structural components. At the Institute of Metal Forming and Metal-Forming Machines (IFUM) various innovative forging technologies have been developed. With regard to structural optimisation, different strategies for localised reinforcement of components were investigated. Locally induced strain hardening by means of cold forging under a superimposed hydrostatic pressure could be realised. In addition, controlled martensitic zones could be created through forming-induced phase conversion in metastable austenitic steels. Other research focused on the replacement of heavy steel parts with high-strength nonferrous alloys or hybrid material compounds. Several forging processes of magnesium, aluminium and titanium alloys for different aeronautical and automotive applications were developed. The whole process chain from material characterisation via simulation-based process design to the production of the parts has been considered. The feasibility of forging complex shaped geometries using these alloys was confirmed. In spite of the difficulties encountered due to machine noise and high temperature, the acoustic emission (AE) technique has been successfully applied for online monitoring of forging defects. A new AE analysis algorithm has been developed, so that different signal patterns due to various events, such as product/die cracking or die wear, can be detected and classified. 
Further, the feasibility of the mentioned forging technologies was proven by means of the finite element analysis (FEA). For example, the integrity of forging dies with respect to crack initiation due to thermo-mechanical fatigue as well as the ductile damage of forgings was investigated with the help of cumulative damage models. In this paper some of the mentioned approaches are described.

  7. Effects of efforts to optimise morbidity and mortality rounds to serve contemporary quality improvement and educational goals: a systematic review.

    PubMed

    Smaggus, Andrew; Mrkobrada, Marko; Marson, Alanna; Appleton, Andrew

    2018-01-01

    The quality and safety movement has reinvigorated interest in optimising morbidity and mortality (M&M) rounds. We performed a systematic review to identify effective means of updating M&M rounds to (1) identify and address quality and safety issues, and (2) address contemporary educational goals. Relevant databases (Medline, Embase, PubMed, Education Resource Information Centre, Cumulative Index to Nursing and Allied Health Literature, Healthstar, and Global Health) were searched to identify primary sources. Studies were included if they (1) investigated an intervention applied to M&M rounds, (2) reported outcomes relevant to the identification of quality and safety issues, or educational outcomes relevant to quality improvement (QI), patient safety or general medical education and (3) included a control group. Study quality was assessed using the Medical Education Research Study Quality Instrument and Newcastle-Ottawa Scale-Education instruments. Given the heterogeneity of interventions and outcome measures, results were analysed thematically. The final analysis included 19 studies. We identified multiple effective strategies (updating objectives, standardising elements of rounds and attaching rounds to a formal quality committee) to optimise M&M rounds for a QI/safety purpose. These efforts were associated with successful integration of quality and safety content into rounds, and increased implementation of QI interventions. Consistent effects on educational outcomes were difficult to identify, likely due to the use of methodologies ill-fitted for educational research. These results are encouraging for those seeking to optimise the quality and safety mission of M&M rounds. However, the inability to identify consistent educational effects suggests the investigation of M&M rounds could benefit from additional methodologies (qualitative, mixed methods) in order to understand the complex mechanisms driving learning at M&M rounds. 

  8. Subsampling for dataset optimisation

    NASA Astrophysics Data System (ADS)

    Ließ, Mareike

    2017-04-01

    Soil-landscapes have formed through the interaction of soil-forming factors and pedogenic processes. Modelling these landscapes in their pedodiversity and the underlying processes requires a representative, unbiased dataset. This concerns model input as well as output data. However, very often large datasets are available which are highly heterogeneous and were gathered for various purposes, but not to model a particular process or data space. As a first step, the overall data space and/or landscape section to be modelled needs to be identified, including considerations regarding scale and resolution. Then the available dataset needs to be optimised via subsampling to represent this n-dimensional data space well. A number of well-known sampling designs may be adapted to suit this purpose. The overall approach follows three main strategies: (1) the data space may be condensed and de-correlated by a factor analysis to facilitate the subsampling process; (2) different methods of pattern recognition serve to structure the n-dimensional data space to be modelled into units, which then form the basis for the optimisation of an existing dataset through a sensible selection of samples, and along the way, data units for which there is currently insufficient soil data available may be identified; and (3) random samples from the n-dimensional data space may be replaced by similar samples from the available dataset. While being a precondition for developing data-driven statistical models, this approach may also help to develop universal process models and identify limitations in existing models.
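    Strategy (3) above, replacing random points in the data space by their nearest available samples, can be sketched in a few lines. The uniform target design and all names are illustrative assumptions, not the author's implementation.

```python
import random

def nearest_available(target, pool, used):
    """Index of the unused pool sample closest (squared distance) to target."""
    best_i, best_d = None, float("inf")
    for i, p in enumerate(pool):
        if i in used:
            continue
        d = sum((a - b) ** 2 for a, b in zip(target, p))
        if d < best_d:
            best_i, best_d = i, d
    return best_i

def subsample(pool, n, bounds, seed=0):
    """Draw n uniform random points over the data space and replace each
    by its nearest not-yet-used sample from the available dataset."""
    rng = random.Random(seed)
    used, chosen = set(), []
    for _ in range(n):
        target = [rng.uniform(lo, hi) for lo, hi in bounds]
        i = nearest_available(target, pool, used)
        used.add(i)
        chosen.append(pool[i])
    return chosen

pts = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0), (1.0, 0.0), (0.5, 0.5)]
sel = subsample(pts, 3, [(0.0, 1.0), (0.0, 1.0)])
```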

  9. Location, timing and extent of wildfire vary by cause of ignition

    USGS Publications Warehouse

    Syphard, Alexandra D.; Keeley, Jon E.

    2015-01-01

    The increasing extent of wildfires has prompted investigation into alternative fire management approaches to complement the traditional strategies of fire suppression and fuels manipulation. Wildfire prevention through ignition reduction is an approach with potential for success, but ignitions result from a variety of causes. If some ignition sources result in higher levels of area burned, then ignition prevention programmes could be optimised to target these distributions in space and time. We investigated the most common ignition causes in two southern California sub-regions, where humans are responsible for more than 95% of all fires, and asked whether these causes exhibited distinct spatial or intra-annual temporal patterns, or resulted in different extents of fire in 10-29-year periods, depending on sub-region. Different ignition causes had distinct spatial patterns and those that burned the most area tended to occur in autumn months. Both the number of fires and area burned varied according to cause of ignition, but the cause of the most numerous fires was not always the cause of the greatest area burned. In both sub-regions, power line ignitions were one of the top two causes of area burned: the other major causes were arson in one sub-region and power equipment in the other. Equipment use also caused the largest number of fires in both sub-regions. These results have important implications for understanding why, where and how ignitions are caused, and in turn, how to develop strategies to prioritise and focus fire prevention efforts. Fire extent has increased tremendously in southern California, and because most fires are caused by humans, ignition reduction offers a potentially powerful management strategy, especially if optimised to reflect the distinct spatial and temporal distributions in different ignition causes.

  10. Reverse engineering a gene network using an asynchronous parallel evolution strategy

    PubMed Central

    2010-01-01

    Background: The use of reverse engineering methods to infer gene regulatory networks by fitting mathematical models to gene expression data is becoming increasingly popular and successful. However, increasing model complexity means that more powerful global optimisation techniques are required for model fitting. The parallel Lam Simulated Annealing (pLSA) algorithm has been used in such approaches, but recent research has shown that island Evolutionary Strategies can produce faster, more reliable results. However, no parallel island Evolutionary Strategy (piES) has yet been demonstrated to be effective for this task. Results: Here, we present synchronous and asynchronous versions of the piES algorithm, and apply them to a real reverse engineering problem: inferring parameters in the gap gene network. We find that the asynchronous piES exhibits very little communication overhead, and shows significant speed-up for up to 50 nodes: the piES running on 50 nodes is nearly 10 times faster than the best serial algorithm. We compare the asynchronous piES to pLSA on the same test problem, measuring the time required to reach particular levels of residual error, and show that it converges much faster than pLSA across all optimisation conditions tested. Conclusions: Our results demonstrate that the piES is consistently faster and more reliable than the pLSA algorithm on this problem, and scales better with increasing numbers of nodes. In addition, the piES is especially well suited to further improvements and adaptations: firstly, the algorithm's fast initial descent speed and high reliability make it a good candidate for use as part of a global/local hybrid search algorithm; secondly, it has the potential to be used as part of a hierarchical evolutionary algorithm, which takes advantage of modern multi-core computing architectures. PMID:20196855
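    The serial core that each island of such an algorithm would run can be sketched as a plain (mu + lambda) evolution strategy; the asynchronous island/migration layer is omitted, and the sphere function stands in for the gap gene fitting problem.

```python
import random

def evolution_strategy(fitness, dim, mu=5, lam=20, sigma=0.5,
                       generations=200, seed=1):
    """(mu + lambda) ES: the per-island search loop of an island model;
    migration between islands is omitted in this sketch."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(mu)]
    for _ in range(generations):
        offspring = []
        for _ in range(lam):
            parent = rng.choice(pop)
            child = [x + rng.gauss(0, sigma) for x in parent]
            offspring.append(child)
        # elitist selection: keep the mu best of parents and offspring
        pop = sorted(pop + offspring, key=fitness)[:mu]
        sigma *= 0.99                      # simple step-size decay
    return pop[0]

sphere = lambda x: sum(v * v for v in x)   # toy minimisation target
best = evolution_strategy(sphere, dim=3)
```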

  11. Production and Robustness of a Cacao Agroecosystem: Effects of Two Contrasting Types of Management Strategies

    PubMed Central

    Sabatier, Rodolphe; Wiegand, Kerstin; Meyer, Katrin

    2013-01-01

    Ecological intensification, i.e. relying on ecological processes to replace chemical inputs, is often presented as the ideal alternative to conventional farming based on an intensive use of chemicals. It is said both to maintain high yield and to provide more robustness to the agroecosystem. However, few studies have compared the two types of management with respect to their consequences for production and robustness toward perturbation. In this study, our aim is to assess the productive performance and robustness toward diverse perturbations of a Cacao agroecosystem managed with two contrasting groups of strategies: one group of strategies relying on a high level of pesticides and a second relying on low levels of pesticides. We conducted this study using a dynamical model of a Cacao agroecosystem that includes Cacao production dynamics and the dynamics of three insects: a pest (the Cacao Pod Borer, Conopomorpha cramerella) and two characteristic but unspecified beneficial insects (a pollinator of Cacao and a parasitoid of the Cacao Pod Borer). Our results showed two opposite behaviours of the Cacao agroecosystem depending on its management: an agroecosystem relying on a high input of pesticides and showing low ecosystem functioning, and an agroecosystem with low inputs, relying on a high functioning of the ecosystem. From the production point of view, neither type of management clearly outclassed the other, and their ranking depended on the type of pesticide used. From the robustness point of view, the two types of management performed differently when subjected to different types of perturbations. Ecologically intensive systems were more robust to pest outbreaks and perturbations related to pesticide characteristics, while chemically intensive systems were more robust to Cacao production and management-related perturbations. PMID:24312469

  12. Production and robustness of a Cacao agroecosystem: effects of two contrasting types of management strategies.

    PubMed

    Sabatier, Rodolphe; Wiegand, Kerstin; Meyer, Katrin

    2013-01-01

    Ecological intensification, i.e. relying on ecological processes to replace chemical inputs, is often presented as the ideal alternative to conventional farming based on an intensive use of chemicals. It is said both to maintain high yield and to provide more robustness to the agroecosystem. However, few studies have compared the two types of management with respect to their consequences for production and robustness toward perturbation. In this study, our aim is to assess the productive performance and robustness toward diverse perturbations of a Cacao agroecosystem managed with two contrasting groups of strategies: one group of strategies relying on a high level of pesticides and a second relying on low levels of pesticides. We conducted this study using a dynamical model of a Cacao agroecosystem that includes Cacao production dynamics and the dynamics of three insects: a pest (the Cacao Pod Borer, Conopomorpha cramerella) and two characteristic but unspecified beneficial insects (a pollinator of Cacao and a parasitoid of the Cacao Pod Borer). Our results showed two opposite behaviours of the Cacao agroecosystem depending on its management: an agroecosystem relying on a high input of pesticides and showing low ecosystem functioning, and an agroecosystem with low inputs, relying on a high functioning of the ecosystem. From the production point of view, neither type of management clearly outclassed the other, and their ranking depended on the type of pesticide used. From the robustness point of view, the two types of management performed differently when subjected to different types of perturbations. Ecologically intensive systems were more robust to pest outbreaks and perturbations related to pesticide characteristics, while chemically intensive systems were more robust to Cacao production and management-related perturbations.

  13. Contexts, concepts and cognition: principles for the transfer of basic science knowledge.

    PubMed

    Kulasegaram, Kulamakan M; Chaudhary, Zarah; Woods, Nicole; Dore, Kelly; Neville, Alan; Norman, Geoffrey

    2017-02-01

    Transfer of basic science aids novices in the development of clinical reasoning. The literature suggests that although transfer is often difficult for novices, it can be optimised by two complementary strategies: (i) focusing learners on conceptual knowledge of basic science or (ii) exposing learners to multiple contexts in which the basic science concepts may apply. The relative efficacy of each strategy as well as the mechanisms that facilitate transfer are unknown. In two sequential experiments, we compared both strategies and explored mechanistic changes in how learners address new transfer problems. Experiment 1 was a 2 × 3 design in which participants were randomised to learn three physiology concepts with or without emphasis on the conceptual structure of basic science via illustrative analogies and by means of one, two or three contexts during practice (operationalised as organ systems). Transfer of these concepts to explain pathologies in familiar organ systems (near transfer) and unfamiliar organ systems (far transfer) was evaluated during immediate and delayed testing. Experiment 2 examined whether exposure to conceptual analogies and multiple contexts changed how learners classified new problems. Experiment 1 showed that increasing context variation significantly improved far transfer performance but there was no difference between two and three contexts during practice. Similarly, the increased conceptual analogies led to higher performance for far transfer. Both interventions had independent but additive effects on overall performance. Experiment 2 showed that such analogies and context variation caused learners to shift to using structural characteristics to classify new problems even when there was superficial similarity to previous examples. Understanding problems based on conceptual structural characteristics is necessary for successful transfer. 
Transfer of basic science can be optimised by using multiple strategies that collectively emphasise conceptual structure. This means teaching must focus on conserved basic science knowledge and de-emphasise superficial features.

  14. Parameters estimation for reactive transport: A way to test the validity of a reactive model

    NASA Astrophysics Data System (ADS)

    Aggarwal, Mohit; Cheikh Anta Ndiaye, Mame; Carrayrou, Jérôme

    The chemical parameters used in reactive transport models are not known accurately due to the complexity and heterogeneous conditions of a real domain. We present an efficient algorithm for estimating the chemical parameters using a Monte Carlo method. Monte Carlo methods are very robust for the optimisation of the highly non-linear mathematical models describing reactive transport. Reactive transport of tributyltin (TBT) through natural quartz sand at seven different pHs is taken as the test case. Our algorithm is used to estimate the chemical parameters of the sorption of TBT onto the natural quartz sand. By testing and comparing three surface-complexation models, we show that the proposed adsorption model cannot explain the experimental data.
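    The Monte Carlo estimation idea, drawing parameter sets at random and keeping the best fit to the data, can be sketched as follows. The linear toy model stands in for the TBT sorption chemistry and is purely illustrative.

```python
import random

def monte_carlo_fit(model, param_bounds, data, n_draws=5000, seed=0):
    """Random-search parameter estimation: draw parameter sets uniformly
    from their bounds and keep the one with the smallest squared misfit."""
    rng = random.Random(seed)
    best_p, best_err = None, float("inf")
    for _ in range(n_draws):
        p = [rng.uniform(lo, hi) for lo, hi in param_bounds]
        err = sum((model(x, p) - y) ** 2 for x, y in data)
        if err < best_err:
            best_p, best_err = p, err
    return best_p, best_err

# Illustrative test case: recover (a, b) of a linear model from clean data.
model = lambda x, p: p[0] * x + p[1]
data = [(x, 2.0 * x + 1.0) for x in range(10)]
params, err = monte_carlo_fit(model, [(0.0, 5.0), (0.0, 5.0)], data)
```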

  15. One-way EPR steering and genuine multipartite EPR steering

    NASA Astrophysics Data System (ADS)

    He, Qiongyi; Reid, Margaret D.

    2012-11-01

    We propose criteria and experimental strategies to realise Einstein-Podolsky-Rosen (EPR) steering nonlocality. One-way steering can be obtained where there is an asymmetry of thermal noise on each system. We also present EPR steering inequalities that act as signatures, and suggest how to optimise EPR correlations in specific schemes so that genuine multipartite EPR steering nonlocality (EPR paradox) can also possibly be realised. The results presented here also apply to spatially separated macroscopic atomic ensembles.

  16. Cascade defense via routing in complex networks

    NASA Astrophysics Data System (ADS)

    Xu, Xiao-Lan; Du, Wen-Bo; Hong, Chen

    2015-05-01

    As cascading failures in networked traffic systems become more and more serious, research on cascade defense in complex networks has become a hotspot in recent years. In this paper, we propose a traffic-based cascading failure model, in which each packet in the network has its own source and destination. When a cascade is triggered, packets are redistributed according to a given routing strategy. Here, a global hybrid (GH) routing strategy, which uses the dynamic information of the queue length and the static information of nodes' degree, is proposed to defend the network against cascades. Comparing the GH strategy with the shortest path (SP), efficient routing (ER) and global dynamic (GD) routing strategies, we find that the GH strategy is more effective than the other routing strategies in improving network robustness against cascading failures. Our work provides insight into the robustness of networked traffic systems.
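    A hypothetical reading of such a hybrid rule, not taken from the paper, scores each candidate neighbour by a weighted sum of its static degree and its dynamic queue length and forwards the packet to the lowest score; the weighting h is an assumed tunable parameter.

```python
def next_hop(neighbours, degree, queue_len, h=0.5):
    """Pick the neighbour minimising h*degree + (1-h)*queue_length,
    blending static topology with dynamic congestion information."""
    return min(neighbours,
               key=lambda n: h * degree[n] + (1 - h) * queue_len[n])

degree = {"a": 10, "b": 3, "c": 5}       # static node degrees
queue_len = {"a": 0, "b": 8, "c": 1}     # current queue lengths
chosen = next_hop(["a", "b", "c"], degree, queue_len)
```

    With h = 1 the rule degenerates to purely static (degree-based) routing, and with h = 0 to purely dynamic (queue-based) routing.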

  17. Optimisation on processing parameters for minimising warpage on side arm using response surface methodology (RSM) and particle swarm optimisation (PSO)

    NASA Astrophysics Data System (ADS)

    Rayhana, N.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.; Sazli, M.; Yahya, Z. R.

    2017-09-01

    This study presents the application of an optimisation method to reduce the warpage of a side arm part. Autodesk Moldflow Insight software was integrated into this study to analyse the warpage. A design of experiments (DOE) for response surface methodology (RSM) was constructed, and particle swarm optimisation (PSO) was applied using the regression equation obtained from RSM. The optimisation method yields optimised processing parameters with minimum warpage. Mould temperature, melt temperature, packing pressure, packing time and cooling time were selected as the variable parameters. Parameter selection was based on the most significant factors affecting warpage reported by previous researchers. The results show that warpage was improved by 28.16% for RSM and 28.17% for PSO. The warpage improvement of PSO over RSM is only 0.01%. Thus, optimisation using RSM is already sufficient to give the best combination of parameters and an optimum warpage value for the side arm part. The most significant parameter affecting warpage is packing pressure.
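    The RSM-then-PSO pipeline can be sketched as follows: a quadratic surrogate stands in for the fitted RSM regression (the model and the parameter ranges here are assumptions, not the paper's), and a minimal PSO searches it for the minimum-warpage settings.

```python
import random

def pso(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimiser over box-constrained variables."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g_i = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g_i][:], pbest_f[g_i]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                lo, hi = bounds[d]
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

# Hypothetical quadratic surrogate standing in for the fitted RSM model
# (two of the five parameters: mould and melt temperature, assumed ranges):
warpage = lambda p: (p[0] - 60.0) ** 2 / 100.0 + (p[1] - 230.0) ** 2 / 400.0 + 0.1
best, val = pso(warpage, [(40.0, 80.0), (200.0, 260.0)])
```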

  18. Metaheuristic optimisation methods for approximate solving of singular boundary value problems

    NASA Astrophysics Data System (ADS)

    Sadollah, Ali; Yadav, Neha; Gao, Kaizhou; Su, Rong

    2017-07-01

    This paper presents a novel approximation technique based on metaheuristics and a weighted residual function (WRF) for tackling singular boundary value problems (BVPs) arising in engineering and science. With the aid of certain fundamental concepts of mathematics, Fourier series expansion, and metaheuristic optimisation algorithms, a singular BVP can be recast as an optimisation problem with the boundary conditions as constraints. The target is to minimise the WRF (i.e. the error function) constructed in the approximation of the BVP. The scheme uses the generational distance metric for quality evaluation of the approximate solutions against exact solutions (i.e. as an error-evaluator metric). Four test problems, including two linear and two non-linear singular BVPs, are considered in this paper to check the efficiency and accuracy of the proposed algorithm. The optimisation task is performed using three different optimisers: particle swarm optimisation, the water cycle algorithm, and the harmony search algorithm. The optimisation results obtained show that the suggested technique can be successfully applied to the approximate solving of singular BVPs.
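    The weighted-residual idea can be sketched on a toy (non-singular) BVP u'' = -1, u(0) = u(1) = 0: a sine series satisfies the boundary conditions by construction, and a simple random search, standing in here for PSO/WCA/HS, minimises the squared residual at collocation points. The problem and all constants are illustrative assumptions.

```python
import math
import random

def residual_norm(coeffs, xs):
    """Squared residual of u'' + 1 = 0 for u(x) = sum_k a_k sin(k*pi*x),
    which satisfies u(0) = u(1) = 0 by construction."""
    total = 0.0
    for x in xs:
        upp = sum(-a * (k * math.pi) ** 2 * math.sin(k * math.pi * x)
                  for k, a in enumerate(coeffs, start=1))
        total += (upp + 1.0) ** 2
    return total

def fit(n_terms=3, n_draws=20000, seed=0):
    """Random search over the series coefficients (a stand-in optimiser)."""
    rng = random.Random(seed)
    xs = [i / 20 for i in range(1, 20)]      # interior collocation points
    best, best_r = None, float("inf")
    for _ in range(n_draws):
        c = [rng.uniform(-0.2, 0.2) for _ in range(n_terms)]
        r = residual_norm(c, xs)
        if r < best_r:
            best, best_r = c, r
    return best, best_r

coeffs, r = fit()
```

    For this toy problem the exact leading coefficient is 4/pi^3 (about 0.129), so the recovered coeffs[0] can be checked against it.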

  19. Optimisation study of a vehicle bumper subsystem with fuzzy parameters

    NASA Astrophysics Data System (ADS)

    Farkas, L.; Moens, D.; Donders, S.; Vandepitte, D.

    2012-10-01

    This paper deals with the design and optimisation for crashworthiness of a vehicle bumper subsystem, which is a key scenario for vehicle component design. Automotive manufacturers and suppliers have to find optimal design solutions for such subsystems that comply with the conflicting requirements of the regulatory bodies regarding functional performance (safety and repairability) and environmental impact (mass). For the bumper design challenge, an integrated methodology for multi-attribute design engineering of mechanical structures is set up. The integrated process captures the various tasks that are usually performed manually, thereby facilitating automated design iterations for optimisation. Subsequently, an optimisation process is applied that takes the effect of parametric uncertainties into account, such that the system-level failure possibility is acceptable. This optimisation process is referred to as possibility-based design optimisation and integrates the fuzzy FE analysis applied for uncertainty treatment in crash simulations. This process is the counterpart of the reliability-based design optimisation used in a probabilistic context with statistically defined parameters (variabilities).

  20. A robust control strategy for mitigating renewable energy fluctuations in a real hybrid power system combined with SMES

    NASA Astrophysics Data System (ADS)

    Magdy, G.; Shabib, G.; Elbaset, Adel A.; Qudaih, Yaser; Mitani, Yasunori

    2018-05-01

    Utilizing Renewable Energy Sources (RESs) is attracting great attention as a solution to future energy shortages. However, the irregular nature of RESs and random load deviations cause large frequency and voltage fluctuations. Therefore, in order to benefit from the maximum capacity of RESs, a robust strategy for mitigating power fluctuations from RESs must be applied. Hence, this paper proposes a design of Load Frequency Control (LFC) coordinated with Superconducting Magnetic Energy Storage (SMES) technology (i.e., an auxiliary LFC), using an optimal PID controller based on Particle Swarm Optimization (PSO), for the Egyptian Power System (EPS) considering high penetration of photovoltaic (PV) power generation. Thus, from the perspective of LFC, the robust control strategy is proposed to maintain the nominal system frequency and mitigate the power fluctuations from RESs against all disturbance sources in the EPS with its multi-source environment. The EPS is decomposed into three dynamic subsystems, non-reheat, reheat and hydro power plants, taking the system nonlinearity into consideration. Nonlinear Matlab/Simulink simulations of the EPS combined with the SMES system and PV solar power confirm that the proposed control strategy achieves robust stability by reducing the transient time, minimizing frequency deviations, maintaining the system frequency, preventing conventional generators from exceeding their power ratings during load disturbances, and mitigating the power fluctuations from RESs.
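    A discrete PID loop of the kind tuned by PSO in such studies can be sketched on a toy first-order frequency-deviation model; the inertia, damping, gains and disturbance size below are illustrative assumptions, not the EPS parameters.

```python
def simulate_lfc(kp=2.0, ki=1.0, kd=0.1, dt=0.01, steps=2000):
    """Discrete PID regulating frequency deviation df of a toy swing model
    M*d(df)/dt = u - D*df - d_load, after a 0.1 p.u. step load disturbance."""
    M, D = 10.0, 1.0          # assumed inertia and damping
    d_load = 0.1              # step load disturbance (p.u.)
    df, integral, prev_err = 0.0, 0.0, 0.0
    for _ in range(steps):
        err = -df                          # drive the deviation to zero
        integral += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        u = kp * err + ki * integral + kd * deriv
        df += dt * (u - D * df - d_load) / M   # explicit Euler step
    return df

final_dev = simulate_lfc()
```

    The integral term asymptotically supplies the disturbance power, so the frequency deviation returns towards zero.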

  1. Robustness of Controllability for Networks Based on Edge-Attack

    PubMed Central

    Nie, Sen; Wang, Xuwen; Zhang, Haifeng; Li, Qilang; Wang, Binghong

    2014-01-01

    We study the controllability of networks in the process of cascading failures under two different attack strategies, random and intentional attack. For the highest-load edge attack, we find that the controllability of Erdős-Rényi networks with moderate average degree is less robust, whereas scale-free networks with moderate power-law exponents show strong robustness of controllability under the same attack strategy. The vulnerability of controllability under random and intentional attacks behaves differently as the removal fraction increases; in particular, we find that the robustness of control plays an important role in cascades at large removal fractions. The simulation results show that for scale-free networks with various power-law exponents, a larger scale of cascades does not necessarily mean a larger increment in the number of driver nodes. Meanwhile, the number of driver nodes in cascading failures is also related to the number of edges in the strongly connected components. PMID:24586507

  2. Robustness of controllability for networks based on edge-attack.

    PubMed

    Nie, Sen; Wang, Xuwen; Zhang, Haifeng; Li, Qilang; Wang, Binghong

    2014-01-01

    We study the controllability of networks in the process of cascading failures under two different attacking strategies, random and intentional attack. For the highest-load edge attack, it is found that the controllability of the Erdős-Rényi network with moderate average degree is less robust, whereas the scale-free network with moderate power-law exponent shows strong robustness of controllability under the same attack strategy. The vulnerability of controllability under random and intentional attacks behaves differently as the removal fraction increases; in particular, we find that the robustness of control plays an important role in cascades for large removal fractions. The simulation results show that for scale-free networks with various power-law exponents, a larger scale of cascades does not imply a greater increase in the number of driver nodes. Meanwhile, the number of driver nodes in cascading failures is also related to the number of edges in the strongly connected components.

  3. Robustness analysis of complex networks with power decentralization strategy via flow-sensitive centrality against cascading failures

    NASA Astrophysics Data System (ADS)

    Guo, Wenzhang; Wang, Hao; Wu, Zhengping

    2018-03-01

    Most existing cascading-failure mitigation strategies for power grids based on complex network theory ignore the impact of electrical characteristics on dynamic performance. In this paper, the robustness of the power grid under a power decentralization strategy is analysed through cascading failure simulation based on AC flow theory. The flow-sensitive (FS) centrality is introduced by integrating topological features and electrical properties to help determine the siting of the generation nodes. The simulation results of the IEEE-bus systems show that the flow-sensitive centrality method is a more stable and accurate approach and can enhance the robustness of the network remarkably. Through the study of the optimal flow-sensitive centrality selection for different networks, we find that the robustness of a network with an obvious small-world effect depends more on the contribution of generation nodes detected by community structure; otherwise, the contribution of generation nodes with an important influence on power flow is more critical. In addition, community structure plays a significant role in balancing the power flow distribution and further slowing the propagation of failures. These results are useful in power grid planning and cascading failure prevention.

  4. Simulated performance of an order statistic threshold strategy for detection of narrowband signals

    NASA Technical Reports Server (NTRS)

    Satorius, E.; Brady, R.; Deich, W.; Gulkis, S.; Olsen, E.

    1988-01-01

    The application of order statistics to signal detection is becoming an increasingly active area of research. This is due to the inherent robustness of rank estimators in the presence of large outliers that would significantly degrade more conventional mean-level-based detection systems. A detection strategy is presented in which the threshold estimate is obtained using order statistics. The performance of this algorithm in the presence of simulated interference and broadband noise is evaluated. In this way, the robustness of the proposed strategy in the presence of the interference can be fully assessed as a function of the interference, noise, and detector parameters.
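
    The core idea can be sketched in a few lines: estimate the noise floor from a rank (order statistic) of the spectral power bins rather than their mean, so a few strong narrowband interferers cannot drag the threshold up. The rank fraction and scale factor below are illustrative tuning parameters, not values from the paper.

```python
def os_threshold(power_bins, rank_frac=0.75, scale=2.0):
    # Threshold = scale * (k-th order statistic); the 75th percentile of the
    # bins tracks the noise floor even when a few bins hold strong signals.
    ordered = sorted(power_bins)
    k = min(int(rank_frac * len(ordered)), len(ordered) - 1)
    return scale * ordered[k]

def detect(power_bins, threshold):
    # Indices of bins exceeding the threshold (candidate narrowband signals).
    return [i for i, p in enumerate(power_bins) if p > threshold]
```

    With 96 noise bins at unit power and four strong interferers, the order-statistic threshold stays at twice the noise floor, while a mean-level threshold would be pulled upward by the outliers.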

  5. CAMELOT: Computational-Analytical Multi-fidElity Low-thrust Optimisation Toolbox

    NASA Astrophysics Data System (ADS)

    Di Carlo, Marilena; Romero Martin, Juan Manuel; Vasile, Massimiliano

    2018-03-01

    Computational-Analytical Multi-fidElity Low-thrust Optimisation Toolbox (CAMELOT) is a toolbox for the fast preliminary design and optimisation of low-thrust trajectories. It solves highly complex combinatorial problems to plan multi-target missions characterised by long spirals including different perturbations. To do so, CAMELOT implements a novel multi-fidelity approach combining analytical surrogate modelling and accurate computational estimations of the mission cost. Decisions are then made using two optimisation engines included in the toolbox: a single-objective global optimiser and a combinatorial optimisation algorithm. CAMELOT has been applied to a variety of case studies: from the design of interplanetary trajectories to the optimal de-orbiting of space debris and from the deployment of constellations to on-orbit servicing. In this paper, the main elements of CAMELOT are described and two examples, solved using the toolbox, are presented.

  6. A CONCEPTUAL FRAMEWORK FOR MANAGING RADIATION DOSE TO PATIENTS IN DIAGNOSTIC RADIOLOGY USING REFERENCE DOSE LEVELS.

    PubMed

    Almén, Anja; Båth, Magnus

    2016-06-01

    The overall aim of the present work was to develop a conceptual framework for managing radiation dose in diagnostic radiology with the intention to support optimisation. An optimisation process was first derived. The framework for managing radiation dose, based on the derived optimisation process, was then outlined. The optimisation process starts from four stages: providing equipment, establishing methodology, performing examinations and ensuring quality. The optimisation process comprises a series of activities and actions at these stages. The current system of diagnostic reference levels is an activity in the last stage, ensuring quality. The system is thus a reactive activity that engages only to a certain extent the core activity of the radiology department, performing examinations. Three reference dose levels (possible, expected and established) were assigned to the first three stages of the optimisation process, excluding ensuring quality. A reasonably achievable dose range is also derived, indicating an acceptable deviation from the established dose level. A reasonable radiation dose for a single patient is within this range. The suggested framework for managing radiation dose should be regarded as one part of the optimisation process. The optimisation process constitutes a variety of complementary activities, of which managing radiation dose is only one. This emphasises the need to take a holistic approach integrating the optimisation process into different clinical activities.

  7. Integrated optimisation technique based on computer-aided capacity and safety evaluation for managing downstream lane-drop merging area of signalised junctions

    NASA Astrophysics Data System (ADS)

    Chen, CHAI; Yiik Diew, WONG

    2017-02-01

    This study provides an integrated strategy, encompassing microscopic simulation, safety assessment, and multi-attribute decision-making, to optimize traffic performance at the downstream merging area of signalized intersections. A Fuzzy Cellular Automata (FCA) model is developed to replicate microscopic movement and merging behavior. Based on simulation experiments, the proposed FCA approach is able to provide capacity and safety evaluation of different traffic scenarios. The results are then evaluated through data envelopment analysis (DEA) and the analytic hierarchy process (AHP). Optimized geometric layouts and control strategies are then suggested for various traffic conditions. An optimal lane-drop distance that is dependent on traffic volume and speed limit can thus be established at the downstream merging area.

  8. [The new strategy of the British health system: reflections on the changes in British health care system in the light of the WHO report on the financing of health systems worldwide].

    PubMed

    Vaccari, Vittorio; Passerino, Costantino; Giagnorio, Maria Laura

    2011-01-01

    The search for a strategy that can optimise resources for the financing of health systems is currently the subject of numerous worldwide experiments. This interest stems from the fact that in most countries, although each has its own specific characteristics, governments try to improve the efficiency and equity of health care. This work analyses how innovative financing options at national level can be combined with decision-making processes typical of quality management to devise strategies for funding health services that are oriented towards their continuous improvement. The paper discusses, in particular, the strategy adopted in England, where the new law 'Equity and Excellence: Liberating the NHS' radically changes the management of the NHS, giving patients the choice of using different types of structures and therefore the possibility to find the most convenient combination in order to obtain the required service.

  9. An integrated control strategy for the composite braking system of an electric vehicle with independently driven axles

    NASA Astrophysics Data System (ADS)

    Sun, Fengchun; Liu, Wei; He, Hongwen; Guo, Hongqiang

    2016-08-01

    For an electric vehicle with independently driven axles, an integrated braking control strategy was proposed to coordinate the regenerative braking and the hydraulic braking. The integrated strategy includes three modes, namely the hybrid composite mode, the parallel composite mode and the pure hydraulic mode. For the hybrid composite mode and the parallel composite mode, the coefficients for distributing the braking force between the hydraulic braking and the two motors' regenerative braking were optimised offline, and response surfaces related to the driving state parameters were established. Meanwhile, the six-sigma method was applied to deal with uncertainty problems for reliability. Additionally, the pure hydraulic mode is activated to ensure braking safety and stability when a predicted failure of the response surfaces occurs. Experimental results under given braking conditions showed that the braking requirements could be well met with high braking stability and energy regeneration rate, and the reliability of the braking strategy was guaranteed under general braking conditions.

  10. Divergence in plant and microbial allocation strategies explains continental patterns in microbial allocation and biogeochemical fluxes.

    PubMed

    Averill, Colin

    2014-10-01

    Allocation trade-offs shape ecological and biogeochemical phenomena at local to global scale. Plant allocation strategies drive major changes in ecosystem carbon cycling. Microbial allocation to enzymes that decompose carbon vs. organic nutrients may similarly affect ecosystem carbon cycling. Current solutions to this allocation problem prioritise stoichiometric tradeoffs implemented in plant ecology. These solutions may not maximise microbial growth and fitness under all conditions, because organic nutrients are also a significant carbon resource for microbes. I created multiple allocation frameworks and simulated microbial growth using a microbial explicit biogeochemical model. I demonstrate that prioritising stoichiometric trade-offs does not optimise microbial allocation, while exploiting organic nutrients as carbon resources does. Analysis of continental-scale enzyme data supports the allocation patterns predicted by this framework, and modelling suggests large deviations in soil C loss based on which strategy is implemented. Therefore, understanding microbial allocation strategies will likely improve our understanding of carbon cycling and climate.

  11. BluePyOpt: Leveraging Open Source Software and Cloud Infrastructure to Optimise Model Parameters in Neuroscience.

    PubMed

    Van Geit, Werner; Gevaert, Michael; Chindemi, Giuseppe; Rössert, Christian; Courcol, Jean-Denis; Muller, Eilif B; Schürmann, Felix; Segev, Idan; Markram, Henry

    2016-01-01

    At many scales in neuroscience, appropriate mathematical models take the form of complex dynamical systems. Parameterizing such models to conform to the multitude of available experimental constraints is a global non-linear optimisation problem with a complex fitness landscape, requiring numerical techniques to find suitable approximate solutions. Stochastic optimisation approaches, such as evolutionary algorithms, have been shown to be effective, but often the setting up of such optimisations and the choice of a specific search algorithm and its parameters is non-trivial, requiring domain-specific expertise. Here we describe BluePyOpt, a Python package targeted at the broad neuroscience community to simplify this task. BluePyOpt is an extensible framework for data-driven model parameter optimisation that wraps and standardizes several existing open-source tools. It simplifies the task of creating and sharing these optimisations, and the associated techniques and knowledge. This is achieved by abstracting the optimisation and evaluation tasks into various reusable and flexible discrete elements according to established best-practices. Further, BluePyOpt provides methods for setting up both small- and large-scale optimisations on a variety of platforms, ranging from laptops to Linux clusters and cloud-based compute infrastructures. The versatility of the BluePyOpt framework is demonstrated by working through three representative neuroscience specific use cases.
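
    BluePyOpt wraps existing evolutionary-optimisation tooling behind a higher-level API; purely to illustrate the underlying idea, here is a minimal, self-contained evolutionary loop fitting parameters of a stand-in "model" to target feature values. The fitness function, bounds and hyperparameters are all invented for the example and are not BluePyOpt's API.

```python
import random

def evaluate(params, target=(2.0, -1.0, 0.5)):
    # Fitness = squared distance between model "features" and target values.
    # In a real optimisation the features would come from a neuron simulation.
    return sum((p - t) ** 2 for p, t in zip(params, target))

def evolve(pop_size=40, gens=60, dim=3, lo=-5.0, hi=5.0, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=evaluate)
        parents = scored[: pop_size // 4]        # truncation selection
        pop = parents[:]                         # elitism: keep the parents
        while len(pop) < pop_size:
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2 + rng.gauss(0.0, 0.1) for x, y in zip(a, b)]
            pop.append([min(hi, max(lo, c)) for c in child])
    return min(pop, key=evaluate)
```

    With averaging crossover, small Gaussian mutation and elitist truncation selection, the best individual converges close to the target parameter vector within a few dozen generations.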

  12. Basis for the development of sustainable optimisation indicators for activated sludge wastewater treatment plants in the Republic of Ireland.

    PubMed

    Gordon, G T; McCann, B P

    2015-01-01

    This paper describes the basis of a stakeholder-based sustainable optimisation indicator (SOI) system to be developed for small-to-medium sized activated sludge (AS) wastewater treatment plants (WwTPs) in the Republic of Ireland (ROI). Key technical publications relating to best practice plant operation, performance audits and optimisation, and indicator and benchmarking systems for wastewater services are identified. Optimisation studies were developed at a number of Irish AS WwTPs and key findings are presented. A national AS WwTP manager/operator survey was carried out to verify the applied operational findings and identify the key operator stakeholder requirements for this proposed SOI system. It was found that most plants require more consistent operational data-based decision-making, monitoring and communication structures to facilitate optimised, sustainable and continuous performance improvement. The applied optimisation and stakeholder consultation phases form the basis of the proposed stakeholder-based SOI system. This system will allow for continuous monitoring and rating of plant performance, facilitate optimised operation and encourage the prioritisation of performance improvement through tracking key operational metrics. Plant optimisation has become a major focus due to the transfer of all ROI water services to a national water utility from individual local authorities and the implementation of the EU Water Framework Directive.

  13. Energy saving strategies of honeybees in dipping nectar

    PubMed Central

    Wu, Jianing; Yang, Heng; Yan, Shaoze

    2015-01-01

    The honeybee’s drinking process has generally been simplified because of its high speed and small scale. In this study, we clearly observed the drinking cycle of the Italian honeybee using a specially designed high-speed camera system. We analysed the pattern of glossal hair erection and the movement kinematics of the protracting tongue (glossa). Results showed that the honeybee used two special protraction strategies to save energy. First, the glossal hairs remain adpressed until the end of the protraction, which indicates that the hydraulic resistance is reduced to less than 1/3 of that in the case where the hairs remain erect. Second, the glossa protracts with a specific velocity profile, and we quantitatively demonstrated that this moving strategy helps reduce the total energy needed for protraction compared with the typical form of protraction with constant acceleration and deceleration. These findings suggest effective methods to optimise the control policies employed by next-generation microfluidic pumps. PMID:26446300

  14. Energy saving strategies of honeybees in dipping nectar.

    PubMed

    Wu, Jianing; Yang, Heng; Yan, Shaoze

    2015-10-08

    The honeybee's drinking process has generally been simplified because of its high speed and small scale. In this study, we clearly observed the drinking cycle of the Italian honeybee using a specially designed high-speed camera system. We analysed the pattern of glossal hair erection and the movement kinematics of the protracting tongue (glossa). Results showed that the honeybee used two special protraction strategies to save energy. First, the glossal hairs remain adpressed until the end of the protraction, which indicates that the hydraulic resistance is reduced to less than 1/3 of that in the case where the hairs remain erect. Second, the glossa protracts with a specific velocity profile, and we quantitatively demonstrated that this moving strategy helps reduce the total energy needed for protraction compared with the typical form of protraction with constant acceleration and deceleration. These findings suggest effective methods to optimise the control policies employed by next-generation microfluidic pumps.

  15. Enhancing robustness and immunization in geographical networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang Liang; Department of Physics, Lanzhou University, Lanzhou 730000; Yang Kongqing

    2007-03-15

    We find that different geographical structures of networks lead to varied percolation thresholds, although these networks may have similar abstract topological structures. Thus, strategies for enhancing robustness and immunization of a geographical network are proposed. Using the generating function formalism, we obtain an explicit form of the percolation threshold q_c for networks containing arbitrary-order cycles. For three-cycles, the dependence of q_c on the clustering coefficients is ascertained. The analysis substantiates the validity of the strategies with analytical evidence.

  16. Robust analysis of an underwater navigational strategy in electrically heterogeneous corridors.

    PubMed

    Dimble, Kedar D; Ranganathan, Badri N; Keshavan, Jishnu; Humbert, J Sean

    2016-08-01

    Obstacles and other global stimuli provide relevant navigational cues to a weakly electric fish. In this work, robust analysis of a control strategy based on electrolocation for performing obstacle avoidance in electrically heterogeneous corridors is presented and validated. Static output feedback control is shown to achieve the desired goal of reflexive obstacle avoidance in such environments in simulation and experimentation. The proposed approach is computationally inexpensive and readily implementable on a small scale underwater vehicle, making underwater autonomous navigation feasible in real-time.

  17. Thermotaxis is a Robust Mechanism for Thermoregulation in C. elegans Nematodes

    PubMed Central

    Ramot, Daniel; MacInnis, Bronwyn L.; Lee, Hau-Chen; Goodman, Miriam B.

    2013-01-01

    Many biochemical networks are robust to variations in network or stimulus parameters. Although robustness is considered an important design principle of such networks, it is not known whether this principle also applies to higher-level biological processes such as animal behavior. In thermal gradients, C. elegans uses thermotaxis to bias its movement along the direction of the gradient. Here we develop a detailed, quantitative map of C. elegans thermotaxis and use these data to derive a computational model of thermotaxis in the soil, a natural environment of C. elegans. This computational analysis indicates that thermotaxis enables animals to avoid temperatures at which they cannot reproduce, to limit excursions from their adapted temperature, and to remain relatively close to the surface of the soil, where oxygen is abundant. Furthermore, our analysis reveals that this mechanism is robust to large variations in the parameters governing both worm locomotion and temperature fluctuations in the soil. We suggest that, similar to biochemical networks, animals evolve behavioral strategies that are robust, rather than strategies that rely on fine-tuning of specific behavioral parameters. PMID:19020047
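
    The robustness claim can be illustrated with a deliberately crude 1-D biased random walk; all parameter values below are invented for the sketch, not taken from the paper's fitted model. The walker keeps its heading more often when moving toward its adapted temperature, and it ends up near the same temperature for a range of bias values.

```python
import random

def thermotaxis(bias=0.3, t_pref=20.0, grad=0.5, steps=5000, seed=7):
    # 1-D caricature of thermotaxis on a linear gradient T(x) = grad * x:
    # the walker is more likely to keep its heading when |T - t_pref| shrinks.
    rng = random.Random(seed)
    x, heading = 0.0, 1.0                 # position (cm) and direction (+1/-1)
    for _ in range(steps):
        t_here = grad * x
        improving = (t_here - t_pref) * heading < 0   # moving toward t_pref
        p_keep = 0.5 + (bias if improving else -bias)
        if rng.random() > p_keep:
            heading = -heading
        x += 0.1 * heading
    return grad * x                       # final temperature
```

    Sweeping the bias parameter over a wide range leaves the final temperature close to the preferred one, echoing the paper's point that the behavioural strategy does not rely on fine-tuned parameters.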

  18. Improved packing of protein side chains with parallel ant colonies

    PubMed Central

    2014-01-01

    Introduction The accurate packing of protein side chains is important for many computational biology problems, such as ab initio protein structure prediction, homology modelling, protein design and ligand docking applications. Many existing solutions model it as a computational optimisation problem. Beyond the design of the search algorithm, most solutions suffer from an inaccurate energy function for judging whether a prediction is good or bad. Even if the search has found the lowest energy, there is no certainty of obtaining the protein structure with correct side chains. Methods We present a side-chain modelling method, pacoPacker, which uses a parallel ant colony optimisation strategy based on sharing a single pheromone matrix. This parallel approach combines different sources of energy functions and generates protein side-chain conformations with the lowest energies jointly determined by the various energy functions. We further optimised the selected rotamers to construct subrotamers by rotamer minimisation, which reasonably improved the discreteness of the rotamer library. Results We focused on improving the accuracy of side-chain conformation prediction. For a testing set of 442 proteins, 87.19% of X1 and 77.11% of X12 angles were predicted correctly within 40° of the X-ray positions. We compared the accuracy of pacoPacker with state-of-the-art methods, such as CIS-RR and SCWRL4. We analysed the results from different perspectives, in terms of protein chains and individual residues. In this comprehensive benchmark testing, 51.5% of proteins within a length of 400 amino acids predicted by pacoPacker were superior to the results of CIS-RR and SCWRL4 simultaneously. Finally, we also showed the advantage of using the subrotamer strategy. All results confirmed that our parallel approach is competitive with state-of-the-art solutions for packing side chains.
Conclusions This parallel approach combines various sources of searching intelligence and energy functions to pack protein side chains. It provides a framework for combining objective functions of differing accuracy and usefulness by designing parallel heuristic search algorithms. PMID:25474164
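
    The ant colony idea can be illustrated on a toy version of the packing problem: residues pick rotamers to minimise a random but fixed unary-plus-pairwise energy, with a shared pheromone matrix biasing later ants toward choices that appeared in good solutions. Everything below (problem size, energies, hyperparameters) is invented for the sketch; a real packer such as pacoPacker adds parallel colonies, force-field energies and real rotamer libraries.

```python
import random

def make_toy_energy(n_res=8, n_rot=4, seed=3):
    # Random but fixed unary and pairwise energy terms standing in for a
    # real side-chain energy function.
    rng = random.Random(seed)
    unary = [[rng.uniform(0, 1) for _ in range(n_rot)] for _ in range(n_res)]
    pair = {(i, j): [[rng.uniform(0, 1) for _ in range(n_rot)]
                     for _ in range(n_rot)]
            for i in range(n_res) for j in range(i + 1, n_res)}
    def energy(assign):
        e = sum(unary[i][r] for i, r in enumerate(assign))
        e += sum(pair[i, j][assign[i]][assign[j]]
                 for i in range(n_res) for j in range(i + 1, n_res))
        return e
    return energy, n_res, n_rot

def aco(energy, n_res, n_rot, ants=20, iters=50, evap=0.1, seed=4):
    rng = random.Random(seed)
    pher = [[1.0] * n_rot for _ in range(n_res)]   # shared pheromone matrix
    best, best_e = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            # Each ant samples one rotamer per residue, biased by pheromone.
            assign = [rng.choices(range(n_rot), weights=pher[i])[0]
                      for i in range(n_res)]
            e = energy(assign)
            if e < best_e:
                best, best_e = assign, e
        for i, r in enumerate(best):               # evaporate, then reinforce
            for s in range(n_rot):
                pher[i][s] *= (1 - evap)
            pher[i][r] += 1.0
        # A parallel version would share `pher` across colonies, as the
        # record describes for pacoPacker.
    return best, best_e
```

    The pheromone update concentrates sampling around low-energy assignments while evaporation keeps early choices from locking in permanently.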

  19. Development and optimisation of atorvastatin calcium loaded self-nanoemulsifying drug delivery system (SNEDDS) for enhancing oral bioavailability: in vitro and in vivo evaluation.

    PubMed

    Kassem, Abdulsalam M; Ibrahim, Hany M; Samy, Ahmed M

    2017-05-01

    The objective of this study was to develop and optimise a self-nanoemulsifying drug delivery system (SNEDDS) of atorvastatin calcium (ATC) for improving the dissolution rate and eventually oral bioavailability. Ternary phase diagrams were constructed on the basis of solubility and emulsification studies. The composition of ATC-SNEDDS was optimised using the Box-Behnken optimisation design. Optimised ATC-SNEDDS was characterised for various physicochemical properties. Pharmacokinetic, pharmacodynamic and histological studies were performed in rats. Optimised ATC-SNEDDS resulted in a droplet size of 5.66 nm, zeta potential of -19.52 mV and t90 of 5.43 min, and completely released ATC within 30 min irrespective of the pH of the medium. The area under the curve of optimised ATC-SNEDDS in rats was 2.34-fold higher than that of ATC suspension. Pharmacodynamic studies revealed significant reduction in serum lipids of rats with fatty liver. Photomicrographs showed improvement in hepatocyte structure. In this study, we confirmed that ATC-SNEDDS would be a promising approach for improving the oral bioavailability of ATC.

  20. Enhancing work-focused supports for people with severe mental illnesses in Australia.

    PubMed

    Contreras, Natalia; Rossell, Susan L; Castle, David J; Fossey, Ellie; Morgan, Dea; Crosse, Caroline; Harvey, Carol

    2012-01-01

    Persons with severe mental illness (SMI) have reduced workforce participation, which leads to significant economic and social disadvantage. This theoretical review introduces the strategies that have been implemented to address this issue. These include Individual Placement and Support (IPS) services, the most widely researched form of supported employment, alongside which cognitive remediation has more recently been recognised in the USA as an intervention to improve employment outcomes by addressing the cognitive impairments often experienced by people with SMI. The authors review the international literature and discuss specifically the Australian context. They suggest that Australia is in a prime position to engage clients in such a dual intervention, having had recent success with increasing access to supported employment programs and workforce re-entry through implementation of the Health Optimisation Program for Employment (HOPE). Such programs assist with gaining and maintaining employment. However, they do not address the cognitive issues that often prevent persons with SMI from effectively participating in work. Thus, optimising current interventions with work-focused cognitive skills development is critical to enhancing employment rates, which remain low for persons with SMI.

  1. Ranibizumab (Lucentis) in neovascular age-related macular degeneration: evidence from clinical trials.

    PubMed

    Mitchell, P; Korobelnik, J-F; Lanzetta, P; Holz, F G; Prünte, C; Schmidt-Erfurth, U; Tano, Y; Wolf, S

    2010-01-01

    Neovascular age-related macular degeneration (AMD) has a poor prognosis if left untreated, frequently resulting in legal blindness. Ranibizumab is approved for treating neovascular AMD. However, further guidance is needed to assist ophthalmologists in clinical practice to optimise treatment outcomes. An international retina expert panel assessed evidence available from prospective, multicentre studies evaluating different ranibizumab treatment schedules (ANCHOR, MARINA, PIER, SAILOR, SUSTAIN and EXCITE) and a literature search to generate evidence-based and consensus recommendations for treatment indication and assessment, retreatment and monitoring. Ranibizumab is indicated for choroidal neovascular lesions with active disease, the clinical parameters of which are outlined. Treatment initiation with three consecutive monthly injections, followed by continued monthly injections, has provided the best visual-acuity outcomes in pivotal clinical trials. If continued monthly injections are not feasible after initiation, a flexible strategy appears viable, with monthly monitoring of lesion activity recommended. Initiation regimens of fewer than three injections have not been assessed. Continuous careful monitoring with flexible retreatment may help avoid vision loss recurring. Standardised biomarkers need to be determined. Evidence-based guidelines will help to optimise treatment outcomes with ranibizumab in neovascular AMD.

  2. Application of Three Existing Stope Boundary Optimisation Methods in an Operating Underground Mine

    NASA Astrophysics Data System (ADS)

    Erdogan, Gamze; Yavuz, Mahmut

    2017-12-01

    The underground mine planning and design optimisation process has received little attention because of the complexity and variability of problems in underground mines. Although a number of optimisation studies and software tools are available, and some of them, in particular, have been implemented effectively to determine the ultimate pit limits in an open pit mine, there is still a lack of studies on the optimisation of ultimate stope boundaries in underground mines. The proposed approaches for this purpose aim at maximising the economic profit by selecting the best possible layout under operational, technical and physical constraints. In this paper, three existing heuristic techniques, the Floating Stope Algorithm, the Maximum Value Algorithm and the Mineable Shape Optimiser (MSO), are examined for the optimisation of stope layout in a case study. Each technique is assessed in terms of applicability, algorithm capabilities and limitations, considering the underground mine planning challenges. Finally, the results are evaluated and compared.

  3. Design Optimisation of a Magnetic Field Based Soft Tactile Sensor

    PubMed Central

    Raske, Nicholas; Kow, Junwai; Alazmani, Ali; Ghajari, Mazdak; Culmer, Peter; Hewson, Robert

    2017-01-01

    This paper investigates the design optimisation of a magnetic field based soft tactile sensor, comprised of a magnet and Hall effect module separated by an elastomer. The aim was to minimise sensitivity of the output force with respect to the input magnetic field; this was achieved by varying the geometry and material properties. Finite element simulations determined the magnetic field and structural behaviour under load. Genetic programming produced phenomenological expressions describing these responses. Optimisation studies constrained by a measurable force and stable loading conditions were conducted; these produced Pareto sets of designs from which the optimal sensor characteristics were selected. The optimisation demonstrated a compromise between sensitivity and the measurable force, a fabricated version of the optimised sensor validated the improvements made using this methodology. The approach presented can be applied in general for optimising soft tactile sensor designs over a range of applications and sensing modes. PMID:29099787
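
    Selecting designs from competing objectives, as in the sensitivity versus measurable-force trade-off described here, reduces to extracting the non-dominated (Pareto) set. A minimal sketch with invented two-objective design points, both objectives to be maximised:

```python
def pareto_front(designs):
    # A design is dominated if another design is at least as good in both
    # objectives and strictly better in at least one.
    front = []
    for i, (s1, f1) in enumerate(designs):
        dominated = any(s2 >= s1 and f2 >= f1 and (s2 > s1 or f2 > f1)
                        for j, (s2, f2) in enumerate(designs) if j != i)
        if not dominated:
            front.append((s1, f1))
    return front
```

    The optimal sensor characteristics are then picked from this front according to the application's priorities, as the record describes.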

  4. A simulation model to estimate the cost and effectiveness of alternative dialysis initiation strategies.

    PubMed

    Lee, Chris P; Chertow, Glenn M; Zenios, Stefanos A

    2006-01-01

    Patients with end-stage renal disease (ESRD) require dialysis to maintain survival. The optimal timing of dialysis initiation in terms of cost-effectiveness has not been established. We developed a simulation model of individuals progressing towards ESRD and requiring dialysis. It can be used to analyze dialysis strategies and scenarios, and it was embedded in an optimization framework to derive improved strategies. Actual (historical) and simulated survival curves and hospitalization rates were virtually indistinguishable. The model overestimated transplantation costs (by 10%), but this was related to confounding by Medicare coverage. To assess the model's robustness, we examined several dialysis strategies while input parameters were perturbed. Under all 38 scenarios, relative rankings remained unchanged. An improved policy for a hypothetical patient was derived using an optimization algorithm. The model produces reliable results and is robust. It enables the cost-effectiveness analysis of dialysis strategies.

  5. Ultra-Precision Measurement and Control of Angle Motion in Piezo-Based Platforms Using Strain Gauge Sensors and a Robust Composite Controller

    PubMed Central

    Liu, Lei; Bai, Yu-Guang; Zhang, Da-Li; Wu, Zhi-Gang

    2013-01-01

    The measurement and control strategy of a piezo-based platform by using strain gauge sensors (SGS) and a robust composite controller is investigated in this paper. First, the experimental setup is constructed by using a piezo-based platform, SGS sensors, an AD5435 platform and two voltage amplifiers. Then, the measurement strategy to measure the tip/tilt angles accurately in the order of sub-μrad is presented. A comprehensive composite control strategy design to enhance the tracking accuracy with a novel driving principle is also proposed. Finally, an experiment is presented to validate the measurement and control strategy. The experimental results demonstrate that the proposed measurement and control strategy provides accurate angle motion with a root mean square (RMS) error of 0.21 μrad, which is approximately equal to the noise level. PMID:23860316

  6. Modeling and simulation of permanent magnet synchronous motor based on neural network control strategy

    NASA Astrophysics Data System (ADS)

    Luo, Bingyang; Chi, Shangjie; Fang, Man; Li, Mengchao

    2017-03-01

    Permanent magnet synchronous motors are widely used in industry, but traditional PID control cannot meet the performance requirements of some demanding applications. In this paper, a hybrid control strategy is adopted in which a nonlinear neural-network PID controller operates in parallel with a traditional PID controller. The high stability and reliability of traditional PID control are combined with the strong adaptive ability and robustness of the neural network. The permanent magnet synchronous motor achieves better control performance by switching between working modes according to the condition of the controlled object. The results showed that the speed response under the composite control strategy was faster than under either single strategy. In the case of a sudden disturbance, the recovery time under the composite control strategy designed in this paper was shorter, and the recovery ability and robustness were stronger.

  7. Avoidance of speckle noise in laser vibrometry by the use of kurtosis ratio: Application to mechanical fault diagnostics

    NASA Astrophysics Data System (ADS)

    Vass, J.; Šmíd, R.; Randall, R. B.; Sovka, P.; Cristalli, C.; Torcianti, B.

    2008-04-01

    This paper presents a statistical technique to enhance vibration signals measured by laser Doppler vibrometry (LDV). The method has been optimised for LDV signals measured on bearings of universal electric motors and applied to quality control of washing machines. Inherent problems of LDV are addressed, particularly the speckle noise occurring when rough surfaces are measured. The presence of speckle noise is detected using a new scalar indicator, the kurtosis ratio (KR), specifically designed to quantify the amount of random impulses generated by this noise. The KR is the ratio of the standard kurtosis to a robust estimate of kurtosis, and thus indicates outliers in the data. Since it is inefficient to reject signals affected by speckle noise, an algorithm for selecting an undistorted portion of a signal is proposed. The algorithm operates in the time domain and is thus fast and simple. It includes band-pass filtering and segmentation of the signal, as well as thresholding of the KR computed for each filtered signal segment. Algorithm parameters are discussed in detail and instructions for optimisation are provided. Experimental results demonstrate that speckle noise is effectively avoided in severely distorted signals, thus improving the signal-to-noise ratio (SNR) significantly. Typical faults are finally detected using squared envelope analysis. It is also shown that the KR of the band-pass filtered signal is related to the spectral kurtosis (SK).
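
    The KR-based segment selection can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the paper's robust kurtosis estimator and threshold value are not reproduced here, so a hypothetical trimmed estimate and an assumed threshold of 2 stand in for them, and the band-pass filtering step is omitted.

```python
import math
import statistics

def kurtosis(x):
    """Standard (Pearson) kurtosis: fourth central moment over squared variance."""
    m = statistics.fmean(x)
    n = len(x)
    m2 = sum((v - m) ** 2 for v in x) / n
    m4 = sum((v - m) ** 4 for v in x) / n
    return m4 / (m2 ** 2)

def robust_kurtosis(x):
    """Hypothetical outlier-resistant stand-in: kurtosis of the middle 90% of samples."""
    s = sorted(x)
    k = max(1, len(s) // 20)
    return kurtosis(s[k:-k])

def speckle_segments(signal, seg_len, threshold=2.0):
    """Return start indices of segments whose kurtosis ratio stays below the
    threshold, i.e. segments judged free of impulsive speckle noise."""
    keep = []
    for i in range(0, len(signal) - seg_len + 1, seg_len):
        seg = signal[i:i + seg_len]
        kr = kurtosis(seg) / robust_kurtosis(seg)
        if kr < threshold:
            keep.append(i)
    return keep

# Toy signal: twenty sine periods with a single large speckle-like impulse.
sig = [math.sin(2 * math.pi * i / 10) for i in range(200)]
sig[150] += 100.0
clean = speckle_segments(sig, seg_len=100)
```

A segment contaminated by speckle impulses has a large standard kurtosis but a much smaller trimmed kurtosis, so its KR exceeds the threshold and the segment is discarded.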

  8. Model-based design of an agricultural biogas plant: application of anaerobic digestion model no.1 for an improved four chamber scheme.

    PubMed

    Wett, B; Schoen, M; Phothilangka, P; Wackerle, F; Insam, H

    2007-01-01

    Different digestion technologies for various substrates are addressed by the generic process description of Anaerobic Digestion Model No. 1. In the case of manure or agricultural wastes, a priori knowledge about the substrate in terms of ADM1 compounds is lacking, and influent characterisation becomes a major issue. The present project was initiated to promote biogas technology in agriculture and to extend profitability to rather small-capacity systems. In order to avoid costly individual planning and installation of each facility, a standardised design approach needs to be elaborated. This intention calls for biokinetic modelling as a systematic tool for process design and optimisation. Cofermentation under field conditions was observed, quality data and flow data were recorded and mass flow balances were calculated. In the laboratory, different substrates have been digested separately in parallel under specified conditions. A configuration of four ADM1 model reactors was set up. Model calibration identified the disintegration rate, the decay rates for sugar degraders and the half saturation constant for sugar as the three most sensitive parameters, showing values (except the latter) about one order of magnitude higher than the default parameters. Finally, the model is applied to the comparison of different reactor configurations and volume partitions. Another optimisation objective is robustness and load flexibility, i.e. the same configuration should adapt to different load situations through a simple recycle control alone, in order to establish a standardised design.

  9. Diagnostics in a digital age: an opportunity to strengthen health systems and improve health outcomes.

    PubMed

    Peeling, Rosanna W

    2015-11-01

    Diagnostics play a critical role in clinical decision making, and in disease control and prevention. Rapid point-of-care (POC) tests for infectious diseases can improve access to diagnosis and patient management, but the quality of these tests varies, the quality of testing is often not assured, and there are few mechanisms to capture test results for surveillance when testing is so decentralised. A new generation of POC molecular tests that are highly sensitive and specific, robust and easy to use are now available for deployment in low resource settings. Decentralisation of testing outside of the laboratory can put tremendous stress on the healthcare system and presents challenges for training and quality assurance. A feature of many of these POC molecular devices is that they are equipped with data transmission capacities. In a digital age, it is possible to link data from diagnostic laboratories and POC test readers and devices to provide data on testing coverage, disease trends and timely information for early warning of infectious disease outbreaks to inform design or optimisation of disease control and elimination programmes. Data connectivity also allows control programmes to monitor the quality of tests and testing, and optimise supply chain management; thus, increasing the efficiency of healthcare systems and improving patient outcomes. © The Author 2015. Published by Oxford University Press on behalf of Royal Society of Tropical Medicine and Hygiene. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  10. Attack Vulnerability of Network Controllability

    PubMed Central

    2016-01-01

    Controllability of complex networks has attracted much attention, and understanding the robustness of network controllability against potential attacks and failures is of practical significance. In this paper, we systematically investigate the attack vulnerability of network controllability for canonical model networks as well as real-world networks subject to attacks on nodes and edges. The attack strategies are selected based on degree and betweenness centralities calculated for either the initial network or the current network during the removal, with random failure serving as a baseline for comparison. It is found that the node-based strategies are often more harmful to network controllability than the edge-based ones, as are the recalculated strategies compared with their static counterparts. The Barabási-Albert scale-free model, which has a highly biased structure, proves to be the most vulnerable of the tested model networks. In contrast, the Erdős-Rényi random model, which lacks structural bias, exhibits much better robustness to both node-based and edge-based attacks. We also survey the control robustness of 25 real-world networks, and the numerical results show that most real networks are control robust to random node failures, a property not observed in the model networks. The recalculated betweenness-based strategy is the most efficient way to harm the controllability of real-world networks. We also find that the edge degree is not a good quantity for measuring the importance of an edge in terms of network controllability. PMID:27588941

  11. Attack Vulnerability of Network Controllability.

    PubMed

    Lu, Zhe-Ming; Li, Xin-Feng

    2016-01-01

    Controllability of complex networks has attracted much attention, and understanding the robustness of network controllability against potential attacks and failures is of practical significance. In this paper, we systematically investigate the attack vulnerability of network controllability for canonical model networks as well as real-world networks subject to attacks on nodes and edges. The attack strategies are selected based on degree and betweenness centralities calculated for either the initial network or the current network during the removal, with random failure serving as a baseline for comparison. It is found that the node-based strategies are often more harmful to network controllability than the edge-based ones, as are the recalculated strategies compared with their static counterparts. The Barabási-Albert scale-free model, which has a highly biased structure, proves to be the most vulnerable of the tested model networks. In contrast, the Erdős-Rényi random model, which lacks structural bias, exhibits much better robustness to both node-based and edge-based attacks. We also survey the control robustness of 25 real-world networks, and the numerical results show that most real networks are control robust to random node failures, a property not observed in the model networks. The recalculated betweenness-based strategy is the most efficient way to harm the controllability of real-world networks. We also find that the edge degree is not a good quantity for measuring the importance of an edge in terms of network controllability.
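
    The node-removal experiments can be illustrated with a minimal sketch. The paper quantifies damage through network controllability (driver-node counts obtained via maximum matching); as a deliberately lighter stand-in, the sketch below tracks the size of the largest connected component under the recalculated degree-based attack, which conveys the same attack logic without the matching machinery.

```python
def largest_component(adj):
    """Size of the largest connected component of an undirected graph
    given as {node: set(neighbours)}."""
    seen, best = set(), 0
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], 0
        seen.add(start)
        while stack:
            u = stack.pop()
            comp += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, comp)
    return best

def recalculated_degree_attack(adj, n_remove):
    """Repeatedly remove the currently highest-degree node (the 'recalculated'
    strategy) and record the largest-component size after each removal."""
    adj = {u: set(vs) for u, vs in adj.items()}  # work on a copy
    sizes = []
    for _ in range(n_remove):
        target = max(adj, key=lambda u: len(adj[u]))
        for v in adj.pop(target):
            adj[v].discard(target)
        sizes.append(largest_component(adj))
    return sizes

# Toy example: a star graph collapses after one recalculated-degree removal.
star = {0: {1, 2, 3, 4, 5}, **{i: {0} for i in range(1, 6)}}
sizes = recalculated_degree_attack(star, 1)
```

The star graph illustrates why hub-targeting (degree-biased) attacks are so damaging to structurally biased networks such as the Barabási-Albert model.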

  12. BluePyOpt: Leveraging Open Source Software and Cloud Infrastructure to Optimise Model Parameters in Neuroscience

    PubMed Central

    Van Geit, Werner; Gevaert, Michael; Chindemi, Giuseppe; Rössert, Christian; Courcol, Jean-Denis; Muller, Eilif B.; Schürmann, Felix; Segev, Idan; Markram, Henry

    2016-01-01

    At many scales in neuroscience, appropriate mathematical models take the form of complex dynamical systems. Parameterizing such models to conform to the multitude of available experimental constraints is a global non-linear optimisation problem with a complex fitness landscape, requiring numerical techniques to find suitable approximate solutions. Stochastic optimisation approaches, such as evolutionary algorithms, have been shown to be effective, but often the setting up of such optimisations and the choice of a specific search algorithm and its parameters is non-trivial, requiring domain-specific expertise. Here we describe BluePyOpt, a Python package targeted at the broad neuroscience community to simplify this task. BluePyOpt is an extensible framework for data-driven model parameter optimisation that wraps and standardizes several existing open-source tools. It simplifies the task of creating and sharing these optimisations, and the associated techniques and knowledge. This is achieved by abstracting the optimisation and evaluation tasks into various reusable and flexible discrete elements according to established best-practices. Further, BluePyOpt provides methods for setting up both small- and large-scale optimisations on a variety of platforms, ranging from laptops to Linux clusters and cloud-based compute infrastructures. The versatility of the BluePyOpt framework is demonstrated by working through three representative neuroscience specific use cases. PMID:27375471
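
    The stochastic search that frameworks like BluePyOpt wrap can be illustrated with a minimal sketch. This is not BluePyOpt's API: it is a generic (mu + lambda)-style evolutionary loop fitting two hypothetical model parameters to data, purely to show the idea of evolutionary parameter optimisation.

```python
import random

def evolve(fitness, bounds, pop_size=20, generations=60, sigma=0.1, seed=1):
    """Minimal (mu + lambda) evolutionary search: keep the best half of the
    population and refill it with Gaussian-mutated copies of random parents."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]
        children = [
            [min(max(g + rng.gauss(0, sigma), lo), hi)
             for g, (lo, hi) in zip(rng.choice(parents), bounds)]
            for _ in range(pop_size - len(parents))
        ]
        pop = parents + children
    return min(pop, key=fitness)

# Toy use: recover the parameters of y = 2x + 1 from noiseless samples.
data = [(x, 2 * x + 1) for x in range(10)]

def error(params):
    a, b = params
    return sum((a * x + b - y) ** 2 for x, y in data)

best = evolve(error, bounds=[(-5, 5), (-5, 5)])
```

Real optimisations replace the toy squared-error fitness with a multi-feature comparison against experimental recordings, which is the part BluePyOpt standardises.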

  13. Mechanisms for Robust Cognition.

    PubMed

    Walsh, Matthew M; Gluck, Kevin A

    2015-08-01

    To function well in an unpredictable environment using unreliable components, a system must have a high degree of robustness. Robustness is fundamental to biological systems and is an objective in the design of engineered systems such as airplane engines and buildings. Cognitive systems, like biological and engineered systems, exist within variable environments. This raises the question, how do cognitive systems achieve similarly high degrees of robustness? The aim of this study was to identify a set of mechanisms that enhance robustness in cognitive systems. We identify three mechanisms that enhance robustness in biological and engineered systems: system control, redundancy, and adaptability. After surveying the psychological literature for evidence of these mechanisms, we provide simulations illustrating how each contributes to robust cognition in a different psychological domain: psychomotor vigilance, semantic memory, and strategy selection. These simulations highlight features of a mathematical approach for quantifying robustness, and they provide concrete examples of mechanisms for robust cognition. © 2014 Cognitive Science Society, Inc.

  14. Screening of marine bacterial producers of polyunsaturated fatty acids and optimisation of production.

    PubMed

    Abd El Razak, Ahmed; Ward, Alan C; Glassey, Jarka

    2014-02-01

    Water samples from three different environments, the Mid-Atlantic Ridge, the Red Sea and the Mediterranean Sea, were screened in order to isolate new bacterial producers of polyunsaturated fatty acids (PUFAs), especially eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA). Two hundred and fifty-one isolates were screened for PUFA production; the highest number of producers was isolated from the Mid-Atlantic Ridge, followed by the Red Sea, while no producers were found in the Mediterranean Sea samples. The screening strategy included a simple colourimetric method followed by confirmation via GC/MS. Among the tested producers, an isolate named 66 was found to be a potentially high PUFA producer, producing relatively high levels of EPA in particular. A Plackett-Burman statistical design of experiments was applied to screen a wide number of media components, identifying glycerol and whey as components of a production medium. This potential low-cost production medium was optimised by applying response surface methodology to obtain the highest productivity, converting industrial by-products into value-added products. The maximum achieved productivity of EPA was 20 mg/g (45 mg/l), representing 11% of the total fatty acids, which is approximately five times more than the amount produced prior to optimisation. The production medium composition was 10.79 g/l whey and 6.87 g/l glycerol. To our knowledge, this is the first investigation of potential bacterial PUFA producers from the Mediterranean and Red Seas, providing an evaluation of a colourimetric screening method as a means of rapidly screening a large number of isolates.
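
    The Plackett-Burman screening step works from a small orthogonal two-level design. As a sketch, the classical 12-run design, commonly used to screen up to 11 factors, can be generated from its standard generating row; this is the textbook construction, not the specific design matrix used in the study.

```python
def plackett_burman_12():
    """Classical 12-run Plackett-Burman design for up to 11 two-level factors.
    Rows 1-11 are cyclic shifts of the generating row; row 12 is all low (-1).
    Every column is balanced (six +1, six -1) and orthogonal to every other."""
    gen = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1]
    rows = [gen[-i:] + gen[:-i] for i in range(11)]
    rows.append([-1] * 11)
    return rows

design = plackett_burman_12()
```

Each row is one experimental run; each column assigns a medium component to its high or low level, so main effects of 11 components can be estimated from only 12 runs.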

  15. Using marketing theory to inform strategies for recruitment: a recruitment optimisation model and the txt2stop experience

    PubMed Central

    2014-01-01

    Background: Recruitment is a major challenge for many trials; just over half reach their targets and almost a third resort to grant extensions. The economic and societal implications of this shortcoming are significant. Yet, we have a limited understanding of the processes that increase the probability that recruitment targets will be achieved. Accordingly, there is an urgent need to bring analytical rigour to the task of improving recruitment, thereby increasing the likelihood that trials reach their recruitment targets. This paper presents a conceptual framework that can be used to improve recruitment to clinical trials. Methods: Using a case-study approach, we reviewed the range of initiatives that had been undertaken to improve recruitment in the txt2stop trial using qualitative (semi-structured interviews with the principal investigator) and quantitative (recruitment) data analysis. Later, the txt2stop recruitment practices were compared to a previous model of marketing a trial and to key constructs in social marketing theory. Results: Post hoc, we developed a recruitment optimisation model to serve as a conceptual framework to improve recruitment to clinical trials. A core premise of the model is that improving recruitment needs to be an iterative, learning process. The model describes three essential activities: i) recruitment phase monitoring, ii) marketing research, and iii) the evaluation of current performance. We describe the initiatives undertaken by the txt2stop trial and the results achieved, as an example of the use of the model. Conclusions: Further research should explore the impact of adopting the recruitment optimisation model when applied to other trials. PMID:24886627

  16. Using marketing theory to inform strategies for recruitment: a recruitment optimisation model and the txt2stop experience.

    PubMed

    Galli, Leandro; Knight, Rosemary; Robertson, Steven; Hoile, Elizabeth; Oladapo, Olubukola; Francis, David; Free, Caroline

    2014-05-22

    Recruitment is a major challenge for many trials; just over half reach their targets and almost a third resort to grant extensions. The economic and societal implications of this shortcoming are significant. Yet, we have a limited understanding of the processes that increase the probability that recruitment targets will be achieved. Accordingly, there is an urgent need to bring analytical rigour to the task of improving recruitment, thereby increasing the likelihood that trials reach their recruitment targets. This paper presents a conceptual framework that can be used to improve recruitment to clinical trials. Using a case-study approach, we reviewed the range of initiatives that had been undertaken to improve recruitment in the txt2stop trial using qualitative (semi-structured interviews with the principal investigator) and quantitative (recruitment) data analysis. Later, the txt2stop recruitment practices were compared to a previous model of marketing a trial and to key constructs in social marketing theory. Post hoc, we developed a recruitment optimisation model to serve as a conceptual framework to improve recruitment to clinical trials. A core premise of the model is that improving recruitment needs to be an iterative, learning process. The model describes three essential activities: i) recruitment phase monitoring, ii) marketing research, and iii) the evaluation of current performance. We describe the initiatives undertaken by the txt2stop trial and the results achieved, as an example of the use of the model. Further research should explore the impact of adopting the recruitment optimisation model when applied to other trials.

  17. On the robust optimization to the uncertain vaccination strategy problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chaerani, D., E-mail: d.chaerani@unpad.ac.id; Anggriani, N., E-mail: d.chaerani@unpad.ac.id; Firdaniza, E-mail: d.chaerani@unpad.ac.id

    2014-02-21

    In order to prevent an epidemic of infectious diseases, the vaccination coverage needs to be minimized while the basic reproduction number is maintained below 1. This means that while keeping vaccination coverage as low as possible, we must still prevent an epidemic arising from the small number of people who are already infected. In this paper, we discuss the case of a vaccination strategy, in terms of minimizing vaccination coverage, when the basic reproduction number is assumed to be an uncertain parameter that lies between 0 and 1. We refer to the linear optimization model for vaccination strategy proposed by Becker and Starrzak (see [2]). Assuming that parameter uncertainty is involved, Tanner et al. (see [9]) proposed an optimal solution of the problem using stochastic programming. In this paper we discuss an alternative way of optimizing the uncertain vaccination strategy using Robust Optimization (see [3]). In this approach we assume that the parameter uncertainty lies within an ellipsoidal uncertainty set, such that we can claim that the obtained result will be achieved by a polynomial-time algorithm (as is guaranteed by the RO methodology). The robust counterpart model is presented.
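
    The ellipsoidal robust counterpart mentioned above has a simple closed form: an uncertain linear constraint a^T x <= b with a = a0 + P u, ||u||_2 <= 1, holds for every a in the ellipsoid exactly when a0^T x + ||P^T x||_2 <= b. A minimal numerical sketch (illustrative numbers, not the vaccination model's data):

```python
import math

def worst_case_lhs(a0, P, x):
    """Worst-case value of a^T x over the ellipsoidal uncertainty set
    a = a0 + P u, ||u||_2 <= 1, which equals a0^T x + ||P^T x||_2."""
    base = sum(ai * xi for ai, xi in zip(a0, x))
    Ptx = [sum(P[i][j] * x[i] for i in range(len(x))) for j in range(len(P[0]))]
    return base + math.sqrt(sum(v * v for v in Ptx))

def robust_feasible(a0, P, x, b):
    """Robust counterpart check: a^T x <= b for every a in the ellipsoid."""
    return worst_case_lhs(a0, P, x) <= b

# Nominal coefficients a0 with a small ellipsoidal perturbation P.
a0 = [1.0, 0.0]
P = [[0.5, 0.0], [0.0, 0.5]]
x = [1.0, 1.0]
wc = worst_case_lhs(a0, P, x)
```

The norm term is exactly the "price of robustness": the larger the ellipsoid (matrix P), the more the nominal constraint is tightened.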

  18. 77 FR 38051 - EPA Activities To Promote Environmental Justice in the Permit Application Process

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-26

    ... (EPA). In 2011, EPA published Plan EJ 2014, the Agency's overarching strategy for advancing... robust community engagement strategies that recognize the value of community outreach. Pursuant to these strategies, facilities engage actively with the community through environmental initiatives, neighborhood...

  19. Research on cascading failure in multilayer network with different coupling preference

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Jin, Lei; Wang, Xiao Juan

    This paper is aimed at constructing multilayer networks that are robust against cascading failure. Considering link protection strategies in reality, we design a cascading failure model based on load distribution and extend it to multilayer networks. We use the cascading failure model to deduce the scale of the largest connected component after cascading failure, from which we find that the performance of four kinds of load distribution strategies is associated with the load ratio of the current edge to its adjacent edges. Coupling preference is a typical characteristic of multilayer networks and is closely related to network robustness. The coupling preference of multilayer networks takes two forms: the coupling preference within layers and the coupling preference between layers. To analyze the relationship between coupling preference and multilayer network robustness, we design a construction algorithm to generate multilayer networks with different coupling preferences. Simulation results show that load distribution based on node betweenness performs the best. When the coupling coefficient within layers is zero, the scale-free network is the most robust. In the random network, assortative coupling within layers is more robust than disassortative coupling. For the coupling preference between layers, assortative coupling between layers is more robust than disassortative coupling in both the scale-free network and the random network.
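
    The load-redistribution mechanism behind such models can be sketched with a toy edge-failure cascade. The paper compares four redistribution strategies; the version below uses equal sharing among adjacent surviving edges and an assumed tolerance parameter alpha, purely to illustrate how a single failure propagates.

```python
def cascade(loads, neighbours, alpha, trigger):
    """Toy edge-load cascade: each edge's capacity is (1 + alpha) times its
    initial load; a failed edge's load is shared equally among its surviving
    adjacent edges, possibly triggering further failures."""
    capacity = {e: (1 + alpha) * l for e, l in loads.items()}
    loads = dict(loads)  # work on a copy
    failed, queue = set(), [trigger]
    while queue:
        e = queue.pop()
        if e in failed:
            continue
        failed.add(e)
        alive = [n for n in neighbours[e] if n not in failed]
        for n in alive:
            loads[n] += loads[e] / len(alive)
        for n in alive:
            if loads[n] > capacity[n]:
                queue.append(n)
    return failed

# Three edges in a chain: a - b - c, all with unit initial load.
loads = {"a": 1.0, "b": 1.0, "c": 1.0}
nbrs = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
```

With a small tolerance the failure of "a" overloads "b" and then "c" (a full cascade); with a generous tolerance the failure stays contained.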

  20. The generalised isodamping approach for robust fractional PID controllers design

    NASA Astrophysics Data System (ADS)

    Beschi, M.; Padula, F.; Visioli, A.

    2017-06-01

    In this paper, we present a novel methodology to design fractional-order proportional-integral-derivative controllers. Based on the description of the controlled system by means of a family of linear models parameterised with respect to a free variable that describes the real process operating point, we design the controller by solving a constrained min-max optimisation problem where the maximum sensitivity has to be minimised. Among the imposed constraints, the most important one is the new generalised isodamping condition, that defines the invariancy of the phase margin with respect to the free parameter variations. It is also shown that the well-known classical isodamping condition is a special case of the new technique proposed in this paper. Simulation results show the effectiveness of the proposed technique and the superiority of the fractional-order controller compared to its integer counterpart.

  1. Manufacturing the Gas Diffusion Layer for PEM Fuel Cell Using a Novel 3D Printing Technique and Critical Assessment of the Challenges Encountered

    PubMed Central

    Singamneni, Sarat; Ramos, Maximiano; Al-Jumaily, Ahmed M

    2017-01-01

    The conventional gas diffusion layer (GDL) of polymer electrolyte membrane (PEM) fuel cells incorporates a carbon-based substrate, which suffers from electrochemical oxidation as well as mechanical degradation, resulting in reduced durability and performance. In addition, its production involves a complex manufacturing process. The proposed technique aims to resolve both these issues by an advanced 3D printing technique, namely selective laser sintering (SLS). In the proposed work, polyamide (PA) is used as the base powder and titanium metal powder is added at an optimised level to enhance the electrical conductivity, thermal, and mechanical properties. The application of selective laser sintering to fabricate a robust gas diffusion substrate for PEM fuel cell applications is quite novel and is attempted here for the first time. PMID:28773156

  2. Manufacturing the Gas Diffusion Layer for PEM Fuel Cell Using a Novel 3D Printing Technique and Critical Assessment of the Challenges Encountered.

    PubMed

    Jayakumar, Arunkumar; Singamneni, Sarat; Ramos, Maximiano; Al-Jumaily, Ahmed M; Pethaiah, Sethu Sundar

    2017-07-14

    The conventional gas diffusion layer (GDL) of polymer electrolyte membrane (PEM) fuel cells incorporates a carbon-based substrate, which suffers from electrochemical oxidation as well as mechanical degradation, resulting in reduced durability and performance. In addition, its production involves a complex manufacturing process. The proposed technique aims to resolve both these issues by an advanced 3D printing technique, namely selective laser sintering (SLS). In the proposed work, polyamide (PA) is used as the base powder and titanium metal powder is added at an optimised level to enhance the electrical conductivity, thermal, and mechanical properties. The application of selective laser sintering to fabricate a robust gas diffusion substrate for PEM fuel cell applications is quite novel and is attempted here for the first time.

  3. Determination of capsaicinoids in topical cream by liquid-liquid extraction and liquid chromatography.

    PubMed

    Kaale, Eliangiringa; Van Schepdael, Ann; Roets, Eugène; Hoogmartens, Jos

    2002-11-07

    A reversed-phase liquid chromatography (LC) method has been developed, optimised and validated for the separation and quantitation of capsaicin (CP) and dihydrocapsaicin (DHCP) in a topical cream formulation. Sample preparation involves liquid-liquid extraction prior to LC analysis. The method uses a Hypersil C(18) BDS, 5 micrometer, 250x4.6 mm I.D. column maintained at 35 degrees C. The mobile phase comprises methanol, water, acetonitrile (ACN) and acetic acid (47:42:10:1, v/v/v/v) at a flow rate of 1.0 ml/min. Robustness was evaluated by performing a central composite face-centred design (CCF) experiment. The method shows good selectivity, linearity, sensitivity and repeatability. The conditions allow the separation and quantitation of CP and DHCP without interference from the other substances contained in the cream.

  4. The Energy-Efficient Quarry: Towards improved understanding and optimisation of energy use and minimisation of CO2 generation in the aggregates industry.

    NASA Astrophysics Data System (ADS)

    Hill, Ian; White, Toby; Owen, Sarah

    2014-05-01

    Extraction and processing of rock materials to produce aggregates is carried out at some 20,000 quarries across the EU. All stages of the processing and transport of hard and dense materials inevitably consume high levels of energy and have consequent significant carbon footprints. The FP7 project "the Energy Efficient Quarry" (EE-Quarry) has been addressing this problem and has devised strategies, supported by modelling software, to assist the quarrying industry to assess and optimise its energy use, and to minimise its carbon footprint. Aggregate quarries across Europe vary enormously in the scale of the quarrying operations, the nature of the worked mineral, and the processing to produce a final market product. Nevertheless, most quarries involve most or all of a series of essential stages: deposit assessment, drilling and blasting, loading and hauling, and crushing and screening. The process of determining the energy-efficiency of each stage is complex, but is broadly understood in principle and there are numerous sources of information and guidance available in the literature and on-line. More complex still is the interaction between each of these stages. For example, using a little more energy in blasting to increase fragmentation may save much greater energy in later crushing and screening, but also generate more fines material which is discarded as waste, and the embedded energy in this material is lost. Thus the calculation of the embedded energy in the waste material becomes an input to the determination of the blasting strategy. Such feedback loops abound in the overall quarry optimisation. The project has involved research and demonstration operations at a number of quarries distributed across Europe, carried out by all partners in the EE-Quarry project working in collaboration with many of the major quarrying companies operating in the EU.
The EE-Quarry project is developing a sophisticated modelling tool, the "EE-Quarry Model" available to the quarrying industry on a web-based platform. This tool guides quarry managers and operators through the complex, multi-layered, iterative, process of assessing the energy efficiency of their own quarry operation. They are able to evaluate the optimisation of the energy-efficiency of the overall quarry through examining both the individual stages of processing, and the interactions between them. The project is also developing on-line distance learning modules designed for Continuous Professional Development (CPD) activities for staff across the quarrying industry in the EU and beyond. The presentation will describe development of the model, and the format and scope of the resulting software tool and its user-support available to the quarrying industry.

  5. Optimisation of the hybrid renewable energy system by HOMER, PSO and CPSO for the study area

    NASA Astrophysics Data System (ADS)

    Khare, Vikas; Nema, Savita; Baredar, Prashant

    2017-04-01

    This study is based on simulation and optimisation of the renewable energy system of the police control room at Sagar in central India. To analyse this hybrid system, the meteorological data of solar insolation and hourly wind speeds of Sagar in central India (longitude 78°45‧ and latitude 23°50‧) have been considered. The pattern of load consumption is studied and suitably modelled for optimisation of the hybrid energy system using HOMER software. The results are compared with those of the particle swarm optimisation and the chaotic particle swarm optimisation algorithms. The use of these two algorithms to optimise the hybrid system leads to a higher quality result with faster convergence. Based on the optimisation result, it has been found that replacing conventional energy sources by the solar-wind hybrid renewable energy system will be a feasible solution for the distribution of electric power as a stand-alone application at the police control room. This system is more environmentally friendly than the conventional diesel generator. Fuel costs are reduced by approximately 70-80% relative to the conventional diesel generator.
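
    A plain particle swarm optimiser (without the chaotic variant's map-based perturbations, and independent of HOMER) can be sketched as follows; the parameter values are conventional defaults, not those used in the study.

```python
import random

def pso(f, bounds, n_particles=20, iters=80, w=0.7, c1=1.5, c2=1.5, seed=3):
    """Plain particle swarm optimisation minimising f over box bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy use: minimise the 2-D sphere function.
best, best_val = pso(lambda p: sum(x * x for x in p), [(-5, 5), (-5, 5)])
```

In a hybrid-energy sizing problem, f would be the system's cost-of-energy model and the decision variables the component capacities; the swarm mechanics are unchanged.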

  6. Robust artifactual independent component classification for BCI practitioners.

    PubMed

    Winkler, Irene; Brandl, Stephanie; Horn, Franziska; Waldburger, Eric; Allefeld, Carsten; Tangermann, Michael

    2014-06-01

    EEG artifacts of non-neural origin can be separated from neural signals by independent component analysis (ICA). It is unclear (1) how robustly recently proposed artifact classifiers transfer to novel users, novel paradigms or changed electrode setups, and (2) how artifact cleaning by a machine learning classifier impacts the performance of brain-computer interfaces (BCIs). Addressing (1), the robustness of different strategies with respect to the transfer between paradigms and electrode setups of a recently proposed classifier is investigated on offline data from 35 users and 3 EEG paradigms, which contain 6303 expert-labeled components from two ICA and preprocessing variants. Addressing (2), the effect of artifact removal on single-trial BCI classification is estimated on BCI trials from 101 users and 3 paradigms. We show that (1) the proposed artifact classifier generalizes to completely different EEG paradigms. To obtain similar results under massively reduced electrode setups, a proposed novel strategy improves artifact classification. Addressing (2), ICA artifact cleaning has little influence on average BCI performance when analyzed by state-of-the-art BCI methods. When slow motor-related features are exploited, performance varies strongly between individuals, as artifacts may obstruct relevant neural activity or are inadvertently used for BCI control. Robustness of the proposed strategies can be reproduced by EEG practitioners as the method is made available as an EEGLAB plug-in.

  7. Demographics of the spawning aggregations of four catostomid species in the Savannah River, South Carolina and Georgia, USA

    USGS Publications Warehouse

    Grabowski, T.B.; Ratterman, N.L.; Isely, J.J.

    2008-01-01

    Differences in the life history strategies employed by otherwise ecologically similar species of a fish assemblage may be an important factor in the coexistence of these species and are an essential consideration in the conservation and management of these assemblages. We collected scales to determine the age and growth of four species of the catostomid assemblage (northern hogsucker Hypentelium nigricans, spotted sucker Minytrema melanops, notchlip redhorse Moxostoma collapsum and robust redhorse Moxostoma robustum) of the Savannah River, Georgia-South Carolina, in spring 2004 and 2005. Robust redhorse was the largest species, reaching sexual maturity at an older age and growing faster as a juvenile than the other species. Spotted sucker did not achieve the same size as robust redhorse, but reached sexual maturity at younger ages. Notchlip redhorse was intermediate between these two species in age at maturity and size. Northern hogsucker was the smallest species of the assemblage and reached sexual maturity at age three. Both robust redhorse and spotted sucker were sexually dimorphic in size-at-age. The range of life history strategies employed by Savannah River catostomids encompasses the range exhibited within the family as a whole. © 2007 Blackwell Munksgaard.

  8. Optimisation of nano-silica modified self-compacting high-volume fly ash mortar

    NASA Astrophysics Data System (ADS)

    Achara, Bitrus Emmanuel; Mohammed, Bashar S.; Fadhil Nuruddin, Muhd

    2017-05-01

    The effects of nano-silica content and superplasticizer (SP) dosage on the compressive strength, porosity and slump flow of high-volume fly ash self-consolidating mortar were investigated. A multiobjective optimisation technique using Design-Expert software was applied to obtain a solution based on a desirability function that simultaneously optimises the variables and the responses. A desirability value of 0.811 gave the optimised solution. The experimental and predicted results showed minimal errors in all measured responses.

  9. Ranking of stopping criteria for log domain diffeomorphic demons application in clinical radiation therapy.

    PubMed

    Peroni, M; Golland, P; Sharp, G C; Baroni, G

    2011-01-01

    Deformable Image Registration is a complex optimization algorithm with the goal of modeling a non-rigid transformation between two images. A crucial issue in this field is guaranteeing the user a robust but computationally reasonable algorithm. We rank the performances of four stopping criteria and six stopping value computation strategies for a log domain deformable registration. The stopping criteria we test are: (a) velocity field update magnitude, (b) vector field Jacobian, (c) mean squared error, and (d) harmonic energy. Experiments demonstrate that comparing the metric value over the last three iterations with the metric minimum of between four and six previous iterations is a robust and appropriate strategy. The harmonic energy and vector field update magnitude metrics give the best results in terms of robustness and speed of convergence.
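    The winning stopping rule (compare the metric over the last three iterations against the minimum of the preceding four to six) can be sketched as below, assuming a lower-is-better metric such as mean squared error; `window=6` here covers the upper end of the "four to six" range.

```python
def should_stop(history, recent=3, window=6):
    """Stop when none of the last `recent` metric values improves on the
    minimum over the `window` iterations that precede them."""
    if len(history) < recent + window:
        return False
    prior_min = min(history[-(recent + window):-recent])
    return all(v >= prior_min for v in history[-recent:])

# Still improving: keep iterating.  Plateaued: stop.
improving = [10, 9, 8, 7, 6, 5, 4, 3, 2]
plateau = [10, 9, 8, 5, 5, 5, 5, 5, 5]
```

    The same predicate works for any of the four criteria in the study (update magnitude, Jacobian-based measures, MSE, harmonic energy) as long as the history stores a scalar per iteration.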

  10. The effects of harvest on waterfowl populations

    USGS Publications Warehouse

    Cooch, Evan G.; Guillemain, Matthieu; Boomer, G Scott; Lebreton, Jean-Dominique; Nichols, James D.

    2014-01-01

    Overall, there is substantial uncertainty about system dynamics, about the impacts of potential management and conservation decisions on those dynamics, and how to optimise management decisions in the presence of such uncertainties. Such relationships are unlikely to be stationary over space or time, and selective harvest of some individuals can potentially alter life history allocation of resources over time – both of which will potentially influence optimal harvest strategies. These sources of variation and uncertainty argue for the use of adaptive approaches to waterfowl harvest management.

  11. The thinking doctor: clinical decision making in contemporary medicine.

    PubMed

    Trimble, Michael; Hamilton, Paul

    2016-08-01

    Diagnostic errors are responsible for a significant number of adverse events. Logical reasoning and good decision-making skills are key factors in reducing such errors, but little emphasis has traditionally been placed on how these thought processes occur, and how errors could be minimised. In this article, we explore key cognitive ideas that underpin clinical decision making and suggest that by employing some simple strategies, physicians might be better able to understand how they make decisions and how the process might be optimised. © 2016 Royal College of Physicians.

  12. Multiobjective optimisation of bogie suspension to boost speed on curves

    NASA Astrophysics Data System (ADS)

    Milad Mousavi-Bideleh, Seyed; Berbyuk, Viktor

    2016-01-01

    To improve safety and the maximum admissible speed in different operational scenarios, multiobjective optimisation of the bogie suspension components of a one-car railway vehicle model is considered. The vehicle model has 50 degrees of freedom and is developed in the multibody dynamics software SIMPACK. Track shift force, running stability, and risk of derailment are selected as safety objective functions. The improved maximum admissible speeds of the vehicle on curves are determined based on track plane accelerations up to 1.5 m/s². To reduce the number of design parameters for optimisation and improve the computational efficiency, a global sensitivity analysis is accomplished using the multiplicative dimensional reduction method (M-DRM). A multistep optimisation routine based on a genetic algorithm (GA) and MATLAB/SIMPACK co-simulation is executed at three levels. The conventional secondary and primary suspension components of the bogie are chosen as the design parameters in the first two steps, respectively. The last step focuses on semi-active suspension: the input electrical current to magnetorheological yaw dampers is optimised to guarantee an appropriate safety level. Semi-active controllers are also applied and their effects on bogie dynamics are explored. The safety Pareto-optimised results are compared with those associated with in-service values. The global sensitivity analysis and multistep approach significantly reduced the number of design parameters and improved the computational efficiency of the optimisation. Furthermore, using the optimised values of the design parameters makes it possible to run the vehicle up to 13% faster on curves while a satisfactory safety level is guaranteed. The results obtained can be used in Pareto optimisation and active bogie suspension design problems.

  13. Performance and robustness of hybrid model predictive control for controllable dampers in building models

    NASA Astrophysics Data System (ADS)

    Johnson, Erik A.; Elhaddad, Wael M.; Wojtkiewicz, Steven F.

    2016-04-01

    A variety of strategies have been developed over the past few decades to determine controllable damping device forces to mitigate the response of structures and mechanical systems to natural hazards and other excitations. These "smart" damping devices produce forces through passive means but have properties that can be controlled in real time, based on sensor measurements of response across the structure, to dramatically reduce structural motion by exploiting more than the local "information" that is available to purely passive devices. A common strategy is to design optimal damping forces using active control approaches and then try to reproduce those forces with the smart damper. However, these design forces, for some structures and performance objectives, may achieve high performance by selectively adding energy, which cannot be replicated by a controllable damping device, causing the smart damper performance to fall far short of what an active system would provide. The authors have recently demonstrated that a model predictive control strategy using hybrid system models, which utilize both continuous and binary states (the latter to capture the switching behavior between dissipative and non-dissipative forces), can provide reductions in structural response on the order of 50% relative to the conventional clipped-optimal design strategy. This paper explores the robustness of this newly proposed control strategy through evaluating controllable damper performance when the structure model differs from the nominal one used to design the damping strategy. Results from the application to a two-degree-of-freedom structure model confirm the robustness of the proposed strategy.

  14. Demystifying the Search Button

    PubMed Central

    McKeever, Liam; Nguyen, Van; Peterson, Sarah J.; Gomez-Perez, Sandra

    2015-01-01

    A thorough review of the literature is the basis of all research and evidence-based practice. A gold-standard efficient and exhaustive search strategy is needed to ensure all relevant citations have been captured and that the search performed is reproducible. The PubMed database comprises both the MEDLINE and non-MEDLINE databases. MEDLINE-based search strategies are robust but capture only 89% of the total available citations in PubMed. The remaining 11% include the most recent and possibly relevant citations but are only searchable through less efficient techniques. An effective search strategy must employ both the MEDLINE and the non-MEDLINE portion of PubMed to ensure all studies have been identified. The robust MEDLINE search strategies are used for the MEDLINE portion of the search. Usage of the less robust strategies is then efficiently confined to search only the remaining 11% of PubMed citations that have not been indexed for MEDLINE. The current article offers step-by-step instructions for building such a search exploring methods for the discovery of medical subject heading (MeSH) terms to search MEDLINE, text-based methods for exploring the non-MEDLINE database, information on the limitations of convenience algorithms such as the “related citations feature,” the strengths and pitfalls associated with commonly used filters, the proper usage of Boolean operators to organize a master search strategy, and instructions for automating that search through “MyNCBI” to receive search query updates by email as new citations become available. PMID:26129895
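    The two-arm master search the article describes can be expressed with PubMed's subset filters, where `medline[sb]` restricts a MeSH-based arm to MEDLINE-indexed records and `pubmednotmedline[sb]` confines a text-word arm to the remaining citations. The topic (sarcopenia) and the specific field tags below are a hypothetical illustration of the structure, not a vetted clinical search.

```python
# MeSH arm for the MEDLINE-indexed portion of PubMed.
mesh_arm = '"Sarcopenia"[Mesh] AND medline[sb]'

# Text-word arm confined to records not (yet) indexed for MEDLINE.
text_arm = '(sarcopenia[tiab] OR "muscle wasting"[tiab]) AND pubmednotmedline[sb]'

# Master strategy: the union of the two arms, joined with Boolean OR.
master = f"({mesh_arm}) OR ({text_arm})"
```

    The resulting `master` string can be pasted into the PubMed search box and, as the article suggests, saved through MyNCBI to receive e-mail updates as new citations appear.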

  15. Optimisation of insect cell growth in deep-well blocks: development of a high-throughput insect cell expression screen.

    PubMed

    Bahia, Daljit; Cheung, Robert; Buchs, Mirjam; Geisse, Sabine; Hunt, Ian

    2005-01-01

    This report describes a method to culture insect cells in 24 deep-well blocks for routine small-scale optimisation of baculovirus-mediated protein expression experiments. Miniaturisation of this process provides the necessary reduction in resource allocation, reagents and labour to allow extensive and rapid optimisation of expression conditions, with a concomitant reduction in lead time before commencement of large-scale bioreactor experiments. This greatly simplifies the optimisation process and allows the use of liquid-handling robotics in much of the initial optimisation stages, thereby greatly increasing the throughput of the laboratory. We present several examples of the use of deep-well block expression studies in the optimisation of therapeutically relevant protein targets. We also discuss how the enhanced throughput offered by this approach can be adapted to robotic handling systems and the implications this has for the capacity to conduct multi-parallel protein expression studies.

  16. Mutual information-based LPI optimisation for radar network

    NASA Astrophysics Data System (ADS)

    Shi, Chenguang; Zhou, Jianjiang; Wang, Fei; Chen, Jun

    2015-07-01

    A radar network can offer significant performance improvement for target detection and information extraction by employing spatial diversity. For a fixed number of radars, the achievable mutual information (MI) for estimating the target parameters may exceed a predefined threshold under full power transmission. In this paper, an effective low probability of intercept (LPI) optimisation algorithm is presented to improve the LPI performance of a radar network. Based on the radar network system model, we first adopt the Schleher intercept factor of the network as the optimisation metric for LPI performance. Then, a novel LPI optimisation algorithm is presented in which, for a predefined MI threshold, the Schleher intercept factor is minimised by optimising the transmission power allocation among the radars in the network, so that enhanced LPI performance is achieved. A genetic algorithm based on nonlinear programming (GA-NP) is employed to solve the resulting nonconvex and nonlinear optimisation problem. Simulations demonstrate that the proposed algorithm is valuable and effective in improving the LPI performance of the radar network.

  17. A novel global Harmony Search method based on Ant Colony Optimisation algorithm

    NASA Astrophysics Data System (ADS)

    Fouad, Allouani; Boukhetala, Djamel; Boudjema, Fares; Zenger, Kai; Gao, Xiao-Zhi

    2016-03-01

    The Global-best Harmony Search (GHS) is a stochastic optimisation algorithm recently developed, which hybridises the Harmony Search (HS) method with the concept of swarm intelligence in the particle swarm optimisation (PSO) to enhance its performance. In this article, a new optimisation algorithm called GHSACO is developed by incorporating the GHS with the Ant Colony Optimisation algorithm (ACO). Our method introduces a novel improvisation process, which is different from that of the GHS in the following aspects. (i) A modified harmony memory (HM) representation and conception. (ii) The use of a global random switching mechanism to monitor the choice between the ACO and GHS. (iii) An additional memory consideration selection rule using the ACO random proportional transition rule with a pheromone trail update mechanism. The proposed GHSACO algorithm has been applied to various benchmark functions and constrained optimisation problems. Simulation results demonstrate that it can find significantly better solutions when compared with the original HS and some of its variants.
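    For orientation, plain Harmony Search (the base algorithm that GHS and GHSACO extend) looks like the sketch below, applied to the sphere benchmark. Parameter values are typical textbook choices, not the GHSACO settings, and the global switching and pheromone mechanisms of GHSACO are omitted.

```python
import random

def harmony_search(f, bounds, hm_size=10, hmcr=0.9, par=0.3, bw=0.05, iters=500):
    """Plain Harmony Search: improvise new harmonies from memory (rate `hmcr`),
    pitch-adjust them (rate `par`, bandwidth `bw`), replace the worst if better."""
    random.seed(3)
    dim = len(bounds)
    hm = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(hm_size)]
    fit = [f(h) for h in hm]
    for _ in range(iters):
        new = []
        for d in range(dim):
            lo, hi = bounds[d]
            if random.random() < hmcr:              # memory consideration
                x = random.choice(hm)[d]
                if random.random() < par:           # pitch adjustment
                    x += random.uniform(-1, 1) * bw * (hi - lo)
            else:                                   # random selection
                x = random.uniform(lo, hi)
            new.append(min(max(x, lo), hi))
        f_new = f(new)
        worst = max(range(hm_size), key=lambda i: fit[i])
        if f_new < fit[worst]:                      # survival of the fittest
            hm[worst], fit[worst] = new, f_new
    best = min(range(hm_size), key=lambda i: fit[i])
    return hm[best], fit[best]

# Sphere benchmark: global minimum 0 at the origin.
x_best, f_best = harmony_search(lambda v: sum(t * t for t in v), [(-5, 5)] * 3)
```

    GHS replaces the per-dimension memory draw with the global-best harmony's value, and GHSACO further interleaves ACO's pheromone-weighted selection; both slot into the improvisation loop above.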

  18. Field spectroscopy sampling strategies for improved measurement of Earth surface reflectance

    NASA Astrophysics Data System (ADS)

    Mac Arthur, A.; Alonso, L.; Malthus, T. J.; Moreno, J. F.

    2013-12-01

    Over the last two decades extensive networks of research sites have been established to measure the flux of carbon compounds and water vapour between the Earth's surface and the atmosphere using eddy covariance (EC) techniques. However, contributing Earth surface components cannot be determined and, as the 'footprints' are spatially constrained, these measurements cannot be extrapolated to regional cover using this technique. At many of these EC sites researchers have been integrating spectral measurements with EC and ancillary data to better understand light use efficiency and carbon dioxide flux. These spectroscopic measurements could also be used to assess contributing components and provide support for imaging spectroscopy from airborne or satellite platforms, which can provide unconstrained spatial cover. Furthermore, there is increasing interest in 'smart' database and information retrieval systems, such as those proposed by EcoSIS and OPTIMISE, to store, analyse, QA and merge spectral and biophysical measurements and provide information to end users. However, as Earth surfaces are spectrally heterogeneous and imaging and field spectrometers sample different spatial extents, appropriate field sampling strategies need to be adopted. To sample Earth surfaces, spectroscopists adopt single, random, regular-grid, transect or 'swiping' point sampling strategies, although little comparative work has been carried out to determine the most appropriate approach; the work by Goetz (2012) is a limited exception. Mac Arthur et al (2012) demonstrated that, for two full-wavelength (400 nm to 2,500 nm) field spectroradiometers, the measurement area sampled is defined by each spectroradiometer/fore optic system's directional response function (DRF) rather than the field-of-view (FOV) specified by instrument manufacturers.
Mac Arthur et al (2012) also demonstrated that each reflecting element within the sampled area is not weighted equally in the integrated measurement recorded. There were non-uniformities of spectral response, with the spectral 'weighting' per wavelength interval being positionally dependent and unique to each spectroradiometer/fore optic system investigated. However, Mac Arthur et al (2012) did not advise on how to compensate for these systematic errors or on appropriate sampling strategies. The work reported here provides the first systematic study of the effect of field spectroscopy sampling strategies for a range of Earth surface types. Synthetic hyperspectral data cubes for each surface type were generated and convolved with a range of the spectrometer/fore optic directional response functions generated by Mac Arthur et al (2013) to simulate spectroscopic measurements of Earth surfaces. This has enabled different field sampling strategies to be directly compared, their suitability for each measurement purpose and surface type to be assessed, and robust field spectroscopy sampling recommendations to be made. This will be of particular interest to the carbon and water vapour flux communities, will assist the development of sampling strategies for field spectroscopy from rotary-wing Unmanned Aerial Vehicles (aiding the acquisition of measurements in the spatial domain), and will generally further the use of field spectroscopy for quantitative Earth observation.

  19. Robust Derivation of Risk Reduction Strategies

    NASA Technical Reports Server (NTRS)

    Richardson, Julian; Port, Daniel; Feather, Martin

    2007-01-01

    Effective risk reduction strategies can be derived mechanically given sufficient characterization of the risks present in the system and the effectiveness of available risk reduction techniques. In this paper, we address an important question: can we reliably expect mechanically derived risk reduction strategies to be better than fixed or hand-selected risk reduction strategies, given that the quantitative assessment of risks and risk reduction techniques upon which mechanical derivation is based is difficult and likely to be inaccurate? We consider this question relative to two methods for deriving effective risk reduction strategies: the Strategic Method defined by Kazman, Port et al [Port et al, 2005], and the Defect Detection and Prevention (DDP) tool [Feather & Cornford, 2003]. We performed a number of sensitivity experiments to evaluate how inaccurate knowledge of risk and risk reduction techniques affects the performance of the strategies computed by the Strategic Method compared to a variety of alternative strategies. The experimental results indicate that strategies computed by the Strategic Method were significantly more effective than the alternative risk reduction strategies, even when knowledge of risk and risk reduction techniques was very inaccurate. The robustness of the Strategic Method suggests that its use should be considered in a wide range of projects.

  20. Real time control of a combined sewer system using radar-measured precipitation--results of the pilot study.

    PubMed

    Petruck, A; Holtmeier, E; Redder, A; Teichgräber, B

    2003-01-01

    Emschergenossenschaft and Lippeverband have developed a method that uses radar-measured precipitation as input for real-time control of a combined sewer system containing several overflow structures. Two real-time control strategies have been developed and tested: one is solely volume-based, the other is volume- and pollution-based. The system has been implemented in a pilot study in Gelsenkirchen, Germany. During the project the system was optimised and is now in constant operation. It was found that the volume of combined sewage overflow could be reduced by 5 per cent per year. This was also found in simulations carried out in similar catchment areas. Most of the potential for improvement can already be achieved by local pollution-based control strategies.

  1. A robust preference for cheap-and-easy strategies over reliable strategies when verifying personal memories.

    PubMed

    Nash, Robert A; Wade, Kimberley A; Garry, Maryanne; Adelman, James S

    2017-08-01

    People depend on various sources of information when trying to verify their autobiographical memories. Yet recent research shows that people prefer to use cheap-and-easy verification strategies, even when these strategies are not reliable. We examined the robustness of this cheap strategy bias, with scenarios designed to encourage greater emphasis on source reliability. In three experiments, subjects described real (Experiments 1 and 2) or hypothetical (Experiment 3) autobiographical events, and proposed strategies they might use to verify their memories of those events. Subjects also rated the reliability, cost, and the likelihood that they would use each strategy. In line with previous work, we found that the preference for cheap information held when people described how they would verify childhood or recent memories (Experiment 1), personally important or trivial memories (Experiment 2), and even when the consequences of relying on incorrect information could be significant (Experiment 3). Taken together, our findings fit with an account of source monitoring in which the tendency to trust one's own autobiographical memories can discourage people from systematically testing or accepting strong disconfirmatory evidence.

  2. Robust Control Design for Systems With Probabilistic Uncertainty

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.

    2005-01-01

    This paper presents a reliability- and robustness-based formulation for robust control synthesis for systems with probabilistic uncertainty. In a reliability-based formulation, the probability of violating design requirements prescribed by inequality constraints is minimized. In a robustness-based formulation, a metric which measures the tendency of a random variable/process to cluster close to a target scalar/function is minimized. A multi-objective optimization procedure, which combines stability and performance requirements in the time and frequency domains, is used to search for robustly optimal compensators. Some of the fundamental differences between the proposed strategy and conventional robust control methods are: (i) unnecessary conservatism is eliminated since there is no need for convex supports, (ii) the most likely plants are favored during synthesis, allowing for probabilistic robust optimality, (iii) the tradeoff between robust stability and robust performance can be explored numerically, (iv) the uncertainty set is closely related to parameters with clear physical meaning, and (v) compensators with improved robust characteristics for a given control structure can be synthesized.

  3. On the role of budget sufficiency, cost efficiency, and uncertainty in species management

    USGS Publications Warehouse

    van der Burg, Max Post; Bly, Bartholomew B.; Vercauteren, Tammy; Grand, James B.; Tyre, Andrew J.

    2014-01-01

    Many conservation planning frameworks rely on the assumption that one should prioritize locations for management actions based on the highest predicted conservation value (i.e., abundance, occupancy). This strategy may underperform relative to the expected outcome if one is working with a limited budget or the predicted responses are uncertain. Yet, cost and tolerance to uncertainty rarely become part of species management plans. We used field data and predictive models to simulate a decision problem involving western burrowing owls (Athene cunicularia hypugaea) using prairie dog colonies (Cynomys ludovicianus) in western Nebraska. We considered 2 species management strategies: one maximized abundance and the other maximized abundance in a cost-efficient way. We then used heuristic decision algorithms to compare the 2 strategies in terms of how well they met a hypothetical conservation objective. Finally, we performed an info-gap decision analysis to determine how these strategies performed under different budget constraints and uncertainty about owl response. Our results suggested that when budgets were sufficient to manage all sites, the maximizing strategy was optimal and suggested investing more in expensive actions. This pattern persisted for restricted budgets up to approximately 50% of the sufficient budget. Below this budget, the cost-efficient strategy was optimal and suggested investing in cheaper actions. When uncertainty in the expected responses was introduced, the strategy that maximized abundance remained robust under a sufficient budget. Reducing the budget induced a slight trade-off between expected performance and robustness, which suggested that the most robust strategy depended both on one's budget and tolerance to uncertainty. Our results suggest that wildlife managers should explicitly account for budget limitations and be realistic about their expected levels of performance.
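    The budget effect the authors describe can be reproduced with a toy greedy allocation. The site benefits and costs below are invented; the point is only that ranking sites purely by predicted benefit does well when the budget covers everything, while ranking by benefit per unit cost wins under a tight budget.

```python
# Hypothetical sites: (expected owl response, management cost) per site.
sites = [(9.0, 10.0), (8.0, 8.0), (5.0, 2.0), (4.0, 2.0), (3.0, 1.0), (2.0, 1.0)]

def plan(budget, key):
    """Greedily fund sites in `key` order while the budget allows."""
    spent = gain = 0.0
    for i in sorted(range(len(sites)), key=key, reverse=True):
        benefit, cost = sites[i]
        if spent + cost <= budget:
            spent += cost
            gain += benefit
    return gain

def max_abundance(budget):            # rank purely by predicted benefit
    return plan(budget, key=lambda i: sites[i][0])

def cost_efficient(budget):           # rank by benefit per unit cost
    return plan(budget, key=lambda i: sites[i][0] / sites[i][1])
```

    With a budget of 24 (enough for every site) the two strategies tie, but with a budget of 10 the maximizing strategy sinks everything into one expensive site while the cost-efficient ranking funds four cheap sites for a larger total response, mirroring the roughly-50%-of-budget crossover in the study.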

  4. Visual grading characteristics and ordinal regression analysis during optimisation of CT head examinations.

    PubMed

    Zarb, Francis; McEntee, Mark F; Rainford, Louise

    2015-06-01

    To evaluate visual grading characteristics (VGC) and ordinal regression analysis during head CT optimisation as a potential alternative to visual grading assessment (VGA), traditionally employed to score anatomical visualisation. Patient images (n = 66) were obtained using current and optimised imaging protocols from two CT suites: a 16-slice scanner at the national Maltese centre for trauma and a 64-slice scanner in a private centre. Local resident radiologists (n = 6) performed VGA followed by VGC and ordinal regression analysis. VGC alone indicated that the optimised protocols had image quality similar to that of the current protocols. Ordinal logistic regression analysis provided an in-depth evaluation, criterion by criterion, allowing selective implementation of the protocols. The local radiology review panel supported the implementation of optimised protocols for brain CT examinations (including trauma) in one centre, achieving radiation dose reductions ranging from 24% to 36%. In the second centre a 29% reduction in radiation dose was achieved for follow-up cases. The combined use of VGC and ordinal logistic regression analysis led to clinical decisions being taken on the implementation of the optimised protocols. This improved method of image quality analysis provided the evidence to support imaging protocol optimisation, resulting in significant radiation dose savings. • There is a need for scientifically based image quality evaluation during CT optimisation. • VGC and ordinal regression analysis in combination led to better informed clinical decisions. • VGC and ordinal regression analysis led to dose reductions without compromising diagnostic efficacy.

  5. The Effect of Self-Explaining on Robust Learning

    ERIC Educational Resources Information Center

    Hausmann, Robert G. M.; VanLehn, Kurt

    2010-01-01

    Self-explaining is a domain-independent learning strategy that generally leads to a robust understanding of the domain material. However, there are two potential explanations for its effectiveness. First, self-explanation generates additional "content" that does not exist in the instructional materials. Second, when compared to…

  6. Mechanically robust and transparent N-halamine grafted PVA-co-PE films with renewable antimicrobial activity

    USDA-ARS?s Scientific Manuscript database

    Antimicrobial polymeric films that are both mechanically robust and renewable in antimicrobial function would have broad technological implications for areas ranging from medical safety and bioengineering to the food industry; however, creating such materials has proven extremely challenging. Here, a novel strategy is ...

  7. Development and single-laboratory validation of a UHPLC-MS/MS method for quantitation of microcystins and nodularin in natural water, cyanobacteria, shellfish and algal supplement tablet powders.

    PubMed

    Turner, Andrew D; Waack, Julia; Lewis, Adam; Edwards, Christine; Lawton, Linda

    2018-02-01

    A simple, rapid UHPLC-MS/MS method has been developed and optimised for the quantitation of microcystins and nodularin in a wide variety of sample matrices. Microcystin analogues targeted were MC-LR, MC-RR, MC-LA, MC-LY, MC-LF, MC-LW, MC-YR, MC-WR, [Asp3] MC-LR, [Dha7] MC-LR, MC-HilR and MC-HtyR. Optimisation studies were conducted to develop a simple, quick and efficient extraction protocol without the need for complex pre-analysis concentration procedures, together with a rapid, sub-5-min chromatographic separation of toxins in shellfish and algal supplement tablet powders, as well as water and cyanobacterial bloom samples. Validation studies were undertaken on each matrix-analyte combination for the full set of method performance characteristics, following international guidelines. The method was found to be specific and linear over the full calibration range. Method sensitivity, in terms of limits of detection, quantitation and reporting, was found to be significantly improved in comparison to LC-UV methods and applicable to the analysis of each of the four matrices. Overall, acceptable recoveries were determined for each of the matrices studied, with associated precision and within-laboratory reproducibility well within expected guidance limits. Results from the formalised ruggedness analysis of all available cyanotoxins showed that the method was robust for all parameters investigated. The results presented here show that the optimised LC-MS/MS method for cyanotoxins is fit for the purpose of detection and quantitation of a range of microcystins and nodularin in shellfish, algal supplement tablet powder, water and cyanobacteria. The method provides a valuable early-warning tool for the rapid, routine extraction and analysis of natural waters, cyanobacterial blooms, algal powders, food supplements and shellfish tissues, enabling monitoring labs to supplement traditional microscopy techniques and report toxicity results within a short timeframe of sample receipt.
The new method, now accredited to the ISO 17025 standard, is simple, quick, applicable to multiple matrices and highly suitable for use as a routine, high-throughput, fast-turnaround regulatory monitoring tool. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Study protocol for the optimisation, feasibility testing and pilot cluster randomised trial of Positive Choices: a school-based social marketing intervention to promote sexual health, prevent unintended teenage pregnancies and address health inequalities in England.

    PubMed

    Ponsford, Ruth; Allen, Elizabeth; Campbell, Rona; Elbourne, Diana; Hadley, Alison; Lohan, Maria; Melendez-Torres, G J; Mercer, Catherine H; Morris, Steve; Young, Honor; Bonell, Chris

    2018-01-01

    Since the introduction of the Teenage Pregnancy Strategy (TPS), England's under-18 conception rate has fallen by 55%, but a continued focus on prevention is needed to maintain and accelerate progress. The teenage birth rate remains higher in the UK than in comparable Western European countries. Previous trials indicate that school-based social marketing interventions are a promising approach to addressing teenage pregnancy and improving sexual health. Such interventions are yet to be trialled in the UK. This study aims to optimise and establish the feasibility and acceptability of one such intervention: Positive Choices. Design: optimisation, feasibility testing and pilot cluster randomised trial. Interventions: the Positive Choices intervention comprises a student needs survey, a student/staff led School Health Promotion Council (SHPC), a classroom curriculum for year nine students covering social and emotional skills and sex education, student-led social marketing activities, parent information and a review of school sexual health services. Systematic optimisation of Positive Choices will be carried out with the National Children's Bureau Sex Education Forum (NCB SEF), one state secondary school in England and other youth and policy stakeholders. Feasibility testing will involve the same state secondary school and will assess progression criteria for advancing to the pilot cluster RCT. The pilot cluster RCT with integral process evaluation will involve six different state secondary schools (four intervention and two control) and will assess the feasibility and utility of progressing to a full effectiveness trial. The following outcome measures will be trialled as part of the pilot: self-reported pregnancy and unintended pregnancy (initiation of pregnancy for boys) and sexually transmitted infections; age of sexual debut, number of sexual partners, use of contraception at first and last sex, and non-volitional sex; and educational attainment. The feasibility of linking administrative data on births and terminations to self-report survey data to measure our primary outcome (unintended teenage pregnancy) will also be tested. This will be the first UK-based pilot trial of a school-wide social marketing intervention to reduce unintended teenage pregnancy and improve sexual health. If this study indicates feasibility and acceptability of the optimised Positive Choices intervention in English secondary schools, plans will be initiated for a phase III trial and economic evaluation of the intervention. ISRCTN registry (ISCTN12524938, registered 03/07/2017).

  9. Latin American dose survey results in mammography studies under IAEA programme: radiological protection of patients in medical exposures (TSA3).

    PubMed

    Mora, Patricia; Blanco, Susana; Khoury, Helen; Leyton, Fernando; Cárdenas, Juan; Defaz, María Yolanda; Garay, Fernando; Telón, Flaviano; Aguilar, Juan Garcia; Roas, Norma; Gamarra, Mirtha; Blanco, Daniel; Quintero, Ana Rosa; Nader, Alejandro

    2015-03-01

    Latin American countries (Argentina, Brazil, Chile, Costa Rica, Cuba, Ecuador, El Salvador, Guatemala, Mexico, Nicaragua, Paraguay, Uruguay and Venezuela) working under the International Atomic Energy Agency (IAEA) Technical Cooperation Programme TSA3, Radiological Protection of Patients in Medical Exposures, have joined efforts in the optimisation of radiation protection in mammography practice. Through surveys of patient doses, the region has a unique database of diagnostic reference levels for analogue and digital equipment that will direct future optimisation activities towards the early detection of breast cancer among asymptomatic women. During RLA9/057 (2007-09), 24 institutions with analogue equipment participated in a dose survey. Regional training on methodology and measurement equipment was provided in May 2007. The mean glandular dose (DG) was estimated using the incident air kerma and the relevant conversion coefficients for both the craniocaudal (CC) and mediolateral oblique (MLO) projections. For Phase 2, RLA9/067 (2010-11), it was decided to also include digital systems in order to assess their impact on future dose optimisation activities. Any new country that joined the project received training through IAEA expert missions. Twenty-nine new institutions participated (9 with analogue and 20 with digital equipment). A total of 2262 patient doses were collected during this study and, from them, DG (mGy) was estimated for both projections for each institution and country. Regional results (75th percentile in mGy) show for CC and MLO views, respectively: RLA9/057 (analogue) 2.63 and 3.17; RLA9/067: 2.57 and 3.15 (analogue) and 2.69 and 2.90 (digital). Considering only digital equipment, for CC and MLO respectively, computed radiography systems showed 2.59 and 2.78, and direct digital radiography (DDR) systems 2.78 and 3.04.
Based on the IAEA Basic Safety Standard (BSS) reference dose (3 mGy), it can be observed that there is ample room to start optimisation processes in Latin America (LA); several countries, and even particular institutions, have values much higher than 3 mGy. The main issues to address are the lack of well-established quality assurance programmes for mammography, the shortage of medical physicists trained in mammography, the increase in patient doses with the introduction of digital equipment, and the need to raise awareness of radiation risk and optimisation strategies. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
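    The dose quantities in this survey follow a simple pipeline: the mean glandular dose is the incident air kerma scaled by a conversion coefficient, and the diagnostic reference level is the 75th percentile of a dose sample. A minimal sketch (function names and numbers are illustrative, not survey data):

```python
import numpy as np

def mean_glandular_dose(incident_kerma_mgy, conversion_coeff):
    """Estimate mean glandular dose DG (mGy) from incident air kerma.

    DG = K_i * c, where c is the kerma-to-glandular-dose conversion
    coefficient (it depends on beam quality and breast thickness).
    """
    return incident_kerma_mgy * conversion_coeff

def local_drl(doses_mgy):
    """Diagnostic reference level: 75th percentile of the dose sample."""
    return float(np.percentile(doses_mgy, 75))

# Illustrative (not survey) numbers:
doses = [1.8, 2.2, 2.6, 3.1, 2.9, 2.4]
drl = local_drl(doses)
print(drl > 3.0)  # flag an institution above the 3 mGy BSS reference
```

Comparing each institution's 75th percentile against the 3 mGy BSS reference is exactly the screening step that identifies where optimisation should start.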

  10. A Robust, Water-Based, Functional Binder Framework for High-Energy Lithium-Sulfur Batteries.

    PubMed

    Lacey, Matthew J; Österlund, Viking; Bergfelt, Andreas; Jeschull, Fabian; Bowden, Tim; Brandell, Daniel

    2017-07-10

We report here a water-based functional binder framework for lithium-sulfur battery systems, based on the general combination of a polyether and an amide-containing polymer. These binders are applied to positive electrodes optimised towards high-energy electrochemical performance, based only on commercially available materials. Capacities of up to 4 mAh cm-2 and coulombic efficiencies of 97-98% are achievable in electrodes with a 65% total sulfur content and a poly(ethylene oxide):poly(vinylpyrrolidone) (PEO:PVP) binder system. Exchanging either binder component for a different polymer with similar functionality preserves the high capacity and coulombic efficiency. The improvement in coulombic efficiency from the inclusion of the coordinating amide group was also observed in electrodes where pyrrolidone moieties were covalently grafted to the carbon black, indicating the role of this functionality in facilitating polysulfide adsorption on the electrode surface. The mechanical properties of the electrodes appear not to significantly influence sulfur utilisation or coulombic efficiency in the short term, but rather determine the retention of these properties over extended cycling. These results demonstrate the robustness of this very straightforward approach, as well as the considerable scope for designing binder materials with targeted properties. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. 3D Polyaniline Architecture by Concurrent Inorganic and Organic Acid Doping for Superior and Robust High Rate Supercapacitor Performance

    NASA Astrophysics Data System (ADS)

    Gawli, Yogesh; Banerjee, Abhik; Dhakras, Dipti; Deo, Meenal; Bulani, Dinesh; Wadgaonkar, Prakash; Shelke, Manjusha; Ogale, Satishchandra

    2016-02-01

    Good high-rate supercapacitor performance requires fine control of the morphological (surface area and pore size distribution) and electrical properties of the electrode materials. Polyaniline (PANI) is an interesting material in the supercapacitor context because it stores energy Faradaically. However, with conventional inorganic acid (e.g. HCl) doping, the conductivity is high but the morphological features are undesirable. On the other hand, with weak organic acid (e.g. phytic acid) doping, interesting and desirable 3D-connected morphological features are attained but the conductivity is poorer. Here the synergy of the positive quality factors of these two acid doping approaches is realized by concurrent and optimized strong-inorganic (HCl) and weak-organic (phytic) acid doping, resulting in a molecular composite material that renders impressive and robust supercapacitor performance. Thus, a nearly constant high specific capacitance of 350 F g-1 is realized for the optimised case of binary doping over the entire range of 1 A g-1 to 40 A g-1, with stability over 500 cycles at 40 A g-1. Frequency-dependent conductivity measurements show that the optimized co-doped material is more metallic than the separately doped materials. This transport property emanates from the unique 3D single-molecular character of such a system.

  12. 3D Polyaniline Architecture by Concurrent Inorganic and Organic Acid Doping for Superior and Robust High Rate Supercapacitor Performance.

    PubMed

    Gawli, Yogesh; Banerjee, Abhik; Dhakras, Dipti; Deo, Meenal; Bulani, Dinesh; Wadgaonkar, Prakash; Shelke, Manjusha; Ogale, Satishchandra

    2016-02-12

    Good high-rate supercapacitor performance requires fine control of the morphological (surface area and pore size distribution) and electrical properties of the electrode materials. Polyaniline (PANI) is an interesting material in the supercapacitor context because it stores energy Faradaically. However, with conventional inorganic acid (e.g. HCl) doping, the conductivity is high but the morphological features are undesirable. On the other hand, with weak organic acid (e.g. phytic acid) doping, interesting and desirable 3D-connected morphological features are attained but the conductivity is poorer. Here the synergy of the positive quality factors of these two acid doping approaches is realized by concurrent and optimized strong-inorganic (HCl) and weak-organic (phytic) acid doping, resulting in a molecular composite material that renders impressive and robust supercapacitor performance. Thus, a nearly constant high specific capacitance of 350 F g(-1) is realized for the optimised case of binary doping over the entire range of 1 A g(-1) to 40 A g(-1), with stability over 500 cycles at 40 A g(-1). Frequency-dependent conductivity measurements show that the optimized co-doped material is more metallic than the separately doped materials. This transport property emanates from the unique 3D single-molecular character of such a system.

  13. A generic flexible and robust approach for intelligent real-time video-surveillance systems

    NASA Astrophysics Data System (ADS)

    Desurmont, Xavier; Delaigle, Jean-Francois; Bastide, Arnaud; Macq, Benoit

    2004-05-01

    In this article we present a generic, flexible and robust approach for an intelligent real-time video-surveillance system. A previous version of the system was presented in [1]. The goal of these advanced tools is to help operators by detecting events of interest in visual scenes, highlighting alarms and computing statistics. The proposed system is a multi-camera platform able to handle different standards of video inputs (composite, IP, IEEE1394) and can compress (MPEG4), store and display them. This platform also integrates advanced video analysis tools, such as motion detection, segmentation, tracking and interpretation. The design of the architecture is optimised to play back, display, and process video flows in an efficient way for video-surveillance applications. The implementation is distributed on a scalable computer cluster based on Linux and IP networking. It relies on POSIX threads for multitasking scheduling. Data flows are transmitted between the different modules using multicast technology, under the control of a TCP-based command network (e.g. for bandwidth occupation control). We report here some results and show the potential use of such a flexible system in third-generation video-surveillance systems. We illustrate the interest of the system in a real case study of indoor surveillance.
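    At its simplest, the motion-detection module such a platform integrates reduces to thresholded frame differencing; real systems add background modelling and morphological filtering on top. A minimal numpy sketch, with all names and thresholds hypothetical:

```python
import numpy as np

def motion_mask(prev_frame, frame, threshold=25):
    """Flag pixels whose grey-level change exceeds a threshold.

    A minimal stand-in for a motion-detection module; production
    systems use background models and morphological clean-up.
    """
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

def motion_detected(prev_frame, frame, min_pixels=50):
    """Raise an event when enough pixels changed between two frames."""
    return int(motion_mask(prev_frame, frame).sum()) >= min_pixels

rng = np.random.default_rng(0)
static = rng.integers(0, 50, size=(120, 160), dtype=np.uint8)
moved = static.copy()
moved[40:60, 60:90] += 100          # a bright object enters the scene
print(motion_detected(static, moved))
```

Casting to a signed type before differencing avoids uint8 wrap-around, a classic bug in naive implementations.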

  14. Reducing regional vulnerabilities and multi-city robustness conflicts using many-objective optimization under deep uncertainty

    NASA Astrophysics Data System (ADS)

    Reed, Patrick; Trindade, Bernardo; Jonathan, Herman; Harrison, Zeff; Gregory, Characklis

    2016-04-01

    Emerging water scarcity concerns in the southeastern US are associated with several deeply uncertain factors, including rapid population growth, limited coordination across adjacent municipalities and the increasing risk of sustained regional droughts. Managing these uncertainties will require that regional water utilities identify regionally coordinated, scarcity-mitigating strategies that trigger the appropriate actions needed to avoid water shortages and financial instabilities. This research focuses on the Research Triangle area of North Carolina, seeking to engage the water utilities within Raleigh, Durham, Cary and Chapel Hill in cooperative and robust regional water portfolio planning. Prior analysis of this region through the year 2025 has identified significant regional vulnerabilities to volumetric shortfalls and financial losses. Moreover, efforts to maximize the individual robustness of any of these utilities also have the potential to strongly degrade the robustness of the others. This research advances a multi-stakeholder Many-Objective Robust Decision Making (MORDM) framework to better account for deeply uncertain factors when identifying cooperative management strategies. Results show that the sampling of deeply uncertain factors in the computational search phase of MORDM can aid in the discovery of management actions that substantially improve the robustness of individual utilities, as well as of the overall region, to water scarcity. Cooperative water transfers, financial risk mitigation tools, and coordinated regional demand management must be explored jointly to decrease robustness conflicts between the utilities. The insights from this work have general merit for regions where adjacent municipalities can benefit from cooperative regional water portfolio planning.
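    One common way MORDM-style analyses quantify robustness over sampled deeply uncertain factors is a satisficing metric: the fraction of sampled states of the world in which a candidate portfolio meets every performance threshold. A minimal sketch with hypothetical criteria and numbers:

```python
import numpy as np

def satisficing_robustness(performance, thresholds):
    """Fraction of sampled states of the world in which a portfolio
    meets all performance thresholds (a common robustness metric in
    many-objective robust decision making).

    performance: (n_samples, n_criteria) array, higher is better.
    thresholds:  (n_criteria,) minimum acceptable values.
    """
    meets_all = np.all(performance >= thresholds, axis=1)
    return float(meets_all.mean())

# Hypothetical reliability / cost-margin outcomes under 4 sampled futures
perf = np.array([[0.990, 0.10],
                 [0.970, 0.05],
                 [0.995, 0.12],
                 [0.920, 0.02]])
print(satisficing_robustness(perf, np.array([0.95, 0.04])))
```

Evaluating this metric separately for each utility, and for the region as a whole, is what exposes the robustness conflicts the abstract describes.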

  15. Reducing regional vulnerabilities and multi-city robustness conflicts using many-objective optimization under deep uncertainty

    NASA Astrophysics Data System (ADS)

    Trindade, B. C.; Reed, P. M.; Herman, J. D.; Zeff, H. B.; Characklis, G. W.

    2015-12-01

    Emerging water scarcity concerns in the southeastern US are associated with several deeply uncertain factors, including rapid population growth, limited coordination across adjacent municipalities and the increasing risk of sustained regional droughts. Managing these uncertainties will require that regional water utilities identify regionally coordinated, scarcity-mitigating strategies that trigger the appropriate actions needed to avoid water shortages and financial instabilities. This research focuses on the Research Triangle area of North Carolina, seeking to engage the water utilities within Raleigh, Durham, Cary and Chapel Hill in cooperative and robust regional water portfolio planning. Prior analysis of this region through the year 2025 has identified significant regional vulnerabilities to volumetric shortfalls and financial losses. Moreover, efforts to maximize the individual robustness of any of these utilities also have the potential to strongly degrade the robustness of the others. This research advances a multi-stakeholder Many-Objective Robust Decision Making (MORDM) framework to better account for deeply uncertain factors when identifying cooperative management strategies. Results show that the sampling of deeply uncertain factors in the computational search phase of MORDM can aid in the discovery of management actions that substantially improve the robustness of individual utilities, as well as of the overall region, to water scarcity. Cooperative water transfers, financial risk mitigation tools, and coordinated regional demand management should be explored jointly to decrease robustness conflicts between the utilities. The insights from this work have general merit for regions where adjacent municipalities can benefit from cooperative regional water portfolio planning.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiebenga, J. H.; Atzema, E. H.; Boogaard, A. H. van den

    Robust design of forming processes using numerical simulations is gaining attention throughout the industry. In this work, it is demonstrated how robust optimization can assist in further stretching the limits of metal forming processes. A deterministic and a robust optimization study are performed, considering a stretch-drawing process of a hemispherical cup product. For the robust optimization study, the effects of both material and process scatter are taken into account. For quantifying the material scatter, samples of 41 coils of a drawing-quality forming steel have been collected. The stochastic material behavior is obtained by a hybrid approach, combining mechanical testing and texture analysis, and efficiently implemented in a metamodel-based optimization strategy. The deterministic and robust optimization results are subsequently presented and compared, demonstrating an increased process robustness and a decreased number of product rejects by application of the robust optimization approach.
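    The contrast between the two studies can be sketched in a few lines: a deterministic optimum uses only the nominal material parameter, while a robust objective propagates the measured scatter and penalises sensitivity, typically shifting the optimum towards a flatter region of the response. A toy stand-in for the finite-element simulation, with all functions and numbers hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def process_response(x, m):
    """Toy forming response: quality loss for design variable x and
    material parameter m (a hypothetical stand-in for the FE model)."""
    return (x - 2.0) ** 2 + 0.5 * m * x

# Scatter of the material parameter, e.g. measured over many coils
m_samples = rng.normal(loc=1.0, scale=0.2, size=200)

def deterministic_objective(x):
    return process_response(x, 1.0)          # nominal material only

def robust_objective(x, k=3.0):
    r = process_response(x, m_samples)       # propagate the scatter
    return r.mean() + k * r.std()            # penalise sensitivity

xs = np.linspace(0.0, 3.0, 301)
x_det = xs[np.argmin([deterministic_objective(x) for x in xs])]
x_rob = xs[np.argmin([robust_objective(x) for x in xs])]
print(x_det, x_rob)   # the robust optimum backs away from the nominal one
```

In the paper's setting the surrogate is a fitted metamodel rather than the closed-form toy above, but the mean-plus-spread robust objective plays the same role.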

  17. On the design and optimisation of new fractal antenna using PSO

    NASA Astrophysics Data System (ADS)

    Rani, Shweta; Singh, A. P.

    2013-10-01

    An optimisation technique for a newly shaped fractal structure, using particle swarm optimisation with curve fitting, is presented in this article. The aim of the particle swarm optimisation is to find the geometry of the antenna for the required user-defined frequency. To assess the effectiveness of the presented method, a set of representative numerical simulations has been performed and the results are compared with measurements from experimental prototypes built according to the design specifications coming from the optimisation procedure. The proposed fractal antenna resonates at the 5.8 GHz industrial, scientific and medical band, which is suitable for wireless telemedicine applications. The antenna characteristics have been studied using extensive numerical simulations and are experimentally verified. The antenna exhibits well-defined radiation patterns over the band.
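    A bare-bones particle swarm optimiser of the kind used for such design problems can be sketched compactly; the cost function below is a hypothetical stand-in for the electromagnetic solver, scoring how close a toy resonance model gets to the 5.8 GHz target:

```python
import numpy as np

rng = np.random.default_rng(42)

def cost(lengths):
    """Hypothetical stand-in for the EM solver: squared error between a
    toy resonance model and the 5.8 GHz design target."""
    f_resonant = 30.0 / lengths.sum(axis=-1)   # toy model: GHz from cm
    return (f_resonant - 5.8) ** 2

def pso(cost, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimiser with inertia and two pulls:
    towards each particle's personal best and the global best."""
    x = rng.uniform(1.0, 5.0, size=(n_particles, dim))   # positions
    v = np.zeros_like(x)                                 # velocities
    pbest, pbest_cost = x.copy(), cost(x)
    g = pbest[np.argmin(pbest_cost)]                     # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, 0.5, 10.0)
        c = cost(x)
        better = c < pbest_cost
        pbest[better], pbest_cost[better] = x[better], c[better]
        g = pbest[np.argmin(pbest_cost)]
    return g, cost(g[None, :])[0]

best, best_cost = pso(cost)
print(best, best_cost)
```

The real design loop replaces the toy `cost` with full-wave simulations of the fractal geometry, which is why surrogate models and careful swarm sizing matter in practice.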

  18. Towards designing robust coupled networks

    NASA Astrophysics Data System (ADS)

    Schneider, Christian M.; Yazdani, Nuri; Araújo, Nuno A. M.; Havlin, Shlomo; Herrmann, Hans J.

    2013-06-01

    Natural and technological interdependent systems have been shown to be highly vulnerable to cascading failures and an abrupt collapse of global connectivity under initial failure. Mitigating the risk by partial disconnection endangers their functionality. Here we propose a systematic strategy of selecting a minimum number of autonomous nodes that guarantees a smooth transition in robustness. Our method, which is based on betweenness, is tested on various examples including the famous 2003 electrical blackout of Italy. We show that, with this strategy, the necessary number of autonomous nodes can be reduced by a factor of five compared to a random choice. We also find that the transition to abrupt collapse follows tricritical scaling characterized by a set of exponents which is independent of the protection strategy.
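    The selection step of such a betweenness-based strategy amounts to ranking nodes by betweenness centrality and making the top of the ranking autonomous. A self-contained sketch using Brandes' algorithm on a toy graph (the graph and selection size are hypothetical, not the paper's networks):

```python
from collections import deque

def betweenness(adj):
    """Brandes' betweenness centrality for an unweighted graph given
    as {node: set(neighbours)}."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack, pred = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        q = deque([s])
        while q:                                  # BFS shortest paths
            v = q.popleft(); stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; pred[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:                              # back-accumulation
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

def autonomous_nodes(adj, k):
    """Top-k nodes by betweenness: candidates to make autonomous."""
    bc = betweenness(adj)
    return set(sorted(bc, key=bc.get, reverse=True)[:k])

# Two 4-cliques joined by a two-node bridge (nodes 4 and 5)
clique1, clique2 = [0, 1, 2, 3], [6, 7, 8, 9]
adj = {v: set() for v in range(10)}
def link(a, b): adj[a].add(b); adj[b].add(a)
for c in (clique1, clique2):
    for i in c:
        for j in c:
            if i != j: link(i, j)
link(3, 4); link(4, 5); link(5, 6)
print(autonomous_nodes(adj, 2))   # the bridge carries all inter-clique paths
```

Nodes on the bridge dominate the ranking because every shortest path between the two communities passes through them, which is exactly why betweenness is a natural criterion for protecting coupled networks.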

  19. Brain limbic system-based intelligent controller application to lane change manoeuvre

    NASA Astrophysics Data System (ADS)

    Kim, Changwon; Langari, Reza

    2011-12-01

    This paper presents the application of a novel neuromorphic control strategy to lane change manoeuvres in the highway environment. The lateral dynamics of a vehicle with and without wind disturbance are derived and utilised to implement a control strategy based on the brain limbic system. To show the robustness of the proposed controller, several disturbance conditions, including wind, uncertainty in the cornering stiffness, and changes in the vehicle mass, are investigated. To demonstrate its performance, simulation results of the proposed method are compared with a human-driver-model-based control scheme discussed in the literature. The simulation results demonstrate the superiority of the proposed controller in energy efficiency, driving comfort, and robustness.

  20. Birds achieve high robustness in uneven terrain through active control of landing conditions.

    PubMed

    Birn-Jeffery, Aleksandra V; Daley, Monica A

    2012-06-15

    We understand little about how animals adjust locomotor behaviour to negotiate uneven terrain. The mechanical demands and constraints of such behaviours likely differ from uniform terrain locomotion. Here we investigated how common pheasants negotiate visible obstacles with heights from 10 to 50% of leg length. Our goal was to determine the neuro-mechanical strategies used to achieve robust stability, and address whether strategies vary with obstacle height. We found that control of landing conditions was crucial for minimising fluctuations in stance leg loading and work in uneven terrain. Variation in touchdown leg angle (θ(TD)) was correlated with the orientation of ground force during stance, and the angle between the leg and body velocity vector at touchdown (β(TD)) was correlated with net limb work. Pheasants actively targeted obstacles to control body velocity and leg posture at touchdown to achieve nearly steady dynamics on the obstacle step. In the approach step to an obstacle, the birds produced net positive limb work to launch themselves upward. On the obstacle, body dynamics were similar to uniform terrain. Pheasants also increased swing leg retraction velocity during obstacle negotiation, which we suggest is an active strategy to minimise fluctuations in peak force and leg posture in uneven terrain. Thus, pheasants appear to achieve robustly stable locomotion through a combination of path planning using visual feedback and active adjustment of leg swing dynamics to control landing conditions. We suggest that strategies for robust stability are context specific, depending on the quality of sensory feedback available, especially visual input.

  1. Robust network design for multispecies conservation

    Treesearch

    Ronan Le Bras; Bistra Dilkina; Yexiang Xue; Carla P. Gomes; Kevin S. McKelvey; Michael K. Schwartz; Claire A. Montgomery

    2013-01-01

    Our work is motivated by an important network design application in computational sustainability concerning wildlife conservation. In the face of human development and climate change, it is important that conservation plans for protecting landscape connectivity exhibit certain level of robustness. While previous work has focused on conservation strategies that result...

  2. Predictability and Robustness in the Manipulation of Dynamically Complex Objects

    PubMed Central

    Hasson, Christopher J.

    2017-01-01

    Manipulation of complex objects and tools is a hallmark of many activities of daily living, but how the human neuromotor control system interacts with such objects is not well understood. Even the seemingly simple task of transporting a cup of coffee without spilling creates complex interaction forces that humans need to compensate for. Predicting the behavior of an underactuated object with nonlinear fluid dynamics based on an internal model appears daunting. Hence, this research tests the hypothesis that humans learn strategies that make interactions predictable and robust to inaccuracies in neural representations of object dynamics. The task of moving a cup of coffee is modeled with a cart-and-pendulum system that is rendered in a virtual environment, where subjects interact with a virtual cup with a rolling ball inside using a robotic manipulandum. To gain insight into human control strategies, we operationalize predictability and robustness to permit quantitative theory-based assessment. Predictability is quantified by the mutual information between the applied force and the object dynamics; robustness is quantified by the energy margin away from failure. Three studies are reviewed that show how with practice subjects develop movement strategies that are predictable and robust. Alternative criteria, common for free movement, such as maximization of smoothness and minimization of force, do not account for the observed data. As manual dexterity is compromised in many individuals with neurological disorders, the experimental paradigm and its analyses are a promising platform to gain insights into neurological diseases, such as dystonia and multiple sclerosis, as well as healthy aging. PMID:28035560
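    Predictability as operationalised here is the mutual information between the applied force and the object dynamics; a simple plug-in estimate uses a 2D histogram of the two signals. The sketch below runs on synthetic signals, not the study's data, and the histogram estimator is one of several reasonable choices:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of the mutual information I(X;Y) in bits,
    a proxy for how predictable one signal is from the other."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                      # joint distribution
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)   # marginals
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz])).sum())

rng = np.random.default_rng(3)
force = rng.normal(size=5000)
coupled = force + 0.3 * rng.normal(size=5000)   # dynamics track the force
independent = rng.normal(size=5000)             # dynamics ignore the force
print(mutual_information(force, coupled) > mutual_information(force, independent))
```

High mutual information between force and object state corresponds to the predictable interaction strategies the subjects converged on with practice.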

  3. Assessing intern handover processes.

    PubMed

    Habicht, Robert; Block, Lauren; Silva, Kathryn Novello; Oliver, Nora; Wu, Albert; Feldman, Leonard

    2016-06-01

    New standards for resident work hours set in 2011 changed the landscape of patient care in teaching hospitals, and resulted in new challenges for US residency training programmes to overcome. One such challenge was a dramatic increase in the number of patient handovers performed by residents. As a result, there is a renewed focus for clinical teachers on developing educational strategies to optimise the patient handover process and improve the quality of patient care and safety. In order to investigate current gaps in resident handovers, we examined the handover processes performed by medicine interns at two academic medical centres in Baltimore, Maryland, USA. We used trained observers to collect data on whether handovers were conducted face to face, with questions asked, in private locations, with written documentation, and without distractions or interruptions. Results were analysed using chi-square tests, and adjusted for clustering at the observer and intern levels. Interns successfully conducted handovers face to face (99.5%), asked questions (85.3%), used private locations (91%), included written handover documentation (95.8%) and did not experience distractions for the majority of the time (87.7%); however, interruptions were pervasive, occurring 41.3 per cent of the time. In summary, interns conducted patient handovers face to face, with questions asked, in private locations, with written documentation and without distractions the majority of the time; however, interruptions during the handover process were common. Exploring gaps at the individual programme level is a critical first step in developing effective teaching strategies to optimise handovers in residency. © 2015 John Wiley & Sons Ltd.

  4. ICRP Publication 111 - Application of the Commission's recommendations to the protection of people living in long-term contaminated areas after a nuclear accident or a radiation emergency.

    PubMed

    Lochard, J; Bogdevitch, I; Gallego, E; Hedemann-Jensen, P; McEwan, A; Nisbet, A; Oudiz, A; Oudiz, T; Strand, P; Janssens, A; Lazo, T; Carr, Z; Sugier, A; Burns, P; Carboneras, P; Cool, D; Cooper, J; Kai, M; Lecomte, J-F; Liu, H; Massera, G; McGarry, A; Mrabit, K; Mrabit, M; Sjöblom, K-L; Tsela, A; Weiss, W

    2009-06-01

    In this report, the Commission provides guidance for the protection of people living in long-term contaminated areas resulting from either a nuclear accident or a radiation emergency. The report considers the effects of such events on the affected population. This includes the pathways of human exposure, the types of exposed populations, and the characteristics of exposures. Although the focus is on radiation protection considerations, the report also recognises the complexity of post-accident situations, which cannot be managed without addressing all the affected domains of daily life, i.e. environmental, health, economic, social, psychological, cultural, ethical, political, etc. The report explains how the 2007 Recommendations apply to this type of existing exposure situation, including consideration of the justification and optimisation of protection strategies, and the introduction and application of a reference level to drive the optimisation process. The report also considers practical aspects of the implementation of protection strategies, both by authorities and the affected population. It emphasises the effectiveness of directly involving the affected population and local professionals in the management of the situation, and the responsibility of authorities at both national and local levels to create the conditions and provide the means favouring the involvement and empowerment of the population. The role of radiation monitoring, health surveillance, and the management of contaminated foodstuffs and other commodities is described in this perspective. The Annex summarises past experience of long-term contaminated areas resulting from radiation emergencies and nuclear accidents, including the radiological criteria followed in carrying out remediation measures.

  5. Radiation dose optimisation for conventional imaging in infants and newborns using automatic dose management software: an application of the new 2013/59 EURATOM directive.

    PubMed

    Alejo, L; Corredoira, E; Sánchez-Muñoz, F; Huerga, C; Aza, Z; Plaza-Núñez, R; Serrada, A; Bret-Zurita, M; Parrón, M; Prieto-Areyano, C; Garzón-Moll, G; Madero, R; Guibelalde, E

    2018-04-09

    Objective: The new 2013/59 EURATOM Directive (ED) demands dosimetric optimisation procedures without undue delay. The aim of this study was to optimise paediatric conventional radiology examinations by applying the ED without compromising clinical diagnosis. Automatic dose management software (ADMS) was used to analyse 2678 studies of children from birth to 5 years of age, obtaining local diagnostic reference levels (DRLs) in terms of entrance surface air kerma. Since the local DRL for infant chest examinations exceeded the European Commission (EC) DRL, an optimisation was performed by decreasing the kVp and applying automatic exposure control. To assess image quality, an analysis of high-contrast spatial resolution (HCSR), signal-to-noise ratio (SNR) and figure of merit (FOM) was performed, as well as a blind test based on the generalised estimating equations method. For newborns and chest examinations, the local DRL exceeded the EC DRL by 113%. After the optimisation, a reduction of 54% was obtained. No significant differences were found in the image quality blind test. A decrease in SNR (-37%) and HCSR (-68%), and an increase in FOM (42%), was observed. ADMS allows the fast calculation of local DRLs and the performance of optimisation procedures in babies without delay. However, physical and clinical analyses of image quality are still needed to ensure diagnostic integrity after the optimisation process. Advances in knowledge: ADMS is useful for detecting radiation protection problems and performing optimisation procedures in paediatric conventional imaging without undue delay, as the ED requires.
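    The figure of merit used in such image-quality analyses is commonly defined as SNR squared per unit dose, which explains how FOM can rise even while SNR falls, provided the dose drops fast enough. A sketch with illustrative numbers only (not the study's measurements):

```python
def figure_of_merit(snr_value, dose):
    """A common dose-efficiency metric in radiography: FOM = SNR^2 / dose.
    Quadrupling efficiency requires either doubling SNR or quartering dose."""
    return snr_value ** 2 / dose

# Illustrative values only: an SNR drop of 37% still raises the FOM
# whenever the dose falls below (1 - 0.37)^2 ≈ 40% of its old value.
snr0, dose0 = 100.0, 1.0
snr1, dose1 = snr0 * (1 - 0.37), dose0 * 0.28
print(figure_of_merit(snr1, dose1) > figure_of_merit(snr0, dose0))
```

The break-even condition (dose fraction below the square of the SNR fraction) is a quick check when judging whether an optimisation traded dose for image quality favourably.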

  6. Doing Good Again? A Multilevel Institutional Perspective on Corporate Environmental Responsibility and Philanthropic Strategy.

    PubMed

    Liu, Wei; Wei, Qiao; Huang, Song-Qin; Tsai, Sang-Bing

    2017-10-24

    This study investigates the relationship between corporate environmental responsibility and corporate philanthropy. Using a sample of Chinese listed firms from 2008 to 2013, this paper examines the role of corporate environmental responsibility in corporate philanthropy and the moderating influence of the institutional environment using multilevel analysis. The results show that corporate eco-friendly events are positively and significantly associated with corporate philanthropic strategy. Provincial-level government intervention positively moderates the relationship between eco-friendly events and corporate philanthropy, while government corruption negatively moderates it. All these results hold under robustness checks. These findings provide a new perspective on corporate philanthropic strategy as a means to obtain critical resources from the government in order to compensate for losses incurred on environmental responsibility. Moreover, the institutional environment is shown to play an important role in corporate philanthropic strategy.

  7. Stabilization strategies of a general nonlinear car-following model with varying reaction-time delay of the drivers.

    PubMed

    Li, Shukai; Yang, Lixing; Gao, Ziyou; Li, Keping

    2014-11-01

    In this paper, stabilization strategies for a general nonlinear car-following model with reaction-time delay of the drivers are investigated. The reaction-time delay of the driver is time-varying and bounded. Using Lyapunov stability theory, a sufficient condition for the existence of a state feedback control strategy stabilising the car-following model is given in the form of a linear matrix inequality, under which traffic jams can be well suppressed with respect to the varying reaction-time delay. Moreover, by considering external disturbances to the running cars, a robust state feedback control strategy is designed, which ensures robust stability and a smaller prescribed H∞ disturbance attenuation level for the traffic flow. Numerical examples are given to illustrate the effectiveness of the proposed methods. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
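    The delay-free core of such an LMI condition is the Lyapunov inequality: a state-feedback gain K stabilises the closed loop A + BK iff (A+BK)ᵀP + P(A+BK) ≺ 0 admits a symmetric positive-definite P. A sketch of that check using scipy, with toy matrices that are not the paper's car-following model:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def is_stabilizing(A, B, K):
    """Check a state-feedback gain u = K x via a Lyapunov equation:
    the closed loop Acl = A + B K is stable iff
    Acl' P + P Acl = -I has a positive-definite solution P
    (the delay-free core of the LMI condition used in such designs)."""
    Acl = A + B @ K
    P = solve_continuous_lyapunov(Acl.T, -np.eye(Acl.shape[0]))
    P = (P + P.T) / 2                       # symmetrise numerically
    return bool(np.all(np.linalg.eigvalsh(P) > 0))

# Toy double-integrator error dynamics (hypothetical numbers):
# spacing error and relative speed, with acceleration as the input
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
K = np.array([[-2.0, -3.0]])
print(is_stabilizing(A, B, K))
```

The paper's full condition additionally bounds the effect of the time-varying delay and the H∞ disturbance gain, which turns this single Lyapunov equation into a genuine matrix-inequality feasibility problem.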

  8. Doing Good Again? A Multilevel Institutional Perspective on Corporate Environmental Responsibility and Philanthropic Strategy

    PubMed Central

    Liu, Wei; Wei, Qiao; Huang, Song-Qin

    2017-01-01

    This study investigates the relationship between corporate environmental responsibility and corporate philanthropy. Using a sample of Chinese listed firms from 2008 to 2013, this paper examines the role of corporate environmental responsibility in corporate philanthropy and the moderating influence of the institutional environment using multilevel analysis. The results show that corporate eco-friendly events are positively associated with corporate philanthropic strategy to a significant degree. Provincial-level government intervention positively moderate the positive relationship between eco-friendly events and corporate philanthropy and government corruption is negatively moderate the relationship. All these results are robust according to robustness checks. These findings provide a new perspective on corporate philanthropic strategy as a means to obtain critical resources from the government in order to compensate for the loss made on environmental responsibility. Moreover, the institutional environment is proved here to play an important role in corporate philanthropic strategy. PMID:29064451

  9. Decentralized Control of Sound Radiation using a High-Authority/Low-Authority Control Strategy with Anisotropic Actuators

    NASA Technical Reports Server (NTRS)

    Schiller, Noah H.; Cabell, Randolph H.; Fuller, Chris R.

    2008-01-01

    This paper describes a combined control strategy designed to reduce sound radiation from stiffened aircraft-style panels. The control architecture uses robust active damping in addition to high-authority linear quadratic Gaussian (LQG) control. Active damping is achieved using direct velocity feedback with triangularly shaped anisotropic actuators and point velocity sensors. While active damping is simple and robust, stability is guaranteed at the expense of performance. Therefore the approach is often referred to as low-authority control. In contrast, LQG control strategies can achieve substantial reductions in sound radiation. Unfortunately, the unmodeled interaction between neighboring control units can destabilize decentralized control systems. Numerical simulations show that combining active damping and decentralized LQG control can be beneficial. In particular, augmenting the in-bandwidth damping supplements the performance of the LQG control strategy and reduces the destabilizing interaction between neighboring control units.

  10. A robust high resolution reversed-phase HPLC strategy to investigate various metabolic species in different biological models.

    PubMed

    D'Alessandro, Angelo; Gevi, Federica; Zolla, Lello

    2011-04-01

    Recent advancements in the field of omics sciences have paved the way for further expansion of metabolomics. Originally tied to NMR spectroscopy, metabolomics increasingly relies on HPLC and mass spectrometry (MS)-based analytical strategies and, in this context, we hereby propose a robust and efficient extraction protocol for metabolites from four different biological sources, which are subsequently analysed, identified and quantified through high resolution reversed-phase fast HPLC and mass spectrometry. To this end, we demonstrate the high intra- and inter-day technical reproducibility and ease of use of an MRM-based MS method allowing simultaneous detection of up to 10 distinct features, and the robustness of multiple-metabolite detection and quantification in four different biological samples. This strategy might become routinely applicable to various samples/biological matrices, especially low-availability ones. In parallel, we compare the present strategy for targeted detection of a representative metabolite, L-glutamic acid, with our previously proposed chemical derivatisation with dansyl chloride. A direct comparison of the present method against spectrophotometric assays is proposed as well. An application of the proposed method is also introduced, using the SAOS-2 cell line, either induced or non-induced to express the TAp63 isoform of the p63 gene, as a model for determining variations in glutamate concentrations.

  11. Robust H∞ control of active vehicle suspension under non-stationary running

    NASA Astrophysics Data System (ADS)

    Guo, Li-Xin; Zhang, Li-Ping

    2012-12-01

    Due to the complexity of the controlled objects, the selection of control strategies and algorithms is an important task in vehicle control system design. Moreover, the control of automobile active suspensions has become an important research topic owing to the constraints and parameter uncertainty of the underlying mathematical models. In this study, after establishing a non-stationary road surface excitation model, active suspension control for non-stationary running conditions was investigated using robust H∞ control and linear matrix inequality optimization. The dynamic equation of a two-degree-of-freedom quarter-car model with parameter uncertainty was derived. An H∞ state feedback control strategy with time-domain hard constraints was proposed and then used to design the active suspension control system of the quarter-car model. Time-domain analysis and parameter robustness analysis were carried out to evaluate the stability of the proposed controller. Simulation results show that the proposed control strategy maintains system stability under non-stationary running and parameter uncertainty (including suspension mass, suspension stiffness and tire stiffness). The proposed control strategy achieves a promising improvement in ride comfort and satisfies the requirements on dynamic suspension deflection, dynamic tire loads and required control forces within the given constraints, even under non-stationary running conditions.
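
    A minimal sketch of the control problem: a two-degree-of-freedom quarter car hit by a road step, with a simple skyhook force standing in for the paper's H∞ state feedback. The parameters and the skyhook law are illustrative assumptions, not the paper's design.

```python
def sprung_rms_acceleration(c_sky=1500.0, dt=1e-4, t_end=3.0):
    """RMS sprung-mass acceleration of a 2-DOF quarter car over a 5 cm
    road step; u = -c_sky * vs is a skyhook-damping active force
    (c_sky = 0 gives the passive suspension)."""
    ms, mu = 300.0, 40.0                       # sprung / unsprung mass [kg]
    ks, kt, cs = 16000.0, 160000.0, 1000.0     # spring, tire, damper
    zs = zu = vs = vu = 0.0
    acc_sq, n_acc = 0.0, 0
    for i in range(int(t_end / dt)):
        zr = 0.05 if i * dt > 0.5 else 0.0     # 5 cm road step at t = 0.5 s
        u = -c_sky * vs                        # active (skyhook) force
        a_s = (-ks * (zs - zu) - cs * (vs - vu) + u) / ms
        a_u = (ks * (zs - zu) + cs * (vs - vu) - kt * (zu - zr) - u) / mu
        vs += a_s * dt; vu += a_u * dt         # semi-implicit Euler
        zs += vs * dt; zu += vu * dt
        if i * dt > 0.5:                       # accumulate RMS after the step
            acc_sq += a_s * a_s; n_acc += 1
    return (acc_sq / n_acc) ** 0.5
```

    The added absolute damping on the sprung mass lowers the RMS body acceleration relative to the passive case, the ride-comfort effect the abstract reports.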

  12. Iris Matching Based on Personalized Weight Map.

    PubMed

    Dong, Wenbo; Sun, Zhenan; Tan, Tieniu

    2011-09-01

    Iris recognition typically involves three steps, namely, iris image preprocessing, feature extraction, and feature matching. The first two steps have been well studied, but the last step has received less attention. Each human iris has a unique visual pattern, and local image features vary from region to region, which leads to significant differences in robustness and distinctiveness among the feature codes derived from different iris regions. However, most state-of-the-art iris recognition methods use a uniform matching strategy, where features extracted from different regions of the same person, or from the same region for different individuals, are considered to be equally important. This paper proposes a personalized iris matching strategy using a class-specific weight map learned from the training images of the same iris class. The weight map can be updated online during the iris recognition procedure, with successfully recognized iris images treated as new training data. The weight map reflects the robustness of an encoding algorithm on different iris regions by assigning an appropriate weight to each feature code for iris matching. Such a weight map, trained on sufficient iris templates, is convergent and robust against various types of noise. Extensive and comprehensive experiments demonstrate that the proposed personalized iris matching strategy achieves much better iris recognition performance than uniform strategies, especially for poor-quality iris images.
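
    The core matching idea can be sketched directly: learn a per-bit reliability weight from a class's training codes, then score matches with a weighted Hamming distance so mismatches in unreliable regions cost less. The tiny four-bit codes and the majority-vote weighting rule below are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def learn_weight_map(training_codes):
    """Per-bit reliability weights for one iris class: the fraction of
    training codes in which each bit agrees with the majority bit."""
    codes = np.asarray(training_codes)
    majority = (codes.mean(axis=0) >= 0.5).astype(int)
    return (codes == majority).mean(axis=0)

def weighted_hamming(code_a, code_b, weights):
    """Weighted fractional Hamming distance between binary iris codes."""
    a, b = np.asarray(code_a), np.asarray(code_b)
    w = np.asarray(weights, dtype=float)
    return float(np.sum(w * (a != b)) / np.sum(w))
```

    A disagreement in a bit that was stable across training images is penalised more than one in a bit that flipped during training.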

  13. Integration of Monte-Carlo ray tracing with a stochastic optimisation method: application to the design of solar receiver geometry.

    PubMed

    Asselineau, Charles-Alexis; Zapata, Jose; Pye, John

    2015-06-01

    A stochastic optimisation method adapted to illumination and radiative heat transfer problems involving Monte-Carlo ray-tracing is presented. A solar receiver shape optimisation case study illustrates the advantages and potential of the method: efficient receivers are identified at moderate computational cost.
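
    A toy sketch of the coupling: a Monte-Carlo estimator stands in for ray tracing (ray landing positions drawn from a normal optical-error distribution), and a random-search optimiser trades captured flux against an aperture-size loss term. Every number here is an illustrative assumption, not the paper's receiver model.

```python
import random

def mc_capture_fraction(half_width, n_rays=5000, sigma=0.5, seed=1):
    """Monte-Carlo estimate of the fraction of rays landing on a receiver
    aperture: positions are drawn from N(0, sigma); a ray is captured if
    it falls within +/- half_width of the aperture centre."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_rays)
               if abs(rng.gauss(0.0, sigma)) <= half_width)
    return hits / n_rays

def optimise_aperture(loss_per_width=0.3, iters=150, seed=2):
    """Stochastic (random-search) optimisation of net receiver efficiency:
    captured fraction minus a thermal-loss term growing with aperture size."""
    rng = random.Random(seed)
    best_w, best_eff = None, float("-inf")
    for _ in range(iters):
        w = rng.uniform(0.1, 3.0)
        eff = mc_capture_fraction(w) - loss_per_width * w
        if eff > best_eff:
            best_w, best_eff = w, eff
    return best_w, best_eff
```

    The optimiser settles near the aperture width where marginal captured flux balances marginal thermal loss.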

  14. Topology optimisation for natural convection problems

    NASA Astrophysics Data System (ADS)

    Alexandersen, Joe; Aage, Niels; Andreasen, Casper Schousboe; Sigmund, Ole

    2014-12-01

    This paper demonstrates the application of the density-based topology optimisation approach for the design of heat sinks and micropumps based on natural convection effects. The problems are modelled under the assumptions of steady-state laminar flow using the incompressible Navier-Stokes equations coupled to the convection-diffusion equation through the Boussinesq approximation. In order to facilitate topology optimisation, the Brinkman approach is taken to penalise velocities inside the solid domain and the effective thermal conductivity is interpolated in order to accommodate differences in thermal conductivity of the solid and fluid phases. The governing equations are discretised using stabilised finite elements and topology optimisation is performed for two different problems using discrete adjoint sensitivity analysis. The study shows that topology optimisation is a viable approach for designing heat sink geometries cooled by natural convection and micropumps powered by natural convection.
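
    The two interpolations mentioned above can be sketched as simple functions of the design density: a RAMP-style Brinkman inverse permeability that penalises velocities inside solid regions, and a blend of fluid and solid thermal conductivities. The convention (rho = 1 fluid, rho = 0 solid) and all constants are illustrative assumptions, not the paper's values.

```python
def brinkman_alpha(rho, alpha_max=1.0e5, q=0.01):
    """RAMP-style interpolation of the Brinkman inverse permeability:
    zero in fluid (rho = 1), alpha_max in solid (rho = 0), with q
    controlling the convexity of the transition."""
    return alpha_max * q * (1.0 - rho) / (q + rho)

def effective_conductivity(rho, k_fluid=0.6, k_solid=200.0, p=3.0):
    """Power-law blend between fluid and solid thermal conductivity."""
    return k_fluid + (k_solid - k_fluid) * (1.0 - rho) ** p
```

    Intermediate densities are penalised so the optimiser is pushed toward crisp fluid/solid designs.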

  15. Induced seismicity and implications for CO2 storage risk

    NASA Astrophysics Data System (ADS)

    Gerstenberger, M. C.; Nicol, A.; Bromley, C.; Carne, R.; Chardot, L.; Ellis, S. M.; Jenkins, C.; Siggins, T.; Viskovic, P.

    2012-12-01

    We provide an overview of a recently completed report for the IEA GHG that represents a comprehensive review of current research and observations in induced seismicity, its risk to successful completion of Carbon Capture and Storage (CCS) projects and potential mitigation measures. We focus on two topics: a meta-analysis of related data from multiple injection projects around the globe and the implications of these data for CCS induced seismicity risk management. Published data have been compiled from injection and extraction projects around the globe to examine statistical relationships between possible controlling factors and induced seismicity. Quality control of such observational earthquake data sets is crucial to ensure robust results and issues with bias and completeness of the data set will be discussed. Analyses of the available data support previous suggestions that the locations, numbers and magnitudes of induced earthquakes are dependent on a range of factors, including the injection rate, total injected fluid volume, the reservoir permeability and the proximity of pre-existing faults. Increases in the injection rates and total volume of fluid injected, for example, typically raise reservoir pressures and increase the likelihood of elevated seismicity rates and maximum magnitudes of induced earthquakes. The risks associated with induced seismicity at CCS sites can be reduced and mitigated using a systematic and structured risk management programme. While precise forecasts of the expected induced seismicity may never be possible, a thorough risk management procedure should include some level of knowledge of the possible behaviour of induced seismicity. Risk management requires estimates of the expected magnitude, number, location and timing of potential induced earthquakes. Such forecasts should utilise site specific observations together with physical and statistical models that are optimised for the site. 
Statistical models presently show the most promise for forecasting induced seismicity after injection has commenced; however, with further development, physical models could become key predictive tools. Combining forecasts with real-time monitoring of induced seismicity will be necessary to maintain an accurate picture of the seismicity and to allow for mitigation of the associated risks as they evolve. To optimise the utility of monitoring and mitigation programmes, site performance and management guidelines for the acceptable levels and impacts of induced seismicity, together with key control measures, should be established prior to injection. Such guidelines have been developed for Enhanced Geothermal Systems and should provide the starting point for a management strategy for induced seismicity at CCS sites.

  16. Optimising and communicating options for the control of invasive plant disease when there is epidemiological uncertainty.

    PubMed

    Cunniffe, Nik J; Stutt, Richard O J H; DeSimone, R Erik; Gottwald, Tim R; Gilligan, Christopher A

    2015-04-01

    Although local eradication is routinely attempted following introduction of disease into a new region, failure is commonplace. Epidemiological principles governing the design of successful control are not well-understood. We analyse factors underlying the effectiveness of reactive eradication of localised outbreaks of invading plant disease, using citrus canker in Florida as a case study, although our results are largely generic, and apply to other plant pathogens (as we show via our second case study, citrus greening). We demonstrate how to optimise control via removal of hosts surrounding detected infection (i.e. localised culling) using a spatially-explicit, stochastic epidemiological model. We show how to define optimal culling strategies that take account of stochasticity in disease spread, and how the effectiveness of disease control depends on epidemiological parameters determining pathogen infectivity, symptom emergence and spread, the initial level of infection, and the logistics and implementation of detection and control. We also consider how optimal culling strategies are conditioned on the levels of risk acceptance/aversion of decision makers, and show how to extend the analyses to account for potential larger-scale impacts of a small-scale outbreak. Control of local outbreaks by culling can be very effective, particularly when started quickly, but the optimum strategy and its performance are strongly dependent on epidemiological parameters (particularly those controlling dispersal and the extent of any cryptic infection, i.e. infectious hosts prior to symptoms), the logistics of detection and control, and the level of local and global risk that is deemed to be acceptable. A version of the model we developed to illustrate our methodology and results to an audience of stakeholders, including policy makers, regulators and growers, is available online as an interactive, user-friendly interface at http://www.webidemics.com/. 
This version of our model allows the complex epidemiological principles that underlie our results to be communicated to a non-specialist audience.
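
    The qualitative effect of reactive culling can be reproduced in a toy stochastic lattice model (not the authors' spatially explicit simulator): infection spreads to neighbouring hosts, becomes detectable only after a cryptic period, and detection triggers removal of every host within a cull radius. All parameters are illustrative assumptions, not fitted to citrus canker.

```python
import random

def hosts_lost(cull_radius, n=21, beta=0.3, cryptic=3, t_end=40, seed=0):
    """Hosts on an n x n grid; each infectious host infects each of its
    four neighbours with probability beta per step, becomes detectable
    after `cryptic` steps, and detection removes all hosts within
    `cull_radius` (Chebyshev distance). Returns hosts lost
    (infected or culled)."""
    S, I, R = 0, 1, 2
    rng = random.Random(seed)
    state = {(i, j): S for i in range(n) for j in range(n)}
    age = {}
    centre = (n // 2, n // 2)
    state[centre] = I
    age[centre] = 0
    for _ in range(t_end):
        infectious = [p for p, s in state.items() if s == I]
        for (i, j) in infectious:                 # transmission round
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                q = (i + di, j + dj)
                if state.get(q) == S and rng.random() < beta:
                    state[q] = I
                    age[q] = 0
        for p in infectious:                      # detection and culling
            age[p] += 1
            if age[p] >= cryptic and state[p] == I:
                i, j = p
                for di in range(-cull_radius, cull_radius + 1):
                    for dj in range(-cull_radius, cull_radius + 1):
                        q = (i + di, j + dj)
                        if q in state:
                            state[q] = R          # cull everything nearby
    return sum(1 for s in state.values() if s != S)
```

    Culling a ring around each detection sacrifices healthy hosts but tends to outperform removing only detected hosts, mirroring the trade-off the abstract describes between cull radius and cryptic infection.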

  17. Optimising and Communicating Options for the Control of Invasive Plant Disease When There Is Epidemiological Uncertainty

    PubMed Central

    Cunniffe, Nik J.; Stutt, Richard O. J. H.; DeSimone, R. Erik; Gottwald, Tim R.; Gilligan, Christopher A.

    2015-01-01

    Although local eradication is routinely attempted following introduction of disease into a new region, failure is commonplace. Epidemiological principles governing the design of successful control are not well-understood. We analyse factors underlying the effectiveness of reactive eradication of localised outbreaks of invading plant disease, using citrus canker in Florida as a case study, although our results are largely generic, and apply to other plant pathogens (as we show via our second case study, citrus greening). We demonstrate how to optimise control via removal of hosts surrounding detected infection (i.e. localised culling) using a spatially-explicit, stochastic epidemiological model. We show how to define optimal culling strategies that take account of stochasticity in disease spread, and how the effectiveness of disease control depends on epidemiological parameters determining pathogen infectivity, symptom emergence and spread, the initial level of infection, and the logistics and implementation of detection and control. We also consider how optimal culling strategies are conditioned on the levels of risk acceptance/aversion of decision makers, and show how to extend the analyses to account for potential larger-scale impacts of a small-scale outbreak. Control of local outbreaks by culling can be very effective, particularly when started quickly, but the optimum strategy and its performance are strongly dependent on epidemiological parameters (particularly those controlling dispersal and the extent of any cryptic infection, i.e. infectious hosts prior to symptoms), the logistics of detection and control, and the level of local and global risk that is deemed to be acceptable. A version of the model we developed to illustrate our methodology and results to an audience of stakeholders, including policy makers, regulators and growers, is available online as an interactive, user-friendly interface at http://www.webidemics.com/. 
This version of our model allows the complex epidemiological principles that underlie our results to be communicated to a non-specialist audience. PMID:25874622

  18. A supportive architecture for CFD-based design optimisation

    NASA Astrophysics Data System (ADS)

    Li, Ni; Su, Zeya; Bi, Zhuming; Tian, Chao; Ren, Zhiming; Gong, Guanghong

    2014-03-01

    Multi-disciplinary design optimisation (MDO) is one of the critical methodologies for the implementation of enterprise systems (ES). MDO requiring the analysis of fluid dynamics poses a special challenge due to its extremely intensive computation. The rapid development of computational fluid dynamics (CFD) techniques has led to a rise in their application across various fields. Especially for the exterior design of vehicles, CFD has become one of the three main design tools, alongside analytical approaches and wind tunnel experiments. CFD-based design optimisation is an effective way to achieve the desired performance under given constraints. However, due to the complexity of CFD, integrating CFD analysis into an intelligent optimisation algorithm is not straightforward, and it is a challenge to solve a CFD-based design problem, which usually has high dimensionality and multiple objectives and constraints. An integrated architecture for CFD-based design optimisation is therefore desirable, yet our review of existing work found that very few researchers have studied assistive tools to facilitate CFD-based design optimisation. In this paper, a multi-layer architecture and a general procedure are proposed to integrate different CFD toolsets with intelligent optimisation algorithms, parallel computing techniques and other techniques for efficient computation. In the proposed architecture, the integration is performed either at the code level or the data level to fully utilise the capabilities of different assistive tools. Two intelligent algorithms are developed and embedded with parallel computing. These algorithms, together with the supportive architecture, lay a solid foundation for various applications of CFD-based design optimisation. To illustrate the effectiveness of the proposed architecture and algorithms, case studies on the aerodynamic shape design of a hypersonic cruising vehicle are provided; the results show that the proposed architecture and algorithms perform successfully and efficiently on a design optimisation problem with over 200 design variables.
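
    The architectural idea of dispatching expensive evaluations to parallel workers can be sketched as follows. A cheap quadratic stands in for the CFD solver, and a thread pool stands in for the parallel computing layer (a real deployment would use separate processes or cluster nodes); the evolutionary loop is an illustrative assumption, not the paper's two algorithms.

```python
from concurrent.futures import ThreadPoolExecutor
import random

def drag_proxy(x):
    """Stand-in for an expensive CFD evaluation (illustrative quadratic
    with its optimum at 0.5 in every design variable)."""
    return sum((xi - 0.5) ** 2 for xi in x)

def evolve(pop_size=40, dim=8, gens=30, sigma=0.1, workers=4, seed=3):
    """Evolutionary loop whose fitness evaluations are dispatched to a
    worker pool; returns (best fitness of first generation, best fitness
    of final generation)."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(dim)] for _ in range(pop_size)]
    first_best = None
    with ThreadPoolExecutor(max_workers=workers) as ex:
        for _ in range(gens):
            fit = list(ex.map(drag_proxy, pop))   # parallel evaluations
            if first_best is None:
                first_best = min(fit)
            ranked = [p for _, p in sorted(zip(fit, pop))]
            elite = ranked[: pop_size // 4]       # keep best quarter
            pop = [[xi + rng.gauss(0.0, sigma) for xi in rng.choice(elite)]
                   for _ in range(pop_size)]      # mutate elite parents
        final_best = min(ex.map(drag_proxy, pop))
    return first_best, final_best
```

    Because each individual's evaluation is independent, the population maps cleanly onto a pool of workers, which is the essence of the code/data-level integration the abstract describes.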

  19. A Simple and Robust Method for Culturing Human-Induced Pluripotent Stem Cells in an Undifferentiated State Using Botulinum Hemagglutinin.

    PubMed

    Kim, Mee-Hae; Matsubara, Yoshifumi; Fujinaga, Yukako; Kino-Oka, Masahiro

    2018-02-01

    Clinical and industrial applications of human-induced pluripotent stem cells (hiPSCs) are hindered by the lack of robust culture strategies capable of sustaining a culture in an undifferentiated state. Here, a simple and robust hiPSC-culture-propagation strategy incorporating botulinum hemagglutinin (HA)-mediated selective removal of cells deviating from an undifferentiated state is developed. After HA treatment, cell-cell adhesion is disrupted, and deviated cells detach from the central region of the colony; the remaining cells subsequently form tight monolayer colonies following prolonged incubation. The authors find that the temporal and dose-dependent activity of HA regulates deviated-cell removal and recoverability after disruption of cell-cell adhesion in hiPSC colonies. The effects of HA are confirmed under all culture conditions examined, regardless of hiPSC line and feeder-dependent or -free culture conditions. After routine application of the HA-treatment paradigm over serial passages, hiPSCs maintain expression of pluripotency markers and readily form embryoid bodies expressing markers for all three germ layers. This method enables highly efficient culturing of hiPSCs and use of entire undifferentiated portions without having to remove deviated cells manually. This simple and readily reproducible culture strategy is a potentially useful tool for improving the robust and scalable maintenance of undifferentiated hiPSC cultures. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. A robust and versatile signal-on fluorescence sensing strategy based on SYBR Green I dye and graphene oxide

    PubMed Central

    Qiu, Huazhang; Wu, Namei; Zheng, Yanjie; Chen, Min; Weng, Shaohuang; Chen, Yuanzhong; Lin, Xinhua

    2015-01-01

    A robust and versatile signal-on fluorescence sensing strategy was developed to provide label-free detection of various target analytes. The strategy used SYBR Green I dye and graphene oxide as signal reporter and signal-to-background ratio enhancer, respectively. Multidrug resistance protein 1 (MDR1) gene and mercury ion (Hg2+) were selected as target analytes to investigate the generality of the method. The linear relationship and specificity of the detections showed that the sensitive and selective analyses of target analytes could be achieved by the proposed strategy with low detection limits of 0.5 and 2.2 nM for MDR1 gene and Hg2+, respectively. Moreover, the strategy was used to detect real samples. Analytical results of MDR1 gene in the serum indicated that the developed method is a promising alternative approach for real applications in complex systems. Furthermore, the recovery of the proposed method for Hg2+ detection was acceptable. Thus, the developed label-free signal-on fluorescence sensing strategy exhibited excellent universality, sensitivity, and handling convenience. PMID:25565810
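
    Detection limits like the 0.5 and 2.2 nM figures above are conventionally derived from a calibration line and blank measurements; a generic sketch of the common 3-sigma criterion follows (illustrative data, not necessarily the authors' exact procedure).

```python
import statistics

def linear_fit(x, y):
    """Least-squares slope and intercept of a calibration line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def limit_of_detection(blank_signals, slope):
    """Common 3-sigma criterion: LOD = 3 * SD(blank) / calibration slope."""
    return 3.0 * statistics.stdev(blank_signals) / slope
```

    A steeper calibration slope or a quieter blank both drive the detection limit down, which is why the graphene-oxide background suppression matters for sensitivity.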

  1. Optimisation of SOA-REAMs for hybrid DWDM-TDMA PON applications.

    PubMed

    Naughton, Alan; Antony, Cleitus; Ossieur, Peter; Porto, Stefano; Talli, Giuseppe; Townsend, Paul D

    2011-12-12

    We demonstrate how loss-optimised, gain-saturated SOA-REAM-based reflective modulators can reduce the burst-to-burst power variations due to differential access loss in the upstream path of carrier-distributed passive optical networks by 18 dB compared to fixed linear gain modulators. We also show that the loss-optimised device has a high tolerance to input power variations and can operate in deep saturation with minimal patterning penalties. Finally, we demonstrate that an optimised device can operate across the C-band and over a transmission distance of 80 km. © 2011 Optical Society of America

  2. Intelligent inversion method for pre-stack seismic big data based on MapReduce

    NASA Astrophysics Data System (ADS)

    Yan, Xuesong; Zhu, Zhixin; Wu, Qinghua

    2018-01-01

    Seismic exploration infers subsurface properties from seismic data: by inverting seismic information, useful reservoir parameters can be obtained to guide exploration effectively. Pre-stack data are characterised by a large volume and abundant information, and their inversion can yield rich information on the reservoir parameters. Owing to the sheer amount of pre-stack seismic data, existing single-machine environments can no longer meet the computational needs, so an efficient and fast method for the inversion of pre-stack seismic data is urgently needed. Optimisation of the elastic parameters with a genetic algorithm easily falls into local optima, which results in poor inversion, especially for the density. Therefore, an intelligent optimisation algorithm is proposed in this paper and applied to the elastic parameter inversion of pre-stack seismic data. The algorithm improves the population initialisation strategy by using the Gardner formula, as well as the genetic operators, and the improved algorithm obtains better inversion results in a model test with logging data. All of the elastic parameters obtained by inversion fit the logging curves of the theoretical model well, which effectively improves the inversion precision of the density. The algorithm was implemented with a MapReduce model to address the seismic big-data inversion problem. The experimental results show that the parallel model can effectively reduce the running time of the algorithm.
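
    The Gardner-formula initialisation can be sketched directly: seed each GA individual's density from its P-wave velocity via Gardner's empirical relation, plus a small jitter, so the starting population already respects the velocity-density coupling. The Vs proxy and jitter level below are illustrative assumptions, not the paper's settings.

```python
import random

def gardner_density(vp, a=0.31, b=0.25):
    """Gardner's empirical relation rho = a * Vp**b (Vp in m/s, rho in g/cc)."""
    return a * vp ** b

def init_population(pop_size, vp_lo=2000.0, vp_hi=4500.0, jitter=0.05, seed=7):
    """Seed GA individuals (Vp, Vs, rho) with densities tied to Vp through
    Gardner's formula; Vs is a crude Vp/2 proxy for illustration."""
    rng = random.Random(seed)
    pop = []
    for _ in range(pop_size):
        vp = rng.uniform(vp_lo, vp_hi)
        vs = vp / 2.0
        rho = gardner_density(vp) * (1.0 + rng.uniform(-jitter, jitter))
        pop.append((vp, vs, rho))
    return pop
```

    Starting the search on (or near) the Gardner trend keeps density estimates physically plausible, which is the mechanism behind the improved density precision the abstract reports.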

  3. Ex vivo optimisation of a heterogeneous speed of sound model of the human skull for non-invasive transcranial focused ultrasound at 1 MHz.

    PubMed

    Marsac, L; Chauvet, D; La Greca, R; Boch, A-L; Chaumoitre, K; Tanter, M; Aubry, J-F

    2017-09-01

    Transcranial brain therapy has recently emerged as a non-invasive strategy for the treatment of various neurological diseases, such as essential tremor or neurogenic pain. However, treatments require millimetre-scale accuracy. The use of high frequencies (typically ≥1 MHz) decreases the ultrasonic wavelength to the millimetre scale, thereby increasing the clinical accuracy and lowering the probability of cavitation, which improves the safety of the technique compared with the use of low-frequency devices that operate at 220 kHz. Nevertheless, the skull produces greater distortions of high-frequency waves relative to low-frequency waves. High-frequency waves require high-performance adaptive focusing techniques, based on modelling the wave propagation through the skull. This study sought to optimise the acoustical modelling of the skull based on computed tomography (CT) for a 1 MHz clinical brain therapy system. The best model tested in this article corresponded to a maximum speed of sound of 4000 m s⁻¹ in the skull bone, and it restored 86% of the optimal pressure amplitude on average in a collection of six human skulls. Compared with uncorrected focusing, the optimised non-invasive correction led to an average increase of 99% in the maximum pressure amplitude around the target and an average decrease of 48% in the distance between the peak pressure and the selected target. The attenuation through the skulls was also assessed within the bandwidth of the transducers, and it was found to vary in the range of 10 ± 3 dB at 800 kHz and 16 ± 3 dB at 1.3 MHz.
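
    The correction principle can be sketched as a time-of-flight calculation: map CT values to sound speed (capped at the optimised 4000 m s⁻¹ bone value), sum travel times along each transducer-to-target ray, and advance the emitted phase by the difference from a water-only reference. The linear CT-to-speed map and all constants below are illustrative assumptions, not the paper's calibrated model.

```python
import math

def sound_speed_from_ct(hu, c_water=1500.0, c_bone_max=4000.0, hu_max=2000.0):
    """Linear CT-to-sound-speed map, capped at the maximum bone speed."""
    frac = min(max(hu / hu_max, 0.0), 1.0)
    return c_water + (c_bone_max - c_water) * frac

def phase_correction(path_hu, voxel_mm=0.5, f_hz=1.0e6, c_ref=1500.0):
    """Phase advance (radians) for one transducer element so its wave
    arrives in phase at the target after traversing the voxels along
    its ray, relative to propagation through water alone."""
    dt = sum(voxel_mm * 1e-3 / sound_speed_from_ct(hu) for hu in path_hu)
    dt_ref = len(path_hu) * voxel_mm * 1e-3 / c_ref
    return 2.0 * math.pi * f_hz * (dt_ref - dt)
```

    Rays crossing thick, fast bone arrive early, so their elements are delayed (smaller phase advance); a water-only path needs no correction.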

  4. Antiretroviral Therapy Optimisation without Genotype Resistance Testing: A Perspective on Treatment History Based Models

    PubMed Central

    Prosperi, Mattia C. F.; Rosen-Zvi, Michal; Altmann, André; Zazzi, Maurizio; Di Giambenedetto, Simona; Kaiser, Rolf; Schülter, Eugen; Struck, Daniel; Sloot, Peter; van de Vijver, David A.; Vandamme, Anne-Mieke; Sönnerborg, Anders

    2010-01-01

    Background: Although genotypic resistance testing (GRT) is recommended to guide combination antiretroviral therapy (cART), funding and/or facilities to perform GRT may not be available in low to middle income countries. Since treatment history (TH) impacts response to subsequent therapy, we investigated a set of statistical learning models to optimise cART in the absence of GRT information. Methods and Findings: The EuResist database was used to extract 8-week and 24-week treatment change episodes (TCE) with GRT and additional clinical, demographic and TH information. Random Forest (RF) classification was used to predict 8- and 24-week success, defined as undetectable HIV-1 RNA, comparing nested models including (i) GRT+TH and (ii) TH without GRT, using multiple cross-validation and area under the receiver operating characteristic curve (AUC). Virological success was achieved in 68.2% and 68.0% of TCE at 8- and 24-weeks (n = 2,831 and 2,579), respectively. RF (i) and (ii) showed comparable performances, with an average (st.dev.) AUC 0.77 (0.031) vs. 0.757 (0.035) at 8-weeks, 0.834 (0.027) vs. 0.821 (0.025) at 24-weeks. Sensitivity analyses, carried out on a data subset that included antiretroviral regimens commonly used in low to middle income countries, confirmed our findings. Training on subtype B and validation on non-B isolates resulted in a decline of performance for models (i) and (ii). Conclusions: Treatment history-based RF prediction models are comparable to GRT-based for classification of virological outcome. These results may be relevant for therapy optimisation in areas where availability of GRT is limited. Further investigations are required in order to account for different demographics, subtypes and different therapy switching strategies. PMID:21060792
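
    The model comparison above rests on the area under the ROC curve. As a self-contained illustration (not the EuResist pipeline), AUC can be computed directly from the Mann-Whitney identity:

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney identity: the
    probability that a randomly chosen positive is scored above a
    randomly chosen negative (ties count one half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))
```

    An AUC of 0.77 therefore means the model ranks a random virological success above a random failure 77% of the time.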

  5. The STRATEGY project: decision tools to aid sustainable restoration and long-term management of contaminated agricultural ecosystems.

    PubMed

    Howard, B J; Beresford, N A; Nisbet, A; Cox, G; Oughton, D H; Hunt, J; Alvarez, B; Andersson, K G; Liland, A; Voigt, G

    2005-01-01

    The STRATEGY project (Sustainable Restoration and Long-Term Management of Contaminated Rural, Urban and Industrial Ecosystems) aimed to provide a holistic decision framework for the selection of optimal restoration strategies for the long-term sustainable management of contaminated areas in Western Europe. A critical evaluation was carried out of countermeasures and waste disposal options, from which compendia of state-of-the-art restoration methods were compiled. A decision support system was developed that is capable of optimising spatially varying restoration strategies, taking into account the level of averted dose, costs (including those of waste disposal) and environmental side effects. Appropriate methods of estimating indirect costs associated with side effects and of communicating with stakeholders were identified. The importance of stakeholder consultation at a local level, and of ensuring that any response is site and scenario specific, was emphasised. A value matrix approach was suggested as a method of addressing social and ethical issues within the decision-making process, and was designed to be compatible with both the countermeasure compendia and the decision support system. The applicability and usefulness of STRATEGY outputs for food production systems in the medium to long term are assessed.

  6. A network property necessary for concentration robustness

    NASA Astrophysics Data System (ADS)

    Eloundou-Mbebi, Jeanne M. O.; Küken, Anika; Omranian, Nooshin; Kleessen, Sabrina; Neigenfind, Jost; Basler, Georg; Nikoloski, Zoran

    2016-10-01

    Maintenance of functionality of complex cellular networks and entire organisms exposed to environmental perturbations often depends on concentration robustness of the underlying components. Yet, the reasons and consequences of concentration robustness in large-scale cellular networks remain largely unknown. Here, we derive a necessary condition for concentration robustness based only on the structure of networks endowed with mass action kinetics. The structural condition can be used to design targeted experiments to study concentration robustness. We show that metabolites satisfying the necessary condition are present in metabolic networks from diverse species, suggesting prevalence of this property across kingdoms of life. We also demonstrate that our predictions about concentration robustness of energy-related metabolites are in line with experimental evidence from Escherichia coli. The necessary condition is applicable to mass action biological systems of arbitrary size, and will enable understanding the implications of concentration robustness in genetic engineering strategies and medical applications.
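
    The flavour of concentration robustness can be seen in a classic two-species mass-action network (a textbook Shinar-Feinberg-style example, not taken from this paper): in A + B → 2B, B → A, every positive steady state has a = k2/k1 regardless of the conserved total amount of A plus B.

```python
def steady_state_a(total, k1=1.0, k2=2.0, dt=0.001, t_end=100.0):
    """Integrate the mass-action network A + B -> 2B (rate k1*a*b) and
    B -> A (rate k2*b) by forward Euler, starting from an even split of
    the conserved total, and return the steady-state concentration of A."""
    a = b = total / 2.0
    for _ in range(int(t_end / dt)):
        flux = k1 * a * b - k2 * b   # net conversion of A into B
        a -= flux * dt
        b += flux * dt
    return a
```

    Doubling the total changes only the steady-state level of B; A settles at k2/k1 either way, which is exactly the kind of structurally guaranteed robustness the necessary condition targets.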

  7. A network property necessary for concentration robustness.

    PubMed

    Eloundou-Mbebi, Jeanne M O; Küken, Anika; Omranian, Nooshin; Kleessen, Sabrina; Neigenfind, Jost; Basler, Georg; Nikoloski, Zoran

    2016-10-19

    Maintenance of functionality of complex cellular networks and entire organisms exposed to environmental perturbations often depends on concentration robustness of the underlying components. Yet, the reasons and consequences of concentration robustness in large-scale cellular networks remain largely unknown. Here, we derive a necessary condition for concentration robustness based only on the structure of networks endowed with mass action kinetics. The structural condition can be used to design targeted experiments to study concentration robustness. We show that metabolites satisfying the necessary condition are present in metabolic networks from diverse species, suggesting prevalence of this property across kingdoms of life. We also demonstrate that our predictions about concentration robustness of energy-related metabolites are in line with experimental evidence from Escherichia coli. The necessary condition is applicable to mass action biological systems of arbitrary size, and will enable understanding the implications of concentration robustness in genetic engineering strategies and medical applications.

  8. A network property necessary for concentration robustness

    PubMed Central

    Eloundou-Mbebi, Jeanne M. O.; Küken, Anika; Omranian, Nooshin; Kleessen, Sabrina; Neigenfind, Jost; Basler, Georg; Nikoloski, Zoran

    2016-01-01

    Maintenance of functionality of complex cellular networks and entire organisms exposed to environmental perturbations often depends on concentration robustness of the underlying components. Yet, the reasons and consequences of concentration robustness in large-scale cellular networks remain largely unknown. Here, we derive a necessary condition for concentration robustness based only on the structure of networks endowed with mass action kinetics. The structural condition can be used to design targeted experiments to study concentration robustness. We show that metabolites satisfying the necessary condition are present in metabolic networks from diverse species, suggesting prevalence of this property across kingdoms of life. We also demonstrate that our predictions about concentration robustness of energy-related metabolites are in line with experimental evidence from Escherichia coli. The necessary condition is applicable to mass action biological systems of arbitrary size, and will enable understanding the implications of concentration robustness in genetic engineering strategies and medical applications. PMID:27759015
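
    The structural condition itself is derived in the paper; as a hedged illustration of the phenomenon it concerns, the sketch below simulates the classic minimal mass-action network with absolute concentration robustness (A + B → 2B, B → A), a textbook example rather than a system taken from this work.

```python
# Minimal illustration of absolute concentration robustness in a
# mass-action network.  The network is the classic two-species example
# (A + B -> 2B at rate k1, B -> A at rate k2), not a system from the
# paper: any positive steady state satisfies k1*A*B = k2*B, so
# A = k2/k1 regardless of the conserved total A + B.

def simulate(total, k1=2.0, k2=1.0, dt=1e-3, steps=50_000):
    """Euler-integrate the mass-action ODEs from A = B = total/2."""
    a = b = total / 2.0
    for _ in range(steps):
        flux = k1 * a * b - k2 * b      # net rate of A -> B conversion
        a -= flux * dt
        b += flux * dt
    return a, b

for total in (1.0, 4.0, 10.0):
    a, b = simulate(total)
    print(f"total={total:5.1f}  steady A={a:.4f}  steady B={b:.4f}")
# A settles at k2/k1 = 0.5 for every total; B absorbs the difference.
```

    Because the steady-state value of A is fixed by the rate constants alone, varying the conserved pool only changes B, which is exactly the concentration-robustness property the abstract describes.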

  9. Multi-Objectivising Combinatorial Optimisation Problems by Means of Elementary Landscape Decompositions.

    PubMed

    Ceberio, Josu; Calvo, Borja; Mendiburu, Alexander; Lozano, Jose A

    2018-02-15

    In the last decade, many works in combinatorial optimisation have shown that, due to the advances in multi-objective optimisation, the algorithms from this field could be used for solving single-objective problems as well. In this sense, a number of papers have proposed multi-objectivising single-objective problems in order to use multi-objective algorithms in their optimisation. In this article, we follow up this idea by presenting a methodology for multi-objectivising combinatorial optimisation problems based on elementary landscape decompositions of their objective function. Under this framework, each of the elementary landscapes obtained from the decomposition is considered as an independent objective function to optimise. In order to illustrate this general methodology, we consider four problems from different domains: the quadratic assignment problem and the linear ordering problem (permutation domain), the 0-1 unconstrained quadratic optimisation problem (binary domain), and the frequency assignment problem (integer domain). We implemented two widely known multi-objective algorithms, NSGA-II and SPEA2, and compared their performance with that of a single-objective GA. The experiments conducted on a large benchmark of instances of the four problems show that the multi-objective algorithms clearly outperform the single-objective approaches. Furthermore, a discussion on the results suggests that the multi-objective space generated by this decomposition enhances the exploration ability, thus permitting NSGA-II and SPEA2 to obtain better results in the majority of the tested instances.
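
    As a hedged sketch of the core idea, the toy below splits a bit-string objective f = f1 + f2 into two components and compares solutions by Pareto dominance instead of by their sum; the split is illustrative, not an actual elementary landscape decomposition, and the function names are made up.

```python
# Sketch of multi-objectivising a single objective by decomposition
# (toy split, not an elementary-landscape decomposition).  The original
# objective is f(x) = f1(x) + f2(x) on bit strings; a multi-objective
# algorithm such as NSGA-II would treat (f1, f2) as independent
# objectives to maximise.

def f1(x):                      # ones in the first half of the string
    return sum(x[: len(x) // 2])

def f2(x):                      # agreements between adjacent bits
    return sum(1 for a, b in zip(x, x[1:]) if a == b)

def dominates(u, v):
    """u Pareto-dominates v: >= in every objective, > in at least one."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def nondominated(pop):
    scored = [(x, (f1(x), f2(x))) for x in pop]
    return [x for x, sx in scored
            if not any(dominates(sy, sx) for _, sy in scored)]

pop = [(1, 1, 0, 0), (1, 0, 1, 0), (0, 0, 0, 0), (1, 1, 1, 1)]
front = nondominated(pop)
print(front)   # only (1, 1, 1, 1) survives: it dominates every other member
```

    Selection then operates on the non-dominated front rather than on a single scalar score, which is the mechanism the article argues improves exploration.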

  10. An update on the use and investigation of probiotics in health and disease

    PubMed Central

    Sanders, Mary Ellen; Guarner, Francisco; Guerrant, Richard; Holt, Peter R; Quigley, Eamonn MM; Sartor, R Balfour; Sherman, Philip M; Mayer, Emeran A

    2014-01-01

    Probiotics are derived from traditional fermented foods, from beneficial commensals or from the environment. They act through diverse mechanisms affecting the composition or function of the commensal microbiota and by altering host epithelial and immunological responses. Certain probiotic interventions have shown promise in selected clinical conditions where aberrant microbiota have been reported, such as atopic dermatitis, necrotising enterocolitis, pouchitis and possibly irritable bowel syndrome. However, no studies have been conducted that can causally link clinical improvements to probiotic-induced microbiota changes. Whether a disease-prone microbiota pattern can be remodelled to a more robust, resilient and disease-free state by probiotic administration remains a key unanswered question. Progress in this area will be facilitated by: optimising strain, dose and product formulations, including protective commensal species; matching these formulations with selectively responsive subpopulations; and identifying ways to manipulate diet to modify bacterial profiles and metabolism. PMID:23474420

  11. Managing dual warehouses with an incentive policy for deteriorating items

    NASA Astrophysics Data System (ADS)

    Yu, Jonas C. P.; Wang, Kung-Jeng; Lin, Yu-Siang

    2016-02-01

    Distributors in a supply chain usually limit their own warehouse to a finite capacity for cost reduction, holding excess stock in a rent warehouse. In this study, we examine inventory control for deteriorating items in a two-warehouse setting. Assuming that the rent warehouse offers an incentive under which the rental fee decreases over time, the objective of this study is to maximise the joint profit of the manufacturer and the distributor. An optimisation procedure is developed to derive the optimal joint economic lot size policy. Several criteria are identified to select the most appropriate warehouse configuration and inventory policy on the basis of the storage duration of materials in the rent warehouse. Sensitivity analysis is performed to examine the robustness of the model. The proposed model enables a manufacturer with a channel distributor to coordinate the use of alternative warehouses and to maximise their joint profit.
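
    The paper's two-warehouse deterioration model is not reproduced in the abstract; as a minimal sketch of deriving an optimal lot size from a cost function, the stand-in below grid-searches a generic EOQ-style cost (setup plus holding), with K, D and h purely hypothetical parameters.

```python
# Hedged sketch: deriving an optimal lot size by searching a cost
# function.  This is a generic EOQ-style stand-in (setup + holding
# cost), not the paper's two-warehouse model with deterioration and a
# time-decreasing rental fee; K, D and h are hypothetical parameters.
import math

K = 100.0    # setup cost per order
D = 1000.0   # annual demand
h = 2.0      # holding cost per unit per year

def annual_cost(q):
    """Setup amortisation plus average holding cost for lot size q."""
    return K * D / q + h * q / 2.0

q_star = min(range(50, 1000), key=annual_cost)
print(q_star, round(math.sqrt(2 * K * D / h), 2))  # grid vs analytic EOQ
```

    The grid optimum matches the analytic EOQ value sqrt(2KD/h), a useful sanity check before the cost function is replaced by a richer two-warehouse model.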

  12. A validated ultra high pressure liquid chromatographic method for qualification and quantification of folic acid in pharmaceutical preparations.

    PubMed

    Deconinck, E; Crevits, S; Baten, P; Courselle, P; De Beer, J

    2011-04-05

    A fully validated UHPLC method for the identification and quantification of folic acid in pharmaceutical preparations was developed. The starting conditions for the development were derived from the HPLC conditions of a validated method and were tested on four different UHPLC columns: Grace Vision HT™ C18-P, C18, C18-HL and C18-B (2 mm × 100 mm, 1.5 μm). After selection of the stationary phase, the method was further optimised by testing two aqueous and two organic phases and by adapting to a gradient method. The resulting method was fully validated on the basis of its measurement uncertainty (accuracy profile) and robustness tests. A UHPLC method was thus obtained for the identification and quantification of folic acid in pharmaceutical preparations, which will cut analysis times and solvent consumption. Copyright © 2010 Elsevier B.V. All rights reserved.

  13. Design and Development of ChemInfoCloud: An Integrated Cloud Enabled Platform for Virtual Screening.

    PubMed

    Karthikeyan, Muthukumarasamy; Pandit, Deepak; Bhavasar, Arvind; Vyas, Renu

    2015-01-01

    The power of cloud computing and distributed computing has been harnessed to handle the vast and heterogeneous data that must be processed in any virtual screening protocol. A cloud computing platform, ChemInfoCloud, was built and integrated with several chemoinformatics and bioinformatics tools. The robust engine performs the core chemoinformatics tasks of lead generation, lead optimisation and property prediction in a fast and efficient manner. It also provides bioinformatics functionalities including sequence alignment, active site pose prediction and protein-ligand docking. Text mining, NMR chemical shift (1H, 13C) prediction and reaction fingerprint generation modules for efficient lead discovery are also implemented in this platform. We have developed an integrated problem-solving cloud environment for virtual screening studies that also provides workflow management, better usability and interaction with end users, using container-based virtualisation (OpenVZ).

  14. Viscoelastic property tuning for reducing noise radiated by switched-reluctance machines

    NASA Astrophysics Data System (ADS)

    Millithaler, Pierre; Dupont, Jean-Baptiste; Ouisse, Morvan; Sadoulet-Reboul, Émeline; Bouhaddi, Noureddine

    2017-10-01

    Switched-reluctance motors (SRM) present major acoustic drawbacks that hinder their use in electric vehicles in spite of their widely acknowledged robustness and low manufacturing costs. Unlike other types of electric machines, an SRM stator is completely encapsulated/potted with a viscoelastic resin. By taking advantage of the high damping capacity that a viscoelastic material has in certain temperature and frequency ranges, this article proposes a tuning methodology for reducing the noise emitted by an SRM in operation. After introducing the aspects the tuning process will focus on, the article details a concrete application consisting of computing representative electromagnetic excitations and then the structural response of the stator, including equivalent radiated power levels. An optimised viscoelastic material is determined, with which the peak radiated levels are reduced by up to 10 dB in comparison to the initial state. This methodology is implementable for concrete industrial applications as it relies only on common commercial finite-element solvers.

  15. OpenFOAM: Open source CFD in research and industry

    NASA Astrophysics Data System (ADS)

    Jasak, Hrvoje

    2009-12-01

    The current focus of development in industrial Computational Fluid Dynamics (CFD) is the integration of CFD into computer-aided product development, geometrical optimisation, robust design and similar tasks. CFD research, on the other hand, aims to extend the boundaries of practical engineering use in "non-traditional" areas. The requirements of computational flexibility and code integration are contradictory: a change of coding paradigm, with object orientation, library components and equation mimicking, is proposed as a way forward. This paper describes OpenFOAM, a C++ object-oriented library for Computational Continuum Mechanics (CCM) developed by the author. Efficient and flexible implementation of complex physical models is achieved by mimicking the form of partial differential equations in software, with code functionality provided in library form. The open-source deployment and development model allows the user to achieve the desired versatility in physical modelling without sacrificing complex geometry support and execution efficiency.

  16. From the patient to the clinical mycology laboratory: how can we optimise microscopy and culture methods for mould identification?

    PubMed

    Vyzantiadis, Timoleon-Achilleas A; Johnson, Elizabeth M; Kibbler, Christopher C

    2012-06-01

    The identification of fungi relies mainly on morphological criteria. However, there is a need for robust and definitive phenotypic identification procedures in order to evaluate continuously evolving molecular methods. For the future, there is an emerging consensus that a combined (phenotypic and molecular) approach is more powerful for fungal identification, especially for moulds. Most of the procedures used for phenotypic identification are based on experience rather than comparative studies of effectiveness or performance and there is a need for standardisation among mycology laboratories. This review summarises and evaluates the evidence for the major existing phenotypic identification procedures for the predominant causes of opportunistic mould infection. We have concentrated mainly on Aspergillus, Fusarium and mucoraceous mould species, as these are the most important clinically and the ones for which there are the most molecular taxonomic data.

  17. Elicitors as alternative strategy to pesticides in grapevine? Current knowledge on their mode of action from controlled conditions to vineyard.

    PubMed

    Delaunois, Bertrand; Farace, Giovanni; Jeandet, Philippe; Clément, Christophe; Baillieul, Fabienne; Dorey, Stéphan; Cordelier, Sylvain

    2014-04-01

    The development and optimisation of alternative strategies to reduce the use of classic chemical inputs for protection against diseases in vineyards is becoming a necessity. Among these strategies, one of the most promising consists of the stimulation and/or potentiation of grapevine defence responses by means of elicitors. Elicitors are highly diverse molecules in both nature and origin. This review aims at providing an overview of the current knowledge on these molecules and will highlight their potential efficacy from the laboratory under controlled conditions to vineyards. Recent findings and concepts (especially on plant innate immunity) and the new terminology (microbe-associated molecular patterns, effectors, etc.) are also discussed in this context. Other objectives of this review are to highlight the difficulty of transferring elicitor use and results from controlled conditions to the vineyard, to determine their practical and effective use in viticulture, and to propose ideas for improving their efficacy in non-controlled conditions.

  18. A framework for learning and planning against switching strategies in repeated games

    NASA Astrophysics Data System (ADS)

    Hernandez-Leal, Pablo; Munoz de Cote, Enrique; Sucar, L. Enrique

    2014-04-01

    Intelligent agents, human or artificial, often change their behaviour as they interact with other agents. For an agent to optimise its performance when interacting with such agents, it must be capable of detecting and adapting to such changes. This work presents an approach for dealing effectively with non-stationary switching opponents in a repeated game context. Our main contribution is a framework for online learning and planning against opponents that switch strategies. We present how two opponent modelling techniques work within the framework and demonstrate the usefulness of the approach experimentally in the iterated prisoner's dilemma, when the opponent is modelled as an agent that switches between different strategies (e.g. TFT, Pavlov and Bully). The results of both models were compared against each other and against a state-of-the-art non-stationary reinforcement learning technique. The results show that our approach obtains competitive performance without needing an offline training phase, as opposed to the state-of-the-art techniques.
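
    The opponent strategies named in the abstract (TFT, Pavlov and Bully) can be sketched as simple rules mapping the previous round's moves to the next action; the renderings below are common textbook versions with the standard prisoner's-dilemma payoffs (3, 0, 5, 1), not the paper's agent models.

```python
# Sketch of the opponent strategies named above (common textbook
# renderings, not the paper's agent models), with the standard
# prisoner's-dilemma payoffs; 'C' = cooperate, 'D' = defect.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tft(my_prev, opp_prev):
    """Tit-for-tat: cooperate first, then copy the opponent's last move."""
    return opp_prev or 'C'

def pavlov(my_prev, opp_prev):
    """Win-stay, lose-shift: repeat the last move iff it paid off."""
    if my_prev is None:
        return 'C'
    won = PAYOFF[(my_prev, opp_prev)][0] >= 3
    return my_prev if won else ('C' if my_prev == 'D' else 'D')

def bully(my_prev, opp_prev):
    """Defect until the opponent retaliates (one common rendering)."""
    return 'C' if opp_prev == 'D' else 'D'

def play(s1, s2, rounds=10):
    m1 = m2 = None
    score1 = score2 = 0
    for _ in range(rounds):
        a1, a2 = s1(m1, m2), s2(m2, m1)
        p1, p2 = PAYOFF[(a1, a2)]
        score1, score2 = score1 + p1, score2 + p2
        m1, m2 = a1, a2
    return score1, score2

print(play(tft, tft))    # mutual cooperation: (30, 30)
print(play(tft, bully))
```

    A switching opponent in this setting simply changes which of these rules it consults between segments of the game, which is what the framework's change-detection must pick up.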

  19. Animal models of cartilage repair

    PubMed Central

    Cook, J. L.; Hung, C. T.; Kuroki, K.; Stoker, A. M.; Cook, C. R.; Pfeiffer, F. M.; Sherman, S. L.; Stannard, J. P.

    2014-01-01

    Cartilage repair in terms of replacement, or regeneration of damaged or diseased articular cartilage with functional tissue, is the ‘holy grail’ of joint surgery. A wide spectrum of strategies for cartilage repair currently exists and several of these techniques have been reported to be associated with successful clinical outcomes for appropriately selected indications. However, based on respective advantages, disadvantages, and limitations, no single strategy, or even combination of strategies, provides surgeons with viable options for attaining successful long-term outcomes in the majority of patients. As such, development of novel techniques and optimisation of current techniques need to be, and are, the focus of a great deal of research from the basic science level to clinical trials. Translational research that bridges scientific discoveries to clinical application involves the use of animal models in order to assess safety and efficacy for regulatory approval for human use. This review article provides an overview of animal models for cartilage repair. Cite this article: Bone Joint Res 2014;4:89–94. PMID:24695750

  20. LQ optimal and reaching law-based sliding modes for inventory management systems

    NASA Astrophysics Data System (ADS)

    Ignaciuk, Przemysław; Bartoszewicz, Andrzej

    2012-01-01

    In this article, the theory of discrete sliding-mode control is used to design new supply strategies for periodic-review inventory systems. In the considered systems, the stock used to fulfil an unknown, time-varying demand can be replenished from a single supply source or from multiple suppliers procuring orders with different delays. The proposed strategies guarantee that demand is always entirely satisfied from the on-hand stock (yielding the maximum service level), and the warehouse capacity is not exceeded (which eliminates the cost of emergency storage). In contrast to the classical, stochastic approaches, in this article, we focus on optimising the inventory system dynamics. The parameters of the first control strategy are selected by minimising a quadratic cost functional. Next, it is shown how the system dynamical performance can be improved by applying the concept of a reaching law with the appropriately adjusted reaching phase. The stable, nonoscillatory behaviour of the closed-loop system is demonstrated and the properties of the designed controllers are discussed and strictly proved.
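
    The LQ-optimal and reaching-law controllers themselves are not given in the abstract; as a hedged sketch of the periodic-review setting, the simulation below uses a plain order-up-to (inventory-position) policy as a stand-in and illustrates the two claimed properties: demand always met from on-hand stock, and warehouse capacity never exceeded. All numbers are hypothetical.

```python
# Hedged sketch of the periodic-review setting: a simple order-up-to
# policy stands in for the paper's LQ and reaching-law controllers.
# With demand bounded by d_max and lead time L, the order-up-to level
# S = (L + 1) * d_max keeps on-hand stock non-negative and within the
# warehouse capacity (taken here as S); all numbers are hypothetical.
import random

def simulate(periods=200, lead=3, d_max=10, seed=7):
    S = (lead + 1) * d_max             # order-up-to level / capacity
    on_hand = S                        # start with a full warehouse
    pipeline = [0] * lead              # orders in transit
    rng = random.Random(seed)
    history = []
    for _ in range(periods):
        on_hand += pipeline.pop(0)               # receive delayed order
        demand = rng.randint(0, d_max)
        on_hand -= demand                        # met from on-hand stock
        order = S - (on_hand + sum(pipeline))    # restore inventory position
        pipeline.append(order)
        history.append(on_hand)
    return S, history

S, history = simulate()
print(min(history), max(history), S)   # stock stays within [0, S]
```

    Because each order restores the inventory position to S, on-hand stock equals S minus at most L periods of bounded demand, so it never drops below d_max nor rises above S after the initial state.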

  1. Reducing regional drought vulnerabilities and multi-city robustness conflicts using many-objective optimization under deep uncertainty

    NASA Astrophysics Data System (ADS)

    Trindade, B. C.; Reed, P. M.; Herman, J. D.; Zeff, H. B.; Characklis, G. W.

    2017-06-01

    Emerging water scarcity concerns in many urban regions are associated with several deeply uncertain factors, including rapid population growth, limited coordination across adjacent municipalities and the increasing risks for sustained regional droughts. Managing these uncertainties will require that regional water utilities identify coordinated, scarcity-mitigating strategies that trigger the appropriate actions needed to avoid water shortages and financial instabilities. This research focuses on the Research Triangle area of North Carolina, seeking to engage the water utilities within Raleigh, Durham, Cary and Chapel Hill in cooperative and robust regional water portfolio planning. Prior analysis of this region through the year 2025 has identified significant regional vulnerabilities to volumetric shortfalls and financial losses. Moreover, efforts to maximize the individual robustness of any of the mentioned utilities also have the potential to strongly degrade the robustness of the others. This research advances a multi-stakeholder Many-Objective Robust Decision Making (MORDM) framework to better account for deeply uncertain factors when identifying cooperative drought management strategies. Our results show that appropriately designing adaptive risk-of-failure action triggers required stressing them with a comprehensive sample of deeply uncertain factors in the computational search phase of MORDM. Search under the new ensemble of states-of-the-world is shown to fundamentally change perceived performance tradeoffs and substantially improve the robustness of individual utilities as well as the overall region to water scarcity. Search under deep uncertainty enhanced the discovery of how cooperative water transfers, financial risk mitigation tools, and coordinated regional demand management must be employed jointly to improve regional robustness and decrease robustness conflicts between the utilities. Insights from this work have general merit for regions where adjacent municipalities can benefit from cooperative regional water portfolio planning.
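
    Two ingredients of the MORDM workflow described above, sampling deeply uncertain factors and scoring robustness across the sampled states-of-the-world, can be sketched as follows; the factor names, ranges and toy supply/demand relation are hypothetical, not the study's regional model.

```python
# Hedged sketch of two MORDM ingredients: Latin hypercube sampling of
# deeply uncertain factors, and a satisficing robustness score (the
# fraction of sampled states-of-the-world in which a supply strategy
# meets its target).  The factors and the toy supply/demand relation
# are hypothetical, not the study's regional model.
import random

def latin_hypercube(n, bounds, rng):
    """One sample per equal-probability stratum in each dimension."""
    cols = []
    for lo, hi in bounds:
        strata = [lo + (hi - lo) * (i + rng.random()) / n for i in range(n)]
        rng.shuffle(strata)              # decorrelate the dimensions
        cols.append(strata)
    return list(zip(*cols))              # rows = states-of-the-world

def robustness(capacity, samples):
    """Share of states where drought-reduced supply covers grown demand."""
    met = sum(1 for growth, drought in samples
              if capacity * (1.0 - 0.3 * drought) >= 100.0 * (1.0 + 0.2 * growth))
    return met / len(samples)

rng = random.Random(42)
samples = latin_hypercube(100, [(0.0, 1.0), (0.0, 1.0)], rng)
print(robustness(110.0, samples), robustness(160.0, samples))
```

    Stressing candidate strategies against such an ensemble during search, rather than only evaluating them afterwards, is the design change the abstract reports as decisive.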

  2. On an efficient multilevel inverter assembly: structural savings and design optimisations

    NASA Astrophysics Data System (ADS)

    Choupan, Reza; Nazarpour, Daryoush; Golshannavaz, Sajjad

    2018-01-01

    This study puts forward an efficient unit cell for use in multilevel inverter assemblies. The proposed structure reduces the number of direct current (dc) voltage sources, insulated-gate bipolar transistors (IGBTs), gate driver circuits and the installation area, and hence the implementation costs. These structural savings do not sacrifice the technical performance of the proposed design; interestingly, an increased number of output voltage levels is attained. Targeting a techno-economic characteristic, the contemplated structure is included as the key unit of cascaded multilevel inverters. Such extensions require the development of applicable design procedures. To this end, two efficient strategies are elaborated to determine the magnitudes of the input dc voltage sources. As well, an optimisation process is developed to explore the effect of different parameters on the overall performance of the proposed inverter: the number of IGBTs, dc sources and diodes, and the overall blocked voltage on the switches. In light of these characteristics, a comprehensive analysis is established to compare the proposed design with conventional and recently developed structures. Detailed simulation and experimental studies are conducted to assess the performance of the proposed design, and the obtained results are discussed in depth.
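
    How the choice of dc source magnitudes affects the number of output voltage levels can be sketched by enumerating the attainable sums when each cell contributes -V, 0 or +V; the magnitude sets below are illustrative and are not the two selection strategies elaborated in the paper.

```python
# Hedged sketch: counting the distinct output voltage levels of a
# cascaded multilevel inverter when each cell with dc source magnitude
# V contributes -V, 0 or +V.  The magnitude sets are illustrative; the
# paper's two magnitude-selection strategies are not reproduced here.
from itertools import product

def output_levels(sources):
    """All distinct sums over the per-cell states (-V, 0, +V)."""
    return sorted({sum(states)
                   for states in product(*[(-v, 0, v) for v in sources])})

equal = output_levels([1, 1, 1])     # three equal sources
trinary = output_levels([1, 3, 9])   # trinary progression
print(len(equal), len(trinary))      # 7 vs 27 levels
```

    Three equal sources yield only 7 levels, while a trinary progression makes every cell-state combination distinct, giving 3^3 = 27 evenly spaced levels from the same hardware count.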

  3. Optimising ICT Effectiveness in Instruction and Learning: Multilevel Transformation Theory and a Pilot Project in Secondary Education

    ERIC Educational Resources Information Center

    Mooij, Ton

    2004-01-01

    Specific combinations of educational and ICT conditions, including computer use, may optimise learning processes, particularly for learners at risk. This position paper asks which curricular, instructional, and ICT characteristics can be expected to optimise learning processes and outcomes, and how best to achieve this optimisation. A theoretical…

  4. Robust MR-based approaches to quantifying white matter structure and structure/function alterations in Huntington's disease

    PubMed Central

    Steventon, Jessica J.; Trueman, Rebecca C.; Rosser, Anne E.; Jones, Derek K.

    2016-01-01

    Background: Huge advances have been made in understanding and addressing confounds in diffusion MRI data to quantify white matter microstructure. However, there has been a lag in applying these advances in clinical research. Some confounds are more pronounced in HD which impedes data quality and interpretability of patient-control differences. This study presents an optimised analysis pipeline and addresses specific confounds in a HD patient cohort. Method: 15 HD gene-positive and 13 matched control participants were scanned on a 3T MRI system with two diffusion MRI sequences. An optimised post processing pipeline included motion, eddy current and EPI correction, rotation of the B matrix, free water elimination (FWE) and tractography analysis using an algorithm capable of reconstructing crossing fibres. The corpus callosum was examined using both a region-of-interest and a deterministic tractography approach, using both conventional diffusion tensor imaging (DTI)-based and spherical deconvolution analyses. Results: Correcting for CSF contamination significantly altered microstructural metrics and the detection of group differences. Reconstructing the corpus callosum using spherical deconvolution produced a more complete reconstruction with greater sensitivity to group differences, compared to DTI-based tractography. Tissue volume fraction (TVF) was reduced in HD participants and was more sensitive to disease burden compared to DTI metrics. Conclusion: Addressing confounds in diffusion MR data results in more valid, anatomically faithful white matter tract reconstructions with reduced within-group variance. TVF is recommended as a complementary metric, providing insight into the relationship with clinical symptoms in HD not fully captured by conventional DTI metrics. PMID:26335798

  5. Robust MR-based approaches to quantifying white matter structure and structure/function alterations in Huntington's disease.

    PubMed

    Steventon, Jessica J; Trueman, Rebecca C; Rosser, Anne E; Jones, Derek K

    2016-05-30

    Huge advances have been made in understanding and addressing confounds in diffusion MRI data to quantify white matter microstructure. However, there has been a lag in applying these advances in clinical research. Some confounds are more pronounced in HD which impedes data quality and interpretability of patient-control differences. This study presents an optimised analysis pipeline and addresses specific confounds in a HD patient cohort. 15 HD gene-positive and 13 matched control participants were scanned on a 3T MRI system with two diffusion MRI sequences. An optimised post processing pipeline included motion, eddy current and EPI correction, rotation of the B matrix, free water elimination (FWE) and tractography analysis using an algorithm capable of reconstructing crossing fibres. The corpus callosum was examined using both a region-of-interest and a deterministic tractography approach, using both conventional diffusion tensor imaging (DTI)-based and spherical deconvolution analyses. Correcting for CSF contamination significantly altered microstructural metrics and the detection of group differences. Reconstructing the corpus callosum using spherical deconvolution produced a more complete reconstruction with greater sensitivity to group differences, compared to DTI-based tractography. Tissue volume fraction (TVF) was reduced in HD participants and was more sensitive to disease burden compared to DTI metrics. Addressing confounds in diffusion MR data results in more valid, anatomically faithful white matter tract reconstructions with reduced within-group variance. TVF is recommended as a complementary metric, providing insight into the relationship with clinical symptoms in HD not fully captured by conventional DTI metrics. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  6. Robust location and spread measures for nonparametric probability density function estimation.

    PubMed

    López-Rubio, Ezequiel

    2009-10-01

    Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to substitute the sample mean and the sample covariance matrix by more robust location and spread estimators. Here we use the L1-median to develop a nonparametric probability density function (PDF) estimator. We prove its most relevant properties, and we show its performance in density estimation and classification applications.
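
    The L1-median (geometric median) that replaces the sample mean can be computed with Weiszfeld's fixed-point iteration; the pure-Python 2-D version below is a sketch (the paper's estimator also replaces the covariance by a robust spread measure, which is omitted here).

```python
# Sketch of the L1-median (geometric median) via Weiszfeld's fixed-point
# iteration, in pure Python for 2-D points.  This is the standard
# algorithm for the robust location estimator the abstract substitutes
# for the sample mean.
import math

def l1_median(points, iters=500, eps=1e-12):
    # start from the centroid (starting exactly on a data point can
    # stall the iteration)
    x = sum(p[0] for p in points) / len(points)
    y = sum(p[1] for p in points) / len(points)
    for _ in range(iters):
        wx = wy = wsum = 0.0
        for px, py in points:
            d = math.hypot(px - x, py - y) + eps   # guard against /0
            wx += px / d
            wy += py / d
            wsum += 1.0 / d
        x, y = wx / wsum, wy / wsum   # distance-weighted re-centering
    return x, y

square = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (2.0, 2.0)]
with_outlier = square + [(100.0, 100.0)]
mean = (sum(p[0] for p in with_outlier) / 5,
        sum(p[1] for p in with_outlier) / 5)
print(l1_median(square))        # centre of the square
print(l1_median(with_outlier))  # moves only slightly from the centre
print(mean)                     # dragged far toward the outlier
```

    Adding one gross outlier drags the sample mean far away while the L1-median barely moves, which is exactly the robustness property the estimator is chosen for.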

  7. Automated model optimisation using the Cylc workflow engine (Cyclops v1.0)

    NASA Astrophysics Data System (ADS)

    Gorman, Richard M.; Oliver, Hilary J.

    2018-06-01

    Most geophysical models include many parameters that are not fully determined by theory, and can be tuned to improve the model's agreement with available data. We might attempt to automate this tuning process in an objective way by employing an optimisation algorithm to find the set of parameters that minimises a cost function derived from comparing model outputs with measurements. A number of algorithms are available for solving optimisation problems, in various programming languages, but interfacing such software to a complex geophysical model simulation presents certain challenges. To tackle this problem, we have developed an optimisation suite (Cyclops), based on the Cylc workflow engine, that implements a wide selection of optimisation algorithms from the NLopt Python toolbox (Johnson, 2014). The Cyclops optimisation suite can be used to calibrate any modelling system that has itself been implemented as a (separate) Cylc model suite, provided it includes computation and output of the desired scalar cost function. A growing number of institutions are using Cylc to orchestrate complex distributed suites of interdependent cycling tasks within their operational forecast systems, and in such cases application of the optimisation suite is particularly straightforward. As a test case, we applied Cyclops to calibrate a global implementation of the WAVEWATCH III (v4.18) third-generation spectral wave model, forced by ERA-Interim input fields. This was calibrated over a 1-year period (1997), before applying the calibrated model to a full (1979-2016) wave hindcast. The chosen error metric was the spatial average of the root mean square error of hindcast significant wave height compared with collocated altimeter records. We describe the results of a calibration in which up to 19 parameters were optimised.
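
    The suite drives NLopt algorithms through Cylc tasks; as a dependency-free sketch of the calibrate/evaluate loop, the code below pairs a toy RMSE-style cost (a stand-in for running the wave model and scoring it against altimeter data) with a simple derivative-free coordinate search, which is not one of NLopt's algorithms.

```python
# Dependency-free sketch of the calibrate/evaluate loop that Cyclops
# automates.  In the real suite an NLopt algorithm proposes parameter
# values and a Cylc-run model suite returns the scalar cost; here both
# are stand-ins: the "model" is a toy RMSE-style function and the
# search is a plain derivative-free coordinate descent.

def rmse_cost(params, truth=(0.8, 1.3)):
    """Stand-in for running the wave model and scoring it vs altimetry."""
    return sum((p - t) ** 2 for p, t in zip(params, truth)) ** 0.5

def coordinate_search(cost, x0, step=0.5, tol=1e-4):
    x, best = list(x0), cost(x0)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = list(x)
                trial[i] += delta
                c = cost(trial)
                if c < best:
                    x, best, improved = trial, c, True
        if not improved:
            step /= 2.0        # refine once no axis move helps
    return x, best

x_opt, best_cost = coordinate_search(rmse_cost, [0.0, 0.0])
print(x_opt, best_cost)
```

    In the real suite each `cost(trial)` call corresponds to a full cycling model run orchestrated by Cylc, which is why the framework's job management matters far more than the search loop itself.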

  8. Reliability of clinical impact grading by healthcare professionals of common prescribing error and optimisation cases in critical care patients.

    PubMed

    Bourne, Richard S; Shulman, Rob; Tomlin, Mark; Borthwick, Mark; Berry, Will; Mills, Gary H

    2017-04-01

    To identify between- and within-profession rater reliability of clinical impact grading for common critical care prescribing error and optimisation cases, and to identify representative clinical impact grades for each individual case. Electronic questionnaire. 5 UK NHS Trusts. 30 critical care healthcare professionals (doctors, pharmacists and nurses). Participants graded the severity of clinical impact (5-point categorical scale) of 50 error and 55 optimisation cases. Between- and within-profession rater reliability and modal clinical impact grading per case. Between- and within-profession rater reliability analyses used a linear mixed model and intraclass correlation, respectively. The majority of error and optimisation cases (both 76%) had a modal clinical severity grade of moderate or higher. Error cases: doctors graded clinical impact significantly lower than pharmacists (-0.25; P < 0.001) and nurses (-0.53; P < 0.001), with nurses significantly higher than pharmacists (0.28; P < 0.001). Optimisation cases: doctors graded clinical impact significantly lower than nurses and pharmacists (-0.39 and -0.5; P < 0.001, respectively). Within-profession reliability grading was excellent for pharmacists (0.88 and 0.89; P < 0.001) and doctors (0.79 and 0.83; P < 0.001) but only fair to good for nurses (0.43 and 0.74; P < 0.001), for optimisation and error cases, respectively. Representative clinical impact grades for over 100 common prescribing error and optimisation cases are reported for potential clinical practice and research application. The between-professional variability highlights the importance of multidisciplinary perspectives in the assessment of medication error and optimisation cases in clinical practice and research. © The Author 2017. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  9. [Leadership strategies--the Bible as a guide to management in the health care system].

    PubMed

    Kudlacek, Stefan; Meran, Johannes G

    2006-06-01

Management and leadership are an integral part of any organisation, serving to optimise procedures and increase efficiency. Aims, ideals and structures first need to be defined for tasks to be carried out successfully, particularly in difficult times. Religion provides a good example of the way communities can effectively and with conviction pass on their values and standpoints from generation to generation, grow in strength and also influence their surroundings. This paper focuses on leadership provided by charismatic personalities within the Jewish and Christian religions. Monasteries have run hospitals without governmental support ever since the Middle Ages. Leadership within today's health care system calls for a variety of strategies in the different phases of development. In times of limited resources and multifarious societies, leadership implies both a scientific and an ethical challenge.

  10. Calibration and simulation of two large wastewater treatment plants operated for nutrient removal.

    PubMed

    Ferrer, J; Morenilla, J J; Bouzas, A; García-Usach, F

    2004-01-01

Control and optimisation of plant processes have become a priority for WWTP managers. The calibration and verification of a mathematical model provide an important tool for the investigation of advanced control strategies that may assist in the design or optimisation of WWTPs. This paper describes the calibration of the ASM2d model for two full-scale biological nitrogen and phosphorus removal plants, in order to characterise the biological process and to upgrade the plants' performance. Results from simulation showed a good correspondence with experimental data, demonstrating that the model and the calibrated parameters were able to predict the behaviour of both WWTPs. Once the calibration and simulation process was finished, a study was carried out for each WWTP with the aim of improving its performance. Modifications focused on reactor configuration and operation strategies were proposed.

  11. Modelling and evaluating municipal solid waste management strategies in a mega-city: the case of Ho Chi Minh City.

    PubMed

    ThiKimOanh, Le; Bloemhof-Ruwaard, Jacqueline M; van Buuren, Joost Cl; van der Vorst, Jack Gaj; Rulkens, Wim H

    2015-04-01

    Ho Chi Minh City is a large city that will become a mega-city in the near future. The city struggles with a rapidly increasing flow of municipal solid waste and a foreseeable scarcity of land to continue landfilling, the main treatment of municipal solid waste up to now. Therefore, additional municipal solid waste treatment technologies are needed. The objective of this article is to support decision-making towards more sustainable and cost-effective municipal solid waste strategies in developing countries, in particular Vietnam. A quantitative decision support model is developed to optimise the distribution of municipal solid waste from population areas to treatment plants, the treatment technologies and their capacities for the near future given available infrastructure and cost factors. © The Author(s) 2015.

  12. Distributed optimisation problem with communication delay and external disturbance

    NASA Astrophysics Data System (ADS)

    Tran, Ngoc-Tu; Xiao, Jiang-Wen; Wang, Yan-Wu; Yang, Wu

    2017-12-01

This paper investigates the distributed optimisation problem for multi-agent systems (MASs) in the simultaneous presence of external disturbance and communication delay. To solve this problem, a two-step design scheme is introduced. In the first step, based on the internal model principle, an internal model term is constructed to compensate for the disturbance asymptotically. In the second step, a distributed optimisation algorithm is designed to solve the problem for MASs subject to both disturbance and communication delay. In the proposed algorithm, each agent interacts with its neighbours through the connected topology, and delay occurs during the information exchange. By utilising a Lyapunov-Krasovskii functional, delay-dependent conditions are derived for both slowly and fast time-varying delays to ensure the convergence of the algorithm to the optimal solution of the optimisation problem. Several numerical simulation examples are provided to illustrate the effectiveness of the theoretical results.
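The consensus-plus-gradient structure underlying such distributed optimisation algorithms can be sketched as follows. This is a minimal, delay- and disturbance-free illustration with quadratic local costs; the agent count, mixing weights and step size are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

# Four agents on a ring, each holding a private quadratic cost
# f_i(x) = (x - b_i)^2; the global objective sum_i f_i is minimised
# at the mean of b.
b = np.array([1.0, 2.0, 3.0, 4.0])

# Doubly stochastic mixing matrix (Metropolis weights on a ring).
W = np.array([
    [1/3, 1/3, 0.0, 1/3],
    [1/3, 1/3, 1/3, 0.0],
    [0.0, 1/3, 1/3, 1/3],
    [1/3, 0.0, 1/3, 1/3],
])

alpha = 0.01            # small constant step size
x = np.zeros(4)         # each agent's local estimate of the optimiser

for _ in range(3000):
    grad = 2.0 * (x - b)        # each agent uses only its own gradient
    x = W @ x - alpha * grad    # mix with neighbours, then descend

# All agents end up (approximately) agreeing on mean(b) = 2.5.
```

Because the mixing matrix is doubly stochastic, the sum of local gradients is driven towards zero, so every agent converges to a small neighbourhood (shrinking with the step size) of the minimiser of the summed cost.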

  13. An effective pseudospectral method for constraint dynamic optimisation problems with characteristic times

    NASA Astrophysics Data System (ADS)

    Xiao, Long; Liu, Xinggao; Ma, Liang; Zhang, Zeyin

    2018-03-01

Dynamic optimisation problems with characteristic times arise widely in many areas and are an active front of dynamic optimisation research. This paper considers a class of dynamic optimisation problems with constraints that depend on interior points, either fixed or variable, and presents a novel direct pseudospectral method using Legendre-Gauss (LG) collocation points for solving them. A formula for the state at the terminal time of each subdomain is derived, which expresses that state as a linear combination of the state at the LG points in the subdomain and thereby avoids a complex nonlinear integral. The sensitivities of the state at the collocation points with respect to the variable characteristic times are derived to improve the efficiency of the method. Three well-known characteristic-time dynamic optimisation problems are solved and compared in detail with methods reported in the literature. The results show the effectiveness of the proposed method.
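The device at the heart of such methods, writing the terminal state as a linear combination of values at the Legendre-Gauss points, rests on Gauss quadrature of the dynamics. A minimal numerical illustration (a scalar system with known dynamics, not the paper's full collocation scheme):

```python
import numpy as np

# Legendre-Gauss nodes and weights on [-1, 1].
tau, w = np.polynomial.legendre.leggauss(5)

# Integrate dx/dt = cos(t), x(0) = 0, up to t_f = 1. The terminal
# state is a plain linear combination of the dynamics evaluated at
# the (mapped) LG points -- no symbolic integral is needed.
t_f = 1.0
t = 0.5 * t_f * (tau + 1.0)                  # map nodes to [0, t_f]
x_tf = 0.0 + np.sum(w * (0.5 * t_f) * np.cos(t))

# x_tf approximates the exact terminal state sin(1) with only 5 nodes.
```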

  14. Medicines optimisation: priorities and challenges.

    PubMed

    Kaufman, Gerri

    2016-03-23

    Medicines optimisation is promoted in a guideline published in 2015 by the National Institute for Health and Care Excellence. Four guiding principles underpin medicines optimisation: aim to understand the patient's experience; ensure evidence-based choice of medicines; ensure medicines use is as safe as possible; and make medicines optimisation part of routine practice. Understanding the patient experience is important to improve adherence to medication regimens. This involves communication, shared decision making and respect for patient preferences. Evidence-based choice of medicines is important for clinical and cost effectiveness. Systems and processes for the reporting of medicines-related safety incidents have to be improved if medicines use is to be as safe as possible. Ensuring safe practice in medicines use when patients are transferred between organisations, and managing the complexities of polypharmacy are imperative. A medicines use review can help to ensure that medicines optimisation forms part of routine practice.

  15. Design and implementation of robust controllers for a gait trainer.

    PubMed

    Wang, F C; Yu, C H; Chou, T Y

    2009-08-01

This paper applies robust algorithms to control an active gait trainer for children with walking disabilities. Compared with traditional rehabilitation procedures, in which two or three trainers are required to assist the patient, a motor-driven mechanism was constructed to improve the efficiency of the procedures. First, a six-bar mechanism was designed and constructed to mimic the trajectory of children's ankles in walking. Second, system identification techniques were applied to obtain system transfer functions at different operating points by experiments. Third, robust control algorithms were used to design H∞ robust controllers for the system. Finally, the designed controllers were implemented to verify the system performance experimentally. From the results, the proposed robust control strategies are shown to be effective.

  16. Antibody–Drug Conjugates for Cancer Therapy

    PubMed Central

    Parslow, Adam C.; Parakh, Sagun; Lee, Fook-Thean; Gan, Hui K.; Scott, Andrew M.

    2016-01-01

    Antibody–drug conjugates (ADCs) take advantage of the specificity of a monoclonal antibody to deliver a linked cytotoxic agent directly into a tumour cell. The development of these compounds provides exciting opportunities for improvements in patient care. Here, we review the key issues impacting on the clinical success of ADCs in cancer therapy. Like many other developing therapeutic classes, there remain challenges in the design and optimisation of these compounds. As the clinical applications for ADCs continue to expand, key strategies to improve patient outcomes include better patient selection for treatment and the identification of mechanisms of therapy resistance. PMID:28536381

  17. Scenario-based, closed-loop model predictive control with application to emergency vehicle scheduling

    NASA Astrophysics Data System (ADS)

    Goodwin, Graham. C.; Medioli, Adrian. M.

    2013-08-01

Model predictive control has been a major success story in process control. More recently, the methodology has been used in other contexts, including automotive engine control, power electronics and telecommunications. Most applications focus on set-point tracking and use single-sequence optimisation. Here we consider an alternative class of problems, motivated by the scheduling of emergency vehicles, in which disturbances are the dominant feature. We develop a novel closed-loop model predictive control strategy aimed at this class of problems. We motivate, and illustrate, the ideas via the problem of fluid deployment of ambulance resources.
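A crude sketch of the scenario-based idea: sample disturbance sequences, share the first control move across all scenarios, and pick the move that minimises the average simulated cost, re-optimising at every step. The plant, noise level, grid and tail policy below are illustrative assumptions, not the authors' formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Scalar plant x+ = x + u + w, with disturbance w ~ N(0, 0.5^2).
def stage_cost(x, u):
    return x**2 + 0.1 * u**2

def scenario_mpc_step(x, horizon=3, n_scen=50):
    """Choose the first control move by minimising the average cost
    over sampled disturbance scenarios; later moves are chosen per
    scenario by a simple feedback rule (a crude closed-loop proxy)."""
    W = rng.normal(0.0, 0.5, size=(n_scen, horizon))
    grid = np.linspace(-3.0, 3.0, 61)        # candidate first moves
    best_u, best_cost = 0.0, np.inf
    for u0 in grid:
        total = 0.0
        for s in range(n_scen):
            cost = stage_cost(x, u0)
            xs = x + u0 + W[s, 0]
            for k in range(1, horizon):
                uk = -xs                     # greedy tail policy
                cost += stage_cost(xs, uk)
                xs = xs + uk + W[s, k]
            total += cost
        if total < best_cost:
            best_cost, best_u = total, u0
    return best_u

# Receding-horizon simulation from x0 = 2: apply the first move,
# observe the realised disturbance, then re-optimise.
x = 2.0
for _ in range(5):
    x = x + scenario_mpc_step(x) + rng.normal(0.0, 0.5)
```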

  18. A class of multi-period semi-variance portfolio for petroleum exploration and development

    NASA Astrophysics Data System (ADS)

    Guo, Qiulin; Li, Jianzhong; Zou, Caineng; Guo, Yujuan; Yan, Wei

    2012-10-01

Variance is substituted by semi-variance in Markowitz's portfolio selection model. For dynamic valuation of exploration and development projects, one-period portfolio selection is extended to multiple periods. In this article, a class of multi-period semi-variance exploration and development portfolio models is formulated. In addition, a hybrid genetic algorithm, which uses the position displacement strategy of the particle swarm optimiser as a mutation operation, is applied to solve the multi-period semi-variance model. For this class of portfolio model, numerical results show that the model is effective and feasible.
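The semi-variance objective that replaces variance in this line of work penalises only below-target outcomes. A minimal sketch of the objective itself, on hypothetical returns (the multi-period model and the hybrid genetic algorithm are beyond a few lines):

```python
import numpy as np

def semi_variance(returns, target=None):
    """Mean squared shortfall below the target (default: the mean).
    Unlike variance, upside deviations are not penalised."""
    r = np.asarray(returns, dtype=float)
    t = r.mean() if target is None else target
    shortfall = np.minimum(r - t, 0.0)
    return float(np.mean(shortfall**2))

# Five periods of (hypothetical) project returns.
r = [0.10, 0.02, -0.05, 0.07, -0.01]
sv = semi_variance(r)       # driven only by the below-mean periods
var = float(np.var(r))      # penalises upside and downside alike
```

Since every upside deviation is clipped to zero before squaring, the semi-variance never exceeds the variance about the same target.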

  19. Optimisation of GaN LEDs and the reduction of efficiency droop using active machine learning

    DOE PAGES

    Rouet-Leduc, Bertrand; Barros, Kipton Marcos; Lookman, Turab; ...

    2016-04-26

    A fundamental challenge in the design of LEDs is to maximise electro-luminescence efficiency at high current densities. We simulate GaN-based LED structures that delay the onset of efficiency droop by spreading carrier concentrations evenly across the active region. Statistical analysis and machine learning effectively guide the selection of the next LED structure to be examined based upon its expected efficiency as well as model uncertainty. This active learning strategy rapidly constructs a model that predicts Poisson-Schrödinger simulations of devices, and that simultaneously produces structures with higher simulated efficiencies.
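The acquire-where-uncertain loop described here can be sketched with a small Gaussian-process regression in plain NumPy. The one-dimensional toy "simulator" is a hypothetical stand-in for the Poisson-Schrödinger device model, and the kernel and its length-scale are illustrative assumptions:

```python
import numpy as np

def rbf(a, b, ell=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

def gp_posterior(Xtr, ytr, Xte, noise=1e-6):
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks = rbf(Xtr, Xte)
    mean = Ks.T @ np.linalg.solve(K, ytr)
    var = np.diag(rbf(Xte, Xte) - Ks.T @ np.linalg.solve(K, Ks))
    return mean, var

# Hypothetical stand-in for the expensive device simulation.
simulator = lambda x: np.sin(3.0 * x)
grid = np.linspace(0.0, 1.0, 101)

X = np.array([0.0, 1.0])
y = simulator(X)
for _ in range(6):                      # query where the model is least sure
    _, var = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(var)]
    X = np.append(X, x_next)
    y = np.append(y, simulator(x_next))

mean, var = gp_posterior(X, y, grid)    # uncertainty collapses quickly
```

Each query lands in the region of largest posterior variance, so eight evaluations in total already pin the model down across the whole design interval.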

  20. Takagi-Sugeno fuzzy model based robust dissipative control for uncertain flexible spacecraft with saturated time-delay input.

    PubMed

    Xu, Shidong; Sun, Guanghui; Sun, Weichao

    2017-01-01

In this paper, the problem of robust dissipative control is investigated for uncertain flexible spacecraft based on a Takagi-Sugeno (T-S) fuzzy model with saturated time-delay input. Unlike most existing strategies, a T-S fuzzy approximation approach is used to model the nonlinear dynamics of the flexible spacecraft. The physical constraints of the system, such as input delay, input saturation and parameter uncertainties, are also accounted for in the fuzzy model. By employing the Lyapunov-Krasovskii method and convex optimisation techniques, a novel robust controller is proposed to implement rest-to-rest attitude manoeuvres for flexible spacecraft, and the guaranteed dissipative performance enables the uncertain closed-loop system to reject the influence of elastic vibrations and external disturbances. Finally, an illustrative design example, together with simulation results, is provided to confirm the applicability and merits of the developed control strategy. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  1. Uncertainty, robustness, and the value of information in managing a population of northern bobwhites

    USGS Publications Warehouse

    Johnson, Fred A.; Hagan, Greg; Palmer, William E.; Kemmerer, Michael

    2014-01-01

    The abundance of northern bobwhites (Colinus virginianus) has decreased throughout their range. Managers often respond by considering improvements in harvest and habitat management practices, but this can be challenging if substantial uncertainty exists concerning the cause(s) of the decline. We were interested in how application of decision science could be used to help managers on a large, public management area in southwestern Florida where the bobwhite is a featured species and where abundance has severely declined. We conducted a workshop with managers and scientists to elicit management objectives, alternative hypotheses concerning population limitation in bobwhites, potential management actions, and predicted management outcomes. Using standard and robust approaches to decision making, we determined that improved water management and perhaps some changes in hunting practices would be expected to produce the best management outcomes in the face of uncertainty about what is limiting bobwhite abundance. We used a criterion called the expected value of perfect information to determine that a robust management strategy may perform nearly as well as an optimal management strategy (i.e., a strategy that is expected to perform best, given the relative importance of different management objectives) with all uncertainty resolved. We used the expected value of partial information to determine that management performance could be increased most by eliminating uncertainty over excessive-harvest and human-disturbance hypotheses. Beyond learning about the factors limiting bobwhites, adoption of a dynamic management strategy, which recognizes temporal changes in resource and environmental conditions, might produce the greatest management benefit. 
Our research demonstrates that robust approaches to decision making, combined with estimates of the value of information, can offer considerable insight into preferred management approaches when great uncertainty exists about system dynamics and the effects of management.
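The expected value of perfect information (EVPI) used above is the gap between acting after uncertainty is resolved and acting now under the prior. A minimal sketch with a hypothetical payoff table (the actions echo the paper's candidates, but all numbers are invented for illustration):

```python
import numpy as np

# Rows: candidate actions; columns: competing hypotheses about what
# limits the population. Entries: expected management benefit
# (hypothetical numbers for illustration only).
payoff = np.array([
    [8.0, 2.0, 4.0],   # improve water management
    [3.0, 7.0, 3.0],   # restrict harvest
    [4.0, 4.0, 5.0],   # reduce human disturbance
])
prior = np.array([0.5, 0.3, 0.2])   # current belief over hypotheses

# Act now: pick the action with the best prior-expected payoff.
best_under_uncertainty = np.max(payoff @ prior)

# Resolve uncertainty first, then act: expectation of the column-wise best.
best_with_perfect_info = np.sum(prior * payoff.max(axis=0))

evpi = best_with_perfect_info - best_under_uncertainty
```

EVPI is always non-negative; a small value signals that a robust strategy chosen today performs nearly as well as one chosen with all uncertainty resolved, which is the paper's criterion.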

  2. A General Framework of Persistence Strategies for Biological Systems Helps Explain Domains of Life

    PubMed Central

    Yafremava, Liudmila S.; Wielgos, Monica; Thomas, Suravi; Nasir, Arshan; Wang, Minglei; Mittenthal, Jay E.; Caetano-Anollés, Gustavo

    2012-01-01

    The nature and cause of the division of organisms in superkingdoms is not fully understood. Assuming that environment shapes physiology, here we construct a novel theoretical framework that helps identify general patterns of organism persistence. This framework is based on Jacob von Uexküll’s organism-centric view of the environment and James G. Miller’s view of organisms as matter-energy-information processing molecular machines. Three concepts describe an organism’s environmental niche: scope, umwelt, and gap. Scope denotes the entirety of environmental events and conditions to which the organism is exposed during its lifetime. Umwelt encompasses an organism’s perception of these events. The gap is the organism’s blind spot, the scope that is not covered by umwelt. These concepts bring organisms of different complexity to a common ecological denominator. Ecological and physiological data suggest organisms persist using three strategies: flexibility, robustness, and economy. All organisms use umwelt information to flexibly adapt to environmental change. They implement robustness against environmental perturbations within the gap generally through redundancy and reliability of internal constituents. Both flexibility and robustness improve survival. However, they also incur metabolic matter-energy processing costs, which otherwise could have been used for growth and reproduction. Lineages evolve unique tradeoff solutions among strategies in the space of what we call “a persistence triangle.” Protein domain architecture and other evidence support the preferential use of flexibility and robustness properties. Archaea and Bacteria gravitate toward the triangle’s economy vertex, with Archaea biased toward robustness. Eukarya trade economy for survivability. Protista occupy a saddle manifold separating akaryotes from multicellular organisms. 
Plants and the more flexible Fungi share an economic stratum, and Metazoa are locked in a positive feedback loop toward flexibility. PMID:23443991

  3. Optimised collision avoidance for an ultra-close rendezvous with a failed satellite based on the Gauss pseudospectral method

    NASA Astrophysics Data System (ADS)

    Chu, Xiaoyu; Zhang, Jingrui; Lu, Shan; Zhang, Yao; Sun, Yue

    2016-11-01

This paper presents a trajectory planning algorithm to optimise the collision avoidance of a chasing spacecraft operating in ultra-close proximity to a failed satellite. The complex configuration and the tumbling motion of the failed satellite are considered. The two-spacecraft rendezvous dynamics are formulated in the target body frame, and the collision avoidance constraints are detailed, particularly concerning the uncertainties. A solution of the approaching problem is generated using the Gauss pseudospectral method, and a closed-loop control is used to track the optimised trajectory. Numerical results are provided to demonstrate the effectiveness of the proposed algorithms.

  4. A Hermeneutic Reading into "What Strategy Is": Ambiguous Means-End Relationship

    ERIC Educational Resources Information Center

    Bakir, Ali; Todorovic, Milan

    2010-01-01

    Given the underutilization of hermeneutic research in organizations and the recognition that we do not know what strategy is, we undertake a hermeneutic reading of authorial texts to develop a robust understanding of strategy. We enter into a self-reflexive dialogue with the text to accomplish a fusion of horizons where we hope to turn our…

  5. Oral relative bioavailability of Dichlorodiphenyltrichloroethane (DDT) in contaminated soil and its prediction using in vitro strategies for exposure refinement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Juhasz, Albert L., E-mail: Albert.Juhasz@unisa.edu

In this study, the bioavailability of DDTr (sum of DDT, DDD and DDE isomers) in pesticide-contaminated soil was assessed using an in vivo mouse model. DDTr relative bioavailability (RBA) ranged from 18.7±0.9% (As35) to 60.8±7.8% (As36), indicating that a significant portion of soil-bound DDTr was not available for absorption following ingestion. When DDTr bioaccessibility was assessed using the organic Physiologically Based Extraction Test (org-PBET), the inclusion of a sorption sink (silicone cord) enhanced DDTr desorption by up to 20-fold (1.6–3.8% versus 18.9–56.3%) compared to DDTr partitioning into gastrointestinal fluid alone. Enhanced desorption occurred as a result of the silicone cord acting as a reservoir for solubilized DDTr to partition into, thereby creating a flux for further desorption until equilibrium was achieved. When the relationship between in vivo and in vitro data was assessed, a strong correlation was observed between the mouse bioassay and the org-PBET+silicone cord (slope=0.94, y-intercept=3.5, r²=0.72), suggesting that the in vitro approach may provide a robust surrogate measure for the prediction of DDTr RBA in contaminated soil. - Highlights: • An optimised mouse assay was used to quantify DDTr relative bioavailability in soil. • DDTr bioaccessibility was also determined using an in vitro sorption sink approach. • A strong correlation was observed between in vivo and in vitro data. • The sorption sink approach may be used to predict DDTr relative bioavailability.

  6. Musical training generalises across modalities and reveals efficient and adaptive mechanisms for reproducing temporal intervals.

    PubMed

    Aagten-Murphy, David; Cappagli, Giulia; Burr, David

    2014-03-01

Expert musicians are able to time their actions accurately and consistently during a musical performance. We investigated how musical expertise influences the ability to reproduce auditory intervals and how this generalises across different techniques and sensory modalities. We first compared various reproduction strategies and interval lengths, to examine the effects in general and to optimise experimental conditions for testing the effect of music, and found that the effects were robust and consistent across different paradigms. Focussing on a 'ready-set-go' paradigm, subjects reproduced time intervals drawn from distributions varying in total length (176, 352 or 704 ms) or in the number of discrete intervals within the total length (3, 5, 11 or 21 discrete intervals). Overall, Musicians performed more veridically than Non-Musicians, and all subjects reproduced auditory-defined intervals more accurately than visually-defined intervals. However, Non-Musicians, particularly with visual stimuli, consistently exhibited a substantial and systematic regression towards the mean interval. When subjects judged intervals from distributions of longer total length they tended to regress more towards the mean, while the ability to discriminate between discrete intervals within the distribution had little influence on subject error. These results are consistent with a Bayesian model that minimises reproduction errors by incorporating a central tendency prior weighted by the subject's own temporal precision relative to the current distribution of intervals. Finally, a strong correlation was observed between the duration of formal musical training and total reproduction errors in both modalities (accounting for 30% of the variance). Taken together these results demonstrate that formal musical training improves temporal reproduction, and that this improvement transfers from audition to vision. 
They further demonstrate the flexibility of sensorimotor mechanisms in adapting to different task conditions to minimise temporal estimation errors. © 2013.
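The central-tendency model described above has a simple closed form when both the prior over intervals and the sensory noise are Gaussian: the estimate is a precision-weighted average of the observed duration and the distribution mean. A sketch with hypothetical numbers (not the paper's fitted parameters):

```python
def bayes_reproduction(t_obs, prior_mean, prior_var, sensory_var):
    """Posterior-mean estimate of an interval: a precision-weighted
    average of the observed duration and the distribution mean."""
    w = prior_var / (prior_var + sensory_var)   # weight on the observation
    return w * t_obs + (1.0 - w) * prior_mean

# Interval distribution centred at 500 ms with SD 100 ms; a 700 ms
# interval is observed by a precise (musician-like) observer and by a
# noisy one (sensory SDs of 30 ms and 120 ms, hypothetical values).
precise = bayes_reproduction(700.0, 500.0, 100.0**2, 30.0**2)
noisy = bayes_reproduction(700.0, 500.0, 100.0**2, 120.0**2)
```

The noisier observer's estimate sits much closer to 500 ms, reproducing the stronger regression to the mean seen in the Non-Musicians.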

  7. Prognostic nomogram and score to predict overall survival in locally advanced untreated pancreatic cancer (PROLAP)

    PubMed Central

    Vernerey, Dewi; Huguet, Florence; Vienot, Angélique; Goldstein, David; Paget-Bailly, Sophie; Van Laethem, Jean-Luc; Glimelius, Bengt; Artru, Pascal; Moore, Malcolm J; André, Thierry; Mineur, Laurent; Chibaudel, Benoist; Benetkiewicz, Magdalena; Louvet, Christophe; Hammel, Pascal; Bonnetain, Franck

    2016-01-01

    Background: The management of locally advanced pancreatic cancer (LAPC) patients remains controversial. Better discrimination for overall survival (OS) at diagnosis is needed. We address this issue by developing and validating a prognostic nomogram and a score for OS in LAPC (PROLAP). Methods: Analyses were derived from 442 LAPC patients enrolled in the LAP07 trial. The prognostic ability of 30 baseline parameters was evaluated using univariate and multivariate Cox regression analyses. Performance assessment and internal validation of the final model were done with Harrell's C-index, calibration plot and bootstrap sample procedures. On the basis of the final model, a prognostic nomogram and a score were developed, and externally validated in 106 consecutive LAPC patients treated in Besançon Hospital, France. Results: Age, pain, tumour size, albumin and CA 19-9 were independent prognostic factors for OS. The final model had good calibration, acceptable discrimination (C-index=0.60) and robust internal validity. The PROLAP score has the potential to delineate three different prognosis groups with median OS of 15.4, 11.7 and 8.5 months (log-rank P<0.0001). The score ability to discriminate OS was externally confirmed in 63 (59%) patients with complete clinical data derived from a data set of 106 consecutive LAPC patients; median OS of 18.3, 14.1 and 7.6 months for the three groups (log-rank P<0.0001). Conclusions: The PROLAP nomogram and score can accurately predict OS before initiation of induction chemotherapy in LAPC-untreated patients. They may help to optimise clinical trials design and might offer the opportunity to define risk-adapted strategies for LAPC management in the future. PMID:27404456

  8. Improving target coverage and organ-at-risk sparing in intensity-modulated radiotherapy for cervical oesophageal cancer using a simple optimisation method.

    PubMed

    Lu, Jia-Yang; Cheung, Michael Lok-Man; Huang, Bao-Tian; Wu, Li-Li; Xie, Wen-Jia; Chen, Zhi-Jian; Li, De-Rui; Xie, Liang-Xi

    2015-01-01

    To assess the performance of a simple optimisation method for improving target coverage and organ-at-risk (OAR) sparing in intensity-modulated radiotherapy (IMRT) for cervical oesophageal cancer. For 20 selected patients, clinically acceptable original IMRT plans (Original plans) were created, and two optimisation methods were adopted to improve the plans: 1) a base dose function (BDF)-based method, in which the treatment plans were re-optimised based on the original plans, and 2) a dose-controlling structure (DCS)-based method, in which the original plans were re-optimised by assigning additional constraints for hot and cold spots. The Original, BDF-based and DCS-based plans were compared with regard to target dose homogeneity, conformity, OAR sparing, planning time and monitor units (MUs). Dosimetric verifications were performed and delivery times were recorded for the BDF-based and DCS-based plans. The BDF-based plans provided significantly superior dose homogeneity and conformity compared with both the DCS-based and Original plans. The BDF-based method further reduced the doses delivered to the OARs by approximately 1-3%. The re-optimisation time was reduced by approximately 28%, but the MUs and delivery time were slightly increased. All verification tests were passed and no significant differences were found. The BDF-based method for the optimisation of IMRT for cervical oesophageal cancer can achieve significantly better dose distributions with better planning efficiency at the expense of slightly more MUs.

  9. SU-E-T-452: Impact of Respiratory Motion On Robustly-Optimized Intensity-Modulated Proton Therapy to Treat Lung Cancers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, W; Schild, S; Bues, M

Purpose: We compared conventionally optimized intensity-modulated proton therapy (IMPT) treatment plans against worst-case robustly optimized treatment plans for lung cancer. The comparison of the two IMPT optimization strategies focused on the resulting plans' ability to retain dose objectives under the influence of patient set-up, inherent proton range uncertainty, and dose perturbation caused by respiratory motion. Methods: For each of the 9 lung cancer cases, two treatment plans were created, accounting for treatment uncertainties in two different ways. The first used the conventional method: delivery of prescribed dose to the planning target volume (PTV) that is geometrically expanded from the internal target volume (ITV). The second employed the worst-case robust optimization scheme that addressed set-up and range uncertainties through beamlet optimization. The plan optimality and plan robustness were calculated and compared. Furthermore, the effects on dose distributions of the changes in patient anatomy due to respiratory motion were investigated for both strategies by comparing the corresponding plan evaluation metrics at the end-inspiration and end-expiration phases and the absolute differences between these phases. The mean plan evaluation metrics of the two groups were compared using two-sided paired t-tests. Results: Without respiratory motion considered, we affirmed that worst-case robust optimization is superior to PTV-based conventional optimization in terms of plan robustness and optimality. With respiratory motion considered, robust optimization still leads to dose distributions more robust to respiratory motion for targets, and comparable or even better plan optimality [D95% ITV: 96.6% versus 96.1% (p=0.26), D5% - D95% ITV: 10.0% versus 12.3% (p=0.082), D1% spinal cord: 31.8% versus 36.5% (p=0.035)]. Conclusion: Worst-case robust optimization led to superior solutions for lung IMPT. Although robust optimization did not explicitly account for respiratory motion, it produced motion-resistant treatment plans. However, further research is needed to incorporate respiratory motion into IMPT robust optimization.
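Worst-case (minimax) robust optimisation of this kind can be sketched on a toy problem: minimise the largest squared dose error over a handful of uncertainty scenarios. The two-beamlet influence matrices below are invented for illustration; a clinical plan involves thousands of beamlets and voxels:

```python
import numpy as np

# Toy beamlet-influence matrices: a nominal geometry plus two
# set-up-shift scenarios (all numbers hypothetical).
A = [np.array([[1.0, 0.2], [0.2, 1.0]]),
     np.array([[0.8, 0.4], [0.2, 0.9]]),
     np.array([[1.1, 0.1], [0.1, 1.2]])]
d = np.array([1.0, 1.0])            # prescribed dose to two voxels

def worst_case(w):
    """Worst-case squared dose error over all scenarios."""
    return max(float(np.sum((Ai @ w - d) ** 2)) for Ai in A)

# Subgradient descent on the minimax objective: at each step, descend
# the gradient of whichever scenario is currently worst.
w = np.zeros(2)
w_best, f_best = w.copy(), worst_case(w)
for k in range(2000):
    s = max(range(len(A)), key=lambda i: float(np.sum((A[i] @ w - d) ** 2)))
    g = 2.0 * A[s].T @ (A[s] @ w - d)
    w = np.maximum(w - (0.5 / (k + 10)) * g, 0.0)   # non-negative weights
    f = worst_case(w)
    if f < f_best:
        w_best, f_best = w.copy(), f

# For contrast: optimising the nominal scenario alone ignores the shifts.
w_nominal = np.linalg.lstsq(A[0], d, rcond=None)[0]
```

The diminishing step size and best-iterate tracking are standard devices for subgradient methods on pointwise-maximum objectives, which are convex but not smooth.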

  10. Molecular simulation of the thermophysical properties and phase behaviour of impure CO2 relevant to CCS.

    PubMed

    Cresswell, Alexander J; Wheatley, Richard J; Wilkinson, Richard D; Graham, Richard S

    2016-10-20

Impurities from the CCS chain can greatly influence the physical properties of CO2. This has important design, safety and cost implications for the compression, transport and storage of CO2. There is an urgent need to understand and predict the properties of impure CO2 to assist with CCS implementation. However, CCS presents demanding modelling requirements. A suitable model must both accurately and robustly predict CO2 phase behaviour over a wide range of temperatures and pressures, and maintain that predictive power for CO2 mixtures with numerous, mutually interacting chemical species. A promising technique to address this task is molecular simulation. It offers a molecular approach, with foundations in firmly established physical principles, along with the potential to predict the wide range of physical properties required for CCS. The quality of predictions from molecular simulation depends on accurate force-fields to describe the interactions between CO2 and other molecules. Unfortunately, there is currently no universally applicable method to obtain force-fields suitable for molecular simulation. In this paper we present two methods of obtaining force-fields: the first being semi-empirical and the second using ab initio quantum-chemical calculations. In the first approach we optimise the impurity force-field against measurements of the phase and pressure-volume behaviour of CO2 binary mixtures with N2, O2, Ar and H2. A gradient-free optimiser allows us to use the simulation itself as the underlying model. This leads to accurate and robust predictions under conditions relevant to CCS. In the second approach we use quantum-chemical calculations to produce ab initio evaluations of the interactions between CO2 and relevant impurities, taking N2 as an exemplar. We use a modest number of these calculations to train a machine-learning algorithm, known as a Gaussian process, to describe these data. The resulting model is then able to accurately predict a much broader set of ab initio force-field calculations at comparatively low numerical cost. Although our method is not yet ready to be implemented in a molecular simulation, we outline the necessary steps here. Such simulations have the potential to deliver first-principles simulation of the thermodynamic properties of impure CO2, without fitting to experimental data.
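Gaussian-process regression of the kind described, training on a modest number of expensive evaluations and then predicting densely at low cost, can be sketched in a few lines. A Morse potential stands in (hypothetically) for the ab initio interaction energies, and the kernel length-scale is an illustrative assumption:

```python
import numpy as np

def k_rbf(a, b, ell=0.2):
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

# Hypothetical stand-in for expensive ab initio pair energies:
# a Morse potential with unit well depth and equilibrium separation 1.2.
def morse(r):
    u = 1.0 - np.exp(-1.5 * (r - 1.2))
    return u**2 - 1.0

r_train = np.linspace(0.8, 3.0, 20)     # a modest number of "calculations"
e_train = morse(r_train)

# Fit: solve for kernel-regression coefficients (tiny jitter for stability).
K = k_rbf(r_train, r_train) + 1e-8 * np.eye(len(r_train))
coef = np.linalg.solve(K, e_train)

# Predict densely at negligible cost compared with new ab initio runs.
r_dense = np.linspace(0.9, 2.8, 300)
e_pred = k_rbf(r_dense, r_train) @ coef
```

Twenty training evaluations suffice here to recover the whole curve, including the well depth, which mirrors the paper's point that a modest number of quantum-chemical calculations can stand in for a much larger set.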

  11. Low-cost sensors to monitor groundwater drought in Somalia

    NASA Astrophysics Data System (ADS)

    Buytaert, W.; Ochoa-Tocachi, B. F.; Caniglia, D.; Haibe, K.; Butler, A. P.

    2017-12-01

    Somalia is one of the poorest countries in the world, devastated by conflict and suffering from the most severe droughts in living memory. Over 6 million people are in need of assistance, and about 3 million are threatened with famine. In April 2017, the WHO estimated that more than 25,000 people had been struck by cholera or acute watery diarrhoea, and this number is rising quickly. About half a million Somalis have been displaced internally, many of them in search of water. Some 3 million pastoralists have lost 70% of their livestock as a result of the drought. Humanitarian organisations and government agencies invest large amounts of resources to alleviate these conditions. It is paramount to inform the design, focus, and optimisation of these interventions by monitoring and quantifying water resources. Yet regions such as Somalia are extremely sparsely gauged, owing to a combination of a lack of resources and technical expertise and the harsh geographical and geopolitical conditions. Low-cost, robust, and reliable sensors may provide a solution to this problem. We present the results of a research project that aimed to leverage new developments in sensor, logger, and data transmission technologies to develop low-cost water level sensors to monitor hand-dug groundwater wells in real time. We tested three sensor types: pressure transducers, ultrasound-based distance sensors, and lidar, all coupled to low-cost logging systems. The different designs were tested both under laboratory conditions and in situ in hand-dug wells in Somaliland. Our results show that it is technically possible to build sensors with a total cost of around US$250 each that are fit for purpose for the required application. In-situ deployment over a period of 2 months highlights their robustness despite severe logistical and practical challenges, though further tests are required to understand their long-term reliability.
Operating the sensors at one-minute resolution makes it possible not only to record groundwater levels but also to estimate the precise timing and volumes of water use. The sensors, in combination with other upcoming technologies such as remote sensing and wireless transmission, hold promise for improved knowledge generation to optimise the interventions of development and humanitarian organisations.
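The water-use estimate mentioned above can be illustrated with a back-of-the-envelope calculation. This sketch assumes a cylindrical hand-dug well and attributes every drop between consecutive one-minute readings to abstraction; the well diameter and readings are invented for illustration.

```python
import math

def abstraction_volume(levels_m, well_diameter_m):
    """Estimate total water volume withdrawn from a cylindrical well.

    levels_m: water levels (metres above the well bottom) at one-minute
    resolution. Every drop between consecutive readings is attributed to
    abstraction; rises (recovery inflow) are ignored in this sketch.
    """
    area = math.pi * (well_diameter_m / 2.0) ** 2
    drops = [max(0.0, a - b) for a, b in zip(levels_m, levels_m[1:])]
    return area * sum(drops)  # cubic metres

# Invented readings: two abstraction events in a 1.5 m diameter well.
levels = [4.00, 4.00, 3.80, 3.60, 3.62, 3.65, 3.40, 3.41]
volume = abstraction_volume(levels, 1.5)
```

In practice the recovery inflow during abstraction would also need to be accounted for, which is why the one-minute resolution matters: it keeps each reading interval short relative to the well's recovery time.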

  12. The Stochastic Evolutionary Game for a Population of Biological Networks Under Natural Selection

    PubMed Central

    Chen, Bor-Sen; Ho, Shih-Ju

    2014-01-01

    In this study, a population of evolutionary biological networks is described by a stochastic dynamic system with intrinsic random parameter fluctuations due to genetic variations and external disturbances caused by environmental changes in the evolutionary process. Since information on environmental changes is unavailable and their occurrence is unpredictable, they can be considered a game player with the potential to destroy phenotypic stability. The biological network needs to develop an evolutionary strategy to improve phenotypic stability as much as possible, so it can be considered the other game player in the evolutionary process, i.e., the evolution becomes a stochastic Nash game of minimizing the maximum network evolution level caused by the worst environmental disturbances. Based on the nonlinear stochastic evolutionary game strategy, we find that some genetic variations can be used in natural selection to construct negative feedback loops, efficiently improving network robustness. This provides larger genetic robustness as a buffer against neutral genetic variations, as well as larger environmental robustness to resist environmental disturbances and maintain network phenotypic traits in the evolutionary process. In this situation, the robust phenotypic traits of stochastic biological networks can be more frequently selected by natural selection in evolution. However, if the harbored neutral genetic variations accumulate to a sufficiently large degree, and environmental disturbances are strong enough that the network robustness can no longer confer enough genetic robustness and environmental robustness, then the phenotype robustness might break down. In this case, a network phenotypic trait may be pushed from one equilibrium point to another, changing the phenotypic trait and starting a new phase of network evolution through the hidden neutral genetic variations harbored in network robustness by adaptive evolution.
Further, the proposed evolutionary game is extended to an n-tuple evolutionary game of stochastic biological networks with m players (competitive populations) and k environmental dynamics. PMID:24558296
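The min-max (Nash) idea underlying the game can be illustrated on a deliberately tiny toy problem, far simpler than the paper's nonlinear stochastic network model: a scalar linear system in which a designer picks a feedback gain and the environment picks the worst admissible disturbance.

```python
import numpy as np

# Toy min-max game: the "designer" picks a feedback gain k; the
# environment picks the worst disturbance d in [-d_max, d_max]. For the
# scalar system x[t+1] = (a - k + d) * x[t], the worst-case contraction
# factor is max_d |a - k + d|, and the robust (minimax) gain minimises it.
a, d_max = 1.5, 0.3
gains = np.linspace(0.0, 3.0, 301)
disturbances = np.linspace(-d_max, d_max, 61)

worst = np.array([np.abs(a - k + disturbances).max() for k in gains])
k_star = gains[np.argmin(worst)]   # centres the uncertainty: k* = a
```

The minimax gain cancels the nominal dynamics exactly, leaving the worst-case contraction factor equal to the disturbance bound; no choice of gain can do better against an adversarial disturbance.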

  13. Fabrication of Organic Radar Absorbing Materials: A Report on the TIF Project

    DTIC Science & Technology

    2005-05-01

    thickness, permittivity and permeability. The ability to measure the permittivity and permeability is an essential requirement for designing an optimised...absorber. Good optimisation codes are required in order to achieve the best possible absorber designs. In this report, the results from a...through measurement of their conductivity and permittivity at microwave frequencies. Methods were then developed for optimising the design of

  14. Impact of the calibration period on the conceptual rainfall-runoff model parameter estimates

    NASA Astrophysics Data System (ADS)

    Todorovic, Andrijana; Plavsic, Jasna

    2015-04-01

    A conceptual rainfall-runoff model is defined by its structure and parameters, which are commonly inferred through model calibration. Parameter estimates depend on the objective function(s), the optimisation method, and the calibration period. Model calibration over different periods may result in dissimilar parameter estimates, while model efficiency decreases outside the calibration period. The problem of model (parameter) transferability, which conditions the reliability of hydrologic simulations, has been investigated for decades. In this paper, the dependence of the parameter estimates and model performance on the calibration period is analysed. The main question addressed is: are there any changes in the optimised parameters and model efficiency that can be linked to changes in hydrologic or meteorological variables (flow, precipitation and temperature)? The conceptual, semi-distributed HBV-light model is calibrated over five-year periods shifted by one year (sliding time windows). The length of the calibration periods is selected to enable identification of all parameters. One water year of model warm-up precedes every simulation, which starts at the beginning of a water year. The model is calibrated using the built-in GAP optimisation algorithm. The objective function used for calibration is composed of the Nash-Sutcliffe coefficient for flows, the Nash-Sutcliffe coefficient for logarithms of flows, and the volumetric error, all of which participate in the composite objective function with approximately equal weights. The same prior parameter ranges are used in all simulations. The model is calibrated against flows observed at the Slovac stream gauge on the Kolubara River in Serbia (records from 1954 to 2013). There are no trends in precipitation or in flows; however, there is a statistically significant increasing trend in temperatures in this catchment.
Parameter variability across the calibration periods is quantified in terms of standard deviations of normalised parameters, enabling detection of the most variable parameters. Correlation coefficients between the optimised model parameters and total precipitation P, mean temperature T and mean flow Q are calculated to give insight into parameter dependence on the hydrometeorological drivers. The results reveal high sensitivity of almost all model parameters to the calibration period. The highest variability is displayed by the refreezing coefficient, water holding capacity, and temperature gradient. The only statistically significant (decreasing) trend is detected in the evapotranspiration reduction threshold. Statistically significant correlations are detected between the precipitation gradient and precipitation depth, and between the time-area histogram base and flows. All other correlations are not statistically significant, implying that changes in optimised parameters cannot generally be linked to changes in P, T or Q. As for model performance, the model reproduces the observed runoff satisfactorily, though runoff is slightly overestimated in wet periods. The Nash-Sutcliffe efficiency coefficient (NSE) ranges from 0.44 to 0.79. Higher NSE values are obtained over wetter periods, which is supported by a statistically significant correlation between NSE and flows. Overall, no systematic variations in parameters or in model performance are detected. Parameter variability may therefore be attributed to errors in data or inadequacies in the model structure. Further research is required to examine the impact of the calibration strategy or model structure on the variability of optimised parameters in time.
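The Nash-Sutcliffe coefficient and a composite objective of the kind used here are easy to state in code. The weighting below is illustrative (the exact composite used with HBV-light is not reproduced); NSE itself is the standard formula.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 for a perfect fit; values <= 0 mean
    the simulation is no better than predicting the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def composite_objective(obs, sim, eps=1e-6):
    """Equal-weight mix of NSE on flows, NSE on log flows, and a
    volumetric-error term (to be maximised; weighting is illustrative)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    vol_err = abs(np.sum(sim) - np.sum(obs)) / np.sum(obs)
    return (nse(obs, sim)
            + nse(np.log(obs + eps), np.log(sim + eps))
            + (1.0 - vol_err)) / 3.0

obs = np.array([1.0, 3.0, 8.0, 4.0, 2.0, 1.5])
score = composite_objective(obs, obs)   # perfect simulation scores 1
```

The log-flow term weights low-flow periods more heavily, which is why such composites are popular when both flood peaks and recessions must be reproduced.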

  15. FRIEDA: Flexible Robust Intelligent Elastic Data Management Framework

    DOE PAGES

    Ghoshal, Devarshi; Hendrix, Valerie; Fox, William; ...

    2017-02-01

    Scientific applications are increasingly using cloud resources for their data analysis workflows. However, managing data effectively and efficiently over these cloud resources is challenging due to the myriad storage choices with different performance and cost trade-offs, complex application choices, and the complexity associated with elasticity and failure rates in these environments. The different data access patterns of data-intensive scientific applications require a more flexible and robust data management solution than the ones currently in existence. FRIEDA is a Flexible Robust Intelligent Elastic Data Management framework that employs a range of data management strategies in cloud environments. FRIEDA can manage the storage and data lifecycle of applications in cloud environments. There are four stages in the data management lifecycle of FRIEDA: (i) storage planning, (ii) provisioning and preparation, (iii) data placement, and (iv) execution. FRIEDA defines a data control plane and an execution plane. The data control plane defines the data partition and distribution strategy, whereas the execution plane manages the execution of the application using a master-worker paradigm. FRIEDA also provides different data management strategies, either to partition the data in real time, or to predetermine the data partitions prior to application execution.
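The control-plane partitioning strategies mentioned above can be sketched generically. The functions below are illustrative stand-ins, not FRIEDA's API: one predetermined round-robin split, and one hash-based placement that any node can compute independently of the master.

```python
import hashlib

def round_robin_partition(files, n_workers):
    """Predetermined partitioning: worker i receives every n-th file."""
    return [files[i::n_workers] for i in range(n_workers)]

def hash_partition(files, n_workers):
    """Deterministic placement from a file-name hash, so any node can
    locate a file's partition without consulting the master."""
    parts = [[] for _ in range(n_workers)]
    for name in files:
        digest = hashlib.md5(name.encode()).hexdigest()
        parts[int(digest, 16) % n_workers].append(name)
    return parts

files = [f"chunk-{i:03d}.dat" for i in range(10)]
rr = round_robin_partition(files, 3)
hp = hash_partition(files, 3)
```

Round-robin gives the most even spread for uniformly sized files; hash-based placement trades some balance for lookup without coordination, which matters when workers join or leave elastically.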

  16. FRIEDA: Flexible Robust Intelligent Elastic Data Management Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghoshal, Devarshi; Hendrix, Valerie; Fox, William

    Scientific applications are increasingly using cloud resources for their data analysis workflows. However, managing data effectively and efficiently over these cloud resources is challenging due to the myriad storage choices with different performance and cost trade-offs, complex application choices, and the complexity associated with elasticity and failure rates in these environments. The different data access patterns of data-intensive scientific applications require a more flexible and robust data management solution than the ones currently in existence. FRIEDA is a Flexible Robust Intelligent Elastic Data Management framework that employs a range of data management strategies in cloud environments. FRIEDA can manage the storage and data lifecycle of applications in cloud environments. There are four stages in the data management lifecycle of FRIEDA: (i) storage planning, (ii) provisioning and preparation, (iii) data placement, and (iv) execution. FRIEDA defines a data control plane and an execution plane. The data control plane defines the data partition and distribution strategy, whereas the execution plane manages the execution of the application using a master-worker paradigm. FRIEDA also provides different data management strategies, either to partition the data in real time, or to predetermine the data partitions prior to application execution.

  17. Adaptively synchronous scalable spread spectrum (A4S) data-hiding strategy for three-dimensional visualization

    NASA Astrophysics Data System (ADS)

    Hayat, Khizar; Puech, William; Gesquière, Gilles

    2010-04-01

    We propose an adaptively synchronous scalable spread spectrum (A4S) data-hiding strategy to integrate the disparate data needed for a typical 3-D visualization into a single JPEG2000 format file. JPEG2000 encoding provides a standard format on one hand and the multiresolution needed for scalability on the other. The method has the potential to be imperceptible and robust at the same time. While spread spectrum (SS) methods are known for the high robustness they offer, our data-hiding strategy is at the same time removable, which ensures the highest possible visualization quality. The SS embedding of the discrete wavelet transform (DWT)-domain depth map is carried out in the transform-domain YCrCb components of the JPEG2000 coding stream just after the DWT stage. To maintain synchronization, the embedding is carried out while taking into account the correspondence of subbands. Since security is not the immediate concern, we are at liberty with the strength of embedding. This permits us to increase the robustness and make our method reversible. To estimate the maximum tolerable error in the depth map according to a given viewpoint, a human visual system (HVS)-based psychovisual analysis is also presented.
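The spread-spectrum embedding and its removability can be demonstrated on a 1-D signal standing in for one DWT subband. This is a generic SS sketch, not the A4S implementation: the host, chip sequences, and embedding strength are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Host signal stands in for one DWT subband of the carrier image.
host = rng.normal(0.0, 1.0, 4096)

# One pseudo-random +/-1 chip sequence per payload bit, synchronised
# between embedder and detector through a shared seed.
bits = np.array([1, 0, 1, 1, 0])
chips = rng.choice([-1.0, 1.0], size=(bits.size, host.size))
alpha = 0.5            # strength: higher = more robust, less imperceptible

symbols = 2.0 * bits - 1.0                 # map {0, 1} to {-1, +1}
marked = host + alpha * (symbols @ chips)  # embed all bits at once

# Blind correlation detection: the host is nearly uncorrelated with each
# chip sequence, so the sign of each correlation recovers the bit.
corr = marked @ chips.T / host.size
recovered = (corr > 0).astype(int)

# Removability: the embedder can subtract the exact mark it added.
restored = marked - alpha * (symbols @ chips)
```

The detector needs only the shared seed to regenerate the chips, which is the synchronisation property the abstract relies on; removal additionally requires knowing the decoded payload, after which the host subband is recovered exactly.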

  18. Climate, Land-, Energy-, Water-use simulations (CLEWs) in Mauritius - an integrated optimisation approach

    NASA Astrophysics Data System (ADS)

    Alfstad, Thomas; Howells, Mark; Rogner, Holger; Ramos, Eunice; Zepeda, Eduardo

    2016-04-01

    The Climate, Land, Energy and Water (CLEW) framework is a set of methodologies for integrated assessment of resource systems. It was developed to provide a means to simultaneously address matters pertaining to energy, water and food security. This is done while considering both the impact that the utilization of these resources has on our climate, and how our ability to continue using these resources could be affected by climate change. CLEW is being applied in Mauritius to provide policy-relevant analysis for sustainable development. The work aims to explore the interplay among the different elements of a national sustainable development strategy. A driving motivation is to address issues pertaining to policy cohesion by exploring cross-sectoral impacts of individual policies and measures. The analysis explores how policies and actions intended to promote sustainability have ramifications beyond the sector of the economy where they are applied. A primary concern is to ensure that efforts undertaken in pursuit of one policy goal do not inadvertently compromise progress towards attaining goals in other areas. Conversely, there may be instances where an action has multiple benefits across various areas. Identifying such trade-offs and synergies can provide additional insights into development policy and support the formulation of robust sustainable development strategies. The agreed sustainable development goals clearly illustrate the multi-faceted and multi-dimensional nature of the development challenge, with many overlapping and interlinked concerns. This work focuses on the links between food, energy, water and climate policy, which have proven to be particularly closely intertwined. In Mauritius, the highly interlinked and interdependent nature of the energy and sugar industries, for example, highlights the need for coherent and integrated assessment of the role of these sectors in support of sustainable development in the country. Promoting energy self-sufficiency, cutting carbon emissions, adapting to climate change and supporting incomes in the agricultural sector are not separate goals but interlinked ones, and a holistic and inclusive view of policy formulation is likely to lead to more sustainable outcomes. This presentation will share the findings and lessons learned from this work.

  19. Enhancing the informed consent process for critical care research: strategies from a thromboprophylaxis trial.

    PubMed

    Smith, Orla M; McDonald, Ellen; Zytaruk, Nicole; Foster, Denise; Matte, Andrea; Clarke, France; Fleury, Suzie; Krause, Katie; McArdle, Tracey; Skrobik, Yoanna; Cook, Deborah J

    2013-12-01

    Critically ill patients lack capacity for decisions about research participation. Consent to enrol these patients in studies is typically obtained from substitute decision-makers. To present strategies that may optimise the process of obtaining informed consent from substitute decision-makers for participation of critically ill patients in trials. We use examples from a randomised trial of heparin thromboprophylaxis in the intensive care unit (PROTECT, clinicaltrials.gov NCT00182143). 3764 patients were randomised, with an informed consent rate of 82%; 90% of consents were obtained from substitute decision-makers. North American PROTECT research coordinators attended three meetings to discuss enrolment: (1) Trial start-up (January 2006); (2) Near trial closure (January 2010); and (3) Post-publication (April 2011). Data were derived from slide presentations, field notes from break-out groups and plenary discussions, then analysed inductively. We derived three phases for the informed consent process: (1) Preparation for the Consent Encounter; (2) The Consent Encounter; and (3) Follow-up to the Consent Encounter. Specific strategies emerged for each phase: Phase 1 (four strategies); Phase 2 (six strategies); and Phase 3 (three strategies). We identified 13 strategies that may improve the process of obtaining informed consent from substitute decision-makers and be generalisable to other settings and studies. Copyright © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.

  20. Estimation of dynamic treatment strategies for maintenance therapy of children with acute lymphoblastic leukaemia: an application of history-adjusted marginal structural models.

    PubMed

    Rosthøj, S; Keiding, N; Schmiegelow, K

    2012-02-28

    Childhood acute lymphoblastic leukaemia is treated with long-term intensive chemotherapy. During the latter part of the treatment, the maintenance therapy, the patients receive oral doses of two cytostatics. The doses are tailored to blood counts measured on a weekly basis, and the treatment is therefore highly dynamic. In 1992-1996, the Nordic Society of Paediatric Haematology and Oncology (NOPHO) conducted a randomised study (NOPHO-ALL-92) to investigate the effect of a new and more sophisticated dynamic treatment strategy. Unexpectedly, the new strategy worsened the outcome for the girls, whereas there were no treatment differences for the boys. There are as yet no general guidelines for optimising the treatment. On the basis of the data from this study, our goal is to formulate an alternative dosing strategy. We use recently developed methods proposed by van der Laan et al. to obtain statistical models that may be used to guide how physicians should assign doses to patients in order to attain the target of the treatment. We present a possible strategy and discuss its reliability. The implementation is complicated, and we touch upon the limitations of the methods in relation to the formulation of alternative dosing strategies for the maintenance therapy. Copyright © 2011 John Wiley & Sons, Ltd.

  1. The optimal design of stepped wedge trials with equal allocation to sequences and a comparison to other trial designs.

    PubMed

    Thompson, Jennifer A; Fielding, Katherine; Hargreaves, James; Copas, Andrew

    2017-12-01

    Background/Aims We sought to optimise the design of stepped wedge trials with an equal allocation of clusters to sequences and explored sample size comparisons with alternative trial designs. Methods We developed a new expression for the design effect for a stepped wedge trial, assuming that observations are equally correlated within clusters and an equal number of observations in each period between sequences switching to the intervention. We minimised the design effect with respect to (1) the fraction of observations before the first and after the final sequence switches (the periods with all clusters in the control or intervention condition, respectively) and (2) the number of sequences. We compared the design effect of this optimised stepped wedge trial to the design effects of a parallel cluster-randomised trial, a cluster-randomised trial with baseline observations, and a hybrid trial design (a mixture of cluster-randomised trial and stepped wedge trial) with the same total cluster size for all designs. Results We found that a stepped wedge trial with an equal allocation to sequences is optimised by obtaining all observations after the first sequence switches and before the final sequence switches to the intervention; this means that the first sequence remains in the control condition and the last sequence remains in the intervention condition for the duration of the trial. With this design, the optimal number of sequences is [Formula: see text], where [Formula: see text] is the cluster-mean correlation, [Formula: see text] is the intracluster correlation coefficient, and m is the total cluster size. The optimal number of sequences is small when the intracluster correlation coefficient and cluster size are small and large when the intracluster correlation coefficient or cluster size is large. A cluster-randomised trial remains more efficient than the optimised stepped wedge trial when the intracluster correlation coefficient or cluster size is small. 
A cluster-randomised trial with baseline observations always requires a larger sample size than the optimised stepped wedge trial. The hybrid design is always at least as efficient as the optimised stepped wedge trial, but will be at most 5% more efficient. We provide a strategy for selecting a design if the optimal number of sequences is unfeasible. For a non-optimal number of sequences, the sample size may be reduced by allowing a proportion of observations before the first sequence switches or after the final sequence has switched. Conclusion The standard stepped wedge trial is inefficient. To reduce sample sizes when a hybrid design is unfeasible, stepped wedge trial designs should have no observations before the first sequence switches or after the final sequence switches.
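For orientation, two of the standard quantities the comparison rests on are simple to compute: the textbook design effect for a parallel cluster-randomised trial, and the cluster-mean correlation on which the optimal number of sequences depends. The paper's stepped wedge design effect itself is not reproduced here.

```python
def crt_design_effect(m, icc):
    """Design effect for a parallel cluster-randomised trial with
    total cluster size m and intracluster correlation coefficient icc."""
    return 1.0 + (m - 1) * icc

def cluster_mean_correlation(m, icc):
    """Cluster-mean correlation, m*icc / (1 + (m-1)*icc); the optimal
    number of sequences in the paper is a function of this quantity
    (the closed form itself is not reproduced here)."""
    return m * icc / (1.0 + (m - 1) * icc)

# Small icc and cluster size: clustering inflates the sample size little,
# so a parallel cluster-randomised trial stays competitive.
de_small = crt_design_effect(m=20, icc=0.01)     # 1 + 19*0.01 = 1.19
# Large icc or cluster size: the design effect grows rapidly.
de_large = crt_design_effect(m=500, icc=0.05)    # 1 + 499*0.05 = 25.95
```

This matches the qualitative conclusion above: when the intracluster correlation coefficient or cluster size is small, the parallel design's penalty is modest and it can remain the more efficient choice.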

  2. ICRP publication 121: radiological protection in paediatric diagnostic and interventional radiology.

    PubMed

    Khong, P-L; Ringertz, H; Donoghue, V; Frush, D; Rehani, M; Appelgate, K; Sanchez, R

    2013-04-01

    Paediatric patients have a higher average risk of developing cancer than adults receiving the same dose. The longer life expectancy of children allows more time for any harmful effects of radiation to manifest, and developing organs and tissues are more sensitive to the effects of radiation. This publication aims to provide guiding principles of radiological protection for referring clinicians and clinical staff performing diagnostic imaging and interventional procedures for paediatric patients. It begins with a brief description of the basic concepts of radiological protection, followed by the general aspects of radiological protection, including the principles of justification and optimisation. Guidelines and suggestions for radiological protection in specific modalities - radiography and fluoroscopy, interventional radiology, and computed tomography - are subsequently covered in depth. The report concludes with a summary and recommendations. The importance of rigorous justification of radiological procedures is emphasised for every procedure involving ionising radiation, and the use of imaging modalities that are non-ionising should always be considered. The basic aim of optimisation of radiological protection is to adjust imaging parameters and institute protective measures such that the required image is obtained with the lowest possible dose of radiation, and that net benefit is maximised while maintaining sufficient quality for diagnostic interpretation. Special consideration should be given to the availability of dose reduction measures when purchasing new imaging equipment for paediatric use. One of the unique aspects of paediatric imaging is the wide range in patient size (and weight), which requires special attention to the optimisation and modification of equipment, technique, and imaging parameters.
Examples of good radiographic and fluoroscopic technique include attention to patient positioning, field size and adequate collimation, use of protective shielding, optimisation of exposure factors, use of pulsed fluoroscopy, limiting fluoroscopy time, etc. Major paediatric interventional procedures should be performed by experienced paediatric interventional operators, and a second, specific level of training in radiological protection is desirable (in some countries, this is mandatory). For computed tomography, dose reduction should be optimised by the adjustment of scan parameters (such as mA, kVp, and pitch) according to patient weight or age, region scanned, and study indication (e.g. images with greater noise should be accepted if they are of sufficient diagnostic quality). Other strategies include restricting multiphase examination protocols, avoiding overlapping of scan regions, and only scanning the area in question. Up-to-date dose reduction technology such as tube current modulation, organ-based dose modulation, auto kV technology, and iterative reconstruction should be utilised when appropriate. It is anticipated that this publication will assist institutions in encouraging the standardisation of procedures, and that it may help increase awareness and ultimately improve practices for the benefit of patients. Copyright © 2012. Published by Elsevier Ltd.

  3. Robust Learning Control Design for Quantum Unitary Transformations.

    PubMed

    Wu, Chengzhi; Qi, Bo; Chen, Chunlin; Dong, Daoyi

    2017-12-01

    Robust control design for quantum unitary transformations has been recognized as a fundamental and challenging task in the development of quantum information processing, due to unavoidable decoherence or operational errors in the experimental implementation of quantum operations. In this paper, we extend the systematic methodology of the sampling-based learning control (SLC) approach with a gradient flow algorithm for the design of robust quantum unitary transformations. The SLC approach first uses a "training" process to find an optimal control strategy that is robust against certain ranges of uncertainties. Then a number of randomly selected samples are tested and the performance is evaluated according to their average fidelity. The approach is applied to three typical examples of robust quantum transformation problems, including robust quantum transformations in a three-level quantum system, in a superconducting quantum circuit, and in a spin chain system. Numerical results demonstrate the effectiveness of the SLC approach and show its potential applications in various implementations of quantum unitary transformations.
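The flavour of the SLC "training" step can be conveyed by a single-qubit sketch: a piecewise-constant pulse is tuned by finite-difference gradient ascent on the fidelity averaged over sampled detuning uncertainties. The parametrisation, sampling ranges, and gradient scheme here are our own simplifications, not the paper's algorithm.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices and the target unitary (an X rotation on one qubit).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
target = expm(-1j * (np.pi / 2) * sx)

def propagator(u, detuning, dt=0.2):
    """Evolution under a piecewise-constant control with an uncertain
    detuning term standing in for the operational error."""
    U = np.eye(2, dtype=complex)
    for uk in u:
        U = expm(-1j * dt * (uk * sx + detuning * sz)) @ U
    return U

def avg_fidelity(u, detunings):
    """Gate fidelity averaged over the sampled "training" uncertainties."""
    return float(np.mean(
        [abs(np.trace(target.conj().T @ propagator(u, d))) ** 2 / 4
         for d in detunings]))

rng = np.random.default_rng(1)
samples = rng.uniform(-0.3, 0.3, 8)    # sampled detuning uncertainties
u = np.full(5, np.pi / 2)              # nominal pulse: exact X gate at d = 0

f_start = avg_fidelity(u, samples)
best = f_start
for _ in range(40):                    # finite-difference gradient ascent
    grad = np.zeros_like(u)
    for k in range(u.size):
        e = np.zeros_like(u)
        e[k] = 1e-4
        grad[k] = (avg_fidelity(u + e, samples)
                   - avg_fidelity(u - e, samples)) / 2e-4
    u = u + 0.5 * grad
    best = max(best, avg_fidelity(u, samples))
```

A separate, larger set of random samples would then be used for the testing phase the abstract describes, to check that the trained pulse generalises across the uncertainty range.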

  4. A Method for Decentralised Optimisation in Networks

    NASA Astrophysics Data System (ADS)

    Saramäki, Jari

    2005-06-01

    We outline a method for distributed Monte Carlo optimisation of computational problems in networks of agents, such as peer-to-peer networks of computers. The optimisation and messaging procedures are inspired by gossip protocols and epidemic data dissemination, and are decentralised, i.e. no central overseer is required. In the outlined method, each agent follows simple local rules and seeks better solutions to the optimisation problem by Monte Carlo trials, as well as by querying other agents in its local neighbourhood. With a proper network topology, good solutions spread rapidly through the network for further improvement. Furthermore, the system retains its functionality even in realistic settings where agents are randomly switched on and off.
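A minimal version of the scheme described above, with invented parameters, might look as follows: agents on a ring each perform accept-if-better Monte Carlo trials and occasionally adopt a better solution from a neighbour, with no central overseer.

```python
import random

random.seed(7)

def objective(x):                # function each agent tries to minimise
    return (x - 3.0) ** 2

N = 30                           # agents on a ring, each with 2 neighbours
x = [random.uniform(-10.0, 10.0) for _ in range(N)]

for _ in range(2000):
    i = random.randrange(N)      # a random agent wakes up
    # Local Monte Carlo trial: keep a random perturbation if it helps.
    trial = x[i] + random.gauss(0.0, 0.5)
    if objective(trial) < objective(x[i]):
        x[i] = trial
    # Gossip step: query a random neighbour, adopt its solution if better.
    j = (i + random.choice([-1, 1])) % N
    if objective(x[j]) < objective(x[i]):
        x[i] = x[j]

best = min(x, key=objective)
```

Because every rule is local, agents dropping out simply stop waking up; the rest of the network continues unaffected, which is the robustness property the abstract highlights.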

  5. Thermal buckling optimisation of composite plates using firefly algorithm

    NASA Astrophysics Data System (ADS)

    Kamarian, S.; Shakeri, M.; Yas, M. H.

    2017-07-01

    Composite plates play a very important role in engineering applications, especially in the aerospace industry. Thermal buckling of such components is of great importance and must be known to achieve an appropriate design. This paper deals with stacking sequence optimisation of laminated composite plates for maximising the critical buckling temperature, using a powerful meta-heuristic algorithm called the firefly algorithm (FA), which is based on the flashing behaviour of fireflies. The main objective of the present work is to show the ability of FA in the optimisation of composite structures. The performance of FA is compared with the results reported in previously published works using other algorithms, which demonstrates the efficiency of FA in stacking sequence optimisation of laminated composite structures.
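A bare-bones firefly algorithm on a standard test function illustrates the mechanics; the attractiveness and randomisation parameters here are illustrative defaults, not those used in the paper, and the sphere function stands in for the critical-buckling-temperature objective.

```python
import numpy as np

rng = np.random.default_rng(3)

def sphere(x):                        # stand-in objective to minimise
    return np.sum(x ** 2, axis=-1)

n, dim, iters = 15, 2, 100
beta0, alpha = 1.0, 0.2
gamma = 0.04                          # absorption, scaled to the domain width
X = rng.uniform(-5.0, 5.0, (n, dim))  # firefly positions

f_start = float(sphere(X).min())
for t in range(iters):
    I = sphere(X)                     # brightness: lower cost = brighter
    for i in range(n):
        for j in range(n):
            if I[j] < I[i]:           # j is brighter, so i moves towards j
                beta = beta0 * np.exp(-gamma * np.sum((X[i] - X[j]) ** 2))
                X[i] = (X[i] + beta * (X[j] - X[i])
                        + alpha * (rng.random(dim) - 0.5))
                I[i] = sphere(X[i])
    alpha *= 0.97                     # gradually damp the random walk
f_best = float(sphere(X).min())
```

For a stacking sequence problem the continuous positions would be replaced or rounded to a discrete set of ply angles, with the buckling temperature evaluated by a laminate model.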

  6. Distributed convex optimisation with event-triggered communication in networked systems

    NASA Astrophysics Data System (ADS)

    Liu, Jiayun; Chen, Weisheng

    2016-12-01

    This paper studies the distributed convex optimisation problem over directed networks. Motivated by practical considerations, we propose a novel distributed zero-gradient-sum optimisation algorithm with event-triggered communication. Communication and control updates occur only at discrete instants when a predefined condition is satisfied. Thus, compared with time-driven distributed optimisation algorithms, the proposed algorithm has the advantages of lower energy consumption and lower communication cost. Based on Lyapunov approaches, we show that the proposed algorithm makes the system states converge exponentially fast to the solution of the problem, while Zeno behaviour is excluded. Finally, a simulation example is given to illustrate the effectiveness of the proposed algorithm.
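The zero-gradient-sum idea with event-triggered communication can be sketched for scalar quadratic costs on an undirected ring, a simplification of the paper's directed-network setting: agents start at their local minimisers, which makes the gradients sum to zero; consensus updates on last-broadcast values preserve that invariant, so agreement implies optimality for the sum of the costs.

```python
import numpy as np

# Each agent i privately holds f_i(x) = (x - a_i)^2, whose sum is
# minimised at mean(a). Agents start at their own minimisers a_i.
a = np.array([1.0, 4.0, 2.0, 7.0, 6.0])
n = a.size
x = a.copy()
x_hat = x.copy()                 # values last *broadcast* to neighbours
threshold, step = 1e-3, 0.1
broadcasts = 0

for t in range(3000):
    # Event trigger: rebroadcast only when the state has drifted from
    # the last broadcast value by more than the threshold.
    for i in range(n):
        if abs(x[i] - x_hat[i]) > threshold:
            x_hat[i] = x[i]
            broadcasts += 1
    # Consensus update on broadcast values over the ring, scaled by the
    # inverse Hessian of f_i (which is 1/2 here).
    for i in range(n):
        lap = ((x_hat[i] - x_hat[(i - 1) % n])
               + (x_hat[i] - x_hat[(i + 1) % n]))
        x[i] -= step * 0.5 * lap
```

Because the Laplacian rows sum to zero over the ring, the sum of the states (and hence the zero-gradient-sum invariant) is preserved at every step, while the trigger keeps broadcasts to the instants where they matter.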

  7. Design of experiment approach for the process optimisation of microwave assisted extraction of lupeol from Ficus racemosa leaves using response surface methodology.

    PubMed

    Das, Anup Kumar; Mandal, Vivekananda; Mandal, Subhash C

    2013-01-01

    Triterpenoids are a group of important phytocomponents from Ficus racemosa (syn. Ficus glomerata Roxb.) that are known to possess diverse pharmacological activities and which have prompted the development of various extraction techniques and strategies for their better utilisation. To develop an effective, rapid and ecofriendly microwave-assisted extraction (MAE) strategy to optimise the extraction of a potent bioactive triterpenoid compound, lupeol, from young leaves of Ficus racemosa using response surface methodology (RSM) for industrial scale-up. Initially, a Plackett-Burman design matrix was applied to identify the most significant extraction variables among microwave power, irradiation time, particle size, solvent-to-sample ratio, solvent strength and pre-leaching time. Of the six variables tested, microwave power, irradiation time and solvent-to-sample ratio were found to have a significant effect (P < 0.05) on lupeol extraction, and these were fitted to a Box-Behnken-design-generated quadratic polynomial equation to predict optimal extraction conditions as well as to locate operability regions with maximum yield. The optimal conditions were a microwave power of 65.67% of 700 W, an extraction time of 4.27 min and a solvent-to-sample ratio of 21.33 mL/g. Confirmation trials under the optimal conditions gave an experimental yield (18.52 µg/g of dry leaves) close to the RSM-predicted value of 18.71 µg/g. Under the optimal conditions the mathematical model was found to be well fitted to the experimental data. MAE was found to be a more rapid, convenient and appropriate extraction method, with a higher yield and lower solvent consumption compared with conventional extraction techniques. Copyright © 2012 John Wiley & Sons, Ltd.
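The RSM step, fitting a full second-order polynomial and locating its stationary point, can be sketched on synthetic data. The surface and design points below are invented, and a simple 3x3 grid in coded units stands in for the Box-Behnken design.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "yield" surface with a known optimum at (0.6, 0.4) in coded
# units; an invented stand-in for measured extraction yields.
def true_yield(x1, x2):
    u, v = x1 - 0.6, x2 - 0.4
    return 18.7 - 2.0 * u ** 2 - 1.5 * v ** 2 + 0.5 * u * v

# A 3x3 grid of design points stands in for the Box-Behnken design.
pts = np.array([(i, j) for i in (-1, 0, 1) for j in (-1, 0, 1)], float)
x1, x2 = pts[:, 0], pts[:, 1]
y = true_yield(x1, x2) + rng.normal(0.0, 0.01, len(pts))  # noisy "runs"

# Least-squares fit of the full second-order model
#   y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
A = np.column_stack([np.ones_like(x1), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])
b = np.linalg.lstsq(A, y, rcond=None)[0]

# Stationary point: set both partial derivatives to zero (2x2 system).
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
opt = np.linalg.solve(H, -np.array([b[1], b[2]]))
```

A confirmation run at the fitted stationary point, as done in the paper, then checks that the polynomial's prediction holds up against a fresh measurement.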

  8. Strategy for lowering and assessing exposure to nanoparticles in the workplace - Case study of the potential emission of lead nanoparticles in an epitaxy laboratory

    NASA Astrophysics Data System (ADS)

    Artous, Sébastien; Zimmermann, Eric; Douissard, Paul-Antoine; Locatelli, Dominique; Motellier, Sylvie; Derrough, Samir

    2015-05-01

    The use of manufactured nanoparticles in many products is growing fast and raises new questions. To address them, the CEA NanoSafety Platform is developing research topics covering health and safety, the environment and occupational exposure to nanoparticles. Lowering exposure by optimising containment, followed by assessing exposure to nanoparticles, is a strategy for improving safety at the workplace and in the workspace. The lowering step consists of optimising the dynamic and static containment at the workplace and/or workspace. In general, the exposure risk posed by nanoparticulate substances does not allow the containment parameters at the workplace and/or workspace to be modified directly; gaseous or nanoparticulate tracers are therefore used to evaluate containment performance. A tracer makes it possible to modify the dynamic containment parameters (ventilation, flow, velocity) safely and to study several static containment configurations. Moreover, a tracer allows accidental or incidental situations to be simulated, so that a safety procedure for managing this type of situation can be written more easily. The subsequent step of aerosol measurement and characterisation can then be used to assess exposure at the workplace and workspace. The case study presented in this paper concerns the potential emission of lead nanoparticles at the exhaust of a furnace in an epitaxy laboratory. First, the use of a helium tracer to evaluate containment performance is studied. Second, exposure is assessed in accordance with the French guide "Recommendations for characterizing potential emissions and exposure to aerosols released from nanomaterials in workplace operations". Third, aerosols are sampled at several locations on collection membranes in an attempt to detect traces of lead in the air.

  9. Toward a bioethical framework for antibiotic use, antimicrobial resistance and for empirically designing ethically robust strategies to protect human health: a research protocol

    PubMed Central

    Martins Pereira, Sandra; de Sá Brandão, Patrícia Joana; Araújo, Joana; Carvalho, Ana Sofia

    2017-01-01

    Introduction Antimicrobial resistance (AMR) is a challenging global and public health issue, raising bioethical challenges, considerations and strategies. Objectives This research protocol presents a conceptual model leading to the formulation of an empirically based bioethics framework for antibiotic use and AMR, and for designing ethically robust strategies to protect human health. Methods Mixed-methods research will be used, operationalised into five substudies. The bioethical framework will encompass and integrate two theoretical models: global bioethics and ethical decision-making. Results Being a study protocol, this article reports on planned and ongoing research. Conclusions Based on data collection, future findings and a comprehensive, integrative, evidence-based approach, a step-by-step bioethical framework will be developed for (i) the responsible use of antibiotics in healthcare and (ii) the design of strategies to decrease AMR. This will entail the analysis and interpretation of approaches from several bioethical theories, including deontological and consequentialist approaches, and the implications of uncertainty for these approaches. PMID:28459355

  10. Optimisation of active suspension control inputs for improved vehicle handling performance

    NASA Astrophysics Data System (ADS)

    Čorić, Mirko; Deur, Joško; Kasać, Josip; Tseng, H. Eric; Hrovat, Davor

    2016-11-01

    Active suspension is commonly considered within the framework of vertical vehicle dynamics control aimed at improving ride comfort. This paper uses a collocation-type control-variable optimisation tool to investigate to what extent the fully active suspension (FAS) application can be broadened to the task of vehicle handling/cornering control. The optimisation approach is first applied solely to FAS actuator configurations and three types of double lane-change manoeuvre. The optimisation results are used to gain insight into the different control mechanisms FAS uses to improve handling performance in terms of path-following error reduction. For the same manoeuvres, the FAS performance is compared with that of different active steering and active differential actuators. The optimisation study is finally extended to combined FAS and active front- and/or rear-steering configurations to investigate whether their complementary control authorities (over the vertical and lateral vehicle dynamics, respectively) can further improve handling performance.
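The control-variable optimisation idea can be sketched in miniature: discretise the manoeuvre, treat the control input at each node as a decision variable, and minimise a path-following cost. The point-mass lateral model and the crude finite-difference descent below are illustrative simplifications assumed for this sketch, not the authors' collocation tool or vehicle model.

```python
import numpy as np

def simulate(u, dt=0.1):
    """Point-mass lateral dynamics y'' = u, integrated with explicit Euler;
    returns the lateral position at each node."""
    y = v = 0.0
    ys = []
    for ui in u:
        v += ui * dt
        y += v * dt
        ys.append(y)
    return np.array(ys)

def tracking_cost(u, y_ref, dt=0.1, r=1e-3):
    """Path-following error plus a small control-effort penalty."""
    y = simulate(u, dt)
    return float(np.sum((y - y_ref) ** 2) + r * np.sum(np.asarray(u) ** 2))

def optimise_inputs(y_ref, n, iters=300, lr=0.05, eps=1e-4):
    """Crude finite-difference gradient descent over the control sequence
    (a stand-in for the collocation/NLP machinery of a real tool)."""
    u = np.zeros(n)
    for _ in range(iters):
        base = tracking_cost(u, y_ref)
        grad = np.zeros(n)
        for i in range(n):
            up = u.copy()
            up[i] += eps
            grad[i] = (tracking_cost(up, y_ref) - base) / eps
        u -= lr * grad
    return u, tracking_cost(u, y_ref)
```

A double lane-change can be emulated by a reference trajectory that steps out to an offset lane and back; the optimiser then shapes the input sequence to track it.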

  11. Structural-electrical coupling optimisation for radiating and scattering performances of active phased array antenna

    NASA Astrophysics Data System (ADS)

    Wang, Congsi; Wang, Yan; Wang, Zhihai; Wang, Meng; Yuan, Shuai; Wang, Weifeng

    2018-04-01

    It is well known that calculating and reducing the radar cross section (RCS) of an active phased array antenna (APAA) is both difficult and complicated, and balancing radiating and scattering performance while reducing the RCS remains an open problem. This paper therefore develops a coupled structure/scattering-array-factor model of the APAA based on the phase errors of the radiating elements generated by structural distortion and installation errors of the array. To obtain optimal radiating and scattering performance, an integrated optimisation model is built to optimise the installation height of all radiating elements in the normal direction of the array; the particle swarm optimisation method is adopted, with the gain loss and the scattering array factor selected as the fitness function. Simulation indicates that the proposed coupling model and integrated optimisation method can effectively decrease the RCS while simultaneously guaranteeing the necessary radiating performance, demonstrating important application value in the engineering design and structural evaluation of APAAs.
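A minimal particle swarm optimisation sketch in the spirit of the integrated model: element installation heights are the particle coordinates, and a toy fitness stands in for the gain-loss/scattering-array-factor combination. The fitness function, weighting, wavelength and height bounds below are all assumptions for illustration; a real implementation would evaluate an electromagnetic model of the array.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(h, k=2 * np.pi / 0.03, w=0.1):
    """Toy stand-in for the fitness: a phase-error variance term (proxy for
    gain loss) plus a broadside scattering-array-factor proxy."""
    phase = 2 * k * h                       # two-way phase error per element
    gain_loss = np.var(phase)
    scatter = abs(np.exp(1j * phase).sum()) / len(h)
    return w * gain_loss + (1 - w) * scatter

def pso(f, dim, n=20, iters=100, bounds=(-0.005, 0.005)):
    """Minimal particle swarm optimisation over element installation heights."""
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.array([f(xi) for xi in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(xi) for xi in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())
```

The swarm trades off the two terms automatically, which is the appeal of treating gain loss and scattering in one fitness function.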

  12. DryLab® optimised two-dimensional high performance liquid chromatography for differentiation of ephedrine and pseudoephedrine based methamphetamine samples.

    PubMed

    Andrighetto, Luke M; Stevenson, Paul G; Pearson, James R; Henderson, Luke C; Conlan, Xavier A

    2014-11-01

    In-silico optimised two-dimensional high performance liquid chromatographic (2D-HPLC) separations of a model methamphetamine seizure sample are described, with an excellent match observed between simulated and real separations. Targeted separation of the model compounds was completed with significantly reduced method development time. The separation was performed in the heart-cutting mode of 2D-HPLC, with C18 columns in both dimensions, exploiting the selectivity difference between methanol and acetonitrile as mobile phases. This method development protocol is most valuable when optimising the separation of chemically similar compounds, as it eliminates potentially hours of trial-and-error injections to identify the optimised experimental conditions. After only four screening injections, the gradient profile for both 2D-HPLC dimensions could be optimised via simulation, ensuring baseline resolution of the diastereomers ephedrine and pseudoephedrine in 9.7 min. Depending on which diastereomer is present, the potential synthetic pathway can be categorised.
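Retention simulators of this kind commonly rest on the linear solvent strength (LSS) model, ln k = ln kw − S·φ, which lets a handful of screening runs predict retention at untried conditions. The isocratic sketch below is a simplified assumption-level illustration of that idea (DryLab itself models gradient elution and is far more sophisticated); the numbers are synthetic.

```python
import math

def lss_fit(phi1, k1, phi2, k2):
    """Fit ln k = ln kw - S*phi from two isocratic screening runs at
    organic-modifier fractions phi1 and phi2."""
    S = (math.log(k1) - math.log(k2)) / (phi2 - phi1)
    ln_kw = math.log(k1) + S * phi1
    return math.exp(ln_kw), S

def retention_time(kw, S, phi, t0=1.0):
    """Isocratic retention time t = t0*(1 + k), with k = kw*exp(-S*phi)."""
    return t0 * (1 + kw * math.exp(-S * phi))
```

Fitting each analyte this way, then comparing predicted retention across candidate conditions, is what replaces hours of trial-and-error injections.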

  13. Shape Optimisation of Holes in Loaded Plates by Minimisation of Multiple Stress Peaks

    DTIC Science & Technology

    2015-04-01

    Shape Optimisation of Holes in Loaded Plates by Minimisation of Multiple Stress Peaks, by Witold Waldman and Manfred … The report presents a method for minimising the peak tangential stresses on multiple segments around the boundary of a hole in a uniaxially-loaded or biaxially-loaded plate. Approved for public release.

  14. Navigating catastrophes: Local but not global optimisation allows for macro-economic navigation of crises

    NASA Astrophysics Data System (ADS)

    Harré, Michael S.

    2013-02-01

    Two aspects of modern economic theory have dominated the recent discussion on the state of the global economy: Crashes in financial markets and whether or not traditional notions of economic equilibrium have any validity. We have all seen the consequences of market crashes: plummeting share prices, businesses collapsing and considerable uncertainty throughout the global economy. This seems contrary to what might be expected of a system in equilibrium where growth dominates the relatively minor fluctuations in prices. Recent work from within economics as well as by physicists, psychologists and computational scientists has significantly improved our understanding of the more complex aspects of these systems. With this interdisciplinary approach in mind, a behavioural economics model of local optimisation is introduced and three general properties are proven. The first is that under very specific conditions local optimisation leads to a conventional macro-economic notion of a global equilibrium. The second is that if both global optimisation and economic growth are required then under very mild assumptions market catastrophes are an unavoidable consequence. Third, if only local optimisation and economic growth are required then there is sufficient parametric freedom for macro-economic policy makers to steer an economy around catastrophes without overtly disrupting local optimisation.

  15. Towards a Clinical Decision Support System for External Beam Radiation Oncology Prostate Cancer Patients: Proton vs. Photon Radiotherapy? A Radiobiological Study of Robustness and Stability

    PubMed Central

    Walsh, Seán; Roelofs, Erik; Kuess, Peter; van Wijk, Yvonka; Lambin, Philippe; Jones, Bleddyn; Verhaegen, Frank

    2018-01-01

    We present a methodology that can be used to select proton or photon radiotherapy for prostate cancer patients. Four state-of-the-art competing treatment modalities were compared (by way of an in silico trial) for a cohort of 25 prostate cancer patients, with and without correction strategies for prostate displacements, using metrics measured from clinical image guidance systems. Three correction strategies were investigated: no correction, extended no-action-limit, and online correction. Clinical efficacy was estimated via radiobiological models incorporating robustness (the probability that a given treatment plan was delivered as planned) and stability (the consistency between the probable best and worst delivered treatments at the 95% confidence limit). The results obtained at the cohort level enabled the determination of a threshold for likely clinical benefit at the individual level. Depending on the imaging system and correction strategy, 24%, 32% and 44% of patients were identified as suitable candidates for proton therapy. Within the constraints of this study, intensity-modulated proton therapy with online correction was on average the most effective modality. Irrespective of the imaging system, each treatment modality is similar in terms of robustness, with and without the correction strategies; conversely, there is substantial variation in stability between the treatment modalities, which is greatly reduced by the correction strategies. This study provides a 'proof-of-concept' methodology to enable the prospective identification of individual patients that will most likely (above a certain threshold) benefit from proton therapy. PMID:29463018

  16. Climate change on the Colorado River: a method to search for robust management strategies

    NASA Astrophysics Data System (ADS)

    Keefe, R.; Fischbach, J. R.

    2010-12-01

    The Colorado River is a principal source of water for the seven Basin States, providing approximately 16.5 million acre-feet (maf) per year to users in the southwestern United States and Mexico. Though the dynamics of the river ensure Upper Basin users a reliable supply of water, the three Lower Basin states (California, Nevada and Arizona) face the danger of delivery interruptions as Upper Basin demand increases and climate change threatens to reduce future streamflows. In light of the recent drought and the uncertain effects of climate change on Colorado River flows, we evaluate the performance of a suite of policies modeled after the shortage-sharing agreement adopted in December 2007 by the Department of the Interior. We build on the current literature by using a simplified model of the Lower Colorado River to consider future streamflow scenarios under climate change uncertainty. We also generate different scenarios of parametric consumptive-use growth in the Upper Basin and evaluate alternative management strategies in light of these uncertainties. Climate change uncertainty is represented with a multi-model ensemble from the literature, using a nearest-neighbour perturbation to increase the size of the ensemble. We use Robust Decision Making to compare near-term and long-term management strategies across an ensemble of plausible future scenarios, with the goal of identifying one or more approaches that are robust to alternative assumptions about the future. This method entails using search algorithms to quantitatively identify vulnerabilities that may threaten a given strategy (including the current operating policy) and to characterise key trade-offs between strategies under different scenarios.
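The core comparison in Robust Decision Making style analyses can be sketched as a minimax-regret calculation over a strategy-by-scenario performance table: a strategy's regret in a scenario is its shortfall from the best strategy for that scenario, and the robust choice minimises the worst regret. The strategies and numbers below are illustrative assumptions, not the study's data.

```python
import numpy as np

def minimax_regret(perf):
    """perf[i, j] = performance (higher is better) of strategy i under
    scenario j. Return the index of the strategy with the smallest
    worst-case regret, plus each strategy's worst-case regret."""
    regret = perf.max(axis=0) - perf       # shortfall from the scenario best
    worst = regret.max(axis=1)             # each strategy's worst scenario
    return int(worst.argmin()), worst

# Illustrative table (rows: current policy, adaptive shortage sharing,
# aggressive conservation; cols: wet, moderate-drying, severe-drying):
perf = np.array([[9.0, 4.0, 6.0],
                 [7.0, 7.0, 7.0],
                 [8.0, 4.5, 8.0]])
best, worst = minimax_regret(perf)
```

Here the strategy that is never best in any single scenario can still win overall, which is exactly the sense in which it is "robust to alternative assumptions about the future".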

  17. Optimising experimental design for MEG resting state functional connectivity measurement.

    PubMed

    Liuzzi, Lucrezia; Gascoyne, Lauren E; Tewarie, Prejaas K; Barratt, Eleanor L; Boto, Elena; Brookes, Matthew J

    2017-07-15

    The study of functional connectivity using magnetoencephalography (MEG) is an expanding area of neuroimaging, and adds an extra dimension to the more common assessments made using fMRI. The importance of such metrics is growing, with recent demonstrations of their utility in clinical research; however, previous reports suggest that, whilst group-level resting-state connectivity is robust, single-session recordings lack repeatability. Such robustness is critical if MEG measures in individual subjects are to prove clinically valuable. In the present paper, we test how practical aspects of experimental design affect the intra-subject repeatability of MEG findings; specifically, we assess the effects of co-registration method and data recording duration. We show that the use of a foam head-cast, which is known to improve co-registration accuracy, significantly increased the between-session repeatability of both beamformer reconstruction and connectivity estimation. We also show that recording duration is a critical parameter, with large improvements in repeatability when ten-minute rather than five-minute recordings are used. Further analyses suggest that the origin of this latter effect lies not in technical aspects of source reconstruction but in a genuine effect of brain state: short recordings are simply inefficient at capturing the canonical MEG network in a single subject. Our results provide important insights into experimental design and will prove valuable for future MEG connectivity studies. Copyright © 2016. Published by Elsevier Inc.
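A simple way to quantify the between-session repeatability discussed here is the Pearson correlation between the vectorised upper triangles of two connectivity matrices, one per session. This is a generic sketch of that metric, not the paper's exact pipeline.

```python
import numpy as np

def repeatability(conn_a, conn_b):
    """Between-session repeatability as the Pearson correlation of the
    vectorised upper triangles (unique region pairs) of two symmetric
    connectivity matrices."""
    iu = np.triu_indices_from(conn_a, k=1)
    return float(np.corrcoef(conn_a[iu], conn_b[iu])[0, 1])
```

Identical sessions score 1.0; the more session-to-session noise (e.g. from short recordings or poor co-registration), the lower the score.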

  18. Real-time 2D spatially selective MRI experiments: Comparative analysis of optimal control design methods

    NASA Astrophysics Data System (ADS)

    Maximov, Ivan I.; Vinding, Mads S.; Tse, Desmond H. Y.; Nielsen, Niels Chr.; Shah, N. Jon

    2015-05-01

    There is an increasing need for the development of advanced radio-frequency (RF) pulse techniques in modern magnetic resonance imaging (MRI) systems, driven by recent advances in ultra-high-field magnets, new parallel transmit/receive coil designs, and accessible, powerful computational facilities. 2D spatially selective RF pulses are an example of advanced pulses with many applications of clinical relevance, e.g. reduced field-of-view imaging and MR spectroscopy. Such pulses are mostly generated and optimised with numerical methods that can handle vast numbers of controls and multiple constraints. With this study we aim to demonstrate that numerical optimal control (OC) algorithms are efficient for the design of 2D spatially selective MRI experiments when robustness towards, e.g., field inhomogeneity is the focus. We chose three popular OC algorithms: two gradient-based, concurrent methods using first- and second-order derivatives, respectively, and a third belonging to the sequential, monotonically convergent family. We used two experimental models: a water phantom and an in vivo human head. Taking the challenging experimental setup into consideration, our analysis suggests the use of the sequential, monotonic approach and the second-order gradient-based approach where computational speed, experimental robustness and image quality are key. All algorithms used in this work were implemented in the MATLAB environment and are freely available to the MRI community.

  19. 3D Polyaniline Architecture by Concurrent Inorganic and Organic Acid Doping for Superior and Robust High Rate Supercapacitor Performance

    PubMed Central

    Gawli, Yogesh; Banerjee, Abhik; Dhakras, Dipti; Deo, Meenal; Bulani, Dinesh; Wadgaonkar, Prakash; Shelke, Manjusha; Ogale, Satishchandra

    2016-01-01

    Good high-rate supercapacitor performance requires fine control of the morphological (surface area and pore-size distribution) and electrical properties of the electrode materials. Polyaniline (PANI) is an interesting material in the supercapacitor context because it stores energy Faradaically. However, with conventional inorganic acid (e.g. HCl) doping the conductivity is high but the morphological features are undesirable, whereas with weak organic acid (e.g. phytic acid) doping interesting and desirable 3D-connected morphological features are attained but the conductivity is poorer. Here, the positive quality factors of the two doping approaches are combined synergistically by concurrent, optimised doping with a strong inorganic acid (HCl) and a weak organic acid (phytic acid), resulting in a molecular composite material with impressive and robust supercapacitor performance. A nearly constant high specific capacitance of 350 F g−1 is realised for the optimised binary-doped case over the entire range of 1 A g−1 to 40 A g−1, with stability over 500 cycles at 40 A g−1. Frequency-dependent conductivity measurements show that the optimised co-doped material is more metallic than the separately doped materials; this transport property emanates from the unique 3D single-molecular character of such a system. PMID:26867570
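Specific capacitance figures like the 350 F g−1 quoted here are conventionally derived from galvanostatic discharge curves via C = I·Δt/(m·ΔV). A one-line sketch of that standard formula, with illustrative numbers (not the paper's raw data):

```python
def specific_capacitance(current_a, discharge_time_s, mass_g, delta_v):
    """Gravimetric capacitance from galvanostatic discharge:
    C (F/g) = I * dt / (m * dV), with current in A, time in s,
    electrode mass in g, and discharge voltage window in V."""
    return current_a * discharge_time_s / (mass_g * delta_v)
```

For example, a 1 g electrode discharged at 1 A (i.e. 1 A g−1) over a 1 V window for 350 s corresponds to 350 F g−1; "nearly constant capacitance from 1 to 40 A g−1" means Δt shrinks almost exactly in proportion as I grows.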

  20. Optimisation techniques in vaginal cuff brachytherapy.

    PubMed

    Tuncel, N; Garipagaoglu, M; Kizildag, A U; Andic, F; Toy, A

    2009-11-01

    The aim of this study was to explore whether an in-house dosimetry protocol and optimisation method can produce a homogeneous dose distribution in the target volume, and how often optimisation is required in vaginal cuff brachytherapy. Treatment planning was carried out for 109 fractions in 33 patients who underwent high-dose-rate iridium-192 ((192)Ir) brachytherapy using Fletcher ovoids. Dose prescription and normalisation were performed to catheter-oriented lateral dose points (dps) within a range of 90-110% of the prescribed dose. The in-house vaginal apex point (Vk), alternative vaginal apex point (Vk'), International Commission on Radiation Units and Measurements (ICRU) rectal point (Rg) and bladder point (Bl) doses were calculated. Time-position optimisations were made considering the dps, Vk and Rg doses, with the intention of keeping the Vk dose above 95% and the Rg dose below 85% of the prescribed dose. Target dose homogeneity, optimisation frequency and the relationships between the prescribed dose, Vk, Vk', Rg and ovoid diameter were investigated. The mean target dose was 99 +/- 7.4% of the prescription dose. Optimisation was required in 92 of 109 (83%) fractions. Ovoid diameter had a significant effect on Rg (p = 0.002), Vk (p = 0.018), Vk' (p = 0.034), minimum dps (p = 0.021) and maximum dps (p < 0.001); the Rg, Vk and Vk' doses with 2.5 cm diameter ovoids were significantly higher than with 2 cm and 1.5 cm ovoids. Catheter-oriented dose-point normalisation provided a homogeneous dose distribution, with a mean target dose of 99 +/- 7.4%, while requiring time-position optimisation.
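The dose-point normalisation and the 90-110% homogeneity window described above can be sketched as a toy calculation: scale the dose-point values so their mean equals the prescription, then flag any point that leaves the window (in which case time-position optimisation would be required). The dose values below are illustrative, and this is not a planning-system implementation.

```python
import numpy as np

def normalise_and_check(dose_points, prescription, window=(0.90, 1.10)):
    """Normalise dose-point values so their mean equals the prescription,
    then report whether every point stays inside the homogeneity window."""
    scaled = np.asarray(dose_points, dtype=float)
    scaled = scaled * prescription / scaled.mean()
    rel = scaled / prescription
    within = bool(np.all((rel >= window[0]) & (rel <= window[1])))
    return scaled, within
```

A tight spread of dose-point values passes the check, while a single outlying point pushes the fraction into the "optimisation required" category, mirroring the 83% optimisation frequency reported above.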
