Sample records for two-level optimisation principle

  1. The 5C Concept and 5S Principles in Inflammatory Bowel Disease Management

    PubMed Central

    Hibi, Toshifumi; Panaccione, Remo; Katafuchi, Miiko; Yokoyama, Kaoru; Watanabe, Kenji; Matsui, Toshiyuki; Matsumoto, Takayuki; Travis, Simon; Suzuki, Yasuo

    2017-01-01

    Background and Aims: The international Inflammatory Bowel Disease [IBD] Expert Alliance initiative [2012–2015] served as a platform to define and support areas of best practice in IBD management to help improve outcomes for all patients with IBD. Methods: During the programme, IBD specialists from around the world established by consensus two best practice charters: the 5S Principles and the 5C Concept. Results: The 5S Principles were conceived to provide health care providers with key guidance for improving clinical practice based on best management approaches. They comprise the following categories: Stage the disease; Stratify patients; Set treatment goals; Select appropriate treatment; and Supervise therapy. Optimised management of patients with IBD based on the 5S Principles can be achieved most effectively within an optimised clinical care environment. Guidance on optimising the clinical care setting in IBD management is provided through the 5C Concept, which encompasses: Comprehensive IBD care; Collaboration; Communication; Clinical nurse specialists; and Care pathways. Together, the 5C Concept and 5S Principles provide structured recommendations on organising the clinical care setting and developing best-practice approaches in IBD management. Conclusions: Consideration and application of these two dimensions could help health care providers optimise their IBD centres and collaborate more effectively with their multidisciplinary team colleagues and patients, to provide improved IBD care in daily clinical practice. Ultimately, this could lead to improved outcomes for patients with IBD. PMID:28981622

  2. The 5C Concept and 5S Principles in Inflammatory Bowel Disease Management.

    PubMed

    Hibi, Toshifumi; Panaccione, Remo; Katafuchi, Miiko; Yokoyama, Kaoru; Watanabe, Kenji; Matsui, Toshiyuki; Matsumoto, Takayuki; Travis, Simon; Suzuki, Yasuo

    2017-10-27

    The international Inflammatory Bowel Disease [IBD] Expert Alliance initiative [2012-2015] served as a platform to define and support areas of best practice in IBD management to help improve outcomes for all patients with IBD. During the programme, IBD specialists from around the world established by consensus two best practice charters: the 5S Principles and the 5C Concept. The 5S Principles were conceived to provide health care providers with key guidance for improving clinical practice based on best management approaches. They comprise the following categories: Stage the disease; Stratify patients; Set treatment goals; Select appropriate treatment; and Supervise therapy. Optimised management of patients with IBD based on the 5S Principles can be achieved most effectively within an optimised clinical care environment. Guidance on optimising the clinical care setting in IBD management is provided through the 5C Concept, which encompasses: Comprehensive IBD care; Collaboration; Communication; Clinical nurse specialists; and Care pathways. Together, the 5C Concept and 5S Principles provide structured recommendations on organising the clinical care setting and developing best-practice approaches in IBD management. Consideration and application of these two dimensions could help health care providers optimise their IBD centres and collaborate more effectively with their multidisciplinary team colleagues and patients, to provide improved IBD care in daily clinical practice. Ultimately, this could lead to improved outcomes for patients with IBD. Copyright © 2017 European Crohn’s and Colitis Organisation (ECCO). Published by Oxford University Press. All rights reserved. For permissions, please email: journals.permissions@oup.com

  3. Distributed optimisation problem with communication delay and external disturbance

    NASA Astrophysics Data System (ADS)

    Tran, Ngoc-Tu; Xiao, Jiang-Wen; Wang, Yan-Wu; Yang, Wu

    2017-12-01

    This paper investigates the distributed optimisation problem for multi-agent systems (MASs) in the simultaneous presence of external disturbance and communication delay. To solve this problem, a two-step design scheme is introduced. In the first step, based on the internal model principle, an internal model term is constructed to compensate for the disturbance asymptotically. In the second step, a distributed optimisation algorithm is designed that solves the problem despite the simultaneous presence of disturbance and communication delay. In the proposed algorithm, each agent interacts with its neighbours through the connected topology, and the delay occurs during the information exchange. By utilising a Lyapunov-Krasovskii functional, delay-dependent conditions are derived for both slowly and rapidly time-varying delays to ensure convergence of the algorithm to the optimal solution of the optimisation problem. Several numerical simulation examples are provided to illustrate the effectiveness of the theoretical results.
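    The setting described above, agents minimising a sum of local costs while only seeing delayed neighbour states, can be sketched in a few lines. The quadratic local costs, ring topology, and constant gains below are illustrative assumptions; this is not the paper's internal-model algorithm, and the external disturbance is ignored. With constant gains the agents settle at a consensus-biased point near, but not exactly at, the global minimiser; removing that bias is what more refined schemes, such as the one in the paper, address.

```python
# Toy distributed optimisation: agents cooperatively minimise
# sum_i (x - targets[i])**2 using neighbour states delayed by
# `delay` iterations (illustrative sketch only).
def distributed_minimise(targets, neighbours, delay=2, steps=2000, eta=0.05):
    n = len(targets)
    x = [0.0] * n
    history = [list(x)]
    for _ in range(steps):
        # Each agent only sees neighbour states that are `delay` steps old.
        past = history[max(0, len(history) - 1 - delay)]
        x = [
            x[i]
            + eta * sum(past[j] - x[i] for j in neighbours[i])  # consensus term
            - eta * 2 * (x[i] - targets[i])                     # local gradient
            for i in range(n)
        ]
        history.append(list(x))
    return x
```

    On a four-agent ring with targets [1, 2, 3, 4], the agents agree to within a few tenths of the true optimum 2.5, with the residual spread caused by the constant consensus/gradient gains.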

  4. Optimisation of assembly scheduling in VCIM systems using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Dao, Son Duy; Abhary, Kazem; Marian, Romeo

    2017-09-01

    Assembly plays an important role in any production system, as it constitutes a significant portion of the lead time and cost of a product. The virtual computer-integrated manufacturing (VCIM) system is a modern production system being conceptually developed to extend the application of the traditional computer-integrated manufacturing (CIM) system to a global level. Assembly scheduling in VCIM systems is quite different from that in traditional production systems because of the difference in the working principles of the two systems. In this article, the assembly scheduling problem in VCIM systems is modelled, and an integrated approach based on a genetic algorithm (GA) is proposed to search for a globally optimised solution to the problem. Because of the dynamic nature of the scheduling problem, a novel GA with a unique chromosome representation and modified genetic operations is developed herein. Robustness of the proposed approach is verified by a numerical example.
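    The core ingredients named above, a permutation chromosome and order-preserving genetic operations, can be illustrated on a deliberately small single-machine scheduling toy. Everything below (the job data, the order crossover, the GA settings) is a hypothetical stand-in, not the article's VCIM model or its modified operators.

```python
import random

def schedule_cost(order, durations, weights):
    # Sum of weighted completion times for jobs processed in `order`.
    t, cost = 0, 0
    for j in order:
        t += durations[j]
        cost += weights[j] * t
    return cost

def crossover(p1, p2):
    # Order crossover (OX): copy a slice from p1, fill the rest from p2,
    # so the child is always a valid permutation.
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    fill = [g for g in p2 if g not in child]
    for i in range(len(child)):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

def ga_schedule(durations, weights, pop_size=30, gens=60):
    random.seed(0)  # deterministic toy run
    n = len(durations)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda o: schedule_cost(o, durations, weights))
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            c = crossover(*random.sample(survivors, 2))
            if random.random() < 0.2:             # swap mutation
                i, j = random.sample(range(n), 2)
                c[i], c[j] = c[j], c[i]
            children.append(c)
        pop = survivors + children
    return min(pop, key=lambda o: schedule_cost(o, durations, weights))
```

    For weighted completion times the optimum is known independently (Smith's rule: sort by weight/duration ratio), which makes a small instance a convenient sanity check for the GA.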

  5. Optimisation of confinement in a fusion reactor using a nonlinear turbulence model

    NASA Astrophysics Data System (ADS)

    Highcock, E. G.; Mandell, N. R.; Barnes, M.

    2018-04-01

    The confinement of heat in the core of a magnetic fusion reactor is optimised using a multidimensional optimisation algorithm. For the first time in such a study, the loss of heat due to turbulence is modelled at every stage using first-principles nonlinear simulations which accurately capture the turbulent cascade and large-scale zonal flows. The simulations utilise a novel approach, with gyrofluid treatment of the small-scale drift waves and gyrokinetic treatment of the large-scale zonal flows. A simple near-circular equilibrium with standard parameters is chosen as the initial condition. The figure of merit, fusion power per unit volume, is calculated, and then two control parameters, the elongation and triangularity of the outer flux surface, are varied, with the algorithm seeking to optimise the chosen figure of merit. A twofold increase in the plasma power per unit volume is achieved by moving to higher elongation and strongly negative triangularity.
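    The outer loop of such a study, a derivative-free optimiser adjusting two shaping parameters to improve a figure of merit, can be sketched with a simple compass (pattern) search. The quadratic surrogate below, peaking at a hypothetical elongation of 2.0 and triangularity of -0.5, merely stands in for the expensive turbulence simulations; the numbers are made up for illustration.

```python
def figure_of_merit(kappa, delta):
    # Hypothetical smooth surrogate for fusion power per unit volume,
    # peaking at elongation kappa = 2.0 and triangularity delta = -0.5.
    return 1.0 - (kappa - 2.0) ** 2 - (delta + 0.5) ** 2

def compass_search(f, x0, step=0.4, tol=1e-4):
    # Maximise f by probing +/- step along each coordinate; halve the
    # step whenever no probe improves on the incumbent point.
    x = list(x0)
    best = f(*x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for s in (+step, -step):
                trial = list(x)
                trial[i] += s
                val = f(*trial)
                if val > best:
                    x, best, improved = trial, val, True
        if not improved:
            step /= 2
    return x, best
```

    Each evaluation of `figure_of_merit` would, in the study, be a full nonlinear simulation, which is why sample-efficient derivative-free methods are the natural choice for this outer loop.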

  6. Ethics and radiation protection.

    PubMed

    Hansson, Sven Ove

    2007-06-01

    Some of the major problems in radiation protection are closely connected to issues that have a long, independent tradition in moral philosophy. This contribution focuses on two of these issues. One is the relationship between the protection of individuals and optimisation on the collective level, and the other is the relative valuation of future versus immediate damage. Some of the intellectual tools that have been developed by philosophers can be useful in radiation protection. On the other hand, philosophers have much to learn from radiation protectors, not least when it comes to finding pragmatic solutions to problems that may be intractable in principle.

  7. Computer-based teaching module design: principles derived from learning theories.

    PubMed

    Lau, K H Vincent

    2014-03-01

    The computer-based teaching module (CBTM), which has recently gained prominence in medical education, is a teaching format in which a multimedia program serves as a single source for knowledge acquisition rather than playing an adjunctive role as it does in computer-assisted learning (CAL). Despite empirical validation in the past decade, there is limited research into the optimisation of CBTM design. This review aims to summarise research in classic and modern multimedia-specific learning theories applied to computer learning, and to collapse the findings into a set of design principles to guide the development of CBTMs. Scopus was searched for: (i) studies of classic cognitivism, constructivism and behaviourism theories (search terms: 'cognitive theory' OR 'constructivism theory' OR 'behaviourism theory' AND 'e-learning' OR 'web-based learning') and their sub-theories applied to computer learning, and (ii) recent studies of modern learning theories applied to computer learning (search terms: 'learning theory' AND 'e-learning' OR 'web-based learning') for articles published between 1990 and 2012. The first search identified 29 studies, dominated in topic by the cognitive load, elaboration and scaffolding theories. The second search identified 139 studies, with diverse topics in connectivism, discovery and technical scaffolding. Based on their relative representation in the literature, the applications of these theories were collapsed into a list of CBTM design principles. Ten principles were identified and categorised into three levels of design: the global level (managing objectives, framing, minimising technical load); the rhetoric level (optimising modality, making modality explicit, scaffolding, elaboration, spaced repetition); and the detail level (managing text, managing devices). This review examined the literature in the application of learning theories to CAL to develop a set of principles that guide CBTM design. Further research will enable educators to take advantage of this unique teaching format as it gains increasing importance in medical education. © 2014 John Wiley & Sons Ltd.

  8. Optimality Principles in the Regulation of Metabolic Networks

    PubMed Central

    Berkhout, Jan; Bruggeman, Frank J.; Teusink, Bas

    2012-01-01

    One of the challenging tasks in systems biology is to understand how molecular networks give rise to emergent functionality and whether universal design principles apply to molecular networks. To achieve this, the biophysical, evolutionary and physiological constraints that act on those networks need to be identified in addition to the characterisation of the molecular components and interactions. Then, the cellular “task” of the network—its function—should be identified. A network contributes to organismal fitness through its function. The premise is that the same functions are often implemented in different organisms by the same type of network; hence, the concept of design principles. In biology, due to the strong forces of selective pressure and natural selection, network functions can often be understood as the outcome of fitness optimisation. The hypothesis of fitness optimisation to understand the design of a network has proven to be a powerful strategy. Here, we outline the use of several optimisation principles applied to biological networks, with an emphasis on metabolic regulatory networks. We discuss the different objective functions and constraints that are considered and the kind of understanding that they provide. PMID:24957646

  9. Optimality principles in the regulation of metabolic networks.

    PubMed

    Berkhout, Jan; Bruggeman, Frank J; Teusink, Bas

    2012-08-29

    One of the challenging tasks in systems biology is to understand how molecular networks give rise to emergent functionality and whether universal design principles apply to molecular networks. To achieve this, the biophysical, evolutionary and physiological constraints that act on those networks need to be identified in addition to the characterisation of the molecular components and interactions. Then, the cellular "task" of the network-its function-should be identified. A network contributes to organismal fitness through its function. The premise is that the same functions are often implemented in different organisms by the same type of network; hence, the concept of design principles. In biology, due to the strong forces of selective pressure and natural selection, network functions can often be understood as the outcome of fitness optimisation. The hypothesis of fitness optimisation to understand the design of a network has proven to be a powerful strategy. Here, we outline the use of several optimisation principles applied to biological networks, with an emphasis on metabolic regulatory networks. We discuss the different objective functions and constraints that are considered and the kind of understanding that they provide.

  10. Zipf's Law of Abbreviation and the Principle of Least Effort: Language users optimise a miniature lexicon for efficient communication.

    PubMed

    Kanwal, Jasmeen; Smith, Kenny; Culbertson, Jennifer; Kirby, Simon

    2017-08-01

    The linguist George Kingsley Zipf made a now classic observation about the relationship between a word's length and its frequency; the more frequent a word is, the shorter it tends to be. He claimed that this "Law of Abbreviation" is a universal structural property of language. The Law of Abbreviation has since been documented in a wide range of human languages, and extended to animal communication systems and even computer programming languages. Zipf hypothesised that this universal design feature arises as a result of individuals optimising form-meaning mappings under competing pressures to communicate accurately but also efficiently-his famous Principle of Least Effort. In this study, we use a miniature artificial language learning paradigm to provide direct experimental evidence for this explanatory hypothesis. We show that language users optimise form-meaning mappings only when pressures for accuracy and efficiency both operate during a communicative task, supporting Zipf's conjecture that the Principle of Least Effort can explain this universal feature of word length distributions. Copyright © 2017 Elsevier B.V. All rights reserved.
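    Zipf's optimisation claim has a simple formal core: if form-meaning mappings minimise expected word length, the most frequent meanings must receive the shortest forms (a consequence of the rearrangement inequality). A toy sketch, with made-up frequencies and form lengths:

```python
# Pair meanings with forms so as to minimise expected word length.
# By the rearrangement inequality the optimum assigns the shortest
# forms to the most frequent meanings (illustration of the Principle
# of Least Effort; the data are invented).
def least_effort_mapping(freqs, form_lengths):
    by_freq = sorted(range(len(freqs)), key=lambda i: -freqs[i])
    by_len = sorted(form_lengths)
    return {meaning: by_len[rank] for rank, meaning in enumerate(by_freq)}

def expected_length(freqs, mapping):
    # Frequency-weighted average word length under a given mapping.
    total = sum(freqs)
    return sum(freqs[m] * mapping[m] for m in mapping) / total
```

    The experiment described above tests whether speakers actually converge on this optimum, and finds that they do only when accuracy and efficiency pressures operate together.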

  11. Optimal dynamic pricing and replenishment policy for perishable items with inventory-level-dependent demand

    NASA Astrophysics Data System (ADS)

    Lu, Lihao; Zhang, Jianxiong; Tang, Wansheng

    2016-04-01

    An inventory system for perishable items with limited replenishment capacity is introduced in this paper. The demand rate depends on the stock quantity displayed in the store as well as on the sales price. With the goal of maximising profit, an optimisation problem is formulated to seek the optimal joint dynamic pricing and replenishment policy, which is obtained by solving the problem with Pontryagin's maximum principle. A joint mixed policy, in which the sales price is a static decision variable and the replenishment rate remains a dynamic decision variable, is presented for comparison with the joint dynamic policy. Numerical results demonstrate the advantages of the joint dynamic policy, and further show the effects of different system parameters on the optimal joint dynamic policy and the maximal total profit.
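    The mixed-policy baseline (a static price, fixed replenishment) can be made concrete with a discrete-time toy: simulate stock-dependent, price-sensitive demand and grid-search the price. All parameter values below are invented for illustration; the article's continuous-time model is instead solved analytically with Pontryagin's maximum principle.

```python
# Discrete-time toy of a static-price policy for a perishable item:
# demand rises with displayed stock and falls with price.
# All parameters are illustrative assumptions.
def simulate_profit(price, horizon=50, stock0=20.0,
                    replenish=6.0, unit_cost=1.0, hold_cost=0.05):
    stock, profit = stock0, 0.0
    for _ in range(horizon):
        # Inventory-level-dependent demand: more displayed stock sells more.
        demand = max(0.0, (10.0 - 2.0 * price) * (1.0 + 0.02 * stock))
        sales = min(stock, demand)
        profit += price * sales - unit_cost * replenish - hold_cost * stock
        stock = stock - sales + replenish
    return profit

def best_static_price(prices):
    # Grid search over candidate static prices.
    return max(prices, key=simulate_profit)
```

    Allowing both price and replenishment to vary over time (the joint dynamic policy of the paper) can only do at least as well as this static-price optimum, which is the comparison the numerical results quantify.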

  12. Multiobjective optimisation of bogie suspension to boost speed on curves

    NASA Astrophysics Data System (ADS)

    Milad Mousavi-Bideleh, Seyed; Berbyuk, Viktor

    2016-01-01

    To improve safety and maximum admissible speed in different operational scenarios, multiobjective optimisation of the bogie suspension components of a one-car railway vehicle model is considered. The vehicle model has 50 degrees of freedom and is developed in the multibody dynamics software SIMPACK. Track shift force, running stability, and risk of derailment are selected as safety objective functions. The improved maximum admissible speeds of the vehicle on curves are determined based on track plane accelerations of up to 1.5 m/s². To reduce the number of design parameters for optimisation and improve the computational efficiency, a global sensitivity analysis is carried out using the multiplicative dimensional reduction method (M-DRM). A multistep optimisation routine based on a genetic algorithm (GA) and MATLAB/SIMPACK co-simulation is executed at three levels. The conventional secondary and primary suspension components of the bogie are chosen as the design parameters in the first two steps, respectively. The last step focuses on semi-active suspension: the input electrical current to the magnetorheological yaw dampers is optimised to guarantee an appropriate safety level. Semi-active controllers are also applied and their effects on bogie dynamics are explored. The safety Pareto-optimised results are compared with those associated with in-service values. The global sensitivity analysis and the multistep approach significantly reduced the number of design parameters and improved the computational efficiency of the optimisation. Furthermore, the optimised design parameters make it possible to run the vehicle up to 13% faster on curves while a satisfactory safety level is guaranteed. The results obtained can be used in Pareto optimisation and active bogie suspension design problems.
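    The Pareto-optimised comparison mentioned above rests on one basic operation: filtering a set of candidate designs down to its non-dominated front. A generic sketch for minimisation objectives follows; the sample points in the test are invented, not suspension results.

```python
# Non-dominated (Pareto) filtering for objectives to be minimised.
def dominates(q, p):
    # q dominates p if it is no worse in every objective
    # and strictly better in at least one.
    return (all(qi <= pi for qi, pi in zip(q, p))
            and any(qi < pi for qi, pi in zip(q, p)))

def pareto_front(points):
    # Keep every point that no other point dominates.
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

    A multiobjective GA such as the one in the study maintains and refines exactly this kind of front across generations instead of collapsing the objectives into a single score.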

  13. Building in efficacy: developing solutions to combat drug-resistant S. pneumoniae.

    PubMed

    Jacobs, M R

    2004-04-01

    The development of our understanding of the pharmacokinetic (PK) and pharmacodynamic (PD) principles that determine antimicrobial efficacy has advanced substantially over the last 10 years. We are now in a position to use PK/PD principles to set targets for antimicrobial design and optimisation so that we can predict eradication of specific pathogens or resistant variants when agents are used clinically. Optimisation of PK/PD parameters to enable the treatment of resistant pathogens with oral agents may not be possible with many current agents, such as some cephalosporins, macrolides and fluoroquinolones. Aminopenicillins, however, such as amoxicillin, have linear PK and have a good safety profile even at high doses. The new pharmacokinetically enhanced oral formulation of amoxicillin/clavulanate, 2000/125 mg twice daily, was designed using PK/PD principles to be able to eradicate Streptococcus pneumoniae with amoxicillin MICs of up to and including 4 mg/L, which includes most penicillin-resistant isolates. For amoxicillin and amoxicillin/clavulanate, a time above MIC (T > MIC) of 35-40% of the dosing interval (based on blood levels) is predictive of high bacteriological efficacy. This target was met by the design of a unique bilayer tablet incorporating 437.5 mg of sustained-release sodium amoxicillin in one layer plus 562.5 mg of immediate-release amoxicillin trihydrate and 62.5 mg of clavulanate potassium in the second layer, with two tablets administered for each dose. This unique design extends the bacterial killing time by increasing the T > MIC to 49% of the dosing interval against pathogens with MICs of 4 mg/L, and 60% of the dosing interval against pathogens with MICs of 2 mg/L. Based on these results, this new amoxicillin/clavulanate formulation should be highly effective in treating respiratory tract infections due to drug-resistant S. pneumoniae as well as beta-lactamase-producing pathogens, such as Haemophilus influenzae and Moraxella catarrhalis.
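    The T > MIC target discussed above is straightforward to compute for a simple exposure profile. The sketch below assumes a hypothetical one-compartment intravenous bolus with first-order elimination (no absorption phase), which is not the published amoxicillin/clavulanate model; it only illustrates how the percentage follows from peak concentration, half-life, MIC, and the dosing interval.

```python
import math

# %T>MIC for C(t) = C0 * exp(-k_el * t), a simplified hypothetical
# profile (IV bolus, first-order elimination).
def percent_t_above_mic(c0, half_life, mic, tau):
    if c0 <= mic:
        return 0.0
    k_el = math.log(2) / half_life        # elimination rate constant
    t_above = math.log(c0 / mic) / k_el   # time for C(t) to decay to the MIC
    return 100.0 * min(t_above, tau) / tau
```

    For a pathogen with a higher MIC, `t_above` shrinks logarithmically, which is why doubling the MIC costs exactly one half-life of coverage in this simplified model.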

  14. Medicines optimisation: priorities and challenges.

    PubMed

    Kaufman, Gerri

    2016-03-23

    Medicines optimisation is promoted in a guideline published in 2015 by the National Institute for Health and Care Excellence. Four guiding principles underpin medicines optimisation: aim to understand the patient's experience; ensure evidence-based choice of medicines; ensure medicines use is as safe as possible; and make medicines optimisation part of routine practice. Understanding the patient experience is important to improve adherence to medication regimens. This involves communication, shared decision making and respect for patient preferences. Evidence-based choice of medicines is important for clinical and cost effectiveness. Systems and processes for the reporting of medicines-related safety incidents have to be improved if medicines use is to be as safe as possible. Ensuring safe practice in medicines use when patients are transferred between organisations, and managing the complexities of polypharmacy are imperative. A medicines use review can help to ensure that medicines optimisation forms part of routine practice.

  15. The optimisation approach of ALARA in nuclear practice: an early application of the precautionary principle. Scientific uncertainty versus legal uncertainty.

    PubMed

    Lierman, S; Veuchelen, L

    2005-01-01

    The late health effects of exposure to low doses of ionising radiation are subject to scientific controversy: one view holds that the threat of high cancer incidence is exaggerated, while the other holds that the effects are underestimated. Both views have good scientific arguments in their favour. The nuclear field, in both industry and medicine, has had to deal with this controversy for many decades. One can argue that the optimisation approach of keeping effective doses as low as reasonably achievable, taking economic and social factors into account (ALARA), is a precautionary approach. However, because these effects are stochastic, no scientific proof can be provided. This paper explores how ALARA and the Precautionary Principle are influential in the legal field, and in particular in tort law, because liability should be a strong incentive for safer behaviour. This so-called "deterrence effect" of liability seems to evaporate in today's technical and highly complex society, in particular when dealing with the late health effects of low doses of ionising radiation. Two main issues are dealt with in the paper: 1. How are the health risks attributable to "low doses" of radiation regulated in nuclear law, and what lessons can be learned from the field of radiation protection? 2. What can ALARA contribute to the discussion of the Precautionary Principle, and vice versa, particularly as far as legal sanctions and liability are concerned? It is shown that the Precautionary Principle has not yet been sufficiently implemented into nuclear law.

  16. Simulation in paediatric urology and surgery. Part 1: An overview of educational theory.

    PubMed

    Nataraja, Ramesh M; Webb, Nathalie; Lopez, Pedro-Jose

    2018-03-01

    Surgical training has changed radically in the last few decades. The traditional Halstedian model of time-bound apprenticeship has been replaced with competency-based training. Advanced understanding of mastery learning principles has vastly altered educational methodology in surgical training, in terms of instructional design, delivery of educational content, assessment of learning, and programmatic evaluation. As part of this educational revolution, fundamentals of simulation-based education have been adopted into all levels and aspects of surgical training, requiring an understanding of concepts of fidelity and realism and the impact they have on learning. There are many educational principles and theories that can help clinical teachers understand the way that their trainees learn. In the acquisition of surgical expertise, concepts of mastery learning, deliberate practice, and experiential learning are particularly important. Furthermore, surgical teachers need to understand the principles of effective feedback, which is essential to all forms of skills learning. This article, the first of two papers, presents an overview of relevant learning theory for the busy paediatric surgeon and urologist, seeking to introduce the concepts underpinning current changes in surgical education and training and providing practical tips to optimise teaching endeavours. Copyright © 2018 Journal of Pediatric Urology Company. Published by Elsevier Ltd. All rights reserved.

  17. Cesarean delivery on maternal request: Can the ethical problem be solved by the principlist approach?

    PubMed Central

    Nilstun, Tore; Habiba, Marwan; Lingman, Göran; Saracci, Rodolfo; Da Frè, Monica; Cuttini, Marina

    2008-01-01

    In this article, we use the principlist approach to identify, analyse and attempt to solve the ethical problem raised by a pregnant woman's request for cesarean delivery in absence of medical indications. We use two different types of premises: factual (facts about cesarean delivery and specifically attitudes of obstetricians as derived from the EUROBS European study) and value premises (principles of beneficence and non-maleficence, respect for autonomy and justice). Beneficence/non-maleficence entails physicians' responsibility to minimise harms and maximise benefits. Avoiding its inherent risks makes a prima facie case against cesarean section without medical indication. However, as vaginal delivery can have unintended consequences, there is a need to balance the somewhat dissimilar risks and benefits. The principle of autonomy poses a challenge in case of disagreement between the pregnant woman and the physician. Improved communication aimed to enable better informed choice may overcome some instances of disagreement. The principle of justice prohibits unfair discrimination, and broadly favours optimising resource utilisation. Available evidence supports vaginal birth in uncomplicated term pregnancies as the standard of care. The principlist approach offered a useful framework for ethical analysis of cesarean delivery on maternal request, identified the rights and duties of those involved, and helped reach a conclusion, although conflict at the individual level may remain challenging. PMID:18559083

  18. Cesarean delivery on maternal request: can the ethical problem be solved by the principlist approach?

    PubMed

    Nilstun, Tore; Habiba, Marwan; Lingman, Göran; Saracci, Rodolfo; Da Frè, Monica; Cuttini, Marina

    2008-06-17

    In this article, we use the principlist approach to identify, analyse and attempt to solve the ethical problem raised by a pregnant woman's request for cesarean delivery in absence of medical indications. We use two different types of premises: factual (facts about cesarean delivery and specifically attitudes of obstetricians as derived from the EUROBS European study) and value premises (principles of beneficence and non-maleficence, respect for autonomy and justice). Beneficence/non-maleficence entails physicians' responsibility to minimise harms and maximise benefits. Avoiding its inherent risks makes a prima facie case against cesarean section without medical indication. However, as vaginal delivery can have unintended consequences, there is a need to balance the somewhat dissimilar risks and benefits. The principle of autonomy poses a challenge in case of disagreement between the pregnant woman and the physician. Improved communication aimed to enable better informed choice may overcome some instances of disagreement. The principle of justice prohibits unfair discrimination, and broadly favours optimising resource utilisation. Available evidence supports vaginal birth in uncomplicated term pregnancies as the standard of care. The principlist approach offered a useful framework for ethical analysis of cesarean delivery on maternal request, identified the rights and duties of those involved, and helped reach a conclusion, although conflict at the individual level may remain challenging.

  19. ICRP Publication 132: Radiological Protection from Cosmic Radiation in Aviation.

    PubMed

    Lochard, J; Bartlett, D T; Rühm, W; Yasuda, H; Bottollier-Depois, J-F

    2016-06-01

    In this publication, the International Commission on Radiological Protection (ICRP) provides updated guidance on radiological protection from cosmic radiation in aviation, taking into account the current ICRP system of radiological protection, the latest available data on exposures in aviation, and experience gained worldwide in the management of exposures in aviation. The publication describes the origins of cosmic radiation, how it exposes passengers and aircraft crew, the basic radiological protection principles that apply to this existing exposure situation, and the available protective actions. For implementation of the optimisation principle, the Commission recommends a graded approach proportionate to the level of exposure that may be received by individuals. The objective is to keep the exposure of the most exposed individuals to a reasonable level. The Commission also recommends that information be disseminated to raise awareness about cosmic radiation, and to support informed decisions among concerned stakeholders.

  20. Instrumental biosensors: new perspectives for the analysis of biomolecular interactions.

    PubMed

    Nice, E C; Catimel, B

    1999-04-01

    The use of instrumental biosensors in basic research to measure biomolecular interactions in real time is increasing exponentially. Applications include protein-protein, protein-peptide, DNA-protein, DNA-DNA, and lipid-protein interactions. Such techniques have been applied to, for example, antibody-antigen, receptor-ligand, signal transduction, and nuclear receptor studies. This review outlines the principles of two of the most commonly used instruments and highlights specific operating parameters that will assist in optimising experimental design, data generation, and analysis.

  1. Continuous subcutaneous insulin infusion therapy for Type 1 diabetes mellitus in children.

    PubMed

    Mavinkurve, M; Quinn, A; O'Gorman, C S

    2016-05-01

    Continuous subcutaneous insulin infusion (CSII, or pump) therapy is a well-recognised treatment option for Type 1 diabetes mellitus (T1DM) in paediatrics. It is especially suited to children because it optimises control by improving flexibility across age-specific lifestyles. The NICE guidelines (2008) recognise that pump therapy is advantageous and that it should be utilised to deliver best practice. In Ireland, the National Clinical Programme for Diabetes will increase the availability and uptake of CSII in children, and thus more clinicians are likely to encounter children using CSII therapy. This is a narrative review which discusses the basic principles of pump therapy and focuses on aspects of practical management. Insulin pump management involves some basic yet important principles which optimise the care of diabetes in children. This review addresses the principles of insulin pump management in children, with which all health care professionals involved in caring for a child with diabetes should be familiar.

  2. On some properties of bone functional adaptation phenomenon useful in mechanical design.

    PubMed

    Nowak, Michał

    2010-01-01

    The paper discusses some unique properties of the trabecular bone functional adaptation phenomenon that are useful in mechanical design. On the basis of observations of the biological process and the principle of constant strain energy density on the surface of the structure, a generic structural optimisation system has been developed. Such an approach makes it possible to satisfy the mechanical theorem for the stiffest design, comprising the optimisation of size, shape and topology, using concepts known from biomechanical studies. The biomimetic solution of multiple-load problems is also presented.

  3. Optimisation of reconstruction-reprojection-based motion correction for cardiac SPECT.

    PubMed

    Kangasmaa, Tuija S; Sohlberg, Antti O

    2014-07-01

    Cardiac motion is a challenging cause of image artefacts in myocardial perfusion SPECT. A wide range of motion correction methods have been developed over the years, and so far automatic algorithms based on the reconstruction-reprojection principle have proved to be the most effective. However, these methods have not been fully optimised in terms of their free parameters and implementation details. Two slightly different implementations of reconstruction-reprojection-based motion correction were optimised for effective, good-quality motion correction and then compared with each other. The first (Method 1) was the traditional reconstruction-reprojection motion correction algorithm, in which motion correction is done in projection space, whereas the second (Method 2) performed motion correction in reconstruction space. The parameters that were optimised include the type of cost function (squared difference, normalised cross-correlation and mutual information) used to compare measured and reprojected projections, and the number of iterations needed. The methods were tested with motion-corrupted projection datasets, generated by adding three types of motion (lateral shift, vertical shift and vertical creep) to motion-free cardiac perfusion SPECT studies. Method 2 performed slightly better overall than Method 1, but the difference between the two implementations was small. The execution time for Method 2 was much longer than for Method 1, which limits its clinical usefulness. The mutual information cost function gave clearly the best results for all three motion sets with both correction methods. Three iterations were sufficient for a good-quality correction using Method 1. The traditional reconstruction-reprojection-based method with three update iterations and a mutual information cost function is a good option for motion correction in clinical myocardial perfusion SPECT.
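
    The mutual information cost that performed best can be sketched with a joint-histogram estimator. The bin count, image sizes and the simulated shift below are illustrative choices, not the paper's implementation:

```python
# Hedged sketch of a mutual-information cost for comparing a measured
# projection with its reprojection; a motion-correction search would
# maximise this score over candidate motion parameters.
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """MI estimated from the joint histogram of two equally sized images."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)        # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)        # marginal of img_b
    nz = pxy > 0                               # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
measured = rng.random((64, 64))
shifted = np.roll(measured, 3, axis=0)         # simulated patient motion

# MI is highest when the reprojection is aligned with the measurement.
assert mutual_information(measured, measured) > mutual_information(measured, shifted)
```

    In a reconstruction-reprojection loop this score would be evaluated for each trial shift of a projection frame, keeping the shift that maximises it.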

  4. An application of programmatic assessment for learning (PAL) system for general practice training.

    PubMed

    Schuwirth, Lambert; Valentine, Nyoli; Dilena, Paul

    2017-01-01

    Aim: Programmatic assessment for learning (PAL) is becoming increasingly popular as a concept, but its implementation is not without problems. In this paper we describe the design principles behind a PAL program in a general practice training context. Design principles: The PAL program was designed to optimise the meaningfulness of assessment information for registrars and to encourage them to use that information to self-regulate their learning. The main principles in the program were cognitivist and transformative. The main cognitive principles we used were fostering the understanding of deep structures and stimulating transfer by making registrars constantly connect practice experiences with background knowledge. Ericsson's deliberate practice approach was built in with regard to the provision of feedback, combined with Pintrich's model of self-regulation. Mezirow's transformative learning and insights from social network theory on collaborative learning were used to support the registrars in their development to become GP professionals. Finally, the principle of test-enhanced learning was optimised. Epilogue: We have provided this example to explain the design decisions behind our program, but we do not want to present our program as the solution for any given situation.

  5. A supportive architecture for CFD-based design optimisation

    NASA Astrophysics Data System (ADS)

    Li, Ni; Su, Zeya; Bi, Zhuming; Tian, Chao; Ren, Zhiming; Gong, Guanghong

    2014-03-01

    Multi-disciplinary design optimisation (MDO) is one of the critical methodologies for the implementation of enterprise systems (ES). MDO requiring the analysis of fluid dynamics poses a special challenge because of its extremely intensive computation. The rapid development of computational fluid dynamics (CFD) techniques has led to a rise in their application in various fields. Especially for the exterior design of vehicles, CFD has become one of the three main design tools, comparable to analytical approaches and wind tunnel experiments. CFD-based design optimisation is an effective way to achieve the desired performance under the given constraints. However, due to the complexity of CFD, integrating CFD analysis into an intelligent optimisation algorithm is not straightforward: a CFD-based design problem usually has high dimensionality and multiple objectives and constraints, so an integrated architecture for CFD-based design optimisation is desirable. Our review of existing work has found that very few researchers have studied assistive tools to facilitate CFD-based design optimisation. In this paper, a multi-layer architecture and a general procedure are proposed to integrate different CFD toolsets with intelligent optimisation algorithms, parallel computing techniques and other techniques for efficient computation. In the proposed architecture, the integration is performed either at the code level or at the data level to fully utilise the capabilities of different assistive tools. Two intelligent algorithms are developed and embedded with parallel computing. These algorithms, together with the supportive architecture, lay a solid foundation for various applications of CFD-based design optimisation.
To illustrate the effectiveness of the proposed architecture and algorithms, case studies on the aerodynamic shape design of a hypersonic cruising vehicle are provided; the results show that the proposed architecture and algorithms perform successfully and efficiently in dealing with design optimisation involving over 200 design variables.

  6. Solving Fuzzy Fractional Differential Equations Using Zadeh's Extension Principle

    PubMed Central

    Ahmad, M. Z.; Hasan, M. K.; Abbasbandy, S.

    2013-01-01

    We study a fuzzy fractional differential equation (FFDE) and present its solution using Zadeh's extension principle. The proposed study extends the case of fuzzy differential equations of integer order. We also propose a numerical method to approximate the solution of FFDEs. To solve nonlinear problems, the proposed numerical method is then incorporated into an unconstrained optimisation technique. Several numerical examples are provided. PMID:24082853
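
    The core of Zadeh's extension principle, as used above, is to propagate a fuzzy number through a function cut-by-cut: the image of each alpha-cut [a, b] is [min f, max f] over that interval. The sketch below uses a triangular fuzzy number and dense sampling as a stand-in for the unconstrained optimiser mentioned in the abstract; all specific values are illustrative:

```python
# Zadeh's extension principle on alpha-cuts: the image of a fuzzy number
# under f is computed per cut as [min f, max f] over the cut interval.
import numpy as np

def alpha_cut(alpha, left=1.0, peak=2.0, right=4.0):
    """Alpha-cut [a, b] of a triangular fuzzy number (left, peak, right)."""
    return (left + alpha * (peak - left), right - alpha * (right - peak))

def extend(f, alpha, samples=1001):
    """Extension principle via optimisation (here: dense sampling) on the cut."""
    a, b = alpha_cut(alpha)
    xs = np.linspace(a, b, samples)
    ys = f(xs)
    return float(ys.min()), float(ys.max())

lo, hi = extend(lambda x: x * x, alpha=1.0)   # cut collapses to the peak
assert abs(lo - 4.0) < 1e-9 and abs(hi - 4.0) < 1e-9

lo, hi = extend(lambda x: x * x, alpha=0.0)   # full support [1, 4]
assert abs(lo - 1.0) < 1e-9 and abs(hi - 16.0) < 1e-9
```

    For nonlinear f, replacing the sampling with a numerical minimiser/maximiser over each cut recovers the optimisation-based formulation the abstract describes.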

  7. Optimisation of phenolic extraction from Averrhoa carambola pomace by response surface methodology and its microencapsulation by spray and freeze drying.

    PubMed

    Saikia, Sangeeta; Mahnot, Nikhil Kumar; Mahanta, Charu Lata

    2015-03-15

    Optimisation of polyphenol extraction from star fruit (Averrhoa carambola) pomace was carried out using response surface methodology. Two variables, viz. temperature (°C) and ethanol concentration (%), each at 5 levels (-1.414, -1, 0, +1 and +1.414), were used to design the optimisation model using a central composite rotatable design, where ±1.414 are the axial points, ±1 the factorial points and 0 the centre point of the design. A temperature of 40°C and an ethanol concentration of 65% were the optimised conditions for the response variables: total phenolic content, ferric reducing antioxidant capacity and 2,2-diphenyl-1-picrylhydrazyl scavenging activity. The reverse-phase high-pressure liquid chromatography chromatogram of the polyphenol extract showed eight phenolic acids and ascorbic acid. The extract was then encapsulated with maltodextrin (⩽ DE 20) by spray and freeze drying methods at three different concentrations. The highest encapsulating efficiency was obtained with the freeze-dried encapsulates (78-97%). The optimised model could be used for polyphenol extraction from star fruit pomace, and the microencapsulates can be incorporated into different food systems to enhance their antioxidant properties. Copyright © 2014 Elsevier Ltd. All rights reserved.
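
    The coded design points of a 2-factor central composite rotatable design like the one above can be generated directly: four factorial points at ±1, four axial points at ±1.414 (the rotatability condition alpha = 2^(k/4) = √2 for k = 2 factors), plus centre replicates. The centre-run count and the decoded temperature range below are illustrative assumptions:

```python
# Illustrative 2-factor central composite rotatable design (CCRD):
# factorial points at +/-1, axial points at +/-1.414, centre replicates.
import itertools
import math

AXIAL = round(math.sqrt(2.0), 3)              # 1.414, rotatability condition

factorial = list(itertools.product([-1, 1], repeat=2))
axial = [(-AXIAL, 0), (AXIAL, 0), (0, -AXIAL), (0, AXIAL)]
centre = [(0, 0)] * 5                         # replicate count is a common choice

design = factorial + axial + centre
assert len(design) == 13                      # 4 factorial + 4 axial + 5 centre

def decode(coded, low, high):
    """Map a coded level back to a real setting, e.g. temperature in deg C."""
    mid, half = (high + low) / 2.0, (high - low) / 2.0
    return mid + coded * half

# e.g. hypothetical factorial temperature levels spanning 30-50 deg C
assert decode(-1, 30, 50) == 30.0 and decode(1, 30, 50) == 50.0
```

    Fitting a quadratic response surface to the measured responses at these 13 runs is what yields the optimum (here reported as 40°C and 65% ethanol).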

  8. A CONCEPTUAL FRAMEWORK FOR MANAGING RADIATION DOSE TO PATIENTS IN DIAGNOSTIC RADIOLOGY USING REFERENCE DOSE LEVELS.

    PubMed

    Almén, Anja; Båth, Magnus

    2016-06-01

    The overall aim of the present work was to develop a conceptual framework for managing radiation dose in diagnostic radiology, with the intention of supporting optimisation. An optimisation process was first derived. The framework for managing radiation dose, based on the derived optimisation process, was then outlined. The optimisation process starts from four stages: providing equipment, establishing methodology, performing examinations and ensuring quality. It comprises a series of activities and actions at these stages. The current system of diagnostic reference levels is an activity in the last stage, ensuring quality. The system thus becomes a reactive activity that engages the core activity of the radiology department, performing examinations, only to a certain extent. Three reference dose levels (possible, expected and established) were assigned to the first three stages of the optimisation process, excluding ensuring quality. A reasonably achievable dose range is also derived, indicating an acceptable deviation from the established dose level. A reasonable radiation dose for a single patient is within this range. The suggested framework for managing radiation dose should be regarded as one part of the optimisation process. The optimisation process constitutes a variety of complementary activities, of which managing radiation dose is only one. This emphasises the need to take a holistic approach, integrating the optimisation process into different clinical activities. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  9. Variations of leaf longevity in tropical moist forests predicted by a trait-driven carbon optimality model

    DOE PAGES

    Xu, Xiangtao; Medvigy, David; Wright, Stuart Joseph; ...

    2017-07-04

    Leaf longevity (LL) varies more than 20-fold in tropical evergreen forests, but it remains unclear how to capture these variations using predictive models. Current theories of LL based on carbon optimisation principles are challenging to assess quantitatively because of uncertainty across species in the 'ageing rate': the rate at which leaf photosynthetic capacity declines with age. Here we present a meta-analysis of 49 species across temperate and tropical biomes, demonstrating that the ageing rate of photosynthetic capacity is positively correlated with the mass-based carboxylation rate of mature leaves. We assess an improved trait-driven carbon optimality model with in situ LL data for 105 species in two Panamanian forests. Additionally, we show that our model explains over 40% of the cross-species variation in LL under contrasting light environments. Collectively, our results reveal how variation in LL emerges from carbon optimisation constrained by both leaf structural traits and the abiotic environment.

  10. Variations of leaf longevity in tropical moist forests predicted by a trait-driven carbon optimality model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Xiangtao; Medvigy, David; Wright, Stuart Joseph

    Leaf longevity (LL) varies more than 20-fold in tropical evergreen forests, but it remains unclear how to capture these variations using predictive models. Current theories of LL based on carbon optimisation principles are challenging to assess quantitatively because of uncertainty across species in the 'ageing rate': the rate at which leaf photosynthetic capacity declines with age. Here we present a meta-analysis of 49 species across temperate and tropical biomes, demonstrating that the ageing rate of photosynthetic capacity is positively correlated with the mass-based carboxylation rate of mature leaves. We assess an improved trait-driven carbon optimality model with in situ LL data for 105 species in two Panamanian forests. Additionally, we show that our model explains over 40% of the cross-species variation in LL under contrasting light environments. Collectively, our results reveal how variation in LL emerges from carbon optimisation constrained by both leaf structural traits and the abiotic environment.

  11. Systematic development and implementation of interventions to OPtimise Health Literacy and Access (Ophelia).

    PubMed

    Beauchamp, Alison; Batterham, Roy W; Dodson, Sarity; Astbury, Brad; Elsworth, Gerald R; McPhee, Crystal; Jacobson, Jeanine; Buchbinder, Rachelle; Osborne, Richard H

    2017-03-03

    The need for healthcare strengthening to enhance equity is critical, requiring systematic approaches that focus on those experiencing poorer access and outcomes. This project developed and tested the Ophelia (OPtimising HEalth LIteracy and Access) approach for co-design of interventions to improve health literacy and equity of access. Eight principles guided this development: Outcomes focused; Equity driven; Needs diagnosis; Co-design; Driven by local wisdom; Sustainable; Responsive; and Systematically applied. We report the application of the Ophelia process, where proof-of-concept was defined as successful application of the principles. Nine sites were briefed on the aims of the project around health literacy, co-design and quality improvement. The sites were rural/metropolitan, small/large hospitals, community health centres or municipalities. Each site identified its own priorities for improvement; collected health literacy data using the Health Literacy Questionnaire (HLQ) within the identified priority groups; engaged staff in co-design workshops to generate ideas for improvement; developed program-logic models; and implemented their projects using Plan-Do-Study-Act (PDSA) cycles. Evaluation included assessment of impacts on organisations, practitioners and service users, and of whether the principles were applied. Sites undertook co-design workshops involving discussion of service user needs informed by HLQ (n = 813) and interview data. Sites generated between 21 and 78 intervention ideas and then planned their selected interventions through program-logic models. Sites successfully implemented interventions and refined them progressively with PDSA cycles. Interventions generally involved one of four pathways: development of clinician skills and resources for health literacy, engagement of community volunteers to disseminate health promotion messages, direct impact on consumers' health literacy, and redesign of existing services.
Evidence of application of the principles was found in all sites. The Ophelia approach guided identification of health literacy issues at each participating site and the development and implementation of locally appropriate solutions. The eight principles provided a framework that allowed flexible application of the Ophelia approach and generation of a diverse set of interventions. Changes were observed at organisational, staff, and community member levels. The Ophelia approach can be used to generate health service improvements that enhance health outcomes and address inequity of access to healthcare.

  12. Is ICRP guidance on the use of reference levels consistent?

    PubMed

    Hedemann-Jensen, Per; McEwan, Andrew C

    2011-12-01

    In ICRP 103, which has replaced ICRP 60, it is stated that no fundamental changes have been introduced compared with ICRP 60. This is true, except that reference levels in emergency and existing exposure situations appear to be applied inconsistently, both there and in the related publications ICRP 109 and ICRP 111. ICRP 103 emphasises that focus should be on the residual doses after the implementation of protection strategies in emergency and existing exposure situations. If possible, the result of an optimised protection strategy should bring the residual dose below the reference level. Thus the reference level represents the maximum acceptable residual dose after an optimised protection strategy has been implemented. It is not an 'off-the-shelf item' that can be set independently of the prevailing situation; it should be determined as part of the process of optimising the protection strategy. If not, protection would be sub-optimised. However, in ICRP 103 some inconsistent concepts have been introduced, e.g. in paragraph 279, which states: 'All exposures above or below the reference level should be subject to optimisation of protection, and particular attention should be given to exposures above the reference level'. If, in fact, all exposures above and below reference levels are subject to the process of optimisation, reference levels appear superfluous. It could be argued that if optimisation of protection below a fixed reference level is necessary, then the reference level has been set too high at the outset. Up until the last phase of the preparation of ICRP 103, the concept of a dose constraint was recommended to constrain the optimisation of protection in all types of exposure situations. In the final phase, the term 'dose constraint' was changed to 'reference level' for emergency and existing exposure situations. However, it seems that in ICRP 103 it was not fully recognised that dose constraints and reference levels are conceptually different.
The use of reference levels in radiological protection is reviewed. It is concluded that the recommendations in ICRP 103 and related ICRP publications seem to be inconsistent regarding the use of reference levels in existing and emergency exposure situations.

  13. Dietary optimisation with omega-3 and omega-6 fatty acids for 12-23-month-old overweight and obese children in urban Jakarta.

    PubMed

    Cahyaningrum, Fitrianna; Permadhi, Inge; Ansari, Muhammad Ridwan; Prafiantini, Erfi; Rachman, Purnawati Hustina; Agustina, Rina

    2016-12-01

    Diets with a specific omega-6/omega-3 fatty acid ratio have been reported to have favourable effects in controlling obesity in adults. However, the development of a local-food-based diet that considers the ratio of these fatty acids to improve the nutritional status of overweight and obese children is lacking. Therefore, using linear programming, we developed an affordable optimised diet focusing on the ratio of omega-6/omega-3 fatty acid intake for obese children aged 12-23 months. A cross-sectional study was conducted in two subdistricts of East Jakarta involving 42 normal-weight and 29 overweight and obese children, grouped on the basis of their body mass index-for-age Z scores and selected through multistage random sampling. A 24-h recall was performed for 3 non-consecutive days to assess the children's dietary intake levels and food patterns. We conducted group and structured interviews as well as market surveys to identify food availability, accessibility and affordability. Three types of affordable optimised 7-day diet meal plans were developed on the basis of breastfeeding status. The optimised diet plans fulfilled energy and macronutrient intake requirements within the acceptable macronutrient distribution range. The omega-6/omega-3 fatty acid ratio in the optimised diets was between 4 and 10. Moreover, the micronutrient intake level was within the range of the recommended daily allowance or estimated average requirement and the tolerable upper intake level. The optimisation model used in this study provides a mathematical solution for economical diet meal plans that approximate the nutrient requirements of overweight and obese children.
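
    A linear-programming diet model of this kind can be sketched in a few lines: minimise food cost subject to an energy floor and the 4-10 omega-6/omega-3 ratio, linearised as 4·w3 ≤ w6 ≤ 10·w3. All food data and the energy target below are invented for illustration, not taken from the study:

```python
# Minimal LP sketch of the diet-optimisation idea with an invented
# three-food basket; the ratio constraint is linearised so the whole
# problem stays a linear program.
from scipy.optimize import linprog

# columns: rice, fish, vegetable oil (per 100 g serving, invented data)
cost   = [0.2, 1.5, 0.5]        # currency units
energy = [130, 100, 880]        # kcal
omega6 = [0.1, 0.2, 50.0]       # g
omega3 = [0.01, 1.0, 7.0]       # g

A_ub = [
    [-e for e in energy],                                   # energy >= 900 kcal
    [o6 - 10 * o3 for o6, o3 in zip(omega6, omega3)],       # w6 <= 10 * w3
    [4 * o3 - o6 for o6, o3 in zip(omega6, omega3)],        # w6 >= 4 * w3
]
b_ub = [-900, 0, 0]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
assert res.success

w6 = sum(o * x for o, x in zip(omega6, res.x))
w3 = sum(o * x for o, x in zip(omega3, res.x))
assert 4 - 1e-6 <= w6 / w3 <= 10 + 1e-6   # ratio lands in the target band
```

    The real model would add micronutrient rows (RDA floors, upper-intake ceilings) and market-survey prices in exactly the same pattern, one inequality row per constraint.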

  14. Optimising and communicating options for the control of invasive plant disease when there is epidemiological uncertainty.

    PubMed

    Cunniffe, Nik J; Stutt, Richard O J H; DeSimone, R Erik; Gottwald, Tim R; Gilligan, Christopher A

    2015-04-01

    Although local eradication is routinely attempted following introduction of disease into a new region, failure is commonplace. Epidemiological principles governing the design of successful control are not well-understood. We analyse factors underlying the effectiveness of reactive eradication of localised outbreaks of invading plant disease, using citrus canker in Florida as a case study, although our results are largely generic, and apply to other plant pathogens (as we show via our second case study, citrus greening). We demonstrate how to optimise control via removal of hosts surrounding detected infection (i.e. localised culling) using a spatially-explicit, stochastic epidemiological model. We show how to define optimal culling strategies that take account of stochasticity in disease spread, and how the effectiveness of disease control depends on epidemiological parameters determining pathogen infectivity, symptom emergence and spread, the initial level of infection, and the logistics and implementation of detection and control. We also consider how optimal culling strategies are conditioned on the levels of risk acceptance/aversion of decision makers, and show how to extend the analyses to account for potential larger-scale impacts of a small-scale outbreak. Control of local outbreaks by culling can be very effective, particularly when started quickly, but the optimum strategy and its performance are strongly dependent on epidemiological parameters (particularly those controlling dispersal and the extent of any cryptic infection, i.e. infectious hosts prior to symptoms), the logistics of detection and control, and the level of local and global risk that is deemed to be acceptable. A version of the model we developed to illustrate our methodology and results to an audience of stakeholders, including policy makers, regulators and growers, is available online as an interactive, user-friendly interface at http://www.webidemics.com/. 
This version of our model allows the complex epidemiological principles that underlie our results to be communicated to a non-specialist audience.
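
    The mechanics of the model class described above (kernel-based spread, a cryptic period before detection, then culling of all hosts within a radius of each detection) can be sketched as a toy simulation. The kernel, rates, host density and culling radii are illustrative parameters, not the fitted citrus-canker values:

```python
# Toy spatially explicit stochastic outbreak with reactive radius culling.
# Hosts are points in the unit square; infectious hosts challenge
# susceptibles through an exponential dispersal kernel, and detection
# after a cryptic delay triggers culling within a given radius.
import math
import random

def simulate(cull_radius, seed=1, n=200, steps=60, beta=0.9, detect_delay=5):
    rng = random.Random(seed)
    hosts = [(rng.random(), rng.random()) for _ in range(n)]
    state = {i: "S" for i in range(n)}      # S, I (cryptic) or R (culled)
    infected_at = {0: 0}
    state[0] = "I"
    for t in range(1, steps):
        # dispersal: each infectious host challenges nearby susceptibles
        for i, s in list(state.items()):
            if s != "I":
                continue
            xi, yi = hosts[i]
            for j, sj in state.items():
                if sj != "S":
                    continue
                d = math.hypot(hosts[j][0] - xi, hosts[j][1] - yi)
                if rng.random() < beta * math.exp(-d / 0.05):
                    state[j] = "I"
                    infected_at[j] = t
        # detection after the cryptic period triggers culling within radius
        for i in [k for k, s in state.items() if s == "I"]:
            if t - infected_at[i] >= detect_delay:
                xi, yi = hosts[i]
                for j in range(n):
                    if state[j] != "R" and math.hypot(hosts[j][0] - xi,
                                                      hosts[j][1] - yi) <= cull_radius:
                        state[j] = "R"
    return sum(1 for s in state.values() if s != "S")   # hosts lost

# Sweeping the radius trades hosts culled now against infections prevented;
# the optimum depends on dispersal and the cryptic period, as in the paper.
losses = {r: simulate(r) for r in (0.0, 0.05, 0.3)}
assert all(0 < v <= 200 for v in losses.values())
```

    Optimising the culling radius then means repeating such stochastic runs for each candidate radius and picking the one minimising expected (or risk-weighted) host loss.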

  15. Optimising and Communicating Options for the Control of Invasive Plant Disease When There Is Epidemiological Uncertainty

    PubMed Central

    Cunniffe, Nik J.; Stutt, Richard O. J. H.; DeSimone, R. Erik; Gottwald, Tim R.; Gilligan, Christopher A.

    2015-01-01

    Although local eradication is routinely attempted following introduction of disease into a new region, failure is commonplace. Epidemiological principles governing the design of successful control are not well-understood. We analyse factors underlying the effectiveness of reactive eradication of localised outbreaks of invading plant disease, using citrus canker in Florida as a case study, although our results are largely generic, and apply to other plant pathogens (as we show via our second case study, citrus greening). We demonstrate how to optimise control via removal of hosts surrounding detected infection (i.e. localised culling) using a spatially-explicit, stochastic epidemiological model. We show how to define optimal culling strategies that take account of stochasticity in disease spread, and how the effectiveness of disease control depends on epidemiological parameters determining pathogen infectivity, symptom emergence and spread, the initial level of infection, and the logistics and implementation of detection and control. We also consider how optimal culling strategies are conditioned on the levels of risk acceptance/aversion of decision makers, and show how to extend the analyses to account for potential larger-scale impacts of a small-scale outbreak. Control of local outbreaks by culling can be very effective, particularly when started quickly, but the optimum strategy and its performance are strongly dependent on epidemiological parameters (particularly those controlling dispersal and the extent of any cryptic infection, i.e. infectious hosts prior to symptoms), the logistics of detection and control, and the level of local and global risk that is deemed to be acceptable. A version of the model we developed to illustrate our methodology and results to an audience of stakeholders, including policy makers, regulators and growers, is available online as an interactive, user-friendly interface at http://www.webidemics.com/. 
This version of our model allows the complex epidemiological principles that underlie our results to be communicated to a non-specialist audience. PMID:25874622

  16. Genetic algorithm-based improved DOA estimation using fourth-order cumulants

    NASA Astrophysics Data System (ADS)

    Ahmed, Ammar; Tufail, Muhammad

    2017-05-01

    Genetic algorithm (GA)-based direction of arrival (DOA) estimation is proposed using fourth-order cumulants (FOC) and the ESPRIT principle, resulting in the Multiple Invariance Cumulant ESPRIT algorithm. In existing FOC ESPRIT formulations, only one invariance is utilised to estimate DOAs. The unused multiple invariances (MIs) should be exploited simultaneously in order to improve estimation accuracy. In this paper, a fitness function based on a carefully designed cumulant matrix is developed which incorporates the MIs present in the sensor array. Better DOA estimation can be achieved by minimising this fitness function. Moreover, the effectiveness of Newton's method as well as of the GA for this optimisation problem is illustrated. Simulation results show that the proposed algorithm provides improved estimation accuracy compared with existing algorithms, especially in the case of low SNR, few snapshots, closely spaced sources, and high signal and noise correlation. Moreover, it is observed that optimisation using Newton's method is more likely to converge to false local optima, leading to erroneous results, whereas GA-based optimisation is attractive due to its global optimisation capability.
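
    The GA-versus-local-search point can be illustrated with a minimal real-coded GA minimising a fitness over candidate angles. The fitness below is a simple multimodal stand-in with a known minimum, not the paper's cumulant-matrix cost; population size, elitism fraction and mutation scale are illustrative:

```python
# Minimal real-coded GA of the kind used to minimise a fitness function
# over candidate DOAs; elitist truncation selection, mean crossover and
# Gaussian mutation. The fitness is an illustrative stand-in.
import math
import random

rng = random.Random(42)
TRUE_DOA = 20.0                                   # degrees, illustrative

def fitness(theta):
    """Stand-in cost: minimised when theta hits the true arrival angle."""
    return math.sin(math.radians(theta - TRUE_DOA)) ** 2 \
           + 0.3 * (1 - math.cos(math.radians(3 * (theta - TRUE_DOA))))

def ga(pop_size=40, gens=80, lo=-90.0, hi=90.0, mut=2.0):
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 4]              # elitist truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = 0.5 * (a + b) + rng.gauss(0, mut)   # crossover + mutation
            children.append(min(hi, max(lo, child)))
        pop = elite + children
    return min(pop, key=fitness)

est = ga()
assert abs(est - TRUE_DOA) < 1.0   # the GA settles in the global basin
```

    A Newton iteration started in a side ripple of such a landscape can stall at a local optimum, which is the failure mode the abstract reports; the population search avoids it at higher cost per evaluation budget.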

  17. Tanlock loop noise reduction using an optimised phase detector

    NASA Astrophysics Data System (ADS)

    Al-kharji Al-Ali, Omar; Anani, Nader; Al-Qutayri, Mahmoud; Al-Araji, Saleh

    2013-06-01

    This article proposes a time-delay digital tanlock loop (TDTL) with a new phase detector (PD) design optimised for noise reduction, making it amenable to applications that require a wide lock range without sacrificing noise immunity. The proposed system employs two phase detectors: one optimises noise immunity, while the other controls the acquisition time of the TDTL system. With the modified phase detector, the second- and higher-order harmonics can be reduced by at least 50% compared with the conventional TDTL system. The proposed system was simulated and tested in MATLAB/Simulink using frequency-step inputs and inputs corrupted with varying levels of harmonic distortion. A hardware prototype of the system was implemented on a field-programmable gate array (FPGA). The practical and simulation results indicate considerable improvement in the noise performance of the proposed system over the conventional TDTL architecture.

  18. ICRP publication 121: radiological protection in paediatric diagnostic and interventional radiology.

    PubMed

    Khong, P-L; Ringertz, H; Donoghue, V; Frush, D; Rehani, M; Appelgate, K; Sanchez, R

    2013-04-01

    Paediatric patients have a higher average risk of developing cancer compared with adults receiving the same dose. The longer life expectancy in children allows more time for any harmful effects of radiation to manifest, and developing organs and tissues are more sensitive to the effects of radiation. This publication aims to provide guiding principles of radiological protection for referring clinicians and clinical staff performing diagnostic imaging and interventional procedures for paediatric patients. It begins with a brief description of the basic concepts of radiological protection, followed by the general aspects of radiological protection, including principles of justification and optimisation. Guidelines and suggestions for radiological protection in specific modalities - radiography and fluoroscopy, interventional radiology, and computed tomography - are subsequently covered in depth. The report concludes with a summary and recommendations. The importance of rigorous justification of radiological procedures is emphasised for every procedure involving ionising radiation, and the use of imaging modalities that are non-ionising should always be considered. The basic aim of optimisation of radiological protection is to adjust imaging parameters and institute protective measures such that the required image is obtained with the lowest possible dose of radiation, and that net benefit is maximised to maintain sufficient quality for diagnostic interpretation. Special consideration should be given to the availability of dose reduction measures when purchasing new imaging equipment for paediatric use. One of the unique aspects of paediatric imaging is the wide range of patient size (and weight), which requires special attention to optimisation and modification of equipment, technique, and imaging parameters.
Examples of good radiographic and fluoroscopic technique include attention to patient positioning, field size and adequate collimation, use of protective shielding, optimisation of exposure factors, use of pulsed fluoroscopy, limiting fluoroscopy time, etc. Major paediatric interventional procedures should be performed by experienced paediatric interventional operators, and a second, specific level of training in radiological protection is desirable (in some countries, this is mandatory). For computed tomography, dose reduction should be optimised by the adjustment of scan parameters (such as mA, kVp, and pitch) according to patient weight or age, region scanned, and study indication (e.g. images with greater noise should be accepted if they are of sufficient diagnostic quality). Other strategies include restricting multiphase examination protocols, avoiding overlapping of scan regions, and only scanning the area in question. Up-to-date dose reduction technology such as tube current modulation, organ-based dose modulation, auto kV technology, and iterative reconstruction should be utilised when appropriate. It is anticipated that this publication will assist institutions in encouraging the standardisation of procedures, and that it may help increase awareness and ultimately improve practices for the benefit of patients. Copyright © 2012. Published by Elsevier Ltd.

  19. Convex optimisation approach to constrained fuel optimal control of spacecraft in close relative motion

    NASA Astrophysics Data System (ADS)

    Massioni, Paolo; Massari, Mauro

    2018-05-01

    This paper describes an interesting and powerful approach to the constrained fuel-optimal control of spacecraft in close relative motion. The proposed approach is well suited to problems with linear dynamic equations, and therefore fits the case of spacecraft flying in close relative motion. If the solution of the optimisation is approximated as a polynomial in the time variable, the problem can be approached with a technique developed in the control engineering community, known as "Sum Of Squares" (SOS), and the constraints can be reduced to bounds on the polynomials. This technique allows polynomial bounding problems to be rewritten as convex optimisation problems, at the cost of a certain amount of conservatism. The principles of the technique are explained and some applications to spacecraft flying in close relative motion are shown.
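
    The SOS machinery itself is heavy, but the underlying convexity of fuel-optimal control under linear dynamics can be shown with a simpler, discretised stand-in: minimising total |u| for a double-integrator transfer becomes a linear program once u is split into positive and negative parts. The dynamics, horizon and initial state below are illustrative, not the paper's spacecraft model:

```python
# Discretised L1 fuel-optimal transfer for a double integrator as an LP:
# minimise sum |u_k| subject to reaching the origin after N steps.
import numpy as np
from scipy.optimize import linprog

N = 10
A = np.array([[1.0, 1.0], [0.0, 1.0]])     # discrete double integrator
B = np.array([0.5, 1.0])
x0 = np.array([1.0, 0.0])

# x_N = A^N x0 + sum_k A^(N-1-k) B u_k ; require x_N = 0
C = np.column_stack([np.linalg.matrix_power(A, N - 1 - k) @ B for k in range(N)])
rhs = -np.linalg.matrix_power(A, N) @ x0

# variables [u_plus, u_minus] >= 0 with u = u_plus - u_minus;
# fuel = sum(u_plus + u_minus), a linear objective
A_eq = np.hstack([C, -C])
c = np.ones(2 * N)
res = linprog(c, A_eq=A_eq, b_eq=rhs, bounds=[(0, None)] * (2 * N))
assert res.success

u = res.x[:N] - res.x[N:]
xN = np.linalg.matrix_power(A, N) @ x0 + C @ u
assert np.allclose(xN, 0, atol=1e-6)       # terminal constraint satisfied
```

    The SOS route in the paper keeps u as a polynomial in continuous time and enforces the bounds for all t rather than at grid points, trading this discretisation for some conservatism.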

  20. Analysis of power gating in different hierarchical levels of 2MB cache, considering variation

    NASA Astrophysics Data System (ADS)

    Jafari, Mohsen; Imani, Mohsen; Fathipour, Morteza

    2015-09-01

    This article examines the power gating technique at different hierarchical levels of static random-access memory (SRAM) design, including cell, row, bank and entire cache memory, in 16 nm FinFET technology. Different SRAM cell structures such as 6T, 8T, 9T and 10T are used in the design of a 2MB cache memory. The power reduction of the entire cache memory employing cell-level optimisation is 99.7%, at the expense of area and other stability overheads. The power saving of cell-level optimisation is 3× (1.2×) higher than power gating at the cache (bank) level due to its superior selectivity. The access delay times are allowed to increase by 4% at the same energy-delay product to achieve the best power reduction for each supply voltage and optimisation level. The results show that row-level power gating is best for optimising the power of the entire cache with the fewest drawbacks. Comparison of cells shows that the cells whose bodies have higher power consumption are the best candidates for the power gating technique in row-level optimisation. The technique has the lowest percentage of saving at the minimum energy point (MEP) of the design. Power gating also improves the variation of power in all structures by at least 70%.

  1. Optimisation of strain selection in evolutionary continuous culture

    NASA Astrophysics Data System (ADS)

    Bayen, T.; Mairet, F.

    2017-12-01

    In this work, we study a minimal time control problem for a perfectly mixed continuous culture with n ≥ 2 species and one limiting resource. The model that we consider includes a mutation factor for the microorganisms. Our aim is to provide optimal feedback control laws to optimise the selection of the species of interest. Thanks to Pontryagin's Principle, we derive optimality conditions on the optimal controls and introduce a sub-optimal control law based on a most rapid approach to a singular arc that depends on the initial condition. Using adaptive dynamics theory, we also study a simplified version of this model which allows us to introduce a near-optimal strategy.
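
    As a hedged illustration of the kind of system the abstract studies (all parameters assumed, not from the paper), the following sketch simulates a chemostat with two Monod-type competitors and a bang-bang dilution feedback in the spirit of a most rapid approach to a substrate set-point:

```python
import numpy as np

# Illustrative sketch (parameters assumed, not from the paper): a chemostat
# with two species competing for one substrate, and a bang-bang dilution-rate
# feedback that drives the substrate towards a set-point -- a "most rapid
# approach" style law. Species 1 grows faster at every substrate level here,
# so its fraction should increase under any admissible dilution policy.
def mu1(s): return 3.0 * s / (1.0 + s)   # Monod growth, species of interest
def mu2(s): return 2.0 * s / (1.0 + s)   # slower competitor

s_in, s_bar = 10.0, 2.0                  # inflow substrate, substrate set-point
D_min, D_max = 0.2, 1.5                  # admissible dilution rates
dt, T = 0.001, 30.0

s, x1, x2 = 1.0, 0.5, 0.5
frac0 = x1 / (x1 + x2)
for _ in range(int(T / dt)):
    D = D_max if s < s_bar else D_min    # bang-bang approach to s = s_bar
    ds = D * (s_in - s) - mu1(s) * x1 - mu2(s) * x2
    dx1 = (mu1(s) - D) * x1
    dx2 = (mu2(s) - D) * x2
    s, x1, x2 = s + dt * ds, x1 + dt * dx1, x2 + dt * dx2
frac = x1 / (x1 + x2)
print(f"species-1 fraction: {frac0:.2f} -> {frac:.2f}")
```

    The paper's optimality conditions concern where to place the singular arc; this sketch only shows the feedback structure such a law takes.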

  2. An illustration of new methods in machine condition monitoring, Part I: stochastic resonance

    NASA Astrophysics Data System (ADS)

    Worden, K.; Antoniadou, I.; Marchesiello, S.; Mba, C.; Garibaldi, L.

    2017-05-01

    There have been many recent developments in the application of data-based methods to machine condition monitoring. A powerful methodology based on machine learning has emerged, where diagnostics are based on a two-step procedure: extraction of damage-sensitive features, followed by unsupervised learning (novelty detection) or supervised learning (classification). The objective of the current pair of papers is simply to illustrate one state-of-the-art procedure for each step, using synthetic data representative of reality in terms of size and complexity. The first paper in the pair will deal with feature extraction. Although some papers have appeared in the recent past considering stochastic resonance as a means of amplifying damage information in signals, they have largely relied on ad hoc specifications of the resonator used. In contrast, the current paper will adopt a principled optimisation-based approach to the resonator design. The paper will also show that a discrete dynamical system can provide all the benefits of a continuous system while also providing a considerable speed-up in simulation time, facilitating the optimisation approach.
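
    A minimal sketch of the discrete resonator idea (not the paper's optimised design; all constants are illustrative assumptions) is a noisy Euler map of the overdamped double-well system, driven by a weak sub-threshold periodic signal:

```python
import numpy as np

# Minimal sketch of a discrete bistable resonator (not the paper's optimised
# design): an Euler map of the overdamped double-well system
#   x[n+1] = x[n] + dt * (a*x[n] - b*x[n]**3) + weak signal + noise.
# With a suitable noise level the state hops between the wells at +/-1 in
# sympathy with the weak drive -- the stochastic resonance effect.
rng = np.random.default_rng(0)
n, dt = 20000, 0.01
a, b = 1.0, 1.0                       # wells at x = +/- sqrt(a/b) = +/- 1
amp, f = 0.1, 0.5                     # weak periodic drive (sub-threshold)
sigma = 0.8                           # noise intensity (the tuning knob)

t = np.arange(n) * dt
signal = amp * np.sin(2 * np.pi * f * t)
x = np.empty(n)
x[0] = 1.0
for k in range(n - 1):
    noise = sigma * np.sqrt(dt) * rng.standard_normal()
    x[k + 1] = x[k] + dt * (a * x[k] - b * x[k] ** 3 + signal[k]) + noise
print("wells visited:", x.min() < -0.5 and x.max() > 0.5)
```

    The optimisation the paper advocates would tune parameters such as sigma (and the map coefficients) to maximise a signal-to-noise measure, rather than fixing them ad hoc as done here.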

  3. Characterization, optimisation and process robustness of a co-processed mannitol for the development of orally disintegrating tablets.

    PubMed

    Soh, Josephine Lay Peng; Grachet, Maud; Whitlock, Mark; Lukas, Timothy

    2013-02-01

    This study fully assesses a commercially available co-processed mannitol for its usefulness as an off-the-shelf excipient for developing orally disintegrating tablets (ODTs) by direct compression on a pilot scale (up to 4 kg). The work encompassed material characterisation, formulation optimisation and process robustness. Overall, this co-processed mannitol possessed favourable physical attributes including low hygroscopicity and compactibility. Two designs of experiments (DoEs) were used to screen and optimise the placebo formulation. Xylitol and crospovidone concentrations were found to have the most significant impact on disintegration time (p < 0.05). Higher xylitol concentrations retarded disintegration. Avicel PH102 promoted faster disintegration than PH101 at higher levels of xylitol. Without xylitol, higher crospovidone concentrations yielded faster disintegration and reduced tablet friability. Lubrication sensitivity studies were later conducted at two fill loads and three levels each of lubricant concentration and number of blend rotations. Even at 75% fill load, the design space plot showed that 1.5% lubricant and 300 blend revolutions were sufficient to manufacture ODTs with ≤ 0.1% friability that disintegrated within 15 s. This study also describes results using a modified disintegration method based on the texture analyser as an alternative to the USP method.
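
    The study's actual designs are not reproduced here, but the screening logic of a two-level DoE can be sketched with a synthetic response (the factor coding and response model below are hypothetical):

```python
from itertools import product

# Illustrative two-level full-factorial screening sketch (not the study's
# actual DoE): three coded factors at levels -1/+1, with a synthetic linear
# response standing in for measured disintegration time. The main effect of a
# factor is mean(response at +1) - mean(response at -1).
factors = ["xylitol", "crospovidone", "avicel_grade"]   # hypothetical coding
runs = list(product([-1, +1], repeat=len(factors)))

def response(run):
    x1, x2, x3 = run
    return 30.0 + 6.0 * x1 - 4.0 * x2 + 0.0 * x3   # synthetic model, seconds

def main_effect(i):
    hi = [response(r) for r in runs if r[i] == +1]
    lo = [response(r) for r in runs if r[i] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effects = {f: main_effect(i) for i, f in enumerate(factors)}
print(effects)   # xylitol raises, crospovidone lowers disintegration time
```

    The signs of the synthetic model mirror the abstract's finding (xylitol retards disintegration, crospovidone accelerates it); a real screening DoE would fit these effects from measured responses and test them for significance.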

  4. Design of optimised backstepping controller for the synchronisation of chaotic Colpitts oscillator using shark smell algorithm

    NASA Astrophysics Data System (ADS)

    Fouladi, Ehsan; Mojallali, Hamed

    2018-01-01

    In this paper, an adaptive backstepping controller has been tuned to synchronise two chaotic Colpitts oscillators in a master-slave configuration. The parameters of the controller are determined using the shark smell optimisation (SSO) algorithm. Numerical results are presented and compared with those of the particle swarm optimisation (PSO) algorithm. Simulation results show better performance in terms of accuracy and convergence for the proposed optimised method compared to the PSO-optimised controller or a non-optimised backstepping controller.
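
    The shark smell algorithm itself is not reproduced here; the following sketch shows the comparison algorithm from the abstract, a standard PSO, tuning two parameters against a stand-in objective (a shifted quadratic assumed to mimic an accumulated synchronisation-error cost):

```python
import numpy as np

# Generic particle swarm optimisation sketch (the comparison algorithm in the
# abstract; the shark smell algorithm is not reproduced here). The objective
# is a stand-in for an accumulated synchronisation-error cost over candidate
# controller gains -- here simply a shifted quadratic with a known minimum.
rng = np.random.default_rng(1)

def cost(p):                       # stand-in for sync-error of gains p
    return np.sum((p - np.array([2.0, -1.0])) ** 2)

n_particles, n_iter, dim = 30, 200, 2
w, c1, c2 = 0.7, 1.5, 1.5          # inertia, cognitive, social weights
pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()
pbest_val = np.array([cost(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([cost(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best gains:", gbest, "cost:", cost(gbest))
```

    In the paper's setting, `cost` would integrate the master-slave synchronisation error over a simulation of the two Colpitts oscillators under the candidate backstepping gains.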

  5. Student Recruitment and Relationship Marketing--Convergence or Contortion?

    ERIC Educational Resources Information Center

    Beneke, J.

    2011-01-01

    This article investigates the intersection of student recruitment and relationship marketing in the public higher education sector. The key objectives of this analysis are to understand if a strategic fit exists and whether South African institutions are indeed embracing the principles of relationship marketing in order to optimise their student…

  6. CAMELOT: Computational-Analytical Multi-fidElity Low-thrust Optimisation Toolbox

    NASA Astrophysics Data System (ADS)

    Di Carlo, Marilena; Romero Martin, Juan Manuel; Vasile, Massimiliano

    2018-03-01

    Computational-Analytical Multi-fidElity Low-thrust Optimisation Toolbox (CAMELOT) is a toolbox for the fast preliminary design and optimisation of low-thrust trajectories. It solves highly complex combinatorial problems to plan multi-target missions characterised by long spirals including different perturbations. To do so, CAMELOT implements a novel multi-fidelity approach combining analytical surrogate modelling and accurate computational estimations of the mission cost. Decisions are then made using two optimisation engines included in the toolbox: a single-objective global optimiser and a combinatorial optimisation algorithm. CAMELOT has been applied to a variety of case studies: from the design of interplanetary trajectories to the optimal de-orbiting of space debris and from the deployment of constellations to on-orbit servicing. In this paper, the main elements of CAMELOT are described and two examples, solved using the toolbox, are presented.

  7. Metaheuristic optimisation methods for approximate solving of singular boundary value problems

    NASA Astrophysics Data System (ADS)

    Sadollah, Ali; Yadav, Neha; Gao, Kaizhou; Su, Rong

    2017-07-01

    This paper presents a novel approximation technique based on metaheuristics and weighted residual function (WRF) for tackling singular boundary value problems (BVPs) arising in engineering and science. With the aid of certain fundamental concepts of mathematics, Fourier series expansion, and metaheuristic optimisation algorithms, singular BVPs can be approximated as an optimisation problem with boundary conditions as constraints. The target is to minimise the WRF (i.e. error function) constructed in approximation of BVPs. The scheme involves generational distance metric for quality evaluation of the approximate solutions against exact solutions (i.e. error evaluator metric). Four test problems including two linear and two non-linear singular BVPs are considered in this paper to check the efficiency and accuracy of the proposed algorithm. The optimisation task is performed using three different optimisers including the particle swarm optimisation, the water cycle algorithm, and the harmony search algorithm. Optimisation results obtained show that the suggested technique can be successfully applied for approximate solving of singular BVPs.
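
    The idea of recasting a singular BVP as an optimisation over a trial expansion can be sketched as follows. The example problem (y'' + (2/x)y' + y = 0 with y(0) = 1, y'(0) = 0, exact solution sin(x)/x) is an assumed illustration, not one of the paper's four test problems, and differential evolution stands in for the three optimisers used there:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Sketch of the weighted-residual idea on a classic singular BVP (an assumed
# example, not one of the paper's test problems):
#   y'' + (2/x) y' + y = 0,  y(0) = 1, y'(0) = 0,  exact solution sin(x)/x.
# The trial function y = 1 + c1*x^2 + c2*x^4 satisfies both conditions by
# construction, so the metaheuristic only has to minimise the squared
# residual at collocation points (the singular point x = 0 is avoided).
xs = np.linspace(0.05, 1.0, 40)

def wrf(c):
    c1, c2 = c
    y = 1 + c1 * xs**2 + c2 * xs**4
    dy = 2 * c1 * xs + 4 * c2 * xs**3
    d2y = 2 * c1 + 12 * c2 * xs**2
    res = d2y + (2.0 / xs) * dy + y
    return np.sum(res**2)

opt = differential_evolution(wrf, bounds=[(-1, 1), (-1, 1)], seed=0, tol=1e-10)
c1, c2 = opt.x
approx = 1 + c1 * xs**2 + c2 * xs**4
exact = np.sin(xs) / xs
print("max abs error:", np.max(np.abs(approx - exact)))
```

    The recovered coefficients sit close to the Taylor values -1/6 and 1/120 of sin(x)/x; the paper uses richer Fourier expansions and handles boundary conditions as explicit constraints rather than by construction.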

  8. SLA-based optimisation of virtualised resource for multi-tier web applications in cloud data centres

    NASA Astrophysics Data System (ADS)

    Bi, Jing; Yuan, Haitao; Tie, Ming; Tan, Wei

    2015-10-01

    Dynamic virtualised resource allocation is the key to quality-of-service assurance for multi-tier web application services in cloud data centres. In this paper, we develop a self-management architecture of cloud data centres with a virtualisation mechanism for multi-tier web application services. Based on this architecture, we establish a flexible hybrid queueing model to determine the number of virtual machines for each tier of the virtualised application service environment. In addition, we propose a non-linear constrained optimisation problem with restrictions defined in the service level agreement. Furthermore, we develop a heuristic mixed optimisation algorithm to maximise the profit of cloud infrastructure providers and to meet performance requirements from different clients as well. Finally, we compare the effectiveness of our dynamic allocation strategy with two other allocation strategies. The simulation results show that the proposed resource allocation method is efficient in improving overall performance and reducing resource energy cost.
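
    The paper's hybrid queueing model is not reproduced here; a heavily simplified sketch (all rates and the M/M/1-style tier approximation are assumptions) shows the shape of the problem: choose per-tier VM counts minimising cost subject to an end-to-end response-time SLA:

```python
from itertools import product

# Toy sketch (not the paper's hybrid queueing model): each tier is
# approximated as an M/M/1 queue whose service capacity scales with its VM
# count m, giving mean response time 1/(m*mu - lam) when stable. We
# brute-force the cheapest VM allocation whose end-to-end response time
# meets the SLA. All rates below are assumed for illustration.
lam = 80.0                        # request arrival rate (req/s)
mu = [30.0, 50.0, 40.0]           # per-VM service rate: web/app/db tiers
sla = 0.20                        # end-to-end response-time bound (s)

def response_time(alloc):
    total = 0.0
    for m, mu_i in zip(alloc, mu):
        if m * mu_i <= lam:       # unstable tier: SLA cannot be met
            return float("inf")
        total += 1.0 / (m * mu_i - lam)
    return total

best = None
for alloc in product(range(1, 11), repeat=3):
    if response_time(alloc) <= sla:
        if best is None or sum(alloc) < sum(best):
            best = alloc
print("cheapest allocation:", best, "response:", response_time(best))
```

    A realistic version would replace brute force with the paper's heuristic, add per-VM energy cost and client-specific SLAs, and use a queueing model calibrated to the actual tiers.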

  9. A Pilot Study of the Epistemological Beliefs of Students in Industrial-Technical Fields

    ERIC Educational Resources Information Center

    Zinn, Bernd

    2012-01-01

    An investigation of the epistemological beliefs of apprentices in the commercial engineering sector is of interest for vocational training, both from the point of view of optimising vocational didactic processes and in terms of communicating suitable knowledge-based beliefs about principles and performance in the commercial engineering…

  10. Critical factors for optimising skill-grade-mix based on principles of Lean Management - a qualitative substudy

    PubMed

    Inauen, Alice; Rettke, Horst; Fridrich, Annemarie; Spirig, Rebecca; Bauer, Georg F

    2017-01-01

    Background: Due to scarce resources in health care, staff deployment has to meet demand. To optimise skill-grade-mix, a Swiss University Hospital initiated a project based on principles of Lean Management. The project team accompanied each participating nursing department and scientifically evaluated the results of the project. Aim: The aim of this qualitative sub-study was to identify critical success factors of this project. Method: In four focus groups, participants discussed their experience of the project. Recruitment was performed from departments that retrospectively assessed the impact of the project as either positive or critical. In addition, the degree of direct involvement in the project served as a distinguishing criterion. Results: While the degree of direct involvement in the project was not decisive, conflicting opinions and experiences appeared in the groups with more positive or more critical project evaluations. Transparency, context and attitude proved critical for the project's success. Conclusions: Project managers should ensure transparency of the project's progress and matching of the project structure with local conditions in order to support participants in their critical or positive attitude towards the project.

  11. Multidetector CT radiation dose optimisation in adults: short- and long-term effects of a clinical audit.

    PubMed

    Tack, Denis; Jahnen, Andreas; Kohler, Sarah; Harpes, Nico; De Maertelaer, Viviane; Back, Carlo; Gevenois, Pierre Alain

    2014-01-01

    To report short- and long-term effects of an audit process intended to optimise the radiation dose from multidetector row computed tomography (MDCT). A survey of radiation dose from all eight MDCT departments in the state of Luxembourg performed in 2007 served as baseline, and involved the most frequently imaged regions (head, sinus, cervical spine, thorax, abdomen, and lumbar spine). CT dose index volume (CTDIvol), dose-length product per acquisition (DLP/acq), and DLP per examination (DLP/exa) were recorded, and their mean, median, 25th and 75th percentiles compared. In 2008, an audit conducted in each department helped to optimise doses. In 2009 and 2010, two further surveys evaluated the audit's impact on the dose delivered. Between 2007 and 2009, DLP/exa significantly decreased by 32-69 % for all regions (P < 0.001) except the lumbar spine (5 %, P = 0.455). Between 2009 and 2010, DLP/exa significantly decreased by 13-18 % for sinus, cervical and lumbar spine (P ranging from 0.016 to less than 0.001). Between 2007 and 2010, DLP/exa significantly decreased for all regions (18-75 %, P < 0.001). Collective dose decreased by 30 % and the 75th percentile (diagnostic reference level, DRL) by 20-78 %. The audit process resulted in long-lasting dose reduction, with DRLs reduced by 20-78 %, mean DLP/examination by 18-75 %, and collective dose by 30 %. • External support through clinical audit may optimise default parameters of routine CT. • Reduction of the 75th percentiles used as diagnostic reference levels is 18-75 %. • The effect of this audit is sustainable over time. • Dose savings through optimisation can be added to those achievable through CT.

  12. Islam and the four principles of medical ethics.

    PubMed

    Mustafa, Yassar

    2014-07-01

    The principles underpinning Islam's ethical framework applied to routine clinical scenarios remain insufficiently understood by many clinicians, thereby unfortunately permitting the delivery of culturally insensitive healthcare. This paper summarises the foundations of the Islamic ethical theory, elucidating the principles and methodology employed by the Muslim jurist in deriving rulings in the field of medical ethics. The four-principles approach, as espoused by Beauchamp and Childress, is also interpreted through the prism of Islamic ethical theory. Each of the four principles (beneficence, nonmaleficence, justice and autonomy) is investigated in turn, looking in particular at the extent to which each is rooted in the Islamic paradigm. This will provide an important insight into Islamic medical ethics, enabling the clinician to have a better informed discussion with the Muslim patient. It will also allow for a higher degree of concordance in consultations and consequently optimise culturally sensitive healthcare delivery.

  13. ICRP PUBLICATION 122: radiological protection in geological disposal of long-lived solid radioactive waste.

    PubMed

    Weiss, W; Larsson, C-M; McKenney, C; Minon, J-P; Mobbs, S; Schneider, T; Umeki, H; Hilden, W; Pescatore, C; Vesterlind, M

    2013-06-01

    This report updates and consolidates previous recommendations of the International Commission on Radiological Protection (ICRP) related to solid waste disposal (ICRP, 1985, 1997b, 1998). The recommendations given apply specifically to geological disposal of long-lived solid radioactive waste. The report explains how the ICRP system of radiological protection described in Publication 103 (ICRP, 2007) can be applied in the context of the geological disposal of long-lived solid radioactive waste. Although the report is written as a standalone document, previous ICRP recommendations not dealt with in depth in the report are still valid. The 2007 ICRP system of radiological protection moves from the previous process-based protection approach, which relied on the distinction between practices and interventions, to an approach based on the distinction between three types of exposure situation: planned, emergency and existing. The Recommendations maintain the Commission's three fundamental principles of radiological protection, namely justification, optimisation of protection and the application of dose limits. They also maintain the current individual dose limits for effective dose and equivalent dose from all regulated sources in planned exposure situations. They re-enforce the principle of optimisation of radiological protection, which applies in a similar way to all exposure situations, subject to restrictions on individual doses: constraints for planned exposure situations, and reference levels for emergency and existing exposure situations. The Recommendations also include an approach for developing a framework to demonstrate radiological protection of the environment. This report describes the different stages in the lifetime of a geological disposal facility, and addresses the application of relevant radiological protection principles for each stage depending on the various exposure situations that can be encountered.
In particular, the crucial factor that influences the application of the protection system over the different phases in the lifetime of a disposal facility is the level of oversight or 'watchful care' that is present. The level of oversight affects the capability to control the source, i.e. the waste and the repository, and to avoid or reduce potential exposures. Three main time frames are considered: the time of direct oversight, when the disposal facility is being implemented and is under active supervision; the time of indirect oversight, when the disposal facility is sealed and oversight is exercised by regulators, special administrative bodies or society at large to provide additional assurance on behalf of society; and the time of no oversight, when oversight is no longer exercised because memory of the disposal facility has been lost. Copyright © 2013. Published by Elsevier Ltd.

  14. Radiological protection from radioactive waste management in existing exposure situations resulting from a nuclear accident.

    PubMed

    Sugiyama, Daisuke; Hattori, Takatoshi

    2013-01-01

    In environmental remediation after nuclear accidents, radioactive wastes have to be managed appropriately in existing exposure situations with contamination resulting from the emission of radionuclides by such accidents. In this paper, a framework of radiation protection for radioactive waste management in existing exposure situations, referring to related ICRP recommendations, is proposed for application to practical and reasonable waste management in contaminated areas. In the proposed concept, intermediate reference levels for waste management are adopted gradually according to the progress of the reduction in the existing ambient dose in the environment, on the basis of the principles of justification and optimisation and taking into account the practicability of the management of radioactive waste and environmental remediation. It is essential to include relevant stakeholders living in existing exposure situations in the selection of reference levels for the existing ambient dose and waste management.

  15. Radiological protection from radioactive waste management in existing exposure situations resulting from a nuclear accident

    PubMed Central

    Sugiyama, Daisuke; Hattori, Takatoshi

    2013-01-01

    In environmental remediation after nuclear accidents, radioactive wastes have to be managed appropriately in existing exposure situations with contamination resulting from the emission of radionuclides by such accidents. In this paper, a framework of radiation protection for radioactive waste management in existing exposure situations, referring to related ICRP recommendations, is proposed for application to practical and reasonable waste management in contaminated areas. In the proposed concept, intermediate reference levels for waste management are adopted gradually according to the progress of the reduction in the existing ambient dose in the environment, on the basis of the principles of justification and optimisation and taking into account the practicability of the management of radioactive waste and environmental remediation. It is essential to include relevant stakeholders living in existing exposure situations in the selection of reference levels for the existing ambient dose and waste management. PMID:22719047

  16. Global reaction mechanism for the auto-ignition of full boiling range gasoline and kerosene fuels

    NASA Astrophysics Data System (ADS)

    Vandersickel, A.; Wright, Y. M.; Boulouchos, K.

    2013-12-01

    Compact reaction schemes capable of predicting auto-ignition are a prerequisite for the development of strategies to control and optimise homogeneous charge compression ignition (HCCI) engines. In particular, for full boiling range fuels exhibiting two-stage ignition, a tremendous demand exists in the engine development community. The present paper therefore meticulously assesses a previous 7-step reaction scheme developed to predict auto-ignition for four hydrocarbon blends and proposes an important extension of the model constant optimisation procedure, allowing the model to capture not only ignition delays, but also the evolutions of representative intermediates and heat release rates for a variety of full boiling range fuels. Additionally, an extensive validation of the latter evolutions by means of various detailed n-heptane reaction mechanisms from the literature is presented, both for perfectly homogeneous and for non-premixed/stratified HCCI conditions. Finally, the model's potential to simulate the auto-ignition of various full boiling range fuels is demonstrated by means of experimental shock tube data for six strongly differing fuels, containing e.g. up to 46.7% cyclo-alkanes, 20% naphthalenes or complex branched aromatics such as methyl- or ethyl-naphthalene. The good predictive capability observed for each of the validation cases, as well as the successful parameterisation for each of the six fuels, indicates that the model could, in principle, be applied to any hydrocarbon fuel, provided that suitable adjustments to the model parameters are carried out. Combined with the optimisation strategy presented, the model therefore constitutes a major step towards the inclusion of real fuel kinetics into full-scale HCCI engine simulations.
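
    The 7-step scheme itself is not reproduced here, but the qualitative behaviour any auto-ignition model must capture can be sketched with a one-step global Arrhenius model (constants are illustrative assumptions):

```python
import numpy as np

# Minimal one-step global ignition sketch (illustrative constants, far simpler
# than the 7-step scheme in the paper): thermal runaway of a single Arrhenius
# reaction, dT/dt = A * exp(-Ta / T). The ignition delay is taken as the time
# for a 400 K temperature rise, and should shorten at higher initial
# temperature -- the basic trend any auto-ignition model must reproduce.
A, Ta = 1.0e8, 15000.0            # pre-exponential (K/s), activation temp (K)

def ignition_delay(T0, dt=1e-4, t_max=20.0):
    T, t = T0, 0.0
    while T < T0 + 400.0 and t < t_max:
        T += dt * A * np.exp(-Ta / T)
        t += dt
    return t

tau_lo, tau_hi = ignition_delay(1000.0), ignition_delay(1100.0)
print(f"delay at 1000 K: {tau_lo:.3f} s, at 1100 K: {tau_hi:.3f} s")
```

    A single-step model of this kind cannot show two-stage ignition; capturing the cool-flame stage and intermediate species is exactly what motivates the multi-step scheme and the extended constant-optimisation procedure of the paper.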

  17. Beyond Synthesis: Augmenting Systematic Review Procedures with Practical Principles to Optimise Impact and Uptake in Educational Policy and Practice

    ERIC Educational Resources Information Center

    Green, Chris; Taylor, Celia; Buckley, Sharon; Hean, Sarah

    2016-01-01

    Whilst systematic reviews, meta-analyses and other forms of synthesis are considered amongst the most valuable forms of research evidence, their limited impact on educational policy and practice has been criticised. In this article, we analyse why systematic reviews do not benefit users of evidence more consistently and suggest how review teams…

  18. Four Scenarios for Determining the Size and Reusability of Learning Objects

    ERIC Educational Resources Information Center

    Schoonenboom, Judith

    2012-01-01

    The best method for determining the size of learning objects (LOs) so as to optimise their reusability has been a topic of debate for years now. Although there appears to be agreement on basic assumptions, developed guidelines and principles are often in conflict. This study shows that this confusion stems from the fact that in the literature,…

  19. Spaced Learning: The Design, Feasibility and Optimisation of SMART Spaces. Evaluation Report and Executive Summary

    ERIC Educational Resources Information Center

    O'Hare, Liam; Stark, Patrick; McGuinness, Carol; Biggart, Andy; Thurston, Allen

    2017-01-01

    This report describes the development and pilot evaluation of SMART Spaces. This programme aims to boost GCSE science outcomes by applying the principle that information is more easily learnt when it is repeated multiple times, with time passing between the repetitions. This approach is known as "spaced learning" and is contrasted with a…

  20. Analysis of optimisation method for a two-stroke piston ring using the Finite Element Method and the Simulated Annealing Method

    NASA Astrophysics Data System (ADS)

    Kaliszewski, M.; Mazuro, P.

    2016-09-01

    The Simulated Annealing Method is tested for the optimisation of sealing piston ring geometry. The aim of the optimisation is to develop a ring geometry that exerts the demanded pressure on the cylinder while being bent to fit it. A method for FEM analysis of an arbitrary piston ring geometry is implemented in ANSYS software. The demanded pressure function (based on formulae presented by A. Iskra) as well as the objective function are introduced. A geometry definition constructed from polynomials in a radial coordinate system is presented and discussed. A possible application of the Simulated Annealing Method to a piston ring optimisation task is proposed and visualised. Difficulties leading to a possible lack of convergence of the optimisation are presented. An example of an unsuccessful optimisation performed in APDL is discussed. A possible line of further improvement of the optimisation is proposed.
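
    The FEM objective is not reproduced here; the Simulated Annealing Method itself can be sketched generically, with a simple multimodal function standing in for the ring-pressure misfit:

```python
import math
import random

# Generic simulated annealing sketch (the paper's objective is an FEM-based
# ring-pressure misfit; here a simple multimodal 1-D function stands in for
# it). Geometric cooling with Gaussian proposals and Metropolis acceptance.
random.seed(42)

def objective(x):                      # stand-in, global minimum near x = 2
    return (x - 2.0) ** 2 + 0.5 * math.sin(5.0 * x)

x = 10.0                               # deliberately poor starting point
fx = objective(x)
best_x, best_f = x, fx
temp = 5.0
for _ in range(5000):
    cand = x + random.gauss(0.0, 1.0)  # proposal step
    fc = objective(cand)
    # Metropolis rule: always accept improvements, sometimes accept worse
    if fc < fx or random.random() < math.exp((fx - fc) / temp):
        x, fx = cand, fc
        if fc < best_f:
            best_x, best_f = cand, fc
    temp *= 0.999                      # geometric cooling schedule
print(f"best x = {best_x:.3f}, objective = {best_f:.3f}")
```

    In the paper's setting each `objective` evaluation is an FEM run on a candidate polynomial ring geometry, which is why convergence difficulties and evaluation cost matter so much there.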

  1. Optimisation study of a vehicle bumper subsystem with fuzzy parameters

    NASA Astrophysics Data System (ADS)

    Farkas, L.; Moens, D.; Donders, S.; Vandepitte, D.

    2012-10-01

    This paper deals with the design and optimisation for crashworthiness of a vehicle bumper subsystem, which is a key scenario for vehicle component design. Automotive manufacturers and suppliers have to find optimal design solutions for such subsystems that comply with the conflicting requirements of the regulatory bodies regarding functional performance (safety and repairability) and environmental impact (mass). For the bumper design challenge, an integrated methodology for multi-attribute design engineering of mechanical structures is set up. The integrated process captures the various tasks that are usually performed manually, thereby facilitating automated design iterations for optimisation. Subsequently, an optimisation process is applied that takes the effect of parametric uncertainties into account, such that the system-level failure possibility is acceptable. This optimisation process is referred to as possibility-based design optimisation and integrates the fuzzy FE analysis applied for the uncertainty treatment in crash simulations. This process is the counterpart of the reliability-based design optimisation used in a probabilistic context with statistically defined parameters (variabilities).

  2. Kernel learning at the first level of inference.

    PubMed

    Cawley, Gavin C; Talbot, Nicola L C

    2014-05-01

    Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense. Copyright © 2014 Elsevier Ltd. All rights reserved.
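
    A simplified sketch of the first-level idea (one RBF width instead of full ARD, and an assumed synthetic dataset) solves the LS-SVM coefficients in closed form inside a training criterion that also penalises the kernel parameter:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Sketch of first-level kernel learning for an LS-SVM classifier (simplified
# relative to the paper: a single RBF width is tuned rather than full ARD,
# and the bias term is omitted). For a fixed kernel, the LS-SVM coefficients
# solve a linear system; the RBF log-width is then optimised against a
# regularised *training* criterion, so no separate model-selection level is
# needed. The dataset is synthetic.
rng = np.random.default_rng(0)
n = 60
X = np.vstack([rng.normal(-1.0, 0.7, (n // 2, 2)),
               rng.normal(+1.0, 0.7, (n // 2, 2))])
y = np.concatenate([-np.ones(n // 2), np.ones(n // 2)])

def rbf(X, gamma):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

C = 10.0          # regularisation on the fit
nu = 0.1          # extra regulariser acting on the kernel parameter

def training_criterion(log_gamma):
    gamma = np.exp(log_gamma)
    K = rbf(X, gamma)
    alpha = np.linalg.solve(K + np.eye(n) / C, y)   # LS-SVM coefficients
    fit = np.sum((y - K @ alpha) ** 2)
    return fit + alpha @ K @ alpha / C + nu * log_gamma ** 2

opt = minimize_scalar(training_criterion, bounds=(-6.0, 4.0), method="bounded")
gamma = np.exp(opt.x)
alpha = np.linalg.solve(rbf(X, gamma) + np.eye(n) / C, y)
pred = np.sign(rbf(X, gamma) @ alpha)
print("training accuracy:", (pred == y).mean())
```

    Only C and nu remain to be set at a higher level, which is the point the abstract makes: with many ARD widths the same structure keeps the number of model-selection parameters at two.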

  3. Cultural-based particle swarm for dynamic optimisation problems

    NASA Astrophysics Data System (ADS)

    Daneshyari, Moayed; Yen, Gary G.

    2012-07-01

    Many practical optimisation problems involve uncertainties, and a significant number belong to the dynamic optimisation problem (DOP) category, in which the fitness function changes through time. In this study, we propose cultural-based particle swarm optimisation (PSO) to solve DOPs. A cultural framework is adopted, incorporating the required information from the PSO into five sections of the belief space, namely situational, temporal, domain, normative and spatial knowledge. The stored information is used to detect changes in the environment and assists in responding to the change through diversity-based repulsion among particles and migration among swarms in the population space; it also helps in selecting the leading particles at three different levels: personal, swarm and global. Comparison of the proposed heuristic over several difficult dynamic benchmark problems demonstrates better or equal performance with respect to most of the other selected state-of-the-art dynamic PSO heuristics.

  4. Optimisation of decentralisation for effective Disaster Risk Reduction (DRR) through the case study of Indonesia

    NASA Astrophysics Data System (ADS)

    Grady, A.; Makarigakis, A.; Gersonius, B.

    2015-09-01

    This paper investigates how to optimise decentralisation for effective disaster risk reduction (DRR) in developing states. There is currently limited literature on empirical analysis of decentralisation for DRR. This paper evaluates decentralised governance for DRR in the case study of Indonesia and provides recommendations for its optimisation. Wider implications are drawn to optimise decentralisation for DRR in developing states more generally. A framework to evaluate the institutional and policy setting was developed which necessitated the use of a gap analysis, desk study and field investigation. Key challenges to decentralised DRR include capacity gaps at lower levels, low compliance with legislation, disconnected policies, issues in communication and coordination and inadequate resourcing. DRR authorities should lead coordination and advocacy on DRR. Sustainable multistakeholder platforms and civil society organisations should fill the capacity gap at lower levels. Dedicated and regulated resources for DRR should be compulsory.

  5. The development of response surface pathway design to reduce animal numbers in toxicity studies

    PubMed Central

    2014-01-01

    Background This study describes the development of the Response Surface Pathway (RSP) design, assesses its performance and effectiveness in estimating LD50, and compares RSP with Up-and-Down Procedures (UDPs) and the Random Walk (RW) design. Methods A basic 4-level RSP design was used on 36 male ICR mice given intraperitoneal doses of Yessotoxin. Simulations were performed to optimise the design. A k-adjustment factor was introduced to ensure coverage of the dose window and to calculate the dose steps. Instead of using equal numbers of mice on all levels, the number of mice was increased at each design level. Additionally, the binomial outcome variable was changed to multinomial. The performance of the RSP designs, and their comparison with UDPs and RW, was assessed by simulation. The optimised 4-level RSP design was used on 24 female NMRI mice given Azaspiracid-1 intraperitoneally. Results The in vivo experiment with the basic 4-level RSP design estimated the LD50 of Yessotoxin to be 463 μg/kgBW (95% CI: 383–535). By inclusion of the k-adjustment factor with equal or increasing numbers of mice on increasing dose levels, the estimate changed to 481 μg/kgBW (95% CI: 362–566) and 447 μg/kgBW (95% CI: 378–504), respectively. The optimised 4-level RSP estimated the LD50 to be 473 μg/kgBW (95% CI: 442–517). A similar increase in power was demonstrated using the optimised RSP design on real Azaspiracid-1 data. The simulations showed that the inclusion of the k-adjustment factor, the reduction in sample size achieved by increasing the number of mice on higher design levels, and the incorporation of a multinomial outcome gave estimates of the LD50 that were as good as those with the basic RSP design. Furthermore, an optimised RSP design performed on just three levels reduced the number of animals from 36 to 15 without loss of information, when compared with the 4-level designs. Simulated comparison of the RSP design with the UDPs and RW design demonstrated the superiority of RSP.
Conclusion Optimised RSP design reduces the number of animals needed. The design converges rapidly on the area of interest and is at least as efficient as both the UDPs and RW design. PMID:24661560
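The abstract does not spell out the formula behind the k-adjustment factor; one plausible reading, offered purely as an illustrative assumption, is a geometric step factor chosen so that a fixed number of dose levels exactly spans the predefined dose window, with more animals assigned to higher levels:

```python
def rsp_dose_levels(d_min, d_max, n_levels):
    """Geometric dose steps spanning [d_min, d_max] with n_levels doses.

    The step factor k acts as an adjustment factor ensuring coverage of
    the dose window (an illustrative reading, not the paper's exact
    k-adjustment formula).
    """
    k = (d_max / d_min) ** (1.0 / (n_levels - 1))
    return k, [d_min * k ** i for i in range(n_levels)]

# Increasing group sizes at higher design levels (illustrative numbers):
animals_per_level = [2, 3, 4, 6]
k, doses = rsp_dose_levels(100.0, 800.0, 4)
```

With a dose window of 100–800 μg/kgBW and four levels, this yields k = 2 and doses of 100, 200, 400 and 800 μg/kgBW.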

  6. DryLab® optimised two-dimensional high performance liquid chromatography for differentiation of ephedrine and pseudoephedrine based methamphetamine samples.

    PubMed

    Andrighetto, Luke M; Stevenson, Paul G; Pearson, James R; Henderson, Luke C; Conlan, Xavier A

    2014-11-01

    In-silico optimised two-dimensional high performance liquid chromatographic (2D-HPLC) separations of a model methamphetamine seizure sample are described, where an excellent match between simulated and real separations was observed. Targeted separation of model compounds was completed with significantly reduced method development time. This separation was completed in the heart-cutting mode of 2D-HPLC, where C18 columns were used in both dimensions, taking advantage of the selectivity difference between methanol and acetonitrile as the mobile phases. This method development protocol is most significant when optimising the separation of chemically similar compounds, as it eliminates potentially hours of trial-and-error injections to identify the optimised experimental conditions. After only four screening injections, the gradient profile for both 2D-HPLC dimensions could be optimised via simulations, ensuring the baseline resolution of the diastereomers (ephedrine and pseudoephedrine) in 9.7 min. Depending on which diastereomer is present, the potential synthetic pathway can be categorised.

  7. Principles of Experimental Design for Big Data Analysis.

    PubMed

    Drovandi, Christopher C; Holmes, Christopher; McGree, James M; Mengersen, Kerrie; Richardson, Sylvia; Ryan, Elizabeth G

    2017-08-01

    Big Datasets are endemic, but are often notoriously difficult to analyse because of their size, heterogeneity and quality. The purpose of this paper is to open a discourse on the potential for modern decision theoretic optimal experimental design methods, which by their very nature have traditionally been applied prospectively, to improve the analysis of Big Data through retrospective designed sampling in order to answer particular questions of interest. By appealing to a range of examples, it is suggested that this perspective on Big Data modelling and analysis has the potential for wide generality and advantageous inferential and computational properties. We highlight current hurdles and open research questions surrounding efficient computational optimisation in using retrospective designs, and in part this paper is a call to the optimisation and experimental design communities to work together in the field of Big Data analysis.
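As one concrete (and deliberately simple) instance of retrospective designed sampling, statistical leverage scores can rank the rows of a large design matrix so that the most design-informative points are retained for analysis. This is a generic illustration of the idea, not a method proposed by the paper:

```python
import numpy as np

def leverage_scores(X):
    """h_i = x_i^T (X^T X)^{-1} x_i, the diagonal of the hat matrix."""
    G_inv = np.linalg.inv(X.T @ X)
    return np.einsum('ij,jk,ik->i', X, G_inv, X)

def retrospective_subsample(X, n_keep):
    """Keep the n_keep rows with the highest leverage, i.e. those most
    informative for estimating the regression coefficients."""
    idx = np.argsort(leverage_scores(X))[::-1][:n_keep]
    return np.sort(idx)

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(1000), rng.normal(size=(1000, 2))])
subset = retrospective_subsample(X, 50)
```

Decision-theoretic designs would replace the leverage criterion with a utility tied to the question of interest, but the workflow — score the existing records, then sample by design rather than at random — is the same.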

  8. Principles of Experimental Design for Big Data Analysis

    PubMed Central

    Drovandi, Christopher C; Holmes, Christopher; McGree, James M; Mengersen, Kerrie; Richardson, Sylvia; Ryan, Elizabeth G

    2016-01-01

    Big Datasets are endemic, but are often notoriously difficult to analyse because of their size, heterogeneity and quality. The purpose of this paper is to open a discourse on the potential for modern decision theoretic optimal experimental design methods, which by their very nature have traditionally been applied prospectively, to improve the analysis of Big Data through retrospective designed sampling in order to answer particular questions of interest. By appealing to a range of examples, it is suggested that this perspective on Big Data modelling and analysis has the potential for wide generality and advantageous inferential and computational properties. We highlight current hurdles and open research questions surrounding efficient computational optimisation in using retrospective designs, and in part this paper is a call to the optimisation and experimental design communities to work together in the field of Big Data analysis. PMID:28883686

  9. Design and optimisation of wheel-rail profiles for adhesion improvement

    NASA Astrophysics Data System (ADS)

    Liu, B.; Mei, T. X.; Bruni, S.

    2016-03-01

    This paper describes a study on the optimisation of the wheel profile in the wheel-rail system to increase the overall level of adhesion available at the contact interface, in particular to investigate how the wheel and rail profile combination may be designed to ensure the improved delivery of tractive/braking forces even in poor contact conditions. The research focuses on the geometric combination of the wheel and rail profiles, not only to establish how the contact interface may be optimised to increase the adhesion level, but also to investigate how changes in the contact mechanics at the wheel-rail interface may in turn alter the vehicle's dynamic behaviour.

  10. Design optimisation of powers-of-two FIR filter using self-organising random immigrants GA

    NASA Astrophysics Data System (ADS)

    Chandra, Abhijit; Chattopadhyay, Sudipta

    2015-01-01

    In this communication, we propose a novel design strategy of multiplier-less low-pass finite impulse response (FIR) filter with the aid of a recent evolutionary optimisation technique, known as the self-organising random immigrants genetic algorithm. Individual impulse response coefficients of the proposed filter have been encoded as sum of signed powers-of-two. During the formulation of the cost function for the optimisation algorithm, both the frequency response characteristic and the hardware cost of the discrete coefficient FIR filter have been considered. The role of crossover probability of the optimisation technique has been evaluated on the overall performance of the proposed strategy. For this purpose, the convergence characteristic of the optimisation technique has been included in the simulation results. In our analysis, two design examples of different specifications have been taken into account. In order to substantiate the efficiency of our proposed structure, a number of state-of-the-art design strategies of multiplier-less FIR filter have also been included in this article for the purpose of comparison. Critical analysis of the result unambiguously establishes the usefulness of our proposed approach for the hardware efficient design of digital filter.
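The hardware cost in such designs counts the signed power-of-two terms per coefficient, since each non-zero digit costs one adder/subtractor in a multiplier-less realisation. A standard canonical signed digit (CSD) recoding — a common choice for this encoding, though the paper's exact coding is not specified in the abstract — can be sketched as:

```python
def csd_digits(n):
    """Canonical signed-digit form of a non-negative integer: digits in
    {-1, 0, +1}, least-significant first, with no two adjacent non-zeros."""
    digits = []
    while n != 0:
        if n % 2:
            d = 2 - (n % 4)   # +1 if n = 1 (mod 4), -1 if n = 3 (mod 4)
        else:
            d = 0
        digits.append(d)
        n = (n - d) // 2
    return digits

def adder_cost(coeff, frac_bits):
    """Hardware cost proxy: non-zero CSD digits of the quantised coefficient."""
    q = round(abs(coeff) * (1 << frac_bits))
    return sum(1 for d in csd_digits(q) if d)

# 0.9375 * 2^4 = 15 = 2^4 - 1: CSD needs 2 terms where plain binary needs 4.
value = sum(d * 2 ** i for i, d in enumerate(csd_digits(15)))
```

A genetic algorithm such as the one proposed here would search over these digit patterns, trading `adder_cost` against the frequency-response error.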

  11. Group search optimiser-based optimal bidding strategies with no Karush-Kuhn-Tucker optimality conditions

    NASA Astrophysics Data System (ADS)

    Yadav, Naresh Kumar; Kumar, Mukesh; Gupta, S. K.

    2017-03-01

    The general strategic bidding procedure has been formulated in the literature as a bi-level search problem, in which the offer curve tends to minimise the market clearing function and to maximise the profit. Computationally, this is complex, and hence researchers have adopted Karush-Kuhn-Tucker (KKT) optimality conditions to transform the model into a single-level maximisation problem. However, the profit maximisation problem with KKT optimality conditions poses a great challenge to classical optimisation algorithms, and the problem becomes more complex once transmission constraints are included. This paper simplifies the profit maximisation problem as a minimisation function in which the transmission constraints, the operating limits and the ISO market clearing functions are considered without KKT optimality conditions. The derived function is solved using the group search optimiser (GSO), a robust population-based optimisation algorithm. Experimental investigation is carried out on the IEEE 14 as well as the IEEE 30 bus systems, and the performance is compared against differential evolution-based, genetic algorithm-based and particle swarm optimisation-based strategic bidding methods. The simulation results demonstrate that the profit obtained through GSO-based bidding strategies is higher than that of the other three methods.

  12. Protocol for a multicentred randomised controlled trial investigating the use of personalised golimumab dosing tailored to inflammatory load in ulcerative colitis: the GOAL-ARC study (GLM dose Optimisation to Adequate Levels to Achieve Response in Colitis) led by the INITIAtive group (NCT 0268772)

    PubMed Central

    Sheridan, Juliette; Coe, Carol Ann; Doran, Peter; Egan, Laurence; Cullen, Garret; Kevans, David; Leyden, Jan; Galligan, Marie; O’Toole, Aoibhlinn; McCarthy, Jane; Doherty, Glen

    2018-01-01

    Introduction Ulcerative colitis (UC) is a chronic inflammatory bowel disease (IBD), often leading to an impaired quality of life in affected patients. Current treatment modalities include antitumour necrosis factor (anti-TNF) monoclonal antibodies (mABs) including infliximab, adalimumab and golimumab (GLM). Several recent retrospective and prospective studies have demonstrated that fixed dosing schedules of anti-TNF agents often fail to consistently achieve adequate circulating therapeutic drug levels (DL), with a consequent risk of immunogenicity, treatment failure and potential risk of hospitalisation and colectomy in patients with UC. The design of GLM dose Optimisation to Adequate Levels to Achieve Response in Colitis aims to address the impact of dose escalation of GLM immediately following induction and during the subsequent maintenance phase in response to suboptimal DL or a persisting inflammatory burden as represented by raised faecal calprotectin (FCP). Aim The primary aim of the study is to ascertain whether monitoring of FCP and DL of GLM to guide dose optimisation (during maintenance) improves rates of continuous clinical response and reduces disease activity in UC. Methods and analysis A randomised, multicentred, two-arm trial studying the effect of dose optimisation of GLM based on FCP and DL versus treatment as per the SmPC. Eligible patients will be randomised in a 1:1 ratio to 1 of 2 treatment groups and shall be treated over a period of 46 weeks. Ethics and dissemination The study protocol was approved by the Research Ethics Committee of St. Vincent's University Hospital. The results will be published in a peer-reviewed journal and shared with the worldwide medical community. Trial registration numbers EudraCT number: 2015-004724-62; Clinicaltrials.gov Identifier: NCT0268772; Pre-results. PMID:29379609

  13. Topology optimisation of micro fluidic mixers considering fluid-structure interactions with a coupled Lattice Boltzmann algorithm

    NASA Astrophysics Data System (ADS)

    Munk, David J.; Kipouros, Timoleon; Vio, Gareth A.; Steven, Grant P.; Parks, Geoffrey T.

    2017-11-01

    Recently, the study of micro fluidic devices has gained much interest in various fields from biology to engineering. In the constant development cycle, the need to optimise the topology of the interior of these devices, where there are two or more optimality criteria, is always present. In this work, twin physical situations are considered, whereby optimal fluid mixing in the form of vorticity maximisation is accompanied by the requirement that the casing in which the mixing takes place has the best structural performance in terms of the greatest specific stiffness. In the steady state of mixing, this also means that the stresses in the casing are as uniform as possible, thus giving the desired operating life with minimum weight. The ultimate aim of this research is to couple two key disciplines, fluids and structures, into a topology optimisation framework that shows fast convergence for multidisciplinary optimisation problems. This is achieved by developing a bi-directional evolutionary structural optimisation algorithm that is directly coupled to the Lattice Boltzmann method, used for simulating the flow in the micro fluidic device, for the objectives of minimum compliance and maximum vorticity. The need to explore larger design spaces and to produce innovative designs makes meta-heuristic algorithms, such as genetic algorithms, particle swarms and Tabu searches, less efficient for this task. The multidisciplinary topology optimisation framework presented in this article is shown to increase the stiffness of the structure from the datum case and to produce physically acceptable designs. Furthermore, the topology optimisation method outperforms a Tabu Search algorithm in designing the baffle to maximise the mixing of the two fluids.

  14. Management of radon: a review of ICRP recommendations.

    PubMed

    Vaillant, Ludovic; Bataille, Céline

    2012-09-01

    This article proposes a review of past and current ICRP publications dealing with the management of radon exposures. Its main objective is to identify and discuss the driving factors that have been used by the Commission during the last 50 years so as to better appreciate current issues regarding radon exposure management. The analysis shows that major evolutions took place in very recent years. As far as the management of radon exposures is concerned, ICRP recommended, until ICRP Publication 103 (ICRP 2007 ICRP Publication 103; Ann. ICRP 37), to use action levels and to consider only exposures above these levels. The Commission has reviewed its approach and now proposes to manage any radon exposure through the application of the optimisation principle and associated reference levels. As far as the assessment of the radon risk is concerned, it appears that the successive changes made by ICRP did not have a strong impact on the values of radon gas concentration recommended as action levels either in dwellings or in workplaces. The major change occurred in late 2009 with the publication of the ICRP Statement on Radon, which acknowledged that the radon risk has been underestimated by a factor of 2, thus inducing a major revision of radon reference levels.

  15. An integrated and dynamic optimisation model for the multi-level emergency logistics network in anti-bioterrorism system

    NASA Astrophysics Data System (ADS)

    Liu, Ming; Zhao, Lindu

    2012-08-01

    Demand for emergency resources is usually uncertain and varies quickly in an anti-bioterrorism system. Moreover, emergency resources allocated to the epidemic areas in an early rescue cycle will affect demand later on. In this article, an integrated and dynamic optimisation model with time-varying demand, based on the epidemic diffusion rule, is constructed. A heuristic algorithm coupled with the MATLAB mathematical programming solver is adopted to solve the optimisation model. In what follows, the application of the optimisation model, as well as a short sensitivity analysis of the key parameters in the time-varying demand forecast model, is presented. The results show that both the model and the solution algorithm are useful in practice, and that both objectives of inventory level and emergency rescue cost can be controlled effectively. Thus, the model can provide guidelines for decision makers coping with emergency rescue problems under uncertain demand, and offers an excellent reference for issues pertaining to bioterrorism.

  16. Optimisation of sensing time and transmission time in cognitive radio-based smart grid networks

    NASA Astrophysics Data System (ADS)

    Yang, Chao; Fu, Yuli; Yang, Junjie

    2016-07-01

    Cognitive radio (CR)-based smart grid (SG) networks have been widely recognised as emerging communication paradigms in power grids. However, sufficient spectrum resources and reliability are two major challenges for real-time applications in CR-based SG networks. In this article, we study the traffic data collection problem. Based on the two-stage power pricing model, the power price is associated with the effectively received traffic data in a meter data management system (MDMS). In order to minimise the system power price, a wideband hybrid access strategy is proposed and analysed, in which the spectrum is shared between the SG nodes and the CR networks. The sensing time and transmission time are jointly optimised, while both the interference to primary users and the spectrum opportunity loss of secondary users are considered. Two algorithms are proposed to solve the joint optimisation problem. Simulation results show that the proposed joint optimisation algorithms outperform algorithms with fixed parameters (sensing time and transmission time), and the power cost is reduced efficiently.
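The underlying trade-off — longer sensing lowers the false-alarm rate but leaves less time to transmit — is commonly formalised in the sensing-throughput literature by maximising secondary throughput over the sensing time. The energy-detector model and all parameter values below are illustrative assumptions, not taken from the paper:

```python
import math
from statistics import NormalDist

def q_tail(x):
    """Gaussian tail probability Q(x)."""
    return 1.0 - NormalDist().cdf(x)

def q_inv(p):
    """Inverse of the Gaussian tail probability."""
    return NormalDist().inv_cdf(1.0 - p)

def throughput(tau, T=0.1, fs=6e6, snr=0.05, p_d=0.9, c0=6.66):
    """Average secondary throughput for sensing time tau (energy detector,
    target detection probability p_d held fixed)."""
    n = tau * fs  # number of sensing samples
    p_f = q_tail(math.sqrt(2 * snr + 1) * q_inv(p_d) + math.sqrt(n) * snr)
    return (T - tau) / T * c0 * (1.0 - p_f)

# Grid search over the sensing time; in a joint optimisation the
# transmission time would be tuned as well (here T - tau plays that role).
taus = [i * 1e-4 for i in range(1, 1000)]
best_tau = max(taus, key=throughput)
```

The optimum sits strictly inside the frame: sensing for too short a time wastes capacity on false alarms, while sensing for too long wastes the frame itself.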

  17. Stochastic optimisation of water allocation on a global scale

    NASA Astrophysics Data System (ADS)

    Schmitz, Oliver; Straatsma, Menno; Karssenberg, Derek; Bierkens, Marc F. P.

    2014-05-01

    Climate change, increasing population and further economic developments are expected to increase water scarcity for many regions of the world. Optimal water management strategies are required to minimise the water gap between water supply and domestic, industrial and agricultural water demand. A crucial aspect of water allocation is the spatial scale of optimisation. Blue water supply peaks at the upstream parts of large catchments, whereas demands are often largest at the industrialised downstream parts. Two extremes exist in water allocation: (i) 'First come, first serve,' which allows the upstream water demands to be fulfilled without considerations of downstream demands, and (ii) 'All for one, one for all' that satisfies water allocation over the whole catchment. In practice, water treaties govern intermediate solutions. The objective of this study is to determine the effect of these two end members on water allocation optimisation with respect to water scarcity. We conduct this study on a global scale with the year 2100 as temporal horizon. Water supply is calculated using the hydrological model PCR-GLOBWB, operating at a 5 arcminutes resolution and a daily time step. PCR-GLOBWB is forced with temperature and precipitation fields from the Hadgem2-ES global circulation model that participated in the latest coupled model intercomparison project (CMIP5). Water demands are calculated for representative concentration pathway 6.0 (RCP 6.0) and shared socio-economic pathway scenario 2 (SSP2). To enable the fast computation of the optimisation, we developed a hydrologically correct network of 1800 basin segments with an average size of 100 000 square kilometres. The maximum number of nodes in a network was 140 for the Amazon Basin. Water demands and supplies are aggregated to cubic kilometres per month per segment. A new open source implementation of the water allocation is developed for the stochastic optimisation of the water allocation. 
We apply a genetic algorithm for each segment to estimate the set of parameters that distributes the water supply for each node. We use the Python programming language and a flexible software architecture that allows us to straightforwardly (1) exchange the process description for the nodes, such that different water allocation schemes can be tested; (2) exchange the objective function; (3) apply the optimisation either to the whole catchment or to different sub-levels; and (4) use multi-core CPUs concurrently, thereby reducing computation time. We demonstrate the application of the scientific workflow to the model outputs of PCR-GLOBWB and present first results on how water scarcity depends on the choice between the two extremes in water allocation.
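The two end members can be contrasted on a toy chain of basin segments ordered upstream to downstream — a drastic simplification of the paper's 1800-segment network, with routing and timing ignored:

```python
def fcfs_allocation(supply, demand):
    """'First come, first served': each segment, from upstream to downstream,
    takes what it needs from the running surplus before passing water on."""
    alloc, avail = [], 0.0
    for s, d in zip(supply, demand):
        avail += s
        take = min(d, avail)
        alloc.append(take)
        avail -= take
    return alloc

def shared_allocation(supply, demand):
    """'All for one, one for all': basin-wide supply is spread pro rata over
    all demands (ignoring that water only flows downstream)."""
    ratio = min(1.0, sum(supply) / sum(demand))
    return [d * ratio for d in demand]

supply = [10.0, 0.0, 0.0]   # all supply generated in the upstream segment
demand = [10.0, 5.0, 5.0]   # demand present along the whole chain
```

With supply concentrated upstream, the first scheme satisfies the upstream segment in full and leaves downstream demand unmet, while the second shares the shortage across all segments — exactly the gap that real water treaties negotiate.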

  18. Optimised analytical models of the dielectric properties of biological tissue.

    PubMed

    Salahuddin, Saqib; Porter, Emily; Krewer, Finn; O' Halloran, Martin

    2017-05-01

    The interaction of electromagnetic fields with the human body is quantified by the dielectric properties of biological tissues. These properties are incorporated into complex numerical simulations, using parametric models such as the Debye and Cole-Cole models, for the computational investigation of electromagnetic wave propagation within the body. The model parameters can be acquired through a variety of optimisation algorithms to achieve an accurate fit to measured data sets. A number of different optimisation techniques have been proposed, but these are often limited by the requirement for initial value estimates or by a large overall error (often up to several percentage points). In this work, a novel two-stage genetic algorithm proposed by the authors is applied to optimise the multi-pole Debye parameters for 54 types of human tissues. The performance of the two-stage genetic algorithm has been examined through a comparison with five other existing algorithms. The experimental results demonstrate that the two-stage genetic algorithm produces an accurate fit to a range of experimental data and efficiently outperforms all the other optimisation algorithms under consideration. Accurate values of the three-pole Debye models for 54 types of human tissues, over 500 MHz to 20 GHz, are also presented for reference. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
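For a fixed relaxation time the single-pole Debye model is linear in ε∞ and Δε, so a simple profiled fit (grid over τ, complex linear least squares for the rest) already recovers the parameters. This is a minimal sketch of the fitting task, not the authors' two-stage genetic algorithm, and the synthetic tissue values are invented for illustration:

```python
import numpy as np

def debye(omega, eps_inf, d_eps, tau):
    """Single-pole Debye complex permittivity."""
    return eps_inf + d_eps / (1.0 + 1j * omega * tau)

def fit_debye(omega, eps_meas, tau_grid):
    """Profile over tau; for each tau solve the complex linear least-squares
    problem for (eps_inf, d_eps) and keep the best residual."""
    best = None
    for tau in tau_grid:
        A = np.column_stack([np.ones_like(omega, dtype=complex),
                             1.0 / (1.0 + 1j * omega * tau)])
        coef, *_ = np.linalg.lstsq(A, eps_meas, rcond=None)
        resid = np.linalg.norm(A @ coef - eps_meas)
        if best is None or resid < best[0]:
            best = (resid, coef[0].real, coef[1].real, tau)
    return best[1:]  # eps_inf, d_eps, tau

# Synthetic 'measurement' over an illustrative 500 MHz - 20 GHz band
omega = 2 * np.pi * np.linspace(0.5e9, 20e9, 60)
truth = debye(omega, eps_inf=4.0, d_eps=36.0, tau=8e-12)
tau_grid = np.linspace(1e-12, 20e-12, 191)
eps_inf, d_eps, tau = fit_debye(omega, truth, tau_grid)
```

A multi-pole fit adds one `1/(1 + jωτ_k)` column per pole, which is where global optimisers such as the paper's genetic algorithm earn their keep.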

  19. Predicting emergency coronary artery bypass graft following PCI: application of a computational model to refer patients to hospitals with and without onsite surgical backup

    PubMed Central

    Syed, Zeeshan; Moscucci, Mauro; Share, David; Gurm, Hitinder S

    2015-01-01

    Background Clinical tools to stratify patients for emergency coronary artery bypass graft (ECABG) after percutaneous coronary intervention (PCI) create the opportunity to selectively assign patients undergoing procedures to hospitals with and without onsite surgical facilities for dealing with potential complications while balancing load across providers. The goal of our study was to investigate the feasibility of a computational model directly optimised for cohort-level performance to predict ECABG in PCI patients for this application. Methods Blue Cross Blue Shield of Michigan Cardiovascular Consortium registry data with 69 pre-procedural and angiographic risk variables from 68 022 PCI procedures in 2004–2007 were used to develop a support vector machine (SVM) model for ECABG. The SVM model was optimised for the area under the receiver operating characteristic curve (AUROC) at the level of the training cohort and validated on 42 310 PCI procedures performed in 2008–2009. Results There were 87 cases of ECABG (0.21%) in the validation cohort. The SVM model achieved an AUROC of 0.81 (95% CI 0.76 to 0.86). Patients in the predicted top decile were at a significantly increased risk relative to the remaining patients (OR 9.74, 95% CI 6.39 to 14.85, p<0.001) for ECABG. The SVM model optimised for the AUROC on the training cohort significantly improved discrimination, net reclassification and calibration over logistic regression and traditional SVM classification optimised for univariate performance. Conclusions Computational risk stratification directly optimising cohort-level performance holds the potential of high levels of discrimination for ECABG following PCI. This approach has value in selectively referring PCI patients to hospitals with and without onsite surgery. PMID:26688738
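Cohort-level optimisation targets the AUROC directly rather than a per-example loss; the statistic itself is the Mann-Whitney probability that a random positive outranks a random negative, as the toy function below shows (an illustration of the metric only, not the paper's SVM):

```python
def auroc(labels, scores):
    """Probability that a random positive is scored above a random negative
    (ties count one half): the rescaled Mann-Whitney U statistic."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A rare-event cohort in miniature: 2 events among 6 patients.
labels = [0, 0, 0, 0, 1, 1]
scores = [0.10, 0.20, 0.40, 0.35, 0.30, 0.90]
```

Because the statistic depends only on the ranking of the whole cohort, it is well suited to rare outcomes such as ECABG, where per-example accuracy is dominated by the majority class.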

  20. Optimised collision avoidance for an ultra-close rendezvous with a failed satellite based on the Gauss pseudospectral method

    NASA Astrophysics Data System (ADS)

    Chu, Xiaoyu; Zhang, Jingrui; Lu, Shan; Zhang, Yao; Sun, Yue

    2016-11-01

    This paper presents a trajectory planning algorithm to optimise the collision avoidance of a chasing spacecraft operating in ultra-close proximity to a failed satellite. The complex configuration and the tumbling motion of the failed satellite are considered. The two-spacecraft rendezvous dynamics are formulated in the target body frame, and the collision avoidance constraints are detailed, particularly concerning the uncertainties. An optimised solution of the approach problem is generated using the Gauss pseudospectral method, and a closed-loop control is used to track the optimised trajectory. Numerical results are provided to demonstrate the effectiveness of the proposed algorithms.

  1. The robust model predictive control based on mixed H2/H∞ approach with separated performance formulations and its ISpS analysis

    NASA Astrophysics Data System (ADS)

    Li, Dewei; Li, Jiwei; Xi, Yugeng; Gao, Furong

    2017-12-01

    In practical applications, systems are always influenced by parameter uncertainties and external disturbances. Both the H2 performance and the H∞ performance are important in real applications. For a constrained system, previous designs of mixed H2/H∞ robust model predictive control (RMPC) optimise one performance with the other performance requirement as a constraint, so the two performances cannot be optimised at the same time. In this paper, an improved design of mixed H2/H∞ RMPC for polytopic uncertain systems with external disturbances is proposed to optimise both simultaneously. In the proposed design, the original uncertain system is decomposed into two subsystems by the additive property of linear systems. Two different Lyapunov functions are used to separately formulate the two performance indices for the two subsystems. The proposed RMPC is then designed to optimise both performances by the weighting method while satisfying the H∞ performance requirement. Meanwhile, to make the design more practical, a simplified design is also developed. The recursive feasibility conditions of the proposed RMPC are discussed, and input-to-state practical stability (ISpS) of the closed loop is proven. Numerical examples reflect the enlarged feasible region and the improved performance of the proposed design.

  2. Optimisation of groundwater level monitoring networks using geostatistical modelling based on the Spartan family variogram and a genetic algorithm method

    NASA Astrophysics Data System (ADS)

    Parasyris, Antonios E.; Spanoudaki, Katerina; Kampanis, Nikolaos A.

    2016-04-01

    Groundwater level monitoring networks provide essential information for water resources management, especially in areas with significant groundwater exploitation for agricultural and domestic use. Given the high maintenance costs of these networks, development of tools, which can be used by regulators for efficient network design is essential. In this work, a monitoring network optimisation tool is presented. The network optimisation tool couples geostatistical modelling based on the Spartan family variogram with a genetic algorithm method and is applied to Mires basin in Crete, Greece, an area of high socioeconomic and agricultural interest, which suffers from groundwater overexploitation leading to a dramatic decrease of groundwater levels. The purpose of the optimisation tool is to determine which wells to exclude from the monitoring network because they add little or no beneficial information to groundwater level mapping of the area. Unlike previous relevant investigations, the network optimisation tool presented here uses Ordinary Kriging with the recently-established non-differentiable Spartan variogram for groundwater level mapping, which, based on a previous geostatistical study in the area leads to optimal groundwater level mapping. Seventy boreholes operate in the area for groundwater abstraction and water level monitoring. The Spartan variogram gives overall the most accurate groundwater level estimates followed closely by the power-law model. The geostatistical model is coupled to an integer genetic algorithm method programmed in MATLAB 2015a. The algorithm is used to find the set of wells whose removal leads to the minimum error between the original water level mapping using all the available wells in the network and the groundwater level mapping using the reduced well network (error is defined as the 2-norm of the difference between the original mapping matrix with 70 wells and the mapping matrix of the reduced well network). 
The solution to the optimisation problem (the best wells to retain in the monitoring network) depends on the total number of wells removed; this number is a management decision. The water level monitoring network of Mires basin has been optimised six times, by removing 5, 8, 12, 15, 20 and 25 wells from the original network. In order to achieve the optimum solution in the minimum possible computational time, a stall-generations criterion was set for each optimisation scenario. An improvement made to the classic genetic algorithm was to vary the mutation and crossover fractions with the change in the mean fitness value. This results in randomness in reproduction when the solution converges, to avoid local minima, or in more educated reproduction (a higher crossover ratio) when there is a larger change in the mean fitness value. The choice of the integer genetic algorithm in MATLAB 2015a poses the restriction of adding custom selection and crossover-mutation functions. Therefore, custom population and crossover-mutation-selection functions have been created to set the initial population type to custom and to allow the mutation/crossover probability to change with the convergence of the genetic algorithm, thus achieving higher accuracy. The application of the network optimisation tool to Mires basin indicates that 25 wells can be removed with a relatively small deterioration of the groundwater level map. The results indicate the robustness of the network optimisation tool: wells were removed from high well-density areas while preserving the spatial pattern of the original groundwater level map. Varouchakis, E. A. and D. T. Hristopulos (2013). "Improvement of groundwater level prediction in sparsely gauged basins using physical laws and local geographic features as auxiliary variables." Advances in Water Resources 52: 34-49.
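A compressed version of the idea — score each candidate reduced network by the 2-norm between the full-network map and the reduced-network map — can be sketched with inverse-distance weighting in place of Ordinary Kriging and a greedy search in place of the integer genetic algorithm (both substitutions are simplifications of the paper's method, and the well data are synthetic):

```python
import numpy as np

def idw_map(wells, levels, grid, power=2.0, eps=1e-9):
    """Inverse-distance-weighted groundwater level map on `grid` points
    (a stand-in for the paper's Ordinary Kriging with a Spartan variogram)."""
    d = np.linalg.norm(grid[:, None, :] - wells[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)
    return (w * levels).sum(axis=1) / w.sum(axis=1)

def greedy_removal(wells, levels, grid, n_remove):
    """Iteratively drop the well whose removal least perturbs the map."""
    full = idw_map(wells, levels, grid)
    keep = list(range(len(wells)))
    for _ in range(n_remove):
        errs = []
        for i in keep:
            sub = [j for j in keep if j != i]
            err = np.linalg.norm(full - idw_map(wells[sub], levels[sub], grid))
            errs.append((err, i))
        keep.remove(min(errs)[1])
    return keep

rng = np.random.default_rng(3)
wells = rng.uniform(0, 10, size=(20, 2))
levels = wells[:, 0] * 2.0 + rng.normal(0, 0.1, 20)  # smooth trend + noise
grid = np.array([[x, y] for x in np.linspace(0, 10, 12)
                 for y in np.linspace(0, 10, 12)])
kept = greedy_removal(wells, levels, grid, n_remove=5)
```

The genetic algorithm in the paper explores removal sets jointly rather than one well at a time, which matters when several redundant wells cluster together.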

  3. Effect of preventive (beta blocker) treatment, behavioural migraine management, or their combination on outcomes of optimised acute treatment in frequent migraine: randomised controlled trial.

    PubMed

    Holroyd, Kenneth A; Cottrell, Constance K; O'Donnell, Francis J; Cordingley, Gary E; Drew, Jana B; Carlson, Bruce W; Himawan, Lina

    2010-09-29

To determine if the addition of preventive drug treatment (β blocker), brief behavioural migraine management, or their combination improves the outcome of optimised acute treatment in the management of frequent migraine. Randomised placebo controlled trial over 16 months from July 2001 to November 2005. Two outpatient sites in Ohio, USA. 232 adults (mean age 38 years; 79% female) with diagnosis of migraine with or without aura according to International Headache Society classification of headache disorders criteria, who recorded at least three migraines with disability per 30 days (mean 5.5 migraines/30 days), during an optimised run-in of acute treatment. Addition of one of four preventive treatments to optimised acute treatment: β blocker (n=53), matched placebo (n=55), behavioural migraine management plus placebo (n=55), or behavioural migraine management plus β blocker (n=69). The primary outcome was change in migraines/30 days; secondary outcomes included change in migraine days/30 days and change in migraine specific quality of life scores. Mixed model analysis showed statistically significant (P≤0.05) differences in outcomes among the four added treatments for both the primary outcome (migraines/30 days) and the two secondary outcomes (change in migraine days/30 days and change in migraine specific quality of life scores). The addition of combined β blocker and behavioural migraine management (-3.3 migraines/30 days, 95% confidence interval -3.2 to -3.5), but not the addition of β blocker alone (-2.1 migraines/30 days, -1.9 to -2.2) or behavioural migraine management alone (-2.2 migraines/30 days, -2.0 to -2.4), improved outcomes compared with optimised acute treatment alone (-2.1 migraines/30 days, -1.9 to -2.2).
For a clinically significant reduction (≥50%) in migraines/30 days, the number needed to treat for optimised acute treatment plus combined β blocker and behavioural migraine management was 3.1 compared with optimised acute treatment alone, 2.6 compared with optimised acute treatment plus β blocker, and 3.1 compared with optimised acute treatment plus behavioural migraine management. Results were consistent for the two secondary outcomes, and at both month 10 (the primary endpoint) and month 16. The addition of combined β blocker plus behavioural migraine management, but not the addition of β blocker alone or behavioural migraine management alone, improved outcomes of optimised acute treatment. Combined β blocker treatment and behavioural migraine management may improve outcomes in the treatment of frequent migraine. Clinical trials NCT00910689.
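The numbers needed to treat quoted above follow from the standard identity NNT = 1/ARR, the reciprocal of the absolute difference in responder rates. A minimal sketch, with hypothetical responder proportions chosen only to land near the reported 3.1 (the trial reports NNTs, not these underlying proportions):

```python
def number_needed_to_treat(p_treatment, p_control):
    """NNT = 1 / ARR, where ARR is the absolute difference in the proportion
    of patients achieving the outcome (here, a >=50% reduction in
    migraines/30 days)."""
    arr = p_treatment - p_control
    if arr <= 0:
        raise ValueError("treatment is not better than control")
    return 1.0 / arr

# Hypothetical responder proportions for illustration only: 77% responders on
# combined treatment vs 45% on optimised acute treatment alone gives NNT = 3.125.
nnt = number_needed_to_treat(0.77, 0.45)
```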

  4. Electric double-layer capacitance between an ionic liquid and few-layer graphene.

    PubMed

    Uesugi, Eri; Goto, Hidenori; Eguchi, Ritsuko; Fujiwara, Akihiko; Kubozono, Yoshihiro

    2013-01-01

    Ionic-liquid gates have a high carrier density due to their atomically thin electric double layer (EDL) and extremely large geometrical capacitance Cg. However, a high carrier density in graphene has not been achieved even with ionic-liquid gates because the EDL capacitance CEDL between the ionic liquid and graphene involves the series connection of Cg and the quantum capacitance Cq, which is proportional to the density of states. We investigated the variables that determine CEDL at the molecular level by varying the number of graphene layers n and thereby optimising Cq. The CEDL value is governed by Cq at n < 4, and by Cg at n > 4. This transition with n indicates a composite nature for CEDL. Our finding clarifies a universal principle that determines capacitance on a microscopic scale, and provides nanotechnological perspectives on charge accumulation and energy storage using an ultimately thin capacitor.
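The series connection described above means CEDL is the harmonic combination of Cg and Cq, so whichever capacitance is smaller dominates; this is why Cq governs at small n and Cg at large n. A minimal numerical sketch with illustrative values (not the paper's measurements):

```python
def series_capacitance(c_g, c_q):
    """EDL capacitance as a series connection of the geometric and quantum
    terms: 1/C_EDL = 1/C_g + 1/C_q, so the smaller term dominates."""
    return 1.0 / (1.0 / c_g + 1.0 / c_q)

# Illustrative values (uF/cm^2), not measured ones: for few layers C_q is
# small and limits C_EDL; as the layer number grows, C_q grows and C_g takes over.
c_thin = series_capacitance(c_g=10.0, c_q=2.0)    # quantum-capacitance-limited
c_thick = series_capacitance(c_g=10.0, c_q=80.0)  # geometry-limited
```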

  5. Electric double-layer capacitance between an ionic liquid and few-layer graphene

    PubMed Central

    Uesugi, Eri; Goto, Hidenori; Eguchi, Ritsuko; Fujiwara, Akihiko; Kubozono, Yoshihiro

    2013-01-01

    Ionic-liquid gates have a high carrier density due to their atomically thin electric double layer (EDL) and extremely large geometrical capacitance Cg. However, a high carrier density in graphene has not been achieved even with ionic-liquid gates because the EDL capacitance CEDL between the ionic liquid and graphene involves the series connection of Cg and the quantum capacitance Cq, which is proportional to the density of states. We investigated the variables that determine CEDL at the molecular level by varying the number of graphene layers n and thereby optimising Cq. The CEDL value is governed by Cq at n < 4, and by Cg at n > 4. This transition with n indicates a composite nature for CEDL. Our finding clarifies a universal principle that determines capacitance on a microscopic scale, and provides nanotechnological perspectives on charge accumulation and energy storage using an ultimately thin capacitor. PMID:23549208

  6. ICRP draft publication on 'radiological protection against radon exposure'.

    PubMed

    Lecomte, J-F

    2014-07-01

To control the main part of radon exposure, the Main Commission of the International Commission on Radiological Protection (ICRP) recommends an integrated approach focused, as far as possible, on the management of the building or location in which radon exposure occurs, whatever the purpose of the building and the types of its occupants. This approach is based on the optimisation principle and on a graded approach according to the degree of responsibility at stake, notably in workplaces, as well as the level of ambition of the national authorities. The report being developed by Committee 4 considers the recently consolidated ICRP general recommendations, new scientific knowledge about radon risk, and the experience gained by many organisations and countries in the control of radon exposure.

  7. Topology optimisation for natural convection problems

    NASA Astrophysics Data System (ADS)

    Alexandersen, Joe; Aage, Niels; Andreasen, Casper Schousboe; Sigmund, Ole

    2014-12-01

    This paper demonstrates the application of the density-based topology optimisation approach for the design of heat sinks and micropumps based on natural convection effects. The problems are modelled under the assumptions of steady-state laminar flow using the incompressible Navier-Stokes equations coupled to the convection-diffusion equation through the Boussinesq approximation. In order to facilitate topology optimisation, the Brinkman approach is taken to penalise velocities inside the solid domain and the effective thermal conductivity is interpolated in order to accommodate differences in thermal conductivity of the solid and fluid phases. The governing equations are discretised using stabilised finite elements and topology optimisation is performed for two different problems using discrete adjoint sensitivity analysis. The study shows that topology optimisation is a viable approach for designing heat sink geometries cooled by natural convection and micropumps powered by natural convection.
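The Brinkman penalisation and conductivity interpolation described above can be sketched as follows; the interpolation forms and constants are generic illustrative choices (a RAMP-style inverse permeability and a linear conductivity blend), not the exact expressions used in the paper:

```python
def brinkman_inverse_permeability(rho, alpha_max=1e4, q=0.1):
    """Brinkman penalisation: with design density rho = 1 in fluid and
    rho = 0 in solid, alpha(rho) vanishes in the fluid and becomes large in
    the solid, damping velocities inside solid regions (RAMP-style form)."""
    return alpha_max * (1.0 - rho) / (1.0 + q * rho)

def effective_conductivity(rho, k_solid=10.0, k_fluid=1.0):
    """Linear interpolation of thermal conductivity between the solid
    (rho = 0) and fluid (rho = 1) phases."""
    return k_solid + rho * (k_fluid - k_solid)
```

Intermediate densities are thereby penalised toward a crisp solid/fluid layout while the coupled flow and heat equations remain differentiable for the adjoint sensitivity analysis.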

  8. Engaging Homeless Individuals in Discussion about Their Food Experiences to Optimise Wellbeing: A Pilot Study

    ERIC Educational Resources Information Center

    Pettinger, Clare; Parsons, Julie M.; Cunningham, Miranda; Withers, Lyndsey; D'Aprano, Gia; Letherby, Gayle; Sutton, Carole; Whiteford, Andrew; Ayres, Richard

    2017-01-01

    Objective: High levels of social and economic deprivation are apparent in many UK cities, where there is evidence of certain "marginalised" communities suffering disproportionately from poor nutrition, threatening health. Finding ways to engage with these communities is essential to identify strategies to optimise wellbeing and life…

  9. Performance assessment and optimisation of a large information system by combined customer relationship management and resilience engineering: a mathematical programming approach

    NASA Astrophysics Data System (ADS)

    Azadeh, A.; Foroozan, H.; Ashjari, B.; Motevali Haghighi, S.; Yazdanparast, R.; Saberi, M.; Torki Nejad, M.

    2017-10-01

Information systems (ISs) and information technologies (ITs) play a critical role in large, complex gas corporations. Many factors, including human, organisational and environmental ones, affect an IS in an organisation; investigating IS success is therefore a complex problem. Moreover, because of the competitive business environment and the high volume of information flow in organisations, new issues such as resilient ISs and successful customer relationship management (CRM) have emerged. A resilient IS will provide sustainable delivery of information to internal and external customers. This paper presents an integrated approach to enhance and optimise the performance of each component of a large IS based on CRM and resilience engineering (RE) in a gas company. The enhancement of performance can help ISs to perform business tasks efficiently. The data are collected from standard questionnaires and then analysed by data envelopment analysis, selecting the optimal mathematical programming approach. The selected model is validated and verified by the principal component analysis method. Finally, CRM and RE factors are identified as influential factors through sensitivity analysis for this particular case study. To the best of our knowledge, this is the first study of performance assessment and optimisation of a large IS by combined RE and CRM.

  10. Fractures in sport: Optimising their management and outcome

    PubMed Central

    Robertson, Greg AJ; Wood, Alexander M

    2015-01-01

Fractures in sport are a specialised cohort of fracture injuries, occurring in a high functioning population, in which the goals are rapid restoration of function and return to play with the minimal symptom profile possible. While the general principles of fracture management, namely accurate fracture reduction, appropriate immobilisation and timely rehabilitation, guide the treatment of these injuries, management of fractures in athletic populations can differ significantly from those in the general population, due to the need to facilitate a rapid return to high demand activities. However, despite fractures comprising up to 10% of all sporting injuries, dedicated research into the management and outcome of sport-related fractures is limited. In order to assess the optimal methods of treating such injuries, and so allow optimisation of their outcome, the evidence for the management of each specific sport-related fracture type requires assessment and analysis. We present and review the current evidence directing management of fractures in athletes with an aim to promote valid innovative methods and optimise the outcome of such injuries. From this, key recommendations are provided for the management of the common fracture types seen in the athlete. Six case reports are also presented to illustrate the management planning and application of sport-focussed fracture management in the clinical setting. PMID:26716081

  11. Statistical optimisation of diclofenac sustained release pellets coated with polymethacrylic films.

    PubMed

    Kramar, A; Turk, S; Vrecer, F

    2003-04-30

    The objective of the present study was to evaluate three formulation parameters for the application of polymethacrylic films from aqueous dispersions in order to obtain multiparticulate sustained release of diclofenac sodium. Film coating of pellet cores was performed in a laboratory fluid bed apparatus. The chosen independent variables, i.e. the concentration of plasticizer (triethyl citrate), methacrylate polymers ratio (Eudragit RS:Eudragit RL) and the quantity of coating dispersion were optimised with a three-factor, three-level Box-Behnken design. The chosen dependent variables were cumulative percentage values of diclofenac dissolved in 3, 4 and 6 h. Based on the experimental design, different diclofenac release profiles were obtained. Response surface plots were used to relate the dependent and the independent variables. The optimisation procedure generated an optimum of 40% release in 3 h. The levels of plasticizer concentration, quantity of coating dispersion and polymer to polymer ratio (Eudragit RS:Eudragit RL) were 25% w/w, 400 g and 3/1, respectively. The optimised formulation prepared according to computer-determined levels provided a release profile, which was close to the predicted values. We also studied thermal and surface characteristics of the polymethacrylic films to understand the influence of plasticizer concentration on the drug release from the pellets.
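A Box-Behnken design of the kind used above places each pair of factors at their coded ±1 levels while holding the remaining factor at its mid level, plus centre runs; the study's actual levels (plasticizer concentration, coating quantity, polymer ratio) map onto the coded −1/0/+1 values. A small sketch of the design geometry, with an assumed three centre runs:

```python
from itertools import combinations, product

def box_behnken(n_factors, n_center=3):
    """Box-Behnken design: for every pair of factors take the four (+/-1, +/-1)
    corners with all other factors at their mid level (0), plus centre runs."""
    runs = []
    for i, j in combinations(range(n_factors), 2):
        for a, b in product((-1, 1), repeat=2):
            run = [0] * n_factors
            run[i], run[j] = a, b
            runs.append(run)
    runs += [[0] * n_factors] * n_center
    return runs

design = box_behnken(3)   # 12 edge-midpoint runs + 3 centre runs = 15 experiments
```

Fitting a quadratic response surface to the 15 measured dissolution values then lets the optimiser locate the factor combination predicting the target release profile.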

  12. [Shortage of doctors in psychiatric hospitals--providing for the future by reorganizing medical services].

    PubMed

    Jordan, Wolfgang; Adler, Lothar; Bleich, Stefan; von Einsiedel, Regina; Falkai, Peter; Grosskopf, Volker; Hauth, Iris; Steiner, Johann; Cohrs, Stefan

    2011-11-01

The increasing need for treatment of psychiatric disorders, increased workload, changes in working-hour regulations, the nationwide shortage of physicians, and the principles of efficiency and economisation can necessitate a reorganisation of medical services. The essential steps and instruments of process optimisation in medical services for a psychiatric clinic are elucidated and discussed in the context of demographic change, generational change, and a new concept of values.

  13. Optimising operational amplifiers by evolutionary algorithms and gm/Id method

    NASA Astrophysics Data System (ADS)

    Tlelo-Cuautle, E.; Sanabria-Borbon, A. C.

    2016-10-01

The evolutionary algorithm called non-dominated sorting genetic algorithm (NSGA-II) is applied herein to the optimisation of operational transconductance amplifiers. NSGA-II is accelerated by applying the gm/Id method to estimate reduced search spaces associated with the widths (W) and lengths (L) of the metal-oxide-semiconductor field-effect transistors (MOSFETs), and to guarantee their appropriate bias conditions. In addition, we introduce an integer encoding for the W/L sizes of the MOSFETs to avoid a post-processing step of rounding off their values to multiples of the integrated circuit fabrication technology grid. Finally, from the feasible solutions generated by NSGA-II, we introduce a second optimisation stage to guarantee that the final feasible W/L sizes support process, voltage and temperature (PVT) variations. The optimisation results lead us to conclude that the gm/Id method and integer encoding are quite useful for accelerating the convergence of the evolutionary algorithm NSGA-II, while the second optimisation stage guarantees robustness of the feasible solutions to PVT variations.
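The integer encoding described above amounts to representing each W/L size as an integer multiple of the fabrication grid, so no rounding-off post-processing is needed after optimisation. A minimal sketch, with a hypothetical 0.09 µm grid (an assumption, not the paper's technology):

```python
def snap_to_grid(size_um, grid_um=0.09):
    """Encode a transistor width or length as an integer gene: the gene is the
    number of grid steps, and decoding multiplies back by the grid pitch, so
    every candidate size is automatically a legal multiple of the grid."""
    gene = max(1, round(size_um / grid_um))
    return gene, gene * grid_um

gene, w = snap_to_grid(1.0)   # 1.0 um requested -> gene 11 -> 0.99 um realised
```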

  14. Optimisation of the hybrid renewable energy system by HOMER, PSO and CPSO for the study area

    NASA Astrophysics Data System (ADS)

    Khare, Vikas; Nema, Savita; Baredar, Prashant

    2017-04-01

This study is based on simulation and optimisation of the renewable energy system of the police control room at Sagar in central India. To analyse this hybrid system, the meteorological data of solar insolation and hourly wind speeds of Sagar in central India (longitude 78°45′ and latitude 23°50′) have been considered. The pattern of load consumption is studied and suitably modelled for optimisation of the hybrid energy system using HOMER software. The results are compared with those of the particle swarm optimisation and the chaotic particle swarm optimisation algorithms. The use of these two algorithms to optimise the hybrid system leads to a higher quality result with faster convergence. Based on the optimisation result, it has been found that replacing conventional energy sources with the solar-wind hybrid renewable energy system is a feasible solution for the distribution of electric power as a stand-alone application at the police control room. This system is more environmentally friendly than the conventional diesel generator, and fuel costs are reduced by approximately 70-80% compared with the diesel-only alternative.
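The particle swarm optimisation referred to above can be sketched generically as follows; the toy quadratic cost stands in for the hybrid system's actual cost model, and the inertia and acceleration coefficients are conventional defaults rather than the study's settings:

```python
import random

def pso(cost, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimisation: each particle keeps a velocity and
    is pulled toward its personal best and the swarm's global best."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Toy sizing problem standing in for the hybrid system cost model:
# minimise (x - 3)^2 + (y - 5)^2 over plausible component-count ranges.
best, best_cost = pso(lambda p: (p[0] - 3) ** 2 + (p[1] - 5) ** 2,
                      bounds=[(0, 10), (0, 10)])
```

The chaotic variant mentioned in the study replaces the uniform random draws with a chaotic map to improve exploration; the update structure is otherwise the same.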

  15. A practical guide for nurses in diluent selection for subcutaneous infusion using a syringe driver.

    PubMed

    McLeod, Fiona; Flowers, Charne

    2006-12-01

Appropriate diluent selection in continuous subcutaneous infusion optimises symptom management and client well-being. The responsibility for diluent selection commonly rests with the attending nurse. This paper was developed with the intention of providing nurses with practical instruction in diluent selection when preparing medications for subcutaneous administration using a syringe driver. A literature review was undertaken of published journal databases and published guideline sites. Recommendations regarding diluent choice were reviewed in two iterations by an expert panel of palliative care nurse clinicians. The principles for diluent selection are presented. They are based primarily on expert-opinion-level evidence, given the lack of primary research evidence in the area of diluent selection. There is a pressing need for manufacturers' guidance on diluent selection and for independent research to establish the impact of diluents on drugs and drug combinations when using syringe drivers. Until such evidence is available to guide practice, clinicians need to be trained to inspect solutions and assess the effectiveness of the medication in controlling symptoms. The capacity of this paper to provide practical instruction has been limited by the lack of rigorous evidence available; indeed, the process of developing this guide identified perhaps more questions than answers available at the present time.

  16. Modern multicore and manycore architectures: Modelling, optimisation and benchmarking a multiblock CFD code

    NASA Astrophysics Data System (ADS)

    Hadade, Ioan; di Mare, Luca

    2016-08-01

Modern multicore and manycore processors exhibit multiple levels of parallelism through a wide range of architectural features such as SIMD for data parallel execution or threads for core parallelism. The exploitation of multi-level parallelism is therefore crucial for achieving superior performance on current and future processors. This paper presents the performance tuning of a multiblock CFD solver on Intel SandyBridge and Haswell multicore CPUs and the Intel Xeon Phi Knights Corner coprocessor. Code optimisations have been applied on two computational kernels exhibiting different computational patterns: the update of flow variables and the evaluation of the Roe numerical fluxes. We discuss at great length the code transformations required for achieving efficient SIMD computations for both kernels across the selected devices, including SIMD shuffles and transpositions for flux stencil computations and global memory transformations. Core parallelism is expressed through threading based on a number of domain decomposition techniques together with optimisations pertaining to alleviating NUMA effects found in multi-socket compute nodes. Results are correlated with the Roofline performance model in order to assess their efficiency on each distinct architecture. We report significant speedups for single thread execution across both kernels: 2-5X on the multicore CPUs and 14-23X on the Xeon Phi coprocessor. Computations at full node and chip concurrency deliver a factor of three speedup on the multicore processors and up to 24X on the Xeon Phi manycore coprocessor.
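The Roofline model used above to gauge kernel efficiency caps attainable performance by either the compute peak or the memory-bandwidth roof, whichever is lower at a kernel's arithmetic intensity. A minimal sketch with illustrative machine numbers (assumptions, not the paper's measured figures):

```python
def roofline(peak_gflops, bandwidth_gbs, intensity_flops_per_byte):
    """Roofline model: attainable GFLOP/s = min(compute peak,
    memory bandwidth * arithmetic intensity)."""
    return min(peak_gflops, bandwidth_gbs * intensity_flops_per_byte)

# Illustrative machine balance: a 500 GFLOP/s peak with 50 GB/s of bandwidth.
memory_bound = roofline(500.0, 50.0, 4.0)    # 50 GB/s * 4 flop/byte = 200 GFLOP/s
compute_bound = roofline(500.0, 50.0, 16.0)  # capped at the 500 GFLOP/s peak
```

Plotting a kernel's measured GFLOP/s against this bound shows whether further tuning should target memory traffic or vectorisation.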

  17. Global nuclear industry views: challenges arising from the evolution of the optimisation principle in radiological protection.

    PubMed

    Saint-Pierre, S

    2012-01-01

    Over the last few decades, the steady progress achieved in reducing planned exposures of both workers and the public has been admirable in the nuclear sector. However, the disproportionate focus on tiny public exposures and radioactive discharges associated with normal operations came at a high price, and the quasi-denial of a risk of major accident and related weaknesses in emergency preparedness and response came at an even higher price. Fukushima has unfortunately taught us that radiological protection (RP) for emergency and post-emergency situations can be much more than a simple evacuation that lasts 24-48 h, with people returning safely to their homes soon afterwards. On optimisation of emergency and post-emergency exposures, the only 'show in town' in terms of international RP policy improvements has been the issuance of the 2007 Recommendations of the International Commission on Radiological Protection (ICRP). However, no matter how genuine these improvements are, they have not been 'road tested' on the practical reality of severe accidents. Post-Fukushima, there is a compelling case to review the practical adequacy of key RP notions such as optimisation, evacuation, sheltering, and reference levels for workers and the public, and to amend these notions with a view to making the international RP system more useful in the event of a severe accident. On optimisation of planned exposures, the reality is that, nowadays, margins for further reductions of public doses in the nuclear sector are very small, and the smaller the dose, the greater the extra effort needed to reduce the dose further. If sufficient caution is not exercised in the use of RP notions such as dose constraints, there is a real risk of challenging nuclear power technologies beyond safety reasons. For nuclear new build, it is the optimisation of key operational parameters of nuclear power technologies (not RP) that is of paramount importance to improve their overall efficiency. 
In pursuing further improvements in the international RP system, it should be clearly borne in mind that the system is generally based on protection against the risk of cancer and hereditary diseases. The system also protects against deterministic non-cancer effects on tissues and organs. In seeking refinements of such protective notions, ICRP is invited to pay increased attention to the fact that a continued balance must be struck between beneficial activities that cause exposures and protection. The global nuclear industry is committed to help overcome these key RP issues as part of the RP community's upcoming international deliberations towards a more efficient international RP system.

  18. 'Admit voluntary, schedule if tries to leave': placing Mental Health Acts in the context of mental health law and human rights.

    PubMed

    Wand, Anne; Wand, Timothy

    2013-04-01

    Most postgraduate training for clinicians in Australia and New Zealand regarding mental health legislation focuses on the relevant Mental Health Acts (MHAs) rather than the broader principles of mental health law. Key concepts include treatment in the least restrictive environment, voluntary access to mental health services, treatability, reciprocity and due process. Lack of awareness of these principles may result in a more risk-averse interpretation of MHAs, which is inconsistent with the spirit of mental health law and the promotion of human rights. The aim of this paper is to present some fundamental principles of mental health law, which are essential to proper clinical application of MHAs, and to demonstrate why they should form part of the curriculum for psychiatry training and continuing professional development for psychiatrists. A sound understanding of the principles of mental health law is essential for all clinicians who may be enacting aspects of MHAs. This provides the necessary platform to safeguard human rights and optimise the care of people with a mental illness.

  19. Optimisation of dispersion parameters of Gaussian plume model for CO₂ dispersion.

    PubMed

    Liu, Xiong; Godbole, Ajit; Lu, Cheng; Michal, Guillaume; Venton, Philip

    2015-11-01

Carbon capture and storage (CCS) and enhanced oil recovery (EOR) projects entail the possibility of accidental release of carbon dioxide (CO2) into the atmosphere. To quantify the spread of CO2 following such a release, the 'Gaussian' dispersion model is often used to estimate the resulting CO2 concentration levels in the surroundings. The Gaussian model enables quick estimates of the concentration levels. However, the traditionally recommended values of the 'dispersion parameters' in the Gaussian model may not be directly applicable to CO2 dispersion. This paper presents an optimisation technique to obtain the dispersion parameters in order to achieve a quick estimation of CO2 concentration levels in the atmosphere following CO2 blowouts. The optimised dispersion parameters enable the Gaussian model to produce quick estimates of CO2 concentration levels, precluding the necessity to set up and run much more complicated models. Computational fluid dynamics (CFD) models were employed to produce reference CO2 dispersion profiles for various atmospheric stability classes (ASC), different 'source strengths' and degrees of ground roughness. The performance of the CFD models was validated against the 'Kit Fox' field measurements, involving dispersion over a flat horizontal terrain with both low and high roughness regions. An optimisation model employing a genetic algorithm (GA) to determine the best dispersion parameters in the Gaussian plume model was set up. Optimum values of the dispersion parameters for different ASCs that can be used in the Gaussian plume model for predicting CO2 dispersion were obtained.
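The Gaussian plume model referred to above has a standard closed form with ground reflection; a minimal sketch follows, assuming the common power-law parameterisation sigma = a * x**b for the dispersion parameters, whose coefficients per stability class are exactly what a GA of this kind tunes (the values below are illustrative, not the paper's optimised ones):

```python
import math

def gaussian_plume(q, u, x, y, z, h, a_y, b_y, a_z, b_z):
    """Ground-reflected Gaussian plume concentration at (x, y, z) downwind of
    a source of strength q (g/s) at height h (m) in wind speed u (m/s).
    Dispersion parameters follow sigma = a * x**b in each direction."""
    sy, sz = a_y * x ** b_y, a_z * x ** b_z
    lateral = math.exp(-y ** 2 / (2 * sy ** 2))
    vertical = (math.exp(-(z - h) ** 2 / (2 * sz ** 2))
                + math.exp(-(z + h) ** 2 / (2 * sz ** 2)))  # image-source reflection
    return q / (2 * math.pi * u * sy * sz) * lateral * vertical

# Illustrative coefficients for one stability class; a GA would adjust
# (a_y, b_y, a_z, b_z) to fit the CFD reference profiles.
c = gaussian_plume(q=1.0, u=5.0, x=500.0, y=0.0, z=0.0, h=10.0,
                   a_y=0.22, b_y=0.9, a_z=0.2, b_z=0.85)
```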

  20. Cosmic radiation in aviation: radiological protection of Air France aircraft crew.

    PubMed

    Desmaris, G

    2016-06-01

Cosmic radiation in aviation has been a concern since the 1960s, and measurements have been taken for several decades by Air France. Results show that aircraft crew generally receive 3-4 mSv y(-1) for 750 boarding hours. Compliance with the trigger level of 6 mSv y(-1) is achieved by route selection. Work schedules can be developed for pregnant pilots to enable the dose to the fetus to be kept below 1 mSv. Crew members are informed of their exposure and the potential health impact. The upcoming International Commission on Radiological Protection (ICRP) report on cosmic radiation in aviation will provide updated guidance. A graded approach proportionate to the time of exposure is recommended to implement the optimisation principle. The objective is to keep exposures of the most exposed aircraft crew members at reasonable levels. ICRP also recommends that information about cosmic radiation be disseminated, and that awareness about cosmic radiation be raised, in order to favour informed decision-making by all concerned stakeholders.

  1. Multiple response optimisation of processing and formulation parameters of pH sensitive sustained release pellets of capecitabine for targeting colon.

    PubMed

    Pandey, Sonia; Swamy, S M Vijayendra; Gupta, Arti; Koli, Akshay; Patel, Swagat; Maulvi, Furqan; Vyas, Bhavin

    2018-04-29

To optimise Eudragit/Surelease®-coated pH-sensitive pellets for controlled and targeted drug delivery to colon tissue, and to avoid the frequent high dosing and associated side effects that restrict capecitabine's use in colorectal cancer therapy. The pellets were prepared using an extrusion-spheronisation technique. Box-Behnken and 3² full factorial designs were applied to optimise the process parameters (extruder sieve size, spheroniser speed, and spheroniser time) and the coating levels (%w/v of Eudragit S100/Eudragit L100 and Surelease®), respectively, to achieve smooth pellets of optimised size with sustained drug delivery and without premature drug release in the upper gastrointestinal tract (GIT). The design proposed the optimised batch with independent variables set at extruder sieve size (X1 = 1 mm), spheroniser speed (X2 = 900 revolutions per minute, rpm), and spheroniser time (X3 = 15 min), achieving a pellet size of 0.96 mm, an aspect ratio of 0.98, and roundness of 97.42%. The 16%w/v coating strength of Surelease® and 13%w/v coating strength of Eudragit showed pH-dependent sustained release up to 22.35 h (t99%). The organ distribution study showed the absence of drug in the upper GIT tissue and high levels of capecitabine in the caecum and colon tissue. Thus, the outer Eudragit coat prevents release of the drug in the stomach, while the inner Surelease® coat provides sustained drug release in the colon tissue. The study demonstrates the potential of optimised Eudragit/Surelease®-coated capecitabine pellets as an effective colon-targeted delivery system that avoids the frequent high dosing and associated systemic side effects of the drug.

  2. Radiation dose optimisation for conventional imaging in infants and newborns using automatic dose management software: an application of the new 2013/59 EURATOM directive.

    PubMed

    Alejo, L; Corredoira, E; Sánchez-Muñoz, F; Huerga, C; Aza, Z; Plaza-Núñez, R; Serrada, A; Bret-Zurita, M; Parrón, M; Prieto-Areyano, C; Garzón-Moll, G; Madero, R; Guibelalde, E

    2018-04-09

Objective: The new 2013/59 EURATOM Directive (ED) demands dosimetric optimisation procedures without undue delay. The aim of this study was to optimise paediatric conventional radiology examinations under the ED without compromising clinical diagnosis. Automatic dose management software (ADMS) was used to analyse 2678 studies of children from birth to 5 years of age, obtaining local diagnostic reference levels (DRLs) in terms of entrance surface air kerma. Given that the local DRL for infant chest examinations exceeded the European Commission (EC) DRL, an optimisation was performed by decreasing the kVp and applying automatic exposure control. To assess image quality, an analysis of high-contrast spatial resolution (HCSR), signal-to-noise ratio (SNR) and figure of merit (FOM) was performed, as well as a blind test based on the generalised estimating equations method. For newborn chest examinations, the local DRL exceeded the EC DRL by 113%. After the optimisation, a reduction of 54% was obtained. No significant differences were found in the image quality blind test. A decrease in SNR (-37%) and HCSR (-68%), and an increase in FOM (42%), were observed. ADMS allows fast calculation of local DRLs and the performance of optimisation procedures in babies without delay. However, physical and clinical analyses of image quality are still needed to ensure diagnostic integrity after the optimisation process. Advances in knowledge: ADMS is useful for detecting radiation protection problems and performing optimisation procedures in paediatric conventional imaging without undue delay, as the ED requires.

  3. Medical imaging dose optimisation from ground up: expert opinion of an international summit.

    PubMed

    Samei, Ehsan; Järvinen, Hannu; Kortesniemi, Mika; Simantirakis, George; Goh, Charles; Wallace, Anthony; Vano, Eliseo; Bejan, Adrian; Rehani, Madan; Vassileva, Jenia

    2018-05-17

    As in any medical intervention, there is either a known or an anticipated benefit to the patient from undergoing a medical imaging procedure. This benefit is generally significant, as demonstrated by the manner in which medical imaging has transformed clinical medicine. At the same time, when it comes to imaging that deploys ionising radiation, there is a potential associated risk from radiation. Radiation risk has been recognised as a key liability in the practice of medical imaging, creating a motivation for radiation dose optimisation. The level of radiation dose and risk in imaging varies but is generally low. Thus, from the epidemiological perspective, this makes the estimation of the precise level of associated risk highly uncertain. However, in spite of the low magnitude and high uncertainty of this risk, its possibility cannot easily be refuted. Therefore, given the moral obligation of healthcare providers, 'first, do no harm,' there is an ethical obligation to mitigate this risk. Precisely how to achieve this goal scientifically and practically within a coherent system has been an open question. To address this need, in 2016, the International Atomic Energy Agency (IAEA) organised a summit to clarify the role of Diagnostic Reference Levels to optimise imaging dose, summarised into an initial report (Järvinen et al 2017 Journal of Medical Imaging 4 031214). Through a consensus building exercise, the summit further concluded that the imaging optimisation goal goes beyond dose alone, and should include image quality as a means to include both the benefit and the safety of the exam. The present, second report details the deliberation of the summit on imaging optimisation.

  4. Important considerations about nursing intelligence and information systems.

    PubMed

    Ballard, E C

    1997-01-01

    This discussion focuses on the importance of nursing intelligence to the organisation, and the nurses' role in gathering and utilising such intelligence. Deliberations with professional colleagues suggest that intelligence can only be utilised fully when the information systems are developed in such a way as to meet the needs of the people who manage and provide nursing care at the consumer level; that is, the activity of nursing itself. If accommodation is made for the recycling of nursing intelligence, 'professional' intelligence would be supported and furthered. Two main issues emerge: how can nurses support the needs of management to optimise intelligence input, and how can organisations optimise the contribution of nurses to their information processes and interpretation of intelligence? The expansion of this 'professional' intelligence would promote a generation of constantly reviewed data, offering a quality approach to nursing activities and an organisation's intelligence system.

  5. Orbital optimisation in the perfect pairing hierarchy: applications to full-valence calculations on linear polyacenes

    NASA Astrophysics Data System (ADS)

    Lehtola, Susi; Parkhill, John; Head-Gordon, Martin

    2018-03-01

    We describe the implementation of orbital optimisation for the models in the perfect pairing hierarchy. Orbital optimisation, which is generally necessary to obtain reliable results, is pursued at perfect pairing (PP) and perfect quadruples (PQ) levels of theory for applications on linear polyacenes, which are believed to exhibit strong correlation in the π space. While local minima and σ-π symmetry breaking solutions were found for PP orbitals, no such problems were encountered for PQ orbitals. The PQ orbitals are used for single-point calculations at PP, PQ and perfect hextuples (PH) levels of theory, both in the π subspace alone and in the full σπ valence space. It is numerically demonstrated that the inclusion of single excitations is also necessary when optimised orbitals are used. PH is found to yield good agreement with previously published density matrix renormalisation group data in the π space, capturing over 95% of the correlation energy. Full-valence calculations made possible by our novel, efficient code reveal that the strong correlations are weaker when larger basis sets or active spaces are employed than suggested by previous calculations. The largest full-valence PH calculations presented correspond to a (192e,192o) problem.

  6. Contexts, concepts and cognition: principles for the transfer of basic science knowledge.

    PubMed

    Kulasegaram, Kulamakan M; Chaudhary, Zarah; Woods, Nicole; Dore, Kelly; Neville, Alan; Norman, Geoffrey

    2017-02-01

    Transfer of basic science aids novices in the development of clinical reasoning. The literature suggests that although transfer is often difficult for novices, it can be optimised by two complementary strategies: (i) focusing learners on conceptual knowledge of basic science or (ii) exposing learners to multiple contexts in which the basic science concepts may apply. The relative efficacy of each strategy as well as the mechanisms that facilitate transfer are unknown. In two sequential experiments, we compared both strategies and explored mechanistic changes in how learners address new transfer problems. Experiment 1 was a 2 × 3 design in which participants were randomised to learn three physiology concepts with or without emphasis on the conceptual structure of basic science via illustrative analogies and by means of one, two or three contexts during practice (operationalised as organ systems). Transfer of these concepts to explain pathologies in familiar organ systems (near transfer) and unfamiliar organ systems (far transfer) was evaluated during immediate and delayed testing. Experiment 2 examined whether exposure to conceptual analogies and multiple contexts changed how learners classified new problems. Experiment 1 showed that increasing context variation significantly improved far transfer performance but there was no difference between two and three contexts during practice. Similarly, increased use of conceptual analogies led to higher performance on far transfer. Both interventions had independent but additive effects on overall performance. Experiment 2 showed that such analogies and context variation caused learners to shift to using structural characteristics to classify new problems even when there was superficial similarity to previous examples. Understanding problems based on conceptual structural characteristics is necessary for successful transfer.
Transfer of basic science can be optimised by using multiple strategies that collectively emphasise conceptual structure. This means teaching must focus on conserved basic science knowledge and de-emphasise superficial features. © 2017 John Wiley & Sons Ltd and The Association for the Study of Medical Education.

  7. A new compound arithmetic crossover-based genetic algorithm for constrained optimisation in enterprise systems

    NASA Astrophysics Data System (ADS)

    Jin, Chenxia; Li, Fachao; Tsang, Eric C. C.; Bulysheva, Larissa; Kataev, Mikhail Yu

    2017-01-01

    In many real industrial applications, the integration of raw data with a methodology can support economically sound decision-making. Furthermore, most of these tasks involve complex optimisation problems, so seeking better solutions is critical. As an intelligent search optimisation algorithm, the genetic algorithm (GA) is an important technique for complex system optimisation, but it has internal drawbacks such as low computational efficiency and premature convergence. Improving the performance of GA is a vital topic in both academic and applied research. In this paper, a new real-coded crossover operator, called the compound arithmetic crossover operator (CAC), is proposed. CAC is used in conjunction with a uniform mutation operator to define a new genetic algorithm, CAC10-GA. This GA is compared with an existing genetic algorithm (AC10-GA) that comprises an arithmetic crossover operator and a uniform mutation operator. To judge the performance of CAC10-GA, two kinds of analysis are performed: first, the convergence of CAC10-GA is analysed using Markov chain theory; second, a pair-wise comparison is carried out between CAC10-GA and AC10-GA on two test problems from the global optimisation literature. The overall comparative study shows that CAC performs quite well and that the resulting CAC10-GA outperforms the AC10-GA.
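
    The abstract does not give the internal definition of the compound arithmetic crossover, so the following is a minimal sketch of the two baseline operators that the AC10-GA combines: standard real-coded arithmetic crossover and uniform mutation. All function names and parameters here are illustrative, not taken from the paper.

```python
import random

def arithmetic_crossover(p1, p2, alpha=None):
    """Real-coded arithmetic crossover: each child is a convex
    combination of the two parents, gene by gene."""
    if alpha is None:
        alpha = random.random()
    c1 = [alpha * x + (1 - alpha) * y for x, y in zip(p1, p2)]
    c2 = [(1 - alpha) * x + alpha * y for x, y in zip(p1, p2)]
    return c1, c2

def uniform_mutation(ind, bounds, pm=0.1):
    """Uniform mutation: with probability pm, replace a gene by a
    uniform random value drawn from its feasible interval."""
    return [random.uniform(lo, hi) if random.random() < pm else g
            for g, (lo, hi) in zip(ind, bounds)]
```

    Because the children are convex combinations, they always stay inside the hyper-rectangle spanned by the parents, which is one reason arithmetic crossover is attractive for constrained real-valued optimisation.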

  8. Self-optimisation and model-based design of experiments for developing a C-H activation flow process.

    PubMed

    Echtermeyer, Alexander; Amar, Yehia; Zakrzewski, Jacek; Lapkin, Alexei

    2017-01-01

    A recently described C(sp3)-H activation reaction to synthesise aziridines was used as a model reaction to demonstrate the methodology of developing a process model using model-based design of experiments (MBDoE) and self-optimisation approaches in flow. The two approaches are compared in terms of experimental efficiency. The self-optimisation approach required the fewest experiments to reach the specified objectives of cost and product yield, whereas the MBDoE approach enabled the rapid generation of a process model.

  9. Reinforcement learning or active inference?

    PubMed

    Friston, Karl J; Daunizeau, Jean; Kiebel, Stefan J

    2009-07-29

    This paper questions the need for reinforcement learning or control theory when optimising behaviour. We show that it is fairly simple to teach an agent complicated and adaptive behaviours using a free-energy formulation of perception. In this formulation, agents adjust their internal states and sampling of the environment to minimise their free energy. Such agents learn the causal structure of the environment and sample it in an adaptive and self-supervised fashion. This results in behavioural policies that reproduce those optimised by reinforcement learning and dynamic programming. Critically, we do not need to invoke the notion of reward, value or utility. We illustrate these points by solving a benchmark problem in dynamic programming, namely the mountain-car problem, using active perception or inference under the free-energy principle. The ensuing proof of concept may be important because the free-energy formulation furnishes a unified account of both action and perception and may speak to a reappraisal of the role of dopamine in the brain.

  10. Visual grading characteristics and ordinal regression analysis during optimisation of CT head examinations.

    PubMed

    Zarb, Francis; McEntee, Mark F; Rainford, Louise

    2015-06-01

    To evaluate visual grading characteristics (VGC) and ordinal regression analysis during head CT optimisation as a potential alternative to visual grading assessment (VGA), traditionally employed to score anatomical visualisation. Patient images (n = 66) were obtained using current and optimised imaging protocols from two CT suites: a 16-slice scanner at the national Maltese centre for trauma and a 64-slice scanner in a private centre. Local resident radiologists (n = 6) performed VGA followed by VGC and ordinal regression analysis. VGC alone indicated that optimised protocols had image quality similar to that of current protocols. Ordinal logistic regression analysis provided an in-depth, criterion-by-criterion evaluation, allowing the selective implementation of the protocols. The local radiology review panel supported the implementation of optimised protocols for brain CT examinations (including trauma) in one centre, achieving radiation dose reductions ranging from 24 % to 36 %. In the second centre a 29 % reduction in radiation dose was achieved for follow-up cases. The combined use of VGC and ordinal logistic regression analysis led to clinical decisions being taken on the implementation of the optimised protocols. This improved method of image quality analysis provided the evidence to support imaging protocol optimisation, resulting in significant radiation dose savings. • There is a need for scientifically based image quality evaluation during CT optimisation. • VGC and ordinal regression analysis in combination led to better informed clinical decisions. • VGC and ordinal regression analysis led to dose reductions without compromising diagnostic efficacy.

  11. A management and optimisation model for water supply planning in water deficit areas

    NASA Astrophysics Data System (ADS)

    Molinos-Senante, María; Hernández-Sancho, Francesc; Mocholí-Arce, Manuel; Sala-Garrido, Ramón

    2014-07-01

    The integrated water resources management approach has proven to be a suitable option for efficient, equitable and sustainable water management. In water-poor regions experiencing acute and/or chronic shortages, optimisation techniques are a useful tool for supporting the decision process of water allocation. In order to maximise the value of water use, an optimisation model was developed which involves multiple supply sources (conventional and non-conventional) and multiple users. Penalties, representing monetary losses in the event of an unfulfilled water demand, have been incorporated into the objective function. This model represents a novel approach which considers water distribution efficiency and the physical connections between water supply and demand points. Subsequent empirical testing using data from a Spanish Mediterranean river basin demonstrated the usefulness of the global optimisation model to solve existing water imbalances at the river basin level.
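
    The paper's optimisation model is not reproduced in the abstract; as a hypothetical illustration of its core idea (maximise the value of water use, with penalties for unfulfilled demand, over a fixed set of feasible supply-demand connections), here is a minimal greedy sketch. All data structures and the greedy rule are assumptions for the example, not the authors' formulation.

```python
def allocate_water(supplies, demands, value, penalty, links):
    """Greedy sketch of penalty-based water allocation.
    supplies: {source: available volume}
    demands:  {user: required volume}
    value:    {(source, user): net benefit per unit delivered}
    penalty:  {user: monetary loss per unit of unmet demand}
    links:    feasible (source, user) connections
    Returns (allocation dict, objective value)."""
    supplies = dict(supplies)
    remaining = dict(demands)
    alloc = {}
    # Serve the highest-benefit feasible connections first.
    for (s, u) in sorted(links, key=lambda k: value[k], reverse=True):
        q = min(supplies[s], remaining[u])
        if q > 0:
            alloc[(s, u)] = alloc.get((s, u), 0) + q
            supplies[s] -= q
            remaining[u] -= q
    # Benefit of delivered water minus penalties for unmet demand.
    objective = sum(value[k] * q for k, q in alloc.items())
    objective -= sum(penalty[u] * r for u, r in remaining.items())
    return alloc, objective
```

    A production model of this kind would be solved as a linear programme rather than greedily, but the objective structure (delivery value minus shortage penalties) is the same.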

  12. Comparison of the phenolic composition of fruit juices by single step gradient HPLC analysis of multiple components versus multiple chromatographic runs optimised for individual families.

    PubMed

    Bremner, P D; Blacklock, C J; Paganga, G; Mullen, W; Rice-Evans, C A; Crozier, A

    2000-06-01

    After minimal sample preparation, two different HPLC methodologies, one based on a single gradient reversed-phase HPLC step, the other on multiple HPLC runs each optimised for specific components, were used to investigate the composition of flavonoids and phenolic acids in apple and tomato juices. The principal components in apple juice were identified as chlorogenic acid, phloridzin, caffeic acid and p-coumaric acid. Tomato juice was found to contain chlorogenic acid, caffeic acid, p-coumaric acid, naringenin and rutin. The quantitative estimates of the levels of these compounds, obtained with the two HPLC procedures, were very similar, demonstrating that either method can be used to analyse accurately the phenolic components of apple and tomato juices. Chlorogenic acid in tomato juice was the only component not fully resolved in the single run study and the multiple run analysis prior to enzyme treatment. The single run system of analysis is recommended for the initial investigation of plant phenolics and the multiple run approach for analyses where chromatographic resolution requires improvement.

  13. Equilibrium I: Principles. Independent Learning Project for Advanced Chemistry (ILPAC). Unit P2.

    ERIC Educational Resources Information Center

    Inner London Education Authority (England).

    This unit on the principles of equilibrium is one of 10 first year units produced by the Independent Learning Project for Advanced Chemistry (ILPAC). The unit consists of two levels. After a treatment of non-mathematical aspects in level one (the idea of a reversible reaction, characteristics of an equilibrium state, the Le Chatelier's principle),…

  14. Optimisation and establishment of diagnostic reference levels in paediatric plain radiography

    NASA Astrophysics Data System (ADS)

    Paulo, Graciano do Nascimento Nobre

    Purpose: This study aimed to propose Diagnostic Reference Levels (DRLs) in paediatric plain radiography and to optimise the most frequent paediatric plain radiography examinations in Portugal following an analysis and evaluation of current practice. Methods and materials: Anthropometric data (weight, patient height and thickness of the irradiated anatomy) was collected from 9,935 patients referred for a radiography procedure to one of the three dedicated paediatric hospitals in Portugal. National DRLs were calculated for the three most frequent X-ray procedures at the three hospitals: chest AP/PA projection; abdomen AP projection; pelvis AP projection. Exposure factors and patient dose were collected prospectively at the clinical sites. In order to analyse the relationship between exposure factors, the use of technical features and dose, experimental tests were made using two anthropomorphic phantoms: a) CIRS™ ATOM model 705 (height: 110 cm, weight: 19 kg) and b) Kyoto Kagaku™ model PBU-60 (height: 165 cm, weight: 50 kg). After phantom data collection, an objective image analysis was performed by analysing the variation of the mean value of the standard deviation, measured with OsiriX software (Pixmeo, Switzerland). After proposing new exposure criteria, a Visual Grading Characteristic image quality evaluation was performed blindly by four paediatric radiologists, each with a minimum of 10 years of professional experience, using anatomical criteria scoring. Results: DRLs by patient weight groups have been established for the first time. ESAK P75 DRLs for both patient age and weight groups were also obtained and are described in the thesis.
Significant dose reduction was achieved through the implementation of an optimisation programme: an average reduction of 41% and 18% on KAP P75 and ESAK P75, respectively, for chest plain radiography; an average reduction of 58% and 53% on KAP P75 and ESAK P75, respectively, for abdomen plain radiography; and an average reduction of 47% and 48% on KAP P75 and ESAK P75, respectively, for pelvis plain radiography. Conclusion: Portuguese DRLs were obtained for paediatric plain radiography (chest AP/PA, abdomen and pelvis). Experimental phantom tests identified adequate plain radiography exposure criteria, validated by objective and subjective image quality analysis. The new exposure criteria were put into practice in one of the paediatric hospitals, by introducing an optimisation programme. The implementation of the optimisation programme allowed a significant dose reduction to paediatric patients, without compromising image quality. (Abstract shortened by ProQuest.).

  15. Experimental design for TBT quantification by isotope dilution SPE-GC-ICP-MS under the European water framework directive.

    PubMed

    Alasonati, Enrica; Fabbri, Barbara; Fettig, Ina; Yardin, Catherine; Del Castillo Busto, Maria Estela; Richter, Janine; Philipp, Rosemarie; Fisicaro, Paola

    2015-03-01

    In Europe the maximum allowable concentration for tributyltin (TBT) compounds in surface water has been regulated by the water framework directive (WFD) and daughter directive, which impose a limit of 0.2 ng L⁻¹ in whole water (as tributyltin cation). Despite the large number of different methodologies for the quantification of organotin species developed in the last two decades, standardised analytical methods at the required concentration level do not exist. TBT quantification at picogram level requires efficient and accurate sample preparation and preconcentration, and maximum care to avoid blank contamination. To meet the WFD requirement, a method for the quantification of TBT in mineral water at environmental quality standard (EQS) level, based on solid phase extraction (SPE), was developed and optimised. The quantification was done using species-specific isotope dilution (SSID) followed by gas chromatography (GC) coupled to inductively coupled plasma mass spectrometry (ICP-MS). The analytical process was optimised using a design of experiment (DOE) based on a factorial fractionary plan. The DOE allowed the evaluation of 3 qualitative factors (type of stationary phase and eluent, phase mass and eluent volume, pH and analyte ethylation procedure), for a total of 13 levels studied, and a sample volume in the range of 250-1000 mL. Four different models fitting the results were defined and evaluated with statistical tools: one of them was selected and optimised to find the best procedural conditions. C18 was found to be the best stationary phase for SPE experiments. The 4 solvents tested with C18, the pH and ethylation conditions, the mass of the phases, the volume of the eluents and the sample volume can all be optimal, depending on their respective combination.
For that reason, the equation of the model conceived in this work is a useful decisional tool for the planning of experiments, because it can be applied to predict the TBT mass fraction recovery for a given set of experimental conditions. This work shows that SPE is a convenient technique for TBT pre-concentration at pico-trace levels and a robust approach: in fact, (i) a number of different experimental conditions led to satisfactory results and (ii) the participation of two institutes in the experimental work did not impact the developed model. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. Multi-objective ACO algorithms to minimise the makespan and the total rejection cost on BPMs with arbitrary job weights

    NASA Astrophysics Data System (ADS)

    Jia, Zhao-hong; Pei, Ming-li; Leung, Joseph Y.-T.

    2017-12-01

    In this paper, we investigate the batch-scheduling problem with rejection on parallel machines with non-identical job sizes and arbitrary job rejection weights. If a job is rejected, the corresponding penalty has to be paid. Our objective is to minimise the makespan of the processed jobs and the total rejection cost of the rejected jobs. Based on the selected multi-objective optimisation approaches, two problems, P1 and P2, are considered. In P1, the two objectives are linearly combined into one single objective. In P2, the two objectives are simultaneously minimised and the Pareto non-dominated solution set is to be found. Based on ant colony optimisation (ACO), two algorithms, called LACO and PACO, are proposed to address the two problems, respectively. Two different objective-oriented pheromone matrices and heuristic information are designed. Additionally, a local optimisation algorithm is adopted to improve the solution quality. Finally, simulation experiments are conducted, and the comparative results verify the effectiveness and efficiency of the proposed algorithms, especially on large-scale instances.
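
    Problem P2 asks for the Pareto non-dominated set over the pair (makespan, total rejection cost). A minimal sketch of the standard dominance test used to filter candidate schedules, assuming minimisation in both objectives (the helper names are illustrative):

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (both objectives minimised)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Return the non-dominated subset of (makespan, rejection_cost) pairs."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

    P1's scalarisation simply replaces this filter with a weighted sum of the two objectives, so a single-objective ACO can be applied directly.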

  17. Computational aero-acoustics for fan duct propagation and radiation. Current status and application to turbofan liner optimisation

    NASA Astrophysics Data System (ADS)

    Astley, R. J.; Sugimoto, R.; Mustafi, P.

    2011-08-01

    Novel techniques are presented to reduce noise from turbofan aircraft engines by optimising the acoustic treatment in engine ducts. The application of Computational Aero-Acoustics (CAA) to predict acoustic propagation and absorption in turbofan ducts is reviewed and a critical assessment of performance indicates that validated and accurate techniques are now available for realistic engine predictions. A procedure for integrating CAA methods with state-of-the-art optimisation techniques is proposed in the remainder of the article. This is achieved by embedding advanced computational methods for noise prediction within automated and semi-automated optimisation schemes. Two different strategies are described and applied to realistic nacelle geometries and fan sources to demonstrate the feasibility of this approach for industry-scale problems.

  18. Global Topology Optimisation

    DTIC Science & Technology

    2016-10-31

    statistical physics. Sec. IV includes several examples of the application of the stochastic method, including matching of a shape to a fixed design, and...an important part of any future application of this method. Second, re-initialization of the level set can lead to small but significant movements of...of engineering design problems [6, 17]. However, many of the relevant applications involve non-convex optimisation problems with multiple locally

  19. A simulation and optimisation procedure to model daily suppression resource transfers during a fire season in Colorado

    Treesearch

    Yu Wei; Erin J. Belval; Matthew P. Thompson; Dave E. Calkin; Crystal S. Stonesifer

    2016-01-01

    Sharing fire engines and crews between fire suppression dispatch zones may help improve the utilisation of fire suppression resources. Using the Resource Ordering and Status System, the Predictive Services’ Fire Potential Outlooks and the Rocky Mountain Region Preparedness Levels from 2010 to 2013, we tested a simulation and optimisation procedure to transfer crews and...

  20. Multi-Objectivising Combinatorial Optimisation Problems by Means of Elementary Landscape Decompositions.

    PubMed

    Ceberio, Josu; Calvo, Borja; Mendiburu, Alexander; Lozano, Jose A

    2018-02-15

    In the last decade, many works in combinatorial optimisation have shown that, due to the advances in multi-objective optimisation, the algorithms from this field could be used for solving single-objective problems as well. In this sense, a number of papers have proposed multi-objectivising single-objective problems in order to use multi-objective algorithms in their optimisation. In this article, we follow up this idea by presenting a methodology for multi-objectivising combinatorial optimisation problems based on elementary landscape decompositions of their objective function. Under this framework, each of the elementary landscapes obtained from the decomposition is considered as an independent objective function to optimise. In order to illustrate this general methodology, we consider four problems from different domains: the quadratic assignment problem and the linear ordering problem (permutation domain), the 0-1 unconstrained quadratic optimisation problem (binary domain), and the frequency assignment problem (integer domain). We implemented two widely known multi-objective algorithms, NSGA-II and SPEA2, and compared their performance with that of a single-objective GA. The experiments conducted on a large benchmark of instances of the four problems show that the multi-objective algorithms clearly outperform the single-objective approaches. Furthermore, a discussion on the results suggests that the multi-objective space generated by this decomposition enhances the exploration ability, thus permitting NSGA-II and SPEA2 to obtain better results in the majority of the tested instances.

  1. Discontinuous permeable adsorptive barrier design and cost analysis: a methodological approach to optimisation.

    PubMed

    Santonastaso, Giovanni Francesco; Bortone, Immacolata; Chianese, Simeone; Di Nardo, Armando; Di Natale, Michele; Erto, Alessandro; Karatza, Despina; Musmarra, Dino

    2017-09-19

    This paper presents a method to optimise a discontinuous permeable adsorptive barrier (PAB-D). The method is based on the comparison of different PAB-D configurations obtained by changing some of the main PAB-D design parameters. In particular, the well diameters, the distance between two consecutive passive wells and the distance between two consecutive well lines were varied, and a cost analysis for each configuration was carried out in order to define the best-performing and most cost-effective PAB-D configuration. As a case study, a benzene-contaminated aquifer located in an urban area in the north of Naples (Italy) was considered. The PAB-D configuration with a well diameter of 0.8 m proved to be the best layout in terms of performance and cost-effectiveness. Moreover, in order to identify the best configuration for the remediation of the aquifer studied, a comparison with a continuous permeable adsorptive barrier (PAB-C) was also carried out. In particular, this showed a 40% reduction of the total remediation costs by using the optimised PAB-D.

  2. Escalated convergent artificial bee colony

    NASA Astrophysics Data System (ADS)

    Jadon, Shimpi Singh; Bansal, Jagdish Chand; Tiwari, Ritu

    2016-03-01

    The artificial bee colony (ABC) optimisation algorithm is a recent, fast and easy-to-implement population-based metaheuristic for optimisation. ABC has proved a rival to popular swarm intelligence-based algorithms such as particle swarm optimisation, the firefly algorithm and ant colony optimisation. The solution search equation of ABC is influenced by a random quantity which helps its search process in exploration at the cost of exploitation. In order to obtain faster convergence of ABC while maintaining its exploitation capability, basic ABC is modified here in two ways. First, to improve exploitation capability, two local search strategies, namely classical unidimensional local search and Lévy flight random walk-based local search, are incorporated into ABC. Furthermore, a new solution search strategy, namely stochastic diffusion scout search, is proposed and incorporated into the scout bee phase to give an abandoned solution more chance to improve itself. The efficiency of the proposed algorithm is tested on 20 benchmark test functions of different complexities and characteristics. Results are very promising and show it to be a competitive algorithm in the field of swarm intelligence-based algorithms.
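
    The abstract does not detail the Lévy flight local search; a common construction, assumed here, draws heavy-tailed step lengths with Mantegna's algorithm and keeps only improving moves (all names, the step scale 0.01 and the try count are illustrative, not from the paper):

```python
import math
import random

def levy_step(beta=1.5):
    """Draw one Lévy-distributed step length via Mantegna's algorithm."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
             ) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def levy_local_search(solution, fitness, bounds, tries=30):
    """Greedy local search around `solution` using Lévy steps:
    keep a perturbed point only when it improves fitness (minimisation)."""
    best, best_f = list(solution), fitness(solution)
    for _ in range(tries):
        # Perturb every coordinate and clip back into the feasible box.
        cand = [min(max(x + 0.01 * levy_step(), lo), hi)
                for x, (lo, hi) in zip(best, bounds)]
        f = fitness(cand)
        if f < best_f:
            best, best_f = cand, f
    return best, best_f
```

    The heavy tail of the Lévy distribution mixes many small refining moves with occasional long jumps, which is why it is popular for escaping shallow local optima.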

  3. 3D printed fluidics with embedded analytic functionality for automated reaction optimisation

    PubMed Central

    Capel, Andrew J; Wright, Andrew; Harding, Matthew J; Weaver, George W; Li, Yuqi; Harris, Russell A; Edmondson, Steve; Goodridge, Ruth D

    2017-01-01

    Additive manufacturing or ‘3D printing’ is being developed as a novel manufacturing process for the production of bespoke micro- and milliscale fluidic devices. When coupled with online monitoring and optimisation software, this offers an advanced, customised method for performing automated chemical synthesis. This paper reports the use of two additive manufacturing processes, stereolithography and selective laser melting, to create multifunctional fluidic devices with embedded reaction monitoring capability. The selectively laser melted parts are the first published examples of multifunctional 3D printed metal fluidic devices. These devices allow high temperature and pressure chemistry to be performed in solvent systems destructive to the majority of devices manufactured via stereolithography, polymer jetting and fused deposition modelling processes previously utilised for this application. These devices were integrated with commercially available flow chemistry, chromatographic and spectroscopic analysis equipment, allowing automated online and inline optimisation of the reaction medium. This set-up allowed the optimisation of two reactions, a ketone functional group interconversion and a fused polycyclic heterocycle formation, via spectroscopic and chromatographic analysis. PMID:28228852

  4. Synthesis of concentric circular antenna arrays using dragonfly algorithm

    NASA Astrophysics Data System (ADS)

    Babayigit, B.

    2018-05-01

    Due to the strong non-linear relationship between the array factor and the array elements, the concentric circular antenna array (CCAA) synthesis problem is challenging. Nature-inspired optimisation techniques have been playing an important role in solving array synthesis problems. The dragonfly algorithm (DA) is a novel nature-inspired optimisation technique based on the static and dynamic swarming behaviours of dragonflies in nature. This paper presents the design of CCAAs with low sidelobes using DA. The effectiveness of the proposed DA is investigated in two design cases (with and without a centre element) for two three-ring CCAAs (with 4-, 6-, 8-element and 8-, 10-, 12-element rings). The radiation pattern of each design case is obtained by finding the optimal excitation weights of the array elements using DA. Simulation results show that the proposed algorithm outperforms other state-of-the-art techniques (symbiotic organisms search, biogeography-based optimisation, sequential quadratic programming, opposition-based gravitational search algorithm, cat swarm optimisation, firefly algorithm, evolutionary programming) for all design cases. DA can be a promising technique for electromagnetic problems.
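
    The DA itself cannot be reconstructed from the abstract, but the objective it optimises is the CCAA array factor. A sketch of that evaluation under the usual textbook assumptions (isotropic elements, uniform angular spacing within each ring, ring of radius zero acting as a centre element); the function signature is illustrative:

```python
import cmath
import math

def ccaa_array_factor(theta, phi, rings, wavelength=1.0):
    """|Array factor| of a concentric circular antenna array.
    rings: list of (radius, [excitation weights]); element n of a ring
    with N elements sits at azimuth 2*pi*n/N.  A ring of radius 0
    contributes its weights directly (centre element)."""
    k = 2 * math.pi / wavelength  # free-space wavenumber
    af = 0j
    for radius, weights in rings:
        n_el = len(weights)
        for n, w in enumerate(weights):
            phi_n = 2 * math.pi * n / n_el
            af += w * cmath.exp(1j * k * radius * math.sin(theta)
                                * math.cos(phi - phi_n))
    return abs(af)
```

    An optimiser such as DA would adjust the excitation weights to minimise the peak of this quantity over the sidelobe region while holding the main-beam value fixed.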

  5. Comparison of the genetic algorithm and incremental optimisation routines for a Bayesian inverse modelling based network design

    NASA Astrophysics Data System (ADS)

    Nickless, A.; Rayner, P. J.; Erni, B.; Scholes, R. J.

    2018-05-01

    The design of an optimal network of atmospheric monitoring stations for the observation of carbon dioxide (CO2) concentrations can be obtained by applying an optimisation algorithm to a cost function based on minimising the posterior uncertainty in the CO2 fluxes obtained from a Bayesian inverse modelling solution. Two candidate optimisation methods were assessed: an evolutionary algorithm, the genetic algorithm (GA), and a deterministic algorithm, the incremental optimisation (IO) routine. This paper assessed the ability of the IO routine, in comparison with the more computationally demanding GA routine, to optimise the placement of a five-member network of CO2 monitoring sites located in South Africa. The comparison considered the reduction in uncertainty of the overall flux estimate, the spatial similarity of solutions, and computational requirements. Although the IO routine failed to find the solution with the global maximum uncertainty reduction, the resulting solution had only fractionally lower uncertainty reduction compared with the GA, at only a quarter of the computational resources used by the smallest specified GA run. The GA solution set showed more inconsistency if the number of iterations or population size was small, and more so for a complex prior flux covariance matrix. If the GA completed with a sub-optimal solution, these solutions were similar in fitness to the best available solution. Two additional scenarios were considered, with the objective of creating circumstances where the GA may outperform the IO. The first scenario considered an established network, where the optimisation was required to add an additional five stations to an existing five-member network. In the second scenario the optimisation was based only on the uncertainty reduction within a subregion of the domain. The GA was able to find a better solution than the IO under both scenarios, but with only a marginal improvement in the uncertainty reduction.
These results suggest that, for the network design problem, resources would be better spent on improving the prior estimates of the flux uncertainties than on running a complex evolutionary optimisation algorithm. The authors recommend that, if time and computational resources allow, multiple optimisation techniques be used as part of a comprehensive suite of sensitivity tests when performing such an optimisation exercise. This provides a selection of best solutions that can be ranked by their utility and practicality.
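The incremental optimisation routine described above is, at heart, a greedy forward selection. A minimal sketch on a toy problem, where made-up station scores and overlap penalties stand in for the Bayesian posterior-uncertainty calculation (all names and numbers here are hypothetical):

```python
from itertools import combinations

CANDIDATES = ["A", "B", "C", "D", "E"]

# Hypothetical per-station scores, discounted when two stations "overlap".
BASE = {"A": 5.0, "B": 4.8, "C": 4.7, "D": 3.0, "E": 2.0}
OVERLAP = {frozenset("AB"): 3.0, frozenset("AC"): 3.0}

def uncertainty_reduction(network):
    """Toy objective: sum of base scores minus pairwise overlap penalties."""
    score = sum(BASE[s] for s in network)
    for pair, penalty in OVERLAP.items():
        if pair <= set(network):
            score -= penalty
    return score

def incremental_optimise(k):
    """IO-style greedy: repeatedly add the station with best marginal gain."""
    network = []
    for _ in range(k):
        best = max((s for s in CANDIDATES if s not in network),
                   key=lambda s: uncertainty_reduction(network + [s]))
        network.append(best)
    return network

def exhaustive_optimise(k):
    """Global optimum by brute force, playing the role of the GA's target."""
    return max(combinations(CANDIDATES, k), key=uncertainty_reduction)

greedy = incremental_optimise(2)
best = exhaustive_optimise(2)
```

On this toy instance the greedy routine settles for a good-but-not-optimal network, mirroring the abstract's finding that the cheap IO routine can miss the global maximum while still returning a competitive solution.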

  6. Ant colony optimisation-direct cover: a hybrid ant colony direct cover technique for multi-level synthesis of multiple-valued logic functions

    NASA Astrophysics Data System (ADS)

    Abd-El-Barr, Mostafa

    2010-12-01

The use of non-binary (multiple-valued) logic in the synthesis of digital systems can lead to savings in chip area. Advances in very large scale integration (VLSI) technology have enabled the successful implementation of multiple-valued logic (MVL) circuits. A number of heuristic algorithms for the synthesis of (near-)minimal sum-of-products (two-level) realisations of MVL functions have been reported in the literature. The direct cover (DC) technique is one such algorithm. The ant colony optimisation (ACO) algorithm is a meta-heuristic that uses constructive greediness to explore a large solution space in search of (near-)optimal solutions. The ACO algorithm mimics the behaviour of real ants, which converge on the shortest path to a food source. We have previously introduced an ACO-based heuristic for the synthesis of two-level MVL functions. In this article, we introduce the ACO-DC hybrid technique for the synthesis of multi-level MVL functions. The basic idea is to use an ant to decompose a given MVL function into a number of levels and then synthesise each sub-function using a DC-based technique. The results obtained using the proposed approach are compared with those obtained using existing techniques reported in the literature, on a benchmark set of 50,000 randomly generated 2-variable 4-valued functions. The proposed ACO-DC technique is shown to produce efficient realisations in terms of the average number of gates (a measure of chip area) needed to synthesise a given MVL function.
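The construct/deposit/evaporate cycle at the core of ACO can be illustrated with a generic skeleton on a toy staged-path problem (this is a minimal textbook ACO, not the paper's ACO-DC heuristic; the costs and parameters are invented):

```python
import random

# Toy problem: at each of 3 stages, pick one of two branches with these costs.
COSTS = [(1.0, 2.0), (2.0, 1.0), (1.0, 3.0)]  # (cost of option 0, option 1)

def aco(n_ants=20, n_iter=50, rho=0.1, seed=0):
    """Minimal ant colony optimisation over the staged-path problem."""
    rng = random.Random(seed)
    tau = [[1.0, 1.0] for _ in COSTS]          # pheromone per (stage, option)
    best_path, best_cost = None, float("inf")
    for _ in range(n_iter):
        for _ant in range(n_ants):
            # Construct a solution: pick options proportionally to pheromone.
            path = []
            for t in tau:
                p0 = t[0] / (t[0] + t[1])
                path.append(0 if rng.random() < p0 else 1)
            cost = sum(COSTS[s][o] for s, o in enumerate(path))
            if cost < best_cost:
                best_path, best_cost = path, cost
            # Deposit pheromone inversely proportional to tour cost.
            for s, o in enumerate(path):
                tau[s][o] += 1.0 / cost
        # Evaporate pheromone so old trails fade.
        tau = [[(1 - rho) * t for t in pair] for pair in tau]
    return best_path, best_cost

path, cost = aco()
```

The pheromone update biases later ants toward the cheap branches, so the colony converges on the minimum-cost path (option 0, then 1, then 0).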

  7. A Tale of Two Community Networks Program Centers: Operationalizing and Assessing CBPR Principles and Evaluating Partnership Outcomes.

    PubMed

    Arroyo-Johnson, Cassandra; Allen, Michele L; Colditz, Graham A; Hurtado, G Ali; Davey, Cynthia S; Sanders Thompson, Vetta L; Drake, Bettina F; Svetaz, Maria Veronica; Rosas-Lee, Maira; Goodman, Melody S

    2015-01-01

Community Networks Program (CNP) centers are required to use a community-based participatory research (CBPR) approach within their specific priority communities. Not all communities are the same, and unique contextual factors and collaborators' priorities shape each CBPR partnership. There are also established CBPR and community engagement (CE) principles shown to lead to quality CBPR in any community. However, operationalizing and assessing CBPR principles and partnership outcomes, in order to understand the conditions and processes in CBPR that lead to achieving program- and project-level goals, is relatively new in the science of CBPR. We sought to describe the development of surveys on adherence to and implementation of CBPR/CE principles at two CNP centers, and to examine commonalities and differences in program- versus project-level CBPR evaluation. A case study of the development and application of the CBPR/CE principles surveys for the Missouri CNP (Program for the Elimination of Cancer Disparities) and the Minnesota CNP (Padres Informados/Jovenes Preparados) was conducted to compare project- versus program-level operationalization of principles. Survey participant demographics were provided by each CNP. Specific domains found in CBPR/CE principles were identified and organized under an existing framework to establish common ground. Operational definitions and the number of survey items were provided for each domain by CNP. There are distinct differences in operational definitions of CBPR/CE principles at the program and project levels of evaluation. However, commonalities support further research to develop standards for CBPR evaluation across partnerships and at the program and project levels.

  8. Optimisation of a Generic Ionic Model of Cardiac Myocyte Electrical Activity

    PubMed Central

    Guo, Tianruo; Al Abed, Amr; Lovell, Nigel H.; Dokos, Socrates

    2013-01-01

    A generic cardiomyocyte ionic model, whose complexity lies between a simple phenomenological formulation and a biophysically detailed ionic membrane current description, is presented. The model provides a user-defined number of ionic currents, employing two-gate Hodgkin-Huxley type kinetics. Its generic nature allows accurate reconstruction of action potential waveforms recorded experimentally from a range of cardiac myocytes. Using a multiobjective optimisation approach, the generic ionic model was optimised to accurately reproduce multiple action potential waveforms recorded from central and peripheral sinoatrial nodes and right atrial and left atrial myocytes from rabbit cardiac tissue preparations, under different electrical stimulus protocols and pharmacological conditions. When fitted simultaneously to multiple datasets, the time course of several physiologically realistic ionic currents could be reconstructed. Model behaviours tend to be well identified when extra experimental information is incorporated into the optimisation. PMID:23710254

  9. The effectiveness of crisis resource management and team debriefing in resuscitation education of nursing students: A randomised controlled trial.

    PubMed

    Coppens, Imgard; Verhaeghe, Sofie; Van Hecke, Ann; Beeckman, Dimitri

    2018-01-01

The aim of this study was to investigate (i) whether integrating a course on crisis resource management principles and team debriefings into simulation training increases the self-efficacy, team efficacy and technical skills of nursing students in resuscitation settings, and (ii) which phases contribute most to these outcomes. Crisis resource management principles have been introduced in health care to optimise teamwork. Simulation training offers patient-safe training opportunities. There is evidence that simulation training increases self-efficacy and team efficacy, but the contribution of the different phases (crisis resource management principles, simulation training and debriefing) to self-efficacy, team efficacy and technical skills is not clear. A randomised controlled trial was conducted in a convenience sample (n = 116) in Belgium. Data were collected between February 2015 and April 2015. Participants in the intervention group (n = 60) completed a course on crisis resource management principles, followed by a simulation training session, a team debriefing and a second simulation training session. Participants in the control group (n = 56) completed only two simulation training sessions. The outcomes self-efficacy, team efficacy and technical skills were assessed after each simulation training session. An ancillary analysis of the learning effect was conducted. The intervention group increased significantly on self-efficacy (2.13%, p = .02) and team efficacy (9.92%, p < .001); the control group increased significantly only on team efficacy (4.5%, p = .001). The intervention group scored significantly higher on team efficacy (8.49%, p < .001) than the control group. Combining crisis resource management principles and team debriefings in simulation training increases self-efficacy and team efficacy, with the debriefing phase contributing most to these effects.
By partnering with healthcare settings, it becomes possible to offer interdisciplinary simulation training that can increase patient safety. © 2017 John Wiley & Sons Ltd.

  10. Free energy, precision and learning: the role of cholinergic neuromodulation

    PubMed Central

    Moran, Rosalyn J.; Campo, Pablo; Symmonds, Mkael; Stephan, Klaas E.; Dolan, Raymond J.; Friston, Karl J.

    2014-01-01

    Acetylcholine (ACh) is a neuromodulatory transmitter implicated in perception and learning under uncertainty. This study combined computational simulations and pharmaco-electroencephalography in humans, to test a formulation of perceptual inference based upon the free energy principle. This formulation suggests that acetylcholine enhances the precision of bottom-up synaptic transmission in cortical hierarchies by optimising the gain of supragranular pyramidal cells. Simulations of a mismatch negativity paradigm predicted a rapid trial-by-trial suppression of evoked sensory prediction error (PE) responses that is attenuated by cholinergic neuromodulation. We confirmed this prediction empirically with a placebo-controlled study of cholinesterase inhibition. Furthermore – using dynamic causal modelling – we found that drug-induced differences in PE responses could be explained by gain modulation in supragranular pyramidal cells in primary sensory cortex. This suggests that acetylcholine adaptively enhances sensory precision by boosting bottom-up signalling when stimuli are predictable, enabling the brain to respond optimally under different levels of environmental uncertainty. PMID:23658161

  11. Design and Optimisation of Electrostatic Precipitator for Diesel Exhaust

    NASA Astrophysics Data System (ADS)

    Srinivaas, A.; Sathian, Samanyu; Ramesh, Arjun

    2018-02-01

The principle of an industrially proven emission reduction technique, electrostatic precipitation, is applied to automotive diesel exhaust to reduce diesel particulate emissions. As emission regulations become more stringent, new legislation has been formulated in response to the hazardous deterioration of the air quality index in major cities. First, the principle and operation of electrostatic precipitation were investigated. The high-voltage supply required by an electrostatic precipitator (ESP) was obtained by designing an appropriate circuit in MATLAB-Simulink. The mechanical structure of the new after-treatment device was then designed for the specific diesel exhaust. Fluid flow analysis of the ESP model was carried out in ANSYS CFX to optimise the flow with reduced back pressure, and the design was revised in accordance with the results of this analysis. Finally, a new design was developed that incorporates diesel particulate filter and catalytic converter considerations into the ESP model.

  12. Ecotechnology: basis of a new immission concept in water pollution control.

    PubMed

    Benndorf, J

    2005-01-01

Beyond traditional load reduction, ecosystem-internal mechanisms can also be used to minimise the effects of water pollution. These internal mechanisms are controlled by optimising the ecosystem structure. The ecotechnology principle is based on the idea of reducing, through intentional manipulations, the gap between the current (suboptimal) structural status and the optimum structure as far as possible. The spectrum of such manipulations is very broad; a few examples are demonstrated, comprising physical (e.g. stream morphology), chemical (e.g. enhancing the redox potential at the sediment-water interface) and biological (e.g. enhancing stocks of predatory fish) control measures. It can be supposed that a new immission concept incorporating the ecotechnology principle would be far better suited to the demands of modern water pollution control than the traditional emission and immission concepts.

  13. Diet and diabetes.

    PubMed

    Deed, Gary; Barlow, John; Kawol, Dev; Kilov, Gary; Sharma, Anita; Hwa, Liew Yu

    2015-05-01

    Guidelines for the prevention and management of type 2 diabetes mellitus (T2DM) reinforce lifestyle management, yet advice to guide general practitioners on principles around dietary choices is needed. This article provides current evidence regarding the differing diets in diabetes prevention and management once T2DM arises, including the role in management of complications such as hypoglycaemia. Diets should incorporate weight maintenance or loss, while complementing changes in physical activity to optimise the metabolic effects of dietary advice. Using a structured, team-care approach supports pragmatic and sustainable individualised plans, while incorporating current evidence-based dietary approaches.

  14. Cross-platform evaluation of commercial real-time SYBR green RT-PCR kits for sensitive and rapid detection of European bat Lyssavirus type 1.

    PubMed

    Picard-Meyer, Evelyne; Peytavin de Garam, Carine; Schereffer, Jean Luc; Marchal, Clotilde; Robardet, Emmanuelle; Cliquet, Florence

    2015-01-01

This study evaluates the performance of five two-step SYBR Green RT-qPCR kits and five one-step SYBR Green qRT-PCR kits using real-time PCR assays, on two real-time thermocyclers with different throughput capacities. The performance evaluation criteria included the generation of standard curves, reaction efficiency, analytical sensitivity, intra- and inter-assay repeatability, as well as the cost and practicability of the kits and the thermocycling times. We found that the optimised one-step PCR assays had a higher detection sensitivity than the optimised two-step assays regardless of the machine used, while no difference was detected in reaction efficiency, R² values, or intra- and inter-assay reproducibility between the two methods. The limit of detection at the 95% confidence level varied from 15 to 981 copies/µL for the one-step kits and from 41 to 171 copies/µL for the two-step kits. Of the ten kits tested, the most efficient was the QuantiTect SYBR Green qRT-PCR kit, with a limit of detection at the 95% confidence level of 20 and 22 copies/µL on the Rotor-Gene Q MDx and MX3005P thermocyclers, respectively. The study demonstrated the pivotal influence of the thermocycler, as well as that of the master mixes, on PCR performance for the detection of rabies RNA.

  15. Cross-Platform Evaluation of Commercial Real-Time SYBR Green RT-PCR Kits for Sensitive and Rapid Detection of European Bat Lyssavirus Type 1

    PubMed Central

    Picard-Meyer, Evelyne; Peytavin de Garam, Carine; Schereffer, Jean Luc; Marchal, Clotilde; Robardet, Emmanuelle; Cliquet, Florence

    2015-01-01

This study evaluates the performance of five two-step SYBR Green RT-qPCR kits and five one-step SYBR Green qRT-PCR kits using real-time PCR assays, on two real-time thermocyclers with different throughput capacities. The performance evaluation criteria included the generation of standard curves, reaction efficiency, analytical sensitivity, intra- and inter-assay repeatability, as well as the cost and practicability of the kits and the thermocycling times. We found that the optimised one-step PCR assays had a higher detection sensitivity than the optimised two-step assays regardless of the machine used, while no difference was detected in reaction efficiency, R² values, or intra- and inter-assay reproducibility between the two methods. The limit of detection at the 95% confidence level varied from 15 to 981 copies/µL for the one-step kits and from 41 to 171 copies/µL for the two-step kits. Of the ten kits tested, the most efficient was the QuantiTect SYBR Green qRT-PCR kit, with a limit of detection at the 95% confidence level of 20 and 22 copies/µL on the Rotor-Gene Q MDx and MX3005P thermocyclers, respectively. The study demonstrated the pivotal influence of the thermocycler, as well as that of the master mixes, on PCR performance for the detection of rabies RNA. PMID:25785274

  16. A case study of organisational Cultural Competence in mental healthcare.

    PubMed

    Adamson, Jean; Warfa, Nasir; Bhui, Kamaldeep

    2011-09-15

Ensuring Cultural Competence (CC) in health care is a mechanism to deliver culturally appropriate care and optimise recovery. In policies that promote cultural competence, the training of mental health practitioners is a key component of a culturally competent organisation. This study examines staff perceptions of CC and the integration of CC principles in a mental healthcare organisation. The purpose is to show interactions between organisational and individual processes that help or hinder recovery-orientated services. We carried out a case study of a large mental health provider using a cultural competence needs analysis. We used structured and semi-structured questionnaires to explore the perceptions of healthcare professionals located in one of the most ethnically and culturally diverse areas of England, its capital city London. There was some evidence that clinical staff were engaged in culturally competent activities. We found a growing awareness of cultural competence amongst staff in general, and many had attended training. However, strategic plans and procedures that promote cultural competence tended not to be well communicated to all frontline staff, whilst there was little understanding at corporate level of culturally competent clinical practices. The provider organisation had commenced a targeted recruitment campaign to recruit staff from under-represented ethnic groups, and it developed collaborative working patterns with service users. There is evidence to show tentative steps towards building cultural competence in the organisation. However, further work is needed to embed cultural competence principles and practices at all levels of the organisation, for example, by introducing monitoring systems that enable organisations to benchmark their performance as a culturally capable organisation.

  17. A case study of organisational cultural competence in mental healthcare

    PubMed Central

    2011-01-01

Background Ensuring Cultural Competence (CC) in health care is a mechanism to deliver culturally appropriate care and optimise recovery. In policies that promote cultural competence, the training of mental health practitioners is a key component of a culturally competent organisation. This study examines staff perceptions of CC and the integration of CC principles in a mental healthcare organisation. The purpose is to show interactions between organisational and individual processes that help or hinder recovery-orientated services. Methods We carried out a case study of a large mental health provider using a cultural competence needs analysis. We used structured and semi-structured questionnaires to explore the perceptions of healthcare professionals located in one of the most ethnically and culturally diverse areas of England, its capital city London. Results There was some evidence that clinical staff were engaged in culturally competent activities. We found a growing awareness of cultural competence amongst staff in general, and many had attended training. However, strategic plans and procedures that promote cultural competence tended not to be well communicated to all frontline staff, whilst there was little understanding at corporate level of culturally competent clinical practices. The provider organisation had commenced a targeted recruitment campaign to recruit staff from under-represented ethnic groups, and it developed collaborative working patterns with service users. Conclusion There is evidence to show tentative steps towards building cultural competence in the organisation. However, further work is needed to embed cultural competence principles and practices at all levels of the organisation, for example, by introducing monitoring systems that enable organisations to benchmark their performance as a culturally capable organisation. PMID:21920044

  18. Navigating catastrophes: Local but not global optimisation allows for macro-economic navigation of crises

    NASA Astrophysics Data System (ADS)

    Harré, Michael S.

    2013-02-01

    Two aspects of modern economic theory have dominated the recent discussion on the state of the global economy: Crashes in financial markets and whether or not traditional notions of economic equilibrium have any validity. We have all seen the consequences of market crashes: plummeting share prices, businesses collapsing and considerable uncertainty throughout the global economy. This seems contrary to what might be expected of a system in equilibrium where growth dominates the relatively minor fluctuations in prices. Recent work from within economics as well as by physicists, psychologists and computational scientists has significantly improved our understanding of the more complex aspects of these systems. With this interdisciplinary approach in mind, a behavioural economics model of local optimisation is introduced and three general properties are proven. The first is that under very specific conditions local optimisation leads to a conventional macro-economic notion of a global equilibrium. The second is that if both global optimisation and economic growth are required then under very mild assumptions market catastrophes are an unavoidable consequence. Third, if only local optimisation and economic growth are required then there is sufficient parametric freedom for macro-economic policy makers to steer an economy around catastrophes without overtly disrupting local optimisation.

  19. Improving target coverage and organ-at-risk sparing in intensity-modulated radiotherapy for cervical oesophageal cancer using a simple optimisation method.

    PubMed

    Lu, Jia-Yang; Cheung, Michael Lok-Man; Huang, Bao-Tian; Wu, Li-Li; Xie, Wen-Jia; Chen, Zhi-Jian; Li, De-Rui; Xie, Liang-Xi

    2015-01-01

    To assess the performance of a simple optimisation method for improving target coverage and organ-at-risk (OAR) sparing in intensity-modulated radiotherapy (IMRT) for cervical oesophageal cancer. For 20 selected patients, clinically acceptable original IMRT plans (Original plans) were created, and two optimisation methods were adopted to improve the plans: 1) a base dose function (BDF)-based method, in which the treatment plans were re-optimised based on the original plans, and 2) a dose-controlling structure (DCS)-based method, in which the original plans were re-optimised by assigning additional constraints for hot and cold spots. The Original, BDF-based and DCS-based plans were compared with regard to target dose homogeneity, conformity, OAR sparing, planning time and monitor units (MUs). Dosimetric verifications were performed and delivery times were recorded for the BDF-based and DCS-based plans. The BDF-based plans provided significantly superior dose homogeneity and conformity compared with both the DCS-based and Original plans. The BDF-based method further reduced the doses delivered to the OARs by approximately 1-3%. The re-optimisation time was reduced by approximately 28%, but the MUs and delivery time were slightly increased. All verification tests were passed and no significant differences were found. The BDF-based method for the optimisation of IMRT for cervical oesophageal cancer can achieve significantly better dose distributions with better planning efficiency at the expense of slightly more MUs.

  20. Biometrical evaluation of the performance of the revised OECD Test Guideline 402 for assessing acute dermal toxicity.

    PubMed

    Mielke, H; Strickland, J; Jacobs, M N; Mehta, J M

    2017-10-01

A comprehensive biometrical assessment was conducted to compare the performance of multiple test designs for acute dermal systemic toxicity, to support the animal welfare update to the original OECD Test Guideline (TG) 402 for acute dermal toxicity. The test designs evaluated included: (1) two, three, or five animals per dose group; (2) evident toxicity or lethality endpoints; and (3) the absence or presence of a one-animal sighting study. The revision of TG 402 respected the 3R principles (replace, reduce, refine) of animal testing. The results demonstrate that the TG 402 test design can be optimised with reduced animal numbers per test group: a scenario of two animals per group following a sighting study at a starting dose of 200 mg/kg bw (unless further information is available to better define the starting dose) would provide a classification that is in most cases conservative, without compromising either the statistical ability of the study to assess dermal toxicity or the relevant classification outcome. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Paediatric CT protocol optimisation: a design of experiments to support the modelling and optimisation process.

    PubMed

    Rani, K; Jahnen, A; Noel, A; Wolf, D

    2015-07-01

In the last decade, several studies have emphasised the need to understand and optimise computed tomography (CT) procedures in order to reduce the radiation dose applied to paediatric patients. To evaluate the influence of the technical parameters on radiation dose and image quality, a statistical model has been developed using the design of experiments (DOE) method, which has been used successfully in various fields (industry, biology and finance) and is applied here to CT procedures for the abdomen of paediatric patients. A Box-Behnken DOE was used in this study. Three mathematical models (contrast-to-noise ratio, noise and CTDIvol) depending on three factors (tube current, tube voltage and level of iterative reconstruction) were developed and validated. They will serve as a basis for the development of a CT protocol optimisation model. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
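A Box-Behnken design of the kind used above places runs at the midpoints of the factor-space edges plus replicated centre points. A sketch of the standard construction in coded (-1, 0, +1) units, where the three factors would correspond to tube current, tube voltage and iterative-reconstruction level as named in the abstract (the factor mapping is the abstract's; the construction is textbook):

```python
from itertools import combinations, product

def box_behnken(k, n_center=3):
    """Box-Behnken design in coded units (-1, 0, +1) for k factors.

    For each pair of factors, run a full +/-1 factorial with all other
    factors held at their centre level, then append centre-point runs.
    """
    runs = []
    for i, j in combinations(range(k), 2):
        for a, b in product((-1, 1), repeat=2):
            run = [0] * k
            run[i], run[j] = a, b
            runs.append(run)
    runs.extend([[0] * k for _ in range(n_center)])
    return runs

# 3 factors -> 12 edge-midpoint runs + 3 centre runs = 15 runs.
design = box_behnken(3)
```

The 15-run size for three factors is what makes Box-Behnken attractive for dose studies: it supports a full quadratic response-surface model while never combining extreme levels of two factors at once.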

  2. Losses of nutrients and anti-nutrients in red and white sorghum cultivars after decorticating in optimised conditions.

    PubMed

    Galán, María Gimena; Llopart, Emilce Elina; Drago, Silvina Rosa

    2018-05-01

    The aims were to optimise pearling process of red and white sorghum by assessing the effects of pearling time and grain moisture on endosperm yield and flour ash content and to assess nutrient and anti-nutrient losses produced by pearling different cultivars in optimised conditions. Both variables significantly affected both responses. Losses of ashes (58%), proteins (9.5%), lipids (54.5%), Na (37%), Mg (48.5%) and phenolic compounds (43%) were similar among red and white hybrids. However, losses of P (30% vs. 51%), phytic acid (47% vs. 66%), Fe (22% vs. 55%), Zn (32% vs. 62%), Ca (60% vs. 66%), K (46% vs. 61%) and Cu (51% vs. 71%) were lower for red than white sorghum due to different degree of extraction and distribution of components in the grain. Optimised pearling conditions were extrapolated to other hybrids, indicating these criteria could be applied at industrial level to obtain refined flours with proper quality and good endosperm yields.

  3. Optimisation of synergistic biomass-degrading enzyme systems for efficient rice straw hydrolysis using an experimental mixture design.

    PubMed

    Suwannarangsee, Surisa; Bunterngsook, Benjarat; Arnthong, Jantima; Paemanee, Atchara; Thamchaipenet, Arinthip; Eurwilaichitr, Lily; Laosiripojana, Navadol; Champreda, Verawat

    2012-09-01

A synergistic enzyme system for the hydrolysis of alkali-pretreated rice straw was optimised based on the synergy of crude fungal enzyme extracts with a commercial cellulase (Celluclast™). Among 13 enzyme extracts, the preparation from Aspergillus aculeatus BCC 199 exhibited the highest level of synergy with Celluclast™, owing to the complementary cellulolytic and hemicellulolytic activities of the BCC 199 enzyme extract. A mixture design was then used to optimise a ternary enzyme complex comprising the synergistic enzyme mixture and Bacillus subtilis expansin. Using the full cubic model, the optimal formulation of the enzyme mixture was predicted to be Celluclast™:BCC 199:expansin = 41.4:37.0:21.6 (by percentage), which produced 769 mg reducing sugar/g biomass using 2.82 FPU/g of enzymes. This work demonstrated the use of a systematic approach for the design and optimisation of a synergistic mixture of fungal enzymes and expansin for lignocellulose degradation. Copyright © 2012 Elsevier Ltd. All rights reserved.
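The optimisation step in a mixture design of this kind amounts to maximising a fitted blend-response polynomial over the simplex of proportions summing to one. A sketch with a hypothetical Scheffé quadratic model and made-up coefficients (a stand-in for the paper's fitted full cubic model, just to illustrate the search):

```python
def response(x1, x2, x3):
    """Hypothetical Scheffé quadratic mixture model (invented coefficients):
    predicted reducing-sugar yield for a three-component enzyme blend."""
    return (600 * x1 + 500 * x2 + 300 * x3
            + 700 * x1 * x2 + 400 * x1 * x3 + 200 * x2 * x3)

def best_blend(step=0.01):
    """Grid search over the simplex x1 + x2 + x3 = 1, xi >= 0."""
    best_x, best_y = None, float("-inf")
    n = round(1 / step)
    for i in range(n + 1):
        for j in range(n + 1 - i):
            x1, x2 = i * step, j * step
            x3 = 1.0 - x1 - x2
            y = response(x1, x2, x3)
            if y > best_y:
                best_x, best_y = (x1, x2, x3), y
    return best_x, best_y

blend, predicted = best_blend()
```

In practice a fitted model would be maximised with a constrained optimiser rather than a grid, but the simplex constraint and the "pick the blend with the highest predicted response" logic are the same.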

  4. Vertical transportation systems embedded on shuffled frog leaping algorithm for manufacturing optimisation problems in industries.

    PubMed

    Aungkulanon, Pasura; Luangpaiboon, Pongchanun

    2016-01-01

Response surface methods via first- or second-order models are important in manufacturing processes. This study, however, proposes differently structured mechanisms of vertical transportation systems (VTS) embedded in a shuffled frog leaping-based approach. Three VTS scenarios are considered: a motion that reaches normal operating velocity, and motions that either reach or do not reach the transitional phase. These variants were used to simultaneously inspect multiple responses affected by machining parameters in multi-pass turning processes. The numerical results on two machining optimisation problems demonstrated the high performance of the proposed methods compared with other optimisation algorithms for an actual deep-cut design.
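For readers unfamiliar with the underlying meta-heuristic, a minimal shuffled frog leaping algorithm for continuous minimisation is sketched below on a toy sphere function. The population sizes, bounds and leap rules are generic textbook choices, not the paper's VTS-embedded variants:

```python
import random

def sfla(obj, dim=2, n_frogs=30, n_memeplexes=3, n_iter=100, seed=1):
    """Minimal shuffled frog leaping algorithm (SFLA) for minimisation."""
    rng = random.Random(seed)
    frogs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_frogs)]
    for _ in range(n_iter):
        frogs.sort(key=obj)                     # global ranking
        g_best = frogs[0]
        # Partition into memeplexes by interleaving the sorted list.
        memeplexes = [frogs[m::n_memeplexes] for m in range(n_memeplexes)]
        for mem in memeplexes:
            mem.sort(key=obj)
            best, worst = mem[0], mem[-1]
            # Leap the worst frog toward the memeplex best.
            step = [rng.random() * (b - w) for b, w in zip(best, worst)]
            cand = [w + s for w, s in zip(worst, step)]
            if obj(cand) >= obj(worst):
                # Failing that, leap toward the global best.
                step = [rng.random() * (b - w) for b, w in zip(g_best, worst)]
                cand = [w + s for w, s in zip(worst, step)]
            if obj(cand) >= obj(worst):
                # Still no improvement: censor with a random frog.
                cand = [rng.uniform(-5, 5) for _ in range(dim)]
            mem[-1][:] = cand                   # update in place
        # Shuffling happens implicitly via the global re-sort next iteration.
    frogs.sort(key=obj)
    return frogs[0]

sphere = lambda x: sum(v * v for v in x)
best = sfla(sphere)
```

The memetic structure (local leaps within memeplexes, periodic global shuffling) is what distinguishes SFLA from plain particle-based searches.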

  5. LFRic: Building a new Unified Model

    NASA Astrophysics Data System (ADS)

    Melvin, Thomas; Mullerworth, Steve; Ford, Rupert; Maynard, Chris; Hobson, Mike

    2017-04-01

The LFRic project, named for Lewis Fry Richardson, aims to develop a replacement for the Met Office Unified Model in order to meet the challenges that will be presented by the next generation of exascale supercomputers. The project, a collaboration between the Met Office, STFC Daresbury and the University of Manchester, builds on the earlier GungHo project to redesign the dynamical core, in partnership with NERC. The new atmospheric model aims to retain the performance of the current ENDGame dynamical core and associated subgrid physics, while enabling far greater scalability and the flexibility to accommodate future supercomputer architectures. The design of the model revolves around the principle of 'separation of concerns', whereby the natural-science aspects of the code can be developed without concern for the underlying architecture, while machine-dependent optimisations can be carried out at a high level. These principles are put into practice through an autogenerated Parallel Systems software layer (the PSy layer) produced by a domain-specific compiler called PSyclone. The prototype model includes a rewrite of the dynamical core using a mixed finite element method, in which different function spaces are used to represent the various fields. It runs in parallel with MPI and OpenMP and has been tested on over 200,000 cores. This talk presents an overview of both the natural-science and computational-science implementations of the model.

  6. Microfluidic converging/diverging channels optimised for homogeneous extensional deformation.

    PubMed

    Zografos, K; Pimenta, F; Alves, M A; Oliveira, M S N

    2016-07-01

    In this work, we optimise microfluidic converging/diverging geometries in order to produce constant strain-rates along the centreline of the flow, for performing studies under homogeneous extension. The design is examined for both two-dimensional and three-dimensional flows where the effects of aspect ratio and dimensionless contraction length are investigated. Initially, pressure driven flows of Newtonian fluids under creeping flow conditions are considered, which is a reasonable approximation in microfluidics, and the limits of the applicability of the design in terms of Reynolds numbers are investigated. The optimised geometry is then used for studying the flow of viscoelastic fluids and the practical limitations in terms of Weissenberg number are reported. Furthermore, the optimisation strategy is also applied for electro-osmotic driven flows, where the development of a plug-like velocity profile allows for a wider region of homogeneous extensional deformation in the flow field.
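The target kinematics of such a geometry can be illustrated with a one-dimensional plug-flow mass-conservation argument: a constant centreline strain-rate means the mean velocity grows linearly with distance, so conservation of flow rate forces the width to decay hyperbolically. This is only a first-pass sketch with arbitrary flow parameters; the paper's geometries come from a full numerical shape optimisation:

```python
# Plug-flow sketch: u(x) * h(x) = q (flow rate per unit depth), and a
# constant strain-rate e means u(x) = u0 + e * x, hence h(x) = q / u(x).
# The values of q, u0 and the strain-rate below are arbitrary.

def channel_width(x, q=1.0, u0=1.0, strain_rate=2.0):
    """Width h(x) so that the mean velocity grows linearly along x."""
    u = u0 + strain_rate * x
    return q / u

# Sample the contraction profile along the centreline.
xs = [i * 0.1 for i in range(11)]
widths = [channel_width(x) for x in xs]
```

Real microchannel flow has a non-uniform velocity profile across the width, which is why the published designs require numerical optimisation rather than this closed-form shape; the sketch only shows the kinematic target.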

  7. Microfluidic converging/diverging channels optimised for homogeneous extensional deformation

    PubMed Central

    Zografos, K.; Oliveira, M. S. N.

    2016-01-01

    In this work, we optimise microfluidic converging/diverging geometries in order to produce constant strain-rates along the centreline of the flow, for performing studies under homogeneous extension. The design is examined for both two-dimensional and three-dimensional flows where the effects of aspect ratio and dimensionless contraction length are investigated. Initially, pressure driven flows of Newtonian fluids under creeping flow conditions are considered, which is a reasonable approximation in microfluidics, and the limits of the applicability of the design in terms of Reynolds numbers are investigated. The optimised geometry is then used for studying the flow of viscoelastic fluids and the practical limitations in terms of Weissenberg number are reported. Furthermore, the optimisation strategy is also applied for electro-osmotic driven flows, where the development of a plug-like velocity profile allows for a wider region of homogeneous extensional deformation in the flow field. PMID:27478523

  8. A New Computational Technique for the Generation of Optimised Aircraft Trajectories

    NASA Astrophysics Data System (ADS)

    Chircop, Kenneth; Gardi, Alessandro; Zammit-Mangion, David; Sabatini, Roberto

    2017-12-01

    A new computational technique based on Pseudospectral Discretisation (PSD) and adaptive bisection ɛ-constraint methods is proposed to solve multi-objective aircraft trajectory optimisation problems formulated as nonlinear optimal control problems. This technique is applicable to a variety of next-generation avionics and Air Traffic Management (ATM) Decision Support Systems (DSS) for strategic and tactical replanning operations. These include the future Flight Management Systems (FMS) and the 4-Dimensional Trajectory (4DT) planning and intent negotiation/validation tools envisaged by SESAR and NextGen for a global implementation. In particular, after describing the PSD method, the adaptive bisection ɛ-constraint method is presented to allow an efficient solution of problems in which two or more performance indices are to be minimised simultaneously. Initial simulation case studies were performed adopting suitable aircraft dynamics models and addressing a classical vertical trajectory optimisation problem with two simultaneous objectives. Subsequently, a more advanced 4DT simulation case study is presented with a focus on representative ATM optimisation objectives in the Terminal Manoeuvring Area (TMA). The simulation results are analysed in-depth and corroborated by flight performance analysis, supporting the validity of the proposed computational techniques.
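
    The ɛ-constraint idea the record describes can be sketched on a toy bi-objective problem: minimise f1 subject to f2 ≤ ɛ, then sweep ɛ to trace points on the Pareto front. The grid-search "solver" and both objective functions below are illustrative stand-ins, not the PSD aircraft-dynamics formulation.

```python
# Toy sketch of the epsilon-constraint method for two objectives (not
# the authors' pseudospectral aircraft formulation): minimise f1 subject
# to f2 <= eps, then sweep eps to trace points on the Pareto front.

def f1(x):
    return x * x                 # first objective (toy)

def f2(x):
    return (x - 2.0) ** 2        # second, conflicting objective (toy)

def solve_eps_constrained(eps, n=2001):
    """Grid-search stand-in for the NLP solver: min f1(x) s.t. f2(x) <= eps."""
    grid = [i * 2.0 / (n - 1) for i in range(n)]
    feasible = [x for x in grid if f2(x) <= eps]
    return min(feasible, key=f1) if feasible else None

# Tightening the bound on f2 trades it off against f1.
front = []
for eps in (4.0, 2.25, 1.0, 0.25):
    x = solve_eps_constrained(eps)
    if x is not None:
        front.append((f1(x), f2(x)))
```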

  9. Optimisation of the formulation of a bubble bath by a chemometric approach market segmentation and optimisation.

    PubMed

    Marengo, Emilio; Robotti, Elisa; Gennaro, Maria Carla; Bertetto, Mariella

    2003-03-01

    The optimisation of the formulation of a commercial bubble bath was performed by chemometric analysis of Panel Test results. A first Panel Test was performed to choose the best essence among four proposed to the consumers; the best essence chosen was used in the revised commercial bubble bath. Afterwards, the effect of changing the amount of four components of the bubble bath (the primary surfactant, the essence, the hydratant and the colouring agent) was studied by a fractional factorial design. The segmentation of the bubble bath market was performed by a second Panel Test, in which the consumers were requested to evaluate the samples coming from the experimental design. The results were then treated by Principal Component Analysis. The market had two segments: people preferring a product with a rich formulation and people preferring a poor product. The final target, i.e. the optimisation of the formulation for each segment, was obtained by calculating regression models relating the subjective evaluations given by the Panel to the compositions of the samples. The regression models allowed the identification of the best formulations for the two segments of the market.

  10. A Tale of Two Community Networks Program Centers: Operationalizing and Assessing CBPR Principles and Evaluating Partnership Outcomes

    PubMed Central

    Arroyo-Johnson, Cassandra; Allen, Michele L.; Colditz, Graham A.; Hurtado, G. Ali; Davey, Cynthia S.; Sanders Thompson, Vetta L.; Drake, Bettina F.; Svetaz, Maria Veronica; Rosas-Lee, Maira; Goodman, Melody S.

    2016-01-01

    Background Community Networks Program (CNP) centers are required to use a community-based participatory research (CBPR) approach within their specific priority communities. Not all communities are the same and unique contextual factors and collaborators’ priorities shape each CBPR partnership. There are also established CBPR and community engagement (CE) principles shown to lead to quality CBPR in any community. However, operationalizing and assessing CBPR principles and partnership outcomes to understand the conditions and processes in CBPR that lead to achieving program and project level goals is relatively new in the science of CBPR. Objectives We sought to describe the development of surveys on adherence to and implementation of CBPR/CE principles at two CNP centers and examine commonalities and differences in program- versus project-level CBPR evaluation. Methods A case study about the development and application of CBPR/CE principles for the Missouri CNP, Program for the Elimination of Cancer Disparities, and Minnesota CNP, Padres Informados/Jovenes Preparados, surveys was conducted to compare project versus program operationalization of principles. Survey participant demographics were provided by CNP. Specific domains found in CBPR/CE principles were identified and organized under an existing framework to establish a common ground. Operational definitions and the number of survey items were provided for each domain by CNP. Conclusion There are distinct differences in operational definitions of CBPR/CE principles at the program and project levels of evaluation. However, commonalities support further research to develop standards for CBPR evaluation across partnerships and at the program and project levels. PMID:26213405

  11. Medication management policy, practice and research in Australian residential aged care: Current and future directions.

    PubMed

    Sluggett, Janet K; Ilomäki, Jenni; Seaman, Karla L; Corlis, Megan; Bell, J Simon

    2017-02-01

    Eight percent of Australians aged 65 years and over receive residential aged care each year. Residents are increasingly older, frailer and have complex care needs on entry to residential aged care. Up to 63% of Australian residents of aged care facilities take nine or more medications regularly. Together, these factors place residents at high risk of adverse drug events. This paper reviews medication-related policies, practices and research in Australian residential aged care. Complex processes underpin prescribing, supply and administration of medications in aged care facilities. A broad range of policies and resources are available to assist health professionals, aged care facilities and residents to optimise medication management. These include national guiding principles, a standardised national medication chart, clinical medication reviews and facility accreditation standards. Recent Australian interventions have improved medication use in residential aged care facilities. Generating evidence for prescribing and deprescribing that is specific to residential aged care, health workforce reform, medication-related quality indicators and inter-professional education in aged care are important steps toward optimising medication use in this setting. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. rPM6 parameters for phosphorous and sulphur-containing open-shell molecules

    NASA Astrophysics Data System (ADS)

    Saito, Toru; Takano, Yu

    2018-03-01

    In this article, we have introduced a reparameterisation of PM6 (rPM6) for phosphorus and sulphur to achieve a better description of open-shell species containing the two elements. Two sets of parameters have been optimised separately using our training sets. The performance of the spin-unrestricted rPM6 (UrPM6) method with the optimised parameters is evaluated against 14 radical species, each containing either a phosphorus or a sulphur atom, and compared with the original UPM6 and spin-unrestricted density functional theory (UDFT) methods. The standard UPM6 calculations fail to describe the adiabatic singlet-triplet energy gaps correctly, and may cause significant structural mismatches with UDFT-optimised geometries. Leaving aside three difficult cases, tests on 11 open-shell molecules strongly indicate the superior performance of UrPM6, which provides much better agreement with the results of UDFT methods for geometric and electronic properties.

  13. A simple method for low-contrast detectability, image quality and dose optimisation with CT iterative reconstruction algorithms and model observers.

    PubMed

    Bellesi, Luca; Wyttenbach, Rolf; Gaudino, Diego; Colleoni, Paolo; Pupillo, Francesco; Carrara, Mauro; Braghetti, Antonio; Puligheddu, Carla; Presilla, Stefano

    2017-01-01

    The aim of this work was to evaluate detection of low-contrast objects and image quality in computed tomography (CT) phantom images acquired at different tube loadings (i.e. mAs) and reconstructed with different algorithms, in order to find appropriate settings to reduce the dose to the patient without any image detriment. Images of supraslice low-contrast objects of a CT phantom were acquired using different mAs values. Images were reconstructed using filtered back projection (FBP), hybrid and iterative model-based methods. Image quality parameters were evaluated in terms of modulation transfer function, noise and uniformity using two software resources. For the definition of low-contrast detectability, studies based on both human (i.e. four-alternative forced-choice test) and model observers were performed across the various images. Compared to FBP, image quality parameters were improved by using iterative reconstruction (IR) algorithms. In particular, IR model-based methods provided a 60% noise reduction and a 70% dose reduction, preserving image quality and low-contrast detectability for human radiological evaluation. According to the model observer, the diameters of the minimum detectable detail were around 2 mm (down to 100 mAs); below 100 mAs, the model observer was unable to provide a result. IR methods improve CT protocol quality, providing a potential dose reduction while maintaining good image detectability. A model observer can in principle be useful to assist human performance in CT low-contrast detection tasks and in dose optimisation.
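
    A model observer of the general kind mentioned here can be sketched as a non-prewhitening matched filter. This choice of observer model and the synthetic data are assumptions for illustration (the record does not specify the study's observer): the template is the mean difference between signal-present and signal-absent images, and detectability d' is computed from the resulting decision statistics.

```python
import random

# Non-prewhitening (NPW) model-observer sketch on synthetic data (an
# illustrative assumption; the study's actual observer may differ).
# Template = mean signal-present image minus mean signal-absent image;
# decision statistic = dot product of the template with a test image.

random.seed(0)
N = 64                                                      # pixels per flattened image
signal = [1.0 if 24 <= i < 40 else 0.0 for i in range(N)]   # low-contrast object

def noisy(base, sigma=2.0):
    return [b + random.gauss(0.0, sigma) for b in base]

present = [noisy(signal) for _ in range(200)]
absent = [noisy([0.0] * N) for _ in range(200)]

def mean_image(imgs):
    return [sum(col) / len(imgs) for col in zip(*imgs)]

template = [p - a for p, a in zip(mean_image(present), mean_image(absent))]

def score(img):
    return sum(t * v for t, v in zip(template, img))

t_present = [score(img) for img in present]
t_absent = [score(img) for img in absent]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Detectability index d' from the two score distributions.
d_prime = ((sum(t_present) / len(t_present) - sum(t_absent) / len(t_absent))
           / ((0.5 * (variance(t_present) + variance(t_absent))) ** 0.5))
```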

  14. On the analytic and numeric optimisation of airplane trajectories under real atmospheric conditions

    NASA Astrophysics Data System (ADS)

    Gonzalo, J.; Domínguez, D.; López, D.

    2014-12-01

    From the beginning of the aviation era, economic constraints have forced operators to continuously improve the planning of flights. The revenue is proportional to the cost per flight and the airspace occupancy. Many methods, the first dating from the middle of the last century, have explored analytical, numerical and artificial-intelligence resources to reach the optimal flight plan. In parallel, advances in meteorology and communications allow almost real-time knowledge of the atmospheric conditions and a reliable, error-bounded forecast for the near future. Thus, apart from weather risks to be avoided, airplanes can dynamically adapt their trajectories to minimise their costs. International regulators are aware of these capabilities, so it is reasonable to envisage changes that would soon allow this dynamic planning negotiation to become operational. Moreover, current unmanned airplanes, very popular and often small, suffer the impact of winds and other weather conditions in the form of dramatic changes in their performance. The present paper reviews analytic and numeric solutions for typical trajectory planning problems. Analytic methods are those that solve the problem using the Pontryagin principle, where influence parameters are added to the state variables to form a split-condition differential equation problem. The system can be solved numerically (indirect optimisation) or using parameterised functions (direct optimisation). On the other hand, numerical methods are based on Bellman's dynamic programming (or Dijkstra's algorithm), which exploits the fact that two optimal trajectories can be concatenated to form a new optimal one if the joint point is demonstrated to belong to the final optimal solution. There are no a priori conditions for the best method: traditionally, analytic methods have been employed more for continuous problems and numeric methods for discrete ones. In the current problem, airplane behaviour is defined by continuous equations, while wind fields are given on a discrete grid at certain time intervals. The research demonstrates the advantages and disadvantages of each method, as well as performance figures for the solutions found for typical flight conditions under static and dynamic atmospheres. This provides significant parameters to be used in the selection of solvers for optimal trajectories.
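
    The dynamic-programming branch described above can be sketched with Dijkstra's algorithm on a small discrete wind grid: nodes are grid points, and each edge costs the crossing time at a fixed airspeed plus any tailwind in that cell. The grid, airspeed and wind values below are illustrative, not the paper's flight model.

```python
import heapq

# Schematic of the discrete (Bellman/Dijkstra) approach on a small wind
# grid: nodes are grid points, edge cost is crossing time given a fixed
# airspeed plus a per-column tailwind (all values illustrative).

AIRSPEED = 10.0
wind_u = {0: 0.0, 1: 3.0, 2: -2.0, 3: 4.0}   # eastward wind per column

def neighbours(node, nx=4, ny=3):
    x, y = node
    for dx, dy in ((1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < nx and 0 <= y + dy < ny:
            yield (x + dx, y + dy)

def edge_time(a, b):
    # Moving east gains the local tailwind; lateral moves do not.
    ground = AIRSPEED + (wind_u[a[0]] if b[0] > a[0] else 0.0)
    return 1.0 / ground

def dijkstra(start, goal):
    dist, heap = {start: 0.0}, [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue
        for nxt in neighbours(node):
            nd = d + edge_time(node, nxt)
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return float("inf")

# Minimum-time route straight east through columns 0, 1 and 2.
best = dijkstra((0, 0), (3, 0))
```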

  15. Optimising motivation and reducing burnout for radiation oncology trainees: A framework using self-determination theory.

    PubMed

    Poulsen, Michael; Poulsen, Anne A

    2018-05-02

    Radiation oncology trainees in Australia and New Zealand have relatively high levels of emotional exhaustion and depersonalisation, which are core components of burnout. The stresses of a demanding clinical load, studying for exams and family commitments are all contributing factors. Self-Determination Theory (SDT) provides a framework for optimising motivation, which may be intrinsic or extrinsic. The three core components of SDT are competence, relatedness and autonomy. These factors should be addressed at the college, institutional and personal levels if the best outcomes are to be achieved. An environment that supports the individual's experience of competency, relatedness and autonomy will foster motivation and work engagement, which in turn will improve performance, energy, resilience and creativity and reduce levels of burnout. © 2018 The Royal Australian and New Zealand College of Radiologists.

  16. Utilization of Historic Information in an Optimisation Task

    NASA Technical Reports Server (NTRS)

    Boesser, T.

    1984-01-01

    One of the basic components of a discrete model of motor behavior and decision making, which describes tracking and supervisory control in unitary terms, is assumed to be a filtering mechanism which is tied to the representational principles of human memory for time-series information. In a series of experiments subjects used the time-series information with certain significant limitations: there is a range-effect; asymmetric distributions seem to be recognized, but it does not seem to be possible to optimize performance based on skewed distributions. Thus there is a transformation of the displayed data between the perceptual system and representation in memory involving a loss of information. This rules out a number of representational principles for time-series information in memory and fits very well into the framework of a comprehensive discrete model for control of complex systems, modelling continuous control (tracking), discrete responses, supervisory behavior and learning.

  17. Proteomic Analysis of Anti-Cancerous Scopularide Production by a Marine Microascus brevicaulis Strain and Its UV Mutant.

    PubMed

    Kramer, Annemarie; Beck, Hans Christian; Kumar, Abhishek; Kristensen, Lars Peter; Imhoff, Johannes F; Labes, Antje

    2015-01-01

    The marine fungus Microascus brevicaulis strain LF580 is a non-model secondary metabolite producer with high yields of the two secondary metabolites scopularides A and B, which exhibit distinct activities against tumour cell lines. A mutant strain was obtained using UV mutagenesis, showing faster growth and differences in pellet formation besides higher production levels. Here, we show the first proteome study of a marine fungus. Comparative proteomics were applied to gain deeper understanding of the regulation of production and of the physiology of the wild type strain and its mutant. For this purpose, an optimised protein extraction protocol was established. In total, 4759 proteins were identified. The central metabolic pathway of strain LF580 was mapped using the KEGG pathway analysis and GO annotation. Employing iTRAQ labelling, 318 proteins were shown to be significantly regulated in the mutant strain: 189 were down- and 129 upregulated. Proteomics are a powerful tool for the understanding of regulatory aspects: The differences on proteome level could be attributed to limited nutrient availability in the wild type strain due to a strong pellet formation. This information can be applied for optimisation on strain and process level. The linkage between nutrient limitation and pellet formation in the non-model fungus M. brevicaulis is in consensus with the knowledge on model organisms like Aspergillus niger and Penicillium chrysogenum.

  18. Proteomic Analysis of Anti-Cancerous Scopularide Production by a Marine Microascus brevicaulis Strain and Its UV Mutant

    PubMed Central

    Kramer, Annemarie; Beck, Hans Christian; Kumar, Abhishek; Kristensen, Lars Peter; Imhoff, Johannes F.; Labes, Antje

    2015-01-01

    The marine fungus Microascus brevicaulis strain LF580 is a non-model secondary metabolite producer with high yields of the two secondary metabolites scopularides A and B, which exhibit distinct activities against tumour cell lines. A mutant strain was obtained using UV mutagenesis, showing faster growth and differences in pellet formation besides higher production levels. Here, we show the first proteome study of a marine fungus. Comparative proteomics were applied to gain deeper understanding of the regulation of production and of the physiology of the wild type strain and its mutant. For this purpose, an optimised protein extraction protocol was established. In total, 4759 proteins were identified. The central metabolic pathway of strain LF580 was mapped using the KEGG pathway analysis and GO annotation. Employing iTRAQ labelling, 318 proteins were shown to be significantly regulated in the mutant strain: 189 were down- and 129 upregulated. Proteomics are a powerful tool for the understanding of regulatory aspects: The differences on proteome level could be attributed to limited nutrient availability in the wild type strain due to a strong pellet formation. This information can be applied for optimisation on strain and process level. The linkage between nutrient limitation and pellet formation in the non-model fungus M. brevicaulis is in consensus with the knowledge on model organisms like Aspergillus niger and Penicillium chrysogenum. PMID:26460745

  19. Assessment and Use of Optical Oxygen Sensors as Tools to Assist in Optimal Product Component Selection for the Development of Packs of Ready-to-Eat Mixed Salads and for the Non-Destructive Monitoring of in-Pack Oxygen Levels Using Chilled Storage.

    PubMed

    Hempel, Andreas W; O'Sullivan, Maurice G; Papkovsky, Dmitri B; Kerry, Joseph P

    2013-05-22

    Optical oxygen sensors were used to ascertain the level of oxygen consumed by individual salad leaves for optimised packaging of ready-to-eat (RTE) Italian salad mixes during refrigerated storage. Seven commonly found leaves in Italian salad mixes were individually assessed for oxygen utilisation in packs. Each leaf showed varying levels of respiration throughout storage. Using the information obtained, an experimental salad mix was formulated (termed Mix 3) which consisted of the four slowest respiring salad leaves: Escarole, Frisee, Red Batavia and Lollo Rosso. Mix 3 was then compared against two commercially available Italian salads: Mix 1 (Escarole, Frisee, Radicchio, Lollo Rosso) and Mix 2 (Cos, Frisee, Radicchio, Lollo Rosso). Optical sensors were used to non-destructively monitor oxygen usage in all mixes throughout storage. In addition to oxygen consumption, all three salad mixes were quality assessed in terms of microbial load and sensorial acceptability. In conclusion, Mix 3 was found to consume the least amount of oxygen over time, had the lowest microbial load and was most sensorially preferred (p < 0.05) in terms of overall appearance and acceptability. This study clearly shows the potential that oxygen sensors possess in terms of assisting in the optimised development of commercial RTE salad products.

  20. Achieving quantum precision limit in adaptive qubit state tomography

    NASA Astrophysics Data System (ADS)

    Hou, Zhibo; Zhu, Huangjun; Xiang, Guo-Yong; Li, Chuan-Feng; Guo, Guang-Can

    2016-02-01

    The precision limit in quantum state tomography is of great interest not only to practical applications but also to foundational studies. However, little is known about this subject in the multiparameter setting, even theoretically, due to the subtle information trade-off among incompatible observables. In the case of a qubit, the theoretic precision limit was determined by Hayashi as well as Gill and Massar, but attaining the precision limit in experiments has remained a challenging task. Here we report the first experiment that achieves this precision limit in adaptive quantum state tomography on optical polarisation qubits. The two-step adaptive strategy used in our experiment is very easy to implement in practice, yet it is surprisingly powerful in optimising most figures of merit of practical interest. Our study may have significant implications for multiparameter quantum estimation problems, such as quantum metrology. Meanwhile, it may advance our understanding of the complementarity principle and uncertainty relations from an information-theoretic perspective.

  1. The evolution of acute burn care - retiring the split skin graft.

    PubMed

    Greenwood, J E

    2017-07-01

    The skin graft was born in 1869 and since then, surgeons have been using split skin grafts for wound repair. Nevertheless, this asset fails the big burn patient, who deserves an elastic, mobile and robust outcome but who receives the poorest possible outcome based on donor site paucity. Negating the need for the skin graft requires an autologous composite cultured skin and a material capable of temporising the burn wound for four weeks until the composite is produced. A novel, biodegradable polyurethane chemistry has been used to create two such products. This paper describes the design, production, optimisation and evaluation of several iterations of these products. The evaluation has occurred in a variety of models, both in vitro and in vivo, employing Hunterian scientific principles, and embracing Hunter's love and appreciation of comparative anatomy. The process has culminated in significant human experience in complex wounds and extensive burn injury. Used serially, the products offer robust and elastic healing in deep burns of any size within 6 weeks of injury.

  2. Knowledge for Healthcare: the future of health librarianship.

    PubMed

    Bryant, Sue Lacey; Stewart, David; Goswami, Louise

    2015-09-01

    Many people are still not receiving the right care. It is imperative for health care librarians to come together around a common vision to achieve Knowledge for Healthcare, so that the right knowledge and evidence are used at the right time in the right place. The authors describe five workstreams within a modernisation programme: Service Transformation, Workforce Planning and Development, Quality and Impact, Resource Discovery and Optimising Investment. Communications, engagement and partnership working are central to success. The development framework sets out principles on which to base decisions, and design criteria for transforming services. © 2015 Health Libraries Group.

  3. Biofabrication: an overview of the approaches used for printing of living cells.

    PubMed

    Ferris, Cameron J; Gilmore, Kerry G; Wallace, Gordon G; In het Panhuis, Marc

    2013-05-01

    The development of cell printing is vital for establishing biofabrication approaches as clinically relevant tools. Achieving this requires bio-inks which must not only be easily printable, but also allow controllable and reproducible printing of cells. This review outlines the general principles and current progress and compares the advantages and challenges for the most widely used biofabrication techniques for printing cells: extrusion, laser, microvalve, inkjet and tissue fragment printing. It is expected that significant advances in cell printing will result from synergistic combinations of these techniques and lead to optimised resolution, throughput and the overall complexity of printed constructs.

  4. On the characteristics of optimal transfers

    NASA Astrophysics Data System (ADS)

    Iorfida, Elisabetta

    Over the past 50 years, scientists have developed and analysed methods and new algorithms that optimise an interplanetary trajectory according to one or more objectives. Within this field, in 1963 Lawden derived, from Pontryagin's minimum principle, the so-called `primer vector theory'. The main goal of this thesis is to develop a theoretical understanding of Lawden's theory, gaining insight into the optimality of a trajectory when mid-course corrections need to be applied. The novelty of the research lies in a different approach to the primer vector theory, which simplifies the structure of the problem.

  5. Modelling of auctioning mechanism for solar photovoltaic capacity

    NASA Astrophysics Data System (ADS)

    Poullikkas, Andreas

    2016-10-01

    In this work, a modified optimisation model for the integration of renewable energy sources for power-generation (RES-E) technologies in power-generation systems on a unit commitment basis is developed. The purpose of the modified optimisation procedure is to account for RES-E capacity auctions at different solar photovoltaic (PV) capacity electricity prices. The optimisation model developed uses a genetic algorithm (GA) technique for the calculation of the required RES-E levy (or green tax) in the electricity bills. The procedure also enables estimation of the adequate (or eligible) feed-in tariff to be offered to future RES-E systems which do not participate in the capacity auctioning procedure. In order to demonstrate the applicability of the optimisation procedure developed, the case of PV capacity auctioning for commercial systems is examined. The results indicated that the required green tax, which is charged to electricity customers through their electricity bills in order to promote the use of RES-E technologies, is reduced with the reduction in the final auctioning price. This has a significant effect on the reduction of electricity bills.
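
    The genetic-algorithm component the record mentions can be sketched generically (selection, crossover, mutation with elitism); the one-dimensional fitness function below is a toy stand-in, not the paper's unit-commitment or levy model.

```python
import random

# Minimal genetic-algorithm skeleton of the kind the record describes.
# The fitness function is a toy stand-in (maximise -(x - 0.7)^2, i.e.
# find x = 0.7), not the paper's unit-commitment / levy calculation.

random.seed(1)

def fitness(x):
    return -(x - 0.7) ** 2

def evolve(pop_size=40, generations=60):
    pop = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = 0.5 * (a + b)               # arithmetic crossover
            child += random.gauss(0.0, 0.02)    # Gaussian mutation
            children.append(min(1.0, max(0.0, child)))
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```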

  6. Single tube genotyping of sickle cell anaemia using PCR-based SNP analysis

    PubMed Central

    Waterfall, Christy M.; Cobb, Benjamin D.

    2001-01-01

    Allele-specific amplification (ASA) is a generally applicable technique for the detection of known single nucleotide polymorphisms (SNPs), deletions, insertions and other sequence variations. Conventionally, two reactions are required to determine the zygosity of DNA in a two-allele system, along with significant upstream optimisation to define the specific test conditions. Here, we combine single tube bi-directional ASA with a ‘matrix-based’ optimisation strategy, speeding up the whole process in a reduced reaction set. We use sickle cell anaemia as our model SNP system, a genetic disease that is currently screened using ASA methods. Discriminatory conditions were rapidly optimised enabling the unambiguous identification of DNA from homozygous sickle cell patients (HbS/S), heterozygous carriers (HbA/S) or normal DNA in a single tube. Simple downstream mathematical analyses based on product yield across the optimisation set allow an insight into the important aspects of priming competition and component interactions in this competitive PCR. This strategy can be applied to any polymorphism, defining specific conditions using a multifactorial approach. The inherent simplicity and low cost of this PCR-based method validates bi-directional ASA as an effective tool in future clinical screening and pharmacogenomic research where more expensive fluorescence-based approaches may not be desirable. PMID:11726702

  7. Single tube genotyping of sickle cell anaemia using PCR-based SNP analysis.

    PubMed

    Waterfall, C M; Cobb, B D

    2001-12-01

    Allele-specific amplification (ASA) is a generally applicable technique for the detection of known single nucleotide polymorphisms (SNPs), deletions, insertions and other sequence variations. Conventionally, two reactions are required to determine the zygosity of DNA in a two-allele system, along with significant upstream optimisation to define the specific test conditions. Here, we combine single tube bi-directional ASA with a 'matrix-based' optimisation strategy, speeding up the whole process in a reduced reaction set. We use sickle cell anaemia as our model SNP system, a genetic disease that is currently screened using ASA methods. Discriminatory conditions were rapidly optimised enabling the unambiguous identification of DNA from homozygous sickle cell patients (HbS/S), heterozygous carriers (HbA/S) or normal DNA in a single tube. Simple downstream mathematical analyses based on product yield across the optimisation set allow an insight into the important aspects of priming competition and component interactions in this competitive PCR. This strategy can be applied to any polymorphism, defining specific conditions using a multifactorial approach. The inherent simplicity and low cost of this PCR-based method validates bi-directional ASA as an effective tool in future clinical screening and pharmacogenomic research where more expensive fluorescence-based approaches may not be desirable.

  8. Stakeholder involvement in agri-environmental policy making--learning from a local- and a state-level approach in Germany.

    PubMed

    Prager, Katrin; Freese, Jan

    2009-02-01

    Recent European regulations for rural development emphasise the requirement to involve stakeholder groups and other appropriate bodies in the policy-making process. This paper presents two cases involving stakeholder participation in agri-environmental development and policy making, targeted at different policy-making levels. One study was undertaken in Lower Saxony where a local partnership developed and tested an agri-environmental prescription, which was later included in the state's menu of agri-environmental schemes. In Sachsen-Anhalt, state-facilitated stakeholder workshops including a mathematical model were used to optimise the programme planning and budget allocation at the state level. Both studies aimed at improving the acceptance of agri-environmental schemes. The authors gauge the effectiveness of the two approaches and discuss what lessons can be learned. The experience suggests that the approaches can complement one another and could also be applied to rural policy making.

  9. Multi-objective thermodynamic optimisation of supercritical CO2 Brayton cycles integrated with solar central receivers

    NASA Astrophysics Data System (ADS)

    Vasquez Padilla, Ricardo; Soo Too, Yen Chean; Benito, Regano; McNaughton, Robbie; Stein, Wes

    2018-01-01

In this paper, optimisation of supercritical CO2 Brayton cycles integrated with a solar receiver, which provides heat input to the cycle, was performed. Four S-CO2 Brayton cycle configurations were analysed and optimum operating conditions were obtained using multi-objective thermodynamic optimisation. Four different sets, each comprising two objective parameters, were considered individually. Each multi-objective optimisation was performed using the Non-dominated Sorting Genetic Algorithm. The effects of reheating, solar receiver pressure drop and cycle parameters on the overall exergy and cycle thermal efficiency were analysed. The results showed that, for all configurations, the overall exergy efficiency of the solarised systems reached a maximum between 700°C and 750°C, and that the optimum value is adversely affected by the solar receiver pressure drop. In addition, the optimum cycle high pressure was in the range of 24.2-25.9 MPa, depending on the configuration and reheat condition.
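The selection step at the heart of the Non-dominated Sorting Genetic Algorithm mentioned above is Pareto dominance ranking. A minimal sketch (the efficiency pairs are illustrative toy values, not results from the paper's cycle model):

```python
def dominates(a, b):
    # a dominates b: no worse in every objective, strictly better in at least one
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    # keep the points not dominated by any other (both objectives maximised)
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# toy (exergy efficiency, thermal efficiency) pairs -- illustrative values only
candidates = [(0.60, 0.42), (0.65, 0.40), (0.55, 0.45), (0.62, 0.44), (0.50, 0.30)]
front = pareto_front(candidates)
```

The full algorithm additionally ranks successive fronts and uses a crowding-distance measure to preserve diversity; this sketch shows only how the first non-dominated front is identified.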

  10. A generic methodology for the optimisation of sewer systems using stochastic programming and self-optimizing control.

    PubMed

    Mauricio-Iglesias, Miguel; Montero-Castro, Ignacio; Mollerup, Ane L; Sin, Gürkan

    2015-05-15

The design of sewer system control is a complex task given the large size of sewer networks, the transient dynamics of the water flow and the stochastic nature of rainfall. This contribution presents a generic methodology for the design of a self-optimising controller in sewer systems. Such a controller aims to keep the system close to optimal performance through an optimal selection of controlled variables. The optimal performance was defined by a two-stage optimisation (stochastic and deterministic) to take into account both the overflow during the current rain event and the expected overflow given the probability of a future rain event. The methodology is successfully applied to design an optimising control strategy for a subcatchment area in Copenhagen. The results are promising and are expected to contribute to advances in the operation and control of sewer systems. Copyright © 2015 Elsevier Ltd. All rights reserved.
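The two-stage idea (optimise for the current event while hedging against probabilistic future rainfall) can be illustrated with a deliberately small numerical sketch; the capacity, inflows and scenario probabilities below are invented for illustration, not taken from the Copenhagen case:

```python
# First stage: decide how much basin volume to reserve for a possible future
# rain event; second stage: evaluate expected overflow over rain scenarios.
CAPACITY = 100.0             # total storage volume (arbitrary units)
current_inflow = 70.0        # volume of the ongoing rain event
future_scenarios = [(0.6, 20.0), (0.3, 60.0), (0.1, 120.0)]  # (probability, inflow)

def expected_overflow(reserve):
    # overflow of the current event, given that `reserve` is held back
    now = max(0.0, current_inflow - (CAPACITY - reserve))
    # expected overflow of the future event into the reserved volume
    later = sum(p * max(0.0, inflow - reserve) for p, inflow in future_scenarios)
    return now + later

# search for the reserve level that minimises total expected overflow
best_reserve = min((r / 10.0 for r in range(0, 1001)), key=expected_overflow)
```

With these numbers the optimum reserves exactly enough volume to absorb the current event while partially hedging the future scenarios, which is the trade-off the two-stage formulation captures.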

  11. Dual leadership in a hospital practice.

    PubMed

    Thude, Bettina Ravnborg; Thomsen, Svend Erik; Stenager, Egon; Hollnagel, Erik

    2017-02-06

Purpose Despite the practice of dual leadership in many organizations, there is relatively little research on the topic. Dual leadership means two leaders share the leadership task and are held jointly accountable for the results of the unit. To better understand how dual leadership works, this study analyses three dual leadership pairs at a Danish hospital. Furthermore, it develops a tool to distinguish dual leadership teams from one another. Design/methodology/approach This is a qualitative study using semi-structured interviews. Six leaders were interviewed to clarify how dual leadership works in a hospital context. All interviews were transcribed and coded. During coding, the focus was on the nine principles found in the literature; a further principle emerged from themes common to all six interviews. Findings Results indicate that power balance, personal relations and decision processes are important factors for creating efficient dual leaderships. The study develops a categorizing tool for further research or for organizations to describe and analyse dual leaderships. Originality/value The study describes dual leadership in the hospital context and develops a categorizing tool for distinguishing dual leadership teams from one another. It is important to reveal whether there are any indicators that can be used for optimising dual leadership teams in the health-care sector and in other organisations.

  12. Utilisation of biomass gasification by-products for onsite energy production.

    PubMed

    Vakalis, S; Sotiropoulos, A; Moustakas, K; Malamis, D; Baratieri, M

    2016-06-01

Small scale biomass gasification is a growing sector with increasing applications, owing to the environmental goals of the European Union and the incentivised policies of most European countries. This study addresses two aspects at the centre of attention concerning the operation and development of small scale gasifiers: reuse of waste and increase of energy efficiency. Several authors have noted that the low electrical efficiency of these systems is the main barrier to further commercial development. In addition, gasification has several by-products that have no further use and are discarded as waste. In the framework of this manuscript, a secondary reactor is introduced and modelled. The main operating principle is the utilisation of char and flue gases for further energy production: these by-products are reformed into secondary producer gas in the secondary reactor. In addition, a set of heat exchangers captures the waste heat and optimises the process. The case study is modelled in a MATLAB-Cantera environment. The model is non-stoichiometric and applies the Gibbs minimisation principle. The simulations show that some of the thermal energy is depleted during the process owing to the preheating of flue gases. Nonetheless, the addition of a secondary reactor increases the electrical power production efficiency and the combined heat and power (CHP) efficiency. © The Author(s) 2016.

  13. Constraining the generalized uncertainty principle with the atomic weak-equivalence-principle test

    NASA Astrophysics Data System (ADS)

    Gao, Dongfeng; Wang, Jin; Zhan, Mingsheng

    2017-04-01

Various models of quantum gravity imply Planck-scale modifications of Heisenberg's uncertainty principle into a so-called generalized uncertainty principle (GUP). The GUP effects on high-energy physics, cosmology, and astrophysics have been extensively studied. Here, we focus on the weak-equivalence-principle (WEP) violation induced by the GUP. Results from the WEP test with the ⁸⁵Rb-⁸⁷Rb dual-species atom interferometer are used to set upper bounds on parameters in two GUP proposals. A 10⁴⁵-level bound on the Kempf-Mangano-Mann proposal and a 10²⁷-level bound on Maggiore's proposal, both consistent with bounds from other experiments, are obtained. All these bounds leave substantial room for improvement in the future.
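For reference, the Kempf-Mangano-Mann proposal bounded above modifies the canonical commutator with a quadratic momentum term; in its standard form (β being the GUP parameter that such experiments constrain):

```latex
[\hat{x},\hat{p}] = i\hbar\left(1 + \beta p^{2}\right)
\qquad\Longrightarrow\qquad
\Delta x\,\Delta p \;\geq\; \frac{\hbar}{2}\left(1 + \beta\,(\Delta p)^{2}\right)
```

The modified uncertainty relation implies a minimal measurable length of order ħ√β, which is the feature the atom-interferometer WEP test translates into a bound on β.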

  14. Demonstrating the suitability of genetic algorithms for driving microbial ecosystems in desirable directions.

    PubMed

    Vandecasteele, Frederik P J; Hess, Thomas F; Crawford, Ronald L

    2007-07-01

    The functioning of natural microbial ecosystems is determined by biotic interactions, which are in turn influenced by abiotic environmental conditions. Direct experimental manipulation of such conditions can be used to purposefully drive ecosystems toward exhibiting desirable functions. When a set of environmental conditions can be manipulated to be present at a discrete number of levels, finding the right combination of conditions to obtain the optimal desired effect becomes a typical combinatorial optimisation problem. Genetic algorithms are a class of robust and flexible search and optimisation techniques from the field of computer science that may be very suitable for such a task. To verify this idea, datasets containing growth levels of the total microbial community of four different natural microbial ecosystems in response to all possible combinations of a set of five chemical supplements were obtained. Subsequently, the ability of a genetic algorithm to search this parameter space for combinations of supplements driving the microbial communities to high levels of growth was compared to that of a random search, a local search, and a hill-climbing algorithm, three intuitive alternative optimisation approaches. The results indicate that a genetic algorithm is very suitable for driving microbial ecosystems in desirable directions, which opens opportunities for both fundamental ecological research and industrial applications.
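The search the abstract describes (a handful of supplements, each present at discrete levels) can be sketched as a simple genetic algorithm over binary supplement combinations. The fitness function below is an invented stand-in for measured community growth, not data from the study:

```python
import random

random.seed(1)
N = 5  # five chemical supplements, each absent (0) or present (1)

def growth(combo):
    # Invented stand-in for measured community growth: supplements 0 and 3
    # help, supplement 4 inhibits, and 1 + 2 interact synergistically.
    return 2 * combo[0] + combo[3] - combo[4] + 3 * (combo[1] and combo[2])

def evolve(pop_size=20, generations=30, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=growth, reverse=True)
        parents = pop[: pop_size // 2]              # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N)            # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (random.random() < p_mut) for g in child]  # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=growth)

best = evolve()
```

In the study each "fitness evaluation" is a wet-lab growth measurement rather than a function call, which is why sample-efficient search strategies matter.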

  15. Reactive power planning under high penetration of wind energy using Benders decomposition

    DOE PAGES

    Xu, Yan; Wei, Yanli; Fang, Xin; ...

    2015-11-05

This study addresses the optimal allocation of reactive power volt-ampere reactive (VAR) sources under the paradigm of high penetration of wind energy. Reactive power planning (RPP) in this condition involves a high level of uncertainty because of wind power characteristics. To properly model wind generation uncertainty, a multi-scenario framework optimal power flow that considers the voltage stability constraint under the worst wind scenario and transmission N-1 contingency is developed. The objective of RPP in this study is to minimise the total cost, including the VAR investment cost and the expected generation cost. RPP under this condition is therefore modelled as a two-stage stochastic programming problem to optimise the VAR location and size in one stage, then to minimise the fuel cost in the other stage, and eventually to find the global optimal RPP results iteratively. Benders decomposition is used to solve this model, with an upper-level problem (master problem) for VAR allocation optimisation and a lower-level problem (sub-problem) for generation cost minimisation. The impact of the potential reactive power support from doubly-fed induction generators (DFIG) is also analysed. Finally, case studies on the IEEE 14-bus and 118-bus systems are provided to verify the proposed method.

  16. The Energy-Efficient Quarry: Towards improved understanding and optimisation of energy use and minimisation of CO2 generation in the aggregates industry.

    NASA Astrophysics Data System (ADS)

    Hill, Ian; White, Toby; Owen, Sarah

    2014-05-01

Extraction and processing of rock materials to produce aggregates is carried out at some 20,000 quarries across the EU. All stages of the processing and transport of hard, dense materials inevitably consume large amounts of energy and consequently have significant carbon footprints. The FP7 project "the Energy Efficient Quarry" (EE-Quarry) has been addressing this problem and has devised strategies, supported by modelling software, to help the quarrying industry assess and optimise its energy use and minimise its carbon footprint. Aggregate quarries across Europe vary enormously in the scale of the quarrying operations, the nature of the worked mineral, and the processing required to produce a final market product. Nevertheless, most quarries involve most or all of a series of essential stages: deposit assessment, drilling and blasting, loading and hauling, and crushing and screening. Determining the energy-efficiency of each stage is complex, but broadly understood in principle, and numerous sources of information and guidance are available in the literature and on-line. More complex still is the interaction between these stages. For example, using a little more energy in blasting to increase fragmentation may save much greater energy in later crushing and screening, but also generates more fines material which is discarded as waste, and the energy embedded in this material is lost. Thus the calculation of the embedded energy in the waste material becomes an input to the determination of the blasting strategy. Such feedback loops abound in the overall quarry optimisation. The project has involved research and demonstration operations at a number of quarries distributed across Europe, carried out by all partners in the EE-Quarry project working in collaboration with many of the major quarrying companies operating in the EU. 
The EE-Quarry project is developing a sophisticated modelling tool, the "EE-Quarry Model", available to the quarrying industry on a web-based platform. This tool guides quarry managers and operators through the complex, multi-layered, iterative process of assessing the energy efficiency of their own quarry operation. They can evaluate the optimisation of the energy-efficiency of the overall quarry by examining both the individual stages of processing and the interactions between them. The project is also developing on-line distance learning modules designed for Continuous Professional Development (CPD) activities for staff across the quarrying industry in the EU and beyond. The presentation will describe the development of the model, and the format and scope of the resulting software tool and the user support available to the quarrying industry.

  17. Statistical optimisation techniques in fatigue signal editing problem

    NASA Astrophysics Data System (ADS)

    Nopiah, Z. M.; Osman, M. H.; Baharin, N.; Abdullah, S.

    2015-02-01

Success in fatigue signal editing is determined by the level of length reduction achieved without compromising statistical constraints. A great reduction rate can be achieved by removing small amplitude cycles from the recorded signal. The long recorded signal sometimes renders the cycle-to-cycle editing process daunting. This has encouraged researchers to focus on the segment-based approach. This paper discusses the joint application of the Running Damage Extraction (RDE) technique and a single constrained Genetic Algorithm (GA) in fatigue signal editing optimisation. In the first section, the RDE technique is used to restructure and summarise the fatigue strain. This technique combines the overlapping window and fatigue strain-life models. It is designed to identify and isolate the fatigue events that exist in the variable amplitude strain data into different segments, whereby the retention of statistical parameters and the vibration energy are considered. In the second section, the fatigue data editing problem is formulated as a constrained single optimisation problem that can be solved using the GA method. The GA produces the shortest edited fatigue signal by selecting appropriate segments from a pool of labelled segments. Challenges arise due to constraints on the segment selection by deviation level over three signal properties, namely cumulative fatigue damage, root mean square and kurtosis values. Experimental results over several case studies show that the idea of solving fatigue signal editing within a framework of optimisation is effective and automatic, and that the GA is robust for constrained segment selection.
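The constrained selection problem described above can be stated compactly: choose segments so the edited signal is as short as possible while cumulative damage, root mean square and kurtosis stay within deviation limits. A toy sketch with invented segment statistics; exhaustive search stands in for the GA, which is only feasible because the pool here is tiny:

```python
from itertools import combinations

# (length, damage, rms, kurtosis) per candidate segment -- invented statistics
segments = [
    (100, 0.40, 1.2, 3.1),
    (80,  0.35, 1.1, 2.9),
    (120, 0.20, 0.9, 3.4),
    (60,  0.05, 0.4, 2.5),
]
totals = [sum(s[i] for s in segments) for i in range(4)]

def feasible(subset):
    # deviation limits: retain >= 90% of cumulative damage and keep the
    # summed rms / kurtosis close to the original signal's values
    kept = [sum(s[i] for s in subset) for i in range(4)]
    return (kept[1] >= 0.9 * totals[1]
            and abs(kept[2] - totals[2]) <= 0.5
            and abs(kept[3] - totals[3]) <= 3.0)

# shortest feasible edited signal: minimise total length over feasible subsets
best = min((c for r in range(1, len(segments) + 1)
            for c in combinations(segments, r) if feasible(c)),
           key=lambda c: sum(s[0] for s in c))
```

For realistic pools the number of subsets explodes, which is exactly why the paper turns to a GA for the selection.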

  18. Statistical optimisation techniques in fatigue signal editing problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nopiah, Z. M.; Osman, M. H.; Baharin, N.

Success in fatigue signal editing is determined by the level of length reduction achieved without compromising statistical constraints. A great reduction rate can be achieved by removing small amplitude cycles from the recorded signal. The long recorded signal sometimes renders the cycle-to-cycle editing process daunting. This has encouraged researchers to focus on the segment-based approach. This paper discusses the joint application of the Running Damage Extraction (RDE) technique and a single constrained Genetic Algorithm (GA) in fatigue signal editing optimisation. In the first section, the RDE technique is used to restructure and summarise the fatigue strain. This technique combines the overlapping window and fatigue strain-life models. It is designed to identify and isolate the fatigue events that exist in the variable amplitude strain data into different segments, whereby the retention of statistical parameters and the vibration energy are considered. In the second section, the fatigue data editing problem is formulated as a constrained single optimisation problem that can be solved using the GA method. The GA produces the shortest edited fatigue signal by selecting appropriate segments from a pool of labelled segments. Challenges arise due to constraints on the segment selection by deviation level over three signal properties, namely cumulative fatigue damage, root mean square and kurtosis values. Experimental results over several case studies show that the idea of solving fatigue signal editing within a framework of optimisation is effective and automatic, and that the GA is robust for constrained segment selection.

  19. The hydraulic geometry of narrow and deep channels; evidence for flow optimisation and controlled peatland growth

    NASA Astrophysics Data System (ADS)

    Nanson, Rachel A.; Nanson, Gerald C.; Huang, He Qing

    2010-04-01

At-a-station and bankfull hydraulic geometry analyses of peatland channels at Barrington Tops, New South Wales, Australia, reveal adjustments in self-forming channels in the absence of sediment load. Using Rhodes' ternary diagram, comparisons are made with hydraulic geometry data from self-forming channels carrying bedload in alluvial settings elsewhere. Despite constraints on channel depths caused at some locations by the restricted thickness of peat, most stations have cohesive, near-vertical, well-vegetated banks, and width/depth (w/d) ratios of ∼ 2 that are optimal for sediment-free flow. Because banks are strong, resist erosion and can stand nearly vertical, and depth is sometimes constrained, adjustments to discharge are accommodated largely by changes in velocity. These findings are consistent with the model of maximum flow efficiency and the overarching least action principle in open channels. The bankfull depth of freely adjusting laterally active channels in clastic alluvium is well known to be related to the thickness of floodplain alluvium, and a similar condition appears to apply to these swamps that grow in situ and are formed almost entirely of organic matter. The thickness of peat in these swamps rarely exceeds that required to form a bankfull channel of optimum w/d ratio for the transport of sediment-free water. Swamp vegetation is highly dependent on proximity to the water table. To maintain a swamp-channel and associated floodplain system, the channels must flow with sufficient water much of the time; they not only offer an efficient morphology for flow but do so in a way that enables bankfull conditions to occur many times a year. They also prevent the swamp from growing above a level linked to the depth of the channel. Once the channel attains the most efficient cross section, further growth of the swamp vertically is restricted by enhanced flow velocities and limited flow depths. 
This means that the volume of peat in such swamps is determined by the hydraulic efficiency of their channels. The development and maintenance of the hydraulic geometry of these swamp channels is biogeomorphic and biohydraulic in nature and yet accords to the same optimising principles that govern the formation of self-adjusting channels and floodplains in clastic alluvium.

  20. Bright-White Beetle Scales Optimise Multiple Scattering of Light

    NASA Astrophysics Data System (ADS)

    Burresi, Matteo; Cortese, Lorenzo; Pattelli, Lorenzo; Kolle, Mathias; Vukusic, Peter; Wiersma, Diederik S.; Steiner, Ullrich; Vignolini, Silvia

    2014-08-01

Whiteness arises from diffuse and broadband reflection of light, typically achieved through optical scattering in randomly structured media. In contrast to structural colour due to coherent scattering, white appearance generally requires a relatively thick system comprising randomly positioned high-refractive-index scattering centres. Here, we show that the exceptionally bright white appearance of Cyphochilus and Lepidiota stigma beetles arises from a remarkably optimised anisotropy of intra-scale chitin networks, which act as dense scattering media. Using time-resolved measurements, we show that light propagating in the scales of the beetles undergoes pronounced multiple scattering, associated with the lowest transport mean free path reported to date for low-refractive-index systems. Our light-transport investigation unveils the high level of optimisation that achieves high-brightness white in a thin, low-mass-per-unit-area, anisotropic disordered nanostructure.
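The dependence of diffuse reflectance on the transport mean free path can be illustrated with a crude isotropic random-walk Monte Carlo (a one-dimensional-depth toy model with invented parameters, not the time-resolved analysis used in the paper):

```python
import random

random.seed(0)

def reflectance(thickness, mfp, n_photons=2000):
    # Isotropic random-walk sketch of multiple scattering in a slab:
    # photons enter at depth z = 0 heading inward; count the fraction
    # that re-emerges through the illuminated face.
    reflected = 0
    for _ in range(n_photons):
        z, mu = 0.0, 1.0                 # depth and direction cosine
        while True:
            z += mu * random.expovariate(1.0 / mfp)   # free flight
            if z < 0.0:
                reflected += 1           # escaped back out of the slab
                break
            if z > thickness:
                break                    # transmitted through the slab
            mu = random.uniform(-1.0, 1.0)            # isotropic scatter
    return reflected / n_photons

# A shorter transport mean free path (a denser scattering network) makes the
# same slab thickness appear whiter
r_dense = reflectance(thickness=10.0, mfp=1.0)
r_sparse = reflectance(thickness=10.0, mfp=5.0)
```

The beetle scales achieve the same effect in reverse: an unusually short transport mean free path lets a very thin scale reach high reflectance.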

  1. Investigation of effects of process parameters on properties of friction stir welded joints

    NASA Astrophysics Data System (ADS)

    Chauhan, Atul; Soota, Tarun; Rajput, S. K.

    2018-03-01

This work applies friction stir welding (FSW) with a Taguchi orthogonal array design. FSW is used for joining aluminium alloy AA6063-T0 plates in butt configuration with an orthogonal combination of factors and their levels. The factors, namely tool rotation speed, tool travel speed and tool pin profile, are each used at three levels. Grey relational analysis (GRA) has been applied to select the optimum level of factors for optimising UTS, ductility and hardness of the joint. Experiments have been conducted with two different tool materials (HSS and HCHCr steel) with various factor-level combinations for joining AA6063-T0. On the basis of grey relational grades at different levels of factors and analysis of variance (ANOVA), the ideal combination of factors is determined. The influence of tool material is also studied.
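Grey relational analysis as used above reduces multiple responses to a single grade per trial. A minimal sketch with invented trial data (all three responses treated as larger-the-better, distinguishing coefficient ζ = 0.5):

```python
# Grey relational grades for three responses (UTS, ductility, hardness),
# all treated as larger-the-better -- illustrative trial data only
trials = [
    [210.0, 12.0, 62.0],
    [225.0, 10.5, 65.0],
    [218.0, 14.0, 60.0],
]

def grey_relational_grades(data, zeta=0.5):
    cols = list(zip(*data))
    # 1. larger-the-better normalisation of each response to [0, 1]
    norm = [[(x - min(c)) / (max(c) - min(c)) for x, c in zip(row, cols)]
            for row in data]
    # 2. deviation from the ideal (all-ones) sequence; min/max deviations are
    #    0 and 1 by construction, so the coefficient is zeta / (d + zeta)
    dev = [[1.0 - x for x in row] for row in norm]
    # 3. grade = mean grey relational coefficient over the responses
    return [sum(zeta / (d + zeta) for d in row) / len(row) for row in dev]

grades = grey_relational_grades(trials)
best_trial = grades.index(max(grades))   # trial whose factor setting wins
```

Ranking trials by grade, as here, is what lets a single ANOVA-style analysis pick one factor combination despite the three competing responses.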

  2. A rapid, automated approach to optimisation of multiple reaction monitoring conditions for quantitative bioanalytical mass spectrometry.

    PubMed

    Higton, D M

    2001-01-01

An improvement to the procedure for the rapid optimisation of mass spectrometry (PROMS), for the development of multiple reaction monitoring (MRM) methods for quantitative bioanalytical liquid chromatography/tandem mass spectrometry (LC/MS/MS), is presented. PROMS is an automated protocol that uses flow-injection analysis (FIA) and AppleScripts to create methods and acquire the data for optimisation. The protocol determines the optimum orifice potential and the MRM conditions for each compound, and finally creates the MRM methods needed for sample analysis. The sensitivities of the MRM methods created by PROMS approach those created manually. MRM method development using PROMS currently takes less than three minutes per compound, compared to at least fifteen minutes manually. To further enhance throughput, approaches to MRM optimisation using one injection per compound, two injections per pool of five compounds and one injection per pool of five compounds have been investigated. No significant difference in the optimised instrumental parameters for MRM methods was found between the original PROMS approach and these new methods, which are up to ten times faster. The time taken for an AppleScript to determine the optimum conditions and build the MRM methods is the same with all approaches. Copyright 2001 John Wiley & Sons, Ltd.

  3. Optimisation of logistics processes of energy grass collection

    NASA Astrophysics Data System (ADS)

    Bányai, Tamás.

    2010-05-01

The collection of energy grass is a logistics-intensive process [1]. The optimal design and control of the transportation and collection subprocesses is a critical point of the supply chain. To avoid decisions based purely on experience and intuition, optimisation and analysis of collection processes should rest on mathematical models and methods. Within the frame of this work, the author focuses on the optimisation possibilities of the collection processes, especially from the point of view of transportation and related warehousing operations. The optimisation methods developed in the literature [2] take into account harvesting processes, county-specific yields, transportation distances, erosion constraints, machinery specifications and other key variables, but the possibility of multiple collection points and of multi-level collection has not been considered. The possible uses of energy grass are very wide (energy production, biogas and bioalcohol production, the paper and textile industries, industrial fibre material, foddering, biological soil protection [3], etc.), so not only a single-level but also a multi-level collection system with several collection and production facilities has to be considered. The input parameters of the optimisation problem are the following: total amount of energy grass to be harvested in each region; specific facility costs of collection, warehousing and production units; specific costs of transportation resources; pre-scheduling of the harvesting process; specific transportation and warehousing costs; and pre-scheduling of the processing of energy grass at each facility (exclusive warehousing). The model makes the following assumptions: (1) cooperative relations exist among processing and production facilities; (2) capacity constraints are not ignored; (3) the cost function of transportation is non-linear; (4) the drivers' working conditions are ignored. 
The objective function of the optimisation is the maximisation of profit, i.e. of the difference between revenue and cost; it trades off the income of the assigned transportation demands against the logistics costs. The constraints are the following: (1) the free capacity of the assigned transportation resource exceeds the requested capacity of the transportation demand; (2) the calculated arrival time of the transportation resource at the harvesting place is not later than its requested arrival time; (3) the calculated arrival time of the transportation demand at the processing and production facility is not later than the requested arrival time; (4) one transportation demand is assigned to one transportation resource, and one resource is assigned to one transportation demand. The decision variables of the optimisation problem are the set of scheduling variables and the assignment of resources to transportation demands. The evaluation parameters of the optimised system are the following: total cost of the collection process; utilisation of transportation resources and warehouses; and efficiency of production and/or processing facilities. The multidimensional heuristic optimisation method is based on a genetic algorithm, while the routing sequence is optimised with an ant colony algorithm: the optimal routes are calculated by the ant colony algorithm as a subroutine of the global optimisation, and the optimal assignment is given by the genetic algorithm. An important part of the mathematical method is the sensitivity analysis of the objective function, which shows the influence of the different input parameters. Acknowledgements This research was implemented within the frame of the project entitled "Development and operation of the Technology and Knowledge Transfer Centre of the University of Miskolc". 
with support by the European Union and co-funding of the European Social Fund. References [1] P. R. Daniel: The Economics of Harvesting and Transporting Corn Stover for Conversion to Fuel Ethanol: A Case Study for Minnesota. University of Minnesota, Department of Applied Economics, 2006. http://ideas.repec.org/p/ags/umaesp/14213.html [2] T. G. Douglas, J. Brendan, D. Erin & V.-D. Becca: Energy and Chemicals from Native Grasses: Production, Transportation and Processing Technologies Considered in the Northern Great Plains. University of Minnesota, Department of Applied Economics, 2006. http://ideas.repec.org/p/ags/umaesp/13838.html [3] Homepage of energy grass: www.energiafu.hu
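The routing subroutine described above can be sketched as a basic ant colony optimisation on a toy depot-and-sites network (distances are invented; in the real model this runs inside a genetic algorithm that handles the assignment of resources to demands):

```python
import random

random.seed(2)

# Symmetric distances between a depot (0) and three collection sites (invented)
D = [[0, 4, 9, 7],
     [4, 0, 3, 8],
     [9, 3, 0, 5],
     [7, 8, 5, 0]]
n = len(D)

def tour_length(tour):
    return sum(D[tour[i]][tour[(i + 1) % n]] for i in range(n))

def ant_colony(iters=50, ants=10, rho=0.5, alpha=1.0, beta=2.0):
    tau = [[1.0] * n for _ in range(n)]        # pheromone per edge
    best_tour, best_len = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            tour, free = [0], set(range(1, n))
            while free:
                i, cand = tour[-1], sorted(free)
                # next-city choice: pheromone ** alpha * visibility ** beta
                w = [tau[i][j] ** alpha * (1.0 / D[i][j]) ** beta for j in cand]
                nxt = random.choices(cand, weights=w)[0]
                tour.append(nxt)
                free.remove(nxt)
            length = tour_length(tour)
            if length < best_len:
                best_tour, best_len = tour, length
            for k in range(n):                  # pheromone deposit along the tour
                a, b = tour[k], tour[(k + 1) % n]
                tau[a][b] += 1.0 / length
                tau[b][a] += 1.0 / length
        tau = [[(1.0 - rho) * t for t in row] for row in tau]  # evaporation
    return best_tour, best_len

route, length = ant_colony()
```

Shorter tours deposit proportionally more pheromone, so over the iterations the colony concentrates on the cheapest route, here the 0-1-2-3-0 cycle of length 19.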

  4. Proof of concept: performance testing in models.

    PubMed

    Craig, W A

    2004-04-01

    Pharmacokinetic (PK) and pharmacodynamic (PD) principles that predict antimicrobial efficacy can be used to set targets for antimicrobial design and optimisation. Although current formulations of amoxicillin and amoxicillin/clavulanate have retained their efficacy against many, but not all, penicillin-nonsusceptible Streptococcus pneumoniae, additional coverage is required to address the growing problem of drug-resistant strains. Accordingly, two new oral formulations of amoxicillin/clavulanate, a paediatric formulation at 90/6.4 mg/kg/day and a pharmacokinetically enhanced formulation at 2000/125 mg twice daily for adults, were designed using PK/PD principles. These principles indicate that for amoxicillin and amoxicillin/clavulanate, a time above MIC of 35-40% of the dosing interval is predictive of high bacterial efficacy. In line with PK/PD predictions, simulation of human pharmacokinetics in in-vitro kinetic models and in a rat model of pneumonia, amoxicillin/clavulanate 2000/125 mg twice daily was highly effective against S. pneumoniae strains with amoxicillin MICs of 4 or 8 mg/L. Against strains with amoxicillin MICs of 4 mg/L, amoxicillin/clavulanate 2000/125 mg twice daily was significantly more effective than the conventional 875/125 mg twice daily formulation, azithromycin and levofloxacin, even though all levofloxacin MICs were < or = 1 mg/L. Following infection with S. pneumoniae strains with amoxicillin MICs of 8 mg/L, the amoxicillin/clavulanate 2000/125 mg twice daily formulation was more effective than the conventional amoxicillin/clavulanate formulations of 875/125 mg twice daily and three times daily and 1000/125 mg three times daily, and had similar or better efficacy than azithromycin and levofloxacin, depending on the strain. 
These data indicate the potential benefit of therapy with amoxicillin/clavulanate 2000/125 mg twice daily compared with conventional formulations and other marketed antimicrobials in the treatment of respiratory tract infection.

  5. Skin Blood Perfusion and Oxygenation Colour Affect Perceived Human Health

    PubMed Central

    Stephen, Ian D.; Coetzee, Vinet; Law Smith, Miriam; Perrett, David I.

    2009-01-01

Skin blood perfusion and oxygenation depend upon cardiovascular, hormonal and circulatory health in humans and provide socio-sexual signals of underlying physiology, dominance and reproductive status in some primates. We allowed participants to manipulate colour-calibrated facial photographs along empirically measured oxygenated and deoxygenated blood colour axes, both separately and simultaneously, to optimise healthy appearance. Participants increased skin blood colour, particularly oxygenated, above basal levels to optimise healthy appearance. We show, therefore, that skin blood perfusion and oxygenation influence perceived health in a way that may be important to mate choice. PMID:19337378

  6. Skin blood perfusion and oxygenation colour affect perceived human health.

    PubMed

    Stephen, Ian D; Coetzee, Vinet; Law Smith, Miriam; Perrett, David I

    2009-01-01

Skin blood perfusion and oxygenation depend upon cardiovascular, hormonal and circulatory health in humans and provide socio-sexual signals of underlying physiology, dominance and reproductive status in some primates. We allowed participants to manipulate colour-calibrated facial photographs along empirically measured oxygenated and deoxygenated blood colour axes, both separately and simultaneously, to optimise healthy appearance. Participants increased skin blood colour, particularly oxygenated, above basal levels to optimise healthy appearance. We show, therefore, that skin blood perfusion and oxygenation influence perceived health in a way that may be important to mate choice.

  7. A robust optimisation approach to the problem of supplier selection and allocation in outsourcing

    NASA Astrophysics Data System (ADS)

    Fu, Yelin; Keung Lai, Kin; Liang, Liang

    2016-03-01

    We formulate the supplier selection and allocation problem in outsourcing under an uncertain environment as a stochastic programming problem. Both the decision-maker's attitude towards risk and the penalty parameters for demand deviation are considered in the objective function. A service level agreement, upper bound for each selected supplier's allocation and the number of selected suppliers are considered as constraints. A novel robust optimisation approach is employed to solve this problem under different economic situations. Illustrative examples are presented with managerial implications highlighted to support decision-making.
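The worst-case flavour of robust optimisation can be illustrated on a miniature version of the supplier selection and allocation problem (supplier data, demand scenarios and penalty are invented; the paper's stochastic programming model with risk attitude is far richer):

```python
from itertools import combinations

# (fixed cost, unit cost, capacity) per candidate supplier -- invented numbers
suppliers = [(50, 2.0, 60), (30, 3.0, 50), (20, 4.0, 40)]
demand_scenarios = [80.0, 100.0, 120.0]   # uncertain outsourcing demand
penalty = 10.0                            # cost per unit of unmet demand
max_suppliers = 2                         # cap on the number selected

def scenario_cost(chosen, demand):
    cost, remaining = sum(s[0] for s in chosen), demand
    for fixed, unit, cap in sorted(chosen, key=lambda s: s[1]):  # cheapest first
        used = min(cap, remaining)
        cost += unit * used
        remaining -= used
    return cost + penalty * remaining      # penalise demand deviation

def worst_case(chosen):
    # robust criterion: cost under the least favourable demand scenario
    return max(scenario_cost(chosen, d) for d in demand_scenarios)

best = min((c for r in range(1, max_suppliers + 1)
            for c in combinations(suppliers, r)),
           key=worst_case)
```

Minimising the worst-case cost rather than the expected cost is what makes the selection robust: the chosen supplier pair must cover even the high-demand scenario reasonably well.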

  8. Hard times, hard choices: founding bioethics today.

    PubMed

    Gracia, Diego

    1995-07-01

    The discussions of the past twenty years have significantly improved our knowledge of the foundation of bioethics and the meaning of the four bioethical principles with regard to at least three points: they are organised hierarchically, and are therefore not "prima facie" principles of the same level; they admit exceptions, and consequently lack absolute character; and they are neither strictly deontological nor purely teleological. The only absolute principle of moral life can be the abstract, non-concrete respect for human beings. But when the material content of this respect is determined, principles become contingent and relative. Moral reasoning must therefore comprise no fewer than three moments: one absolute but merely formal, namely respect for all human beings, and two others that are relative and material. The first material moment comprises the four bioethical principles, divided into two levels: one private, including the principles of autonomy and beneficence, and the other public, including those of nonmaleficence and justice. The second material moment deals with specific cases and requires analysis of their context, including their circumstances and consequences. Only by following these steps, and therefore balancing principlism and contextualism, can moral reasoning be correct and complete.

  9. Investigating the Trade-Off Between Power Generation and Environmental Impact of Tidal-Turbine Arrays Using Array Layout Optimisation and Habitat Sustainability Modelling.

    NASA Astrophysics Data System (ADS)

    du Feu, R. J.; Funke, S. W.; Kramer, S. C.; Hill, J.; Piggott, M. D.

    2016-12-01

    The installation of tidal turbines into the ocean will inevitably affect the environment around them. However, due to the relative infancy of this sector, the extent and severity of such effects are unknown. The layout of an array of turbines is an important factor in determining not only the array's final yield but also how it will influence regional hydrodynamics. This in turn could affect, for example, sediment transportation or habitat suitability. The two potentially competing objectives of extracting energy from the tidal current, and of limiting any environmental impact consequent to influencing that current, are investigated here. This relationship is posed as a multi-objective optimisation problem. OpenTidalFarm, an array layout optimisation tool, and MaxEnt, habitat suitability modelling software, are used to evaluate scenarios off the coast of the UK. MaxEnt is used to estimate the likelihood of finding a species in a given location based upon environmental input data and presence data of the species. Environmental features which are known to impact habitat, specifically those affected by the presence of an array, such as bed shear stress, are chosen as inputs. MaxEnt then uses a maximum-entropy modelling approach to estimate population distribution across the modelled area. OpenTidalFarm is used to maximise the power generated by an array, or multiple arrays, by adjusting the position and number of turbines within them. It uses a 2D shallow water model with turbine arrays represented as adjustable friction fields. It can also optimise user-created functionals expressed mathematically. This work uses two functionals: power extracted by the array, and the suitability of habitat as predicted by MaxEnt. A gradient-based local optimisation is used to adjust the array layout at each iteration. This work presents arrays that are optimised for both yield and the viability of habitat for chosen species.
In each scenario studied, a range of array formations is found expressing varying preferences for either functional. Further analyses then allow for the identification of trade-offs between the two key societal objectives of energy production and conservation. This in turn produces information valuable to stakeholders and policymakers when making decisions on array design.
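As a rough, self-contained illustration of the multi-objective idea (every function and number below is a hypothetical stand-in, not taken from OpenTidalFarm or MaxEnt), a weighted sum of the two functionals can be scanned to trace a trade-off front:

```python
import random

def power(layout):
    # Hypothetical stand-in for a power functional: turbines near the
    # channel centre (x = 0.5) extract more energy.
    return sum(1.0 - (x - 0.5) ** 2 for x in layout)

def habitat(layout):
    # Hypothetical stand-in for a MaxEnt-derived suitability score:
    # turbines near the centre disturb habitat most.
    return sum((x - 0.5) ** 2 for x in layout)

def optimise(weight, n_turbines=4, steps=200, lr=0.05):
    """Hill climbing on the weighted objective J = w*power + (1-w)*habitat."""
    rng = random.Random(0)
    layout = [rng.random() for _ in range(n_turbines)]
    def J(l):
        return weight * power(l) + (1 - weight) * habitat(l)
    for _ in range(steps):
        i = rng.randrange(n_turbines)
        trial = list(layout)
        trial[i] = min(1.0, max(0.0, trial[i] + rng.uniform(-lr, lr)))
        if J(trial) > J(layout):  # accept only improving moves
            layout = trial
    return layout

# Scan the weight to trace the power/habitat trade-off front.
front = [(w, power(optimise(w)), habitat(optimise(w))) for w in (0.0, 0.5, 1.0)]
```

Sweeping the weight from 0 to 1 recovers layouts that favour habitat, balance the two objectives, or favour power, which is the kind of trade-off curve presented to stakeholders.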

  10. Measures of Quantum Synchronization in Continuous Variable Systems

    NASA Astrophysics Data System (ADS)

    Mari, A.; Farace, A.; Didier, N.; Giovannetti, V.; Fazio, R.

    2013-09-01

    We introduce and characterize two different measures which quantify the level of synchronization of coupled continuous variable quantum systems. The two measures allow us to extend to the quantum domain the notions of complete and phase synchronization. The Heisenberg principle sets a universal bound to complete synchronization. The measure of phase synchronization is, in principle, unbounded; however, in the absence of quantum resources (e.g., squeezing) the synchronization level is bounded below a certain threshold. We elucidate some interesting connections between entanglement and synchronization and, finally, discuss an application based on quantum optomechanical systems.

  11. Measures of quantum synchronization in continuous variable systems.

    PubMed

    Mari, A; Farace, A; Didier, N; Giovannetti, V; Fazio, R

    2013-09-06

    We introduce and characterize two different measures which quantify the level of synchronization of coupled continuous variable quantum systems. The two measures allow us to extend to the quantum domain the notions of complete and phase synchronization. The Heisenberg principle sets a universal bound to complete synchronization. The measure of phase synchronization is, in principle, unbounded; however, in the absence of quantum resources (e.g., squeezing) the synchronization level is bounded below a certain threshold. We elucidate some interesting connections between entanglement and synchronization and, finally, discuss an application based on quantum optomechanical systems.
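A minimal numerical sketch of the complete-synchronization measure, evaluated here on classical trajectories rather than quantum expectation values (the oscillator trajectories and phase shift are illustrative, and the paper's Heisenberg bound does not apply to this classical stand-in):

```python
import math

def sync_measure(q1, p1, q2, p2):
    """Complete-synchronization measure S_c = 1 / <q_-^2 + p_-^2>,
    with q_- = (q1 - q2)/sqrt(2) and p_- = (p1 - p2)/sqrt(2)."""
    n = len(q1)
    err = sum(((a - b) ** 2 + (c - d) ** 2) / 2.0
              for a, b, c, d in zip(q1, q2, p1, p2))
    return n / err  # inverse of the mean synchronization error

# Two identical oscillators, one slightly phase-shifted.
t = [0.01 * k for k in range(1000)]
q1 = [math.cos(x) for x in t]; p1 = [-math.sin(x) for x in t]
q2 = [math.cos(x + 0.3) for x in t]; p2 = [-math.sin(x + 0.3) for x in t]
s = sync_measure(q1, p1, q2, p2)  # large value = good synchronization
```

The measure grows without bound as the trajectories coincide, matching the intuition that only quantum noise (the Heisenberg principle) caps it in the full quantum treatment.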

  12. ATLAS software configuration and build tool optimisation

    NASA Astrophysics Data System (ADS)

    Rybkin, Grigory; Atlas Collaboration

    2014-06-01

    The ATLAS software code base comprises over 6 million lines of code organised in about 2000 packages. It makes use of some 100 external software packages, is developed by more than 400 developers, and is used by more than 2500 physicists from over 200 universities and laboratories on 6 continents. To meet the challenge of configuring and building this software, the Configuration Management Tool (CMT) is used. CMT expects each package to describe its build targets, build and environment setup parameters, and dependencies on other packages in a text file called requirements, and each project (group of packages) to describe its policies and dependencies on other projects in a text project file. Based on the effective set of configuration parameters read from the requirements files of dependent packages and from project files, CMT commands build the packages, generate the environment for their use, or query the packages. The main focus was on build-time performance, which was optimised through several approaches: reduction of the number of reads of requirements files, which are now read once per package by a CMT build command that generates cached requirements files for subsequent CMT build commands; introduction of more fine-grained build parallelism at the package task level, i.e., dependent applications and libraries are compiled in parallel; code optimisation of the CMT commands used for the build; and introduction of package-level build parallelism, i.e., the build of independent packages is parallelised. By default, CMT launches NUMBER-OF-PROCESSORS build commands in parallel. The other focus was on optimisation of CMT commands in general, which made them approximately 2 times faster. CMT can generate a cached requirements file for the environment setup command, which is especially useful for deployment on distributed file systems like AFS or CERN VMFS. The use of parallelism, caching and code optimisation reduced software build and environment setup times several-fold, increased the efficiency with which multi-core computing resources are utilised, and considerably improved the software developer and user experience.
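The package-level parallelism described above can be sketched as a wave-based scheduler: group packages into waves whose members have all their dependencies already built, then compile each wave concurrently. The dependency graph below is a hypothetical miniature, not ATLAS's real one:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical package dependency graph (package -> packages it needs).
deps = {
    "Core": [], "Utils": [], "Event": ["Core"],
    "Tracking": ["Core", "Utils"], "Reco": ["Event", "Tracking"],
}

def build_order(deps):
    """Group packages into waves; packages within one wave are mutually
    independent and can be built in parallel."""
    done, waves = set(), []
    while len(done) < len(deps):
        wave = [p for p in deps
                if p not in done and all(d in done for d in deps[p])]
        if not wave:
            raise ValueError("dependency cycle")
        waves.append(sorted(wave))
        done.update(wave)
    return waves

def build_all(deps, workers=4):
    log = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for wave in build_order(deps):
            # each wave's "compile" steps run concurrently
            list(pool.map(lambda p: log.append(p), wave))
    return log
```

The same topological-wave idea underlies any parallel build: caching (here, the `done` set) guarantees each package is processed once, and wave width determines how well extra cores are utilised.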

  13. Smart strategies for doctors and doctors-in-training: heuristics in medicine.

    PubMed

    Wegwarth, Odette; Gaissmaier, Wolfgang; Gigerenzer, Gerd

    2009-08-01

    How do doctors make sound decisions when confronted with probabilistic data, time pressures and a heavy workload? One theory that has been embraced by many researchers is based on optimisation, which emphasises the need to integrate all information in order to arrive at sound decisions. This notion makes heuristics, which use less than complete information, appear as second-best strategies. In this article, we challenge this pessimistic view of heuristics. We introduce two medical problems that involve decision making to the reader: one concerns coronary care issues and the other macrolide prescriptions. In both settings, decision-making tools grounded in the principles of optimisation and heuristics, respectively, have been developed to assist doctors in making decisions. We explain the structure of each of these tools and compare their performance in terms of their facilitation of correct predictions. For decisions concerning both the coronary care unit and the prescribing of macrolides, we demonstrate that sacrificing information does not necessarily imply a forfeiting of predictive accuracy, but can sometimes even lead to better decisions. Subsequently, we discuss common misconceptions about heuristics and explain when and why ignoring parts of the available information can lead to the making of more robust predictions. Heuristics are neither good nor bad per se, but, if applied in situations to which they have been adapted, can be helpful companions for doctors and doctors-in-training. This, however, requires that heuristics in medicine be openly discussed, criticised, refined and then taught to doctors-in-training rather than being simply dismissed as harmful or irrelevant. A more uniform use of explicit and accepted heuristics has the potential to reduce variations in diagnoses and to improve medical care for patients.
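The coronary care decision aid discussed here is a fast-and-frugal tree in the spirit of Green and Mehr (1997); the sketch below is a simplified rendering with abbreviated cue names, not the exact clinical definitions. It shows how such a heuristic deliberately ignores information by stopping at the first decisive cue:

```python
def ccu_triage(st_change: bool, chest_pain_chief: bool, other_risk: bool) -> str:
    """Fast-and-frugal tree: each question can end the search,
    so not every cue is consulted for every patient."""
    if st_change:                      # first cue alone can decide
        return "coronary care unit"
    if not chest_pain_chief:           # second cue can also decide
        return "regular nursing bed"
    # only now is a third cue (any other risk factor) consulted
    return "coronary care unit" if other_risk else "regular nursing bed"
```

In contrast to a full optimisation model that weights and integrates all cues, the tree reaches a decision after at most three yes/no questions, which is exactly the robustness-through-simplicity argument the article makes.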

  14. Data collection and information presentation for optimal decision making by clinical managers--the Autocontrol Project.

    PubMed Central

    Grant, A. M.; Richard, Y.; Deland, E.; Després, N.; de Lorenzi, F.; Dagenais, A.; Buteau, M.

    1997-01-01

    The Autocontrol methodology has been developed to support the optimisation of decision making and the use of resources in the context of a clinical unit. The theoretical basis relates to quality assurance and information systems and is influenced by management and cognitive research in the health domain. The methodology uses population rather than individual decision making and, because of its dynamic feedback design, promises to have a rapid and profound effect on practice. Most importantly, the health care professional is the principal user of the Autocontrol system. In this methodology we distinguish three types of evidence necessary for practice change: practice-based or internal evidence; best evidence derived from the literature, or external evidence, concerning the practice in question; and process-based evidence on how to optimise the process of practice change. The software used by the system is of the executive decision support type, which facilitates interrogation of large databases. The Autocontrol system is designed to interrogate the data of the patient medical record; however, the latter often lacks data on concomitant resource use, and this must be supplemented. This paper reviews the Autocontrol methodology and gives examples from current studies. PMID:9357733

  15. Data collection and information presentation for optimal decision making by clinical managers--the Autocontrol Project.

    PubMed

    Grant, A M; Richard, Y; Deland, E; Després, N; de Lorenzi, F; Dagenais, A; Buteau, M

    1997-01-01

    The Autocontrol methodology has been developed to support the optimisation of decision making and the use of resources in the context of a clinical unit. The theoretical basis relates to quality assurance and information systems and is influenced by management and cognitive research in the health domain. The methodology uses population rather than individual decision making and, because of its dynamic feedback design, promises to have a rapid and profound effect on practice. Most importantly, the health care professional is the principal user of the Autocontrol system. In this methodology we distinguish three types of evidence necessary for practice change: practice-based or internal evidence; best evidence derived from the literature, or external evidence, concerning the practice in question; and process-based evidence on how to optimise the process of practice change. The software used by the system is of the executive decision support type, which facilitates interrogation of large databases. The Autocontrol system is designed to interrogate the data of the patient medical record; however, the latter often lacks data on concomitant resource use, and this must be supplemented. This paper reviews the Autocontrol methodology and gives examples from current studies.

  16. Targeted flock/herd and individual ruminant treatment approaches.

    PubMed

    Kenyon, F; Jackson, F

    2012-05-04

    In Europe, most nematodoses are subclinical, involving morbid rather than mortal effects, and control is largely achieved using anthelmintics. In cattle, the genera most associated with sub-optimal performance are Ostertagia and Cooperia, whereas in sheep and goats, subclinical losses are most often caused by Teladorsagia and Trichostrongylus. In some regions, at certain times, other species such as Nematodirus and Haemonchus also cause disease in sheep and goats. Unfortunately, anthelmintic resistance has now become an issue for European small ruminant producers. One of the key aims of the EU-funded PARASOL project was to identify low-input and sustainable approaches to control nematode parasites in ruminants using refugia-based strategies. Two approaches to optimise anthelmintic treatments in sheep and cattle were studied: targeted treatments (TT), i.e. whole-group treatments optimised on the basis of a marker of infection, e.g. faecal egg count (FEC); and targeted selected treatment (TST), i.e. treatments given to identified individuals to provide epidemiological and/or production benefits. A number of indicators for TT and TST were assessed to define parasitological and production-system-specific indicators for treatment that best suited the regions where the PARASOL studies were conducted. These included liveweight gain, production efficiency, FEC, body condition score and diarrhoea score in small ruminants, and pepsinogen levels and Ostertagia bulk milk tank ELISA in cattle. The PARASOL studies confirmed the value of monitoring FEC as a means of targeting whole-flock treatments in small ruminants. In cattle, bulk milk tank ELISA and serum pepsinogen assays could be used retrospectively to determine levels of exposure and hence to optimise anthelmintic usage in the following season. TST approaches in sheep and goats examined production efficiency and liveweight gain as indicators for treatment and confirmed the value of this approach in maintaining performance and anthelmintic susceptibility in the predominant gastrointestinal nematodes. There is good evidence that the TST approach selected less heavily for the development of resistance in comparison with routine monthly treatments. Further research is required to optimise markers for TT and TST, but it is also crucial to encourage producers and advisors to adopt these refugia-based strategies to maintain drug-susceptible parasites and so provide sustainable control. Copyright © 2011 Elsevier B.V. All rights reserved.
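A TST rule of the kind studied can be sketched as a simple threshold filter: treat only the individuals whose performance indicator falls below target, leaving the rest untreated as a source of refugia. The animals and the threshold below are invented purely for illustration:

```python
def select_for_treatment(animals, weight_gain_target):
    """Targeted selected treatment (TST) sketch: treat only animals whose
    liveweight gain falls below target; untreated animals maintain
    drug-susceptible parasites in refugia. Threshold is illustrative only."""
    return [a["id"] for a in animals if a["gain_kg"] < weight_gain_target]

# Hypothetical weekly liveweight gains for three ewes.
flock = [
    {"id": "ewe1", "gain_kg": 1.2},
    {"id": "ewe2", "gain_kg": 0.4},
    {"id": "ewe3", "gain_kg": 0.9},
]
treated = select_for_treatment(flock, weight_gain_target=0.8)
```

A whole-group TT decision would instead compare a flock-level marker (such as mean FEC) against a single trigger value before treating everyone.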

  17. Implementation of the multi-channel monolith reactor in an optimisation procedure for heterogeneous oxidation catalysts based on genetic algorithms.

    PubMed

    Breuer, Christian; Lucas, Martin; Schütze, Frank-Walter; Claus, Peter

    2007-01-01

    A multi-criteria optimisation procedure based on genetic algorithms is carried out in search of advanced heterogeneous catalysts for total oxidation. Simple but flexible software routines have been created for application within a search space of more than 150,000 individuals. The general catalyst design includes mono-, bi- and trimetallic compositions assembled from 49 different metals and deposited on an Al2O3 support at up to nine loading levels. As an efficient tool for high-throughput screening, and perfectly matched to the requirements of heterogeneous gas-phase catalysis - especially for applications technically run in honeycomb structures - the multi-channel monolith reactor is implemented to evaluate catalyst performance. From a multi-component feed gas, the conversion rates of carbon monoxide (CO) and a model hydrocarbon (HC) are monitored in parallel. In combination with further restrictions on preparation and pre-treatment, a primary screening can be conducted that promises to provide results close to technically applied catalysts. Presented are the resulting performances of the optimisation process for the first catalyst generations and the prospect of its auto-adaptation to specified optimisation goals.
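A toy version of such a genetic algorithm might look like the sketch below. The fitness function is a made-up surrogate; in the real procedure, fitness comes from CO/HC conversion rates measured in the multi-channel monolith reactor:

```python
import random

rng = random.Random(1)
METALS = list(range(49))      # 49 candidate metals
LEVELS = list(range(1, 10))   # nine loading levels

def random_catalyst():
    # mono-, bi- or trimetallic composition: (metal, loading level) pairs
    n = rng.randint(1, 3)
    return tuple(sorted((rng.choice(METALS), rng.choice(LEVELS)) for _ in range(n)))

def fitness(cat):
    # Hypothetical surrogate for measured conversion; a real run would
    # score each individual from reactor measurements instead.
    return sum(lvl / (1 + abs(m - 23)) for m, lvl in cat)

def evolve(generations=30, pop_size=20):
    pop = [random_catalyst() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # selection (elitist)
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            child = tuple(sorted(set(a) | set(b)))[:3]   # crossover
            if rng.random() < 0.3:                       # mutation
                child = child[:-1] + ((rng.choice(METALS), rng.choice(LEVELS)),)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because the parents survive each generation, the best composition found never regresses, mirroring the elitist strategies commonly used in catalyst-screening GAs.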

  18. Honeybee economics: optimisation of foraging in a variable world.

    PubMed

    Stabentheiner, Anton; Kovac, Helmut

    2016-06-20

    In honeybees, fast and efficient exploitation of nectar and pollen sources is achieved by persistent endothermy throughout the foraging cycle, which entails extremely high energy costs. The need for food promotes maximisation of the intake rate, and the high costs call for energetic optimisation. Experiments on how honeybees resolve this conflict have to consider that foraging takes place in a variable environment with respect to microclimate and food quality and availability. Here we report, from simultaneous measurements of energy costs, gains, intake rate and efficiency, how honeybee foragers manage this challenge in their highly variable environment. If possible, during unlimited sucrose flow, they follow an 'investment-guided' ('time is honey') economic strategy promising increased returns. They maximise net intake rate by investing both their own heat production and solar heat to raise body temperature to a level that guarantees a high suction velocity. They switch to an 'economizing' ('save the honey') optimisation of energetic efficiency if the intake rate is restricted by the food source, when an increased body temperature would not guarantee a high intake rate. With this flexible and graded change between economic strategies, honeybees can both maximise colony intake rate and optimise foraging efficiency in reaction to environmental variation.
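The strategy switch can be caricatured in a few lines: the two currencies are net energy intake rate ('time is honey') and energetic efficiency ('save the honey'), and which one is maximised depends on whether the food source limits intake. All names and numbers here are illustrative, not measurements from the study:

```python
def net_rate(gain_j, cost_j, time_s):
    """Net energy intake rate (J/s): the 'time is honey' currency."""
    return (gain_j - cost_j) / time_s

def efficiency(gain_j, cost_j):
    """Energetic efficiency (J gained per J spent): 'save the honey'."""
    return gain_j / cost_j

def choose_strategy(flow_limited):
    # When flow is unlimited, extra heat production buys a higher intake
    # rate; when the source limits intake, heating only raises costs.
    return "maximise efficiency" if flow_limited else "maximise net rate"
```

The point of the sketch is only that the two currencies diverge: raising body temperature increases both gain and cost, so it pays under one currency but not the other.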

  19. Biomass supply chain optimisation for Organosolv-based biorefineries.

    PubMed

    Giarola, Sara; Patel, Mayank; Shah, Nilay

    2014-05-01

    This work aims at providing a Mixed Integer Linear Programming modelling framework to help define planning strategies for the development of sustainable biorefineries. The up-scaling of an Organosolv biorefinery was addressed via optimisation of the whole system economics. Three real world case studies were addressed to show the high-level flexibility and wide applicability of the tool to model different biomass typologies (i.e. forest fellings, cereal residues and energy crops) and supply strategies. Model outcomes have revealed how supply chain optimisation techniques could help shed light on the development of sustainable biorefineries. Feedstock quality, quantity, temporal and geographical availability are crucial to determine biorefinery location and the cost-efficient way to supply the feedstock to the plant. Storage costs are relevant for biorefineries based on cereal stubble, while wood supply chains present dominant pretreatment operations costs. Copyright © 2014 Elsevier Ltd. All rights reserved.
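The flavour of the MILP can be conveyed by a deliberately tiny exhaustive search over one siting decision and a discretised feedstock split. All sites, supplies and costs below are invented for illustration, not taken from the case studies:

```python
# Hypothetical data: candidate biorefinery sites, biomass supplies (t),
# per-tonne transport costs to each site, and per-tonne storage costs.
sites = ["A", "B"]
supply = {"fellings": 400, "stubble": 300}
transport = {("fellings", "A"): 12.0, ("fellings", "B"): 20.0,
             ("stubble", "A"): 18.0, ("stubble", "B"): 9.0}
storage = {"fellings": 1.0, "stubble": 4.0}   # stubble storage is costlier
demand = 500  # plant feedstock demand (t)

def best_plan():
    """Tiny exhaustive stand-in for the MILP: pick one site and a feedstock
    split (50 t increments) meeting demand at minimum total cost."""
    best = None
    for site in sites:
        for x in range(0, supply["fellings"] + 1, 50):   # t of fellings
            y = demand - x                               # t of stubble
            if not 0 <= y <= supply["stubble"]:
                continue
            cost = (x * (transport[("fellings", site)] + storage["fellings"])
                    + y * (transport[("stubble", site)] + storage["stubble"]))
            if best is None or cost < best[0]:
                best = (cost, site, x, y)
    return best
```

A real MILP solver handles the same structure (binary siting variables, continuous flows, capacity constraints) at a scale where enumeration is impossible, but the cost trade-off between transport and storage is the same.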

  20. Optimising energy recovery and use of chemicals, resources and materials in modern waste-to-energy plants.

    PubMed

    De Greef, J; Villani, K; Goethals, J; Van Belle, H; Van Caneghem, J; Vandecasteele, C

    2013-11-01

    Due to ongoing developments in the EU waste policy, Waste-to-Energy (WtE) plants are to be optimized beyond current acceptance levels. In this paper, a non-exhaustive overview of advanced technical improvements is presented and illustrated with facts and figures from state-of-the-art combustion plants for municipal solid waste (MSW). Some of the data included originate from regular WtE plant operation - before and after optimisation - as well as from defined plant-scale research. Aspects of energy efficiency and (re-)use of chemicals, resources and materials are discussed and support, in light of best available techniques (BAT), the idea that WtE plant performance still can be improved significantly, without direct need for expensive techniques, tools or re-design. In first instance, diagnostic skills and a thorough understanding of processes and operations allow for reclaiming the silent optimisation potential. Copyright © 2013 Elsevier Ltd. All rights reserved.

  1. An integrated framework for high level design of high performance signal processing circuits on FPGAs

    NASA Astrophysics Data System (ADS)

    Benkrid, K.; Belkacemi, S.; Sukhsawas, S.

    2005-06-01

    This paper proposes an integrated framework for the high level design of high performance signal processing algorithms' implementations on FPGAs. The framework emerged from a constant need to rapidly implement increasingly complicated algorithms on FPGAs while maintaining the high performance needed in many real time digital signal processing applications. This is particularly important for application developers who often rely on iterative and interactive development methodologies. The central idea behind the proposed framework is to dynamically integrate high performance structural hardware description languages with higher level hardware languages in other to help satisfy the dual requirement of high level design and high performance implementation. The paper illustrates this by integrating two environments: Celoxica's Handel-C language, and HIDE, a structural hardware environment developed at the Queen's University of Belfast. On the one hand, Handel-C has been proven to be very useful in the rapid design and prototyping of FPGA circuits, especially control intensive ones. On the other hand, HIDE, has been used extensively, and successfully, in the generation of highly optimised parameterisable FPGA cores. In this paper, this is illustrated in the construction of a scalable and fully parameterisable core for image algebra's five core neighbourhood operations, where fully floorplanned efficient FPGA configurations, in the form of EDIF netlists, are generated automatically for instances of the core. In the proposed combined framework, highly optimised data paths are invoked dynamically from within Handel-C, and are synthesized using HIDE. Although the idea might seem simple prima facie, it could have serious implications on the design of future generations of hardware description languages.

  2. Conjugate gradient minimisation approach to generating holographic traps for ultracold atoms.

    PubMed

    Harte, Tiffany; Bruce, Graham D; Keeling, Jonathan; Cassettari, Donatella

    2014-11-03

    Direct minimisation of a cost function can in principle provide a versatile and highly controllable route to computational hologram generation. Here we show that the careful design of cost functions, combined with numerically efficient conjugate gradient minimisation, establishes a practical method for the generation of holograms for a wide range of target light distributions. This results in a guided optimisation process, with a crucial advantage illustrated by the ability to circumvent optical vortex formation during hologram calculation. We demonstrate the implementation of the conjugate gradient method for both discrete and continuous intensity distributions and discuss its applicability to optical trapping of ultracold atoms.
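For a quadratic cost, the guided optimisation reduces to the classic linear conjugate gradient iteration. The minimal stand-alone version below is a generic stand-in for the paper's hologram cost functions, which are minimised analogously with numerically evaluated gradients:

```python
def conjugate_gradient(A, b, x0, iters=50, tol=1e-10):
    """Minimise f(x) = 0.5 x^T A x - b^T x (A symmetric positive-definite)
    by conjugate gradients; each search direction is conjugate to the
    previous ones, so convergence needs at most dim(x) steps."""
    n = len(b)
    mv = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(a * c for a, c in zip(u, v))
    x = list(x0)
    r = [bi - ri for bi, ri in zip(b, mv(A, x))]   # residual = -grad f(x)
    p = list(r)
    for _ in range(iters):
        rr = dot(r, r)
        if rr < tol:
            break
        Ap = mv(A, p)
        alpha = rr / dot(p, Ap)                    # exact line search
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        beta = dot(r, r) / rr                      # Fletcher-Reeves update
        p = [ri + beta * pi for ri, pi in zip(r, p)]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b, [0.0, 0.0])
```

Hologram costs are not quadratic, but the same conjugate-direction machinery applies once the gradient of the cost with respect to the hologram phases is available.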

  3. Discovery of Rigidified α,β-Unsaturated Imines as New Resistance-breaking Insecticides for Malaria Vector Control.

    PubMed

    Arlt, Alexander; Böhnke, Niels; Horstmann, Sebastian; Vermeer, Arnoldus W P; Werner, Stefan; Velten, Robert

    2016-10-01

    During our continuous search for new resistance-breaking insecticides applicable to malaria vector control, a new class of α,β-unsaturated imines was identified by applying the principle of conformational rigidification as a powerful tool for compound optimisation. Herein we describe the successful synthesis of these compounds and their biological test results. Our lead compound 16 from this insecticidal class outperforms market standards, notably for the control of mosquito strains that exhibit either metabolic or target-site resistance to these established insecticides. In our model system for insecticide-treated mosquito nets the compound reveals long-lasting efficacy for up to several months.

  4. Performance Analysis and Discussion on the Thermoelectric Element Footprint for PV-TE Maximum Power Generation

    NASA Astrophysics Data System (ADS)

    Li, Guiqiang; Zhao, Xudong; Jin, Yi; Chen, Xiao; Ji, Jie; Shittu, Samson

    2018-06-01

    Geometrical optimisation is a valuable way to improve the efficiency of a thermoelectric element (TE). In a hybrid photovoltaic-thermoelectric (PV-TE) system, the photovoltaic (PV) and thermoelectric (TE) components have a relatively complex relationship; their interacting effects mean that geometrical optimisation of the TE element alone may not be sufficient to optimise the entire PV-TE hybrid system. In this paper, we introduce a parametric optimisation of the geometry of the thermoelectric element footprint for a PV-TE system. A uni-couple TE model was built for the PV-TE using the finite element method and temperature-dependent thermoelectric material properties. Two types of PV cells were investigated, and the performance of the PV-TE with different TE element lengths and footprint areas was analysed. The outcome showed that, regardless of the TE element's length and footprint area, the maximum power output occurs when An/Ap = 1. This finding is useful, as it provides a reference whenever PV-TE optimisation is investigated.

  5. Enhancement of γ-aminobutyric acid production in recombinant Corynebacterium glutamicum by co-expressing two glutamate decarboxylase genes from Lactobacillus brevis.

    PubMed

    Shi, Feng; Jiang, Junjun; Li, Yongfu; Li, Youxin; Xie, Yilong

    2013-11-01

    γ-Aminobutyric acid (GABA), a non-protein amino acid, is a bioactive component in the food, feed and pharmaceutical fields. To establish an effective single-step production system for GABA, a recombinant Corynebacterium glutamicum strain co-expressing two glutamate decarboxylase (GAD) genes (gadB1 and gadB2) derived from Lactobacillus brevis Lb85 was constructed. Compared with the GABA production of the gadB1 or gadB2 single-expressing strains, GABA production by the gadB1-gadB2 co-expressing strain increased more than twofold. By optimising urea supplementation, the total production of L-glutamate and GABA increased from 22.57 ± 1.24 to 30.18 ± 1.33 g L⁻¹, and GABA production increased from 4.02 ± 0.95 to 18.66 ± 2.11 g L⁻¹ after 84-h cultivation. Under optimal urea supplementation, L-glutamate continued to be consumed, GABA continued to accumulate after 36 h of fermentation, and the pH level fluctuated. GABA production increased to a maximum level of 27.13 ± 0.54 g L⁻¹ after 120-h flask cultivation and 26.32 g L⁻¹ after 60-h fed-batch fermentation. The conversion ratio of L-glutamate to GABA reached 0.60-0.74 mol mol⁻¹. By co-expressing gadB1 and gadB2 and optimising the urea addition method, C. glutamicum was genetically improved for de novo biosynthesis of GABA from its own accumulated L-glutamate.
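The reported mol mol⁻¹ conversion ratio can be reproduced from concentrations with a short molar-mass calculation. The 20 and 10 g L⁻¹ figures below are round illustrative numbers, not the paper's raw data:

```python
# Molar masses (g/mol): L-glutamate 147.13, GABA 103.12.
M_GLU, M_GABA = 147.13, 103.12

def conversion_ratio(glu_consumed_g_per_l, gaba_formed_g_per_l):
    """Mol GABA formed per mol L-glutamate consumed. Decarboxylation
    releases CO2, so 1 mol/mol is the theoretical ceiling."""
    return (gaba_formed_g_per_l / M_GABA) / (glu_consumed_g_per_l / M_GLU)

# Illustrative check with round numbers:
ratio = conversion_ratio(20.0, 10.0)
```

With these illustrative inputs the ratio lands near 0.71 mol mol⁻¹, i.e. within the 0.60-0.74 mol mol⁻¹ range the study reports.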

  6. Molecular simulation of the thermophysical properties and phase behaviour of impure CO2 relevant to CCS.

    PubMed

    Cresswell, Alexander J; Wheatley, Richard J; Wilkinson, Richard D; Graham, Richard S

    2016-10-20

    Impurities from the CCS chain can greatly influence the physical properties of CO2. This has important design, safety and cost implications for the compression, transport and storage of CO2. There is an urgent need to understand and predict the properties of impure CO2 to assist with CCS implementation. However, CCS presents demanding modelling requirements. A suitable model must both accurately and robustly predict CO2 phase behaviour over a wide range of temperatures and pressures, and maintain that predictive power for CO2 mixtures with numerous, mutually interacting chemical species. A promising technique to address this task is molecular simulation. It offers a molecular approach, with foundations in firmly established physical principles, along with the potential to predict the wide range of physical properties required for CCS. The quality of predictions from molecular simulation depends on accurate force-fields to describe the interactions between CO2 and other molecules. Unfortunately, there is currently no universally applicable method to obtain force-fields suitable for molecular simulation. In this paper we present two methods of obtaining force-fields: the first being semi-empirical and the second using ab initio quantum-chemical calculations. In the first approach we optimise the impurity force-field against measurements of the phase and pressure-volume behaviour of CO2 binary mixtures with N2, O2, Ar and H2. A gradient-free optimiser allows us to use the simulation itself as the underlying model. This leads to accurate and robust predictions under conditions relevant to CCS. In the second approach we use quantum-chemical calculations to produce ab initio evaluations of the interactions between CO2 and relevant impurities, taking N2 as an exemplar. We use a modest number of these calculations to train a machine-learning algorithm, known as a Gaussian process, to describe these data. The resulting model is then able to accurately predict a much broader set of ab initio force-field calculations at comparatively low numerical cost. Although our method is not yet ready to be implemented in a molecular simulation, we outline the necessary steps here. Such simulations have the potential to deliver first-principles simulation of the thermodynamic properties of impure CO2, without fitting to experimental data.
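A bare-bones version of the Gaussian-process step might look like the sketch below. The training separations and energies are toy values, and the squared-exponential kernel with unit hyperparameters is an assumption, not the study's actual setup:

```python
import math

def rbf(x1, x2, length=1.0):
    # squared-exponential (RBF) kernel with unit variance
    return math.exp(-0.5 * ((x1 - x2) / length) ** 2)

def solve(A, b):
    # Gaussian elimination with partial pivoting (small systems only).
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(xs, ys, x_star, noise=1e-8):
    """GP posterior mean at x_star: k_*^T (K + noise*I)^{-1} y."""
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    alpha = solve(K, ys)
    return sum(rbf(x, x_star) * a for x, a in zip(xs, alpha))

# Toy "ab initio" pair-interaction energies at four separations.
xs = [1.0, 1.5, 2.0, 3.0]
ys = [2.0, -0.8, -0.3, -0.05]
e = gp_predict(xs, ys, 1.5)
```

This is the essence of the surrogate idea: a handful of expensive quantum-chemical evaluations train the GP, which then interpolates the interaction energy at arbitrary separations far more cheaply.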

  7. Analysis and optimisation of a mixed fluid cascade (MFC) process

    NASA Astrophysics Data System (ADS)

    Ding, He; Sun, Heng; Sun, Shoujun; Chen, Cheng

    2017-04-01

    A mixed fluid cascade (MFC) process comprising three refrigeration cycles has great capacity for large-scale LNG production, which consumes a great amount of energy. Therefore, any performance enhancement of the liquefaction process will significantly reduce the energy consumption. The MFC process is simulated and analysed using proprietary software, Aspen HYSYS. The effects of feed gas pressure, LNG storage pressure, water-cooler outlet temperature, different pre-cooling regimes, and liquefaction and sub-cooling refrigerant composition on MFC performance are investigated and presented. The excellent numerical calculation ability and user-friendly interface of MATLAB™ are combined with the powerful thermo-physical property package of Aspen HYSYS. A genetic algorithm is then invoked to optimise the MFC process globally. After optimisation, the unit power consumption can be reduced to 4.655 kW h/kmol, or 4.366 kW h/kmol, when the compressor adiabatic efficiency is 80% or 85%, respectively. Additionally, to improve the process further with regard to its thermodynamic efficiency, configuration optimisation is conducted for the MFC process and several configurations are established. By analysing heat transfer and thermodynamic performance, the configuration entailing a pre-cooling cycle with three pressure levels and liquefaction and sub-cooling cycles with one pressure level each is identified as the most efficient and thus optimal: its unit power consumption is 4.205 kW h/kmol. Additionally, the mechanism responsible for the weak performance of the suggested liquefaction cycle configuration lies in the unbalanced distribution of cold energy over the liquefaction temperature range.

  8. Using Cartoons to Transfer Knowledge Concerning the Principles of Work Disability Prevention Among Stakeholders.

    PubMed

    Labrecque, Marie-Elise; Coutu, Marie-France; Durand, Marie-José; Fassier, Jean-Baptiste; Loisel, Patrick

    2016-06-01

    Purpose This study assesses how well two cartoons transfer knowledge of principles of work disability prevention among stakeholders, according to their level of experience. We also document stakeholders' perceptions of the usefulness of the cartoons. Method We performed a descriptive study. Two groups of stakeholders were recruited: (1) experienced (working for more than 2 years in work disability), (2) non-experienced (in training). A self-administered questionnaire with open-ended questions documented stakeholders' understanding of each cartoon box and their perception of the possible usefulness of the cartoons. We transformed qualitative responses into quantitative responses for descriptive purposes. We performed independent t tests to compare the groups' level of understanding, and content analysis for the perception of usefulness. Results Overall, 149 stakeholders (50 experienced and 99 non-experienced) participated, identifying 79.4% and 61.4% of all principles presented in the two cartoons, respectively. Experienced stakeholders identified more principles than non-experienced stakeholders (p = 0.007). Both cartoons were perceived to be useful for knowledge transfer. Conclusions Principles were generally well identified in the cartoons by all participants. Cartoons can be used as an effective tool among stakeholders to achieve a common understanding in order to coordinate their actions.

  9. Crew resource management training in the intensive care unit. A multisite controlled before-after study.

    PubMed

    Kemper, Peter F; de Bruijne, Martine; van Dyck, Cathy; So, Ralph L; Tangkau, Peter; Wagner, Cordula

    2016-08-01

    There is a growing awareness today that adverse events in the intensive care unit (ICU) are more often caused by problems related to non-technical skills than by a lack of technical, or clinical, expertise. Team training, such as crew resource management (CRM), aims to improve these non-technical skills. The present study evaluated the effectiveness of CRM in the ICU. Six ICUs participated in a paired controlled trial, with one pretest and two post-test measurements (after 3 and 12 months). Three ICUs received CRM training and were compared with a matched control unit. The 2-day classroom-based training was delivered to multidisciplinary groups (ie, ICU physicians, nurses, managers). All levels of Kirkpatrick's evaluation framework were assessed using a mixed method design, including questionnaires, observations and routinely administered patient outcome data. Level I-reaction: participants were very positive directly after the training. Level II-learning: attitudes towards behaviour aimed at optimising situational awareness were relatively high at baseline and remained stable. Level III-behaviour: self-reported behaviour aimed at optimising situational awareness improved in the intervention group. No changes were found in observed explicit professional oral communication. Level IV-organisation: patient outcomes were unaffected. Error management culture and job satisfaction improved in the intervention group. Patient safety culture improved in both control and intervention units. We can conclude that CRM, as delivered in the present study, does not change behaviour or patient outcomes by itself, yet changes how participants think about errors and risks. This indicates that CRM requires a combination with other initiatives in order to improve clinical outcomes. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  10. Improving Vector Evaluated Particle Swarm Optimisation by Incorporating Nondominated Solutions

    PubMed Central

    Lim, Kian Sheng; Ibrahim, Zuwairie; Buyamin, Salinda; Ahmad, Anita; Naim, Faradila; Ghazali, Kamarul Hawari; Mokhtar, Norrima

    2013-01-01

    The Vector Evaluated Particle Swarm Optimisation algorithm is widely used to solve multiobjective optimisation problems. This algorithm optimises one objective using a swarm of particles whose movements are guided by the best solution found by another swarm. However, the best solution of a swarm is only updated when a newly generated solution has better fitness than the best solution at the objective function optimised by that swarm, yielding poor solutions for multiobjective optimisation problems. Thus, an improved Vector Evaluated Particle Swarm Optimisation algorithm is introduced by incorporating nondominated solutions as the guidance for a swarm rather than using the best solution from another swarm. In this paper, the performance of the improved Vector Evaluated Particle Swarm Optimisation algorithm is investigated using performance measures such as the number of nondominated solutions found, the generational distance, the spread, and the hypervolume. The results suggest that the improved Vector Evaluated Particle Swarm Optimisation algorithm has impressive performance compared with the conventional Vector Evaluated Particle Swarm Optimisation algorithm. PMID:23737718
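    The key modification — guiding each swarm with nondominated solutions rather than another swarm's single best — rests on maintaining a Pareto archive. A minimal sketch of the dominance test and archive update for a minimisation problem (the sample objective vectors are invented):

```python
import random

def dominates(f1, f2):
    """True if objective vector f1 Pareto-dominates f2 (minimisation)."""
    return all(a <= b for a, b in zip(f1, f2)) and any(a < b for a, b in zip(f1, f2))

def update_archive(archive, candidate):
    """Keep the archive mutually nondominated after offering a candidate."""
    if any(dominates(f, candidate) for f in archive):
        return archive                      # candidate is dominated: reject it
    return [f for f in archive if not dominates(candidate, f)] + [candidate]

# Build the archive from some invented two-objective evaluations:
archive = []
for f in [(1, 5), (3, 3), (5, 1), (4, 4), (2, 6)]:
    archive = update_archive(archive, f)

# A swarm's velocity update would then draw a random archive member as its
# social guide instead of using the other swarm's single best solution:
guide = random.choice(archive)
```

    Here the dominated points (4, 4) and (2, 6) are filtered out, leaving the three-point Pareto front as the pool of guides.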

  11. Improving Vector Evaluated Particle Swarm Optimisation by incorporating nondominated solutions.

    PubMed

    Lim, Kian Sheng; Ibrahim, Zuwairie; Buyamin, Salinda; Ahmad, Anita; Naim, Faradila; Ghazali, Kamarul Hawari; Mokhtar, Norrima

    2013-01-01

    The Vector Evaluated Particle Swarm Optimisation algorithm is widely used to solve multiobjective optimisation problems. This algorithm optimises one objective using a swarm of particles whose movements are guided by the best solution found by another swarm. However, the best solution of a swarm is only updated when a newly generated solution has better fitness than the best solution at the objective function optimised by that swarm, yielding poor solutions for multiobjective optimisation problems. Thus, an improved Vector Evaluated Particle Swarm Optimisation algorithm is introduced by incorporating nondominated solutions as the guidance for a swarm rather than using the best solution from another swarm. In this paper, the performance of the improved Vector Evaluated Particle Swarm Optimisation algorithm is investigated using performance measures such as the number of nondominated solutions found, the generational distance, the spread, and the hypervolume. The results suggest that the improved Vector Evaluated Particle Swarm Optimisation algorithm has impressive performance compared with the conventional Vector Evaluated Particle Swarm Optimisation algorithm.

  12. Principles of precision medicine in stroke.

    PubMed

    Hinman, Jason D; Rost, Natalia S; Leung, Thomas W; Montaner, Joan; Muir, Keith W; Brown, Scott; Arenillas, Juan F; Feldmann, Edward; Liebeskind, David S

    2017-01-01

    The era of precision medicine has arrived and conveys tremendous potential, particularly for stroke neurology. The diagnosis of stroke, its underlying aetiology, theranostic strategies, recurrence risk and path to recovery are populated by a series of highly individualised questions. Moreover, the phenotypic complexity of a clinical diagnosis of stroke makes a simple genetic risk assessment only partially informative on an individual basis. The guiding principles of precision medicine in stroke underscore the need to identify, value, organise and analyse the multitude of variables obtained from each individual to generate a precise approach to optimise cerebrovascular health. Existing data may be leveraged with novel technologies, informatics and practical clinical paradigms to apply these principles in stroke and realise the promise of precision medicine. Importantly, precision medicine in stroke will only be realised once efforts to collect, value and synthesise the wealth of data collected in clinical trials and routine care start. Stroke theranostics, the ultimate vision of synchronising tailored therapeutic strategies based on specific diagnostic data, demands cerebrovascular expertise on big data approaches to clinically relevant paradigms. This review considers such challenges and delineates the principles on a roadmap for rational application of precision medicine to stroke and cerebrovascular health. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  13. Computational materials design of crystalline solids.

    PubMed

    Butler, Keith T; Frost, Jarvist M; Skelton, Jonathan M; Svane, Katrine L; Walsh, Aron

    2016-11-07

    The modelling of materials properties and processes from first principles is becoming sufficiently accurate as to facilitate the design and testing of new systems in silico. Computational materials science is both valuable and increasingly necessary for developing novel functional materials and composites that meet the requirements of next-generation technology. A range of simulation techniques are being developed and applied to problems related to materials for energy generation, storage and conversion including solar cells, nuclear reactors, batteries, fuel cells, and catalytic systems. Such techniques may combine crystal-structure prediction (global optimisation), data mining (materials informatics) and high-throughput screening with elements of machine learning. We explore the development process associated with computational materials design, from setting the requirements and descriptors to the development and testing of new materials. As a case study, we critically review progress in the fields of thermoelectrics and photovoltaics, including the simulation of lattice thermal conductivity and the search for Pb-free hybrid halide perovskites. Finally, a number of universal chemical-design principles are advanced.

  14. A REVIEW OF THE FUNDAMENTAL PRINCIPLES OF RADIATION PROTECTION WHEN APPLIED TO THE PATIENT IN DIAGNOSTIC RADIOLOGY.

    PubMed

    Moores, B Michael

    2017-06-01

    A review of the role and relevance of the principles of radiation protection of the patient in diagnostic radiology, as specified by ICRP, has been undertaken, taking into account the diagnostic risks arising from an examination. The increase in population doses arising from diagnostic radiology over the past 20 years has been due to the widespread application of higher-dose CT examinations that provide significantly more clinical information. Consequently, diagnostic risks as well as radiation risks need to be considered within the patient radiation protection framework. Justification and optimisation are discussed, and the limitations imposed on patient protection by employing only a radiation risk framework are highlighted. The example of radiation protection of the patient in breast screening programmes employing mammography is used to highlight the importance of defined diagnostic outcomes in any effective radiation protection strategy. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  15. A Concise National Report on Educational Developments in the Socialist People's Libyan Arab Jamahirya during the Two Academic Years 1974/75-1975/76.

    ERIC Educational Resources Information Center

    Secretariat of Education, Tripoli (Libya).

    The educational system of Libya is based upon principles set down in the constitution of 1969 and revised by a philosophy of education document in 1974. In accordance with these principles, education is compulsory on the primary and intermediate levels, free on all levels, and should be in accordance with national political and religious…

  16. An integrated modelling and multicriteria analysis approach to managing nitrate diffuse pollution: 2. A case study for a chalk catchment in England.

    PubMed

    Koo, B K; O'Connell, P E

    2006-04-01

    The site-specific land use optimisation methodology, suggested by the authors in the first part of this two-part paper, has been applied to the River Kennet catchment at Marlborough, Wiltshire, UK, for a case study. The Marlborough catchment (143 km²) is an agriculture-dominated rural area over a deep chalk aquifer that is vulnerable to nitrate pollution from agricultural diffuse sources. For evaluation purposes, the catchment was discretised into a network of 1 km × 1 km grid cells. For each of the arable-land grid cells, seven land use alternatives (four arable-land alternatives and three grassland alternatives) were evaluated for their environmental and economic potential. For environmental evaluation, nitrate leaching rates of land use alternatives were estimated using SHETRAN simulations, and groundwater pollution potential was evaluated using the DRASTIC index. For economic evaluation, economic gross margins were estimated using a simple agronomic model based on nitrogen response functions and agricultural land classification grades. In order to see whether the site-specific optimisation is efficient at the catchment scale, land use optimisation was carried out for four optimisation schemes (i.e. using four sets of criterion weights). Consequently, four land use scenarios were generated, and the site-specifically optimised land use scenario was evaluated as the best compromise solution between long-term nitrate pollution and agronomy at the catchment scale.
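    The per-cell choice among land use alternatives under a set of criterion weights amounts to a weighted-sum compromise. A toy sketch of that step — the alternative names and normalised scores below are invented for illustration, not taken from the study:

```python
# Invented per-cell scores, normalised to [0, 1] (1 = best): each land use
# alternative gets an environmental score (low nitrate leaching / low
# groundwater pollution potential) and an economic score (gross margin).
ALTERNATIVES = {
    "winter wheat":     (0.30, 0.90),
    "spring barley":    (0.45, 0.75),
    "low-input arable": (0.60, 0.55),
    "extensive grass":  (0.95, 0.25),
}

def best_alternative(w_env, w_econ):
    """Weighted-sum compromise choice for one grid cell."""
    def score(name):
        env, econ = ALTERNATIVES[name]
        return w_env * env + w_econ * econ
    return max(ALTERNATIVES, key=score)
```

    Running the choice under different weight sets corresponds to the four optimisation schemes in the study: environment-heavy weights push cells towards grassland, economy-heavy weights towards arable cropping.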

  17. Dynamic least-cost optimisation of wastewater system remedial works requirements.

    PubMed

    Vojinovic, Z; Solomatine, D; Price, R K

    2006-01-01

    In recent years, there has been increasing concern about wastewater system failure and the identification of an optimal set of remedial works requirements. Several methodologies have been developed and applied in asset management activities by water companies worldwide, but often with limited success. To fill this gap, several research projects have explored algorithms for optimising remedial works requirements, but mostly for drinking water supply systems; very limited work has been carried out for wastewater assets. Some of the major deficiencies of commonly used methods lie in one or more of the following aspects: inadequate representation of system complexity, incorporation of a dynamic model into the decision-making loop, the choice of an appropriate optimisation technique, and experience in applying that technique. This paper is oriented towards resolving these issues and discusses a new approach to the optimisation of wastewater system remedial works requirements. It is proposed that the search for an optimal solution is performed by a global optimisation tool (with various random search algorithms) while system performance is simulated by a hydrodynamic pipe network model. The work on assembling all required elements and developing appropriate interface protocols between the two tools, which decode potential remedial solutions into the pipe network model and calculate the corresponding scenario costs, is currently underway.
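    The proposed coupling — a global random-search optimiser proposing remedial solutions that are costed by decoding them into a network model — can be sketched with a toy stand-in for the hydrodynamic simulator. Pipe names, rehabilitation costs and damage values below are all invented:

```python
import random

PIPES = ["P1", "P2", "P3", "P4"]
REHAB_COST = {"P1": 10.0, "P2": 25.0, "P3": 15.0, "P4": 30.0}   # capital cost
FLOOD_DAMAGE = {"P1": 40.0, "P2": 12.0, "P3": 22.0, "P4": 8.0}  # if left as-is

def simulate_performance(rehab):
    # Stand-in for the hydrodynamic pipe network model: expected damage
    # cost contributed by every pipe that is not rehabilitated.
    return sum(FLOOD_DAMAGE[p] for p in PIPES if not rehab[p])

def scenario_cost(rehab):
    capital = sum(REHAB_COST[p] for p in PIPES if rehab[p])
    return capital + simulate_performance(rehab)

def random_search(n_trials=500, seed=7):
    """Global random search over binary rehabilitate/leave decisions."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        cand = {p: rng.random() < 0.5 for p in PIPES}
        if best is None or scenario_cost(cand) < scenario_cost(best):
            best = cand
    return best

best_plan = random_search()
```

    In this toy problem the search settles on rehabilitating only the pipes whose capital cost is below their avoided damage; in the real setting each `scenario_cost` call would run the hydrodynamic model.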

  18. Performance benchmark of LHCb code on state-of-the-art x86 architectures

    NASA Astrophysics Data System (ADS)

    Campora Perez, D. H.; Neufeld, N.; Schwemmer, R.

    2015-12-01

    For Run 2 of the LHC, LHCb is replacing a significant part of its event filter farm with new compute nodes. For the evaluation of the best performing solution, we have developed a method to convert our high level trigger application into a stand-alone, bootable benchmark image. With additional instrumentation we turned it into a self-optimising benchmark which explores techniques such as late forking, NUMA balancing and optimal number of threads, i.e. it automatically optimises box-level performance. We have run this procedure on a wide range of Haswell-E CPUs and numerous other architectures from both Intel and AMD, including also the latest Intel micro-blade servers. We present results in terms of performance, power consumption, overheads and relative cost.

  19. Maximizing the Adjacent Possible in Automata Chemistries.

    PubMed

    Hickinbotham, Simon; Clark, Edward; Nellis, Adam; Stepney, Susan; Clarke, Tim; Young, Peter

    2016-01-01

    Automata chemistries are good vehicles for experimentation in open-ended evolution, but they are by necessity complex systems whose low-level properties require careful design. To aid the process of designing automata chemistries, we develop an abstract model that classifies the features of a chemistry from a physical (bottom up) perspective and from a biological (top down) perspective. There are two levels: things that can evolve, and things that cannot. We equate the evolving level with biology and the non-evolving level with physics. We design our initial organisms in the biology, so they can evolve. We design the physics to facilitate evolvable biologies. This architecture leads to a set of design principles that should be observed when creating an instantiation of the architecture. These principles are Everything Evolves, Everything's Soft, and Everything Dies. To evaluate these ideas, we present experiments in the recently developed Stringmol automata chemistry. We examine the properties of Stringmol with respect to the principles, and so demonstrate the usefulness of the principles in designing automata chemistries.

  20. Screening of marine bacterial producers of polyunsaturated fatty acids and optimisation of production.

    PubMed

    Abd El Razak, Ahmed; Ward, Alan C; Glassey, Jarka

    2014-02-01

    Water samples from three different environments, including the Mid-Atlantic Ridge, Red Sea and Mediterranean Sea, were screened in order to isolate new bacterial producers of polyunsaturated fatty acids (PUFAs), especially eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA). Two hundred and fifty-one isolates were screened for PUFA production; the highest number of producers was isolated from the Mid-Atlantic Ridge, followed by the Red Sea, while no producers were found in the Mediterranean Sea samples. The screening strategy included a simple colourimetric method followed by confirmation via GC/MS. Among the tested producers, an isolate named 66 was found to be a potentially high PUFA producer, producing relatively high levels of EPA in particular. A Plackett-Burman statistical design of experiments was applied to screen a wide number of media components, identifying glycerol and whey as components of a production medium. This potential low-cost production medium was optimised by applying a response surface methodology to obtain the highest productivity, converting industrial by-products into value-added products. The maximum achieved productivity of EPA was 20 mg/g (45 mg/l), representing 11% of the total fatty acids, which is approximately five times more than the amount produced prior to optimisation. The production medium composition was 10.79 g/l whey and 6.87 g/l glycerol. To our knowledge, this is the first investigation of potential bacterial PUFA producers from the Mediterranean and Red Seas, providing an evaluation of a colourimetric screening method as a means of rapid screening of a large number of isolates.

  1. Radiation protection recommendations as applied to the disposal of long-lived solid radioactive waste. A report of The International Commission on Radiological Protection.

    PubMed

    1998-01-01

    (79) Waste, by definition, has no benefit. It should be viewed as one aspect of the beneficial practice that gave rise to it. Furthermore, radioactive waste management should be placed in the context of the management of society's waste in general. (80) A major issue in evaluating the acceptability of a disposal system for long-lived solid radioactive waste is that doses or risks may arise from exposures in the distant future. There is uncertainty surrounding any estimate of these doses or risks due to lack of knowledge about future conditions. Such exposures are treated as potential exposures as their magnitude depends on future processes and conditions that have probabilities associated with them. (81) Nevertheless, the Commission recognises a basic principle that individuals and populations in the future should be afforded at least the same level of protection from the action of disposing of radioactive waste today as is the current generation. This implies use of the current quantitative dose and risk criteria derived from considering associated health detriment. Therefore, protection of future generations should be achieved by applying these dose or risk criteria to the estimated future doses or risks in appropriately defined critical groups. These estimates should not be regarded as measures of health detriment beyond times of around several hundreds of years into the future. In the case of these longer time periods, they represent indicators of the protection afforded by the disposal system. (82) Constrained optimisation is the central approach to evaluating the radiological acceptability of a waste disposal system; dose or risk constraints are used rather than dose or risk limits. By this transition from limitation to optimisation, the needs of practical application of the radiological protection system to the disposal of long-lived solid waste are met: determination of acceptability now for exposures that may occur in the distant future.
Optimisation should be applied in an iterative manner during the disposal system development process and should particularly cover both site selection and repository design. (83) Two broad categories of exposure situations should be considered: natural processes and human intrusion. The latter only refers to intrusion that is inadvertent. The radiological implications of deliberate intrusion into a repository are the responsibility of the intruder. Assessed doses or risks arising from natural processes should be compared with a dose constraint of 0.3 mSv per year or its risk equivalent of around 10(-5) per year. With regard to human intrusion, the consequences from one or more plausible stylized scenarios should be considered in order to evaluate the resilience of the repository to such events. (84) The Commission considers that in circumstances where human intrusion could lead to doses to those living around the site sufficiently high that intervention on current criteria would almost always be justified, reasonable efforts should be made at the repository development stage to reduce the probability of human intrusion or to limit its consequences. In this respect, the Commission has previously advised that an existing annual dose of around 10 mSv per year may be used as a generic reference level below which intervention is not likely to be justifiable. Conversely, an existing annual dose of around 100 mSv per year may be used as a generic reference level above which intervention should be considered almost always justifiable. Similar considerations apply in situations where the thresholds for deterministic effects in relevant organs are exceeded. (85) Compliance with the constraints can be assessed by utilising either an aggregated risk-oriented approach, with a risk constraint, or a disaggregated dose/probability approach, with a dose constraint, or a combination of both. 
A similar level of protection can be achieved by any of these approaches; however, more information may

  2. Prokaryotic regulatory systems biology: Common principles governing the functional architectures of Bacillus subtilis and Escherichia coli unveiled by the natural decomposition approach.

    PubMed

    Freyre-González, Julio A; Treviño-Quintanilla, Luis G; Valtierra-Gutiérrez, Ilse A; Gutiérrez-Ríos, Rosa María; Alonso-Pavón, José A

    2012-10-31

    Escherichia coli and Bacillus subtilis are two of the best-studied prokaryotic model organisms. Previous analyses of their transcriptional regulatory networks have shown that they exhibit high plasticity during evolution and suggested that both converge to scale-free-like structures. Nevertheless, beyond this suggestion, no analyses have been carried out to identify the common systems-level components and principles governing these organisms. Here we show that these two phylogenetically distant organisms follow a set of common novel biologically consistent systems principles revealed by the mathematically and biologically founded natural decomposition approach. The discovered common functional architecture is a diamond-shaped, matryoshka-like, three-layer (coordination, processing, and integration) hierarchy exhibiting feedback, which is shaped by four systems-level components: global transcription factors (global TFs), locally autonomous modules, basal machinery and intermodular genes. The first mathematical criterion to identify global TFs, the κ-value, was reassessed on B. subtilis and confirmed its high predictive power by identifying all the previously reported, plus three potential, master regulators and eight sigma factors. The functionally conserved cores of modules, basal cell machinery, and a set of non-orthologous common physiological global responses were identified via both orthologous genes and non-orthologous conserved functions. This study reveals novel common systems principles maintained between two phylogenetically distant organisms and provides a comparison of their lifestyle adaptations. Our results shed new light on the systems-level principles and the fundamental functions required by bacteria to sustain life. Copyright © 2012 Elsevier B.V. All rights reserved.

  3. Optimal control of LQR for discrete time-varying systems with input delays

    NASA Astrophysics Data System (ADS)

    Yin, Yue-Zhu; Yang, Zhong-Lian; Yin, Zhi-Xiang; Xu, Feng

    2018-04-01

    In this work, we consider the optimal control problem of linear quadratic regulation for discrete time-varying systems with a single input and multiple input delays. An innovative and simple method to derive the optimal controller is given. The studied problem is first converted into an equivalent problem subject to a constraint condition. Then, using the established duality, the problem is transformed into a static mathematical optimisation problem without input delays. The optimal control input minimising the performance index function is derived by solving this optimisation problem with two methods. A numerical simulation example is carried out, and its results show that the two approaches are both feasible and very effective.
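    For the delay-free special case, the finite-horizon LQR controller is given by the standard backward Riccati recursion. The scalar sketch below illustrates that baseline (the delayed-input case treated in the paper additionally needs the duality transformation and is not reproduced here; all numbers are illustrative):

```python
def lqr_gains(a, b, q, r, qf, horizon):
    """Finite-horizon LQR for the scalar plant x[k+1] = a*x[k] + b*u[k]
    with cost sum(q*x^2 + r*u^2) + qf*x[N]^2, via backward Riccati
    recursion. Returns feedback gains K[0..N-1] for u[k] = -K[k]*x[k]."""
    P = qf
    gains = []
    for _ in range(horizon):
        K = a * b * P / (r + b * b * P)
        P = q + a * a * P - (a * b * P) ** 2 / (r + b * b * P)
        gains.append(K)
    gains.reverse()            # the recursion runs backwards in time
    return gains

def rollout(a, b, x0, gains):
    x = x0
    for K in gains:
        x = (a - b * K) * x    # closed loop with u = -K*x
    return x

# An open-loop-unstable plant (a = 1.2) regulated towards the origin:
gains = lqr_gains(a=1.2, b=1.0, q=1.0, r=1.0, qf=1.0, horizon=50)
x_final = rollout(1.2, 1.0, 1.0, gains)
```

    The early gains are already close to their infinite-horizon value, so the closed-loop pole |a - b*K| sits well inside the unit circle and the state decays essentially geometrically.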

  4. Population pharmacokinetic modelling of rupatadine solution in 6-11 year olds and optimisation of the experimental design in younger children.

    PubMed

    Santamaría, Eva; Estévez, Javier Alejandro; Riba, Jordi; Izquierdo, Iñaki; Valle, Marta

    2017-01-01

    To optimise a pharmacokinetic (PK) study design of rupatadine for 2-5 year olds by using a population PK model developed with data from a study in 6-11 year olds. The design optimisation was driven by the need to avoid children's discomfort in the study. PK data from 6-11 year olds with allergic rhinitis available from a previous study were used to construct a population PK model which we used in simulations to assess the dose to administer in a study in 2-5 year olds. In addition, an optimal design approach was used to determine the most appropriate number of sampling groups, sampling days, total samples and sampling times. A two-compartmental model with first-order absorption and elimination, with clearance dependent on weight adequately described the PK of rupatadine for 6-11 year olds. The dose selected for a trial in 2-5 year olds was 2.5 mg, as it provided a Cmax below the 3 ng/ml threshold. The optimal study design consisted of four groups of children (10 children each), a maximum sampling window of 2 hours in two clinic visits for drawing three samples on day 14 and one on day 28 coinciding with the final examination of the study. A PK study design was optimised in order to prioritise avoidance of discomfort for enrolled 2-5 year olds by taking only four blood samples from each child and minimising the length of hospital stays.
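    A two-compartment model with first-order absorption and elimination, as used here, can be simulated numerically to inspect Cmax against a threshold. The sketch below uses forward-Euler integration; every rate constant, the volume and the dose scaling are invented placeholders, not the rupatadine population estimates:

```python
def simulate_pk(dose_ng, ka, ke, k12, k21, v_ml, hours=24.0, dt=0.001):
    """Forward-Euler integration of a two-compartment disposition model
    with first-order absorption (ka) and elimination (ke), rates in 1/h.
    Returns (cmax, tmax): peak central concentration (ng/ml) and its time."""
    depot, central, peripheral = dose_ng, 0.0, 0.0
    cmax, tmax, t = 0.0, 0.0, 0.0
    for _ in range(int(hours / dt)):
        absorbed = ka * depot
        d_central = absorbed - (ke + k12) * central + k21 * peripheral
        d_peripheral = k12 * central - k21 * peripheral
        depot -= absorbed * dt
        central += d_central * dt
        peripheral += d_peripheral * dt
        t += dt
        conc = central / v_ml
        if conc > cmax:
            cmax, tmax = conc, t
    return cmax, tmax

# A 2.5 mg oral dose (2.5e6 ng) with hypothetical parameter values:
cmax, tmax = simulate_pk(dose_ng=2.5e6, ka=1.0, ke=0.3, k12=0.2, k21=0.1,
                         v_ml=1.5e6)
```

    In a dose-selection exercise like the one described, such simulations (with the fitted population parameters) would be repeated across candidate doses until the predicted Cmax stays below the chosen threshold.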

  5. Effect of planecta and ROSE™ on the frequency characteristics of blood pressure-transducer kits.

    PubMed

    Fujiwara, Shigeki; Kawakubo, Yoshifumi; Mori, Satoshi; Tachihara, Keiichi; Toyoguchi, Izumi; Yokoyama, Takeshi

    2015-12-01

    Pressure-transducer kits have frequency characteristics such as natural frequency and damping coefficient, which affect the monitoring accuracy. The aim of the present study was to investigate the effect of planecta ports and a damping device (ROSE™, Argon Medical Devices, TX, USA) on the frequency characteristics of pressure-transducer kits. The FloTrac sensor kit (Edwards Lifesciences, CA, USA) and the DTXplus transducer kit (Argon Medical Devices) were prepared with planecta ports, and their frequency characteristics were tested with or without ROSE™. The natural frequency and damping coefficient of each kit were obtained using frequency characteristics analysis software and evaluated by plotting them on the Gardner's chart. By inserting a planecta port, the natural frequency markedly decreased in both the FloTrac sensor kit (from 40 to 22 Hz) and the DTXplus transducer kit (from 35 to 22 Hz). In both kits with one planecta port, the damping coefficient markedly increased by insertion of ROSE™ from 0.2 to 0.5, optimising frequency characteristics. In both kits with two planecta ports, however, the natural frequency decreased from 22 to 12 Hz. The damping coefficient increased from 0.2 to 0.8 by insertion of ROSE™; however, optimisation was not achieved even by ROSE™ insertion. Planecta ports decrease the natural frequency of the kit. ROSE™ is useful to optimise the frequency characteristics in the kits without or with one planecta port. However, optimisation is difficult with two or more planecta ports, even with the ROSE™ device.
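    The interplay reported here — natural frequency dropping as ports are added, and the damping device raising the damping coefficient — can be made concrete with the standard second-order amplitude-ratio formula. The fn and zeta values are the figures quoted in the abstract; the 10 Hz evaluation frequency is an arbitrary example of a blood-pressure harmonic:

```python
import math

def amplitude_ratio(f, fn, zeta):
    """Gain of a second-order pressure-monitoring system at signal
    frequency f (Hz), given natural frequency fn (Hz) and damping
    coefficient zeta."""
    r = f / fn
    return 1.0 / math.sqrt((1.0 - r * r) ** 2 + (2.0 * zeta * r) ** 2)

# One-planecta kit (fn = 22 Hz) at a 10 Hz pressure harmonic, before
# (zeta = 0.2) and after (zeta = 0.5) inserting the damping device:
undamped = amplitude_ratio(10.0, 22.0, 0.2)
damped = amplitude_ratio(10.0, 22.0, 0.5)
```

    With two planecta ports (fn = 12 Hz) the same 10 Hz harmonic sits near resonance, where no value of zeta yields a gain close to unity — consistent with the finding that optimisation fails even with the damping device.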

  6. H∞ output tracking control of uncertain and disturbed nonlinear systems based on neural network model

    NASA Astrophysics Data System (ADS)

    Li, Chengcheng; Li, Yuefeng; Wang, Guanglin

    2017-07-01

    The work presented in this paper seeks to address the tracking problem for uncertain continuous nonlinear systems with external disturbances. The objective is to obtain a model that uses a reference-based output feedback tracking control law. The control scheme is based on neural networks and a linear difference inclusion (LDI) model, and a PDC structure and H∞ performance criterion are used to attenuate external disturbances. The stability of the whole closed-loop model is investigated using the well-known quadratic Lyapunov function. The key principles of the proposed approach are as follows: neural networks are first used to approximate nonlinearities, to enable a nonlinear system to then be represented as a linearised LDI model. An LMI (linear matrix inequality) formula is obtained for uncertain and disturbed linear systems. This formula enables a solution to be obtained through an interior point optimisation method for some nonlinear output tracking control problems. Finally, simulations and comparisons are provided on two practical examples to illustrate the validity and effectiveness of the proposed method.

  7. Impact and hardness optimisation of composite materials inspired by the babassu nut (Orbignya speciosa).

    PubMed

    Staufenberg, Gerrit; Graupner, Nina; Müssig, Jörg

    2015-08-20

    The babassu nut is the fruit of the babassu palm Orbignya speciosa. The combination of hardness and impact strength is difficult to achieve in artificial materials, making the babassu nut a promising source of biomimetic inspiration. Unnotched Charpy impact tests, Shore D hardness tests and scanning electron microscopy were used for mechanical and microscopical analysis of the pericarp. Four major principles were found for a biomimetic approach: a hard core ((1); endocarp) is embedded in a soft outer layer of high impact strength ((2); epicarp) and is reinforced with fibres of variable fineness (3), some of which are oriented radially to the core (4). Biomimetic fibre-reinforced composites were produced using abstracted mechanisms of the babassu nut, based on regenerated cellulose fibres (lyocell, L) with two different fineness values as reinforcement, embedded in a polylactide (PLA) core matrix with polypropylene (PP)-based outer layers. The biomimetic fibre composite reaches an impact strength 1.6 times that of the reference sample produced from a PLA/PP/L blend. At the same time, its hardness is slightly increased compared with PP/L.

  8. Estimating economic value of agricultural water under changing conditions and the effects of spatial aggregation.

    PubMed

    Medellín-Azuara, Josué; Harou, Julien J; Howitt, Richard E

    2010-11-01

    Given the high proportion of water used for agriculture in certain regions, the economic value of agricultural water can be an important tool for water management and policy development. This value is quantified using economic demand curves for irrigation water. Such demand functions show the incremental contribution of water to agricultural production. Water demand curves are estimated using econometric or optimisation techniques. Calibrated agricultural optimisation models allow the derivation of demand curves from smaller datasets than econometric models require. This paper introduces these subject areas and then explores the effect of spatial aggregation (upscaling) on the valuation of water for irrigated agriculture. A case study from the Rio Grande-Rio Bravo Basin in northern Mexico investigates differences in valuation at the farm level and the regionally aggregated level under four scenarios: technological change, warm-dry climate change, changes in agricultural commodity prices, and water costs for agriculture. The scenarios consider changes due to external shocks or new policies. Positive mathematical programming (PMP), a calibrated optimisation method, is the deductive valuation method used. An exponential cost function is compared to the quadratic cost functions typically used in PMP. Results indicate that the economic value of water at the farm level and the regionally aggregated level are similar, but that the variability and distributional effects of each scenario are affected by aggregation. Moderately aggregated agricultural production models are effective at capturing average-farm adaptation to policy changes and external shocks. Farm-level models best reveal the distribution of scenario impacts. Copyright © 2009 Elsevier B.V. All rights reserved.
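    The idea of deriving a water demand curve from a calibrated optimisation model can be sketched with hypothetical quadratic profit functions of the kind PMP calibrates (all coefficients below are illustrative, not the study's): the demand curve is the shadow price of the water constraint as total supply varies.

```python
import numpy as np

# Hypothetical calibrated quadratic profit functions for two crops,
# pi_i(w) = a_i*w - 0.5*b_i*w**2 (illustrative coefficients, $/ML units).
a = np.array([900.0, 600.0])   # marginal value of the first unit of water
b = np.array([3.0, 2.0])       # curvature recovered by PMP calibration

def shadow_price(W):
    """Marginal value of water when a total supply W is allocated optimally.

    At an interior optimum the marginal values equalise: a_i - b_i*w_i = lam
    with sum(w_i) = W; otherwise all water goes to the more valuable crop.
    """
    lam = (np.sum(a / b) - W) / np.sum(1.0 / b)   # interior candidate
    w = (a - lam) / b
    if np.all(w >= 0.0):
        return lam
    i = int(np.argmax(a))                          # corner solution
    return a[i] - b[i] * W

# The demand curve: the marginal value of water falls as supply grows.
supplies = np.array([50.0, 150.0, 300.0])
prices = np.array([shadow_price(W) for W in supplies])
```

    Tracing shadow prices over many supply levels yields the downward-sloping demand curve the abstract describes; an exponential cost function would change the shape of the curve but not the construction.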

  9. The Stock Market and Economic Principles: A Curriculum Project.

    ERIC Educational Resources Information Center

    Savino, Ronald J.

    This paper outlines how to teach macroeconomic principles through mock stock market investing while helping students develop economic awareness, interest, and comprehension on a more sophisticated level. The basic textbook is "The Economy Today" (B. R. Schiller). After two weeks of teaching basic economic concepts and vocabulary, such as…

  10. Impact of the introduction of a specialist critical care pharmacist on the level of pharmaceutical care provided to the critical care unit.

    PubMed

    Richter, Anja; Bates, Ian; Thacker, Meera; Jani, Yogini; O'Farrell, Bryan; Edwards, Caroline; Taylor, Helen; Shulman, Rob

    2016-08-01

    To evaluate the impact of a dedicated specialist critical care pharmacist service on patient care at a UK critical care unit (CCU). Pharmacist intervention data was collected in two phases. Phase 1 was with the provision of a non-specialist pharmacist chart review service and Phase 2 was after the introduction of a specialist dedicated pharmacy service. Two CCUs with established critical care pharmacist services were used as controls. The impact of pharmacist interventions on optimising drug therapy or preventing harm from medication errors was rated on a 4-point scale. There was an increase in the mean daily rate of pharmacist interventions after the introduction of the specialist critical care pharmacist (5.45 versus 2.69 per day, P < 0.0005). The critical care pharmacist intervened on more medication errors preventing potential harm and optimised more medications. There was no significant change to intervention rates at the control sites. Across all study sites the majority of pharmacist interventions were graded to have at least moderate impact on patient care. The introduction of a specialist critical care pharmacist resulted in an increased rate of pharmacist interventions compared to a non-specialist pharmacist service thus improving the quality of patient care. © 2016 The Authors. IJPP © 2016 Royal Pharmaceutical Society.

  11. Estimation of Power Consumption in the Circular Sawing of Stone Based on Tangential Force Distribution

    NASA Astrophysics Data System (ADS)

    Huang, Guoqin; Zhang, Meiqin; Huang, Hui; Guo, Hua; Xu, Xipeng

    2018-04-01

    Circular sawing is an important method for the processing of natural stone. The ability to predict sawing power is important in the optimisation, monitoring and control of the sawing process. In this paper, a predictive model of sawing power (PFD), based on the tangential force distribution at the sawing contact zone, was proposed, experimentally validated and modified. By accounting for the influence of sawing speed on the tangential force distribution, the modified PFD (MPFD) achieved high predictive accuracy across a wide range of sawing parameters, including sawing speed: the mean maximum absolute error rate was within 6.78%, and the maximum absolute error rate was within 11.7%. The practicability of predicting sawing power with the MPFD from only a few initial experimental samples was demonstrated in case studies; on the premise of high sample measurement accuracy, only two samples are required for a fixed sawing speed. The feasibility of applying the MPFD to optimise sawing parameters while lowering the energy consumption of the sawing system was also validated: in the case study, energy use was reduced by 28% by optimising the sawing parameters. The MPFD model can thus be used to predict sawing power, optimise sawing parameters and reduce energy consumption.
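    The principle behind a distribution-based power model can be sketched as follows, with a purely hypothetical tangential force distribution (the shape, contact span and speed are illustrative, not the MPFD's fitted form): integrating the distribution over the contact zone gives the total tangential force, and power follows as force times sawing speed.

```python
import numpy as np

# Hypothetical tangential force distribution over the sawing contact zone
# (shape, contact span and sawing speed are illustrative values only).
theta = np.linspace(0.0, 0.35, 200)            # contact angle (rad)
f_t = 40.0 * np.sin(np.pi * theta / 0.35)      # N/rad, peaking mid-zone

# Total tangential force: integrate the distribution over the contact zone
# (trapezoidal rule).
F_t = np.sum((f_t[1:] + f_t[:-1]) / 2.0 * np.diff(theta))

# Sawing power follows as total tangential force times peripheral speed.
sawing_speed = 30.0                            # m/s
power = F_t * sawing_speed                     # W
```

    The actual MPFD fits how this distribution changes with sawing parameters, which is what lets it extrapolate power from only two calibration samples per sawing speed.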

  12. A robust algorithm for optimisation and customisation of fractal dimensions of time series modified by nonlinearly scaling their time derivatives: mathematical theory and practical applications.

    PubMed

    Fuss, Franz Konstantin

    2013-01-01

    Standard methods for computing the fractal dimensions of time series are usually tested with continuous nowhere differentiable functions, but not benchmarked with actual signals. Therefore they can produce opposite results in extreme signals. These methods also use different scaling methods, that is, different amplitude multipliers, which makes it difficult to compare fractal dimensions obtained from different methods. The purpose of this research was to develop an optimisation method that computes the fractal dimension of a normalised (dimensionless) and modified time series signal with a robust algorithm and a running average method, and that maximises the difference between two fractal dimensions, for example, a minimum and a maximum one. The signal is modified by transforming its amplitude by a multiplier, which has a non-linear effect on the signal's time derivative. The optimisation method identifies the optimal multiplier of the normalised amplitude for targeted decision making based on fractal dimensions. The optimisation method provides an additional filter effect and makes the fractal dimensions less noisy. The method is exemplified by, and explained with, different signals, such as human movement, EEG, and acoustic signals.
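    The multiplier-optimisation idea can be sketched with a generic fractal dimension estimator (Katz's method, standing in for the paper's robust algorithm, which is not reproduced here): normalise the signal, scale its amplitude by a multiplier, and pick the multiplier that maximises the gap between the fractal dimensions of two signals.

```python
import numpy as np

def katz_fd(x):
    """Katz fractal dimension of a 1-D signal (a generic estimator used
    here for illustration; the paper's robust algorithm is not reproduced)."""
    x = np.asarray(x, dtype=float)
    n = x.size - 1                                  # number of steps
    L = np.sum(np.sqrt(1.0 + np.diff(x) ** 2))      # curve length, unit dt
    d = np.max(np.sqrt(np.arange(1, x.size) ** 2 + (x[1:] - x[0]) ** 2))
    return np.log10(n) / (np.log10(n) + np.log10(d / L))

def modified_fd(x, multiplier):
    # Normalise to a dimensionless amplitude, then scale: the multiplier
    # acts non-linearly on the signal's time derivative in this estimator.
    x = (x - x.min()) / (x.max() - x.min())
    return katz_fd(multiplier * x)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000)
smooth = np.sin(2.0 * np.pi * 5.0 * t)
noisy = smooth + 0.3 * rng.standard_normal(t.size)

# Optimisation step: pick the multiplier that maximises the FD difference.
multipliers = np.logspace(0, 3, 30)
gaps = [modified_fd(noisy, m) - modified_fd(smooth, m) for m in multipliers]
best_multiplier = float(multipliers[int(np.argmax(gaps))])
```

    Maximising the gap between the two fractal dimensions, as the abstract describes, makes a subsequent classification threshold between signal classes as wide as possible.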

  13. A Robust Algorithm for Optimisation and Customisation of Fractal Dimensions of Time Series Modified by Nonlinearly Scaling Their Time Derivatives: Mathematical Theory and Practical Applications

    PubMed Central

    2013-01-01

    Standard methods for computing the fractal dimensions of time series are usually tested with continuous nowhere differentiable functions, but not benchmarked with actual signals. Therefore they can produce opposite results in extreme signals. These methods also use different scaling methods, that is, different amplitude multipliers, which makes it difficult to compare fractal dimensions obtained from different methods. The purpose of this research was to develop an optimisation method that computes the fractal dimension of a normalised (dimensionless) and modified time series signal with a robust algorithm and a running average method, and that maximises the difference between two fractal dimensions, for example, a minimum and a maximum one. The signal is modified by transforming its amplitude by a multiplier, which has a non-linear effect on the signal's time derivative. The optimisation method identifies the optimal multiplier of the normalised amplitude for targeted decision making based on fractal dimensions. The optimisation method provides an additional filter effect and makes the fractal dimensions less noisy. The method is exemplified by, and explained with, different signals, such as human movement, EEG, and acoustic signals. PMID:24151522

  14. A new effective operator for the hybrid algorithm for solving global optimisation problems

    NASA Astrophysics Data System (ADS)

    Duc, Le Anh; Li, Kenli; Nguyen, Tien Trong; Yen, Vu Minh; Truong, Tung Khac

    2018-04-01

    Hybrid algorithms have recently been used to solve complex single-objective optimisation problems. The ultimate goal is to find an optimised global solution using these algorithms. Based on existing algorithms (HP_CRO, PSO, RCCRO), this study proposes a new hybrid algorithm called MPC (Mean-PSO-CRO), which utilises a new Mean-Search Operator. By employing this new operator, the proposed algorithm improves its search ability in areas of the solution space that the operators of previous algorithms do not explore; specifically, the Mean-Search Operator helps find better solutions than the other algorithms. Moreover, the authors propose two parameters: one balancing local and global search, and one balancing the various types of local search. In addition, three versions of this operator, which use different constraints, are introduced. The experimental results on 23 benchmark functions used in previous works show that our framework can find better optimal or close-to-optimal solutions with faster convergence for most of the benchmark functions, especially the high-dimensional ones. Thus, the proposed algorithm is more effective at solving single-objective optimisation problems than the other existing algorithms.
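    A mean-based search operator of the general kind described can be sketched as follows; the operator, parameters and greedy replacement loop are illustrative stand-ins, not the MPC algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(42)

def sphere(x):
    """Classic benchmark objective (one of the 23 standard test functions)."""
    return float(np.sum(x ** 2))

def mean_search_operator(pop, scale):
    """Sketch of a mean-based operator: sample a candidate around the
    population mean, so the search also covers the region between the
    current solutions. The exact MPC operator is not reproduced here."""
    centre = pop.mean(axis=0)
    return centre + scale * rng.standard_normal(pop.shape[1])

# Tiny illustrative loop with hypothetical parameters.
dim, pop_size = 5, 20
pop = rng.uniform(-5.0, 5.0, size=(pop_size, dim))
init_best = min(sphere(p) for p in pop)
for _ in range(200):
    cand = mean_search_operator(pop, scale=0.5)
    worst = int(np.argmax([sphere(p) for p in pop]))
    if sphere(cand) < sphere(pop[worst]):
        pop[worst] = cand            # greedy replacement of the worst member
best = min(sphere(p) for p in pop)
```

    In the paper this operator is interleaved with the PSO- and CRO-derived operators and moderated by the two balance parameters; the greedy loop above only isolates the mean-search idea.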

  15. Optimisation of the dosage of tranexamic acid in trauma patients with population pharmacokinetic analysis.

    PubMed

    Grassin-Delyle, S; Theusinger, O M; Albrecht, R; Mueller, S; Spahn, D R; Urien, S; Stein, P

    2018-06-01

    Tranexamic acid is used both pre-hospital and in-hospital as an antifibrinolytic drug to treat or prevent hyperfibrinolysis in trauma patients; dosing, however, remains empirical. We aimed to measure plasma levels of tranexamic acid in patients receiving pre-hospital anti-hyperfibrinolytic therapy and to build a population pharmacokinetic model to propose an optimised dosing regimen. Seventy-three trauma patients were enrolled and each received tranexamic acid 1 g intravenously pre-hospital. A blood sample was drawn after arrival in the emergency department; we measured the plasma tranexamic acid concentration using liquid chromatography-mass spectrometry and modelled the data using non-linear mixed effect modelling. Tranexamic acid was administered at a median (IQR [range]) time of 43 (30-55 [5-135]) min after trauma. Plasma tranexamic acid levels were determined on arrival at hospital, 57 (43-70 [20-148]) min after pre-hospital administration of the drug. The measured concentration was 28.7 (21.5-38.5 [8.7-89.0]) μg.ml⁻¹. Our subjects had sustained severe trauma; injury severity score 20 (16-29 [5-75]), including penetrating injury in 2.8% and isolated traumatic brain injury in 19.7%. The pharmacokinetics were ascribed a two-compartment open model with body weight as the main covariate. As tranexamic acid concentrations may fall below therapeutic levels during initial hospital treatment, we propose additional dosing schemes to maintain a specific target blood concentration for as long as required. This is the first study to investigate the plasma levels and pharmacokinetics of tranexamic acid after pre-hospital administration in trauma patients. Our proposed dosing regimen could be used in subsequent clinical trials to better study efficacy and tolerance profiles with controlled blood concentrations. © 2018 The Association of Anaesthetists of Great Britain and Ireland.
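    The two-compartment open model ascribed to the data has a standard bi-exponential bolus solution, sketched below with purely illustrative parameters (not the study's population estimates) and body weight entering as a covariate via standard allometric scaling.

```python
import numpy as np

def two_compartment_bolus(t, dose, CL, V1, Q, V2):
    """Plasma concentration after an IV bolus in a two-compartment open
    model (the standard bi-exponential solution)."""
    k10, k12, k21 = CL / V1, Q / V1, Q / V2
    s = k10 + k12 + k21
    root = np.sqrt(s ** 2 - 4.0 * k10 * k21)
    alpha, beta = (s + root) / 2.0, (s - root) / 2.0
    A = dose / V1 * (alpha - k21) / (alpha - beta)
    B = dose / V1 * (k21 - beta) / (alpha - beta)
    return A * np.exp(-alpha * t) + B * np.exp(-beta * t)

# Illustrative parameters only (not the study's population estimates), with
# body weight as the covariate via standard allometric scaling.
weight = 80.0                         # kg
CL = 7.0 * (weight / 70.0) ** 0.75    # clearance, L/h
V1 = 10.0 * (weight / 70.0)           # central volume, L
Q, V2 = 3.0, 15.0                     # inter-compartmental clearance (L/h), peripheral volume (L)

t = np.linspace(0.0, 8.0, 200)        # hours after a 1 g bolus
conc = two_compartment_bolus(t, 1000.0, CL, V1, Q, V2)   # mg/L, i.e. μg/ml
```

    Simulating such profiles against a target concentration is how additional dosing schemes of the kind the authors propose can be screened before a trial.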

  16. The origin of allometric scaling laws in biology from genomes to ecosystems: towards a quantitative unifying theory of biological structure and organization.

    PubMed

    West, Geoffrey B; Brown, James H

    2005-05-01

    Life is the most complex physical phenomenon in the Universe, manifesting an extraordinary diversity of form and function over an enormous scale, from the largest animals and plants to the smallest microbes and subcellular units. Despite this, many of its most fundamental and complex phenomena scale with size in a surprisingly simple fashion. For example, metabolic rate scales as the 3/4-power of mass over 27 orders of magnitude, from molecular and intracellular levels up to the largest organisms. Similarly, time-scales (such as lifespans and growth rates) and sizes (such as bacterial genome lengths, tree heights and mitochondrial densities) scale with exponents that are typically simple powers of 1/4. The universality and simplicity of these relationships suggest that fundamental universal principles underlie much of the coarse-grained generic structure and organisation of living systems. We have proposed a set of principles based on the observation that almost all life is sustained by hierarchical branching networks, which we assume have invariant terminal units, are space-filling and are optimised by the process of natural selection. We show how these general constraints explain quarter-power scaling and lead to a quantitative, predictive theory that captures many of the essential features of diverse biological systems. Examples considered include animal circulatory systems, plant vascular systems, growth, mitochondrial densities, and the concept of a universal molecular clock. Temperature considerations, dimensionality and the role of invariants are discussed. Criticisms and controversies associated with this approach are also addressed.
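    A power law such as the 3/4-power scaling of metabolic rate is a straight line in log-log space, which is how such exponents are typically estimated. A minimal sketch with synthetic data (the normalisation constant and noise level are illustrative):

```python
import numpy as np

# Synthetic masses spanning many orders of magnitude, with metabolic rates
# generated from Kleiber's law B = B0 * M**(3/4) plus lognormal noise
# (B0 = 3.4 and the noise level are illustrative, not fitted values).
rng = np.random.default_rng(1)
mass = np.logspace(-3, 6, 50)                                   # kg
rate = 3.4 * mass ** 0.75 * np.exp(0.05 * rng.standard_normal(mass.size))

# A power law B = B0 * M**k is a straight line in log-log space, so the
# exponent k is the slope of a log-log regression.
slope, intercept = np.polyfit(np.log(mass), np.log(rate), 1)
```

    The recovered slope sits close to the quarter-power value 0.75; on real data, the striking observation is that this holds across some 27 orders of magnitude.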

  17. Casuistry and principlism: the convergence of method in biomedical ethics.

    PubMed

    Kuczewski, M

    1998-12-01

    Casuistry and principlism are two of the leading contenders to be considered the methodology of bioethics. These methods may be incommensurable since the former emphasizes the examination of cases while the latter focuses on moral principles. Conversely, since both analyze cases in terms of mid-level principles, there is hope that these methods may be reconcilable or complementary. I analyze the role of principles in each and thereby show that these theories are virtually identical when interpreted in a certain light. That is, if the gaps in each method are filled by a concept of judgment or Aristotelian practical wisdom, these methods converge.

  18. Optimised mounting conditions for poly (ether sulfone) in radiation detection.

    PubMed

    Nakamura, Hidehito; Shirakawa, Yoshiyuki; Sato, Nobuhiro; Yamada, Tatsuya; Kitamura, Hisashi; Takahashi, Sentaro

    2014-09-01

    Poly (ether sulfone) (PES) is a candidate for use as a scintillation material in radiation detection. Its characteristics, such as its emission spectrum and its effective refractive index (based on the emission spectrum), directly affect the propagation of light generated to external photodetectors. It is also important to examine the presence of background radiation sources in manufactured PES. Here, we optimise the optical coupling and surface treatment of the PES, and characterise its background. Optical grease was used to enhance the optical coupling between the PES and the photodetector; absorption by the grease of short-wavelength light emitted from PES was negligible. Diffuse reflection induced by surface roughening increased the light yield for PES, despite the high effective refractive index. Background radiation derived from the PES sample and its impurities was negligible above the ambient, natural level. Overall, these results serve to optimise the mounting conditions for PES in radiation detection. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Effects of aircraft cabin noise on passenger comfort.

    PubMed

    Pennig, Sibylle; Quehl, Julia; Rolny, Vinzent

    2012-01-01

    The effects of cabin noise on subjective comfort assessments were systematically investigated in order to reveal optimisation potentials for an improved passenger noise acceptance. Two aircraft simulation studies were conducted. An acoustic laboratory test facility provided with loudspeaker systems for realistic sound presentations and an aircraft cabin simulator (Dornier Do 728) with a high degree of ecological validity were used. Subjects were exposed to nine different noise patterns (three noise levels ranging from 66 to 78 dB(A) combined with three different frequency spectra). Regression analysis demonstrated a significant increase of passengers' acceptance with lower noise levels and significant effects of different frequency spectra determined by seat position in the aircraft cabin (front, middle, rear). Acoustic cabin design should therefore consider measures beyond noise level reduction altering noise characteristics to improve passengers' comfort and well-being in the aircraft cabin. To improve passenger comfort in the aircraft with respect to cabin noise, passengers' reactions to specific noise conditions were systematically investigated. Two laboratory studies showed significant dose-response relationships between sound pressure level and subjective comfort ratings which differed due to the noise at specific seat positions in the aircraft.

  20. Hybrid real-code ant colony optimisation for constrained mechanical design

    NASA Astrophysics Data System (ADS)

    Pholdee, Nantiwat; Bureerat, Sujin

    2016-01-01

    This paper proposes a hybrid meta-heuristic based on integrating a local search simplex downhill (SDH) method into the search procedure of real-code ant colony optimisation (ACOR). This hybridisation leads to five hybrid algorithms where a Monte Carlo technique, a Latin hypercube sampling technique (LHS) and a translational propagation Latin hypercube design (TPLHD) algorithm are used to generate an initial population. Also, two numerical schemes for selecting an initial simplex are investigated. The original ACOR and its hybrid versions along with a variety of established meta-heuristics are implemented to solve 17 constrained test problems where a fuzzy set theory penalty function technique is used to handle design constraints. The comparative results show that the hybrid algorithms are the top performers. Using the TPLHD technique gives better results than the other sampling techniques. The hybrid optimisers are a powerful design tool for constrained mechanical design problems.
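    The hybridisation step, refining a promising solution with the downhill simplex method, can be sketched with SciPy's Nelder-Mead implementation; the random archive below is a stand-in for the ACOR global phase, which is not reproduced, and the objective and parameters are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

def rosenbrock(x):
    """Classic unconstrained test objective (illustrative stand-in)."""
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2
                        + (1.0 - x[:-1]) ** 2))

# Stand-in for the population-based global phase: a random solution archive
# (ACOR itself is not reproduced here).
archive = rng.uniform(-2.0, 2.0, size=(50, 4))
start = min(archive, key=rosenbrock)

# Local refinement with the downhill simplex (Nelder-Mead) method, the kind
# of hybridisation step integrated into the ACOR search procedure.
result = minimize(rosenbrock, start, method="Nelder-Mead",
                  options={"xatol": 1e-10, "fatol": 1e-10, "maxiter": 5000})
```

    In the paper this refinement is interleaved with the ant-colony sampling rather than applied once at the end, and the initial simplex itself is selected by one of the two schemes investigated.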

  1. Optimisation of rocker sole footwear for prevention of first plantar ulcer: comparison of group-optimised and individually-selected footwear designs.

    PubMed

    Preece, Stephen J; Chapman, Jonathan D; Braunstein, Bjoern; Brüggemann, Gert-Peter; Nester, Christopher J

    2017-01-01

    Appropriate footwear for individuals with diabetes but no ulceration history could reduce the risk of first ulceration. However, individuals who deem themselves at low risk are unlikely to seek out bespoke, personalised footwear. Therefore, our primary aim was to investigate whether group-optimised footwear designs, which could be prefabricated and delivered in a retail setting, could achieve appropriate pressure reduction, or whether footwear selection must be on a patient-by-patient basis. A second aim was to compare responses to footwear design between healthy participants and people with diabetes, in order to understand the transferability of previous footwear research performed in healthy populations. Plantar pressures were recorded from 102 individuals with diabetes, considered at low risk of ulceration. This cohort included 17 individuals with peripheral neuropathy. We also collected data from 66 healthy controls. Each participant walked in 8 rocker shoe designs (4 apex positions × 2 rocker angles). ANOVA analysis was then used to understand the effect of the two design features, and descriptive statistics were used to identify the group-optimised design. Using 200 kPa as a target, this group-optimised design was then compared to the design identified as the best for each participant (using plantar pressure data). Peak plantar pressure increased significantly as the apex position was moved distally and the rocker angle was reduced (p < 0.001). The group-optimised design incorporated an apex at 52% of shoe length, a 20° rocker angle and an apex angle of 95°. With this design, 71-81% of peak pressures were below the 200 kPa threshold, both in the full cohort of individuals with diabetes and in the neuropathic subgroup. Importantly, only small increases (<5%) in this proportion were observed when participants wore footwear which was individually selected.
In terms of optimised footwear designs, healthy participants demonstrated the same response as participants with diabetes, despite having lower plantar pressures. This is the first study demonstrating that a group-optimised, generic rocker shoe might perform almost as well as footwear selected on a patient by patient basis in a low risk patient group. This work provides a starting point for clinical evaluation of generic versus personalised pressure reducing footwear.

  2. On an efficient multilevel inverter assembly: structural savings and design optimisations

    NASA Astrophysics Data System (ADS)

    Choupan, Reza; Nazarpour, Daryoush; Golshannavaz, Sajjad

    2018-01-01

    This study puts forward an efficient unit cell for use in multilevel inverter assemblies. The proposed structure reduces the number of direct current (dc) voltage sources, insulated-gate bipolar transistors (IGBTs), gate driver circuits, the installation area, and hence the implementation costs. These structural savings do not sacrifice technical performance: interestingly, the proposed design attains an increased number of output voltage levels. Targeting a techno-economic characteristic, the contemplated structure is used as the key unit of cascaded multilevel inverters. Such extensions require applicable design procedures; to this end, two efficient strategies are elaborated for determining the magnitudes of the input dc voltage sources. An optimisation process is also developed to explore the optimal allocation of different parameters in the overall performance of the proposed inverter: the number of IGBTs, dc sources and diodes, and the overall blocked voltage on the switches. In light of these characteristics, a comprehensive analysis compares the proposed design with conventional and recently developed structures. Detailed simulation and experimental studies assess the performance of the proposed design, and the obtained results are discussed in depth.
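    Why the magnitudes of the input dc sources matter can be sketched with a textbook cascaded-cell count (a generic illustration, not the paper's unit-cell topology): each cell contributes -V_i, 0 or +V_i to the output, and choosing magnitudes as powers of three makes every combination distinct.

```python
from itertools import product

def output_levels(sources):
    """Distinct output voltage levels of series-connected cells, each
    contributing -V_i, 0 or +V_i (a textbook cascaded-H-bridge count,
    not the paper's specific unit-cell topology)."""
    return sorted({sum(c) for c in product(*[(-v, 0, v) for v in sources])})

symmetric = output_levels([1, 1, 1])   # equal source magnitudes: 7 levels
trinary = output_levels([1, 3, 9])     # powers-of-three magnitudes: 27 levels
```

    With three sources, equal magnitudes yield only 7 levels while the 1:3:9 choice reaches the maximum 3³ = 27, which is why strategies for fixing the source magnitudes are central to level-per-component efficiency.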

  3. Low-temperature high-Z gamma-detectors with very high energy resolution

    NASA Astrophysics Data System (ADS)

    Pobes, Carlos; Brofferio, Chiara; Bucci, Carlo; Cremonesi, Oliviero; Fiorini, Ettore; Giuliani, Andrea; Nucciotti, Angelo; Pavan, Maura; Pedretti, Marisa; Pessina, Gianluigi; Pirro, Stefano; Previtali, Ezio; Sisti, Monica; Vanzini, Marco; Zanotti, Luigi

    2001-12-01

    High-Z low-temperature calorimeters are developed by an Italian collaboration (Milano-Como-Gran Sasso Underground Laboratories) to search for rare nuclear events and massive Dark Matter candidates. They exhibit an excellent energy resolution, close to that of Ge diodes, but a much higher efficiency. Different high-Z materials were initially employed. Many years of optimisation work on tellurium oxide (TeO2) led to impressive results: devices with total masses around 750 g present FWHM energy resolutions on gamma-ray peaks ranging from 1 keV (close to the 5 keV energy threshold) to 2.6 keV at 2615 keV (208Tl gamma line). A 3.2 keV FWHM energy resolution was obtained at 5.4 MeV (210Po alpha line), which is by far the best ever achieved with any alpha detector. These devices, operated at about 10 mK, consist of a TeO2 single crystal thermally coupled to a 50 mg neutron transmutation doped (NTD) Ge crystal working as a temperature sensor. Special care was devoted to methods for response linearisation and temporal stabilisation. Devices based on the same principle and specifically optimised could find applications in several fields, such as gamma-ray astrophysics, nuclear physics searches, environmental monitoring and radiation metrology.

  4. Developing a spinal cord injury research strategy using a structured process of evidence review and stakeholder dialogue. Part III: outcomes.

    PubMed

    Middleton, J W; Piccenna, L; Lindsay Gruen, R; Williams, S; Creasey, G; Dunlop, S; Brown, D; Batchelor, P E; Berlowitz, D J; Coates, S; Dunn, J A; Furness, J B; Galea, M P; Geraghty, T; Kwon, B K; Urquhart, S; Yates, D; Bragge, P

    2015-10-01

    Focus Group. To develop a unified, regional spinal cord injury (SCI) research strategy for Australia and New Zealand. Australia. A 1-day structured stakeholder dialogue was convened in 2013 in Melbourne, Australia, by the National Trauma Research Institute in collaboration with the SCI Network of Australia and New Zealand. Twenty-three experts participated, representing local and international research, clinical, consumer, advocacy, government policy and funding perspectives. Preparatory work synthesised evidence and articulated draft principles and options as a starting point for discussion. A regional SCI research strategy was proposed, whose objectives can be summarised under four themes. (1) Collaborative networks and strategic partnerships to increase efficiency, reduce duplication, build capacity and optimise research funding. (2) Research priority setting and coordination to manage competing studies. (3) Mechanisms for greater consumer engagement in research. (4) Resources and infrastructure to further develop SCI data registries, evaluate research translation and assess alignment of research strategy with stakeholder interests. These are consistent with contemporary international SCI research strategy development activities. This first step in a regional SCI research strategy has articulated objectives for further development by the wider SCI research community. The initiative has also reinforced the importance of coordinated, collective action in optimising outcomes following SCI.

  5. How to optimise antimicrobial prescriptions in the Intensive Care Unit: principles of individualised dosing using pharmacokinetics and pharmacodynamics.

    PubMed

    Roberts, Jason A; Joynt, Gavin M; Choi, Gordon Y S; Gomersall, Charles D; Lipman, Jeffrey

    2012-03-01

    Optimising antimicrobial dosing for critically ill patients is highly challenging and when it is not achieved can lead to worse patient outcomes. To this end, use of dosing regimens recommended in package inserts from drug manufacturers is frequently insufficient to guide dosing in these patients appropriately. Whilst the effect of critical illness pathophysiology on the pharmacokinetic (PK) behaviour of antimicrobials can be profound, the variability of these changes between patients is still being quantified. The PK effects of hypoproteinaemia, organ dysfunction and the presence of augmented renal clearance may lead to plasma antimicrobial concentrations that are difficult to predict at the bedside, which may result in excess toxicity or suboptimal bacterial killing. This paper outlines the factors that affect pharmacokinetics in critically ill patients and how knowledge of these factors can increase the likelihood of achieving optimal antimicrobial plasma concentrations. In selected settings, we advocate individualised dosing of renally cleared antimicrobials using physiological data such as measured creatinine clearance and published non-renal clearance data. Where such data do not exist, therapeutic drug monitoring may be a useful alternative and has been associated with significant clinical benefits, although it is not currently widely available. Copyright © 2011 Elsevier B.V. and the International Society of Chemotherapy. All rights reserved.

  6. Optimal maintenance policy incorporating system level and unit level for mechanical systems

    NASA Astrophysics Data System (ADS)

    Duan, Chaoqun; Deng, Chao; Wang, Bingran

    2018-04-01

    This study develops a multi-level maintenance policy combining the system level and the unit level under soft and hard failure modes. The system undergoes system-level preventive maintenance (SLPM) when the conditional reliability of the entire system exceeds the SLPM threshold, and each single unit undergoes a two-level maintenance: one action is initiated when the unit exceeds its preventive maintenance (PM) threshold, and the other is performed opportunistically whenever any other unit is undergoing maintenance. The units experience both periodic inspections and aperiodic inspections triggered by failures of hard-type units. To model practical situations, two types of economic dependence are taken into account: set-up cost dependence, and maintenance expertise dependence, which arises because the same technology and tools/equipment can be utilised. The optimisation problem is formulated and solved in a semi-Markov decision process framework. The objective is to find the optimal system-level threshold and unit-level thresholds by minimising the long-run expected average cost per unit time. A formula for the mean residual life is derived for the proposed multi-level maintenance policy. The method is illustrated by a real case study of the feed subsystem of a boring machine, and a comparison with other policies demonstrates the effectiveness of our approach.

  7. From Barrier Free to Safe Environments: The New Zealand Experience. Monograph #44.

    ERIC Educational Resources Information Center

    Wrightson, William; Pope, Campbell

    Intrinsically safe design is presented as a logical extension of the principles of barrier free design, and as a higher level design strategy for effecting widespread implementation of the basic accessibility requirements for people with disabilities. Two fundamental planning procedures are proposed: including principles of safe and accessible…

  8. Optimisation of a parallel ocean general circulation model

    NASA Astrophysics Data System (ADS)

    Beare, M. I.; Stevens, D. P.

    1997-10-01

    This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.

  9. The effect of resistance level and stability demands on recruitment patterns and internal loading of spine in dynamic flexion and extension using a simple trunk model.

    PubMed

    Zeinali-Davarani, Shahrokh; Shirazi-Adl, Aboulfazl; Dariush, Behzad; Hemami, Hooshang; Parnianpour, Mohamad

    2011-07-01

    The effects of external resistance on the recruitment of trunk muscles in sagittal movements and the coactivation mechanism to maintain spinal stability were investigated using a simple computational model of iso-resistive spine sagittal movements. Neural excitation of muscles was attained based on inverse dynamics approach along with a stability-based optimisation. The trunk flexion and extension movements between 60° flexion and the upright posture against various resistance levels were simulated. Incorporation of the stability constraint in the optimisation algorithm required higher antagonistic activities for all resistance levels mostly close to the upright position. Extension movements showed higher coactivation with higher resistance, whereas flexion movements demonstrated lower coactivation indicating a greater stability demand in backward extension movements against higher resistance at the neighbourhood of the upright posture. Optimal extension profiles based on minimum jerk, work and power had distinct kinematics profiles which led to recruitment patterns with different timing and amplitude of activation.

  10. Bioactive amines in sorghum: method optimisation and influence of line, tannin and hydric stress.

    PubMed

    Paiva, Caroline Liboreiro; Evangelista, Warlley Pinheiro; Queiroz, Valéria Aparecida Vieira; Glória, Maria Beatriz Abreu

    2015-04-15

    The profile and levels of bioactive amines in different sorghum lines were reported for the first time. The amines were quantified by ion-pair HPLC with post-column derivatisation with o-phthalaldehyde and fluorimetric detection. The extraction procedure was optimised: 420 μm particle size, extraction with 5% trichloroacetic acid and three extractions. The screening of 22 sorghum lines showed that four of the ten amines investigated were detected. Spermine and spermidine were the prevalent amines (100% of samples), followed by putrescine (77%) and cadaverine (14%). Total amines ranged from 5.8 to 41.4 mg/100 g, and the polyamines represented 60-100% of the total. Sorghum without tannin had higher amine levels than sorghum with tannin, and cadaverine was specific to samples without tannin. Hydric stress caused accumulation of spermidine in the grains and affected the levels of the other amines at rates depending on the presence or absence of tannin. Sorghum is a significant source of polyamines. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Warpage optimisation on the moulded part with straight-drilled and conformal cooling channels using response surface methodology (RSM) and glowworm swarm optimisation (GSO)

    NASA Astrophysics Data System (ADS)

    Hazwan, M. H. M.; Shayfull, Z.; Sharif, S.; Nasir, S. M.; Zainal, N.

    2017-09-01

    In the injection moulding process, quality and productivity are notably important and must be controlled for each product type produced. Quality is measured by the extent of warpage of the moulded parts, while productivity is measured by the duration of the moulding cycle. To control quality, many researchers have introduced various optimisation approaches, which have been proven to enhance the quality of the moulded parts produced. To improve the productivity of the injection moulding process, some researchers have proposed the application of conformal cooling channels, which have been proven to reduce the moulding cycle time. This paper therefore presents an alternative optimisation approach, Response Surface Methodology (RSM) with Glowworm Swarm Optimisation (GSO), applied to a moulded part with straight-drilled and conformal cooling channel moulds. This study examined the warpage of the moulded parts before and after the optimisation work for both cooling channel types. A front panel housing was selected as the specimen, and the performance of the proposed optimisation approach was analysed on conventional straight-drilled cooling channels and on Milled Groove Square Shape (MGSS) conformal cooling channels by simulation analysis using Autodesk Moldflow Insight (AMI) 2013. Based on the results, melt temperature is the most significant factor contributing to warpage for the straight-drilled cooling channels, for which warpage was reduced by 39.1% after optimisation, while cooling time is the most significant factor for the MGSS conformal cooling channels, for which warpage was reduced by 38.7%. In addition, the findings show that applying the optimisation work to the conformal cooling channels offers better quality and productivity of the moulded part produced.
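    The glowworm swarm idea can be sketched in a few lines: each candidate process setting carries a luciferin level that accumulates fitness, and each glowworm steps toward a randomly chosen brighter neighbour. The toy quadratic below merely stands in for the simulated warpage response (the study evaluates warpage via Moldflow); all names and parameter values are illustrative assumptions, not taken from the paper.

```python
import math
import random

def gso_minimise(objective, bounds, n_agents=60, iters=80,
                 rho=0.4, gamma=0.6, radius=1.5, step=0.05, seed=1):
    """Simplified glowworm swarm optimisation (GSO) sketch.

    Minimises `objective` by letting luciferin track the negated cost.
    """
    rng = random.Random(seed)
    agents = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_agents)]
    luciferin = [0.0] * n_agents
    best_x, best_f = None, float("inf")
    for _ in range(iters):
        for i, x in enumerate(agents):
            f = objective(x)
            if f < best_f:
                best_x, best_f = list(x), f
            # luciferin decays, then absorbs the (negated) current cost
            luciferin[i] = (1 - rho) * luciferin[i] + gamma * (-f)
        for i, x in enumerate(agents):
            # brighter neighbours within the sensing radius
            nbrs = [j for j, y in enumerate(agents)
                    if luciferin[j] > luciferin[i] and math.dist(x, y) < radius]
            if nbrs:
                j = rng.choice(nbrs)
                d = math.dist(x, agents[j])
                if d > 0:
                    agents[i] = [xi + step * (yj - xi) / d
                                 for xi, yj in zip(x, agents[j])]
    return best_x, best_f

# stand-in for a warpage response over two process factors
# (e.g. melt temperature, cooling time), scaled to arbitrary units
warpage = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
x_best, f_best = gso_minimise(warpage, [(0.0, 4.0), (0.0, 4.0)])
```

In the paper the objective would be the RSM-fitted warpage model rather than a raw simulator call, which is what makes the swarm search cheap.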

  12. Path integrals with higher order actions: Application to realistic chemical systems

    NASA Astrophysics Data System (ADS)

    Lindoy, Lachlan P.; Huang, Gavin S.; Jordan, Meredith J. T.

    2018-02-01

    Quantum thermodynamic parameters can be determined using path integral Monte Carlo (PIMC) simulations. These simulations, however, become computationally demanding as the quantum nature of the system increases, although their efficiency can be improved by using higher order approximations to the thermal density matrix, specifically the action. Here we compare the standard, primitive approximation to the action (PA) and three higher order approximations, the Takahashi-Imada action (TIA), the Suzuki-Chin action (SCA) and the Chin action (CA). The resulting PIMC methods are applied to two realistic potential energy surfaces, for H2O and HCN-HNC, both of which are spectroscopically accurate and contain three-body interactions. We further numerically optimise, for each potential, the SCA parameter and the two free parameters in the CA, obtaining more significant improvements in efficiency than seen previously in the literature. For both H2O and HCN-HNC, accounting for all required potential and force evaluations, the optimised CA formalism is approximately twice as efficient as the TIA formalism and approximately an order of magnitude more efficient than the PA. The optimised SCA formalism shows similar efficiency gains to the CA for HCN-HNC but has similar efficiency to the TIA for H2O at low temperature. In both the H2O and HCN-HNC systems, the optimal value of the a1 CA parameter is approximately 1/3, corresponding to an equal weighting of all force terms in the thermal density matrix, and, consistent with previous studies, the optimal α parameter in the SCA was approximately 0.31. Importantly, poor choice of parameter significantly degrades the performance of the SCA and CA methods. In particular, for the CA, setting a1 = 0 is not efficient: the reduction in convergence efficiency is not offset by the lower number of force evaluations.
We also find that the harmonic approximation to the CA parameters, whilst providing a fourth order approximation to the action, is not optimal for these realistic potentials: numerical optimisation leads to better approximate cancellation of the fifth order terms, with deviation between the harmonic and numerically optimised parameters more marked in the more quantum H2O system. This suggests that numerically optimising the CA or SCA parameters, which can be done at high temperature, will be important in fully realising the efficiency gains of these formalisms for realistic potentials.
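    For orientation, the primitive approximation referred to above factorises the Boltzmann operator into P high-temperature slices. In one dimension for a single particle of mass m it takes the textbook form below (the paper's implementation is multidimensional, so this is a simplified sketch):

```latex
Z \approx \left(\frac{mP}{2\pi\beta\hbar^{2}}\right)^{P/2}
\int \mathrm{d}x_{1}\cdots\mathrm{d}x_{P}\,
\exp\!\left\{-\sum_{i=1}^{P}\left[
\frac{mP}{2\beta\hbar^{2}}\,(x_{i+1}-x_{i})^{2}
+\frac{\beta}{P}\,V(x_{i})\right]\right\},
\qquad x_{P+1}=x_{1}.
```

The higher order actions (TIA, SCA, CA) keep this bead-chain structure but replace V by an effective potential containing force-dependent correction terms, which is why the count of force evaluations enters the efficiency comparisons above.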

  13. Development of an interprofessional lean facilitator assessment scale.

    PubMed

    Bravo-Sanchez, Cindy; Dorazio, Vincent; Denmark, Robert; Heuer, Albert J; Parrott, J Scott

    2018-05-01

    High reliability is important for optimising quality and safety in healthcare organisations. Reliability efforts include interprofessional collaborative practice (IPCP) and Lean quality/process improvement strategies, which require skilful facilitation. Currently, no validated Lean facilitator assessment tool for interprofessional collaboration exists. This article describes the development and pilot evaluation of such a tool, the Interprofessional Lean Facilitator Assessment Scale (ILFAS), which measures both technical and 'soft' skills that have not been captured by other instruments. The ILFAS was developed using methodologies and principles from Lean/Shingo, IPCP, metacognition research and Bloom's Taxonomy of Learning Domains. A panel of experts confirmed the initial face validity of the instrument. Researchers independently assessed five facilitators during six Lean sessions. Analysis included quantitative evaluation of rater agreement. Overall inter-rater agreement on facilitator performance was high (92%), and discrepancies in the agreement statistics were analysed. Face and content validity were further established, and usability was evaluated, through primary stakeholder post-pilot feedback, which uncovered minor concerns and led to tool revision. The ILFAS appears comprehensive in the assessment of facilitator knowledge, skills and abilities, and may be useful for discriminating between facilitators of different skill levels. Further study is needed to explore instrument performance and validity.

  14. Community engagement for paediatric MDR-TB clinical trials: principles to support ethical trial implementation.

    PubMed

    Hoddinott, G; Staples, S; Brown, R; Simwinga, M; Mubekapi-Musadaidzwa, C; Hesseling, A C; Hendricks, G; De Koker, P; McKenna, L

    2018-05-01

    The paediatric tuberculosis (TB) prevention and treatment landscape is moving into a new and exciting era, with knowledge from clinical trials offering real benefit to children. Community engagement is key to optimising the success of these trials. However, the clinical profile, epidemiology and social perceptions for paediatric multidrug-resistant TB (MDR-TB) complicate the operationalisation of this community engagement. We reflect on a diversity of recent experiences attempting to implement this type of research and the community engagement around it. We describe four recommendations and argue that these should guide the implementation of the community engagement agenda in the new landscape of paediatric MDR-TB clinical trials. Specifically, we argue for 1) dynamic, long-term continuity in community engagement platforms; 2) tiers of TB and research literacy; 3) multiple separate and joint platforms for holding 'stakes'; and 4) addressing the social/structural implications of family participation. We conclude that community-level stakeholders, such as health workers, parents and children, are willing to collaborate in paediatric MDR-TB clinical trials. Using these recommendations, there is considerable opportunity for effective community engagement in this new era of paediatric MDR-TB research.

  15. Using Optimisation Techniques to Granulise Rough Set Partitions

    NASA Astrophysics Data System (ADS)

    Crossingham, Bodie; Marwala, Tshilidzi

    2007-11-01

    This paper presents an approach to optimise rough set partition sizes using various optimisation techniques. Three optimisation techniques are implemented to perform the granularisation process, namely, genetic algorithm (GA), hill climbing (HC) and simulated annealing (SA). These optimisation methods maximise the classification accuracy of the rough sets. The proposed rough set partition method is tested on a set of demographic properties of individuals obtained from the South African antenatal survey. The three techniques are compared in terms of their computational time, accuracy and number of rules produced when applied to the Human Immunodeficiency Virus (HIV) data set. The results of the optimised methods are compared with those of a well-known non-optimised discretisation method, equal-width-bin (EWB) partitioning. The accuracies achieved after optimising the partitions using GA, HC and SA are 66.89%, 65.84% and 65.48% respectively, compared with an EWB accuracy of 59.86%. In addition to rough sets providing the plausibilities of the estimated HIV status, they also provide the linguistic rules describing how the demographic parameters drive the risk of HIV.
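    As an illustration of the simulated-annealing variant, the sketch below anneals the cut points of a single attribute to maximise a simple majority-class accuracy. The data, the accuracy proxy and all parameters are invented for illustration and are far simpler than the rough-set formulation and HIV data used in the paper.

```python
import math
import random

def bin_accuracy(cuts, xs, ys):
    """Fraction correctly classified when each bin predicts its majority class."""
    edges = [0.0] + sorted(cuts) + [1.0]
    correct = 0
    for lo, hi in zip(edges, edges[1:]):
        labels = [y for x, y in zip(xs, ys) if lo <= x < hi]
        if labels:
            correct += max(labels.count(0), labels.count(1))
    return correct / len(xs)

def anneal_cuts(xs, ys, n_cuts=3, iters=2000, t0=0.1, seed=7):
    """Simulated annealing over partition cut points, maximising accuracy."""
    rng = random.Random(seed)
    cuts = [rng.random() for _ in range(n_cuts)]
    acc = bin_accuracy(cuts, xs, ys)
    best_cuts, best_acc = list(cuts), acc
    for k in range(iters):
        t = t0 * (1 - k / iters) + 1e-9           # linear cooling schedule
        cand = list(cuts)
        i = rng.randrange(n_cuts)
        cand[i] = min(1.0, max(0.0, cand[i] + rng.gauss(0, 0.1)))
        cand_acc = bin_accuracy(cand, xs, ys)
        # Metropolis acceptance on the accuracy change
        if cand_acc >= acc or rng.random() < math.exp((cand_acc - acc) / t):
            cuts, acc = cand, cand_acc
            if acc > best_acc:
                best_cuts, best_acc = list(cuts), acc
    return best_cuts, best_acc

# toy attribute whose class flips at x = 0.5
xs = [i / 400 for i in range(400)]
ys = [0 if x < 0.5 else 1 for x in xs]
cuts, acc = anneal_cuts(xs, ys)
```

The GA and HC variants differ only in how the candidate cut points are proposed and accepted; the accuracy function being maximised is the same.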

  16. Application of the adjoint optimisation of shock control bump for ONERA-M6 wing

    NASA Astrophysics Data System (ADS)

    Nejati, A.; Mazaheri, K.

    2017-11-01

    This article is devoted to the numerical investigation of the shock wave/boundary layer interaction (SWBLI) as the main factor influencing the aerodynamic performance of transonic bumped airfoils and wings. The numerical analysis is conducted for the ONERA-M6 wing through a shock control bump (SCB) shape optimisation process using the adjoint optimisation method. SWBLI is analysed for both clean and bumped airfoils and wings, and it is shown how the modified wave structure originating upstream of the SCB reduces the wave drag by improving the boundary layer velocity profile downstream of the shock wave. The numerical simulation of the turbulent viscous flow and a gradient-based adjoint algorithm are used to find the optimum location and shape of the SCB for the ONERA-M6 airfoil and wing. Two different geometrical models are introduced for the 3D SCB, one with linear variations and another with periodic variations. Both configurations result in drag reduction and improvement in the aerodynamic efficiency, but the periodic model is more effective. Although the three-dimensional flow structure involves much greater complexity, the overall results are shown to be similar to the two-dimensional case.

  17. CMOS analogue amplifier circuits optimisation using hybrid backtracking search algorithm with differential evolution

    NASA Astrophysics Data System (ADS)

    Mallick, S.; Kar, R.; Mandal, D.; Ghoshal, S. P.

    2016-07-01

    This paper proposes a novel hybrid optimisation algorithm which combines the recently proposed evolutionary algorithm Backtracking Search Algorithm (BSA) with another widely accepted evolutionary algorithm, namely, Differential Evolution (DE). The proposed algorithm, called BSA-DE, is employed for the optimal design of two commonly used analogue circuits, namely a Complementary Metal Oxide Semiconductor (CMOS) differential amplifier circuit with current mirror load and a CMOS two-stage operational amplifier (op-amp) circuit. BSA has a simple structure that is effective, fast and capable of solving multimodal problems. DE is a stochastic, population-based heuristic approach with the capability to solve global optimisation problems. In this paper, the transistors' sizes are optimised using the proposed BSA-DE to minimise the areas occupied by the circuits and to improve the performances of the circuits. The simulation results justify the superiority of BSA-DE in global convergence properties and fine-tuning ability, and prove it to be a promising candidate for the optimal design of analogue CMOS amplifier circuits. The simulation results obtained for both amplifier circuits prove the effectiveness of the proposed BSA-DE-based approach over DE, harmony search (HS), artificial bee colony (ABC) and particle swarm optimisation (PSO) in terms of convergence speed, design specifications and design parameters. It is shown that the BSA-DE-based design technique for each amplifier circuit yields the least MOS transistor area, and each designed circuit is shown to have the best performance parameters, such as gain and power dissipation, as compared with those reported in the recent literature.
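    The DE half of the hybrid can be sketched as a classic DE/rand/1/bin loop. The sphere objective below merely stands in for the circuit-area cost, and all parameter values (population size, F, CR) are illustrative assumptions rather than settings from the paper; the full BSA-DE additionally interleaves BSA's historical-population mutation, which is omitted here.

```python
import random

def differential_evolution(cost, bounds, pop_size=30, gens=150,
                           f_weight=0.6, cr=0.9, seed=3):
    """Minimal DE/rand/1/bin sketch for bound-constrained minimisation."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    costs = [cost(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([k for k in range(pop_size) if k != i], 3)
            j_rand = rng.randrange(dim)           # guarantee one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < cr or j == j_rand:
                    v = pop[a][j] + f_weight * (pop[b][j] - pop[c][j])
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                trial.append(min(hi, max(lo, v)))  # clamp to the design bounds
            tc = cost(trial)
            if tc <= costs[i]:                     # greedy one-to-one selection
                pop[i], costs[i] = trial, tc
    best = min(range(pop_size), key=costs.__getitem__)
    return pop[best], costs[best]

# stand-in for a transistor-sizing cost (e.g. total gate area)
sphere = lambda x: sum(v * v for v in x)
x_best, c_best = differential_evolution(sphere, [(-5.0, 5.0)] * 3)
```

In a sizing problem the decision vector would hold transistor widths/lengths and the cost would combine area with penalty terms for unmet gain or power specifications.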

  18. Validation of a multi-criteria evaluation model for animal welfare.

    PubMed

    Martín, P; Czycholl, I; Buxadé, C; Krieter, J

    2017-04-01

    The aim of this paper was to validate an alternative multi-criteria evaluation system for assessing animal welfare on farms based on the Welfare Quality® (WQ) project, using the welfare assessment of growing pigs as an example. This alternative methodology aims to be more transparent for stakeholders and more flexible than the methodology proposed by WQ. The WQ assessment protocol for growing pigs was implemented to collect data on different farms in Schleswig-Holstein, Germany; in total, 44 observations were carried out. The aggregation system proposed in the WQ protocol follows a three-step process: measures are aggregated into criteria, criteria into principles and principles into an overall assessment. This study focussed on the first two steps of the aggregation. Multi-attribute utility theory (MAUT) was used to produce a welfare value for each criterion and principle. The utility functions and the aggregation function were constructed in two separate steps: the MACBETH (Measuring Attractiveness by a Categorical-Based Evaluation Technique) method was used to determine the utility functions, and the Choquet integral (CI) was used as the aggregation operator. The WQ decision-makers' preferences were fitted in order to construct the utility functions and to determine the CI parameters. The validation of the MAUT model comprised two steps: first, the results of the model were compared with the results of the WQ project at criteria and principle level; second, a sensitivity analysis of the model was carried out to demonstrate the relative importance of welfare measures in the different steps of the multi-criteria aggregation process. Using MAUT, results similar to those of the WQ protocol aggregation methods were obtained at both criteria and principle level. Thus, this model could be implemented to produce an overall assessment of animal welfare in the context of the WQ protocol for growing pigs. Furthermore, this methodology could also serve as a framework for producing an overall welfare assessment for other livestock species. The sensitivity analysis yielded two main findings: first, a limited number of measures had a strong influence on improving or worsening the level of welfare at criteria level; second, the MAUT model was not very sensitive to an improvement or worsening of single welfare measures at principle level. The use of weighted sums and the conversion of disease measures into ordinal scores should be reconsidered.
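    The Choquet integral aggregation step can be illustrated directly: criterion scores are sorted, and each increment is weighted by the capacity of the set of criteria still at or above that level. The criteria names, capacities and scores below are invented to show the mechanics, not the fitted WQ parameters.

```python
def choquet_integral(scores, capacity):
    """Aggregate criterion scores with a discrete Choquet integral.

    scores:   dict criterion -> value in [0, 1]
    capacity: dict frozenset of criteria -> weight, monotone, with the
              full criterion set mapped to 1.0
    """
    total, prev = 0.0, 0.0
    remaining = set(scores)
    for crit, value in sorted(scores.items(), key=lambda kv: kv[1]):
        # weight the increment by the capacity of the criteria still "active"
        total += (value - prev) * capacity[frozenset(remaining)]
        prev = value
        remaining.discard(crit)
    return total

# two welfare criteria with a superadditive (synergistic) capacity
cap = {frozenset({"health"}): 0.4,
       frozenset({"behaviour"}): 0.4,
       frozenset({"health", "behaviour"}): 1.0}
value = choquet_integral({"health": 0.4, "behaviour": 0.8}, cap)
```

With this superadditive capacity the aggregate is 0.56: the 0.4 increment shared by both criteria is weighted by 1.0, while the extra 0.4 on behaviour alone is weighted by only 0.4, so a low score on either criterion drags the aggregate down. This veto-like behaviour, unavailable to a plain weighted sum, is what motivates the CI as an aggregation operator.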

  19. Evaluation of exposure parameters in plain radiography: a comparative study with European guidelines.

    PubMed

    Lança, L; Silva, A; Alves, E; Serranheira, F; Correia, M

    2008-01-01

    The typical distribution of exposure parameters in plain radiography is unknown in Portugal. This study aims to identify the exposure parameters being used in plain radiography in the Lisbon area and to compare the collected data with European references [Commission of European Communities (CEC) guidelines]. The results show that in four examinations (skull, chest, lumbar spine and pelvis), there is a strong tendency to use exposure times above the European recommendation. The X-ray tube potential values (in kV) are below the values recommended in the CEC guidelines. This study shows that at a local level (Lisbon region), radiographic practice does not comply with CEC guidelines concerning exposure techniques. Further national/local studies are recommended with the objective of improving exposure optimisation and technical procedures in plain radiography. This study also suggests the need to establish national/local diagnostic reference levels and to proceed to effective measurements for exposure optimisation.

  20. Malaysian Secondary Physics Teachers' Perceptions towards the Teaching and Learning of Archimedes' Principle

    NASA Astrophysics Data System (ADS)

    Salleh, Khalijah Mohd; Abdullah, Abu Bakar Bin

    2008-05-01

    An exploratory study was carried out to confirm Malaysian Physics teachers' perception that Archimedes' principle is a difficult topic for secondary level students. The interview method was used for data collection. The study sample comprised nine teachers from national secondary schools in Miri, Sarawak. The data were analysed qualitatively using the Atlas-ti version 5.2 software. The findings of the study showed that i) of the hydrodynamics principles, Archimedes' principle is more difficult for students than Bernoulli's or Pascal's, ii) more time was given to the teaching and learning (TL) of Archimedes' principle than to the other two principles, and iii) the major TL problems include conceptual understanding, application of physics principles and ideas, and lack of mathematical skills. These findings indicate the need to develop corresponding instructional materials and learning kits that can assist students' understanding of Archimedes' principle.

  1. Statistical optimisation of cell growth and carotenoid production by rhodotorula mucilaginosa

    PubMed Central

    Maldonade, Iriani R.; Rodriguez-Amaya, Delia B.; Scamparini, Adilma R. P.

    2012-01-01

    Sequential statistical methods were used to maximise carotenoid production by a strain of Rhodotorula mucilaginosa isolated from the Brazilian ecosystem. Initially, a 2^(5-1) fractional factorial experimental design was used, with pH and the levels of glucose, yeast extract, MgSO4.7H2O and KH2PO4 as variables. The nitrogen source (yeast extract) was the most important variable in enhancing carotenoid production; MgSO4.7H2O and KH2PO4 had a negative influence. The initial pH had no significant effect on carotenoid or cell production. We further investigated the effects of glucose and yeast extract using a second-order central composite design (CCD) to optimise carotenoid production, which was adequately approximated by a full quadratic equation obtained from a two-factor, two-level design. The analysis of the quadratic surfaces showed that after 5 days of cultivation at 25 °C, the maximum carotenoid concentration (745 µg l⁻¹) was obtained with 15 g l⁻¹ of yeast extract and 20 g l⁻¹ of glucose. The maximum carotenoid production per cell mass (152 µg g⁻¹) was obtained with 5 g l⁻¹ yeast extract and 10 g l⁻¹ glucose. Carotenoid formation was more sensitive to changes in yeast extract concentration than to changes in glucose concentration. Maximum cell production was achieved with 15-17 g l⁻¹ of yeast extract and 15-20 g l⁻¹ of glucose. PMID:24031809
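    The full quadratic model fitted to CCD data has the generic two-factor form y = b0 + b1·x1 + b2·x2 + b11·x1² + b22·x2² + b12·x1·x2 and can be fitted by ordinary least squares. The sketch below recovers known coefficients from a noiseless grid and locates the stationary point of the fitted surface; the factor names echo the study (yeast extract, glucose) but every number is invented for illustration.

```python
import numpy as np

def fit_quadratic_surface(x1, x2, y):
    """Least-squares fit of y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2."""
    design = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coeffs, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coeffs

# synthetic "carotenoid yield" surface over (yeast extract, glucose) levels
x1, x2 = np.meshgrid(np.linspace(5, 25, 5), np.linspace(5, 25, 5))
x1, x2 = x1.ravel(), x2.ravel()
true = np.array([50.0, 40.0, 20.0, -1.2, -0.5, -0.3])   # b0 b1 b2 b11 b22 b12
y = (true[0] + true[1] * x1 + true[2] * x2
     + true[3] * x1**2 + true[4] * x2**2 + true[5] * x1 * x2)
b = fit_quadratic_surface(x1, x2, y)

# stationary point of the fitted surface: solve grad y = 0
A = np.array([[2 * b[3], b[5]],
              [b[5], 2 * b[4]]])
opt = np.linalg.solve(A, -b[1:3])
```

Because b11 and b22 are negative (negative-definite Hessian), the stationary point is a maximum, i.e. the candidate optimal factor settings that the quadratic-surface analysis in the study reads off.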

  2. The Role of Geophysics/Geology in the Environmental Discourse

    NASA Astrophysics Data System (ADS)

    Pfannkuch, H. O.

    2013-12-01

    Environmental problems are created by the interaction between the Anthroposphere and the Geosphere. The principles and laws governing their behaviour and interaction must be fully understood to address environmental problems properly. A particular problem arises from inadequate communication between practitioners and/or decision makers in each sphere. A perfect analysis or solution in the Geosphere, based solely on geophysical and geochemical principles, will go nowhere if institutional and socio-economic principles are ignored; and vice versa, it matters little how well socio-economic relations are used in the Anthroposphere if they violate basic laws of physics. Two conceptual representations of the environmental system are useful. The first is the Noösphere with three domains, Physical, Institutional and Symbolic, and their interactions; it is where environmental problems arise and where decisions are made and implemented. The Physical Domain comprises physical, chemical, biological and geosphere realities, and its problems are treated by the scientific method. The Institutional Domain, with the economy, sociology, administration and political institutions, solves problems by negotiation (vote, ballot). The elements of the Symbolic Domain, spiritual, moral, religious and aesthetic principles, are revealed. All are intimately connected and interdependent: activity in one affects the state of the others. A particularly strong and effective interactive relation exists between the Physical and Institutional domains with regard to environmental problem definition, analysis and resolution. The second representation is the hierarchic structure of the interaction pyramid: the Geosphere, Biosphere and Anthroposphere are open systems organised in successive levels forming a pyramid, with the Geosphere at the bottom and the Anthroposphere at the top. One fundamental attribute is that level (n) is limited by the restrictions obtaining at level (n-1) and, conversely, level (n) represents the restrictions and limitations for level (n+1).
    In the environmental discourse this structural aspect is often overlooked, which leads to two major sets of fallacies: 1. Discourse takes place across hierarchic levels with the unstated assumption that, from the viewpoint of level (n), the same conditions, rules, equations and models hold at level (n-1) as at level (n) and are similarly compatible. This leads to misunderstanding or misrepresentation of what analysis, modelling and solution methods would be appropriate at each level. 2. It is overlooked that the bottom Geosphere level is the base onto which all other levels are stacked, up to the topmost Anthroposphere, so that each successive layer is restricted by the limitations of the Geosphere layer. Institutional and physical-scientific actors both have to realise that solutions or redress are not solely within their own domain. No matter what the economic or socio-political preferences might be, they cannot be implemented by violating fundamental physical, geological and geo-ecological principles; nor can the physical world ignore currently accepted principles of the institutional and symbolic realities. The role of Geophysics/Geology in the environmental discourse is to state clearly and apply the physical and thermodynamic principles to the Geosphere and Noösphere.

  3. Design, construction and mechanical optimisation process of electrode with radial current flow in the scala tympani.

    PubMed

    Deman, P R; Kaiser, T M; Dirckx, J J; Offeciers, F E; Peeters, S A

    2003-09-30

    A 48 contact cochlear implant electrode has been constructed for electrical stimulation of the auditory nerve. The stimulating contacts of this electrode are organised in two layers: 31 contacts on the upper surface directed towards the habenula perforata and 17 contacts connected together as one longitudinal contact on the underside. The design of the electrode carrier aims to make radial current flow possible in the cochlea. The mechanical structure of the newly designed electrode was optimised to obtain maximal insertion depth. Electrode insertion tests were performed in a transparent acrylic model of the human cochlea.

  4. Machine learning prediction for classification of outcomes in local minimisation

    NASA Astrophysics Data System (ADS)

    Das, Ritankar; Wales, David J.

    2017-01-01

    Machine learning schemes are employed to predict which local minimum will result from local energy minimisation of random starting configurations for a triatomic cluster. The input data consists of structural information at one or more of the configurations in optimisation sequences that converge to one of four distinct local minima. The ability to make reliable predictions, in terms of the energy or other properties of interest, could save significant computational resources in sampling procedures that involve systematic geometry optimisation. Results are compared for two energy minimisation schemes, and for neural network and quadratic functions of the inputs.

  5. Cost optimisation and minimisation of the environmental impact through life cycle analysis of the waste water treatment plant of Bree (Belgium).

    PubMed

    De Gussem, K; Wambecq, T; Roels, J; Fenu, A; De Gueldre, G; Van De Steene, B

    2011-01-01

    An ASM2da model of the full-scale waste water treatment plant of Bree (Belgium) has been built; it showed very good correlation with reference operational data. This basic model was extended to include an accurate calculation of the environmental footprint and operational costs (energy consumption, dosing of chemicals and sludge treatment). Two optimisation strategies were compared: lowest cost while meeting the effluent consent versus lowest environmental footprint. Six optimisation scenarios were studied, namely (i) implementation of an online control system based on ammonium and nitrate sensors, (ii) implementation of a control on the MLSS concentration, (iii) evaluation of the internal recirculation flow, (iv) the oxygen set point, (v) installation of mixing in the aeration tank, and (vi) evaluation of the nitrate set point for post-denitrification. Both the cost-based and the environmental impact or Life Cycle Assessment (LCA) based optimisation approaches can significantly lower the cost and the environmental footprint. However, the LCA approach has some advantages over cost minimisation for an existing full-scale plant: it tends to choose control settings that are more logical, resulting in safer operation of the plant with fewer risks regarding the consents, and it yields a better effluent at a slightly increased cost.

  6. Multidisciplinary design optimisation of a recurve bow based on applications of the autogenetic design theory and distributed computing

    NASA Astrophysics Data System (ADS)

    Fritzsche, Matthias; Kittel, Konstantin; Blankenburg, Alexander; Vajna, Sándor

    2012-08-01

    The focus of this paper is to present a method of multidisciplinary design optimisation based on the autogenetic design theory (ADT), whose methods are partially implemented in the optimisation software described here. The main thesis of the ADT is that biological evolution and the process of developing products are substantially similar, i.e. procedures from biological evolution can be transferred into product development. In order to fulfil requirements and boundary conditions of any kind (which may change at any time), both biological evolution and product development search a certain area for appropriate candidate solutions and try to optimise the promising ones by varying their parameters and combinations. As the time needed for multidisciplinary design optimisation is a critical aspect of product development, distributing the optimisation process to make effective use of otherwise idle computing capacity can reduce the optimisation time drastically. Finally, a practical example shows how ADT methods and distributed optimisation are applied to improve a product.

  7. Quantum test of the equivalence principle for atoms in coherent superposition of internal energy states

    PubMed Central

    Rosi, G.; D'Amico, G.; Cacciapuoti, L.; Sorrentino, F.; Prevedelli, M.; Zych, M.; Brukner, Č.; Tino, G. M.

    2017-01-01

    The Einstein equivalence principle (EEP) has a central role in the understanding of gravity and space–time. In its weak form, or weak equivalence principle (WEP), it directly implies equivalence between inertial and gravitational mass. Verifying this principle in a regime where the relevant properties of the test body must be described by quantum theory has profound implications. Here we report on a novel WEP test for atoms: a Bragg atom interferometer in a gravity gradiometer configuration compares the free fall of rubidium atoms prepared in two hyperfine states and in their coherent superposition. The use of the superposition state allows testing genuine quantum aspects of EEP with no classical analogue, which have remained completely unexplored so far. In addition, we measure the Eötvös ratio of atoms in two hyperfine levels with relative uncertainty in the low 10⁻⁹ range, improving previous results by almost two orders of magnitude. PMID:28569742

  8. Optimisation in radiotherapy. III: Stochastic optimisation algorithms and conclusions.

    PubMed

    Ebert, M

    1997-12-01

    This is the final article in a three-part examination of optimisation in radiotherapy. Previous articles established the bases and form of the radiotherapy optimisation problem and examined certain types of optimisation algorithm, namely those which perform some form of ordered search of the solution space (mathematical programming) and those which attempt to find the closest feasible solution to the inverse planning problem (deterministic inversion). The current paper examines algorithms which search the space of possible irradiation strategies by stochastic methods. The resulting iterative search methods move about the solution space by sampling random variates, which gradually become more constricted as the algorithm converges upon the optimal solution. This paper also discusses the implementation of optimisation in radiotherapy practice.

  9. An animal welfare perspective on animal testing of GMO crops.

    PubMed

    Kolar, Roman; Rusche, Brigitte

    2008-01-01

    The public discussion on the introduction of agro-genetic engineering focuses mainly on economic, ecological and human health aspects. The fact that laboratory animals must suffer before either humans or the environment are affected is neglected. However, numerous animal experiments are conducted for toxicity testing and authorisation of genetically modified plants in the European Union. These are ethically questionable, because death and suffering of the animals for purely commercial purposes are accepted. Therefore, recent political initiatives to further increase animal testing for GMO crops must be regarded highly critically. Based on concrete examples, this article demonstrates that animal experiments cannot, in principle, provide the expected protection of users and consumers, despite all efforts to standardise, optimise or extend them.

  10. Principles and Applications of Liquid Chromatography-Mass Spectrometry in Clinical Biochemistry

    PubMed Central

    Pitt, James J

    2009-01-01

    Liquid chromatography-mass spectrometry (LC-MS) is now a routine technique, with the development of electrospray ionisation (ESI) providing a simple and robust interface. It can be applied to a wide range of biological molecules, and the use of tandem MS and stable isotope internal standards allows highly sensitive and accurate assays to be developed, although some method optimisation is required to minimise ion suppression effects. Fast scanning speeds allow a high degree of multiplexing, and many compounds can be measured in a single analytical run. With the development of more affordable and reliable instruments, LC-MS is starting to play an important role in several areas of clinical biochemistry and to compete with conventional liquid chromatography and other techniques such as immunoassay. PMID:19224008

  11. Optimisation of radiation dose and image quality in mobile neonatal chest radiography.

    PubMed

    Hinojos-Armendáriz, V I; Mejía-Rosales, S J; Franco-Cabrera, M C

    2018-05-01

    The aim was to optimise the radiation dose and image quality for chest radiography in the neonatal intensive care unit (NICU) by increasing the mean beam energy. Two techniques for the acquisition of NICU AP chest X-ray images were compared for image quality and radiation dose. 73 images were acquired using a standard technique (56 kV, 3.2 mAs and no additional filtration) and 90 images with a new technique (62 kV, 2 mAs and 2 mm Al filtration). The entrance surface air kerma (ESAK) was measured using a phantom and compared between the techniques and against established diagnostic reference levels (DRLs). Images were evaluated against seven image quality criteria independently by three radiologists. Image quality and radiation dose were compared statistically between the standard and new techniques. The maximum ESAK for the new technique was 40.20 μGy, 43.7% of the ESAK of the standard technique. Statistical evaluation demonstrated no significant differences in image quality between the two acquisition techniques. Based on the techniques and acquisition factors investigated within this study, it is possible to lower the radiation dose, without any significant effect on image quality, by adding filtration (2 mm Al) and increasing the tube potential. Such steps are relatively simple to undertake and, as such, other departments should consider testing and implementing this dose reduction strategy within clinical practice where appropriate. Copyright © 2017 The College of Radiographers. Published by Elsevier Ltd. All rights reserved.

  12. Optimisation of nutritional requirements for dopamine synthesis by calcium alginate-entrapped mutant strain of Aspergillus oryzae EMS-6.

    PubMed

    Ali, Sikander; Nawaz, Wajeeha

    2017-02-01

    The optimisation of nutritional requirements for dopamine (DA) synthesis by a calcium alginate-entrapped mutant variant of Aspergillus oryzae EMS-6 using a submerged fermentation technique was investigated. A total of 13 strains were isolated from soil. Isolate I-2 was selected as the better producer of DA and was improved by exposure to ethyl methylsulphonate (EMS). EMS-6 was selected as it exhibited 43 μg/mL DA activity. The mutant variant was further treated with low levels of l-cysteine HCl to make it resistant to reversion and environmental stress. The conidiospores of the mutant variant were entrapped in calcium alginate beads for stable product formation. EMS-6 gave maximum DA activity (124 μg/mL) when supplemented with 0.1% peptone and 0.2% sucrose, under optimised parameters, viz. pH 3, a temperature of 55 °C and an incubation time of 70 min. The high level of DA activity reported here is of practical interest, as DA can help control numerous neurogenic disorders.

  13. Cognitive Psychology and College-Level Pedagogy: Two Siblings that Rarely Communicate.

    ERIC Educational Resources Information Center

    Matlin, Margaret W.

    2002-01-01

    Following an introduction on the literature on cognitive psychology and pedagogy, provides an annotated bibliography listing several dozen resources that have explored how principles of cognitive psychology can be used to enhance college-level pedagogy. (EV)

  14. How the Second Law of Thermodynamics Has Informed Ecosystem Ecology through Its History

    NASA Astrophysics Data System (ADS)

    Chapman, E. J.; Childers, D. L.; Vallino, J. J.

    2014-12-01

    Throughout the history of ecosystem ecology many attempts have been made to develop a general principle governing how systems develop and organize. We reviewed the historical developments that led to conceptualization of several goal-oriented principles in ecosystem ecology and the relationships among them. We focused our review on two prominent principles—the Maximum Power Principle (MPP) and the Maximum Entropy Production Principle (MEPP)—and the literature that applies to both. While these principles have considerable conceptual overlap and both use concepts in physics (power and entropy), we found considerable differences in their historical development, the disciplines that apply these principles, and their adoption in the literature. We reviewed the literature using Web of Science keyword searches for the MPP, the MEPP, as well as for papers that cited pioneers in the MPP and the MEPP development. From the 6000 papers that our keyword searches returned, we limited our further meta-analysis to 32 papers by focusing on studies with a foundation in ecosystems research. Despite these seemingly disparate pasts, we concluded that the conceptual approaches of these two principles were more similar than dissimilar and that maximization of power in ecosystems occurs with maximum entropy production. We also found that these two principles have great potential to explain how systems develop, organize, and function, but there are no widely agreed upon theoretical derivations for the MEPP or the MPP, possibly hindering their broader use in ecological research. We end with recommendations for how ecosystems-level studies may better use these principles.

  15. Multi-objective optimisation of aircraft flight trajectories in the ATM and avionics context

    NASA Astrophysics Data System (ADS)

    Gardi, Alessandro; Sabatini, Roberto; Ramasamy, Subramanian

    2016-05-01

    The continuous increase of air transport demand worldwide and the push for a more economically viable and environmentally sustainable aviation are driving significant evolutions of aircraft, airspace and airport systems design and operations. Although extensive research has been performed on the optimisation of aircraft trajectories and very efficient algorithms were widely adopted for the optimisation of vertical flight profiles, it is only in the last few years that higher levels of automation were proposed for integrated flight planning and re-routing functionalities of innovative Communication Navigation and Surveillance/Air Traffic Management (CNS/ATM) and Avionics (CNS+A) systems. In this context, the implementation of additional environmental targets and of multiple operational constraints introduces the need to efficiently deal with multiple objectives as part of the trajectory optimisation algorithm. This article provides a comprehensive review of Multi-Objective Trajectory Optimisation (MOTO) techniques for transport aircraft flight operations, with a special focus on the recent advances introduced in the CNS+A research context. In the first section, a brief introduction is given, together with an overview of the main international research initiatives where this topic has been studied, and the problem statement is provided. The second section introduces the mathematical formulation and the third section reviews the numerical solution techniques, including discretisation and optimisation methods for the specific problem formulated. The fourth section summarises the strategies to articulate the preferences and to select optimal trajectories when multiple conflicting objectives are introduced. The fifth section introduces a number of models defining the optimality criteria and constraints typically adopted in MOTO studies, including fuel consumption, air pollutant and noise emissions, operational costs, condensation trails, airspace and airport operations. A brief overview of atmospheric and weather modelling is also included. Key equations describing the optimality criteria are presented, with a focus on the latest advancements in the respective application areas. In the sixth section, a number of MOTO implementations in the CNS+A systems context are mentioned with relevant simulation case studies addressing different operational tasks. The final section draws some conclusions and outlines guidelines for future research on MOTO and associated CNS+A system implementations.

  16. Integrated model development for liquid fueled rocket propulsion systems

    NASA Technical Reports Server (NTRS)

    Santi, L. Michael

    1993-01-01

    As detailed in the original statement of work, the objective of phase two of this research effort was to develop a general framework for rocket engine performance prediction that integrates physical principles, a rigorous mathematical formalism, component level test data, system level test data, and theory-observation reconciliation. Specific phase two development tasks are defined.

  17. What Difference Does Patient and Public Involvement Make and What Are Its Pathways to Impact? Qualitative Study of Patients and Researchers from a Cohort of Randomised Clinical Trials

    PubMed Central

    Dudley, Louise; Gamble, Carrol; Preston, Jennifer; Buck, Deborah; Hanley, Bec; Williamson, Paula; Young, Bridget

    2015-01-01

    Background Patient and public involvement (PPI) is advocated in clinical trials yet evidence on how to optimise its impact is limited. We explored researchers' and PPI contributors' accounts of the impact of PPI within trials and factors likely to influence its impact. Methods Semi-structured qualitative interviews with researchers and PPI contributors accessed through a cohort of randomised clinical trials. Analysis of transcripts of audio-recorded interviews was informed by the principles of the constant comparative method, elements of content analysis and informant triangulation. Results We interviewed 21 chief investigators, 10 trial managers and 17 PPI contributors from 28 trials. The accounts of informants within the same trials were largely in agreement. Over half the informants indicated PPI had made a difference within a trial, through contributions that influenced either an aspect of a trial, or how researchers thought about a trial. According to informants, the opportunity for PPI to make a difference was influenced by two main factors: whether chief investigators had goals and plans for PPI and the quality of the relationship between the research team and the PPI contributors. Early involvement of PPI contributors and including them in responsive (e.g. advisory groups) and managerial (e.g. trial management groups) roles were more likely to achieve impact compared to late involvement and oversight roles (e.g. trial steering committees). Conclusion Those seeking to enhance PPI in trials should develop goals for PPI at an early stage that fit the needs of the trial, plan PPI implementation in accordance with these goals, invest in developing good relationships between PPI contributors and researchers, and favour responsive and managerial roles for contributors in preference to oversight-only roles. These features could be used by research funders in judging PPI in trial grant applications and to inform policies to optimise PPI within trials. PMID:26053063

  18. Nonlinear predictive control of a boiler-turbine unit: A state-space approach with successive on-line model linearisation and quadratic optimisation.

    PubMed

    Ławryńczuk, Maciej

    2017-03-01

    This paper details the development of a Model Predictive Control (MPC) algorithm for a boiler-turbine unit, which is a nonlinear multiple-input multiple-output process. The control objective is to follow set-point changes imposed on two state (output) variables and to satisfy constraints imposed on three inputs and one output. In order to obtain a computationally efficient control scheme, the state-space model is successively linearised on-line for the current operating point and used for prediction. As a consequence, the future control policy is easily calculated from a quadratic optimisation problem. For state estimation the extended Kalman filter is used. It is demonstrated that the MPC strategy based on constant linear models does not work satisfactorily for the boiler-turbine unit, whereas the discussed algorithm with on-line successive model linearisation gives practically the same trajectories as the truly nonlinear MPC controller with nonlinear optimisation repeated at each sampling instant. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
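
    A toy illustration of the successive on-line linearisation idea, using a scalar plant and a one-step horizon so the quadratic optimisation has a closed form. The dynamics, weights and set-point are invented for the sketch, not the boiler-turbine model:

```python
def f(x, u):
    """Toy nonlinear plant (a scalar stand-in for the boiler-turbine dynamics)."""
    return 0.9 * x + 0.5 * u - 0.05 * x ** 2

def linearise(f, x, u, eps=1e-6):
    """Numerical local model x_next ~= a*x + b*u + c at the operating point."""
    a = (f(x + eps, u) - f(x - eps, u)) / (2 * eps)
    b = (f(x, u + eps) - f(x, u - eps)) / (2 * eps)
    c = f(x, u) - a * x - b * u
    return a, b, c

def mpc_step(x, u_prev, ref, lam=0.01):
    """Re-linearise on-line, then minimise the quadratic cost
    (x_next - ref)^2 + lam * u^2, which has a closed-form optimiser."""
    a, b, c = linearise(f, x, u_prev)
    return -b * (a * x + c - ref) / (b * b + lam)

x, u = 2.0, 0.0
for _ in range(30):      # closed loop: linearise, optimise, apply, repeat
    u = mpc_step(x, u, ref=1.0)
    x = f(x, u)
```

    In the paper's setting the same pattern runs with state-space matrices, input/output constraints, and a constrained quadratic programme instead of the closed-form scalar minimiser.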

  19. Cosmos caudatus as a potential source of polyphenolic compounds: optimisation of oven drying conditions and characterisation of its functional properties.

    PubMed

    Mediani, Ahmed; Abas, Faridah; Khatib, Alfi; Tan, Chin Ping

    2013-08-29

    The aim of the study was to analyze the influence of oven thermal processing of Cosmos caudatus on the total polyphenolic content (TPC) and antioxidant capacity (DPPH) of two different solvent extracts (80% methanol, and 80% ethanol). Sonication was used to extract bioactive compounds from this herb. The results showed that the optimised conditions for the oven drying method for 80% methanol and 80% ethanol were 44.5 °C for 4 h with an IC₅₀ of 0.045 mg/mL and 43.12 °C for 4.05 h with an IC₅₀ of 0.055 mg/mL, respectively. The predicted values for TPC under the optimised conditions for 80% methanol and 80% ethanol were 16.5 and 15.8 mg GAE/100 g DW, respectively. The results obtained from this study demonstrate that Cosmos caudatus can be used as a potential source of antioxidants for food and medicinal applications.

  20. On the search for design principles in biological systems.

    PubMed

    Poyatos, Juan F

    2012-01-01

    The search for basic concepts and underlying principles was at the core of the systems approach to science and technology. This approach was somehow abandoned in mainstream biology after its initial proposal, due to the rise and success of molecular biology. This situation has changed. The accumulated knowledge of decades of molecular studies in combination with new technological advances, while further highlighting the intricacies of natural systems, is also bringing back the quest-for-principles research program. Here, I present two lessons that I derived from my own quest: the importance of studying biological information processing to identify common principles in seemingly unrelated contexts, and the adequacy of using known design principles at one level of biological organization as a valuable tool to help recognize principles at another. These and additional lessons should contribute to the ultimate goal of establishing principles able to integrate the many scales of biological complexity.

  1. Optimisation on processing parameters for minimising warpage on side arm using response surface methodology (RSM) and particle swarm optimisation (PSO)

    NASA Astrophysics Data System (ADS)

    Rayhana, N.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.; Sazli, M.; Yahya, Z. R.

    2017-09-01

    This study presents the application of an optimisation method to reduce the warpage of a side arm part. Autodesk Moldflow Insight software was integrated into this study to analyse the warpage. A design of experiments (DOE) for Response Surface Methodology (RSM) was constructed and, using the equation from RSM, Particle Swarm Optimisation (PSO) was applied. The optimisation method results in optimised processing parameters with minimum warpage. Mould temperature, melt temperature, packing pressure, packing time and cooling time were selected as the variable parameters. Parameter selection was based on the most significant factors affecting warpage as identified by previous researchers. The results show that warpage was improved by 28.16% for RSM and 28.17% for PSO; PSO improved on RSM by only 0.01%. Thus, optimisation using RSM is already sufficient to give the best combination of parameters and an optimum warpage value for the side arm part. The most significant parameter affecting warpage is packing pressure.
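
    The second stage of such a pipeline amounts to letting a particle swarm minimise the fitted response-surface polynomial. Below is a generic PSO minimiser on an assumed two-parameter quadratic surface; the normalised parameters and coefficients are made up for illustration, not the side arm model:

```python
import random

def pso(objective, bounds, n=20, iters=100, seed=1):
    """Standard particle swarm: each particle is pulled towards its personal
    best position and the swarm's global best position."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [objective(p) for p in pos]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = objective(pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:
                    gbest, gval = pos[i][:], v
    return gbest, gval

# Assumed RSM surface: warpage as a quadratic in two normalised parameters
# (e.g. melt temperature and packing pressure), minimum placed at (0.3, -0.5).
surface = lambda p: (p[0] - 0.3) ** 2 + (p[1] + 0.5) ** 2 + 0.2
best, val = pso(surface, bounds=[(-1, 1), (-1, 1)])
```

    On a smooth fitted polynomial like this, PSO and a direct RSM optimum nearly coincide, which is consistent with the 0.01% difference the study reports.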

  2. Conceptual principles of quality of life: an empirical exploration.

    PubMed

    Bramston, P; Chipuer, H; Pretty, G

    2005-10-01

    Quality of life is a popular measure of outcomes, and its widespread use has led to recent calls for a better understanding of the construct, emphasizing the need to build a substantial body of knowledge around what determines perceptions of life quality. Three widely reported and used conceptual principles are examined in this study. Self-ratings of life quality and three likely determinants at an individual level (stress), an interactional level (social support) and a community level (neighbourhood belonging) were used. The study involved two groups of young adults from an urban community, one identified as having an intellectual disability (ID). Results: Young adults with ID rated their satisfaction with health significantly higher, and intimacy and community involvement lower, than the comparison group. Social support emerged as the strongest predictor of life satisfaction across both groups. The conceptual principles of subjective quality of life provide a useful framework to discuss findings and to stimulate further research.

  3. Exploiting temporal gradients of antibiotic concentration against the emergence of resistance

    NASA Astrophysics Data System (ADS)

    Bauer, Marianne; Ngampruetikorn, Vudtiwat; Frey, Erwin; Stephens, Greg

    A very simple model of antibiotic resistance, involving one normal and one more resistant species interacting indirectly through a shared carrying capacity, shows that the temporal profile of antibiotic concentration can alter the antibiotic's effectiveness. For a single antibiotic pulse, we find that for different minimal inhibitory concentrations of the two species an optimal pulse shape may exist, which increases the likelihood of bacterial extinction. For a long series of pulses, efficiency does not vary monotonically with the length of the gap between two individual pulses; instead, the gap length can be optimised by exploiting the competition between the two species. Finally, a series of pulses is not always more efficient than a single pulse: shorter pulses may be more efficient in an initial time window without risking population-level resistance. We elucidate this behaviour with a phase diagram and discuss the implications of this work for current experiments.
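
    A minimal version of such a model, two strains coupled only through a shared carrying capacity and a kill rate applied during the antibiotic pulse, can be sketched as below. All rates, initial conditions and the Euler integration are illustrative assumptions, not the authors' parameterisation:

```python
def simulate(pulse_start, pulse_len, dt=0.01, t_end=50.0):
    """Euler-integrated logistic competition between a susceptible strain s and
    a resistant strain r sharing one carrying capacity K; during the antibiotic
    pulse the susceptible strain suffers the larger kill rate (it has the
    lower minimal inhibitory concentration)."""
    s, r, K = 0.9, 0.01, 1.0
    t = 0.0
    while t < t_end:
        drug = pulse_start <= t < pulse_start + pulse_len
        crowd = 1.0 - (s + r) / K        # indirect interaction via capacity
        ds = s * (1.0 * crowd - (0.8 if drug else 0.0))
        dr = r * (0.8 * crowd - (0.1 if drug else 0.0))
        s = max(s + ds * dt, 0.0)
        r = max(r + dr * dt, 0.0)
        t += dt
    return s, r

with_pulse = simulate(pulse_start=1.0, pulse_len=10.0)      # early pulse
no_pulse = simulate(pulse_start=float("inf"), pulse_len=0.0)
```

    Even this caricature shows the qualitative point: the same drug exposure, applied at different times relative to the competition, changes which strain dominates.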

  4. HEALTH TECHNOLOGY ASSESSMENT FOR DECISION MAKING IN LATIN AMERICA: GOOD PRACTICE PRINCIPLES.

    PubMed

    Pichon-Riviere, Andrés; Soto, Natalie C; Augustovski, Federico Ariel; García Martí, Sebastián; Sampietro-Colom, Laura

    2018-06-11

    The aim of this study was to identify good practice principles for health technology assessment (HTA) that are the most relevant and of highest priority for application in Latin America and to identify potential barriers to their implementation in the region. HTA good practice principles proposed at the international level were identified and then explored during a deliberative process in a forum of assessors, funders, and product manufacturers. Forty-two representatives from ten Latin American countries participated. Good practice principles proposed at the international level were considered valid and potentially relevant to Latin America. Five principles were identified as priority and with the greatest potential to be strengthened at this time: transparency in the production of HTA, involvement of relevant stakeholders in the HTA process, mechanisms to appeal decisions, clear priority-setting processes in HTA, and a clear link between HTA and decision making. The main challenge identified was to find a balance between the application of these principles and the available resources in a way that would not detract from the production of reports and adaptation to the needs of decision makers. The main recommendation was to progress gradually in strengthening HTA and its link to decision making by developing appropriate processes for each country, without trying to impose, in the short-term, standards taken from examples at the international level without adequate adaptation of these to local contexts.

  5. Comparing approaches for using climate projections in assessing water resources investments for systems with multiple stakeholder groups

    NASA Astrophysics Data System (ADS)

    Hurford, Anthony; Harou, Julien

    2015-04-01

    Climate change has challenged conventional methods of planning water resources infrastructure investment, which rely on the stationarity of time-series data. It is not clear how best to use projections of future climatic conditions. Many-objective simulation-optimisation and trade-off analysis using evolutionary algorithms have been proposed as an approach to addressing complex planning problems with multiple conflicting objectives. The search for promising assets and policies can be carried out across a range of climate projections, to identify the configurations of infrastructure investment shown by model simulation to be robust under diverse future conditions. Climate projections can be used in different ways within a simulation model to represent the range of possible future conditions and to understand how optimal investments vary according to the different hydrological conditions. We compare two approaches: optimising over an ensemble of different 20-year flow and PET time-series projections, and optimising separately for individual future scenarios built synthetically from the original ensemble. Comparing the trade-off curves and surfaces generated by the two approaches helps in understanding the limits and benefits of optimising under different sets of conditions. The comparison is made for the Tana Basin in Kenya, where climate change combined with multiple conflicting objectives of water management and infrastructure investment makes decision-making particularly challenging.
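
    The trade-off curves in such many-objective studies rest on Pareto dominance. A minimal non-dominated filter is sketched below; the candidate portfolios and their two objective scores (both minimised) are hypothetical, invented for illustration:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (all objectives minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical investment portfolios scored as (capital cost, supply deficit).
portfolios = [(10, 5), (8, 7), (12, 3), (9, 6), (11, 5), (8, 8)]
front = pareto_front(portfolios)
```

    An evolutionary algorithm of the kind the abstract describes repeatedly mutates and recombines candidates, keeping a population biased towards this non-dominated set.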

  6. Problem-Centered Supplemental Instruction in Biology: Influence on Content Recall, Content Understanding, and Problem Solving Ability

    NASA Astrophysics Data System (ADS)

    Gardner, Joel; Belland, Brian R.

    2017-08-01

    To address the need for effective, efficient ways to apply active learning in undergraduate biology courses, in this paper, we propose a problem-centered approach that utilizes supplemental web-based instructional materials based on principles of active learning. We compared two supplementary web-based modules using active learning strategies: the first used Merrill's First Principles of Instruction as a framework for organizing multiple active learning strategies; the second used a traditional web-based approach. Results indicated that (a) the First Principles group gained significantly from pretest to posttest at the Remember level (t(40) = -1.432, p = 0.08, ES = 0.4) and at the Problem Solving level (U = 142.5, N1 = 21, N2 = 21, p = .02, ES = 0.7) and (b) the Traditional group gained significantly from pretest to posttest at the Remember level (t(36) = 1.762, p = 0.043, ES = 0.6). Those in the First Principles group were significantly more likely than the Traditional group to be confident in their ability to solve problems in the future (χ²(2, N = 40) = 3.585, p = 0.09).

  7. The measurement of principled morality by the Kohlberg Moral Dilemma Questionnaire.

    PubMed

    Heilbrun, A B; Georges, M

    1990-01-01

    The four stages preceding the postconventional level in the Kohlberg (1958, 1971, 1976) system of moral development are described as involving moral judgments that conform to external conditions of punishment, reward, social expectation, and conformity to the law. No special level of self-control seems necessary to behave in keeping with these conditions of external reinforcement. In contrast, the two stages of postconventional (principled) morality involve defiance of majority opinion and defiance of the law, actions that would seem to require greater self-control. This study was concerned with whether postconventional moral reasoning, as measured by the Kohlberg Moral Dilemma Questionnaire (MDQ), can be associated with higher self-control. If so, prediction of principled moral behavior from the MDQ would be based not only on postconventional moral reasoning but bolstered by the necessary level of self-control as well. College students who came the closest to postconventional moral reasoning showed better self-control than college students who were more conventional or preconventional in their moral judgments. These results support the validity of the MDQ for predicting principled moral behavior.

  8. The canonical equation of adaptive dynamics for life histories: from fitness-returns to selection gradients and Pontryagin's maximum principle.

    PubMed

    Metz, Johan A Jacob; Staňková, Kateřina; Johansson, Jacob

    2016-03-01

    This paper should be read as an addendum to Dieckmann et al. (J Theor Biol 241:370-389, 2006) and Parvinen et al. (J Math Biol 67:509-533, 2013). Our goal is, using little more than high-school calculus, to (1) exhibit the form of the canonical equation (CE) of adaptive dynamics for classical life history problems, where the examples in Dieckmann et al. (J Theor Biol 241:370-389, 2006) and Parvinen et al. (J Math Biol 67:509-533, 2013) are chosen such that they avoid a number of the problems that one gets in this most relevant of applications, (2) derive the fitness gradient occurring in the CE from simple fitness return arguments, (3) show explicitly that setting said fitness gradient equal to zero results in the classical marginal value principle from evolutionary ecology, (4) show that the latter in turn is equivalent to Pontryagin's maximum principle, a well-known equivalence that, however, in the literature is given either ex cathedra or is proven with more advanced tools, (5) connect the classical optimisation arguments of life history theory a little better to real biology (Mendelian populations with separate sexes subject to an environmental feedback loop), (6) make a minor improvement to the form of the CE for the examples in Dieckmann et al. and Parvinen et al.
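
    In generic adaptive-dynamics notation (the symbols are assumed for illustration, not necessarily the paper's own), the chain from canonical equation to marginal value condition can be summarised as:

```latex
% Invasion fitness of a rare mutant trait y in the environment E_x
% set by the resident trait x:
s_x(y) \;=\; \ln R_0\!\left(y \mid E_x\right)

% The canonical equation moves the resident trait up the selection gradient:
\frac{\mathrm{d}x}{\mathrm{d}t} \;\propto\;
  \left.\frac{\partial s_x(y)}{\partial y}\right|_{y=x}

% Evolutionarily singular strategies x^* are where the gradient vanishes,
% i.e. the marginal fitness return of a trait increase balances its
% marginal cost -- the marginal value principle:
\left.\frac{\partial s_x(y)}{\partial y}\right|_{y=x^{*}} \;=\; 0
```

    The paper's point (4) is that this stationarity condition, applied along a life history, is equivalent to Pontryagin's maximum principle for the corresponding optimal-control formulation.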

  9. Achieving Integration in Mixed Methods Designs—Principles and Practices

    PubMed Central

    Fetters, Michael D; Curry, Leslie A; Creswell, John W

    2013-01-01

    Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed method designs—exploratory sequential, explanatory sequential, and convergent—and through four advanced frameworks—multistage, intervention, case study, and participatory. Integration at the methods level occurs through four approaches. In connecting, one database links to the other through sampling. With building, one database informs the data collection approach of the other. When merging, the two databases are brought together for analysis. With embedding, data collection and analysis link at multiple points. Integration at the interpretation and reporting level occurs through narrative, data transformation, and joint display. The fit of integration describes the extent the qualitative and quantitative findings cohere. Understanding these principles and practices of integration can help health services researchers leverage the strengths of mixed methods. PMID:24279835

  10. Achieving integration in mixed methods designs-principles and practices.

    PubMed

    Fetters, Michael D; Curry, Leslie A; Creswell, John W

    2013-12-01

    Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed method designs-exploratory sequential, explanatory sequential, and convergent-and through four advanced frameworks-multistage, intervention, case study, and participatory. Integration at the methods level occurs through four approaches. In connecting, one database links to the other through sampling. With building, one database informs the data collection approach of the other. When merging, the two databases are brought together for analysis. With embedding, data collection and analysis link at multiple points. Integration at the interpretation and reporting level occurs through narrative, data transformation, and joint display. The fit of integration describes the extent the qualitative and quantitative findings cohere. Understanding these principles and practices of integration can help health services researchers leverage the strengths of mixed methods. © Health Research and Educational Trust.

  11. Application of snakes and dynamic programming optimisation technique in modeling of buildings in informal settlement areas

    NASA Astrophysics Data System (ADS)

    Rüther, Heinz; Martine, Hagai M.; Mtalo, E. G.

    This paper presents a novel approach to semiautomatic building extraction in informal settlement areas from aerial photographs. The proposed approach uses a strategy of delineating buildings by optimising their approximate building contour position. Approximate building contours are derived automatically by locating elevation blobs in digital surface models. Building extraction is then effected by means of the snakes algorithm and the dynamic programming optimisation technique. With dynamic programming, the building contour optimisation problem is realized through a discrete multistage process and solved by the "time-delayed" algorithm, as developed in this work. The proposed building extraction approach is a semiautomatic process, with user-controlled operations linking fully automated subprocesses. Inputs into the proposed building extraction system are ortho-images and digital surface models, the latter being generated through image matching techniques. Buildings are modeled as "lumps" or elevation blobs in digital surface models, which are derived by altimetric thresholding of digital surface models. Initial windows for building extraction are provided by projecting the centre points of the elevation blobs onto an ortho-image. In the next step, approximate building contours are extracted from the ortho-image by region growing constrained by edges. Approximate building contours thus derived are inputs into the dynamic programming optimisation process in which final building contours are established. The proposed system is tested on two study areas: Marconi Beam in Cape Town, South Africa, and Manzese in Dar es Salaam, Tanzania. Sixty percent of buildings in the study areas have been extracted and verified, and it is concluded that the proposed approach contributes meaningfully to the extraction of buildings in moderately complex and crowded informal settlement areas.
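
    The dynamic-programming stage of a snake optimisation can be sketched as a Viterbi-style recursion over candidate contour positions: each contour vertex is a stage, and each candidate offset pays its external (image) energy plus a smoothness penalty against the previous vertex's choice. The one-dimensional offsets, energies and smoothness weight below are toy values, not the authors' formulation:

```python
def snake_dp(ext_energy, n_states, alpha=1.0):
    """Discrete multistage contour optimisation: pick one of `n_states`
    candidate offsets per vertex, minimising external energy plus an
    internal smoothness cost between consecutive vertices."""
    cost = [ext_energy[0][:]]        # cost[t][s]: best cost ending in state s
    back = []                        # backpointers for path recovery
    for t in range(1, len(ext_energy)):
        row, brow = [], []
        for s in range(n_states):
            # Best predecessor, trading accumulated cost against smoothness.
            c, p = min(
                (cost[-1][q] + alpha * (s - q) ** 2, q) for q in range(n_states)
            )
            row.append(c + ext_energy[t][s])
            brow.append(p)
        cost.append(row)
        back.append(brow)
    s = min(range(n_states), key=lambda i: cost[-1][i])
    total = cost[-1][s]
    path = [s]
    for brow in reversed(back):      # backtrack the optimal contour
        s = brow[s]
        path.append(s)
    return path[::-1], total

# Toy external energies: 4 contour vertices, 3 candidate offsets each;
# low values mark where the image edge lies.
E = [[5, 1, 5], [5, 1, 5], [5, 2, 4], [5, 1, 5]]
path, total = snake_dp(E, n_states=3)
```

    Unlike gradient-descent snakes, the DP formulation finds the global optimum over the discretised candidate set in a single forward-backward pass.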

  12. Optimisation of colour stability of cured ham during packaging and retail display by a multifactorial design.

    PubMed

    Møller, Jens K S; Jakobsen, Marianne; Weber, Claus J; Martinussen, Torben; Skibsted, Leif H; Bertelsen, Grete

    2003-02-01

    A multifactorial design, including (1) percent residual oxygen, (2) oxygen transmission rate of the packaging film (OTR), (3) product-to-headspace volume ratio, (4) illuminance level and (5) nitrite level during curing, was established to investigate factors affecting light-induced oxidative discoloration of cured ham (packaged in a modified atmosphere of 20% carbon dioxide balanced with nitrogen) during 14 days of chill storage. Univariate statistical analysis found significant effects of all main factors on the redness (tristimulus a-value) of the ham. Subsequently, Response Surface Modelling of the data further proved that the interactions between packaging and storage conditions are important when optimising colour stability. The measured content of oxygen in the headspace was incorporated in the model, and the interaction between the measured headspace oxygen content and the product-to-headspace volume ratio was found to be crucial. Thus, it is not enough to keep the headspace oxygen level low: if the headspace volume is at the same time large, there will still be sufficient oxygen for colour-deteriorating processes to take place.
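
    Response surface modelling of a design like this typically fits a polynomial with interaction terms by least squares. The sketch below is a toy illustration on synthetic data, not the authors' model: two coded factors stand in for residual oxygen and the product-to-headspace ratio, and the fitted interaction coefficient is recovered.

```python
import numpy as np

# Coded factor levels (-1, 0, +1) for a toy two-factor face of the design:
# x1 stands in for residual oxygen, x2 for the product-to-headspace ratio.
x1, x2 = np.meshgrid([-1.0, 0.0, 1.0], [-1.0, 0.0, 1.0])
x1, x2 = x1.ravel(), x2.ravel()

# Synthetic redness (a-value) response containing a genuine x1*x2 interaction.
a_value = 10.0 - 2.0 * x1 + 1.0 * x2 - 1.5 * x1 * x2

# Response-surface design matrix with an interaction column, fitted by
# ordinary least squares; the interaction coefficient is recovered.
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, a_value, rcond=None)
```

    A significant interaction coefficient (here the last entry of `beta`) is exactly the kind of packaging-by-storage effect the abstract describes: the influence of headspace oxygen on redness depends on the headspace volume.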

  13. Learning from evolutionary optimisation: what are toughening mechanisms good for in dentine, a nonrepairing bone tissue?

    PubMed

    Zaslansky, Paul; Currey, John D; Fleck, Claudia

    2016-09-12

    The main mass of material found in teeth is dentine, a bone-like tissue, riddled with micron-sized tubules and devoid of living cells. It provides support to the outer wear-resistant layer of enamel, and exhibits toughening mechanisms which contribute to crack resistance. Yet, unlike most bone tissues, dentine does not remodel, and consequently any accumulated damage does not 'self repair'. Because damage containment followed by tissue replacement is a prime reason for the crack-arresting microstructures found in most bones, the occurrence of toughening mechanisms without the biological capability to repair is puzzling. Here we consider the notion that dentine might be overdesigned for strength, because it has to compensate for the lack of cell-mediated healing mechanisms. Based on our own and on literature-reported observations, including quasistatic and fatigue properties, dentine design principles are discussed in light of the functional conditions under which teeth evolved. We conclude that dentine is only slightly overdesigned for everyday cyclic loading, because usual mastication stresses may come close to its endurance strength. The in-built toughening mechanisms constitute an evolutionary benefit because they prevent catastrophic failure during rare overload events, which was probably very advantageous in the times of our hunter-gatherer ancestors. From a bio-inspired perspective, understanding the extent of evolutionary overdesign might be useful for optimising biomimetic structures used for load bearing.

  14. Diatom cultivation and biotechnologically relevant products. Part II: current and putative products.

    PubMed

    Lebeau, T; Robert, J-M

    2003-02-01

    While diatoms are widely present in terms of diversity and abundance in nature, few species are currently used for biotechnological applications. Most studies have focussed on intracellularly synthesised eicosapentaenoic acid (EPA), a polyunsaturated fatty acid (PUFA) used for pharmaceutical applications. Applications for other intracellular molecules, such as total lipids for biodiesel, amino acids for cosmetics, antibiotics and antiproliferative agents, are at an early stage of development. In addition, the active principle must be identified amongst the many compounds of biotechnological interest. Biomass from diatom culture may be applied to: (1) aquaculture diets, due to the lipid- and amino-acid-rich cell contents of these microorganisms, and (2) the treatment of water contaminated by phosphorus and nitrogen in aquaculture effluent, or by heavy metals (bioremediation). The most original application of microalgal biomass, and specifically diatoms, is the use of silicon derived from frustules in nanotechnology. The competitiveness of biotechnologically relevant products from diatoms will depend on their cost of production. Apart from EPA, which is less expensive when obtained from Phaeodactylum tricornutum than from cod liver, comparative economic studies of other diatom-derived products, as well as optimisation of culture conditions, are needed. Extraction of intracellular metabolites should also be optimised to reduce production costs, as has already been shown for EPA. Using cell immobilisation techniques, benthic diatoms can be cultivated more efficiently, allowing new, biotechnologically relevant products to be investigated.

  15. SENSOR: a tool for the simulation of hyperspectral remote sensing systems

    NASA Astrophysics Data System (ADS)

    Börner, Anko; Wiest, Lorenz; Keller, Peter; Reulke, Ralf; Richter, Rolf; Schaepman, Michael; Schläpfer, Daniel

    The consistent end-to-end simulation of airborne and spaceborne earth remote sensing systems is an important task, and sometimes the only way, to adapt and optimise a sensor and its observation conditions, to choose and test algorithms for data processing, to estimate errors and to evaluate the capabilities of the whole sensor system. The presented software simulator SENSOR (Software Environment for the Simulation of Optical Remote sensing systems) includes a full model of the sensor hardware, the observed scene, and the atmosphere in between. The simulator consists of three parts. The first part describes the geometrical relations between scene, sun, and the remote sensing system using a ray-tracing algorithm. The second part of the simulation environment considers the radiometry. It calculates the at-sensor radiance using a pre-calculated multidimensional lookup-table taking the atmospheric influence on the radiation into account. The third part consists of an optical and an electronic sensor model for the generation of digital images. Using SENSOR for an optimisation requires the additional application of task-specific data processing algorithms. The principle of the end-to-end-simulation approach is explained, all relevant concepts of SENSOR are discussed, and first examples of its use are given. The verification of SENSOR is demonstrated. This work is closely related to the Airborne PRISM Experiment (APEX), an airborne imaging spectrometer funded by the European Space Agency.

  16. An optimized low-power voltage controlled oscillator

    NASA Astrophysics Data System (ADS)

    Shah, Kriyang; Le, Hai Phuong; Singh, Jugdutt

    2007-01-01

    This paper presents an optimised low-power, low-phase-noise Voltage Controlled Oscillator (VCO) for Bluetooth wireless applications. The system-level design issues and trade-offs related to the Direct Conversion Receiver (DCR) and Low Intermediate Frequency (IF) architectures for Bluetooth are discussed. Subsequently, for a low-IF architecture, the critical VCO performance parameters are derived from the system specifications. The VCO presented in the paper is optimised by implementing a novel biasing circuit that employs two current mirrors, one at the top and the other at the bottom of the cross-coupled complementary VCO, to give an exact replica of the current in both arms of the current mirror circuit. This approach significantly reduces the power consumption as well as improving the system performance. Results show that the VCO consumes only 281 μW of power from a 2 V supply. Its phase noise is -115 dBc/Hz, -130 dBc/Hz and -141 dBc/Hz at offset frequencies of 1 MHz, 3 MHz and 5 MHz, respectively. The results indicate that a 31% reduction in power consumption is achieved compared with the traditional VCO design. These characteristics make the designed VCO a strong candidate for Bluetooth wireless applications where power consumption is the major issue.

  17. Cuttlebone-like V2O5 Nanofibre Scaffolds - Advances in Structuring Cellular Solids.

    PubMed

    Knöller, Andrea; Runčevski, Tomče; Dinnebier, Robert E; Bill, Joachim; Burghard, Zaklina

    2017-02-20

    The synthesis of ceramic materials combining high porosity and permeability with good mechanical stability is challenging, as optimising the latter requires compromises regarding the first two properties. Nonetheless, significant progress can be made in this direction by taking advantage of the structural design principles evolved by nature. Natural cellular solids achieve good mechanical stability via a defined hierarchical organisation of the building blocks they are composed of. Here, we report the first synthetic, ceramic-based scaffold whose architecture closely mimics that of cuttlebone, a structural biomaterial whose porosity exceeds that of most other natural cellular solids whilst preserving excellent mechanical strength. The nanostructured, single-component scaffold, obtained by ice-templated assembly of V2O5 nanofibres, features a highly sophisticated and elaborate architecture of equally spaced lamellae, which are regularly connected by pillars acting as lamella supports. It displays an unprecedented porosity of 99.8%, complemented by enhanced mechanical stability. This novel bioinspired, functional material not only displays mechanical characteristics similar to natural cuttlebone, but the multifunctionality of the V2O5 nanofibres also opens up applications including catalysts, sensors and electrodes for energy storage.

  18. Cuttlebone-like V2O5 Nanofibre Scaffolds - Advances in Structuring Cellular Solids

    NASA Astrophysics Data System (ADS)

    Knöller, Andrea; Runčevski, Tomče; Dinnebier, Robert E.; Bill, Joachim; Burghard, Zaklina

    2017-02-01

    The synthesis of ceramic materials combining high porosity and permeability with good mechanical stability is challenging, as optimising the latter requires compromises regarding the first two properties. Nonetheless, significant progress can be made in this direction by taking advantage of the structural design principles evolved by nature. Natural cellular solids achieve good mechanical stability via a defined hierarchical organisation of the building blocks they are composed of. Here, we report the first synthetic, ceramic-based scaffold whose architecture closely mimics that of cuttlebone, a structural biomaterial whose porosity exceeds that of most other natural cellular solids whilst preserving excellent mechanical strength. The nanostructured, single-component scaffold, obtained by ice-templated assembly of V2O5 nanofibres, features a highly sophisticated and elaborate architecture of equally spaced lamellae, which are regularly connected by pillars acting as lamella supports. It displays an unprecedented porosity of 99.8%, complemented by enhanced mechanical stability. This novel bioinspired, functional material not only displays mechanical characteristics similar to natural cuttlebone, but the multifunctionality of the V2O5 nanofibres also opens up applications including catalysts, sensors and electrodes for energy storage.

  19. Chemical study, antioxidant, anti-hypertensive, and cytotoxic/cytoprotective activities of Centaurea cyanus L. petals aqueous extract.

    PubMed

    Escher, Graziela Bragueto; Santos, Jânio Sousa; Rosso, Neiva Deliberali; Marques, Mariza Boscacci; Azevedo, Luciana; do Carmo, Mariana Araújo Vieira; Daguer, Heitor; Molognoni, Luciano; Prado-Silva, Leonardo do; Sant'Ana, Anderson S; da Silva, Marcia Cristina; Granato, Daniel

    2018-05-19

    This study aimed to optimise the experimental conditions for extraction of the phytochemical compounds and functional properties of Centaurea cyanus petals. The following parameters were determined: the chemical composition (LC-ESI-MS/MS), the effects of pH on the stability and antioxidant activity of anthocyanins, the inhibition of lipid peroxidation, antioxidant activity, anti-hemolytic activity, antimicrobial, anti-hypertensive, and cytotoxic/cytoprotective effects, and the measurements of intracellular reactive oxygen species. Results showed that temperature and time influenced (p ≤ 0.05) the content of flavonoids, anthocyanins, and FRAP. Only temperature influenced the total phenolic content, non-anthocyanin flavonoids, and antioxidant activity (DPPH). The statistical approach made it possible to obtain optimised experimental extraction conditions that increase the level of bioactive compounds. Chlorogenic, caffeic, ferulic, and p-coumaric acids, isoquercitrin, and coumarin were identified as the major compounds in the optimised extract. The optimised extract presented anti-hemolytic and anti-hypertensive activity in vitro, in addition to showing stability and reversibility of anthocyanins and antioxidant activity with pH variation. The C. cyanus petals aqueous extract exhibited high IC50 and GI50 (>900 μg/mL) values for all cell lines, meaning low cytotoxicity. Based on the oxidative stress assay, the extract exhibited pro-oxidant action (10-100 μg/mL) but did not cause damage or cell death.

  20. The optimisation, design and verification of feed horn structures for future Cosmic Microwave Background missions

    NASA Astrophysics Data System (ADS)

    McCarthy, Darragh; Trappe, Neil; Murphy, J. Anthony; O'Sullivan, Créidhe; Gradziel, Marcin; Doherty, Stephen; Huggard, Peter G.; Polegro, Arturo; van der Vorst, Maarten

    2016-05-01

    In order to investigate the origins of the Universe, it is necessary to carry out full-sky surveys of the temperature and polarisation of the Cosmic Microwave Background (CMB) radiation, the remnant of the Big Bang. Missions such as COBE and Planck have previously mapped the CMB temperature; however, in order to further constrain evolutionary and inflationary models, it is necessary to measure the polarisation of the CMB with greater accuracy and sensitivity than before. Missions undertaking such observations require large arrays of feed horn antennas to feed the detector arrays. Corrugated horns provide the best performance; however, owing to the large number required (circa 5000 in the case of the proposed COrE+ mission), such horns are prohibitive in terms of thermal, mechanical and cost limitations. In this paper we consider the optimisation of an alternative smooth-walled piecewise-conical profiled horn, using the mode-matching technique alongside a genetic algorithm. The technique is optimised to return a suitable design using efficient modelling software and standard desktop computing power. A design is presented showing a directional beam pattern and low levels of return loss, cross-polar power and sidelobes, as required by future CMB missions. This design is manufactured and the measured results are compared with simulation, showing excellent agreement and meeting the required performance criteria. The optimisation process described here is robust and can be applied to many other applications where specific performance characteristics are required, with the user simply defining the beam requirements.
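
    A genetic algorithm over a piecewise-conical profile can be sketched generically. The code below is not the authors' mode-matching optimiser: the fitness function is a hypothetical stand-in (deviation of the section radii from a smooth target flare, plus an oscillation penalty), and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(radii):
    # Hypothetical stand-in for a mode-matching figure of merit: deviation
    # from a smooth target flare plus a section-to-section oscillation penalty.
    target = np.linspace(1.0, 3.0, radii.size)
    return np.sum((radii - target) ** 2) + 0.1 * np.sum(np.diff(radii) ** 2)

def optimise_profile(n_sections=8, pop_size=40, n_gen=60):
    """Real-coded genetic algorithm: tournament selection, blend crossover,
    Gaussian mutation and elitism over piecewise-conical section radii."""
    pop = rng.uniform(0.5, 4.0, (pop_size, n_sections))
    for _ in range(n_gen):
        costs = np.array([fitness(p) for p in pop])
        # binary tournament selection
        idx = rng.integers(0, pop_size, (pop_size, 2))
        winners = np.where(costs[idx[:, 0]] < costs[idx[:, 1]], idx[:, 0], idx[:, 1])
        parents = pop[winners]
        # blend crossover with a random mate, then Gaussian mutation
        mates = parents[rng.permutation(pop_size)]
        alpha = rng.random((pop_size, 1))
        children = alpha * parents + (1.0 - alpha) * mates
        children += rng.normal(0.0, 0.05, children.shape)
        children[0] = pop[np.argmin(costs)]  # elitism: keep the best profile
        pop = children
    costs = np.array([fitness(p) for p in pop])
    return pop[np.argmin(costs)], float(costs.min())

best_profile, best_cost = optimise_profile()
```

    In the paper's setting each fitness evaluation would instead run the mode-matching analysis of the candidate horn, which is why an efficient forward model matters far more than the GA bookkeeping.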

  1. Cost-effective control of plant disease when epidemiological knowledge is incomplete: modelling Bahia bark scaling of citrus.

    PubMed

    Cunniffe, Nik J; Laranjeira, Francisco F; Neri, Franco M; DeSimone, R Erik; Gilligan, Christopher A

    2014-08-01

    A spatially explicit, stochastic model is developed for Bahia bark scaling, a threat to citrus production in north-eastern Brazil, and is used to assess epidemiological principles underlying the cost-effectiveness of disease control strategies. The model is fitted via Markov chain Monte Carlo with data augmentation to snapshots of disease spread derived from a previously reported multi-year experiment. Goodness-of-fit tests strongly supported the fit of the model, even though the detailed etiology of the disease is unknown and was not explicitly included in the model. Key epidemiological parameters, including the infection rate, incubation period and scale of dispersal, are estimated from the spread data. This allows us to scale up the experimental results to predict the effect of the level of initial inoculum on disease progression in a typically sized citrus grove. The efficacies of two cultural control measures are assessed: altering the spacing of host plants, and roguing symptomatic trees. Reducing planting density can slow disease spread significantly if the distance between hosts is sufficiently large. However, low-density groves have fewer plants per hectare. The optimum density of productive plants is therefore recovered at an intermediate host spacing. Roguing, even when detection of symptomatic plants is imperfect, can lead to very effective control. However, scouting for disease symptoms incurs a cost. We use the model to balance the cost of scouting against the number of plants lost to disease, and show how to determine a roguing schedule that optimises profit. The trade-offs underlying the two optima we identify, the optimal host spacing and the optimal roguing schedule, are applicable to many pathosystems. Our work demonstrates how a carefully parameterised mathematical model can be used to find these optima. It also illustrates how mathematical models can be used in even this most challenging of situations, in which the underlying epidemiology is ill understood.
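
    Balancing scouting cost against disease losses can be illustrated with a deliberately crude deterministic sketch, not the authors' stochastic spatial model: infection grows logistically between scouting visits, each visit incurs a fixed cost and removes symptomatic plants, and profit is scanned over candidate roguing intervals. All rates and costs are invented for illustration.

```python
def profit(interval, beta=0.02, n_plants=1000, horizon=360,
           plant_value=10.0, scout_cost=150.0):
    """Crude deterministic sketch: infection grows logistically between
    scouting visits; every `interval` days the grove is scouted (fixed
    cost) and symptomatic plants are rogued."""
    infected, removed, cost = 1.0, 0.0, 0.0
    for day in range(1, horizon + 1):
        susceptible = n_plants - infected - removed
        infected += beta * infected * susceptible / n_plants
        if day % interval == 0:
            cost += scout_cost
            removed += infected      # rogue everything symptomatic
            infected = 0.5           # residual cryptic infection re-seeds
    healthy = n_plants - infected - removed
    return healthy * plant_value - cost

# Scan candidate roguing intervals: frequent scouting costs too much,
# infrequent scouting lets the epidemic grow, so the optimum is interior.
intervals = range(10, 181, 10)
best_interval = max(intervals, key=profit)
```

    Even this toy version reproduces the qualitative trade-off in the abstract: profit peaks at an intermediate roguing interval rather than at either extreme.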

  2. JHR Project: a future Material Testing Reactor working as an International user Facility: The key-role of instrumentation in support to the development of modern experimental capacity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bignan, G.; Gonnier, C.; Lyoussi, A.

    2015-07-01

    Research and development on fuel and material behaviour under irradiation is a key issue for sustainable nuclear energy, in order to meet specific needs while keeping the best level of safety. These needs mainly deal with a constant improvement of performance and safety in order to optimise the fuel cycle and hence reach nuclear energy sustainability objectives. A sustainable nuclear energy requires a high level of performance in order to meet specific needs such as: - Pursuing improvement of the performance and safety of present and coming water-cooled reactor technologies. This will require continuous R and D support following a long-term trend driven by plant life management, safety demonstration, flexibility and economics improvement. Experimental irradiations of structural materials are necessary to anticipate these material behaviours and will contribute to their optimisation. - Continuously upgrading nuclear fuel technology in present and future nuclear power plants to achieve better performance and to optimise the fuel cycle while keeping the best level of safety. Fuel evolution for generation II, III and III+ is a key stake requiring developments, qualification tests and safety experiments to ensure competitiveness and safety: experimental tests exploring the full range of fuel behaviour determine fuel stability limits and safety margins, as a major input for the fuel reliability analysis. To achieve such accurate and innovative progress, specific and ad hoc instrumentation, irradiation devices and measurement methods must be set up inside or beside the material testing reactor (MTR) core. These experiments require sophisticated in situ and on-line measurements to accurately determine key parameters such as thermal and fast neutron fluxes and nuclear heating, in order to precisely monitor and control the conducted assays.
The new Material Testing Reactor JHR (Jules Horowitz Reactor), currently under construction at the CEA Cadarache research centre in the south of France, will represent a major research infrastructure for scientific studies regarding material and fuel behaviour under irradiation. It will also be devoted to medical isotope production. Hence JHR will offer a real opportunity to perform R and D programmes addressing the needs above, and will crucially contribute to the selection, optimisation and qualification of these innovative materials and fuels. The JHR reactor objectives, principles and main characteristics, together with the specific experimental devices, measurement techniques and methodology, their performances, their limitations and their fields of application, will be presented and discussed.

  3. Astroparticle and neutrino oscillation research with KM3NeT

    NASA Astrophysics Data System (ADS)

    Kulikovskiy, V.

    2017-05-01

    Two next-generation underwater neutrino telescopes are under construction in the Mediterranean Sea by the KM3NeT Collaboration. The first, ORCA, optimised for atmospheric neutrino detection, will be capable of determining the neutrino mass hierarchy with >3σ significance after three years of operation, i.e. as early as 2023. The second, ARCA, is optimised for high-energy neutrino astronomy. Its location allows for surveying most of the Galactic Plane, including the Galactic Centre and the most promising source candidates. The neutrino diffuse emission flux measured by the IceCube Collaboration can be observed with 5σ significance in less than one year.

  4. Evaluation and optimisation of preparative semi-automated electrophoresis systems for Illumina library preparation.

    PubMed

    Quail, Michael A; Gu, Yong; Swerdlow, Harold; Mayho, Matthew

    2012-12-01

    Size selection can be a critical step in the preparation of next-generation sequencing libraries. Traditional methods employing gel electrophoresis lack reproducibility, are labour intensive, do not scale well and employ hazardous intercalating dyes. In a high-throughput setting, solid-phase reversible immobilisation beads are commonly used for size selection, but result in quite a broad fragment size range. We have evaluated and optimised the use of two semi-automated preparative DNA electrophoresis systems, the Caliper LabChip XT and the Sage Science Pippin Prep, for size selection of Illumina sequencing libraries.

  5. First on-sky results of a neural network based tomographic reconstructor: Carmen on Canary

    NASA Astrophysics Data System (ADS)

    Osborn, J.; Guzman, D.; de Cos Juez, F. J.; Basden, A. G.; Morris, T. J.; Gendron, É.; Butterley, T.; Myers, R. M.; Guesalaga, A.; Sanchez Lasheras, F.; Gomez Victoria, M.; Sánchez Rodríguez, M. L.; Gratadour, D.; Rousset, G.

    2014-07-01

    We present on-sky results obtained with Carmen, an artificial neural network tomographic reconstructor. It was tested during two nights in July 2013 on Canary, an AO demonstrator on the William Herschel Telescope. Carmen is trained during the day on the Canary calibration bench. This training regime ensures that Carmen is entirely flexible in terms of the atmospheric turbulence profile, negating any need to re-optimise the reconstructor in changing atmospheric conditions. Carmen was run in short bursts, interlaced with an optimised Learn and Apply reconstructor. We found the performance of Carmen to be approximately 5% lower than that of Learn and Apply.

  6. Boundary element based multiresolution shape optimisation in electrostatics

    NASA Astrophysics Data System (ADS)

    Bandara, Kosala; Cirak, Fehmi; Of, Günther; Steinbach, Olaf; Zapletal, Jan

    2015-09-01

    We consider the shape optimisation of high-voltage devices subject to electrostatic field equations by combining fast boundary elements with multiresolution subdivision surfaces. The geometry of the domain is described with subdivision surfaces and different resolutions of the same geometry are used for optimisation and analysis. The primal and adjoint problems are discretised with the boundary element method using a sufficiently fine control mesh. For shape optimisation the geometry is updated starting from the coarsest control mesh with increasingly finer control meshes. The multiresolution approach effectively prevents the appearance of non-physical geometry oscillations in the optimised shapes. Moreover, there is no need for mesh regeneration or smoothing during the optimisation due to the absence of a volume mesh. We present several numerical experiments and one industrial application to demonstrate the robustness and versatility of the developed approach.
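
    The coarse-to-fine idea can be sketched in one dimension, with piecewise-linear control values standing in for subdivision control meshes: optimise a coarse set of controls first, subdivide, then refine. The target function and gradient-descent settings below are purely illustrative, not the boundary element formulation of the paper.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 101)
target = np.sin(np.pi * t)  # illustrative stand-in for the analysis objective

def curve(ctrl):
    # 1-D analogue of a subdivision surface: piecewise-linear interpolation
    knots = np.linspace(0.0, 1.0, ctrl.size)
    return np.interp(t, knots, ctrl)

def refine(ctrl):
    # midpoint subdivision: insert the average between neighbouring controls
    mid = 0.5 * (ctrl[:-1] + ctrl[1:])
    out = np.empty(ctrl.size + mid.size)
    out[0::2], out[1::2] = ctrl, mid
    return out

def optimise(ctrl, lr=0.5, n_iter=400):
    """Gradient descent on 0.5 * mean((curve - target)^2); the gradient for
    each control value is the residual weighted by its hat function."""
    for _ in range(n_iter):
        knots = np.linspace(0.0, 1.0, ctrl.size)
        r = curve(ctrl) - target
        grad = np.array([np.mean(r * np.interp(t, knots, np.eye(ctrl.size)[i]))
                         for i in range(ctrl.size)])
        ctrl = ctrl - lr * grad
    return ctrl

coarse = optimise(np.zeros(3))    # optimise on the coarse control mesh first
fine = optimise(refine(coarse))   # subdivide, then refine the optimum
```

    Starting the fine-level optimisation from the subdivided coarse optimum mirrors the paper's strategy: the coarse level fixes the overall shape cheaply, so the fine level only makes small, smooth corrections rather than introducing oscillations.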

  7. Tail mean and related robust solution concepts

    NASA Astrophysics Data System (ADS)

    Ogryczak, Włodzimierz

    2014-01-01

    Robust optimisation might be viewed as a multicriteria optimisation problem where the objectives correspond to the scenarios, although their probabilities are unknown or imprecise. The simplest robust solution concept represents a conservative approach focused on optimising the worst-case scenario results. A softer concept allows one to optimise the tail mean, thus combining performances under multiple worst scenarios. We show that, when considering robust models allowing the probabilities to vary only within given intervals, the tail mean represents the robust solution only for upper-bounded probabilities. For arbitrary intervals of probabilities, the corresponding robust solution may be expressed by the optimisation of appropriately combined mean and tail mean criteria, thus remaining easily implementable with auxiliary linear inequalities. Moreover, we use the tail mean concept to develop linear programming implementable robust solution concepts related to risk-averse optimisation criteria.
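
    The tail mean itself is easy to state concretely: for a maximisation problem it is the average of the worst β-fraction of scenario outcomes, and the combined criterion described above mixes it convexly with the overall mean. A minimal numerical sketch (not the paper's LP formulation):

```python
import numpy as np

def tail_mean(outcomes, beta):
    """Mean of the worst beta-fraction of scenario outcomes (the lowest
    values, for a maximisation problem)."""
    x = np.sort(np.asarray(outcomes, dtype=float))
    k = max(1, int(np.ceil(beta * x.size)))
    return float(x[:k].mean())

def combined_criterion(outcomes, beta, lam):
    # Convex combination of the overall mean and the tail mean; lam = 1
    # recovers the plain mean, lam = 0 the pure tail-mean criterion.
    o = np.asarray(outcomes, dtype=float)
    return lam * float(o.mean()) + (1.0 - lam) * tail_mean(o, beta)
```

    As β grows toward 1 the tail mean approaches the ordinary mean, and as β shrinks it approaches the worst-case outcome, which is exactly the spectrum between the conservative and the softer robust concepts in the abstract.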

  8. Systems Biology of Recombinant Protein Production in Bacillus megaterium

    NASA Astrophysics Data System (ADS)

    Biedendieck, Rebekka; Bunk, Boyke; Fürch, Tobias; Franco-Lara, Ezequiel; Jahn, Martina; Jahn, Dieter

    Over the last two decades the Gram-positive bacterium Bacillus megaterium has been systematically developed into a useful alternative protein production host. Multiple vector systems for high-yield intra- and extracellular protein production were constructed. Strong inducible promoters were combined with DNA sequences for optimised ribosome binding sites, various leader peptides for protein export, and N- as well as C-terminal affinity tags for affinity chromatographic purification of the desired protein. High cell density cultivation and recombinant protein production were successfully tested. For further systems-biology-based control and optimisation of the production process, the genomes of two B. megaterium strains were completely elucidated, DNA arrays designed, proteome, fluxome and metabolome analyses performed, and all data integrated using the bioinformatics platform MEGABAC. Now, solid theoretical and experimental bases for first modelling attempts of the production process are available.

  9. Heavy liquid metals: Research programs at PSI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takeda, Y.

    1996-06-01

    The author describes work at PSI on thermohydraulics, thermal shock, and material tests for mechanical properties. The presentation focuses on two main programs. (1) SINQ LBE target: The phase II study program for SINQ is planned and a new LBE loop is being constructed. The study has the following three objectives: (a) Pump study - design work on an electromagnetic pump to be integrated into the target. (b) Heat pipe performance test - the use of heat pipes as an additional component of the target cooling system is being considered, and it may be a way to further decouple the liquid metal and water coolant loops. (c) Mixed convection experiment - in order to find an optimal configuration of the additional flow guide for window cooling, mixed convection around the window is to be studied. The experiment will be started using water and then with LBE. (2) ESS mercury target: For the ESS target study, the following experimental studies are planned, some of which have been explored in trial experiments: (a) flow around the window - flow mapping around the hemi-cylindrical window will be made to optimise the flow channels and structures; (b) geometry optimisation for minimising a recirculation zone behind the edge of the flow separator; (c) flow-induced vibration and buckling for an optimised structure of the flow separator; and (d) gas-liquid two-phase flow, to be studied by first establishing a new experimental method of measuring various kinds of two-phase flow characteristics.

  10. Expecting excellence is key to receivable success in small and rural hospitals.

    PubMed

    Boucher, D

    1990-06-01

    Although small and rural hospitals may have trouble hiring experienced billing and collection personnel, they can succeed in keeping down receivable levels and receivable days by adopting two principles for the business office: expect excellence and inspect for excellence. These principles include setting goals for business office staff as well as listening to and implementing their ideas for more effective operations.

  11. The development and optimisation of a primary care-based whole system complex intervention (CARE Plus) for patients with multimorbidity living in areas of high socioeconomic deprivation

    PubMed Central

    O'Brien, Rosaleen; Fitzpatrick, Bridie; Higgins, Maria; Guthrie, Bruce; Watt, Graham; Wyke, Sally

    2016-01-01

    Objectives To develop and optimise a primary care-based complex intervention (CARE Plus) to enhance the quality of life of patients with multimorbidity in deprived areas. Methods Six co-design discussion groups involving 32 participants were held separately with multimorbid patients from deprived areas, voluntary organisations, general practitioners and practice nurses working in deprived areas. This was followed by piloting in two practices and further optimisation based on interviews with 11 general practitioners, 2 practice nurses and 6 participating multimorbid patients. Results Participants endorsed the need for longer consultations, relational continuity and a holistic approach. All felt that training and support of the health care staff was important. Most participants welcomed the idea of additional self-management support, though some practitioners were dubious about whether patients would use it. The pilot study led to changes including a revised care plan, the inclusion of mindfulness-based stress reduction techniques in the support of practitioners and patients, and the streamlining of the written self-management support material for patients. Discussion We have co-designed and optimised an augmented primary care intervention involving a whole-system approach to enhance quality of life in multimorbid patients living in deprived areas. CARE Plus will next be tested in a phase 2 cluster randomised controlled trial. PMID:27068113

  12. BluePyOpt: Leveraging Open Source Software and Cloud Infrastructure to Optimise Model Parameters in Neuroscience.

    PubMed

    Van Geit, Werner; Gevaert, Michael; Chindemi, Giuseppe; Rössert, Christian; Courcol, Jean-Denis; Muller, Eilif B; Schürmann, Felix; Segev, Idan; Markram, Henry

    2016-01-01

    At many scales in neuroscience, appropriate mathematical models take the form of complex dynamical systems. Parameterizing such models to conform to the multitude of available experimental constraints is a global non-linear optimisation problem with a complex fitness landscape, requiring numerical techniques to find suitable approximate solutions. Stochastic optimisation approaches, such as evolutionary algorithms, have been shown to be effective, but often the setting up of such optimisations and the choice of a specific search algorithm and its parameters are non-trivial, requiring domain-specific expertise. Here we describe BluePyOpt, a Python package targeted at the broad neuroscience community to simplify this task. BluePyOpt is an extensible framework for data-driven model parameter optimisation that wraps and standardizes several existing open-source tools. It simplifies the task of creating and sharing these optimisations, and the associated techniques and knowledge. This is achieved by abstracting the optimisation and evaluation tasks into various reusable and flexible discrete elements according to established best practices. Further, BluePyOpt provides methods for setting up both small- and large-scale optimisations on a variety of platforms, ranging from laptops to Linux clusters and cloud-based compute infrastructures. The versatility of the BluePyOpt framework is demonstrated by working through three representative neuroscience-specific use cases.
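
    The basic loop such frameworks abstract (parameters to model output to feature errors to fitness) can be sketched without the package itself. The code below is a generic (1+λ) evolution strategy on a toy exponential-decay "model" with two invented parameters; it is not the BluePyOpt API.

```python
import numpy as np

rng = np.random.default_rng(1)

def model_features(params):
    # Toy "model": exponential decay; the features are its values at two
    # sample times (both the model and the features are invented here).
    amplitude, tau = params
    times = np.array([1.0, 3.0])
    return amplitude * np.exp(-times / tau)

TARGET = model_features(np.array([2.0, 1.5]))  # features the fit must match

def score(params):
    # fitness = summed squared feature errors, as in data-driven fitting
    return float(np.sum((model_features(params) - TARGET) ** 2))

def one_plus_lambda_es(x0, sigma=0.3, lam=8, n_gen=200, decay=0.985):
    """(1+lambda) evolution strategy: the parent survives unless one of its
    mutated offspring scores better; the mutation step shrinks over time."""
    best = np.asarray(x0, dtype=float)
    best_score = score(best)
    for _ in range(n_gen):
        offspring = best + rng.normal(0.0, sigma, (lam, best.size))
        offspring[:, 1] = np.clip(offspring[:, 1], 0.1, None)  # keep tau > 0
        scores = np.array([score(o) for o in offspring])
        if scores.min() < best_score:
            best, best_score = offspring[scores.argmin()], float(scores.min())
        sigma *= decay
    return best, best_score

fitted, err = one_plus_lambda_es([1.0, 1.0])
```

    Everything problem-specific lives in `model_features` and `score`; in a real electrophysiology fit those would run a neuron simulation and compare extracted features to experimental constraints, which is exactly the part BluePyOpt standardises.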

  13. Basis for the development of sustainable optimisation indicators for activated sludge wastewater treatment plants in the Republic of Ireland.

    PubMed

    Gordon, G T; McCann, B P

    2015-01-01

    This paper describes the basis of a stakeholder-based sustainable optimisation indicator (SOI) system to be developed for small-to-medium sized activated sludge (AS) wastewater treatment plants (WwTPs) in the Republic of Ireland (ROI). Key technical publications relating to best practice plant operation, performance audits and optimisation, and indicator and benchmarking systems for wastewater services are identified. Optimisation studies were developed at a number of Irish AS WwTPs and key findings are presented. A national AS WwTP manager/operator survey was carried out to verify the applied operational findings and identify the key operator stakeholder requirements for this proposed SOI system. It was found that most plants require more consistent operational data-based decision-making, monitoring and communication structures to facilitate optimised, sustainable and continuous performance improvement. The applied optimisation and stakeholder consultation phases form the basis of the proposed stakeholder-based SOI system. This system will allow for continuous monitoring and rating of plant performance, facilitate optimised operation and encourage the prioritisation of performance improvement through tracking key operational metrics. Plant optimisation has become a major focus due to the transfer of all ROI water services to a national water utility from individual local authorities and the implementation of the EU Water Framework Directive.

  14. Development and optimisation of atorvastatin calcium loaded self-nanoemulsifying drug delivery system (SNEDDS) for enhancing oral bioavailability: in vitro and in vivo evaluation.

    PubMed

    Kassem, Abdulsalam M; Ibrahim, Hany M; Samy, Ahmed M

    2017-05-01

    The objective of this study was to develop and optimise a self-nanoemulsifying drug delivery system (SNEDDS) of atorvastatin calcium (ATC) to improve its dissolution rate and, ultimately, its oral bioavailability. Ternary phase diagrams were constructed on the basis of solubility and emulsification studies. The composition of ATC-SNEDDS was optimised using a Box-Behnken design, and the optimised ATC-SNEDDS was characterised for various physicochemical properties. Pharmacokinetic, pharmacodynamic and histological studies were performed in rats. The optimised ATC-SNEDDS gave a droplet size of 5.66 nm, a zeta potential of -19.52 mV and a t90 of 5.43 min, and completely released ATC within 30 min irrespective of the pH of the medium. The area under the curve for the optimised ATC-SNEDDS in rats was 2.34-fold higher than for an ATC suspension. Pharmacodynamic studies revealed a significant reduction in serum lipids of rats with fatty liver, and photomicrographs showed improvement in hepatocyte structure. In this study, we confirmed that ATC-SNEDDS is a promising approach for improving the oral bioavailability of ATC.

  15. A domain specific language for performance portable molecular dynamics algorithms

    NASA Astrophysics Data System (ADS)

    Saunders, William Robert; Grant, James; Müller, Eike Hermann

    2018-03-01

    Developers of Molecular Dynamics (MD) codes face significant challenges when adapting existing simulation packages to new hardware. In a continuously diversifying hardware landscape it becomes increasingly difficult for scientists to be experts both in their own domain (physics/chemistry/biology) and specialists in the low level parallelisation and optimisation of their codes. To address this challenge, we describe a "Separation of Concerns" approach for the development of parallel and optimised MD codes: the science specialist writes code at a high abstraction level in a domain specific language (DSL), which is then translated into efficient computer code by a scientific programmer. In a related context, an abstraction for the solution of partial differential equations with grid based methods has recently been implemented in the (Py)OP2 library. Inspired by this approach, we develop a Python code generation system for molecular dynamics simulations on different parallel architectures, including massively parallel distributed memory systems and GPUs. We demonstrate the efficiency of the auto-generated code by studying its performance and scalability on different hardware and compare it to other state-of-the-art simulation packages. With growing data volumes the extraction of physically meaningful information from the simulation becomes increasingly challenging and requires equally efficient implementations. A particular advantage of our approach is the easy expression of such analysis algorithms. We consider two popular methods for deducing the crystalline structure of a material from the local environment of each atom, show how they can be expressed in our abstraction and implement them in the code generation framework.
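
    The "Separation of Concerns" split can be sketched in a few lines of Python (hypothetical names, not the authors' framework): the domain scientist writes only the per-pair kernel, while the framework owns the pair loop that a code-generation backend would emit as optimised parallel code.

```python
# The scientist supplies only the per-pair kernel; the framework below
# stands in for the code generator that would emit optimised loops.
def lennard_jones(r2, eps=1.0, sigma=1.0):
    # Lennard-Jones pair energy as a function of squared distance r2.
    sr6 = (sigma * sigma / r2) ** 3
    return 4.0 * eps * (sr6 * sr6 - sr6)

def pairwise_energy(positions, kernel):
    # Naive O(N^2) all-pairs loop; a real backend would generate
    # cell-list, MPI or GPU variants of this same abstraction.
    total = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            r2 = sum((positions[i][k] - positions[j][k]) ** 2 for k in range(3))
            total += kernel(r2)
    return total

atoms = [(0.0, 0.0, 0.0), (1.122462, 0.0, 0.0)]  # near the LJ minimum
print(pairwise_energy(atoms, lennard_jones))
```

    Swapping the naive loop for a cell-list, distributed-memory or GPU implementation changes nothing in the kernel, which is the portability argument behind the DSL approach.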

  16. Optimisation modelling to assess cost of dietary improvement in remote Aboriginal Australia.

    PubMed

    Brimblecombe, Julie; Ferguson, Megan; Liberato, Selma C; O'Dea, Kerin; Riley, Malcolm

    2013-01-01

    The cost and dietary choices required to fulfil nationally defined nutrient recommendations need investigation, particularly for disadvantaged populations. We used optimisation modelling to examine the dietary change required to achieve nutrient requirements at minimum cost for an Aboriginal population in remote Australia, using minimally processed whole foods where possible. A twelve-month cross-section of population-level purchased food, food price and nutrient content data was used as the baseline. Relative amounts from 34 food group categories were varied to achieve specific energy and nutrient density goals at minimum cost while meeting model constraints intended to minimise deviation from the purchased diet. Simultaneous achievement of all nutrient goals was not feasible. The two most successful models (A and B) met all nutrient targets except sodium (146.2% and 148.9% of the respective target) and saturated fat (12.0% and 11.7% of energy). Model A was achieved at 3.2% lower cost than the baseline diet (which cost approximately AUD$13.01/person/day) and Model B at 7.8% lower cost but with a 4.4% reduction in energy. Both models required very large reductions in sugar-sweetened beverages (-90%) and refined cereals (-90%) and an approximately four-fold increase in vegetables, fruit, dairy foods, eggs, fish and seafood, and wholegrain cereals. This modelling approach suggested population-level dietary recommendations at minimal cost based on the baseline purchased diet. Large shifts in diet in remote Aboriginal Australian populations are needed to achieve national nutrient targets, and the modelling approach used was not able to meet all nutrient targets at less than current food expenditure.
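
    The nutrient-constrained, cost-minimising diet problem described here is a linear programme. A dependency-free toy version shows the structure; the foods, prices and targets below are invented, not the study's data.

```python
# Toy diet optimisation: choose amounts (in 100 g units) of a few food
# groups to meet nutrient floors at minimum cost. All numbers invented.
FOODS = {
    #             cost($)  energy(kJ)  protein(g)   per 100 g unit
    "wholegrain": (0.30,   1500,       12),
    "vegetables": (0.50,    120,        2),
    "fish":       (1.20,    600,       20),
}
ENERGY_MIN, PROTEIN_MIN = 9000, 60  # illustrative daily targets

def cost(amounts):
    return sum(FOODS[f][0] * a for f, a in amounts.items())

def feasible(amounts):
    energy = sum(FOODS[f][1] * a for f, a in amounts.items())
    protein = sum(FOODS[f][2] * a for f, a in amounts.items())
    return energy >= ENERGY_MIN and protein >= PROTEIN_MIN

def cheapest_diet(step=0.5, max_units=10.0):
    # Exhaustive grid search; a real study would use an LP solver, but
    # the tiny grid keeps this sketch dependency-free.
    grid = [i * step for i in range(int(max_units / step) + 1)]
    best = None
    for w in grid:
        for v in grid:
            for fi in grid:
                amounts = {"wholegrain": w, "vegetables": v, "fish": fi}
                if feasible(amounts) and (best is None or cost(amounts) < best[0]):
                    best = (cost(amounts), amounts)
    return best

best_cost, best_diet = cheapest_diet()
print(round(best_cost, 2), best_diet)
```

    The study's models add many more food groups and constraints that limit deviation from the observed purchased diet, but the objective (minimum cost subject to nutrient floors) has the same shape.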

  17. Assessing intern handover processes.

    PubMed

    Habicht, Robert; Block, Lauren; Silva, Kathryn Novello; Oliver, Nora; Wu, Albert; Feldman, Leonard

    2016-06-01

    New standards for resident work hours set in 2011 changed the landscape of patient care in teaching hospitals and created new challenges for US residency training programmes. One such challenge was a dramatic increase in the number of patient handovers performed by residents. As a result, there is a renewed focus for clinical teachers on developing educational strategies to optimise the patient handover process and improve the quality and safety of patient care. To investigate current gaps in resident handovers, we examined the handover processes performed by medicine interns at two academic medical centres in Baltimore, Maryland, USA. We used trained observers to collect data on whether handovers were conducted face to face, with questions asked, in private locations, with written documentation, and without distractions or interruptions. Results were analysed using chi-square tests, adjusted for clustering at the observer and intern levels. Interns conducted handovers face to face (99.5%), asked questions (85.3%), used private locations (91%), included written handover documentation (95.8%) and avoided distractions most of the time (87.7%); however, interruptions were pervasive, occurring 41.3% of the time. Exploring such gaps at the individual programme level is a critical first step in developing effective teaching strategies to optimise handovers in residency. © 2015 John Wiley & Sons Ltd.

  18. Optical cylinder designs to increase the field of vision in the osteo-odonto-keratoprosthesis.

    PubMed

    Hull, C C; Liu, C S; Sciscio, A; Eleftheriadis, H; Herold, J

    2000-12-01

    The single optical cylinders used in the osteo-odonto-keratoprosthesis (OOKP) are known to produce very small visual fields. Values of 40 degrees are typically quoted. The purpose of this paper is to present designs for new optical cylinders that significantly increase the field of view and therefore improve the visual rehabilitation of patients having an OOKP. Computer ray-tracing techniques were used to design and analyse improved one- and two-piece optical cylinders made from polymethyl methacrylate. All designs were required to have a potential visual acuity of 6/6 before consideration was given to the visual field and optimising off-axis image quality. Aspheric surfaces were used where this significantly improved off-axis image quality. Single optical cylinders, with increased posterior cylinder (intraocular) diameters, gave an increase in the theoretical visual field of 18% (from 76 degrees to 90 degrees) over current designs. Two-piece designs based on an inverted telephoto principle gave theoretical field angles over 120 degrees. Aspheric surfaces were shown to improve the off-axis image quality while maintaining a potential visual acuity of at least 6/6. This may well increase the measured visual field by improving the retinal illuminance off-axis. Results demonstrate that it is possible to significantly increase the theoretical maximum visual field through OOKP optical cylinders. Such designs will improve the visual rehabilitation of patients undergoing this procedure.

  19. Optimal Control for Quantum Driving of Two-Level Systems

    NASA Astrophysics Data System (ADS)

    Qi, Xiao-Qiu

    2018-01-01

    In this paper, the optimal quantum control of two-level systems is studied by the decompositions of SU(2). Using the Pontryagin maximum principle, the minimum time of quantum control is analyzed in detail. The solution scheme of the optimal control function is given in the general case. Finally, two specific cases, which can be applied in many quantum systems, are used to illustrate the scheme, while the corresponding optimal control functions are obtained.
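
    A concrete instance of the problem: for a resonant drive of bounded amplitude Ω, with rotating-frame Hamiltonian H = (Ω/2)σx, the time-optimal transfer |0⟩ → |1⟩ is a π-pulse of duration T = π/Ω. The short check below (an illustration, not the paper's derivation) propagates the state in small exact steps and confirms complete population transfer at T.

```python
import math

OMEGA = 1.0               # Rabi frequency: the bounded control amplitude
T = math.pi / OMEGA       # pi-pulse duration, the minimum transfer time
STEPS = 1000
half = OMEGA * (T / STEPS) / 2

c0, c1 = 1 + 0j, 0 + 0j   # amplitudes of |0> and |1>, starting in |0>
for _ in range(STEPS):
    # Exact one-step propagator of H = (OMEGA/2) sigma_x.
    c0, c1 = (math.cos(half) * c0 - 1j * math.sin(half) * c1,
              math.cos(half) * c1 - 1j * math.sin(half) * c0)

print(abs(c1) ** 2)       # fraction of population transferred to |1>
```

    Any shorter pulse at the same bounded amplitude leaves the transfer incomplete, which is the sense in which the π-pulse is time-optimal for this simple control problem.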

  20. Application of Three Existing Stope Boundary Optimisation Methods in an Operating Underground Mine

    NASA Astrophysics Data System (ADS)

    Erdogan, Gamze; Yavuz, Mahmut

    2017-12-01

    The underground mine planning and design optimisation process has received little attention because of the complexity and variability of problems in underground mines. Although a number of optimisation studies and software tools are available, and some have been implemented effectively to determine ultimate pit limits in open-pit mines, there is still a lack of studies on the optimisation of ultimate stope boundaries in underground mines. The approaches proposed for this purpose aim to maximise economic profit by selecting the best possible layout under operational, technical and physical constraints. In this paper, three existing heuristic techniques, the Floating Stope Algorithm, the Maximum Value Algorithm and the Mineable Shape Optimiser (MSO), are examined for optimisation of stope layout in a case study. Each technique is assessed in terms of applicability, algorithm capabilities and limitations, considering the challenges of underground mine planning. Finally, the results are evaluated and compared.

  1. Design Optimisation of a Magnetic Field Based Soft Tactile Sensor

    PubMed Central

    Raske, Nicholas; Kow, Junwai; Alazmani, Ali; Ghajari, Mazdak; Culmer, Peter; Hewson, Robert

    2017-01-01

    This paper investigates the design optimisation of a magnetic field based soft tactile sensor, comprised of a magnet and Hall effect module separated by an elastomer. The aim was to minimise sensitivity of the output force with respect to the input magnetic field; this was achieved by varying the geometry and material properties. Finite element simulations determined the magnetic field and structural behaviour under load. Genetic programming produced phenomenological expressions describing these responses. Optimisation studies constrained by a measurable force and stable loading conditions were conducted; these produced Pareto sets of designs from which the optimal sensor characteristics were selected. The optimisation demonstrated a compromise between sensitivity and the measurable force; a fabricated version of the optimised sensor validated the improvements made using this methodology. The approach presented can be applied in general for optimising soft tactile sensor designs over a range of applications and sensing modes. PMID:29099787

  2. Dietary changes needed to reach nutritional adequacy without increasing diet cost according to income: An analysis among French adults.

    PubMed

    Maillot, Matthieu; Vieux, Florent; Delaere, Fabien; Lluch, Anne; Darmon, Nicole

    2017-01-01

    To explore the dietary changes needed to achieve nutritional adequacy across income levels at constant energy and diet cost. Individual diet modelling was used to design iso-caloric, nutritionally adequate optimised diets for each observed diet in a sample of adult normo-reporters aged ≥20 years (n = 1,719) from the Individual and National Dietary Survey (INCA2), 2006-2007. Diet cost was estimated from mean national food prices (2006-2007). A first set of free-cost models explored the impact of optimisation on the variation of diet cost. A second set of iso-cost models explored the dietary changes induced by the optimisation with cost set equal to the observed one. Analyses of dietary changes were conducted by income quintiles, adjusting for energy intake, sociodemographic and socioeconomic variables, and smoking status. The cost of observed diets increased with increasing income quintiles. In free-cost models, the optimisation increased diet cost on average (+0.22 ± 1.03 euros/d) and within each income quintile, with no significant difference between quintiles, but with systematic increases for observed costs lower than 3.85 euros/d. In iso-cost models, it was possible to design nutritionally adequate diets whatever the initial observed cost. On average, the optimisation at iso-cost increased fruits and vegetables (+171 g/day), starchy foods (+121 g/d), water and beverages (+91 g/d), and dairy products (+20 g/d), and decreased the other food groups (e.g. mixed dishes and salted snacks), leading to increased total diet weight (+300 g/d). Those changes were mostly similar across income quintiles, but lower-income individuals needed to introduce significantly more fruit and vegetables than higher-income ones. In France, the dietary changes needed to reach nutritional adequacy without increasing cost are similar regardless of income, but may be more difficult to implement when the budget for food is lower than 3.85 euros/d.

  3. Optimal type 2 diabetes mellitus management: the randomised controlled OPTIMISE benchmarking study: baseline results from six European countries.

    PubMed

    Hermans, Michel P; Brotons, Carlos; Elisaf, Moses; Michel, Georges; Muls, Erik; Nobels, Frank

    2013-12-01

    Micro- and macrovascular complications of type 2 diabetes have an adverse impact on survival, quality of life and healthcare costs. The OPTIMISE (OPtimal Type 2 dIabetes Management Including benchmarking and Standard trEatment) trial comparing physicians' individual performances with a peer group evaluates the hypothesis that benchmarking, using assessments of change in three critical quality indicators of vascular risk: glycated haemoglobin (HbA1c), low-density lipoprotein-cholesterol (LDL-C) and systolic blood pressure (SBP), may improve quality of care in type 2 diabetes in the primary care setting. This was a randomised, controlled study of 3980 patients with type 2 diabetes. Six European countries participated in the OPTIMISE study (NCT00681850). Quality of care was assessed by the percentage of patients achieving pre-set targets for the three critical quality indicators over 12 months. Physicians were randomly assigned to receive either benchmarked or non-benchmarked feedback. All physicians received feedback on six of their patients' modifiable outcome indicators (HbA1c, fasting glycaemia, total cholesterol, high-density lipoprotein-cholesterol (HDL-C), LDL-C and triglycerides). Physicians in the benchmarking group additionally received information on levels of control achieved for the three critical quality indicators compared with colleagues. At baseline, the percentage of evaluable patients (N = 3980) achieving pre-set targets was 51.2% (HbA1c; n = 2028/3964); 34.9% (LDL-C; n = 1350/3865); 27.3% (systolic blood pressure; n = 911/3337). OPTIMISE confirms that target achievement in the primary care setting is suboptimal for all three critical quality indicators. This represents an unmet but modifiable need to revisit the mechanisms and management of improving care in type 2 diabetes. OPTIMISE will help to assess whether benchmarking is a useful clinical tool for improving outcomes in type 2 diabetes.

  4. Response surface optimization of extraction protocols to obtain phenolic rich antioxidant from sea buckthorn and their potential application into model meat system.

    PubMed

    Wagh, Rajesh V; Chatli, Manish K

    2017-05-01

    In the present study, processing parameters for the extraction of a phenolic-rich sea buckthorn seed extract (SBTE) were optimised using response surface methodology, and the extract was assessed for in vitro efficacy (total phenolic content and ABTS, DPPH and SASA activities). The optimised model indicated MeOH as solvent at a 60% concentration level, with a reaction time of 20 min and an extraction temperature of 55 °C, for the highest yield and total phenolic content. The efficacy of different concentrations of the obtained SBTE was evaluated in raw ground pork as a model meat system on the basis of various physico-chemical, microbiological and sensory quality characteristics. Addition of 0.3% SBTE significantly reduced lipid peroxidation (PV, TBARS and FFA) and improved instrumental colour (L*, a*, b*) attributes of raw ground pork during refrigerated storage of 9 days. The results indicate that SBTE at the 0.3% level can improve oxidative stability and microbial and sensory quality attributes in the meat model system.

  5. Preparation and evaluation of a controlled drug release of repaglinide through matrix pellets: in vitro and in vivo studies.

    PubMed

    Tavakoli, Naser; Minaiyan, Mohsen; Tabbakhian, Majid; Pendar, Yaqub

    2014-01-01

    Repaglinide, an oral antidiabetic agent, has a rapid onset of action and a short half-life of approximately 1 h. Designing a controlled-release dosage form of the drug is required to maintain its therapeutic blood level and to eliminate its adverse effects, particularly hypoglycaemia. Repaglinide sustained-release matrix pellets consisting of Avicel, lactose and different polymers were prepared using an extrusion-spheronisation method. The effect of different formulation components on in vitro drug release was evaluated using a USP apparatus (paddle) for 12 h in phosphate buffer. The optimised formulation was orally administered to normal and STZ-induced diabetic rats. Most pellet formulations had acceptable physical properties with regard to size distribution, flowability and friability. Repaglinide pellets comprising Avicel 50%, lactose 47% and SLS 1% released 94% of their drug content after 12 h. The optimised formulation was able to decrease blood glucose levels in both normal and diabetic rats throughout 8-12 h.

  6. Disease activity-guided dose optimisation of adalimumab and etanercept is a cost-effective strategy compared with non-tapering tight control rheumatoid arthritis care: analyses of the DRESS study.

    PubMed

    Kievit, Wietske; van Herwaarden, Noortje; van den Hoogen, Frank Hj; van Vollenhoven, Ronald F; Bijlsma, Johannes Wj; van den Bemt, Bart Jf; van der Maas, Aatke; den Broeder, Alfons A

    2016-11-01

    A disease activity-guided dose optimisation strategy of adalimumab or etanercept (TNFi (tumour necrosis factor inhibitors)) has been shown to be non-inferior in maintaining disease control in patients with rheumatoid arthritis (RA) compared with usual care. However, the cost-effectiveness of this strategy is still unknown. This is a preplanned cost-effectiveness analysis of the Dose REduction Strategy of Subcutaneous TNF inhibitors (DRESS) study, a randomised controlled, open-label, non-inferiority trial performed in two Dutch rheumatology outpatient clinics. Patients with low disease activity using TNF inhibitors were included. Total healthcare costs were measured, and quality adjusted life years (QALY) were based on EQ5D utility scores. Decremental cost-effectiveness analyses were performed using bootstrap analyses; incremental net monetary benefit (iNMB) was used to express cost-effectiveness. 180 patients were included; 121 were allocated to the dose optimisation strategy and 59 to control. The dose optimisation strategy resulted in a mean cost saving of -€12 280 (95 percentile -€10 502; -€14 104) per patient per 18 months. There is an 84% chance that the dose optimisation strategy results in a QALY loss; the mean QALY loss was -0.02 (-0.07 to 0.02). The decremental cost-effectiveness ratio (DCER) was €390 493 (€5 085 184; dominant) of savings per QALY lost. The mean iNMB was €10 467 (€6553-€14 037). In sensitivity analyses using 30% and 50% lower TNFi prices, the strategy remained cost-effective. Disease activity-guided dose optimisation of TNFi results in considerable cost savings while no relevant loss of quality of life was observed. When the minimal QALY loss is compensated with the upper limit of what society is willing to pay or accept in the Netherlands, the net savings are still high. NTR3216; Post-results. Published by the BMJ Publishing Group Limited.

  7. Optimisation of an oak chips-grape mix maceration process. Influence of chip dose and maceration time.

    PubMed

    Gordillo, Belén; Baca-Bocanegra, Berta; Rodriguez-Pulído, Francisco J; González-Miret, M Lourdes; García Estévez, Ignacio; Quijada-Morín, Natalia; Heredia, Francisco J; Escribano-Bailón, M Teresa

    2016-09-01

    Oak chips-related phenolics are able to modify the composition of red wine and modulate colour stability. In this study, the effect of two maceration techniques, traditional and an oak chips-grape mix process, on the phenolic composition and colour of Syrah red wines from a warm climate was studied. Two doses of oak chips (3 and 6 g/L) at two maceration times (5 and 10 days) during fermentation were considered. Changes in phenolic composition (HPLC-DAD-MS), copigmentation/polymerisation (spectrophotometry) and colour (tristimulus and differential colorimetry) were assessed by multivariate statistical techniques. The addition of oak chips at shorter maceration times enhanced phenolic extraction, colour and its stabilisation in comparison with traditional maceration. In contrast, increasing the chip dose with extended maceration time resulted in wines with lighter and less stable colour. These results open the possibility of optimising technological alternatives to traditional grape maceration to avoid the common loss of colour in wines from warm climates. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. BluePyOpt: Leveraging Open Source Software and Cloud Infrastructure to Optimise Model Parameters in Neuroscience

    PubMed Central

    Van Geit, Werner; Gevaert, Michael; Chindemi, Giuseppe; Rössert, Christian; Courcol, Jean-Denis; Muller, Eilif B.; Schürmann, Felix; Segev, Idan; Markram, Henry

    2016-01-01

    At many scales in neuroscience, appropriate mathematical models take the form of complex dynamical systems. Parameterizing such models to conform to the multitude of available experimental constraints is a global non-linear optimisation problem with a complex fitness landscape, requiring numerical techniques to find suitable approximate solutions. Stochastic optimisation approaches, such as evolutionary algorithms, have been shown to be effective, but often the setting up of such optimisations and the choice of a specific search algorithm and its parameters is non-trivial, requiring domain-specific expertise. Here we describe BluePyOpt, a Python package targeted at the broad neuroscience community to simplify this task. BluePyOpt is an extensible framework for data-driven model parameter optimisation that wraps and standardizes several existing open-source tools. It simplifies the task of creating and sharing these optimisations, and the associated techniques and knowledge. This is achieved by abstracting the optimisation and evaluation tasks into various reusable and flexible discrete elements according to established best-practices. Further, BluePyOpt provides methods for setting up both small- and large-scale optimisations on a variety of platforms, ranging from laptops to Linux clusters and cloud-based compute infrastructures. The versatility of the BluePyOpt framework is demonstrated by working through three representative neuroscience specific use cases. PMID:27375471

  9. Relationships between patient size, dose and image noise under automatic tube current modulation systems.

    PubMed

    Sookpeng, S; Martin, C J; Gentle, D J; Lopez-Gonzalez, M R

    2014-03-01

    Automatic tube current modulation (ATCM) systems are now used for the majority of CT scans. The principles of ATCM operation differ between CT scanners from different manufacturers: Toshiba and GE scanners base the current modulation on a target noise setting, while Philips and Siemens scanners use reference image and reference mAs concepts, respectively. Knowledge of the relationships between patient size, dose and image noise is important for CT patient dose optimisation. In this study, CT patient doses were surveyed for 14 CT scanners from four different manufacturers. The patient cross-sectional area, the tube current modulation and the image noise from the CT images were analysed using in-house software. The Toshiba and GE results showed that noise levels are relatively constant but tube currents depend on patient size. As a result, there is a wide range of tube current values across different patient sizes, and doses for large patients are significantly higher on these scanners. In contrast, on the Philips and Siemens scanners, tube currents are less dependent on patient size, the range in tube current is narrower, and the doses for larger patients are not as high, while image noise is more dependent on patient size.

  10. Prediction of the thermophysical properties of molten salt fast reactor fuel from first-principles

    NASA Astrophysics Data System (ADS)

    Gheribi, A. E.; Corradini, D.; Dewan, L.; Chartrand, P.; Simon, C.; Madden, P. A.; Salanne, M.

    2014-05-01

    Molten fluorides are known to show favourable thermophysical properties which make them good candidate coolants for nuclear fission reactors. Here we investigate the special case of mixtures of lithium fluoride and thorium fluoride, which act both as coolant and as fuel in the molten salt fast reactor concept. By using ab initio parameterised polarisable force fields, we show that it is possible to calculate the whole set of properties (density, thermal expansion, heat capacity, viscosity and thermal conductivity) which are necessary for assessing the heat transfer performance of the melt over the whole range of compositions and temperatures. We then deduce from our calculations several figures of merit which are important in helping the optimisation of the design of molten salt fast reactors.

  11. The impact behaviour of silk cocoons.

    PubMed

    Chen, Fujia; Hesselberg, Thomas; Porter, David; Vollrath, Fritz

    2013-07-15

    Silk cocoons, constructed by silkmoths (Lepidoptera), are protective structural composites. Some cocoons appear to have evolved towards structural and material optimisation in order to sustain impact strikes from predators and hinder parasite ingress. This study investigates the protective properties of silk cocoons with different morphologies by evaluating their impact resistance and damage tolerance. Finite element analysis was used to analyse empirical observations of the quasi-static impact response of the silk cocoons, and to evaluate the separate benefits of the structures and materials through the deformation and damage mechanism. We use design principles from composite engineering in order to understand the structure-property-function relationship of silkworm cocoons. Understanding the highly evolved survival strategies of the organisms building natural cocoons will hopefully lead to inspiration that in turn could lead to improved composite design.

  12. The path toward HEP High Performance Computing

    NASA Astrophysics Data System (ADS)

    Apostolakis, John; Brun, René; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-06-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts in optimising HEP code on vector and RISC architectures have yield limited results and recent studies have shown that, on modern architectures, it achieves a performance between 10% and 50% of the peak one. Although several successful attempts have been made to port selected codes on GPUs, no major HEP code suite has a "High Performance" implementation. With LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP cannot any longer neglect the less-than-optimal performance of its code and it has to try making the best usage of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 project. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a highperformance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelizing at event level, or with a much larger effort at track level. Apart the shareable data structures, this typically implies a multiplication factor in terms of memory consumption compared to the single threaded version, together with sub-optimal handling of event processing tails. Besides this, the low level instruction pipelining of modern processors cannot be used efficiently to speedup the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine grain parallel approach. 
The talk will review the current optimisation activities within the SFT group with a particular emphasis on the development perspectives towards a simulation framework able to profit best from the recent technology evolution in computing.
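    The basket-scheduling idea sketched above can be illustrated with a minimal, hypothetical Python analogue (none of these names come from Geant-V): particles from many events are regrouped into fixed-size vectors ("baskets") and dispatched to an arbitrary pool of workers, rather than devoting one thread to one whole event.

```python
# Minimal sketch (not the Geant-V code) of fine-grained basket scheduling:
# particles from any event are regrouped into fixed-size "baskets" that are
# dispatched to an arbitrary number of workers. All names are illustrative.
from concurrent.futures import ThreadPoolExecutor

BASKET_SIZE = 4

def make_baskets(particles, size=BASKET_SIZE):
    """Regroup particles (from any event) into fixed-size baskets."""
    return [particles[i:i + size] for i in range(0, len(particles), size)]

def transport_basket(basket):
    """Stand-in for a vectorised transport step over one basket."""
    return [p * 2 for p in basket]  # placeholder "physics"

def schedule(particles, workers=3):
    """Dispatch baskets to a worker pool; map() preserves basket order."""
    baskets = make_baskets(particles)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(transport_basket, baskets)
    return [p for basket in results for p in basket]

processed = schedule(list(range(10)))
```

    The point of the sketch is only the regrouping step: baskets decouple the unit of work from the event boundary, which is what allows the worker count to be chosen freely.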

  13. Modelling and optimising of physicochemical features of walnut-oil beverage emulsions by implementation of response surface methodology: effect of preparation conditions on emulsion stability.

    PubMed

    Homayoonfal, Mina; Khodaiyan, Faramarz; Mousavi, Mohammad

    2015-05-01

    The major purpose of this study was to apply response surface methodology to model and optimise processing conditions for the preparation of beverage emulsions with maximum emulsion stability and viscosity, and minimum particle size, turbidity loss rate, size index and peroxide value changes. A three-factor, five-level central composite design was conducted to estimate the effects of three independent variables: ultrasonic time (UT, 5-15 min), walnut-oil content (WO, 4-10% (w/w)) and Span 80 content (S80, 0.55-0.8% (w/w)). The results demonstrated that the empirical models were satisfactorily (p < 0.0001) fitted to the experimental data. Evaluation of responses by analysis of variance indicated high coefficient of determination values. The overall optimum preparation conditions were a UT of 14.630 min, a WO content of 8.238% (w/w) and an S80 content of 0.782% (w/w). Under this optimum region, responses were found to be 219.198, 99.184, 0.008, 0.008, 2.43 and 16.65 for particle size, emulsion stability, turbidity loss rate, size index, viscosity and peroxide value changes, respectively. Copyright © 2014 Elsevier Ltd. All rights reserved.
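    As a rough illustration of the response-surface step, this hedged sketch fits a second-order model to designed points for a single coded factor and locates its stationary point; the data are invented, and the actual study fitted three-factor models.

```python
# Hypothetical one-factor sketch of the response-surface idea: fit a
# second-order polynomial to designed experimental points and locate the
# stationary (optimum) point. The study itself used a three-factor CCD.
import numpy as np

# Coded factor levels (e.g. ultrasonic time) and a synthetic response
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = -(x - 0.5) ** 2 + 10.0          # invented response, optimum at x = 0.5

a, b, c = np.polyfit(x, y, 2)       # quadratic response-surface model
x_opt = -b / (2 * a)                # stationary point of the fitted model
```

    With three factors the same idea generalises to a quadratic surface with interaction terms, and the stationary point comes from setting all partial derivatives to zero.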

  14. Prediction of road traffic death rate using neural networks optimised by genetic algorithm.

    PubMed

    Jafari, Seyed Ali; Jahandideh, Sepideh; Jahandideh, Mina; Asadabadi, Ebrahim Barzegari

    2015-01-01

    Road traffic injuries (RTIs) are recognised as a major cause of public health problems at global, regional and national levels. Prediction of the road traffic death rate will therefore be helpful in its management. On this basis, we used an artificial neural network model, optimised through a genetic algorithm, to predict mortality. In this study, a five-fold cross-validation procedure on a data set containing a total of 178 countries was used to verify the performance of the models. The best-fit model was selected according to the root mean square error (RMSE). The genetic algorithm, which has not previously been applied to mortality prediction to this extent, showed high performance. The lowest RMSE obtained was 0.0808. Such satisfactory results can be attributed to the use of the genetic algorithm as a powerful optimiser that selects the best input feature set to be fed into the neural networks. Seven factors were identified with high accuracy as having the strongest effect on the road traffic mortality rate. The results show that our model is promising and may play a useful role in developing a better method for assessing the influence of road traffic mortality risk factors.
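    A hedged sketch of the genetic-algorithm feature-selection loop described above, with an ordinary least-squares model standing in for the neural network and synthetic data replacing the 178-country data set (all names and parameters are illustrative assumptions):

```python
# Sketch: GA selects a bitmask of input features; fitness is five-fold
# cross-validated RMSE (plus a small size penalty) of a linear stand-in model.
import random
import numpy as np

rng = np.random.default_rng(0)
random.seed(0)

X = rng.normal(size=(60, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + rng.normal(scale=0.1, size=60)

def cv_rmse(mask, folds=5):
    """Five-fold cross-validated RMSE of a linear model on chosen features."""
    cols = [i for i, bit in enumerate(mask) if bit]
    if not cols:
        return float("inf")
    idx = np.arange(len(y))
    errs = []
    for f in range(folds):
        test = idx[f::folds]
        train = np.setdiff1d(idx, test)
        w, *_ = np.linalg.lstsq(X[np.ix_(train, cols)], y[train], rcond=None)
        pred = X[np.ix_(test, cols)] @ w
        errs.append(np.mean((y[test] - pred) ** 2))
    return float(np.sqrt(np.mean(errs))) + 0.01 * len(cols)  # size penalty

def evolve(generations=20, pop_size=12, n_feat=5):
    pop = [tuple(random.randint(0, 1) for _ in range(n_feat))
           for _ in range(pop_size)]
    pop[0] = (1,) * n_feat            # full feature set as a baseline
    for _ in range(generations):
        pop.sort(key=cv_rmse)         # elitism: best half always survives
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_feat)
            child = list(a[:cut] + b[cut:])       # one-point crossover
            child[random.randrange(n_feat)] ^= random.random() < 0.2  # mutate
            children.append(tuple(child))
        pop = parents + children
    return min(pop, key=cv_rmse)

best = evolve()
```

    Because the synthetic target depends only on features 0 and 2, the elitist GA ends with a subset containing both, mirroring how the study's optimiser isolated seven effective factors.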

  15. Development and validation of a rapid capillary zone electrophoresis method for the determination of aconite alkaloids in aconite roots.

    PubMed

    Song, Jing-Zheng; Han, Quan-Bin; Qiao, Chun-Feng; But, Paul Pui-Hay; Xu, Hong-Xi

    2010-01-01

    Aconites, with aconite alkaloids as the major therapeutic and toxic components, are used as analgesics and for the treatment of rheumatic and neurological symptoms. Quantification of the aconite alkaloids is important for the quality control of aconite-containing drugs. The aim was to establish a validated capillary zone electrophoresis (CZE) method for the simultaneous determination of six major alkaloids, namely aconitine, mesaconitine, hypaconitine, benzoylaconine, benzoylmesaconine and benzoylhypaconine, in crude and processed aconite roots. The CZE method was optimised and validated using a stability-indicating method. The optimised running buffer was a mixture of 200 mM Tris, 150 mM perchloric acid and 40% 1,4-dioxane (pH 7.8), with the capillary thermostated at 25°C. Using the optimised method, the six aconite alkaloids were well separated. The established method showed good precision, accuracy and recovery. Contents of these alkaloids in crude and processed aconites were determined, and it was observed that the levels of individual alkaloids varied between samples. The developed CZE method was reliable for the quality control of aconites contained in herbal medicines. The method could also be used as an approach for toxicological studies.

  16. Formulation of multiparticulate systems as lyophilised orally disintegrating tablets.

    PubMed

    Alhusban, Farhan; Perrie, Yvonne; Mohammed, Afzal R

    2011-11-01

    The current study aimed to exploit the electrostatic associative interaction between carrageenan and gelatin to optimise a formulation of lyophilised orally disintegrating tablets (ODTs) suitable for multiparticulate delivery. A central composite face centred (CCF) design was applied to study the influence of formulation variables (gelatin, carrageenan and alanine concentrations) on the crucial responses of the formulation (disintegration time, hardness, viscosity and pH). The disintegration time and viscosity were controlled by the associative interaction between gelatin and carrageenan upon hydration, which forms a strong complex that increases the viscosity of the stock solution and forms tablets with higher resistance to disintegration in aqueous medium. Therefore, the levels of carrageenan, gelatin and their interaction in the formulation were the significant factors. In terms of hardness, increasing the gelatin and alanine concentrations was the most effective way to improve tablet hardness. Accordingly, optimum concentrations of these excipients were needed to find the best balance that fulfilled all formulation requirements. The revised model showed a high degree of predictability and optimisation reliability, and was therefore successful in developing an ODT formulation with optimised properties that was able to deliver enteric-coated multiparticulates of omeprazole without compromising their functionality. Copyright © 2011 Elsevier B.V. All rights reserved.

  17. Implementing large-scale programmes to optimise the health workforce in low- and middle-income settings: a multicountry case study synthesis.

    PubMed

    Gopinathan, Unni; Lewin, Simon; Glenton, Claire

    2014-12-01

    To identify factors affecting the implementation of large-scale programmes to optimise the health workforce in low- and middle-income countries, we conducted a multicountry case study synthesis. Eligible programmes were identified through consultation with experts and using Internet searches. Programmes were selected purposively to match the inclusion criteria. Programme documents were gathered via Google Scholar and PubMed and from key informants. The SURE Framework - a comprehensive list of factors that may influence the implementation of health system interventions - was used to organise the data. Thematic analysis was used to identify the key issues that emerged from the case studies. Programmes from Brazil, Ethiopia, India, Iran, Malawi, Venezuela and Zimbabwe were selected. Key system-level factors affecting the implementation of the programmes were related to health worker training and continuing education, management and programme support structures, the organisation and delivery of services, community participation, and the sociopolitical environment. Existing weaknesses in health systems may undermine the implementation of large-scale programmes to optimise the health workforce. Changes in the roles and responsibilities of cadres may also, in turn, have impacts throughout the health system. © 2014 John Wiley & Sons Ltd.

  18. Economic impact of optimising antiretroviral treatment in human immunodeficiency virus-infected adults with suppressed viral load in Spain, by implementing the grade A-1 evidence recommendations of the 2015 GESIDA/National AIDS Plan.

    PubMed

    Ribera, Esteban; Martínez-Sesmero, José Manuel; Sánchez-Rubio, Javier; Rubio, Rafael; Pasquau, Juan; Poveda, José Luis; Pérez-Mitru, Alejandro; Roldán, Celia; Hernández-Novoa, Beatriz

    2018-03-01

    The objective of this study was to estimate the economic impact associated with the optimisation of triple antiretroviral treatment (ART) in patients with undetectable viral load, according to the recommendations of the GeSIDA/PNS (2015) Consensus, and its applicability in Spanish clinical practice. A pharmacoeconomic model was developed based on data from a National Hospital Prescription Survey on ART (2014) and the A-I evidence recommendations for the optimisation of ART from the GeSIDA/PNS (2015) Consensus. The optimisation model took into account the willingness to optimise a particular regimen and other assumptions, and the results were validated by an expert panel in HIV infection (infectious disease specialists and hospital pharmacists). The analysis was conducted from the NHS perspective, considering the annual wholesale price and accounting for the deductions stated in RD-Law 8/2010 and VAT. The expert panel selected six optimisation strategies and estimated that 10,863 (13.4%) of the 80,859 patients in Spain currently on triple ART would be candidates to optimise their ART, leading to savings of €15.9M/year (2.4% of the total triple ART drug cost). The most feasible strategies (>40% of candidate patients, n=4,556) would be optimisations to ATV/r+3TC therapy. These would produce savings of between €653 and €4,797 per patient per year, depending on the baseline triple ART. Implementation of the main optimisation strategies recommended in the GeSIDA/PNS (2015) Consensus in Spanish clinical practice would lead to considerable savings, especially for those based on dual therapy with ATV/r+3TC, thus contributing to the control of pharmaceutical expenditure and NHS sustainability. Copyright © 2016 Elsevier España, S.L.U. and Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica. All rights reserved.

  19. Structure zone diagram and particle incorporation of nickel brush plated composite coatings

    PubMed Central

    Isern, L.; Impey, S.; Almond, H.; Clouser, S. J.; Endrino, J. L.

    2017-01-01

    This work studies the deposition of aluminium-incorporated nickel coatings by brush electroplating, focusing on the electroplating setup and processing parameters. The setup was optimised in order to increase the volume of particle incorporation. The optimised design focused on increasing the plating solution flow to avoid sedimentation, and as a result the particle transport experienced a three-fold increase when compared with the traditional setup. The influence of bath load, current density and the brush material used was investigated. Both current density and brush material have a significant impact on the morphology and composition of the coatings. Higher current densities and non-abrasive brushes produce rough, particle-rich samples. Different combinations of these two parameters influence the surface characteristics differently, as illustrated in a Structure Zone Diagram. Finally, surfaces featuring crevices and peaks incorporate between 3.5 and 20 times more particles than smoother coatings. The presence of such features has been quantified using average surface roughness Ra and Abbott-Firestone curves. The combination of optimised setup and rough surface increased the particle content of the composite to 28 at.%. PMID:28300159

  20. Speckle-based at-wavelength metrology of X-ray mirrors with super accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kashyap, Yogesh; Wang, Hongchang; Sawhney, Kawal, E-mail: kawal.sawhney@diamond.ac.uk

    2016-05-15

    X-ray active mirrors, such as bimorph and mechanically bendable mirrors, are increasingly being used on beamlines at modern synchrotron source facilities to generate either focused or “top-hat” beams. As well as optical tests in the metrology lab, it is becoming increasingly important to optimise and characterise active optics under actual beamline operating conditions. The recently developed X-ray speckle-based at-wavelength metrology technique has shown great potential. The technique has been established and further developed at the Diamond Light Source and is increasingly being used to optimise active mirrors. Details of the X-ray speckle-based at-wavelength metrology technique and an example of its applicability in characterising and optimising a micro-focusing bimorph X-ray mirror are presented. Importantly, an unprecedented angular sensitivity in the range of two nanoradians for measuring the slope error of an optical surface has been demonstrated. Such a high-precision metrology technique will be beneficial to manufacturers of polished mirrors and also in the optimisation of beam shaping during experiments.

  1. Structure zone diagram and particle incorporation of nickel brush plated composite coatings

    NASA Astrophysics Data System (ADS)

    Isern, L.; Impey, S.; Almond, H.; Clouser, S. J.; Endrino, J. L.

    2017-03-01

    This work studies the deposition of aluminium-incorporated nickel coatings by brush electroplating, focusing on the electroplating setup and processing parameters. The setup was optimised in order to increase the volume of particle incorporation. The optimised design focused on increasing the plating solution flow to avoid sedimentation, and as a result the particle transport experienced a three-fold increase when compared with the traditional setup. The influence of bath load, current density and the brush material used was investigated. Both current density and brush material have a significant impact on the morphology and composition of the coatings. Higher current densities and non-abrasive brushes produce rough, particle-rich samples. Different combinations of these two parameters influence the surface characteristics differently, as illustrated in a Structure Zone Diagram. Finally, surfaces featuring crevices and peaks incorporate between 3.5 and 20 times more particles than smoother coatings. The presence of such features has been quantified using average surface roughness Ra and Abbott-Firestone curves. The combination of optimised setup and rough surface increased the particle content of the composite to 28 at.%.

  2. Variability estimation of urban wastewater biodegradable fractions by respirometry.

    PubMed

    Lagarde, Fabienne; Tusseau-Vuillemin, Marie-Hélène; Lessard, Paul; Héduit, Alain; Dutrop, François; Mouchel, Jean-Marie

    2005-11-01

    This paper presents a methodology for assessing the variability of biodegradable chemical oxygen demand (COD) fractions in urban wastewaters. Thirteen raw wastewater samples from combined and separate sewers feeding the same plant were characterised, and two optimisation procedures were applied in order to evaluate the variability in biodegradable fractions and related kinetic parameters. Through an overall optimisation over all the samples, a unique kinetic parameter set was obtained with a three-substrate model including an adsorption stage. This method required powerful numerical treatment, but reduced the identifiability problem compared with the usual sample-to-sample optimisation. The results showed that the fractionation of samples collected in the combined sewer was much more variable (standard deviation of 70% of the mean values) than the fractionation of the separate sewer samples, and that the slowly biodegradable COD fraction was the most significant fraction (45% of the total COD on average). Because these samples were collected under various rain conditions, the standard deviations obtained here for the combined sewer biodegradable fractions could be used as a first estimation of the variability of this type of sewer system.

  3. Optimisation of flavour ester biosynthesis in an aqueous system of coconut cream and fusel oil catalysed by lipase.

    PubMed

    Sun, Jingcan; Yu, Bin; Curran, Philip; Liu, Shao-Quan

    2012-12-15

    Coconut cream and fusel oil, two low-cost natural substances, were used as starting materials for the biosynthesis of flavour-active octanoic acid esters (ethyl-, butyl-, isobutyl- and (iso)amyl octanoate) using the lipase Palatase as the biocatalyst. The Taguchi design method was used for the first time to optimise the biosynthesis of esters by a lipase in an aqueous system of coconut cream and fusel oil. Temperature, time and enzyme amount were found to be statistically significant factors, and the optimal conditions were determined to be as follows: temperature 30°C, fusel oil concentration 9% (v/w), reaction time 24 h, pH 6.2 and enzyme amount 0.26 g. Under the optimised conditions, a yield of 14.25 mg/g (based on cream weight) and a signal-to-noise (S/N) ratio of 23.07 dB were obtained. The results indicate that the Taguchi design method is an efficient and systematic approach to the optimisation of lipase-catalysed biological processes. Copyright © 2012 Elsevier Ltd. All rights reserved.
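    The reported S/N ratio follows the standard Taguchi "larger-is-better" formula; a minimal sketch, with invented replicate yields rather than the study's raw data:

```python
# Taguchi "larger-is-better" signal-to-noise ratio:
#   S/N = -10 * log10( mean(1 / y_i^2) ), in dB; higher is better.
import math

def sn_larger_is_better(values):
    """S/N ratio (dB) for a maximised response over replicate measurements."""
    return -10.0 * math.log10(sum(1.0 / v**2 for v in values) / len(values))

# Illustrative replicate yields (mg/g) near the reported optimum
sn = sn_larger_is_better([14.0, 14.5, 14.3])
```

    Replicates near 14.25 mg/g give an S/N around 23 dB, consistent in magnitude with the 23.07 dB reported, though the study's actual replicate values are not given in the abstract.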

  4. A catalyst layer optimisation approach using electrochemical impedance spectroscopy for PEM fuel cells operated with pyrolysed transition metal-N-C catalysts

    NASA Astrophysics Data System (ADS)

    Malko, Daniel; Lopes, Thiago; Ticianelli, Edson A.; Kucernak, Anthony

    2016-08-01

    The effect of the ionomer to carbon (I/C) ratio on the performance of single-cell polymer electrolyte fuel cells is investigated for three different types of non-precious-metal cathodic catalysts. Polarisation curves as well as impedance spectra are recorded at different potentials in the presence of argon or oxygen at the cathode and hydrogen at the anode. It is found that an optimised ionomer content is a key factor for improving the performance of the catalyst. Non-optimal ionomer loading can be assessed by two different factors from the impedance spectra. Hence, this observation could be used as a diagnostic element to determine the ideal ionomer content and distribution in newly developed catalyst electrodes. An electrode morphology based on the presence of an inhomogeneous resistance distribution within the porous structure is suggested to explain the observed phenomena. The effect of back-pressure and relative humidity on this feature is also investigated and supports the above hypothesis. We give a simple flowchart to aid optimisation of electrodes with the minimum number of trials.

  5. Structure zone diagram and particle incorporation of nickel brush plated composite coatings.

    PubMed

    Isern, L; Impey, S; Almond, H; Clouser, S J; Endrino, J L

    2017-03-16

    This work studies the deposition of aluminium-incorporated nickel coatings by brush electroplating, focusing on the electroplating setup and processing parameters. The setup was optimised in order to increase the volume of particle incorporation. The optimised design focused on increasing the plating solution flow to avoid sedimentation, and as a result the particle transport experienced a three-fold increase when compared with the traditional setup. The influence of bath load, current density and the brush material used was investigated. Both current density and brush material have a significant impact on the morphology and composition of the coatings. Higher current densities and non-abrasive brushes produce rough, particle-rich samples. Different combinations of these two parameters influence the surface characteristics differently, as illustrated in a Structure Zone Diagram. Finally, surfaces featuring crevices and peaks incorporate between 3.5 and 20 times more particles than smoother coatings. The presence of such features has been quantified using average surface roughness Ra and Abbott-Firestone curves. The combination of optimised setup and rough surface increased the particle content of the composite to 28 at.%.

  6. Career Paths and Organizational Development: Expanding Alliances.

    ERIC Educational Resources Information Center

    Bernes, K. B.; Magnusson, K. C.

    The Synergistic Model of Organizational Career Development is an attempt to combine best practice principles from two domains: organizational development and individual career planning. The model assumes three levels of intervention within an organization: philosophical, strategic, and practical. Interventions at any of the levels may be directed…

  7. Class-modelling in food analytical chemistry: Development, sampling, optimisation and validation issues - A tutorial.

    PubMed

    Oliveri, Paolo

    2017-08-22

    Qualitative data modelling is a fundamental branch of pattern recognition, with many applications in analytical chemistry, and embraces two main families: discriminant and class-modelling methods. The first strategy is appropriate when at least two classes are meaningfully defined in the problem under study, while the second strategy is the right choice when the focus is on a single class. For this reason, class-modelling methods are also referred to as one-class classifiers. Although, in the food analytical field, most of the issues would be properly addressed by class-modelling strategies, the use of such techniques is rather limited and, in many cases, discriminant methods are forcedly used for one-class problems, introducing a bias in the outcomes. Key aspects related to the development, optimisation and validation of suitable class models for the characterisation of food products are critically analysed and discussed. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Evolution in students' understanding of thermal physics with increasing complexity

    NASA Astrophysics Data System (ADS)

    Langbeheim, Elon; Safran, Samuel A.; Livne, Shelly; Yerushalmi, Edit

    2013-12-01

    We analyze the development in students’ understanding of fundamental principles in the context of learning a current interdisciplinary research topic—soft matter—that was adapted to the level of high school students. The topic was introduced in a program for interested 11th grade high school students majoring in chemistry and/or physics, in an off-school setting. Soft matter was presented in a gradual increase in the degree of complexity of the phenomena as well as in the level of the quantitative analysis. We describe the evolution in students’ use of fundamental thermodynamics principles to reason about phase separation—a phenomenon that is ubiquitous in soft matter. In particular, we examine the impact of the use of free energy analysis, a common approach in soft matter, on the understanding of the fundamental principles of thermodynamics. The study used diagnostic questions and classroom observations to gauge the student’s learning. In order to gain insight on the aspects that shape the understanding of the basic principles, we focus on the responses and explanations of two case-study students who represent two trends of evolution in conceptual understanding in the group. We analyze changes in the two case studies’ management of conceptual resources used in their analysis of phase separation, and suggest how their prior knowledge and epistemological framing (a combination of their personal tendencies and their prior exposure to different learning styles) affect their conceptual evolution. Finally, we propose strategies to improve the instruction of these concepts.

  9. The N-of-1 Clinical Trial: A Timely Research Opportunity in Homeopathy.

    PubMed

    Ulbrich-Zürni, Susanne; Teut, Michael; Roll, Stephanie; Mathie, Robert T

    2018-02-01

    The randomised controlled trial (RCT) is considered the 'gold standard' for establishing the efficacy or effectiveness of an intervention, but its data do not infer response in an individual patient. Individualised clinical care, a fundamental principle in complementary and alternative medicine (CAM), including homeopathy, seems well disposed in principle to being researched by a single-patient (N-of-1) study design. Guidelines for reporting N-of-1 trials have recently been developed. The aim was to overview the current status in the literature of the N-of-1 method and its application in medicine, including CAM, and to consider whether the N-of-1 trial design offers an opportunity for novel research in homeopathy. N-OF-1 TRIAL DESIGN: The N-of-1 trial applies the principles of the conventional crossover, blinded RCT design. The treatment under study and the comparator are repeated in a randomised order, with a suitable washout time, over a defined period. The N-of-1 design is constrained to use in chronic stable conditions, and to interventions that have quick onset and cessation of effect, with modest or negligible carryover. Outcome data can be aggregated and interpreted for the individual subject; they can also be pooled with data from several similar N-of-1 trials, enabling more generalisable conclusions. THE N-OF-1 TRIAL IN CAM: The typical individualisation of patient care can be accommodated in an N-of-1 study design if the patient and the specific therapeutic intervention are selected within the constraints of the method. Application of the N-of-1 method in CAM has been advocated but has, in practice, been mainly limited to a small number of studies in herbal and traditional Chinese medicine. THE N-OF-1 TRIAL IN HOMEOPATHY: Individualised homeopathy can be accommodated for investigation within the same methodological constraints; less in-depth homeopathic approaches to prescribing are also amenable to investigation using the N-of-1 method.
No such studies have been published. We identify three main targets in its ready applicability to homeopathy: (1) to optimise clinical care in an individual patient; (2) to investigate whether the outcomes of treatment using homeopathy differ from those of placebo; (3) to aggregate data from a series of N-of-1 trials to enable broader conclusions about a group of patients or an intervention. The N-of-1 trial design offers important new investigative possibilities in homeopathy and should be explored as a means to optimise individualised health care or to investigate the effectiveness of the homeopathic intervention compared with placebo in individual subjects. The Faculty of Homeopathy.
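    The crossover logic described under "N-of-1 trial design" can be sketched as a toy schedule generator; the arm labels, cycle count and washout handling are illustrative assumptions, not taken from any published protocol.

```python
# Toy N-of-1 schedule: each cycle is a randomised (treatment, comparator)
# pair, with a washout period between successive treatment periods.
import random

def n_of_1_schedule(cycles=3, seed=1):
    rng = random.Random(seed)
    schedule = []
    for c in range(cycles):
        pair = ["verum", "placebo"]
        rng.shuffle(pair)                 # blinded, randomised order per cycle
        for arm in pair:
            schedule.append((c + 1, arm))
            schedule.append((c + 1, "washout"))
    return schedule[:-1]                  # no washout after the final period

plan = n_of_1_schedule()
```

    Each cycle contributes exactly one verum and one placebo period in random order, which is what allows within-patient aggregation of outcomes across cycles.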

  10. Optimisation of a cheap and residential small-scale production of edible crickets with local by-products as an alternative protein-rich human food source in Ratanakiri Province, Cambodia.

    PubMed

    Caparros Megido, Rudy; Alabi, Taofic; Nieus, Clément; Blecker, Christophe; Danthine, Sabine; Bogaert, Jan; Haubruge, Éric; Francis, Frédéric

    2016-01-30

    The health status of the indigenous people of Ratanakiri Province, Cambodia, is significantly lower than that of the rest of the nation. The domestication and mass production of insects may represent a sustainable, cost-effective and high-quality alternative source of protein to traditional livestock. This study aimed to optimise a cheap and residential cricket breeding system based on unused wild resources. The development of crickets, Teleogryllus testaceus (Walker), under seven diets composed of taro aerial parts, young cassava leaves, young cashew leaves and brown rice flour (with or without banana slices), versus a traditionally used broiler feed diet, was studied. Cricket mortality was low in all diets except the two cashew-based diets. Total biomass was significantly higher under the broiler feed, as well as under the two diets containing a combination of cassava leaf powder and brown rice. Yet crickets fed the taro diet had the highest percentage of protein. Concerning the breeding system cost, units using cassava leaves were the cheapest. Diets based on cassava leaves therefore seem the most promising. Nevertheless, to produce crickets with a high body mass and a high protein level, new experiments must be devised in which cassava leaf maturity is adapted to fit the cricket growth stage. Moreover, to reduce the cost of the breeding units, handmade local products should be used instead of purchased components. © 2015 Society of Chemical Industry.

  11. On-farm animal welfare assessment in beef bulls: consistency over time of single measures and aggregated Welfare Quality(®) scores.

    PubMed

    Kirchner, M K; Schulze Westerath, H; Knierim, U; Tessitore, E; Cozzi, G; Winckler, C

    2014-03-01

    Consistency over time of (on-farm) animal welfare assessment systems forms part of their reliability: results of the assessment should be representative of the longer-term welfare state of the farm as long as the housing and management conditions have not changed considerably. This is especially important if assessments are to be used for certification purposes. The aim of the present study was to investigate the consistency over time of the Welfare Quality(®) (WQ(®)) assessment system for fattening cattle at the level of single measures, aggregated criterion and principle scores, and overall classification, across short-term (1 month) and longer-term (6 months) periods. We hypothesised that consistency over time of aggregated criterion and principle scores is higher than that of single measures. Consistency was also expected to be lower with longer intervals between assessments. Data were obtained using the WQ(®) protocol for fattening cattle during three visits (months 0, 1 and 7) on 63 beef farms in Austria, Germany and Italy. Only data from farms where no major changes in housing and management had taken place were considered for analysis. At the single-measure level, Spearman rank correlations between visits were >0.7 and variance was lower within farms than between farms for six and two of 19 measures after 1 month and 6 months, respectively. After aggregation of single measures into criterion scores, five and two of 10 criteria were found reliable after 1 month and 6 months, respectively. At the WQ(®) principle level, this was the case for three and one of four principles. Seventy-nine per cent and 75% of the farms were allocated to the same overall welfare category after 1 month and 6 months.
Possible reasons for a lack of consistency are seasonal effects or short-term fluctuations that occur under normal farm conditions, the low prevalence of clinical measures and a probably insufficient sample size, whereas poor inter-observer agreement leading to inflation of correlations can be ruled out. At the criterion and principle level, aggregation of information into scores appears to partly smooth out undirected variation at the single-measure level without losing sensitivity in terms of welfare evaluation. Reliable on-farm animal welfare assessments should therefore be based on repeated assessments. Further long-term studies are recommended to better understand the factors influencing consistency over time.
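    The Spearman rank correlation used as the consistency statistic above can be computed directly; a minimal sketch on toy two-visit farm scores (tied ranks are not handled, which is adequate for this illustration only):

```python
# Spearman rank correlation via the classic formula
#   rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)),
# where d_i is the rank difference of farm i between the two visits.
def rank(xs):
    """Simple 1-based ranks (no tie correction)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = pos + 1.0
    return r

def spearman(a, b):
    ra, rb = rank(a), rank(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1.0 - 6.0 * d2 / (n * (n**2 - 1))

# Toy welfare-measure scores for five farms at two visits
visit1 = [2.1, 3.4, 1.0, 4.2, 2.8]
visit2 = [2.0, 3.6, 1.2, 4.0, 3.1]
rho = spearman(visit1, visit2)
```

    Here the two visits rank the farms identically, so rho is exactly 1; the study's >0.7 threshold corresponds to strong but imperfect rank agreement between visits.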

  12. Optimisation of nano-silica modified self-compacting high-volume fly ash mortar

    NASA Astrophysics Data System (ADS)

    Achara, Bitrus Emmanuel; Mohammed, Bashar S.; Fadhil Nuruddin, Muhd

    2017-05-01

    The effects of nano-silica amount and superplasticizer (SP) dosage on the compressive strength, porosity and slump flow of high-volume fly ash self-consolidating mortar were investigated. A multiobjective optimisation technique using Design-Expert software was applied to obtain a solution, based on a desirability function, that simultaneously optimises the variables and the responses. A desirability value of 0.811 gave the optimised solution. The experimental and predicted results showed minimal errors in all the measured responses.
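    The desirability-function idea behind this kind of multiobjective optimisation can be sketched as follows, in the spirit of the Derringer approach: each response is mapped to a [0, 1] desirability and the scores are combined by a geometric mean. The response bounds and values below are illustrative assumptions, not the study's data.

```python
# Hedged sketch of a desirability function: map each response onto [0, 1]
# (linear ramps between illustrative bounds) and combine by geometric mean.
import math

def desirability_max(y, lo, hi):
    """Larger-is-better: 0 below lo, 1 above hi, linear in between."""
    return min(1.0, max(0.0, (y - lo) / (hi - lo)))

def desirability_min(y, lo, hi):
    """Smaller-is-better: 1 below lo, 0 above hi, linear in between."""
    return min(1.0, max(0.0, (hi - y) / (hi - lo)))

def overall(ds):
    """Geometric mean: any fully undesirable response zeroes the total."""
    return math.prod(ds) ** (1.0 / len(ds))

D = overall([
    desirability_max(52.0, 30.0, 60.0),    # compressive strength (MPa)
    desirability_min(12.0, 8.0, 20.0),     # porosity (%)
    desirability_max(680.0, 600.0, 750.0), # slump flow (mm)
])
```

    The optimiser then searches the factor space (here, nano-silica amount and SP dosage) for the settings that maximise this single combined score, which is how a figure such as 0.811 arises.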

  13. Overview of the JET results in support to ITER

    NASA Astrophysics Data System (ADS)

    Litaudon, X.; Abduallev, S.; Abhangi, M.; Abreu, P.; Afzal, M.; Aggarwal, K. M.; Ahlgren, T.; Ahn, J. H.; Aho-Mantila, L.; Aiba, N.; Airila, M.; Albanese, R.; Aldred, V.; Alegre, D.; Alessi, E.; Aleynikov, P.; Alfier, A.; Alkseev, A.; Allinson, M.; Alper, B.; Alves, E.; Ambrosino, G.; Ambrosino, R.; Amicucci, L.; Amosov, V.; Andersson Sundén, E.; Angelone, M.; Anghel, M.; Angioni, C.; Appel, L.; Appelbee, C.; Arena, P.; Ariola, M.; Arnichand, H.; Arshad, S.; Ash, A.; Ashikawa, N.; Aslanyan, V.; Asunta, O.; Auriemma, F.; Austin, Y.; Avotina, L.; Axton, M. D.; Ayres, C.; Bacharis, M.; Baciero, A.; Baião, D.; Bailey, S.; Baker, A.; Balboa, I.; Balden, M.; Balshaw, N.; Bament, R.; Banks, J. W.; Baranov, Y. F.; Barnard, M. A.; Barnes, D.; Barnes, M.; Barnsley, R.; Baron Wiechec, A.; Barrera Orte, L.; Baruzzo, M.; Basiuk, V.; Bassan, M.; Bastow, R.; Batista, A.; Batistoni, P.; Baughan, R.; Bauvir, B.; Baylor, L.; Bazylev, B.; Beal, J.; Beaumont, P. S.; Beckers, M.; Beckett, B.; Becoulet, A.; Bekris, N.; Beldishevski, M.; Bell, K.; Belli, F.; Bellinger, M.; Belonohy, É.; Ben Ayed, N.; Benterman, N. A.; Bergsåker, H.; Bernardo, J.; Bernert, M.; Berry, M.; Bertalot, L.; Besliu, C.; Beurskens, M.; Bieg, B.; Bielecki, J.; Biewer, T.; Bigi, M.; Bílková, P.; Binda, F.; Bisoffi, A.; Bizarro, J. P. S.; Björkas, C.; Blackburn, J.; Blackman, K.; Blackman, T. R.; Blanchard, P.; Blatchford, P.; Bobkov, V.; Boboc, A.; Bodnár, G.; Bogar, O.; Bolshakova, I.; Bolzonella, T.; Bonanomi, N.; Bonelli, F.; Boom, J.; Booth, J.; Borba, D.; Borodin, D.; Borodkina, I.; Botrugno, A.; Bottereau, C.; Boulting, P.; Bourdelle, C.; Bowden, M.; Bower, C.; Bowman, C.; Boyce, T.; Boyd, C.; Boyer, H. J.; Bradshaw, J. M. A.; Braic, V.; Bravanec, R.; Breizman, B.; Bremond, S.; Brennan, P. D.; Breton, S.; Brett, A.; Brezinsek, S.; Bright, M. D. J.; Brix, M.; Broeckx, W.; Brombin, M.; Brosławski, A.; Brown, D. P. D.; Brown, M.; Bruno, E.; Bucalossi, J.; Buch, J.; Buchanan, J.; Buckley, M. 
A.; Budny, R.; Bufferand, H.; Bulman, M.; Bulmer, N.; Bunting, P.; Buratti, P.; Burckhart, A.; Buscarino, A.; Busse, A.; Butler, N. K.; Bykov, I.; Byrne, J.; Cahyna, P.; Calabrò, G.; Calvo, I.; Camenen, Y.; Camp, P.; Campling, D. C.; Cane, J.; Cannas, B.; Capel, A. J.; Card, P. J.; Cardinali, A.; Carman, P.; Carr, M.; Carralero, D.; Carraro, L.; Carvalho, B. B.; Carvalho, I.; Carvalho, P.; Casson, F. J.; Castaldo, C.; Catarino, N.; Caumont, J.; Causa, F.; Cavazzana, R.; Cave-Ayland, K.; Cavinato, M.; Cecconello, M.; Ceccuzzi, S.; Cecil, E.; Cenedese, A.; Cesario, R.; Challis, C. D.; Chandler, M.; Chandra, D.; Chang, C. S.; Chankin, A.; Chapman, I. T.; Chapman, S. C.; Chernyshova, M.; Chitarin, G.; Ciraolo, G.; Ciric, D.; Citrin, J.; Clairet, F.; Clark, E.; Clark, M.; Clarkson, R.; Clatworthy, D.; Clements, C.; Cleverly, M.; Coad, J. P.; Coates, P. A.; Cobalt, A.; Coccorese, V.; Cocilovo, V.; Coda, S.; Coelho, R.; Coenen, J. W.; Coffey, I.; Colas, L.; Collins, S.; Conka, D.; Conroy, S.; Conway, N.; Coombs, D.; Cooper, D.; Cooper, S. R.; Corradino, C.; Corre, Y.; Corrigan, G.; Cortes, S.; Coster, D.; Couchman, A. S.; Cox, M. P.; Craciunescu, T.; Cramp, S.; Craven, R.; Crisanti, F.; Croci, G.; Croft, D.; Crombé, K.; Crowe, R.; Cruz, N.; Cseh, G.; Cufar, A.; Cullen, A.; Curuia, M.; Czarnecka, A.; Dabirikhah, H.; Dalgliesh, P.; Dalley, S.; Dankowski, J.; Darrow, D.; Davies, O.; Davis, W.; Day, C.; Day, I. E.; De Bock, M.; de Castro, A.; de la Cal, E.; de la Luna, E.; De Masi, G.; de Pablos, J. L.; De Temmerman, G.; De Tommasi, G.; de Vries, P.; Deakin, K.; Deane, J.; Degli Agostini, F.; Dejarnac, R.; Delabie, E.; den Harder, N.; Dendy, R. O.; Denis, J.; Denner, P.; Devaux, S.; Devynck, P.; Di Maio, F.; Di Siena, A.; Di Troia, C.; Dinca, P.; D'Inca, R.; Ding, B.; Dittmar, T.; Doerk, H.; Doerner, R. P.; Donné, T.; Dorling, S. E.; Dormido-Canto, S.; Doswon, S.; Douai, D.; Doyle, P. 
T.; Drenik, A.; Drewelow, P.; Drews, P.; Duckworth, Ph.; Dumont, R.; Dumortier, P.; Dunai, D.; Dunne, M.; Ďuran, I.; Durodié, F.; Dutta, P.; Duval, B. P.; Dux, R.; Dylst, K.; Dzysiuk, N.; Edappala, P. V.; Edmond, J.; Edwards, A. M.; Edwards, J.; Eich, Th.; Ekedahl, A.; El-Jorf, R.; Elsmore, C. G.; Enachescu, M.; Ericsson, G.; Eriksson, F.; Eriksson, J.; Eriksson, L. G.; Esposito, B.; Esquembri, S.; Esser, H. G.; Esteve, D.; Evans, B.; Evans, G. E.; Evison, G.; Ewart, G. D.; Fagan, D.; Faitsch, M.; Falie, D.; Fanni, A.; Fasoli, A.; Faustin, J. M.; Fawlk, N.; Fazendeiro, L.; Fedorczak, N.; Felton, R. C.; Fenton, K.; Fernades, A.; Fernandes, H.; Ferreira, J.; Fessey, J. A.; Février, O.; Ficker, O.; Field, A.; Fietz, S.; Figueiredo, A.; Figueiredo, J.; Fil, A.; Finburg, P.; Firdaouss, M.; Fischer, U.; Fittill, L.; Fitzgerald, M.; Flammini, D.; Flanagan, J.; Fleming, C.; Flinders, K.; Fonnesu, N.; Fontdecaba, J. M.; Formisano, A.; Forsythe, L.; Fortuna, L.; Fortuna-Zalesna, E.; Fortune, M.; Foster, S.; Franke, T.; Franklin, T.; Frasca, M.; Frassinetti, L.; Freisinger, M.; Fresa, R.; Frigione, D.; Fuchs, V.; Fuller, D.; Futatani, S.; Fyvie, J.; Gál, K.; Galassi, D.; Gałązka, K.; Galdon-Quiroga, J.; Gallagher, J.; Gallart, D.; Galvão, R.; Gao, X.; Gao, Y.; Garcia, J.; Garcia-Carrasco, A.; García-Muñoz, M.; Gardarein, J.-L.; Garzotti, L.; Gaudio, P.; Gauthier, E.; Gear, D. F.; Gee, S. J.; Geiger, B.; Gelfusa, M.; Gerasimov, S.; Gervasini, G.; Gethins, M.; Ghani, Z.; Ghate, M.; Gherendi, M.; Giacalone, J. C.; Giacomelli, L.; Gibson, C. S.; Giegerich, T.; Gil, C.; Gil, L.; Gilligan, S.; Gin, D.; Giovannozzi, E.; Girardo, J. B.; Giroud, C.; Giruzzi, G.; Glöggler, S.; Godwin, J.; Goff, J.; Gohil, P.; Goloborod'ko, V.; Gomes, R.; Gonçalves, B.; Goniche, M.; Goodliffe, M.; Goodyear, A.; Gorini, G.; Gosk, M.; Goulding, R.; Goussarov, A.; Gowland, R.; Graham, B.; Graham, M. E.; Graves, J. P.; Grazier, N.; Grazier, P.; Green, N. R.; Greuner, H.; Grierson, B.; Griph, F. 
S.; Grisolia, C.; Grist, D.; Groth, M.; Grove, R.; Grundy, C. N.; Grzonka, J.; Guard, D.; Guérard, C.; Guillemaut, C.; Guirlet, R.; Gurl, C.; Utoh, H. H.; Hackett, L. J.; Hacquin, S.; Hagar, A.; Hager, R.; Hakola, A.; Halitovs, M.; Hall, S. J.; Hallworth Cook, S. P.; Hamlyn-Harris, C.; Hammond, K.; Harrington, C.; Harrison, J.; Harting, D.; Hasenbeck, F.; Hatano, Y.; Hatch, D. R.; Haupt, T. D. V.; Hawes, J.; Hawkes, N. C.; Hawkins, J.; Hawkins, P.; Haydon, P. W.; Hayter, N.; Hazel, S.; Heesterman, P. J. L.; Heinola, K.; Hellesen, C.; Hellsten, T.; Helou, W.; Hemming, O. N.; Hender, T. C.; Henderson, M.; Henderson, S. S.; Henriques, R.; Hepple, D.; Hermon, G.; Hertout, P.; Hidalgo, C.; Highcock, E. G.; Hill, M.; Hillairet, J.; Hillesheim, J.; Hillis, D.; Hizanidis, K.; Hjalmarsson, A.; Hobirk, J.; Hodille, E.; Hogben, C. H. A.; Hogeweij, G. M. D.; Hollingsworth, A.; Hollis, S.; Homfray, D. A.; Horáček, J.; Hornung, G.; Horton, A. R.; Horton, L. D.; Horvath, L.; Hotchin, S. P.; Hough, M. R.; Howarth, P. J.; Hubbard, A.; Huber, A.; Huber, V.; Huddleston, T. M.; Hughes, M.; Huijsmans, G. T. A.; Hunter, C. L.; Huynh, P.; Hynes, A. M.; Iglesias, D.; Imazawa, N.; Imbeaux, F.; Imríšek, M.; Incelli, M.; Innocente, P.; Irishkin, M.; Ivanova-Stanik, I.; Jachmich, S.; Jacobsen, A. S.; Jacquet, P.; Jansons, J.; Jardin, A.; Järvinen, A.; Jaulmes, F.; Jednoróg, S.; Jenkins, I.; Jeong, C.; Jepu, I.; Joffrin, E.; Johnson, R.; Johnson, T.; Johnston, Jane; Joita, L.; Jones, G.; Jones, T. T. C.; Hoshino, K. K.; Kallenbach, A.; Kamiya, K.; Kaniewski, J.; Kantor, A.; Kappatou, A.; Karhunen, J.; Karkinsky, D.; Karnowska, I.; Kaufman, M.; Kaveney, G.; Kazakov, Y.; Kazantzidis, V.; Keeling, D. L.; Keenan, T.; Keep, J.; Kempenaars, M.; Kennedy, C.; Kenny, D.; Kent, J.; Kent, O. N.; Khilkevich, E.; Kim, H. T.; Kim, H. S.; Kinch, A.; king, C.; King, D.; King, R. F.; Kinna, D. J.; Kiptily, V.; Kirk, A.; Kirov, K.; Kirschner, A.; Kizane, G.; Klepper, C.; Klix, A.; Knight, P.; Knipe, S. 
J.; Knott, S.; Kobuchi, T.; Köchl, F.; Kocsis, G.; Kodeli, I.; Kogan, L.; Kogut, D.; Koivuranta, S.; Kominis, Y.; Köppen, M.; Kos, B.; Koskela, T.; Koslowski, H. R.; Koubiti, M.; Kovari, M.; Kowalska-Strzęciwilk, E.; Krasilnikov, A.; Krasilnikov, V.; Krawczyk, N.; Kresina, M.; Krieger, K.; Krivska, A.; Kruezi, U.; Książek, I.; Kukushkin, A.; Kundu, A.; Kurki-Suonio, T.; Kwak, S.; Kwiatkowski, R.; Kwon, O. J.; Laguardia, L.; Lahtinen, A.; Laing, A.; Lam, N.; Lambertz, H. T.; Lane, C.; Lang, P. T.; Lanthaler, S.; Lapins, J.; Lasa, A.; Last, J. R.; Łaszyńska, E.; Lawless, R.; Lawson, A.; Lawson, K. D.; Lazaros, A.; Lazzaro, E.; Leddy, J.; Lee, S.; Lefebvre, X.; Leggate, H. J.; Lehmann, J.; Lehnen, M.; Leichtle, D.; Leichuer, P.; Leipold, F.; Lengar, I.; Lennholm, M.; Lerche, E.; Lescinskis, A.; Lesnoj, S.; Letellier, E.; Leyland, M.; Leysen, W.; Li, L.; Liang, Y.; Likonen, J.; Linke, J.; Linsmeier, Ch.; Lipschultz, B.; Liu, G.; Liu, Y.; Lo Schiavo, V. P.; Loarer, T.; Loarte, A.; Lobel, R. C.; Lomanowski, B.; Lomas, P. J.; Lönnroth, J.; López, J. M.; López-Razola, J.; Lorenzini, R.; Losada, U.; Lovell, J. J.; Loving, A. B.; Lowry, C.; Luce, T.; Lucock, R. M. A.; Lukin, A.; Luna, C.; Lungaroni, M.; Lungu, C. P.; Lungu, M.; Lunniss, A.; Lupelli, I.; Lyssoivan, A.; Macdonald, N.; Macheta, P.; Maczewa, K.; Magesh, B.; Maget, P.; Maggi, C.; Maier, H.; Mailloux, J.; Makkonen, T.; Makwana, R.; Malaquias, A.; Malizia, A.; Manas, P.; Manning, A.; Manso, M. E.; Mantica, P.; Mantsinen, M.; Manzanares, A.; Maquet, Ph.; Marandet, Y.; Marcenko, N.; Marchetto, C.; Marchuk, O.; Marinelli, M.; Marinucci, M.; Markovič, T.; Marocco, D.; Marot, L.; Marren, C. A.; Marshal, R.; Martin, A.; Martin, Y.; Martín de Aguilera, A.; Martínez, F. J.; Martín-Solís, J. R.; Martynova, Y.; Maruyama, S.; Masiello, A.; Maslov, M.; Matejcik, S.; Mattei, M.; Matthews, G. F.; Maviglia, F.; Mayer, M.; Mayoral, M. L.; May-Smith, T.; Mazon, D.; Mazzotta, C.; McAdams, R.; McCarthy, P. J.; McClements, K. 
G.; McCormack, O.; McCullen, P. A.; McDonald, D.; McIntosh, S.; McKean, R.; McKehon, J.; Meadows, R. C.; Meakins, A.; Medina, F.; Medland, M.; Medley, S.; Meigh, S.; Meigs, A. G.; Meisl, G.; Meitner, S.; Meneses, L.; Menmuir, S.; Mergia, K.; Merrigan, I. R.; Mertens, Ph.; Meshchaninov, S.; Messiaen, A.; Meyer, H.; Mianowski, S.; Michling, R.; Middleton-Gear, D.; Miettunen, J.; Militello, F.; Militello-Asp, E.; Miloshevsky, G.; Mink, F.; Minucci, S.; Miyoshi, Y.; Mlynář, J.; Molina, D.; Monakhov, I.; Moneti, M.; Mooney, R.; Moradi, S.; Mordijck, S.; Moreira, L.; Moreno, R.; Moro, F.; Morris, A. W.; Morris, J.; Moser, L.; Mosher, S.; Moulton, D.; Murari, A.; Muraro, A.; Murphy, S.; Asakura, N. N.; Na, Y. S.; Nabais, F.; Naish, R.; Nakano, T.; Nardon, E.; Naulin, V.; Nave, M. F. F.; Nedzelski, I.; Nemtsev, G.; Nespoli, F.; Neto, A.; Neu, R.; Neverov, V. S.; Newman, M.; Nicholls, K. J.; Nicolas, T.; Nielsen, A. H.; Nielsen, P.; Nilsson, E.; Nishijima, D.; Noble, C.; Nocente, M.; Nodwell, D.; Nordlund, K.; Nordman, H.; Nouailletas, R.; Nunes, I.; Oberkofler, M.; Odupitan, T.; Ogawa, M. T.; O'Gorman, T.; Okabayashi, M.; Olney, R.; Omolayo, O.; O'Mullane, M.; Ongena, J.; Orsitto, F.; Orszagh, J.; Oswuigwe, B. I.; Otin, R.; Owen, A.; Paccagnella, R.; Pace, N.; Pacella, D.; Packer, L. W.; Page, A.; Pajuste, E.; Palazzo, S.; Pamela, S.; Panja, S.; Papp, P.; Paprok, R.; Parail, V.; Park, M.; Parra Diaz, F.; Parsons, M.; Pasqualotto, R.; Patel, A.; Pathak, S.; Paton, D.; Patten, H.; Pau, A.; Pawelec, E.; Soldan, C. Paz; Peackoc, A.; Pearson, I. J.; Pehkonen, S.-P.; Peluso, E.; Penot, C.; Pereira, A.; Pereira, R.; Pereira Puglia, P. P.; Perez von Thun, C.; Peruzzo, S.; Peschanyi, S.; Peterka, M.; Petersson, P.; Petravich, G.; Petre, A.; Petrella, N.; Petržilka, V.; Peysson, Y.; Pfefferlé, D.; Philipps, V.; Pillon, M.; Pintsuk, G.; Piovesan, P.; Pires dos Reis, A.; Piron, L.; Pironti, A.; Pisano, F.; Pitts, R.; Pizzo, F.; Plyusnin, V.; Pomaro, N.; Pompilian, O. G.; Pool, P. 
J.; Popovichev, S.; Porfiri, M. T.; Porosnicu, C.; Porton, M.; Possnert, G.; Potzel, S.; Powell, T.; Pozzi, J.; Prajapati, V.; Prakash, R.; Prestopino, G.; Price, D.; Price, M.; Price, R.; Prior, P.; Proudfoot, R.; Pucella, G.; Puglia, P.; Puiatti, M. E.; Pulley, D.; Purahoo, K.; Pütterich, Th.; Rachlew, E.; Rack, M.; Ragona, R.; Rainford, M. S. J.; Rakha, A.; Ramogida, G.; Ranjan, S.; Rapson, C. J.; Rasmussen, J. J.; Rathod, K.; Rattá, G.; Ratynskaia, S.; Ravera, G.; Rayner, C.; Rebai, M.; Reece, D.; Reed, A.; Réfy, D.; Regan, B.; Regaña, J.; Reich, M.; Reid, N.; Reimold, F.; Reinhart, M.; Reinke, M.; Reiser, D.; Rendell, D.; Reux, C.; Reyes Cortes, S. D. A.; Reynolds, S.; Riccardo, V.; Richardson, N.; Riddle, K.; Rigamonti, D.; Rimini, F. G.; Risner, J.; Riva, M.; Roach, C.; Robins, R. J.; Robinson, S. A.; Robinson, T.; Robson, D. W.; Roccella, R.; Rodionov, R.; Rodrigues, P.; Rodriguez, J.; Rohde, V.; Romanelli, F.; Romanelli, M.; Romanelli, S.; Romazanov, J.; Rowe, S.; Rubel, M.; Rubinacci, G.; Rubino, G.; Ruchko, L.; Ruiz, M.; Ruset, C.; Rzadkiewicz, J.; Saarelma, S.; Sabot, R.; Safi, E.; Sagar, P.; Saibene, G.; Saint-Laurent, F.; Salewski, M.; Salmi, A.; Salmon, R.; Salzedas, F.; Samaddar, D.; Samm, U.; Sandiford, D.; Santa, P.; Santala, M. I. K.; Santos, B.; Santucci, A.; Sartori, F.; Sartori, R.; Sauter, O.; Scannell, R.; Schlummer, T.; Schmid, K.; Schmidt, V.; Schmuck, S.; Schneider, M.; Schöpf, K.; Schwörer, D.; Scott, S. D.; Sergienko, G.; Sertoli, M.; Shabbir, A.; Sharapov, S. E.; Shaw, A.; Shaw, R.; Sheikh, H.; Shepherd, A.; Shevelev, A.; Shumack, A.; Sias, G.; Sibbald, M.; Sieglin, B.; Silburn, S.; Silva, A.; Silva, C.; Simmons, P. A.; Simpson, J.; Simpson-Hutchinson, J.; Sinha, A.; Sipilä, S. K.; Sips, A. C. C.; Sirén, P.; Sirinelli, A.; Sjöstrand, H.; Skiba, M.; Skilton, R.; Slabkowska, K.; Slade, B.; Smith, N.; Smith, P. G.; Smith, R.; Smith, T. J.; Smithies, M.; Snoj, L.; Soare, S.; Solano, E. 
R.; Somers, A.; Sommariva, C.; Sonato, P.; Sopplesa, A.; Sousa, J.; Sozzi, C.; Spagnolo, S.; Spelzini, T.; Spineanu, F.; Stables, G.; Stamatelatos, I.; Stamp, M. F.; Staniec, P.; Stankūnas, G.; Stan-Sion, C.; Stead, M. J.; Stefanikova, E.; Stepanov, I.; Stephen, A. V.; Stephen, M.; Stevens, A.; Stevens, B. D.; Strachan, J.; Strand, P.; Strauss, H. R.; Ström, P.; Stubbs, G.; Studholme, W.; Subba, F.; Summers, H. P.; Svensson, J.; Świderski, Ł.; Szabolics, T.; Szawlowski, M.; Szepesi, G.; Suzuki, T. T.; Tál, B.; Tala, T.; Talbot, A. R.; Talebzadeh, S.; Taliercio, C.; Tamain, P.; Tame, C.; Tang, W.; Tardocchi, M.; Taroni, L.; Taylor, D.; Taylor, K. A.; Tegnered, D.; Telesca, G.; Teplova, N.; Terranova, D.; Testa, D.; Tholerus, E.; Thomas, J.; Thomas, J. D.; Thomas, P.; Thompson, A.; Thompson, C.-A.; Thompson, V. K.; Thorne, L.; Thornton, A.; Thrysøe, A. S.; Tigwell, P. A.; Tipton, N.; Tiseanu, I.; Tojo, H.; Tokitani, M.; Tolias, P.; Tomeš, M.; Tonner, P.; Towndrow, M.; Trimble, P.; Tripsky, M.; Tsalas, M.; Tsavalas, P.; Tskhakaya jun, D.; Turner, I.; Turner, M. M.; Turnyanskiy, M.; Tvalashvili, G.; Tyrrell, S. G. J.; Uccello, A.; Ul-Abidin, Z.; Uljanovs, J.; Ulyatt, D.; Urano, H.; Uytdenhouwen, I.; Vadgama, A. P.; Valcarcel, D.; Valentinuzzi, M.; Valisa, M.; Vallejos Olivares, P.; Valovic, M.; Van De Mortel, M.; Van Eester, D.; Van Renterghem, W.; van Rooij, G. J.; Varje, J.; Varoutis, S.; Vartanian, S.; Vasava, K.; Vasilopoulou, T.; Vega, J.; Verdoolaege, G.; Verhoeven, R.; Verona, C.; Verona Rinati, G.; Veshchev, E.; Vianello, N.; Vicente, J.; Viezzer, E.; Villari, S.; Villone, F.; Vincenzi, P.; Vinyar, I.; Viola, B.; Vitins, A.; Vizvary, Z.; Vlad, M.; Voitsekhovitch, I.; Vondráček, P.; Vora, N.; Vu, T.; Pires de Sa, W. W.; Wakeling, B.; Waldon, C. W. F.; Walkden, N.; Walker, M.; Walker, R.; Walsh, M.; Wang, E.; Wang, N.; Warder, S.; Warren, R. J.; Waterhouse, J.; Watkins, N. 
W.; Watts, C.; Wauters, T.; Weckmann, A.; Weiland, J.; Weisen, H.; Weiszflog, M.; Wellstood, C.; West, A. T.; Wheatley, M. R.; Whetham, S.; Whitehead, A. M.; Whitehead, B. D.; Widdowson, A. M.; Wiesen, S.; Wilkinson, J.; Williams, J.; Williams, M.; Wilson, A. R.; Wilson, D. J.; Wilson, H. R.; Wilson, J.; Wischmeier, M.; Withenshaw, G.; Withycombe, A.; Witts, D. M.; Wood, D.; Wood, R.; Woodley, C.; Wray, S.; Wright, J.; Wright, J. C.; Wu, J.; Wukitch, S.; Wynn, A.; Xu, T.; Yadikin, D.; Yanling, W.; Yao, L.; Yavorskij, V.; Yoo, M. G.; Young, C.; Young, D.; Young, I. D.; Young, R.; Zacks, J.; Zagorski, R.; Zaitsev, F. S.; Zanino, R.; Zarins, A.; Zastrow, K. D.; Zerbini, M.; Zhang, W.; Zhou, Y.; Zilli, E.; Zoita, V.; Zoletnik, S.; Zychor, I.; JET Contributors

    2017-10-01

    The 2014-2016 JET results are reviewed in the light of their significance for optimising the ITER research plan for active and non-active operation. More than 60 h of plasma operation with the ITER first wall materials has successfully taken place since their installation in 2011. A new multi-machine scaling of the type I ELM divertor energy flux density to ITER is supported by first-principles modelling. ITER-relevant disruption experiments and first-principles modelling are reported with a set of three disruption mitigation valves mimicking the ITER setup. Insights into the L-H power threshold in deuterium and hydrogen are given, stressing the importance of the magnetic configuration and the recent measurements of fine-scale structures in the edge radial electric field. Dimensionless scans of core and pedestal confinement provide new information to elucidate the influence of the first wall material on fusion performance. H-mode plasmas at ITER triangularity (H = 1 at βN ~ 1.8 and n/nGW ~ 0.6) have been sustained at 2 MA for 5 s. The ITER neutronics codes have been validated on high-performance experiments. Prospects for the coming D-T campaign and the 14 MeV neutron calibration strategy are reviewed.

  14. Blind column selection protocol for two-dimensional high performance liquid chromatography.

    PubMed

    Burns, Niki K; Andrighetto, Luke M; Conlan, Xavier A; Purcell, Stuart D; Barnett, Neil W; Denning, Jacquie; Francis, Paul S; Stevenson, Paul G

    2016-07-01

    The selection of two orthogonal columns for two-dimensional high performance liquid chromatography (LC×LC) separation of natural product extracts can be a labour-intensive and time-consuming process and in many cases is an entirely trial-and-error approach. This paper introduces a blind optimisation method for column selection for a black box of constituent components. A data processing pipeline, created in the open-source application OpenMS®, was developed to map the components within the mixture of equal mass across a library of HPLC columns; LC×LC separation space utilisation was compared by measuring the fractional surface coverage, fcoverage. It was found that for a test mixture from an opium poppy (Papaver somniferum) extract, the combination of diphenyl and C18 stationary phases provided a predicted fcoverage of 0.48, matched by an actual usage of 0.43. OpenMS®, in conjunction with algorithms designed in-house, has allowed a significantly quicker selection of two orthogonal columns, optimised for LC×LC separation of crude extracts of plant material. Copyright © 2016 Elsevier B.V. All rights reserved.
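
    The fcoverage metric described above lends itself to a compact implementation: normalise the retention times in each dimension, bin the resulting unit square, and count the occupied cells. A minimal sketch (the bin count and min-max normalisation are illustrative assumptions; the paper's actual pipeline runs inside OpenMS®):

```python
import numpy as np

def fractional_coverage(rt1, rt2, bins=10):
    """Estimate fractional surface coverage (fcoverage) of a 2D separation.

    rt1, rt2: retention times of each component in dimensions 1 and 2.
    The retention space is normalised to [0, 1] x [0, 1], divided into
    bins x bins cells, and fcoverage is the fraction of cells occupied.
    """
    rt1 = np.asarray(rt1, dtype=float)
    rt2 = np.asarray(rt2, dtype=float)
    # Normalise each dimension to the span of observed retention times.
    x = (rt1 - rt1.min()) / (rt1.max() - rt1.min())
    y = (rt2 - rt2.min()) / (rt2.max() - rt2.min())
    occupied, _, _ = np.histogram2d(x, y, bins=bins, range=[[0, 1], [0, 1]])
    return np.count_nonzero(occupied) / float(bins * bins)
```

    A fully correlated column pair fills only the diagonal cells and scores low, while an orthogonal pair spreads components across the grid and scores close to 1.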

  15. Optimal control of Formula One car energy recovery systems

    NASA Astrophysics Data System (ADS)

    Limebeer, D. J. N.; Perantoni, G.; Rao, A. V.

    2014-10-01

    The utility of orthogonal collocation methods in the solution of optimal control problems relating to Formula One racing is demonstrated. These methods can be used to optimise driver controls such as the steering, braking and throttle usage, and to optimise vehicle parameters such as the aerodynamic downforce and mass distribution. Of particular interest is the optimal usage of energy recovery systems (ERSs). Contemporary kinetic energy recovery systems are studied and compared with future hybrid kinetic and thermal/heat ERSs, known as ERS-K and ERS-H, respectively. It is demonstrated that these systems, when properly controlled, can produce contemporary lap times using approximately two-thirds of the fuel required by earlier-generation (2013 and prior) vehicles.
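
    The core machinery behind orthogonal collocation is the pseudospectral differentiation matrix: trajectories are represented by their values at collocation nodes, and the dynamics x'(t) = f(x, u) become algebraic constraints D X = f(X, U) on those values. A minimal sketch using Legendre-Gauss nodes (the node count and the barycentric construction are illustrative, not the authors' exact formulation):

```python
import numpy as np

def differentiation_matrix(nodes):
    """Lagrange differentiation matrix on the given collocation nodes.

    Row i of D gives the derivative, at nodes[i], of the polynomial
    interpolating values placed at all nodes; in orthogonal collocation
    the dynamics are enforced as D @ X = f(X, U) at these points.
    """
    x = np.asarray(nodes, dtype=float)
    n = len(x)
    # Barycentric weights w_j = 1 / prod_{k != j} (x_j - x_k).
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, 1.0)
    w = 1.0 / diff.prod(axis=1)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                D[i, j] = (w[j] / w[i]) / (x[i] - x[j])
        D[i, i] = -D[i].sum()   # rows sum to zero: constants differentiate to 0
    return D

# Legendre-Gauss collocation points on [-1, 1].
nodes, _ = np.polynomial.legendre.leggauss(5)
D = differentiation_matrix(nodes)
# D differentiates polynomials up to degree 4 exactly, e.g. D @ x^3 = 3 x^2.
```

    In a full transcription, the lap-time problem's states and controls are stacked at these nodes and handed to a nonlinear programming solver with D encoding the vehicle dynamics.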

  16. A novel sleep optimisation programme to improve athletes' well-being and performance.

    PubMed

    Van Ryswyk, Emer; Weeks, Richard; Bandick, Laura; O'Keefe, Michaela; Vakulin, Andrew; Catcheside, Peter; Barger, Laura; Potter, Andrew; Poulos, Nick; Wallace, Jarryd; Antic, Nick A

    2017-03-01

    To improve well-being and performance indicators in a group of Australian Football League (AFL) players via a six-week sleep optimisation programme. Prospective intervention study following observations suggestive of reduced sleep and excessive daytime sleepiness in an AFL group. Athletes from the Adelaide Football Club were invited to participate if they had played AFL senior-level football for 1-5 years, or if they had excessive daytime sleepiness (Epworth Sleepiness Scale [ESS] score >10). An initial education session explained normal sleep needs and how to achieve increased sleep duration and quality. Participants (n = 25) received ongoing feedback on their sleep, and a mid-programme education and feedback session. Sleep duration, quality and related outcomes were measured during week one and at the conclusion of the six-week intervention period using sleep diaries, actigraphy, the ESS, the Pittsburgh Sleep Quality Index, the Profile of Mood States, the Training Distress Scale, the Perceived Stress Scale and the Psychomotor Vigilance Task. Sleep diaries demonstrated an increase in total sleep time of approximately 20 min (498.8 ± 53.8 to 518.7 ± 34.3 min; p < 0.05) and a 2% increase in sleep efficiency (p < 0.05). There was a corresponding increase in vigour (p < 0.001) and decrease in fatigue (p < 0.05). Improvements in measures of sleep efficiency, fatigue and vigour indicate that a sleep optimisation programme may improve athletes' well-being. More research is required into the effects of sleep optimisation on athletic performance.
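
    Sleep efficiency, one of the diary-derived outcomes above, is conventionally the percentage of time in bed actually spent asleep. A trivial sketch with hypothetical numbers (the study's diary and actigraphy processing is not specified here):

```python
def sleep_efficiency(total_sleep_min, time_in_bed_min):
    """Sleep efficiency as a percentage: time asleep / time in bed * 100."""
    return 100.0 * total_sleep_min / time_in_bed_min

# Hypothetical athlete: 480 min asleep out of 510 min in bed -> ~94%.
```

    A 2% gain on this measure, as reported, corresponds to roughly ten extra minutes of sleep for an unchanged time in bed of about 8.5 h.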

  17. Energy and wear optimisation of train longitudinal dynamics and of traction and braking systems

    NASA Astrophysics Data System (ADS)

    Conti, R.; Galardi, E.; Meli, E.; Nocciolini, D.; Pugi, L.; Rindi, A.

    2015-05-01

    Traction and braking systems deeply affect longitudinal train dynamics, especially when an extensive blending phase among different pneumatic, electric and magnetic devices is required. The energy and wear optimisation of longitudinal vehicle dynamics has a crucial economic impact and involves several engineering problems, such as wear of braking friction components, energy efficiency, thermal load on components, and the level of safety under degraded adhesion conditions (often constrained by the regulations in force on signalling or other safety-related subsystems). In fact, the application of energy storage systems can lead to an efficiency improvement of at least 10%, while, as regards wear reduction, the improvement due to distributed traction systems and optimised traction devices can be quantified at about 50%. In this work, an innovative integrated procedure is proposed by the authors to optimise longitudinal train dynamics and traction and braking manoeuvres in terms of both energy and wear. The new approach has been applied to existing test cases and validated against experimental data provided by Breda; for some components and their homologation process, the experimental results derive from cooperation with relevant industrial partners such as Trenitalia and Italcertifer. In particular, simulation results refer to tests performed on a high-speed train (Ansaldo Breda EMU V250) and on a tram (Ansaldo Breda Sirio Tram). The proposed approach is based on a modular simulation platform in which the sub-models corresponding to different subsystems can be easily customised, depending on the considered application, the availability of technical data and the homologation process of different components.

  18. Advanced treatment planning using direct 4D optimisation for pencil-beam scanned particle therapy

    NASA Astrophysics Data System (ADS)

    Bernatowicz, Kinga; Zhang, Ye; Perrin, Rosalind; Weber, Damien C.; Lomax, Antony J.

    2017-08-01

    We report on the development of a new four-dimensional (4D) optimisation approach for scanned proton beams, which incorporates both irregular motion patterns and the delivery dynamics of the treatment machine into the plan optimiser. Furthermore, we assess the effectiveness of this technique in reducing dose to critical structures in proximity to moving targets, while maintaining effective target dose homogeneity and coverage. The proposed approach has been tested using both a simulated phantom and a clinical liver cancer case, and allows for realistic 4D calculations and optimisation using irregular breathing patterns extracted from e.g. 4DCT-MRI (4D computed tomography-magnetic resonance imaging). 4D dose distributions resulting from our 4D optimisation can achieve almost the same quality as static plans, independent of the studied geometry/anatomy or selected motion (regular and irregular). Additionally, the current implementation of the 4D optimisation approach requires less than 3 min to find the solution for a single field planned on the 4DCT of a liver cancer patient. Although 4D optimisation allows for realistic calculations using irregular breathing patterns, it is very sensitive to variations from the planned motion. Based on a sensitivity analysis, target dose homogeneity comparable to static plans (D5 − D95 < 5%) was found only for differences in amplitude of up to 1 mm, for changes in respiratory phase of <200 ms and for changes in the breathing period of <20 ms in comparison with the motions used during optimisation. As such, methods to robustly deliver 4D-optimised plans employing 4D intensity-modulated delivery are discussed.
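
    The D5 − D95 homogeneity metric used above can be read directly off a dose distribution: D5 is the dose exceeded by only the hottest 5% of target voxels, D95 the dose covering 95% of them. A minimal percentile-based sketch (uniform voxel weighting is an assumption):

```python
import numpy as np

def d5_minus_d95(dose):
    """Target dose homogeneity metric D5 - D95.

    `dose` holds one value per target voxel (e.g. in % of prescription).
    D5 is the dose received by the hottest 5% of voxels, D95 the dose
    covering 95% of them; a smaller D5 - D95 means a flatter dose.
    """
    d = np.asarray(dose, dtype=float)
    d5 = np.percentile(d, 95)   # dose exceeded by only 5% of voxels
    d95 = np.percentile(d, 5)   # dose exceeded by 95% of voxels
    return d5 - d95
```

    A perfectly uniform target dose gives D5 − D95 = 0; the static-plan-comparable threshold quoted above corresponds to keeping this spread under 5% of the prescription.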

  19. Detection principle of gravitational wave detectors

    NASA Astrophysics Data System (ADS)

    Congedo, Giuseppe

    With the first two detections in late 2015, astrophysics has officially entered the new era of gravitational wave (GW) observations. Since then, much has been going on in the field, with a great deal of work focusing on the observations and their implications for astrophysics and for tests of general relativity in the strong regime. However, much less is understood about how gravitational wave detectors really work at the fundamental level. For decades, the response to incoming signals has been customarily calculated using the very same physical principle that has proved so successful in the first detections. In this paper, we review the physical principle behind such a detection at the most fundamental level, and we try to highlight the peculiar subtleties that make it so hard in practice. We then mention how detectors are built starting from this fundamental measurement element.

  20. Metabolomic approach to identifying bioactive compounds in berries: advances toward fruit nutritional enhancement.

    PubMed

    Stewart, Derek; McDougall, Gordon J; Sungurtas, Julie; Verrall, Susan; Graham, Julie; Martinussen, Inger

    2007-06-01

    Plant polyphenolics continue to be the focus of attention with regard to their putative impact on human health. An increasing and ageing human population means that the focus on nutrition and nutritional enhancement or optimisation of our foodstuffs is paramount. Using the raspberry as a model, we have shown how modern metabolic profiling approaches can be used to identify the changes in the level of beneficial polyphenolics in fruit breeding segregating populations and how the level of these components is determined by genetic and/or environmental control. Interestingly, the vitamin C content appeared to be significantly influenced by environment (growth conditions) whilst the content of the polyphenols such as cyanidin, pelargonidin and quercetin glycosides appeared much more tightly regulated, suggesting a rigorous genetic control. Preliminary metabolic profiling showed that the fruit polyphenolic profiles divided into two gross groups segregating on the basis of relative levels of cyanidin-3-sophoroside and cyanidin-3-rutinoside, compounds implicated as conferring human health benefits.

  1. Optimisation of insect cell growth in deep-well blocks: development of a high-throughput insect cell expression screen.

    PubMed

    Bahia, Daljit; Cheung, Robert; Buchs, Mirjam; Geisse, Sabine; Hunt, Ian

    2005-01-01

    This report describes a method to culture insect cells in 24 deep-well blocks for the routine small-scale optimisation of baculovirus-mediated protein expression experiments. Miniaturisation of this process provides the necessary reduction in resource allocation, reagents and labour to allow extensive and rapid optimisation of expression conditions, with a concomitant reduction in lead-time before commencement of large-scale bioreactor experiments. This greatly simplifies the optimisation process and allows the use of liquid-handling robotics in much of the initial optimisation stages, thereby greatly increasing the throughput of the laboratory. We present several examples of the use of deep-well block expression studies in the optimisation of therapeutically relevant protein targets. We also discuss how the enhanced throughput offered by this approach can be adapted to robotic handling systems and the implications this has for the capacity to conduct multi-parallel protein expression studies.

  2. Mutual information-based LPI optimisation for radar network

    NASA Astrophysics Data System (ADS)

    Shi, Chenguang; Zhou, Jianjiang; Wang, Fei; Chen, Jun

    2015-07-01

    A radar network can offer significant performance improvements for target detection and information extraction by employing spatial diversity. For a fixed number of radars, the achievable mutual information (MI) for estimating the target parameters may extend beyond a predefined threshold with full power transmission. In this paper, an effective low probability of intercept (LPI) optimisation algorithm is presented to improve the LPI performance of a radar network. Based on the radar network system model, we first adopt the Schleher intercept factor of the network as the optimisation metric for LPI performance. Then, a novel LPI optimisation algorithm is presented in which, for a predefined MI threshold, the Schleher intercept factor is minimised by optimising the transmission power allocation among the radars in the network, such that enhanced LPI performance is achieved. A genetic algorithm based on nonlinear programming (GA-NP) is employed to solve the resulting nonconvex and nonlinear optimisation problem. Simulations demonstrate that the proposed algorithm is valuable and effective in improving the LPI performance of a radar network.
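
    The constraint structure of the problem — shrink transmitted power (and with it the Schleher intercept factor) until the MI requirement is only just met — can be illustrated without the paper's GA-NP solver. The sketch below assumes a hypothetical MI model, sum_i log2(1 + g_i p_i), and uses a simple bisection on a common power scale factor instead of a genetic algorithm:

```python
import numpy as np

def min_power_for_mi(gains, mi_threshold, p_max, tol=1e-9):
    """Scale a full-power allocation down until the MI constraint is tight.

    Hypothetical model: network MI = sum_i log2(1 + g_i * p_i), and the
    Schleher intercept factor grows with transmitted power, so LPI
    performance improves as powers shrink.  Starting from full power
    p_max, bisect on a common scale factor so the MI threshold is just met.
    """
    gains = np.asarray(gains, dtype=float)
    p_max = np.asarray(p_max, dtype=float)

    def mi(scale):
        return np.log2(1.0 + gains * scale * p_max).sum()

    if mi(1.0) < mi_threshold:
        raise ValueError("MI threshold unreachable even at full power")
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mi(mid) < mi_threshold else (lo, mid)
    return hi * p_max
```

    The paper's formulation allocates power per radar rather than by a single scale factor, which is why a nonconvex solver such as GA-NP is needed there; the sketch only shows why reducing power improves LPI while the MI constraint binds from below.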

  3. A novel global Harmony Search method based on Ant Colony Optimisation algorithm

    NASA Astrophysics Data System (ADS)

    Fouad, Allouani; Boukhetala, Djamel; Boudjema, Fares; Zenger, Kai; Gao, Xiao-Zhi

    2016-03-01

    The Global-best Harmony Search (GHS) is a recently developed stochastic optimisation algorithm that hybridises the Harmony Search (HS) method with the swarm-intelligence concept of particle swarm optimisation (PSO) to enhance its performance. In this article, a new optimisation algorithm called GHSACO is developed by incorporating the GHS with the Ant Colony Optimisation (ACO) algorithm. Our method introduces a novel improvisation process, which differs from that of the GHS in the following aspects: (i) a modified harmony memory (HM) representation and conception; (ii) the use of a global random switching mechanism to monitor the choice between the ACO and the GHS; and (iii) an additional memory consideration selection rule using the ACO random proportional transition rule with a pheromone trail update mechanism. The proposed GHSACO algorithm has been applied to various benchmark functions and constrained optimisation problems. Simulation results demonstrate that it can find significantly better solutions than the original HS and some of its variants.
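
    For orientation, the basic Harmony Search improvisation loop that GHS and GHSACO extend looks roughly as follows (parameter names and defaults are conventional HS choices, not those of the paper):

```python
import numpy as np

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.05,
                   iters=2000, seed=0):
    """Minimal Harmony Search for minimising f over box bounds.

    Each new decision variable is drawn from the harmony memory with
    probability hmcr and then pitch-adjusted with probability par;
    otherwise it is drawn uniformly at random from its bounds.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    hm = rng.uniform(lo, hi, size=(hms, dim))       # harmony memory
    cost = np.apply_along_axis(f, 1, hm)
    for _ in range(iters):
        new = np.empty(dim)
        for d in range(dim):
            if rng.random() < hmcr:                 # memory consideration
                new[d] = hm[rng.integers(hms), d]
                if rng.random() < par:              # pitch adjustment
                    new[d] += bw * rng.uniform(-1, 1) * (hi[d] - lo[d])
            else:                                   # random selection
                new[d] = rng.uniform(lo[d], hi[d])
        new = np.clip(new, lo, hi)
        c = f(new)
        worst = cost.argmax()
        if c < cost[worst]:                         # replace worst harmony
            hm[worst], cost[worst] = new, c
    best = cost.argmin()
    return hm[best], cost[best]
```

    GHS replaces the pitch-adjustment step with a move toward the best harmony in memory, and GHSACO additionally lets a pheromone-weighted ACO rule drive the memory consideration step, as listed in points (i)-(iii) above.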

  4. Evaluation and optimisation of phenomenological multi-step soot model for spray combustion under diesel engine-like operating conditions

    NASA Astrophysics Data System (ADS)

    Pang, Kar Mun; Jangi, Mehdi; Bai, Xue-Song; Schramm, Jesper

    2015-05-01

    In this work, a two-dimensional computational fluid dynamics study of an n-heptane combustion event and the associated soot formation process in a constant-volume combustion chamber is reported. The key interest here is to evaluate the sensitivity of the chemical kinetics and the submodels of a semi-empirical soot model in predicting the associated events. Numerical computation is performed using an open-source code, and a chemistry coordinate mapping approach is used to expedite the calculation. A library consisting of various phenomenological multi-step soot models is constructed and integrated with the spray combustion solver. Prior to the soot modelling, combustion simulations are carried out. Numerical results show that the ignition delay times and lift-off lengths exhibit good agreement with the experimental measurements across a wide range of operating conditions, apart from the cases with ambient temperature lower than 850 K. The variation of soot precursor production with respect to the change of ambient oxygen levels qualitatively agrees with that of the conceptual models when the skeletal n-heptane mechanism is integrated with a reduced pyrene chemistry. Subsequently, a comprehensive sensitivity analysis is carried out to appraise the existing soot formation and oxidation submodels. It is revealed that the soot formation is captured when the surface growth rate is calculated using a square-root function of the soot specific surface area and when a pressure-dependent model constant is considered. An optimised soot model is then proposed based on the knowledge gained through this exercise. With the implementation of the optimised model, the simulated soot onset and transport phenomena before reaching quasi-steady state agree reasonably well with the experimental observations. The variations of the spatial soot distribution and of the soot mass produced at oxygen molar fractions ranging from 10.0 to 21.0%, for both low- and high-density conditions, are also reproduced.
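
    The surface-growth finding above amounts to a specific functional form for that submodel. A hedged one-line sketch (the rate constant, precursor concentration and units are placeholders, not the paper's calibrated model):

```python
import math

def surface_growth_rate(k_sg, precursor_conc, specific_surface_area):
    """Soot surface growth with square-root surface dependence.

    Illustrative form only: rate = k_sg * [precursor] * sqrt(S), where S
    is the soot specific surface area and k_sg would, per the study's
    finding, carry a pressure dependence in practice.
    """
    return k_sg * precursor_conc * math.sqrt(specific_surface_area)
```

    The square-root dependence damps growth on large soot surfaces relative to the linear-in-S form used in some other semi-empirical models.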

  5. The development of a tournament preparation framework for competitive golf: A Delphi study.

    PubMed

    Pilgrim, Jarred; Kremer, Peter; Robertson, Samuel

    2018-05-09

    Tournament preparation in golf is used by players to increase course knowledge, develop strategy, optimise playing conditions and facilitate self-regulation. It is not known whether specific behaviours in tournament preparation should be given priority in education and practice at different stages of competition. This study aimed to achieve consensus on the importance of specific tournament preparation behaviours or "items" to players of five competitive levels. A two-round Delphi study was used, including an expert panel of 36 coaches, high-performance staff, players and academics. Participants were asked to score the relative importance of 48 items to players using a 5-point Likert-type scale. For an item to achieve consensus, 67% agreement was required in two adjacent score categories. Consensus was reached for 46 items and these were used to develop a ranked framework for each competitive level. The developed framework provides consensus-based guidelines of the behaviours that are perceived as important in tournament preparation. This framework could be used by national sport organisations to guide the development of more comprehensive learning environments for players and coaches. It could also direct future studies examining the critical behaviours for golfers across different competitive levels.
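
    The consensus rule described above (at least 67% of panellists scoring an item in two adjacent categories of the 5-point scale) can be sketched as a small check; the function name and the example panels are illustrative, not from the study:

```python
from collections import Counter

def reaches_consensus(scores, threshold=0.67, categories=5):
    """Delphi criterion used in the study: an item reaches consensus when
    at least `threshold` of panellists place it in two adjacent
    Likert-type score categories."""
    counts = Counter(scores)
    n = len(scores)
    for low in range(1, categories):          # adjacent pairs (1,2) .. (4,5)
        if (counts[low] + counts[low + 1]) / n >= threshold:
            return True
    return False
```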

  6. Novel genetically optimised high-displacement piezoelectric actuator with efficient use of active material

    NASA Astrophysics Data System (ADS)

    Poikselkä, Katja; Leinonen, Mikko; Palosaari, Jaakko; Vallivaara, Ilari; Röning, Juha; Juuti, Jari

    2017-09-01

    This paper introduces a new type of piezoelectric actuator, Mikbal. The Mikbal was developed from a Cymbal by adding steel structures around the steel cap to increase displacement and reduce the amount of piezoelectric material used. Here, the parameters of the steel caps of Mikbal and Cymbal actuators were optimised using genetic algorithms in combination with Comsol Multiphysics FEM modelling software. The blocking force of the actuator was maximised for different values of displacement by optimising the height and the top diameter of the end cap profile, so that their effect on displacement, blocking force and stresses could be analysed. The optimisation process was carried out for five Mikbal- and two Cymbal-type actuators with diameters varying between 15 and 40 mm. A Mikbal with a Ø 25 mm piezoceramic disc and a Ø 40 mm steel end cap was fabricated, and the measured and modelled performances of the unclamped actuator were found to agree to within 2.8%. With a piezoelectric disc of Ø 25 mm, the Mikbal created 72% greater displacement, while its blocking force decreased by 57%, compared with a Cymbal with a disc of the same size. Even with a Ø 20 mm piezoelectric disc, the Mikbal was able to generate ∼10% higher displacement than a Ø 25 mm Cymbal. Thus, the introduced Mikbal structure presents a way to extend the displacement capabilities of a conventional Cymbal actuator for low-to-moderate force applications.

  7. Efficient and portable acceleration of quantum chemical many-body methods in mixed floating point precision using OpenACC compiler directives

    NASA Astrophysics Data System (ADS)

    Eriksen, Janus J.

    2017-09-01

    It is demonstrated how the non-proprietary OpenACC standard of compiler directives may be used to compactly and efficiently accelerate the rate-determining steps of two of the most routinely applied many-body methods of electronic structure theory, namely the second-order Møller-Plesset (MP2) model in its resolution-of-the-identity approximated form and the (T) triples correction to the coupled cluster singles and doubles model (CCSD(T)). By means of compute directives as well as the use of optimised device math libraries, the operations involved in the energy kernels have been ported to graphics processing unit (GPU) accelerators, and the associated data transfers correspondingly optimised to such a degree that the final implementations (using double and/or single precision arithmetic) are capable of scaling to systems as large as the capacity of the host central processing unit (CPU) main memory allows. The performance of the hybrid CPU/GPU implementations is assessed through calculations on test systems of alanine amino acid chains using one-electron basis sets of increasing size (ranging from double- to pentuple-ζ quality). For all but the smallest problem sizes of the present study, the optimised accelerated codes (using a single multi-core CPU host node in conjunction with six GPUs) are found to be capable of reducing the total time-to-solution by at least an order of magnitude over optimised, OpenMP-threaded CPU-only reference implementations.

  8. Optimisation of MSW collection routes for minimum fuel consumption using 3D GIS modelling.

    PubMed

    Tavares, G; Zsigraiova, Z; Semiao, V; Carvalho, M G

    2009-03-01

    Collection of municipal solid waste (MSW) may account for more than 70% of the total waste management budget, most of which is for fuel costs. It is therefore crucial to optimise the routing network used for waste collection and transportation. This paper proposes the use of geographical information systems (GIS) 3D route modelling software for waste collection and transportation, which adds one more degree of freedom to the system and allows driving routes to be optimised for minimum fuel consumption. The model takes into account the effects of road inclination and vehicle weight. It is applied to two different cases: routing waste collection vehicles in the city of Praia, the capital of Cape Verde, and routing the transport of waste from different municipalities of Santiago Island to an incineration plant. For the Praia city region, the 3D model that minimised fuel consumption yielded cost savings of 8% as compared with an approach that simply calculated the shortest 3D route. Remarkably, this was true despite the fact that the GIS-recommended fuel reduction route was actually 1.8% longer than the shortest possible travel distance. For the Santiago Island case, the difference was even more significant: a 12% fuel reduction for a similar total travel distance. These figures indicate the importance of considering both the relief of the terrain and fuel consumption in selecting a suitable cost function to optimise vehicle routing.
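
    The idea of optimising a route for fuel rather than distance can be sketched with a shortest-path search whose edge cost is a fuel model accounting for road grade and vehicle weight; the fuel-model constants below are illustrative assumptions, not the paper's calibrated values:

```python
import heapq
import math

def fuel_cost(distance_m, rise_m, mass_kg, base_l_per_m=4e-4, grade_factor=0.05):
    """Hypothetical fuel model: a flat-road term proportional to distance
    plus a climb term proportional to vehicle mass times elevation gain
    (no credit for descents)."""
    climb = max(rise_m, 0.0)
    return base_l_per_m * distance_m + grade_factor * (mass_kg / 1000.0) * climb

def cheapest_route(graph, source, target, mass_kg):
    """Dijkstra over a graph whose edges are (neighbour, distance_m, rise_m)
    tuples, minimising litres of fuel instead of metres travelled."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node == target:
            break
        if cost > dist.get(node, math.inf):
            continue                      # stale queue entry
        for nbr, d, rise in graph.get(node, []):
            new = cost + fuel_cost(d, rise, mass_kg)
            if new < dist.get(nbr, math.inf):
                dist[nbr] = new
                prev[nbr] = node
                heapq.heappush(heap, (new, nbr))
    # reconstruct the path back from the target
    path, node = [], target
    while node != source:
        path.append(node)
        node = prev[node]
    path.append(source)
    return list(reversed(path)), dist[target]
```

    With such a cost function, a route that is slightly longer but flatter can beat the geometrically shortest 3D route on fuel, which is exactly the effect reported for the Praia case.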

  9. Codon optimisation to improve expression of a Mycobacterium avium ssp. paratuberculosis-specific membrane-associated antigen by Lactobacillus salivarius.

    PubMed

    Johnston, Christopher; Douarre, Pierre E; Soulimane, Tewfik; Pletzer, Daniel; Weingart, Helge; MacSharry, John; Coffey, Aidan; Sleator, Roy D; O'Mahony, Jim

    2013-06-01

    Subunit and DNA-based vaccines against Mycobacterium avium ssp. paratuberculosis (MAP) attempt to overcome inherent issues associated with whole-cell formulations. However, these vaccines can be hampered by poor expression of recombinant antigens from a number of disparate hosts. The high G+C content of MAP invariably leads to a codon bias throughout gene expression. To investigate whether this codon bias affects recombinant MAP antigen expression, the open reading frame of the MAP-specific antigen MptD (MAP3733c) was codon optimised for expression in a Lactobacillus salivarius host. Of the 209 codons that constitute MAP3733c, 172 were modified, reducing the G+C content from 61% for the native gene to 32.7% for the modified form. Both genes were placed under the transcriptional control of the PnisA promoter, allowing controlled heterologous expression in L. salivarius. Expression was monitored using fluorescence microscopy and microplate fluorometry via GFP tags translationally fused to the C-termini of the two MptD genes. A >37-fold increase in expression was observed for the codon-optimised MAP3733synth variant over the native gene. Given the low cost and improved expression achieved, codon optimisation significantly improves the potential of L. salivarius as an oral vaccine strategy against Johne's disease. © 2013 Federation of European Microbiological Societies. Published by John Wiley & Sons Ltd. All rights reserved.
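
    The core bookkeeping of such an exercise, computing G+C content before and after back-translating each amino acid to a codon preferred by the low-G+C host, can be sketched as follows (the codon table is a toy subset with hypothetical preferences, not the L. salivarius usage table from the study):

```python
def gc_content(seq):
    """Fraction of G and C bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count('G') + seq.count('C')) / len(seq)

# Toy subset of a low-G+C codon preference table (hypothetical values,
# not actual Lactobacillus salivarius codon usage).
PREFERRED = {'K': 'AAA', 'N': 'AAT', 'I': 'ATT', 'L': 'TTA', 'F': 'TTT'}

def codon_optimise(protein):
    """Back-translate a protein using the host-preferred synonymous codons."""
    return ''.join(PREFERRED[aa] for aa in protein)
```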

  10. Shipboard Elevator Magnetic Sensor Development. Phase I, Laboratory Investigations.

    DTIC Science & Technology

    1981-08-19

    greater detail. The principles studied were those of the flux-meter and the flux-gate magnetometer. Of these two, the flux-gate magnetometer principle was... Abstract (Continued): Flux-gate magnetometers continuously sense the component of a stationary or slowly varying magnetic field along a chosen axis. The... distance of the sensor from the target's line of travel, while precisely indicating displacements along the line. The modes of detection include level

  11. All-dielectric metamaterial frequency selective surface

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Qu, Shaobo; Li, Liyang; Wang, Jiafu; Feng, Mingde; Ma, Hua; Du, Hongliang; Xu, Zhuo

    Frequency selective surface (FSS) has been extensively studied due to its potential applications in radomes, antenna reflectors, high-impedance surfaces and absorbers. Recently, a new principle of designing FSS has been proposed and studied mainly at two levels. At the level of materials, dielectric materials instead of metallic patterns are capable of achieving more functional performance in FSS design. Moreover, FSSs made of dielectric materials can be used in different extreme environments, depending on their electrical, thermal or mechanical properties. At the level of design principle, metamaterial theory can be used to design FSS in a convenient and concise way. In this review paper, we provide a brief summary of the recent progress in all-dielectric metamaterial frequency selective surfaces (ADM-FSSs). The basic principle of designing an ADM-FSS is summarized. Mie theory and dielectric resonator (DR) theory are presented as key tools, with clear illustrations of how they are used in FSS design. Then, several design cases, including dielectric particle-based and dielectric network-based ADM-FSSs, are introduced and reviewed. After a discussion of these two types of ADM-FSS, we review the existing fabrication techniques used to build experimental samples. Finally, issues and challenges regarding rapid fabrication techniques and further development aspects are discussed.

  12. [Cost analysis of patient blood management].

    PubMed

    Kleinerüschkamp, A G; Zacharowski, K; Ettwein, C; Müller, M M; Geisen, C; Weber, C F; Meybohm, P

    2016-06-01

    Patient blood management (PBM) is a multidisciplinary approach focusing on the diagnosis and treatment of preoperative anaemia, the minimisation of blood loss, and the optimisation of the patient-specific anaemia reserve to improve clinical outcomes. Economic aspects of PBM have not yet been sufficiently analysed. The aim of this study was to analyse the costs associated with the clinical principles of PBM and the project costs associated with the implementation of a PBM programme from an institutional perspective. Patient-related costs of materials and services were analysed at the University Hospital Frankfurt for 2013. Personnel costs of all major processes were quantified based on the time required to perform each step. Furthermore, general project costs of the implementation phase were determined. The direct costs of transfusing a single unit of red blood cells amount to a minimum of €147.43. PBM-associated costs varied depending on individual patient requirements. The following costs per patient were calculated: diagnosis of preoperative anaemia €48.69-123.88; treatment of preoperative anaemia (including iron-deficiency anaemia and megaloblastic anaemia) €12.61-127.99; minimising perioperative blood loss (including point-of-care diagnostics, coagulation management and cell salvage) €3.39-1,901.81; and costs associated with the optimisation of the tolerance to anaemia (including patient monitoring and volume therapy) €28.62. General project costs associated with the implementation of PBM were €24,998.24. PBM combines various alternatives to the transfusion of red blood cells and improves clinical outcome. Costs of PBM vary from institution to institution and depend on the extent to which different aspects of PBM have been implemented. The quantification of costs associated with PBM is essential in order to assess the economic impact of PBM and thereby efficiently re-allocate health care resources.
Costs were determined at a single university hospital. Thus, further analyses of both the costs of transfusion and the costs of PBM-principles will be necessary to evaluate the cost-effectiveness of PBM.

  13. On the design and optimisation of new fractal antenna using PSO

    NASA Astrophysics Data System (ADS)

    Rani, Shweta; Singh, A. P.

    2013-10-01

    An optimisation technique for a newly shaped fractal structure, using particle swarm optimisation with curve fitting, is presented in this article. The aim of the particle swarm optimisation is to find the geometry of the antenna for the required user-defined frequency. To assess the effectiveness of the presented method, a set of representative numerical simulations has been performed, and the results are compared with measurements from experimental prototypes built according to the design specifications produced by the optimisation procedure. The proposed fractal antenna resonates at the 5.8 GHz industrial, scientific and medical band, which is suitable for wireless telemedicine applications. The antenna characteristics have been studied using extensive numerical simulations and are experimentally verified. The antenna exhibits well-defined radiation patterns over the band.
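
    A minimal sketch of the particle swarm optimisation loop, applied to a toy inverse-design task of the same flavour: tune a characteristic length so that a hypothetical resonance model hits 5.8 GHz. The model f = 150/L is an assumption for illustration, not the article's fractal geometry:

```python
import random

def pso_minimise(objective, bounds, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=1):
    """Textbook single-variable particle swarm optimisation."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                              # personal bests
    pbest_val = [objective(x) for x in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g], pbest_val[g]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vel[i] = (w * vel[i] + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            pos[i] = min(max(pos[i] + vel[i], lo), hi)
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i], val
    return gbest, gbest_val

TARGET_GHZ = 5.8

def mismatch(length_mm):
    f_ghz = 150.0 / length_mm   # hypothetical resonance model, f = 150 / L
    return (f_ghz - TARGET_GHZ) ** 2
```

    In the article's workflow the objective would instead evaluate the simulated resonant frequency of a candidate fractal geometry.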

  14. In vitro synthesis of 9,10-dihydroxyhexadecanoic acid using recombinant Escherichia coli.

    PubMed

    Kaprakkaden, Anees; Srivastava, Preeti; Bisaria, Virendra Swarup

    2017-05-18

    Hydroxy fatty acids are widely used in the food, chemical and cosmetic industries. A variety of dihydroxy fatty acids have been synthesized so far; however, no studies have been done on the synthesis of 9,10-dihydroxyhexadecanoic acid. In the present study, recombinant E. coli was used for the heterologous expression of fatty acid hydroxylating enzymes, and the whole cell lysate of the induced culture was used for in vitro production of 9,10-dihydroxyhexadecanoic acid. A first-of-its-kind proof of principle has been successfully demonstrated for the production of 9,10-dihydroxyhexadecanoic acid using three different enzymes, viz. fatty acid desaturase (FAD) from Saccharomyces cerevisiae, epoxide hydrolase (EH) from Caenorhabditis elegans and epoxygenase (EPOX) from Stokesia laevis. The genes for these proteins were codon-optimised, synthesised and cloned into the pET-28a(+) vector. The culture conditions for induction of these three proteins in E. coli were optimised in shake flasks. The induced cell lysates were used both singly and in combination, along with the trans-supply of hexadecanoic acid and 9-hexadecenoic acid, followed by product profiling by GC-MS. Formation of 9,10-dihydroxyhexadecanoic acid was successfully achieved when a combination of induced cell lysates of recombinant E. coli containing FAD, EH and EPOX was incubated with 9-hexadecenoic acid. The in vitro synthesis of 9,10-dihydroxyhexadecanoic acid using three fatty acid modification genes from different sources has thus been successfully demonstrated. The strategy adopted can be used for the production of similar compounds.

  15. Determination of quaternary ammonium compounds in seawater samples by solid-phase extraction and liquid chromatography-mass spectrometry.

    PubMed

    Bassarab, P; Williams, D; Dean, J R; Ludkin, E; Perry, J J

    2011-02-04

    A method for the simultaneous determination of two biocidal quaternary ammonium compounds, didecyldimethylammonium chloride (didecyldimethyl quat) and dodecylbenzyldimethylammonium chloride (benzyl quat), in seawater by solid phase extraction (SPE) followed by liquid chromatography-mass spectrometry (LC-MS) was developed. The optimised procedure utilised off-line extraction of the analytes from seawater using polymeric (Strata-X) SPE cartridges. Recoveries ranged from 80 to 105%, with detection limits at the low parts-per-trillion (ng/l) level for both analytes. To demonstrate sensitivity, environmental concentrations were measured at three different locations along the North East coast of England, with measured values in the range 120-270 ng/l. Copyright © 2010 Elsevier B.V. All rights reserved.

  16. Using heuristic algorithms for capacity leasing and task allocation issues in telecommunication networks under fuzzy quality of service constraints

    NASA Astrophysics Data System (ADS)

    Huseyin Turan, Hasan; Kasap, Nihat; Savran, Huseyin

    2014-03-01

    Nowadays, every firm uses telecommunication networks in different amounts and ways in order to complete its daily operations. In this article, we investigate an optimisation problem that a firm faces when acquiring network capacity from a market in which several network providers offer different pricing and quality of service (QoS) schemes. The QoS level guaranteed by network providers and the minimum quality level of service needed to accomplish the operations are denoted as fuzzy numbers, in order to handle the non-deterministic nature of the telecommunication network environment. Interestingly, the mathematical formulation of the aforementioned problem leads to a special case of the well-known two-dimensional bin packing problem, which is notorious for its computational complexity. We propose two different heuristic solution procedures that are capable of solving the resulting nonlinear mixed integer programming model with fuzzy constraints. Finally, the efficiency of each algorithm is tested on several test instances to demonstrate the applicability of the methodology.
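
    The combinatorial core, two-dimensional bin packing, can be illustrated with a classic shelf heuristic. This sketch packs plain rectangles and ignores the fuzzy QoS constraints that the article's heuristics additionally handle:

```python
def shelf_pack(rects, bin_w, bin_h):
    """Next-fit shelf heuristic for 2D bin packing: sort items by height,
    fill each shelf left to right, open a new shelf when the width runs
    out, and a new bin when the height runs out. Returns the number of
    bins used (items are assumed to fit in an empty bin individually)."""
    items = sorted(rects, key=lambda r: r[1], reverse=True)  # (w, h) by height
    bins = 1
    shelf_y = 0.0      # bottom edge of the current shelf
    shelf_h = 0.0      # height of the current shelf
    x = 0.0            # horizontal cursor on the current shelf
    for w, h in items:
        if x + w > bin_w:              # shelf full: open a new shelf
            shelf_y += shelf_h
            shelf_h = 0.0
            x = 0.0
        if shelf_y + h > bin_h:        # bin full: open a new bin
            bins += 1
            shelf_y = 0.0
            shelf_h = 0.0
            x = 0.0
        shelf_h = max(shelf_h, h)
        x += w
    return bins
```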

  17. Application of face centred central composite design to optimise compression force and tablet diameter for the formulation of mechanically strong and fast disintegrating orodispersible tablets.

    PubMed

    Pabari, Ritesh M; Ramtoola, Zebunnissa

    2012-07-01

    A two-factor, three-level (3²) face-centred central composite design (CCD) was applied to investigate the main and interaction effects of tablet diameter and compression force (CF) on the hardness, disintegration time (DT) and porosity of mannitol-based orodispersible tablets (ODTs). Tablet diameters of 10, 13 and 15 mm, and CFs of 10, 15 and 20 kN were studied. Results of multiple linear regression analysis show that both the tablet diameter and the CF influence tablet characteristics. A negative regression coefficient for tablet diameter showed an inverse relationship with hardness and DT. A positive regression coefficient for CF indicated an increase in hardness and DT with increasing CF, as a result of the decrease in tablet porosity. Interestingly, at the larger tablet diameter of 15 mm, while hardness increased and porosity decreased with an increase in CF, the DT was resistant to change. The optimised combination was a tablet of 15 mm diameter compressed at 15 kN, showing a rapid DT of 37.7 s and a high hardness of 71.4 N. Using these parameters, ODTs containing ibuprofen showed no significant change in DT (ANOVA; p>0.05) irrespective of the hydrophobicity of the ibuprofen. Copyright © 2012 Elsevier B.V. All rights reserved.
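
    In coded units, a two-factor face-centred CCD reduces to nine distinct design points (the full 3² grid). A sketch of the point generation for k factors, using the conventional -1/0/+1 coding:

```python
from itertools import product

def face_centred_ccd(k):
    """Coded design points for a face-centred central composite design in
    k factors: 2^k factorial corners, 2k axial (face-centre) points at
    +/-1 on one axis, and a centre point."""
    corners = list(product((-1, 1), repeat=k))
    axial = []
    for i in range(k):
        for s in (-1, 1):
            pt = [0] * k
            pt[i] = s
            axial.append(tuple(pt))
    return corners + axial + [(0,) * k]
```

    For the study above, the three coded levels of each factor map to diameters of 10, 13 and 15 mm and compression forces of 10, 15 and 20 kN.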

  18. Synergistic Effect of Hydrotrope and Surfactant on Solubility and Dissolution of Atorvastatin Calcium: Screening Factorial Design Followed by Ratio Optimization

    PubMed Central

    Patel, V. F.; Sarai, J.

    2014-01-01

    The present study was aimed at investigating the effect of hydrotrope and surfactant on the poor solubility of atorvastatin calcium. Excipient screening followed by a factorial design was performed to study the effect of excipients and manufacturing methods on the solubility of the drug. Three independent factors (carrier, surfactant and manufacturing method) were evaluated at two levels, using solubility as the dependent variable. Solid-state characterisation was performed using Fourier transform infrared spectroscopy and differential scanning calorimetry. The optimised complex was incorporated into orally disintegrating micro tablets and an in vitro dissolution test was performed. Nicotinamide, Plasdone and sodium dodecyl sulphate emerged as promising excipients from the excipient screening. General regression analysis revealed that only the type of carrier significantly enhanced (P<0.05) the solubility of the drug, while the other factors were found to be nonsignificant. A ratio optimisation trial revealed that the drug-to-nicotinamide ratio is more critical in enhancing the solubility of the drug (a 40-fold increase in solubility compared with the pure drug) than the drug-to-surfactant ratio; however, the presence of surfactant was deemed essential. A significantly higher rate and extent of dissolution was observed from the solid dispersion complex and tablets compared with the dissolution of the pure drug (P<0.05). The study revealed that hydrotrope and surfactant have a synergistic effect on the solubility and dissolution of atorvastatin calcium, and this can be explored further. PMID:25593381

  19. Technology Combination Analysis Tool (TCAT) for Active Debris Removal

    NASA Astrophysics Data System (ADS)

    Chamot, B.; Richard, M.; Salmon, T.; Pisseloup, A.; Cougnet, C.; Axthelm, R.; Saunder, C.; Dupont, C.; Lequette, L.

    2013-08-01

    This paper presents the work of the Swiss Space Center EPFL within the CNES-funded OTV-2 study. In order to find the most performant Active Debris Removal (ADR) mission architectures and technologies, a tool was developed to design and compare ADR spacecraft and to plan ADR campaigns to remove large debris. Two types of architectures are considered to be efficient: the Chaser (a single-debris spacecraft) and the Mothership/Kits (a multiple-debris spacecraft). Both are able to perform controlled re-entry. The tool includes modules to optimise the launch dates and the order of capture, to design missions and spacecraft, and to select launch vehicles. The propulsion, power and structure subsystems are sized by the tool using high-level parametric models, while the other subsystems are defined by their mass and power consumption. Final results are still under investigation by the consortium, but two concrete examples of the tool's outputs are presented in the paper.

  20. Optimisation of techniques for quantification of Botrytis cinerea in grape berries and receptacles by quantitative polymerase chain reaction

    USDA-ARS?s Scientific Manuscript database

    Quantitative PCR (qPCR) can be used to detect and monitor pathogen colonization, but early attempts to apply the technology to Botrytis cinerea infection of grape berries have identified limitations to current techniques. In this study, four DNA extraction methods, two grinding methods, two grape or...

  1. Comparing Student Learning Experiences of In-Text Commentary and Rubric-Articulated Feedback: Strategies for Formative Assessment

    ERIC Educational Resources Information Center

    Nordrum, Lene; Evans, Katherine; Gustafsson, Magnus

    2013-01-01

    This study compares students' experiences of two types of criteria-based assessment: in-text commentary and rubric-articulated feedback, in an assessment design combining the two feedback channels. The main aim is to use students' responses to shed light on how feedback strategies for formative assessment can be optimised. Following action…

  2. Effectiveness of an implementation optimisation intervention aimed at increasing parent engagement in HENRY, a childhood obesity prevention programme - the Optimising Family Engagement in HENRY (OFTEN) trial: study protocol for a randomised controlled trial.

    PubMed

    Bryant, Maria; Burton, Wendy; Cundill, Bonnie; Farrin, Amanda J; Nixon, Jane; Stevens, June; Roberts, Kim; Foy, Robbie; Rutter, Harry; Hartley, Suzanne; Tubeuf, Sandy; Collinson, Michelle; Brown, Julia

    2017-01-24

    Family-based interventions to prevent childhood obesity depend upon parents' taking action to improve diet and other lifestyle behaviours in their families. Programmes that attract and retain high numbers of parents provide an enhanced opportunity to improve public health and are also likely to be more cost-effective than those that do not. We have developed a theory-informed optimisation intervention to promote parent engagement within an existing childhood obesity prevention group programme, HENRY (Health Exercise Nutrition for the Really Young). Here, we describe a proposal to evaluate the effectiveness of this optimisation intervention in regard to the engagement of parents and cost-effectiveness. The Optimising Family Engagement in HENRY (OFTEN) trial is a cluster randomised controlled trial being conducted across 24 local authorities (approximately 144 children's centres) which currently deliver HENRY programmes. The primary outcome will be parental enrolment and attendance at the HENRY programme, assessed using routinely collected process data. Cost-effectiveness will be presented in terms of primary outcomes using acceptability curves and through eliciting the willingness to pay for the optimisation from HENRY commissioners. Secondary outcomes include the longitudinal impact of the optimisation, parent-reported infant intake of fruits and vegetables (as a proxy to compliance) and other parent-reported family habits and lifestyle. This innovative trial will provide evidence on the implementation of a theory-informed optimisation intervention to promote parent engagement in HENRY, a community-based childhood obesity prevention programme. The findings will be generalisable to other interventions delivered to parents in other community-based environments. This research meets the expressed needs of commissioners, children's centres and parents to optimise the potential impact that HENRY has on obesity prevention. 
A subsequent cluster randomised controlled pilot trial is planned to determine the practicality of undertaking a definitive trial to robustly evaluate the effectiveness and cost-effectiveness of the optimised intervention on childhood obesity prevention. ClinicalTrials.gov identifier: NCT02675699 . Registered on 4 February 2016.

  3. Comparison of two kinds of interface, based on guided navigation or usability principles, for improving the adoption of computerized decision support systems: application to the prescription of antibiotics.

    PubMed

    Tsopra, Rosy; Jais, Jean-Philippe; Venot, Alain; Duclos, Catherine

    2014-02-01

    It is important to consider the way in which information is presented by the interfaces of clinical decision support systems, to favor the adoption of these systems by physicians. Interface design can focus on decision processes (guided navigation) or on usability principles. The aim of this study was to compare these two approaches in terms of perceived usability, accuracy rate, and confidence in the system. We displayed clinical practice guidelines for antibiotic treatment via two types of interface, which we compared in a crossover design. General practitioners were asked to provide responses for 10 clinical cases and to complete the System Usability Scale (SUS) for each interface. We assessed SUS scores, the number of correct responses, and the confidence level for each interface. The SUS score and percentage confidence were significantly higher for the interface designed according to usability principles (81 vs 51, p=0.00004, and 88.8% vs 80.7%, p=0.004). The percentage of correct responses was similar for the two interfaces. The interface designed according to usability principles was perceived to be more usable and inspired greater confidence among physicians than the guided navigation interface. Consideration of usability principles in the construction of an interface, in particular 'effective information presentation', 'consistency', 'efficient interactions', 'effective use of language', and 'minimizing cognitive load', seemed to improve perceived usability and confidence in the system.

  4. Integration of Monte-Carlo ray tracing with a stochastic optimisation method: application to the design of solar receiver geometry.

    PubMed

    Asselineau, Charles-Alexis; Zapata, Jose; Pye, John

    2015-06-01

    A stochastic optimisation method adapted to illumination and radiative heat transfer problems involving Monte-Carlo ray-tracing is presented. A solar receiver shape optimisation case study illustrates the advantages of the method and its potential: efficient receivers are identified using a moderate computational cost.

  5. A Case Study of Controlling Crossover in a Selection Hyper-heuristic Framework Using the Multidimensional Knapsack Problem.

    PubMed

    Drake, John H; Özcan, Ender; Burke, Edmund K

    2016-01-01

    Hyper-heuristics are high-level methodologies for solving complex problems that operate on a search space of heuristics. In a selection hyper-heuristic framework, a heuristic is chosen from an existing set of low-level heuristics and applied to the current solution to produce a new solution at each point in the search. The use of crossover low-level heuristics is possible in an increasing number of general-purpose hyper-heuristic tools such as HyFlex and Hyperion. However, little work has been undertaken to assess how best to utilise it. Since a single-point search hyper-heuristic operates on a single candidate solution, and two candidate solutions are required for crossover, a mechanism is required to control the choice of the other solution. The frameworks we propose maintain a list of potential solutions for use in crossover. We investigate the use of such lists at two conceptual levels. First, crossover is controlled at the hyper-heuristic level where no problem-specific information is required. Second, it is controlled at the problem domain level where problem-specific information is used to produce good-quality solutions to use in crossover. A number of selection hyper-heuristics are compared using these frameworks over three benchmark libraries with varying properties for an NP-hard optimisation problem: the multidimensional 0-1 knapsack problem. It is shown that allowing crossover to be managed at the domain level outperforms managing crossover at the hyper-heuristic level in this problem domain.
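
    The crossover-management idea, keeping a small memory of good solutions from which the second parent is drawn, can be sketched for a single-constraint 0-1 knapsack. The article targets the multidimensional variant within HyFlex/Hyperion-style frameworks; everything below is a simplified illustration with hypothetical parameter choices:

```python
import random

def hyper_heuristic_knapsack(values, weights, capacity,
                             iters=2000, mem_size=5, seed=0):
    """Single-point selection hyper-heuristic sketch: two low-level
    heuristics (bit-flip mutation and uniform crossover) are selected at
    random, with crossover partners drawn from a memory of the best
    solutions seen so far. Infeasible candidates are greedily repaired."""
    rng = random.Random(seed)
    n = len(values)

    def repair(sol):
        # Drop worst value/weight-ratio items until the solution is feasible.
        order = sorted(range(n), key=lambda i: values[i] / weights[i])
        w = sum(weights[i] for i in range(n) if sol[i])
        for i in order:
            if w <= capacity:
                break
            if sol[i]:
                sol[i] = 0
                w -= weights[i]
        return sol

    def value(sol):
        return sum(values[i] for i in range(n) if sol[i])

    current = repair([rng.randint(0, 1) for _ in range(n)])
    memory = [current[:]]                    # candidate partners for crossover
    for _ in range(iters):
        if rng.random() < 0.5:               # low-level heuristic 1: bit flip
            cand = current[:]
            cand[rng.randrange(n)] ^= 1
        else:                                # low-level heuristic 2: crossover
            partner = rng.choice(memory)
            cand = [a if rng.random() < 0.5 else b
                    for a, b in zip(current, partner)]
        cand = repair(cand)
        if value(cand) >= value(current):    # accept non-worsening moves
            current = cand
            memory.append(current[:])
            memory = sorted(memory, key=value, reverse=True)[:mem_size]
    return current, value(current)
```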

  6. Reduction of verotoxigenic Escherichia coli by process and recipe optimisation in dry-fermented sausages.

    PubMed

    Heir, E; Holck, A L; Omer, M K; Alvseike, O; Høy, M; Måge, I; Axelsson, L

    2010-07-15

    Outbreaks of verotoxigenic Escherichia coli (VTEC) linked to dry-fermented sausages (DFSs) have emphasized the need for DFS manufacturers to introduce measures to obtain enhanced safety and still maintain the sensory qualities of their products. To our knowledge no data have yet been reported on non-O157:H7 VTEC survival in DFS. Here, the importance of recipe and process variables on VTEC (O157:H7 and O103:H25) reductions in two types of DFS, morr and salami, was determined through three statistically designed experiments. Linear regression and ANOVA analyses showed that no single variable had a dominant effect on VTEC reductions. High levels of NaCl, NaNO(2), glucose (low pH) and fermentation temperature gave enhanced VTEC reduction, while high fat and large casing diameter (a(w)) gave the opposite effect. Interaction effects were small. The process and recipe variables showed similar effects in morr and salami. In general, recipes combining high batter levels of salt (NaCl and NaNO(2)) and glucose along with high fermentation temperature that gave DFS with low final pH and a(w), provided approximately 3 log(10) reductions compared to approximately 1.5 log(10) reductions obtained for standard recipe DFS. Storage at 4 degrees C for 2 months provided log(10) 0.33-0.95 additional VTEC reductions and were only marginally affected by recipe type. Sensory tests revealed only small differences between the various recipes of morr and salami. By optimisation of recipe and process parameters, it is possible to obtain increased microbial safety of DFS while maintaining the sensory qualities of the sausages. 2010 Elsevier B.V. All rights reserved.

  7. A high performance data parallel tensor contraction framework: Application to coupled electro-mechanics

    NASA Astrophysics Data System (ADS)

    Poya, Roman; Gil, Antonio J.; Ortigosa, Rogelio

    2017-07-01

    The paper presents aspects of the implementation of a new high performance tensor contraction framework for the numerical analysis of coupled and multi-physics problems on streaming architectures. In addition to explicit SIMD instructions and smart expression templates, the framework introduces domain specific constructs for the tensor cross product and its associated algebra, recently rediscovered by Bonet et al. (2015, 2016) in the context of solid mechanics. The two key ingredients of the presented expression template engine are, first, the capability to mathematically transform complex chains of operations into simpler equivalent expressions, while potentially avoiding routes with higher levels of computational complexity, and, second, the ability to perform a compile-time depth-first or breadth-first search to find the optimal contraction indices of a large tensor network in order to minimise the number of floating point operations. For tensor-contraction optimisations such as loop transformation, loop fusion and data locality improvements, the framework relies heavily on compile-time technologies rather than source-to-source translation or JIT techniques. Every aspect of the framework is examined through relevant performance benchmarks, including the impact of data parallelism on the performance of isomorphic and non-isomorphic tensor products, the FLOP and memory I/O optimality in the evaluation of tensor networks, the compilation cost and memory footprint of the framework, and the performance of tensor cross product kernels. The framework is then applied to finite element analysis of coupled electro-mechanical problems to assess the speed-ups achieved in kernel-based numerical integration of complex electroelastic energy functionals. In this context, domain-aware expression templates combined with SIMD instructions are shown to provide a significant speed-up over classical low-level programming techniques.
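
    The compile-time search for optimal contraction indices can be illustrated by its classic special case, the matrix-chain ordering problem. A minimal dynamic-programming sketch (in plain Python rather than the framework's C++ template metaprogramming, with invented dimensions):

    ```python
    from functools import lru_cache

    def optimal_contraction_cost(dims):
        """Minimum multiply count for contracting a chain of matrices.

        Matrix i has shape dims[i] x dims[i+1]; this is the classic
        matrix-chain problem, a simplified stand-in for searching the
        optimal contraction order of a general tensor network.
        """
        @lru_cache(maxsize=None)
        def cost(i, j):
            if j - i == 1:            # a single matrix needs no work
                return 0
            return min(
                cost(i, k) + cost(k, j) + dims[i] * dims[k] * dims[j]
                for k in range(i + 1, j)
            )
        return cost(0, len(dims) - 1)

    # 10x30 * 30x5 * 5x60: contracting the first pair first costs
    # 10*30*5 + 10*5*60 = 4500 multiplies, versus 27000 the other way round.
    print(optimal_contraction_cost([10, 30, 5, 60]))  # → 4500
    ```

    The framework described above performs this kind of search at compile time, so the FLOP-optimal order carries no runtime overhead.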

  8. Optimisation Modelling to Assess Cost of Dietary Improvement in Remote Aboriginal Australia

    PubMed Central

    Brimblecombe, Julie; Ferguson, Megan; Liberato, Selma C.; O'Dea, Kerin; Riley, Malcolm

    2013-01-01

    Background The cost and dietary choices required to fulfil nationally defined nutrient recommendations need investigation, particularly for disadvantaged populations. Objective We used optimisation modelling to examine the dietary change required to achieve nutrient requirements at minimum cost for an Aboriginal population in remote Australia, using minimally processed whole foods where possible. Design A twelve-month cross-section of population-level purchased food, food price and nutrient content data was used as the baseline. Relative amounts from 34 food group categories were varied to achieve specific energy and nutrient density goals at minimum cost, while meeting model constraints intended to minimise deviation from the purchased diet. Results Simultaneous achievement of all nutrient goals was not feasible. The two most successful models (A & B) met all nutrient targets except sodium (146.2% and 148.9% of the respective target) and saturated fat (12.0% and 11.7% of energy). Model A was achieved at 3.2% lower cost than the baseline diet (which cost approximately AUD$13.01/person/day) and Model B at 7.8% lower cost, but with a reduction in energy of 4.4%. Both models required very large reductions in sugar-sweetened beverages (−90%) and refined cereals (−90%) and an approximately four-fold increase in vegetables, fruit, dairy foods, eggs, fish and seafood, and wholegrain cereals. Conclusion This modelling approach suggested population-level dietary recommendations at minimal cost based on the baseline purchased diet. Large shifts in diet in remote Aboriginal Australian populations are needed to achieve national nutrient targets. The modelling approach used was not able to meet all nutrient targets at less than current food expenditure. PMID:24391790
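
    The optimisation described above varies food-group amounts to meet nutrient targets at minimum cost. A toy version of that idea, with three invented food groups and two invented nutrient targets (the study itself used 34 food groups, real price data and many more constraints):

    ```python
    from itertools import product

    # Invented toy data: cost per unit and nutrient content per unit.
    FOODS = {              # (cost, protein, fibre)
        "cereal":    (1.0, 3.0, 4.0),
        "vegetable": (2.0, 2.0, 6.0),
        "dairy":     (2.5, 6.0, 0.0),
    }
    TARGETS = {"protein": 20.0, "fibre": 25.0}  # minimum daily amounts

    def cheapest_diet(max_units=8):
        """Exhaustively search unit amounts of each food meeting all targets.

        Real diet models solve this as a linear programme; brute force keeps
        the sketch dependency-free.
        """
        best = None
        for amounts in product(range(max_units + 1), repeat=len(FOODS)):
            cost = protein = fibre = 0.0
            for qty, (c, p, f) in zip(amounts, FOODS.values()):
                cost += qty * c
                protein += qty * p
                fibre += qty * f
            if protein >= TARGETS["protein"] and fibre >= TARGETS["fibre"]:
                if best is None or cost < best[0]:
                    best = (cost, dict(zip(FOODS, amounts)))
        return best

    cost, amounts = cheapest_diet()
    print(cost, amounts)  # → 7.0 {'cereal': 7, 'vegetable': 0, 'dairy': 0}
    ```

    Infeasibility of some targets, as the study found for sodium, shows up here as `cheapest_diet` returning `None` when no combination satisfies every constraint.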

  9. Status of radiation protection in various interventional cardiology procedures in the Asia Pacific region

    PubMed Central

    Tsapaki, Virginia; Faruque Ghulam, Mohammed; Lim, Soo Teik; Ngo Minh, Hung; Nwe, Nwe; Sharma, Anil; Sim, Kui-Hian; Srimahachota, Suphot; Rehani, Madan Mohan

    2011-01-01

    Objective The increasing use of interventional procedures in cardiology, with unknown levels of radiation protection in many countries of the Asia-Pacific region, makes a status assessment necessary. The study was part of an International Atomic Energy Agency (IAEA) project for achieving improved radiation protection in interventional cardiology (IC) in developing countries. Design The survey covers 18 cardiac catheterisation laboratories in seven countries (Bangladesh, India, Malaysia, Myanmar, Singapore, Thailand and Vietnam). An important step was the creation of the ‘Asian network of Cardiologists in Radiation Protection’ and a newsletter. Data were collected on radiation protection tools, the number of IC laboratories, and the annual number of various paediatric and adult IC procedures in each hospital and country. Patient radiation dose data were collected in terms of kerma-area product (KAP) and cumulative dose (CD). Results It is encouraging that protection devices for staff are largely used in routine practice. Only 39% of the angiographic machines were equipped with a KAP meter. Operators' initially low awareness of radiation-protection optimisation improved significantly after participation in IAEA radiation-protection training. Only two out of five countries reporting patient percutaneous coronary intervention radiation-dose data were fully within the international guidance levels. Data from 51 patients who underwent multiple therapeutic procedures (median 2–3) indicated a total KAP reaching 995 Gy·cm² (range 10.1–995) and CD 15.1 Gy (range 0.4–15.1), stressing the importance of dose monitoring and optimisation. Conclusions There is a need for interventional cardiology societies to play an active role in training actions and the implementation of radiation protection. PMID:27325974

  10. Energy efficiency in membrane bioreactors.

    PubMed

    Barillon, B; Martin Ruel, S; Langlais, C; Lazarova, V

    2013-01-01

    Energy consumption remains the key factor for the optimisation of the performance of membrane bioreactors (MBRs). This paper presents the results of the detailed energy audits of six full-scale MBRs operated by Suez Environnement in France, Spain and the USA based on on-site energy measurement and analysis of plant operation parameters and treatment performance. Specific energy consumption is compared for two different MBR configurations (flat sheet and hollow fibre membranes) and for plants with different design, loads and operation parameters. The aim of this project was to understand how the energy is consumed in MBR facilities and under which operating conditions, in order to finally provide guidelines and recommended practices for optimisation of MBR operation and design to reduce energy consumption and environmental impacts.

  11. Optimisation of industrial wastes reuse as construction materials.

    PubMed

    Collivignarelli, C; Sorlini, S

    2001-12-01

    This study concerns the reuse of two inorganic wastes, foundry residues and fly ashes from municipal solid waste incineration, as "recycled aggregate" in concrete production. This kind of reuse was optimised by waste treatment with the following steps: waste washing with water; waste stabilisation-solidification treatment with inorganic reagents; final grinding of the stabilised waste after curing for about 10-20 days. Both the treated wastes were reused in concrete production with different mix-designs. Concrete specimens were characterised by means of conventional physical-mechanical tests (compression, elasticity modulus, shrinkage) and different leaching tests. Experimental results showed that a good structural and environmental quality of "recycled concrete" is due both to a correct waste treatment and to a correct mix-design for concrete mixture.

  12. Contextual Considerations for Teaching Online at the Postsecondary Level.

    ERIC Educational Resources Information Center

    Sawyer, Salley; Moore, Joi

    Designing an online class guided by the principles of learner-centered instruction at the postsecondary level is a daunting challenge for new professors facing the demands of a tenure track position. This paper presents the findings of a pilot project on two junior faculty members' experiences with online course development and teaching in a…

  13. Bully Proofing: What One District Learned about Improving School Climate.

    ERIC Educational Resources Information Center

    Berkey, Leonard G.; Keyes, Barbara J.; Longhurst, James E.

    2001-01-01

    Collaborating with school officials, a group of concerned parents implemented a bully-proofing program throughout their school district. After two years, the results are encouraging; it was received with enthusiasm at the elementary level. Further study will be needed to determine how these principles might be applied at the secondary level. (MKA)

  14. Ultrasound Imaging Velocimetry: a review

    NASA Astrophysics Data System (ADS)

    Poelma, Christian

    2017-01-01

    Whole-field velocity measurement techniques based on ultrasound imaging (a.k.a. `ultrasound imaging velocimetry' or `echo-PIV') have received significant attention from the fluid mechanics community in the last decade, in particular because of their ability to obtain velocity fields in flows that elude characterisation by conventional optical methods. In this review, an overview is given of the history, typical components and challenges of these techniques. The basic principles of ultrasound image formation are summarised, as well as various techniques to estimate flow velocities; the emphasis is on correlation-based techniques. Examples are given for a wide range of applications, including in vivo cardiovascular flow measurements, the characterisation of sediment transport and the characterisation of complex non-Newtonian fluids. To conclude, future opportunities are identified. These encompass not just optimisation of the accuracy and dynamic range, but also extension to other application areas.
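
    The correlation-based velocity estimation the review emphasises reduces, at its core, to locating the peak of a cross-correlation between successive image windows. A deliberately simplified 1-D sketch (real echo-PIV uses 2-D interrogation windows, FFT-based correlation and sub-pixel peak fitting):

    ```python
    def cross_correlation_shift(frame_a, frame_b, max_shift=10):
        """Estimate the displacement between two 1-D intensity profiles by
        locating the peak of their direct cross-correlation."""
        best_shift, best_score = 0, float("-inf")
        n = len(frame_a)
        for shift in range(-max_shift, max_shift + 1):
            # Correlate overlapping samples only.
            score = sum(
                frame_a[i] * frame_b[i + shift]
                for i in range(n)
                if 0 <= i + shift < n
            )
            if score > best_score:
                best_shift, best_score = shift, score
        return best_shift

    # A speckle-like pattern displaced by 3 samples between the two "frames".
    a = [0, 1, 4, 2, 7, 3, 0, 5, 1, 0, 2, 6, 0, 1, 0, 3]
    b = [0, 0, 0] + a[:-3]   # frame_b is frame_a shifted right by 3
    print(cross_correlation_shift(a, b))  # → 3
    ```

    Dividing the recovered displacement by the inter-frame time gives the local velocity; doing this for every interrogation window yields the whole-field velocity map.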

  15. Management of physical health in patients with schizophrenia: practical recommendations.

    PubMed

    Heald, A; Montejo, A L; Millar, H; De Hert, M; McCrae, J; Correll, C U

    2010-06-01

    Improved physical health care is a pressing need for patients with schizophrenia. It can be achieved by means of a multidisciplinary team led by the psychiatrist. Key priorities should include: selection of antipsychotic therapy with a low risk of weight gain and metabolic adverse effects; routine assessment, recording and longitudinal tracking of key physical health parameters, ideally by electronic spreadsheets; and intervention to control CVD risk following the same principles as for the general population. A few simple tools to assess and record key physical parameters, combined with lifestyle intervention and pharmacological treatment as indicated, could significantly improve physical outcomes. Effective implementation of strategies to optimise physical health parameters in patients with severe enduring mental illness requires engagement and communication between psychiatrists and primary care in most health settings. Copyright (c) 2010 Elsevier Masson SAS. All rights reserved.

  16. Optimisation of SOA-REAMs for hybrid DWDM-TDMA PON applications.

    PubMed

    Naughton, Alan; Antony, Cleitus; Ossieur, Peter; Porto, Stefano; Talli, Giuseppe; Townsend, Paul D

    2011-12-12

    We demonstrate how loss-optimised, gain-saturated SOA-REAM based reflective modulators can reduce the burst to burst power variations due to differential access loss in the upstream path in carrier distributed passive optical networks by 18 dB compared to fixed linear gain modulators. We also show that the loss optimised device has a high tolerance to input power variations and can operate in deep saturation with minimal patterning penalties. Finally, we demonstrate that an optimised device can operate across the C-Band and also over a transmission distance of 80 km. © 2011 Optical Society of America

  17. Satellite Vibration Testing: Angle optimisation method to Reduce Overtesting

    NASA Astrophysics Data System (ADS)

    Knight, Charly; Remedia, Marcello; Aglietti, Guglielmo S.; Richardson, Guy

    2018-06-01

    Spacecraft overtesting is a long-running problem, and most attempts to reduce it have focused on adjusting the base vibration input (i.e. notching). Instead, this paper examines testing alternatives for secondary structures (equipment) coupled to the main structure (satellite) when they are tested separately. Even if the vibration source is applied along one of the orthogonal axes at the base of the coupled system (satellite plus equipment), the dynamics of the system, and potentially the interface configuration, mean the vibration at the interface may not occur along a single axis, much less the corresponding orthogonal axis of the base excitation. This paper proposes an alternative testing methodology in which a piece of equipment is tested at an offset angle. The Angle Optimisation method may involve multiple tests, each with an altered input direction, allowing the best match between all specified equipment responses and those of the coupled-system tests. An optimisation process compares the calculated equipment RMS values for a range of inputs with the maximum coupled-system RMS values, and is used to find the optimal testing configuration for the given parameters. A case study was performed to find the best testing angles to match the acceleration responses of the centre of mass and the sum of interface forces for all three axes, as well as the von Mises stress for an element near a fastening point. The Angle Optimisation method resulted in RMS values and PSD responses that were much closer to those of the coupled system than traditional testing, and the optimum testing configuration gave an overall average error significantly smaller than the traditional method. Crucially, this case study shows that the optimum test campaign could be a single equipment-level test, as opposed to the traditional three orthogonal-direction tests.
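
    The angle-optimisation idea, choosing a single input direction whose predicted responses best match the coupled-system targets, can be sketched as a grid search. Everything numeric here is invented: the linear sensitivity matrix S, the target RMS values and the squared-error mismatch metric are placeholders for the paper's finite-element-derived quantities.

    ```python
    import math

    # Invented linear model: the equipment's per-axis RMS response to a unit
    # base input along direction d is S @ d, for a hypothetical sensitivity S.
    S = [
        [2.0, 0.3, 0.1],
        [0.2, 1.5, 0.4],
        [0.1, 0.2, 3.0],
    ]
    # Target per-axis RMS values from a (hypothetical) coupled-system analysis.
    TARGET = [1.9, 1.1, 1.6]

    def response(direction):
        return [sum(S[i][j] * direction[j] for j in range(3)) for i in range(3)]

    def mismatch(direction):
        r = response(direction)
        return sum((ri - ti) ** 2 for ri, ti in zip(r, TARGET))

    def best_test_angles(steps=90):
        """Grid search over azimuth/elevation for the single input direction
        whose predicted responses best match all three axis targets."""
        best = None
        for a in range(steps):
            for e in range(steps // 2 + 1):
                az = math.radians(a * 360 / steps)
                el = math.radians(e * 180 / (steps // 2) - 90)
                d = [math.cos(el) * math.cos(az),
                     math.cos(el) * math.sin(az),
                     math.sin(el)]
                err = mismatch(d)
                if best is None or err < best[0]:
                    best = (err, d)
        return best

    err, direction = best_test_angles()
    print(err, direction)
    ```

    An orthogonal-axis test corresponds to `mismatch([1, 0, 0])` and so on; the offset direction found by the search matches all three targets simultaneously far better than any single axis can.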

  18. The optimisation of electrokinetic remediation for heavy metals and radioactivity contamination on Holyrood-Lunas soil (acrisol species) in Sri Gading Industrial Area, Batu Pahat, Johor, Malaysia.

    PubMed

    Mohamed Johar, S; Embong, Z

    2015-11-01

    The optimisation of electrokinetic remediation of an alluvial soil, locally named Holyrood-Lunas, from Sri Gading Industrial Area, Batu Pahat, Johor, Malaysia, was conducted in this research. This particular soil was chosen due to its relatively high level of background radiation, in a range between 139.2 and 539.4 nGy h⁻¹. As the background radiation is correlated with the amount of the parent nuclides ²³⁸U and ²³²Th, a remediation technique such as electrokinetics is very useful in reducing the concentrations of heavy metals and radionuclides in soils. Several series of electrokinetic experiments were performed at laboratory scale in order to study the influence of certain electrokinetic parameters on the soil. The concentration before (pre-electrokinetic) and after the experiment (post-electrokinetic) was determined via the X-ray fluorescence (XRF) analysis technique. The best electrokinetic parameter contributing to the highest achievable removal of heavy metals and radionuclides in each experimental series was incorporated into a final electrokinetic experiment, in which a high-purity germanium (HPGe) detector was used for radioactivity elemental analysis. The XRF results suggested that the most optimised electrokinetic parameters for Cr, Ni, Zn, As, Pb, Th and U were 3.0 h, 90 volts, 22.0 cm, an 8 × 8 cm plate-shaped electrode and a 1-D configuration, whereas the selected optimised electrokinetic parameters gave very low reductions of ²³⁸U and ²³²Th, at 0.23 ± 2.64 and 2.74 ± 23.78 ppm, respectively. © The Author 2015. Published by Oxford University Press. All rights reserved.

  19. PROTECTED-UK - Clinical pharmacist interventions in the UK critical care unit: exploration of relationship between intervention, service characteristics and experience level.

    PubMed

    Rudall, Nicola; McKenzie, Catherine; Landa, June; Bourne, Richard S; Bates, Ian; Shulman, Rob

    2017-08-01

    Clinical pharmacist (CP) interventions from the PROTECTED-UK cohort, a multi-site critical care interventions study, were further analysed to assess the effects of time on critical care, number of interventions, CP expertise and day of week on the impact of interventions and, ultimately, their contribution to patient care. Intervention data were collected from 21 adult critical care units over 14 days. Interventions were classed as errors, optimisations or consults, and were blind-coded to ensure consistency prior to bivariate analysis. Pharmacy service demographics were further collated by investigator survey. Of the 20 758 prescriptions reviewed, 3375 interventions were made (intervention rate 16.1%). CPs spent 3.5 h per day (mean, ±SD 1.7) on direct patient care, reviewed 10.3 patients per day (±SD 4.2) and required 22.5 min (±SD 9.5) per review. Intervention rate had a moderate inverse correlation with the time the pharmacist spent on critical care (P = 0.05; r = 0.4). Optimisation rate had a strong inverse association with the total number of prescriptions reviewed per day (P = 0.001; r = 0.7). The presence of a consultant CP had a moderate inverse correlation with the number of errors identified (P = 0.008; r = 0.6). No correlation existed between the presence of electronic prescribing in critical care and any intervention rate. Few centres provided weekend services, although the intervention rate was significantly higher on weekends than on weekdays. A CP is essential for safe and optimised patient medication therapy; an extended and developed pharmacy service is expected to reduce errors. CP services should be staffed at a level that allows adequate time for prescription review and maximal therapy optimisation. © 2016 Royal Pharmaceutical Society.

  20. Optimisation of a double-centrifugation method for preparation of canine platelet-rich plasma.

    PubMed

    Shin, Hyeok-Soo; Woo, Heung-Myong; Kang, Byung-Jae

    2017-06-26

    Platelet-rich plasma (PRP) has attracted interest in regenerative medicine because of its growth factors. However, there is considerable variability in the recovery and yield of platelets and the concentration of growth factors in PRP preparations. The aim of this study was to identify the optimal relative centrifugal force and spin time for the preparation of PRP from canine blood using a double-centrifugation tube method. Whole blood samples were collected in citrate blood collection tubes from 12 healthy beagles. For the first centrifugation step, 10 different run conditions were compared to determine which produced optimal recovery of platelets. Once the optimal condition was identified, platelet-containing plasma prepared under that condition was subjected to a second centrifugation to pellet the platelets. For the second centrifugation, 12 different run conditions were compared to identify the centrifugal force and spin time producing maximal pellet recovery and concentration increase. Growth factor levels were estimated by using ELISA to measure platelet-derived growth factor-BB (PDGF-BB) concentrations in optimised CaCl2-activated platelet fractions. The highest platelet recovery rate and yield were obtained by first centrifuging whole blood at 1000 g for 5 min and then centrifuging the recovered platelet-enriched plasma at 1500 g for 15 min. This protocol recovered 80% of platelets from whole blood, increased platelet concentration six-fold and produced the highest concentration of PDGF-BB in activated fractions. We have described an optimised double-centrifugation tube method for the preparation of PRP from canine blood. This optimised method does not require particularly expensive equipment or high technical ability and can readily be carried out in a veterinary clinical setting.

  1. Optimal bioprocess design through a gene regulatory network - growth kinetic hybrid model: Towards Replacing Monod kinetics.

    PubMed

    Tsipa, Argyro; Koutinas, Michalis; Usaku, Chonlatep; Mantalaris, Athanasios

    2018-05-02

    Currently, design and optimisation of biotechnological bioprocesses is performed either through exhaustive experimentation and/or with the use of empirical, unstructured growth kinetics models. Although elaborate systems biology approaches have recently been explored, mixed-substrate utilisation is predominantly ignored despite its significance in enhancing bioprocess performance. Herein, bioprocess optimisation for an industrially relevant bioremediation process involving a mixture of highly toxic substrates, m-xylene and toluene, was achieved through application of a novel experimental-modelling gene regulatory network - growth kinetic (GRN-GK) hybrid framework. The GRN model described the TOL and ortho-cleavage pathways in Pseudomonas putida mt-2 and captured the transcriptional expression kinetics of the promoters. The GRN model informed the formulation of the growth kinetics model, replacing the empirical and unstructured Monod kinetics. The GRN-GK framework's predictive capability, and its potential as a systematic optimal bioprocess design tool, were demonstrated by effectively predicting bioprocess performance in agreement with experimental values, whereas four commonly used comparison models deviated significantly from them. Significantly, a fed-batch biodegradation process was designed and optimised through model-based control of TOL Pr promoter expression, resulting in 61% and 60% enhancements in pollutant removal and biomass formation, respectively, compared to the batch process. This provides strong evidence of model-based bioprocess optimisation at the gene level, rendering the GRN-GK framework a novel and applicable approach to optimal bioprocess design. Finally, model analysis using global sensitivity analysis (GSA) suggests an alternative, systematic approach to model-driven strain modification for synthetic biology and metabolic engineering applications. Copyright © 2018. Published by Elsevier Inc.

  2. MIND performance and prototyping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cervera-Villanueva, A.

    2008-02-21

    The performance of MIND (Magnetised Iron Neutrino Detector) at a neutrino factory has been revisited in a new analysis. In particular, the low neutrino energy region is studied, obtaining an efficiency plateau around 5 GeV for a background level below 10⁻³. A first look has also been taken at detector optimisation and prototyping.

  3. High-level ab initio studies of NO(X2Π)-O2(X3Σg-) van der Waals complexes in quartet states

    NASA Astrophysics Data System (ADS)

    Grein, Friedrich

    2018-05-01

    Geometry optimisations were performed on nine different structures of NO(X2Π)-O2(X3Σg-) van der Waals complexes in their quartet states, using the explicitly correlated RCCSD(T)-F12b method with basis sets up to the cc-pVQZ-F12 level. For the most stable configurations, counterpoise-corrected optimisations as well as extrapolations to the complete basis set (CBS) were performed. The X structure in the 4A′ state was found to be most stable, with a CBS binding energy of -157 cm⁻¹. The slipped tilted structure with N closer to O2 (Slipt-N), as well as the slipped parallel structure with O of NO closer to O2 (Slipp-O), in 4A″ states, have binding energies of about -130 cm⁻¹. C2v and linear complexes are less stable. According to calculated harmonic frequencies, the X isomer is bound. Isotropic hyperfine coupling constants of the complex are compared with those of the monomers.

  4. An efficient fermentation method for the degradation of cyanogenic glycosides in flaxseed.

    PubMed

    Wu, C-F; Xu, X-M; Huang, S-H; Deng, M-C; Feng, A-J; Peng, J; Yuan, J-P; Wang, J-H

    2012-01-01

    Recently, flaxseed has become increasingly popular in the health food market because it contains considerable amounts of specific beneficial nutrients such as lignans and omega-3 fatty acids. However, the presence of cyanogenic glycosides (CGs) in flaxseed severely limits the exploitation of its health benefits and nutritive value. We therefore developed an effective fermentation method, optimised by response surface methodology (RSM), for degrading CGs with an enzymatic preparation that includes 12.5% β-glucosidase and 8.9% cyanide hydratase. These optimised conditions resulted in a maximum CG degradation level of 99.3%, reducing the concentration of cyanide in the flaxseed powder from 1.156 to 0.015 mg g⁻¹ after 48 h of fermentation. The avoidance of steam heat to evaporate hydrocyanic acid (HCN) results in lower energy consumption and no environmental pollution. In addition, the detoxified flaxseed retained the beneficial nutrients, lignans and fatty acids, at the same level as untreated flaxseed, and this method could provide a new means of removing CGs from other edible plants, such as cassava, almond and sorghum, by simultaneously expressing cyanide hydratase and β-glucosidase.

  5. Relative electronic and free energies of octane's unique conformations

    NASA Astrophysics Data System (ADS)

    Kirschner, Karl N.; Heiden, Wolfgang; Reith, Dirk

    2017-06-01

    This study reports the geometries and electronic energies of n-octane's unique conformations using perturbation methods that best mimic CCSD(T) results. In total, the fully optimised minima of n-butane (2 conformations), n-pentane (4 conformations), n-hexane (12 conformations) and n-octane (96 conformations) were investigated at several different theory levels and basis sets. We find that DF-MP2.5/aug-cc-pVTZ is in very good agreement with the more expensive CCSD(T) results. At this level, we can clearly confirm the 96 stable minima which were previously found using a reparameterised density functional theory (DFT). Excellent agreement was found between their DFT results and our DF-MP2.5 perturbation results. Subsequent Gibbs free energy calculations, using scaled MP2/aug-cc-pVTZ zero-point vibrational energy and frequencies, indicate a significant temperature dependency of the relative energies, with a change in the predicted global minimum. The results of this work will be important for future computational investigations of fuel-related octane reactions and for optimisation of molecular force fields (e.g. lipids).

  6. Efficient photoassociation of ultracold cesium atoms with picosecond pulse laser

    NASA Astrophysics Data System (ADS)

    Hai, Yang; Hu, Xue-Jin; Li, Jing-Lun; Cong, Shu-Lin

    2017-08-01

    We investigate theoretically the formation of ultracold Cs2 molecules via photoassociation (PA) with three kinds of pulses: the Gaussian pulse, the asymmetric shaped laser pulse SL1 with a long rise time and a short fall time, and the asymmetric shaped laser pulse SL2 with a short rise time and a long fall time. For all three kinds of pulses, the final population on vibrational levels from v′ = 120 to 175 of the excited state displays a regular oscillation with pulse width and interaction strength, and a high PA efficiency can be achieved with optimised parameters. The PA efficiency in the excited state steered by an optimised SL1-pulse (SL2-pulse) train composed of four SL1 (SL2) pulses is 1.74 times that achieved by a single SL1 (SL2) pulse, due to the population accumulation effect. Moreover, a dump laser is employed to transfer the excited molecules from the excited state to the vibrational level v″ = 12 of the ground state to obtain stable molecules.

  7. Development of a Safety Monitoring and Assurance System for chilled food products.

    PubMed

    Koutsoumanis, K; Taoukis, P S; Nychas, G J E

    2005-04-15

    The principles of a novel chill chain management policy, coded the Safety Monitoring and Assurance System (SMAS), for the optimisation of the distribution of chilled food products within the chill chain are developed. In this system, a new approach based on actual risk evaluation at important points of the chill chain is used in order to promote products to the next stage of distribution. This evaluation, based on the product's time-temperature history, variation in product characteristics (e.g. aw, pH), and the use of predictive models for the growth of food pathogens, allows priority to be given to products in such a way that risk at consumption time is minimised. The effectiveness of SMAS was evaluated against the First In First Out (FIFO) approach, the current method for food distribution, in a case study on the risk of listeriosis from cooked ham, using the Monte Carlo simulation technique. Furthermore, the two approaches were compared for their effect on the quality of the products in terms of remaining shelf life at the time of consumption. The results showed that, following the SMAS approach, the risk of listeriosis is significantly lower and the number of spoiled products at the time of consumption is significantly reduced compared to the FIFO approach.

  8. DFT study of adsorption and dissociation of thiophene molecules on Ni(1 1 0)

    NASA Astrophysics Data System (ADS)

    Morin, C.; Eichler, A.; Hirschl, R.; Sautet, P.; Hafner, J.

    2003-08-01

    The different adsorption possibilities of thiophene (C4H4S) on the Ni(1 1 0) surface have been studied using first-principles local-density-functional calculations with the Vienna ab initio simulation package, which is based on a plane wave basis set and projector augmented wave potentials. For each configuration, a geometry optimisation has been performed. A detailed analysis of the structural and electronic properties of the molecule and the surface in the most stable conformations is presented, showing the combined roles of the molecular distortion and the interactions between the molecule and the surface. Three structures with comparatively large adsorption energies are identified, all with the molecule plane parallel to the surface. Starting from these stabilised structures, various scenarios for the desulfurisation process have been envisaged. While, for the most stable structure, the formation of an adsorbed thiol is an activated process with an energetic barrier of 0.70 eV, the two structures that are slightly less stable can dissociate to a C4H4 species and a sulfur atom with barriers as low as 0.07 eV. A description of the different transition states and a kinetic analysis of the desulfurisation reaction are also presented.

  9. Principles of an enhanced MBR-process with mechanical cleaning.

    PubMed

    Rosenberger, S; Helmus, F P; Krause, S; Bareth, A; Meyer-Blumenroth, U

    2011-01-01

    To date, different physical and chemical cleaning protocols have been necessary to limit membrane fouling in membrane bioreactors. This paper deals with a mechanical cleaning process, which aims at the avoidance of hypochlorite and other critical chemicals in MBRs with submerged flat sheet modules. The process basically consists of the addition of plastic particles into the loop circulation within submerged membrane modules. Investigations of two pilot plants are presented: pilot plant 1 is equipped with a 10 m² membrane module and operated with a translucent model suspension; pilot plant 2 is equipped with four 50 m² membrane modules and operated with pretreated sewage. Results from pilot plant 1 show that the establishment of a fluidised bed with regular particle distribution is possible for a variety of particles. Particles with maximum densities of 1.05 g/cm³ and diameters between 3 and 5 mm form a stable fluidised bed almost regardless of activated sludge concentration, viscosity and reactor geometry. Particles with densities between 1.05 g/cm³ and 1.2 g/cm³ form a stable fluidised bed if the velocity at the reactor bottom is sufficiently high. Activities within pilot plant 2 focused on plant optimisation and the development of an adequate particle retention system.

  10. Referral criteria and clinical decision support: radiological protection aspects for justification.

    PubMed

    Pérez, M del Rosario

    2015-06-01

    Advanced imaging technology has opened new horizons for medical diagnostics and improved patient care. However, many procedures are unjustified and do not provide a net benefit. An area of particular concern is the unnecessary use of radiation when clinical evaluation or other imaging modalities could provide an accurate diagnosis. Referral criteria for medical imaging are consensus statements based on the best-available evidence to assist the decision-making process when choosing the best imaging procedure for a given patient. Although they are advisory rather than compulsory, physicians should have good reasons for deviation from these criteria. Voluntary use of referral criteria has shown limited success compared with integration into clinical decision support systems. These systems support good medical practice, can improve health service delivery, and foster safer, more efficient, fair, cost-effective care, thus contributing to the strengthening of health systems. Justification of procedures and optimisation of protection, the two pillars of radiological protection in health care, are implicit in the notion of good medical practice. However, some health professionals are not familiar with these principles, and have low awareness of radiological protection aspects of justification. A stronger collaboration between the radiation protection and healthcare communities could help improve the radiation protection culture in medical practice.

  11. The first 3 minutes: Optimising a short realistic paediatric team resuscitation training session.

    PubMed

    McKittrick, Joanne T; Kinney, Sharon; Lima, Sally; Allen, Meredith

    2018-01-01

    Inadequate resuscitation leads to death or brain injury. Recent recommendations for resuscitation team training to complement knowledge and skills training highlighted the need for development of an effective team resuscitation training session. This study aimed to evaluate and revise an interprofessional team training session which addressed roles and performance during provision of paediatric resuscitation, through incorporation of real-time, real-team simulated training episodes. This study was conducted applying the principles of action research. Two cycles of data collection, evaluation and refinement of a 30-40 minute resuscitation training session for doctors and nurses occurred. Doctors and nurses made up four groups of training-session participants. Their responses to the training were evaluated through thematic analysis of rich qualitative data gathered in focus groups held immediately after each training session. Major themes included the importance of realism, teamwork, and reflective learning. Findings informed important training session changes. These included: committed in-situ training; team diversity; realistic resources; role flexibility, definition and leadership; increased debriefing time; and the addition of a team goal. In conclusion, incorporation of interprofessional resuscitation training which addresses team roles and responsibilities into standard medical and nursing training will enhance preparedness for participation in paediatric resuscitation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. The harm principle as a mid-level principle?: three problems from the context of infectious disease control.

    PubMed

    Krom, André

    2011-10-01

    Effective infectious disease control may require states to restrict the liberty of individuals. Since preventing harm to others is almost universally accepted as a legitimate (prima facie) reason for restricting the liberty of individuals, it seems plausible to employ a mid-level harm principle in infectious disease control. Moral practices like infectious disease control support - or even require - a certain level of theory-modesty. However, employing a mid-level harm principle in infectious disease control faces at least three problems. First, it is unclear what we gain by attaining convergence on a specific formulation of the harm principle. Likely candidates for convergence, a harm principle aimed at preventing harmful conduct, supplemented by considerations of effectiveness and always choosing the least intrusive means, still leave ample room for normative disagreement. Second, while mid-level principles are sometimes put forward in response to the problem of normative theories attaching different weight to moral principles, employing a mid-level harm principle completely leaves open how to determine what weight to attach to it in application. Third, there appears to be a trade-off between attaining convergence and finding a formulation of the harm principle that can justify liberty-restrictions in all situations of contagion, including interventions that are commonly allowed. These are not reasons to abandon mid-level theorizing altogether. But there is no reason to be too theory-modest in applied ethics. Morally justifying, for example, whether a liberty-restriction in infectious disease control is proportional to the aim of harm prevention quickly requires moving beyond the mid-level harm principle. © 2011 Blackwell Publishing Ltd.

  13. Gender differences in visuospatial planning: an eye movements study.

    PubMed

    Cazzato, Valentina; Basso, Demis; Cutini, Simone; Bisiacchi, Patrizia

    2010-01-20

    Gender studies report a male advantage in several visuospatial abilities. Only a few studies, however, have evaluated differences in visuospatial planning behaviour with regard to gender. This study was aimed at exploring whether gender may affect the choice of cognitive strategies in a visuospatial planning task and whether oculomotor measures could assist in disentangling the cognitive processes involved. A computerised task based on the travelling salesperson problem paradigm, the Maps test, was used to investigate these issues. Participants were required to optimise time and space of a path travelling among a set of sub-goals in a spatially constrained environment. Behavioural results suggest that there are no gender differences in the initial visual processing of the stimuli, but rather during the execution of the plan, with males showing a shorter execution time and a higher path length optimisation than females. Males often showed changes of heuristics during the execution, while females seemed to prefer a constant strategy. Moreover, a better performance in behavioural and oculomotor measures seemed to suggest that males are more able than females in either the optimisation of spatial features or the realisation of the planned scheme. Despite inconclusive findings, the results support previous research and provide insight into the level of cognitive processing involved in navigation and planning tasks, with regard to the influence of gender.

  14. Determination of acrylamide in coffee and chocolate by pressurised fluid extraction and liquid chromatography-tandem mass spectrometry.

    PubMed

    Pardo, O; Yusà, V; Coscollà, C; León, N; Pastor, A

    2007-07-01

    A selective and sensitive procedure has been developed and validated for the determination of acrylamide in difficult matrices, such as coffee and chocolate. The proposed method includes pressurised fluid extraction (PFE) with acetonitrile, florisil clean-up purification inside the PFE extraction cell and detection by liquid chromatography (LC) coupled to atmospheric pressure ionisation in positive mode tandem mass spectrometry (APCI-MS-MS). Comparison of ionisation sources (atmospheric pressure chemical ionisation (APCI), atmospheric pressure photoionization (APPI) and the combined APCI/APPI) and clean-up procedures were carried out to improve the analytical signal. The main parameters affecting the performance of the different ionisation sources were previously optimised using statistical design of experiments (DOE). PFE parameters were also optimised by DOE. For quantitation, an isotope dilution approach was used. The limit of quantification (LOQ) of the method was 1 microg kg(-1) for coffee and 0.6 microg kg(-1) for chocolate. Recoveries ranged between 81-105% in coffee and 87-102% in chocolate. The accuracy was evaluated using a coffee reference test material FAPAS T3008. Using the optimised method, 20 coffee and 15 chocolate samples collected from Valencian (Spain) supermarkets, were investigated for acrylamide, yielding median levels of 146 microg kg(-1) in coffee and 102 microg kg(-1) in chocolate.

  15. Optimisation of composite metallic fuel for minor actinide transmutation in an accelerator-driven system

    NASA Astrophysics Data System (ADS)

    Uyttenhove, W.; Sobolev, V.; Maschek, W.

    2011-09-01

    A potential option for neutralization of minor actinides (MA) accumulated in spent nuclear fuel of light water reactors (LWRs) is their transmutation in dedicated accelerator-driven systems (ADS). A promising fuel candidate dedicated to MA transmutation is a CERMET composite with a Mo metal matrix and (Pu, Np, Am, Cm)O2-x fuel particles. Results of optimisation studies of the CERMET fuel aimed at increasing the MA transmutation efficiency of the EFIT (European Facility for Industrial Transmutation) core are presented. In the adopted strategy of MA burning, the plutonium (Pu) balance of the core is minimized, allowing a reduction in the reactivity swing and the peak power form-factor deviation and an extension of the cycle duration. The MA/Pu ratio is used as a variable for the fuel optimisation studies. The efficiency of MA transmutation is close to the foreseen theoretical value of 42 kg TW⁻¹ h⁻¹ when the level of Pu in the actinide mixture is about 40 wt.%. The obtained results are compared with the reference case of the EFIT core loaded with the composite CERCER fuel, where fuel particles are incorporated in a ceramic magnesia matrix. The results of this study offer additional information for the EFIT fuel selection.

  16. Optimisation of culture composition for glyphosate degradation by Burkholderia vietnamiensis strain AQ5-12.

    PubMed

    Manogaran, Motharasan; Shukor, Mohd Yunus; Yasid, Nur Adeela; Khalil, Khalilah Abdul; Ahmad, Siti Aqlima

    2018-02-01

    The herbicide glyphosate is often used to control weeds in agricultural land. However, despite its ability to kill weeds effectively at low cost, health problems are still reported due to its toxicity. Removal of glyphosate from the environment is usually achieved by microbiological processes, since chemical degradation is ineffective owing to the presence of highly stable bonds. Finding glyphosate-degrading microorganisms in the soil of interest is therefore crucial for remediation. Burkholderia vietnamiensis strain AQ5-12 was found to have glyphosate-degrading ability. Optimisation of biodegradation conditions was carried out using one-factor-at-a-time (OFAT) and response surface methodology (RSM). Five parameters, including carbon and nitrogen sources, pH, temperature and glyphosate concentration, were optimised. Based on the OFAT results, glyphosate degradation was optimal at a fructose concentration of 6 g/L, 0.5 g/L ammonium sulphate, pH 6.5, a temperature of 32 °C and a glyphosate concentration of 100 ppm. RSM resulted in better degradation than OFAT, with 92.32% of 100 ppm glyphosate degraded. The bacterium tolerated up to 500 ppm glyphosate, while increasing concentrations reduced degradation and bacterial growth rates.

  17. Optimizing Polymer Infusion Process for Thin Ply Textile Composites with Novel Matrix System

    PubMed Central

    Bhudolia, Somen K.; Perrotey, Pavel; Joshi, Sunil C.

    2017-01-01

    For mass production of structural composites, use of different textile patterns, custom preforming, room temperature cure high performance polymers and simplistic manufacturing approaches are desired. Woven fabrics are widely used for infusion processes owing to their high permeability, but their localised mechanical performance is affected due to inherent associated crimps. The current investigation deals with manufacturing low-weight textile carbon non-crimp fabrics (NCFs) composites with a room temperature cure epoxy and a novel liquid Methyl methacrylate (MMA) thermoplastic matrix, Elium®. Vacuum assisted resin infusion (VARI) process is chosen as a cost effective manufacturing technique. Process parameter optimisation is required for thin NCFs because of the intrinsic resistance they offer to polymer flow. Cycles of repetitive manufacturing studies were carried out to optimise the NCF-thermoset (TS) and NCF with novel reactive thermoplastic (TP) resin. It was noticed that the controlled and optimised usage of flow mesh, vacuum level and flow speed during the resin infusion plays a significant part in deciding the final quality of the fabricated composites. The material selections, the challenges met during the manufacturing and the methods to overcome these are deliberated in this paper. An optimal three-stage vacuum technique, developed to manufacture the TP and TS composites with high fibre volume and lower void content, is established and presented. PMID:28772654

  18. Optimizing Polymer Infusion Process for Thin Ply Textile Composites with Novel Matrix System.

    PubMed

    Bhudolia, Somen K; Perrotey, Pavel; Joshi, Sunil C

    2017-03-15

    For mass production of structural composites, use of different textile patterns, custom preforming, room temperature cure high performance polymers and simplistic manufacturing approaches are desired. Woven fabrics are widely used for infusion processes owing to their high permeability, but their localised mechanical performance is affected due to inherent associated crimps. The current investigation deals with manufacturing low-weight textile carbon non-crimp fabrics (NCFs) composites with a room temperature cure epoxy and a novel liquid Methyl methacrylate (MMA) thermoplastic matrix, Elium®. Vacuum assisted resin infusion (VARI) process is chosen as a cost effective manufacturing technique. Process parameter optimisation is required for thin NCFs because of the intrinsic resistance they offer to polymer flow. Cycles of repetitive manufacturing studies were carried out to optimise the NCF-thermoset (TS) and NCF with novel reactive thermoplastic (TP) resin. It was noticed that the controlled and optimised usage of flow mesh, vacuum level and flow speed during the resin infusion plays a significant part in deciding the final quality of the fabricated composites. The material selections, the challenges met during the manufacturing and the methods to overcome these are deliberated in this paper. An optimal three-stage vacuum technique, developed to manufacture the TP and TS composites with high fibre volume and lower void content, is established and presented.

  19. Constrained non-linear multi-objective optimisation of preventive maintenance scheduling for offshore wind farms

    NASA Astrophysics Data System (ADS)

    Zhong, Shuya; Pantelous, Athanasios A.; Beer, Michael; Zhou, Jian

    2018-05-01

    Offshore wind farms are an emerging source of renewable energy that has shown tremendous potential in recent years. In this blooming area, a key challenge is that the preventive maintenance of offshore turbines should be scheduled reasonably to satisfy the power supply without failure. In this direction, two significant goals should be considered simultaneously as a trade-off: one is to maximise system reliability and the other is to minimise maintenance-related cost. Thus, a non-linear multi-objective programming model is proposed, including two newly defined objectives and thirteen families of constraints suitable for the preventive maintenance of offshore wind farms. In order to solve the model effectively, the non-dominated sorting genetic algorithm II (NSGA-II), designed especially for multi-objective optimisation, is utilised, and Pareto-optimal schedules can be obtained to offer adequate support to decision-makers. Finally, an example is given to illustrate the performance of the devised model and algorithm, and to explore the relationship between the two targets with the help of a contrast model.
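
    At the core of NSGA-II is repeated non-dominated sorting of candidate solutions. A minimal sketch of the non-domination test, on hypothetical (cost, unreliability) pairs where both objectives are minimised:

```python
def pareto_front(points):
    """Return the non-dominated subset of a list of objective tuples,
    assuming every objective is to be minimised."""
    front = []
    for p in points:
        # q dominates p if q is no worse in every objective and differs in at least one.
        dominated = any(
            all(q[i] <= p[i] for i in range(len(p))) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical maintenance schedules scored as (cost, unreliability).
schedules = [(4.0, 0.30), (5.0, 0.10), (6.0, 0.05), (5.5, 0.20), (7.0, 0.06)]
print(pareto_front(schedules))
```

    The non-dominated schedules form the trade-off curve presented to decision-makers; NSGA-II additionally ranks successive fronts and preserves diversity along them, which this sketch omits.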

  20. Optimising ICT Effectiveness in Instruction and Learning: Multilevel Transformation Theory and a Pilot Project in Secondary Education

    ERIC Educational Resources Information Center

    Mooij, Ton

    2004-01-01

    Specific combinations of educational and ICT conditions including computer use may optimise learning processes, particularly for learners at risk. This position paper asks which curricular, instructional, and ICT characteristics can be expected to optimise learning processes and outcomes, and how to best achieve this optimization. A theoretical…

  1. Dietary changes needed to reach nutritional adequacy without increasing diet cost according to income: An analysis among French adults

    PubMed Central

    Maillot, Matthieu; Vieux, Florent; Delaere, Fabien; Lluch, Anne; Darmon, Nicole

    2017-01-01

    Objective To explore the dietary changes needed to achieve nutritional adequacy across income levels at constant energy and diet cost. Materials and methods Individual diet modelling was used to design iso-caloric, nutritionally adequate optimised diets for each observed diet in a sample of adult normo-reporters aged ≥20 years (n = 1,719) from the Individual and National Dietary Survey (INCA2), 2006–2007. Diet cost was estimated from mean national food prices (2006–2007). A first set of free-cost models explored the impact of optimisation on the variation of diet cost. A second set of iso-cost models explored the dietary changes induced by the optimisation with cost set equal to the observed one. Analyses of dietary changes were conducted by income quintiles, adjusting for energy intake, sociodemographic and socioeconomic variables, and smoking status. Results The cost of observed diets increased with increasing income quintiles. In free-cost models, the optimisation increased diet cost on average (+0.22 ± 1.03 euros/d) and within each income quintile, with no significant difference between quintiles, but with systematic increases for observed costs lower than 3.85 euros/d. In iso-cost models, it was possible to design nutritionally adequate diets whatever the initial observed cost. On average, the optimisation at iso-cost increased fruits and vegetables (+171 g/day), starchy foods (+121 g/d), water and beverages (+91 g/d), and dairy products (+20 g/d), and decreased the other food groups (e.g. mixed dishes and salted snacks), leading to increased total diet weight (+300 g/d). Those changes were mostly similar across income quintiles, but lower-income individuals needed to introduce significantly more fruit and vegetables than higher-income ones. 
Conclusions In France, the dietary changes needed to reach nutritional adequacy without increasing cost are similar regardless of income, but may be more difficult to implement when the budget for food is lower than 3.85 euros/d. PMID:28358837
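
    The iso-cost optimisation described above can be illustrated with a toy brute-force search. The foods, prices and nutrient scores below are hypothetical (the study used formal individual diet modelling): find the diet closest to the observed one that reaches a nutrient target without exceeding the observed cost.

```python
from itertools import product

# Hypothetical foods: (cost in euros per serving, nutrient units per serving).
foods = {
    "vegetables": (0.50, 3.0),
    "starches":   (0.30, 2.0),
    "dairy":      (0.60, 4.0),
    "snacks":     (0.80, 0.5),
}
names = list(foods)
observed = {"vegetables": 1, "starches": 2, "dairy": 1, "snacks": 3}
budget = sum(foods[n][0] * observed[n] for n in names)  # iso-cost constraint
NUTRIENT_TARGET = 14.0  # hypothetical adequacy threshold

best, best_change = None, float("inf")
for servings in product(range(7), repeat=len(names)):
    cost = sum(foods[n][0] * s for n, s in zip(names, servings))
    nutrients = sum(foods[n][1] * s for n, s in zip(names, servings))
    if cost <= budget and nutrients >= NUTRIENT_TARGET:
        # Minimise total change from the observed diet (servings moved).
        change = sum(abs(s - observed[n]) for n, s in zip(names, servings))
        if change < best_change:
            best, best_change = dict(zip(names, servings)), change

print(best, best_change)
```

    Even in this toy version the qualitative result mirrors the study: adequacy at constant cost is reached by swapping expensive low-nutrient items (snacks) for cheaper nutrient-dense ones (vegetables or dairy), not by spending more.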

  2. Optimisation of Ferrochrome Addition Using Multi-Objective Evolutionary and Genetic Algorithms for Stainless Steel Making via AOD Converter

    NASA Astrophysics Data System (ADS)

    Behera, Kishore Kumar; Pal, Snehanshu

    2018-03-01

    This paper describes a new approach towards optimum utilisation of the ferrochrome added during stainless steel making in an AOD converter. The objective of the optimisation is to enhance the end-blow chromium content of the steel and reduce the ferrochrome addition during refining. By developing a thermodynamics-based mathematical model, a study has been conducted to compute the optimum trade-off between ferrochrome addition and end-blow chromium content of stainless steel using a predator-prey genetic algorithm, through training on 100 datasets considering different input and output variables such as oxygen, argon and nitrogen blowing rates, duration of blowing, initial bath temperature, chromium and carbon content, and the weight of ferrochrome added during refining. Optimisation is performed within constraints imposed on the input parameters, whose values must fall within certain ranges. The analysis of Pareto fronts generates a set of feasible optimal solutions between the two conflicting objectives, providing an effective guideline for better ferrochrome utilisation. It is found that, beyond a certain critical range, further addition of ferrochrome does not affect the chromium percentage of the steel. A single-variable response analysis is performed to study the variation and interaction of all individual input parameters on the output variables.

  3. A novel swarm intelligence algorithm for finding DNA motifs.

    PubMed

    Lei, Chengwei; Ruan, Jianhua

    2009-01-01

    Discovering DNA motifs from co-expressed or co-regulated genes is an important step towards deciphering complex gene regulatory networks and understanding gene functions. Despite significant improvement in the last decade, it still remains one of the most challenging problems in computational molecular biology. In this work, we propose a novel motif finding algorithm that finds consensus patterns using a population-based stochastic optimisation technique called Particle Swarm Optimisation (PSO), which has been shown to be effective in optimising difficult multidimensional problems in continuous domains. We propose to use a word dissimilarity graph to remap the neighborhood structure of the solution space of DNA motifs, and propose a modification of the naive PSO algorithm to accommodate discrete variables. In order to improve efficiency, we also propose several strategies for escaping from local optima and for automatically determining the termination criteria. Experimental results on simulated challenge problems show that our method is both more efficient and more accurate than several existing algorithms. Applications to several sets of real promoter sequences also show that our approach is able to detect known transcription factor binding sites, and outperforms two of the most popular existing algorithms.
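
    The paper adapts PSO to a discrete motif space via a word dissimilarity graph; as background, a minimal continuous PSO sketch on a stand-in objective shows the velocity and position update rule that is being adapted. Parameter values here are conventional choices, not taken from the paper:

```python
import random

def pso_minimise(f, dim, n_particles=20, iters=200, seed=1):
    """Minimal continuous particle swarm optimisation."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive and social weights
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]            # personal bests
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Sphere function as a stand-in for the motif scoring objective.
best, best_val = pso_minimise(lambda x: sum(v * v for v in x), dim=2)
print(best_val)
```

    The paper's contribution is precisely what this sketch cannot do: making the position and velocity updates meaningful over discrete DNA words, using the dissimilarity graph as the neighbourhood structure.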

  4. Infrastructure optimisation via MBR retrofit: a design guide.

    PubMed

    Bagg, W K

    2009-01-01

    Wastewater management is continually evolving with the development and implementation of new, more efficient technologies. One of these is the Membrane Bioreactor (MBR). Although a relatively new technology in Australia, MBR wastewater treatment has been widely used elsewhere for over 20 years, with thousands of MBRs now in operation worldwide. Over the past 5 years, MBR technology has been enthusiastically embraced in Australia as a potential treatment upgrade option, and via retrofit typically offers two major benefits: (1) more capacity using mostly existing facilities, and (2) very high quality treated effluent. However, infrastructure optimisation via MBR retrofit is not a simple or low-cost solution and there are many factors which should be carefully evaluated before deciding on this method of plant upgrade. The paper reviews a range of design parameters which should be carefully evaluated when considering an MBR retrofit solution. Several actual and conceptual case studies are considered to demonstrate both advantages and disadvantages. Whilst optimising existing facilities and production of high quality water for reuse are powerful drivers, it is suggested that MBRs are perhaps not always the most sustainable Whole-of-Life solution for a wastewater treatment plant upgrade, especially by way of a retrofit.

  5. Markovian queue optimisation analysis with an unreliable server subject to working breakdowns and impatient customers

    NASA Astrophysics Data System (ADS)

    Liou, Cheng-Dar

    2015-09-01

    This study investigates an infinite-capacity Markovian queue with a single unreliable service station, in which customers may balk (not enter) and renege (leave the queue after entering). The unreliable service station may undergo working breakdowns even when no customers are in the system. The matrix-analytic method is used to compute the steady-state probabilities for the number of customers, the rate matrix and the stability condition of the system. A single-objective model for cost and a bi-objective model for cost and expected waiting time are derived to fit practical applications. The particle swarm optimisation algorithm is implemented to find the optimal combinations of parameters in the pursuit of minimum cost. Two different approaches to identifying the Pareto-optimal set are used and compared: the epsilon-constraint method and the non-dominated sorting genetic algorithm. The comparison supports use of the traditional epsilon-constraint method, which is computationally faster and permits a direct sensitivity analysis of the solution under constraint or parameter perturbation. The Pareto front and the set of non-dominated solutions are obtained and illustrated. Decision-makers can use these to improve their decision-making quality.
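
    The epsilon-constraint idea can be sketched on a toy M/M/1 queue (not the paper's unreliable-server model): keep one objective, the mean queueing delay, as a constraint bounded by epsilon, and minimise the other, operating cost, over the remaining decision variable. All rates and costs below are hypothetical.

```python
LAM = 2.0   # arrival rate (customers per unit time, hypothetical)
COST = 3.0  # cost per unit of service rate (hypothetical)

def wq(mu, lam=LAM):
    """M/M/1 mean waiting time in queue; requires mu > lam for stability."""
    return lam / (mu * (mu - lam))

def epsilon_constraint(eps, grid_step=0.001):
    """Smallest-cost service rate whose mean delay meets the epsilon bound.
    A coarse grid search stands in for a proper one-dimensional solver."""
    mu = LAM + grid_step
    while wq(mu) > eps:
        mu += grid_step
    return mu, COST * mu

mu_opt, cost_opt = epsilon_constraint(eps=0.5)
print(mu_opt, cost_opt)
```

    Sweeping eps over a range of delay bounds and recording (cost, delay) pairs traces out the Pareto front point by point, which is the sense in which the epsilon-constraint method "identifies the Pareto-optimal set".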

  6. Optimisation of a propagation-based x-ray phase-contrast micro-CT system

    NASA Astrophysics Data System (ADS)

    Nesterets, Yakov I.; Gureyev, Timur E.; Dimmock, Matthew R.

    2018-03-01

    Micro-CT scanners find applications in many areas ranging from biomedical research to material sciences. In order to provide spatial resolution on a micron scale, these scanners are usually equipped with micro-focus, low-power x-ray sources and hence require long scanning times to produce high resolution 3D images of the object with acceptable contrast-to-noise. Propagation-based phase-contrast tomography (PB-PCT) has the potential to significantly improve the contrast-to-noise ratio (CNR) or, alternatively, reduce the image acquisition time while preserving the CNR and the spatial resolution. We propose a general approach for the optimisation of the PB-PCT imaging system. When applied to an imaging system with fixed parameters of the source and detector this approach requires optimisation of only two independent geometrical parameters of the imaging system, i.e. the source-to-object distance R1 and geometrical magnification M, in order to produce the best spatial resolution and CNR. If, in addition to R1 and M, the system parameter space also includes the source size and the anode potential this approach allows one to find a unique configuration of the imaging system that produces the required spatial resolution and the best CNR.

  7. Automated model optimisation using the Cylc workflow engine (Cyclops v1.0)

    NASA Astrophysics Data System (ADS)

    Gorman, Richard M.; Oliver, Hilary J.

    2018-06-01

    Most geophysical models include many parameters that are not fully determined by theory, and can be tuned to improve the model's agreement with available data. We might attempt to automate this tuning process in an objective way by employing an optimisation algorithm to find the set of parameters that minimises a cost function derived from comparing model outputs with measurements. A number of algorithms are available for solving optimisation problems, in various programming languages, but interfacing such software to a complex geophysical model simulation presents certain challenges. To tackle this problem, we have developed an optimisation suite (Cyclops) based on the Cylc workflow engine that implements a wide selection of optimisation algorithms from the NLopt Python toolbox (Johnson, 2014). The Cyclops optimisation suite can be used to calibrate any modelling system that has itself been implemented as a (separate) Cylc model suite, provided it includes computation and output of the desired scalar cost function. A growing number of institutions are using Cylc to orchestrate complex distributed suites of interdependent cycling tasks within their operational forecast systems, and in such cases application of the optimisation suite is particularly straightforward. As a test case, we applied Cyclops to calibrate a global implementation of the WAVEWATCH III (v4.18) third-generation spectral wave model, forced by ERA-Interim input fields. The model was calibrated over a 1-year period (1997) before being applied to a full (1979-2016) wave hindcast. The chosen error metric was the spatial average of the root mean square error of hindcast significant wave height compared with collocated altimeter records. We describe the results of a calibration in which up to 19 parameters were optimised.
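
    The calibration loop that Cyclops orchestrates can be sketched generically: an optimiser repeatedly runs the model as a black box and minimises a scalar cost (here an RMSE, as in the wave-hindcast test case). The toy model, parameters and "observations" below are hypothetical, and a simple coordinate search stands in for the NLopt algorithms:

```python
import math

def model_run(params, forcing):
    """Stand-in for a full model suite run; returns 'hindcast' values
    from a toy model h = a * sqrt(f) + b."""
    a, b = params
    return [a * math.sqrt(f) + b for f in forcing]

def rmse(pred, obs):
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

# Synthetic observations generated with known parameters a=2.0, b=0.5.
forcing = [1.0, 4.0, 9.0, 16.0]
obs = [2.0 * math.sqrt(f) + 0.5 for f in forcing]

def cost(params):
    """Scalar cost function: one value per complete 'model run'."""
    return rmse(model_run(params, forcing), obs)

# Simple coordinate search with step halving, standing in for NLopt.
params, step = [1.0, 0.0], 0.5
while step > 1e-6:
    improved = False
    for i in range(len(params)):
        for delta in (step, -step):
            trial = params[:]
            trial[i] += delta
            if cost(trial) < cost(params):
                params, improved = trial, True
    if not improved:
        step /= 2

print(params, cost(params))
```

    In the real suite each `cost` evaluation is a full (possibly multi-task, distributed) Cylc workflow cycle, which is why the scheduling machinery, rather than the optimiser itself, is the hard part.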

  8. Reliability of clinical impact grading by healthcare professionals of common prescribing error and optimisation cases in critical care patients.

    PubMed

    Bourne, Richard S; Shulman, Rob; Tomlin, Mark; Borthwick, Mark; Berry, Will; Mills, Gary H

    2017-04-01

    To identify between and within profession-rater reliability of clinical impact grading for common critical care prescribing error and optimisation cases. To identify representative clinical impact grades for each individual case. Electronic questionnaire. 5 UK NHS Trusts. 30 Critical care healthcare professionals (doctors, pharmacists and nurses). Participants graded severity of clinical impact (5-point categorical scale) of 50 error and 55 optimisation cases. Case between and within profession-rater reliability and modal clinical impact grading. Between and within profession rater reliability analysis used linear mixed model and intraclass correlation, respectively. The majority of error and optimisation cases (both 76%) had a modal clinical severity grade of moderate or higher. Error cases: doctors graded clinical impact significantly lower than pharmacists (-0.25; P < 0.001) and nurses (-0.53; P < 0.001), with nurses significantly higher than pharmacists (0.28; P < 0.001). Optimisation cases: doctors graded clinical impact significantly lower than nurses and pharmacists (-0.39 and -0.5; P < 0.001, respectively). Within profession reliability grading was excellent for pharmacists (0.88 and 0.89; P < 0.001) and doctors (0.79 and 0.83; P < 0.001) but only fair to good for nurses (0.43 and 0.74; P < 0.001), for optimisation and error cases, respectively. Representative clinical impact grades for over 100 common prescribing error and optimisation cases are reported for potential clinical practice and research application. The between professional variability highlights the importance of multidisciplinary perspectives in assessment of medication error and optimisation cases in clinical practice and research. © The Author 2017. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  9. Optimisation of lateral car dynamics taking into account parameter uncertainties

    NASA Astrophysics Data System (ADS)

    Busch, Jochen; Bestle, Dieter

    2014-02-01

    Simulation studies on an active all-wheel-steering car show that disturbances of vehicle parameters have a strong influence on lateral car dynamics. This motivates the need for a design that is robust against such parameter uncertainties. A specific parametrisation is established, combining deterministic, velocity-dependent steering control parameters with partly uncertain, velocity-independent vehicle parameters for simultaneous use in a numerical optimisation process. Model-based objectives are formulated and summarised in a multi-objective optimisation problem, where especially the lateral steady-state behaviour is improved by an adaptation strategy based on measurable uncertainties. The normally distributed uncertainties are generated by optimal Latin hypercube sampling, and a response-surface-based strategy helps to cut down time-consuming model evaluations, which makes it possible to use a genetic optimisation algorithm. Optimisation results are discussed in different criterion spaces, and the achieved improvements confirm the validity of the proposed procedure.
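
    The paper uses *optimal* Latin hypercube sampling; the basic variant it builds on can be sketched as follows: each dimension of the unit cube is split into as many strata as there are samples, and every stratum is used exactly once per dimension.

```python
import random

def latin_hypercube(n_samples, dim, seed=0):
    """Basic (non-optimised) Latin hypercube sample on the unit cube."""
    rng = random.Random(seed)
    samples = [[0.0] * dim for _ in range(n_samples)]
    for d in range(dim):
        strata = list(range(n_samples))
        rng.shuffle(strata)                       # random stratum assignment
        for i, s in enumerate(strata):
            # One point per stratum, uniformly placed within it.
            samples[i][d] = (s + rng.random()) / n_samples
    return samples

pts = latin_hypercube(10, 2)
print(pts[:2])
```

    Mapping the unit-cube points through the inverse normal CDF would yield the normally distributed samples mentioned in the abstract; the "optimal" refinement additionally maximises a space-filling criterion over stratum assignments, which this sketch omits.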

  10. An effective pseudospectral method for constraint dynamic optimisation problems with characteristic times

    NASA Astrophysics Data System (ADS)

    Xiao, Long; Liu, Xinggao; Ma, Liang; Zhang, Zeyin

    2018-03-01

    Dynamic optimisation problems with characteristic times arise widely in many areas and are an active frontier of dynamic optimisation research. This paper considers a class of dynamic optimisation problems with constraints that depend on interior points, either fixed or variable, and presents a novel direct pseudospectral method using Legendre-Gauss (LG) collocation points for solving them. The formula for the state at the terminal time of each subdomain is derived, which results in a linear combination of the state at the LG points in the subdomains, avoiding a complex nonlinear integral. The sensitivities of the state at the collocation points with respect to the variable characteristic times are derived to improve the efficiency of the method. Three well-known characteristic-time dynamic optimisation problems are solved and compared in detail with methods reported in the literature. The research results show the effectiveness of the proposed method.
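
    The identity used above — the state at a subdomain's terminal time expressed as a linear combination of values at the LG points — follows from Gauss quadrature of the state derivative. A minimal NumPy sketch of that single ingredient (not the paper's full pseudospectral scheme):

```python
import numpy as np

def terminal_state(f, x0, t0, tf, n=5):
    """Integrate dx/dt = f(t) over [t0, tf] by Legendre-Gauss quadrature:
    x(tf) = x0 + sum_i w_i * f(t_i), with nodes mapped from [-1, 1]."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    half = 0.5 * (tf - t0)
    t = t0 + half * (nodes + 1.0)        # map LG nodes onto [t0, tf]
    return x0 + half * np.dot(weights, f(t))

# dx/dt = 2t with x(0) = 0 gives x(1) = 1 exactly (polynomial integrand)
x_end = terminal_state(lambda t: 2.0 * t, 0.0, 0.0, 1.0)
```

Because the quadrature is exact for polynomials up to degree 2n - 1, the terminal state is recovered from the collocation values alone, with no explicit integration of the dynamics.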

  11. Differences in the utilisation of active power in squat and countermovement jumps.

    PubMed

    Ferraro, Damián; Fábrica, Gabriel

    2017-07-01

    The aim of this article was to understand how active power is used in squat and countermovement jumps. A simple empirical model comprising a mass, a spring, an active element and a damper, together with an optimisation principle, was used to identify the mechanical factors that maximise performance of jumps without countermovement (squat jumps, SJ) and with countermovement (CMJ). Twelve amateur volleyball players performed SJ from two initial positions and CMJ with two degrees of counterbalancing, while kinematic data were collected (jump height, push-off duration and position of the centre of mass). The model fitted the real SJ data well throughout the impulse phase, and slightly less adequately at the end of this phase for CMJ. Nevertheless, it provides a satisfactory explanation for the generation and utilisation of active power in both types of jump. On average, the estimated power of the active elements, the spring and the damper was greater in the SJ. Based on the results obtained with this model, we suggest that active power is best evaluated with SJ. The reason is that, during this kind of jump, the elements associated with the damper consume much of the energy produced by the active elements. The participation of these energy-consuming elements is smaller in CMJ than in SJ, allowing better utilisation of this energy. In this way it is possible to achieve a better performance in CMJ with less active power.

  12. Spectral unmixing of multi-color tissue specific in vivo fluorescence in mice

    NASA Astrophysics Data System (ADS)

    Zacharakis, Giannis; Favicchio, Rosy; Garofalakis, Anikitos; Psycharakis, Stylianos; Mamalaki, Clio; Ripoll, Jorge

    2007-07-01

    Fluorescence Molecular Tomography (FMT) has emerged as a powerful tool for monitoring biological functions in vivo in small animals. It provides the means to determine volumetric images of fluorescent protein concentration by applying the principles of diffuse optical tomography. Using different probes tagged to different proteins or cells, different biological functions and pathways can be simultaneously imaged in the same subject. In this work we present a spectral unmixing algorithm capable of separating signal from different probes when combined with the tomographic imaging modality. We show results of two-color imaging when the algorithm is applied to separate fluorescence activity originating from phantoms containing two different fluorophores, namely CFSE and SNARF, with well separated emission spectra, as well as DsRed- and GFP-fused cells in F5-b10 transgenic mice in vivo. The same algorithm can furthermore be applied to tissue-specific spectroscopy data. Spectral analysis of a variety of organs from control, DsRed and GFP F5/B10 transgenic mice showed that fluorophore detection by optical systems is highly tissue-dependent. Spectral data collected from different organs can provide useful insight into experimental parameter optimisation (choice of filters, fluorophores, excitation wavelengths) and spectral unmixing can be applied to measure the tissue-dependency, thereby taking into account localized fluorophore efficiency. In summary, tissue spectral unmixing can be used as a criterion for choosing the most appropriate tissue targets as well as fluorescent markers for specific applications.
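
    Linear spectral unmixing of the kind described — separating overlapping fluorophore contributions from a measured spectrum — reduces to a least-squares problem against known reference emission spectra. A toy NumPy sketch with entirely hypothetical spectra (not the authors' algorithm or data):

```python
import numpy as np

# Hypothetical reference emission spectra sampled at 6 wavelengths
# (columns: a GFP-like and a DsRed-like fluorophore)
A = np.array([[0.9, 0.0],
              [0.7, 0.1],
              [0.4, 0.3],
              [0.2, 0.8],
              [0.1, 0.9],
              [0.0, 0.5]])

true_abundance = np.array([2.0, 3.0])   # fluorophore contributions to recover
measured = A @ true_abundance           # noiseless mixed spectrum

# Unmix: least-squares estimate of the per-fluorophore abundance
abundance, *_ = np.linalg.lstsq(A, measured, rcond=None)
```

With well-separated reference spectra the system is well conditioned and the abundances are recovered exactly in the noiseless case; with overlapping spectra or noise, the conditioning of A governs how reliably the signals can be separated.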

  13. Gold coast seaway smartrelease decision support system: optimising recycled water release in a sub tropical estuarine environment.

    PubMed

    Stuart, G; Hollingsworth, A; Thomsen, F; Szylkarski, S; Khan, S; Tomlinson, R; Kirkpatrick, S; Catterall, K; Capati, B

    2009-01-01

    Gold Coast Water is responsible for the management of the water, recycled water and wastewater assets of the City of the Gold Coast on Australia's east coast. Excess treated recycled water is released at the Gold Coast Seaway, a man-made channel connecting the Broadwater Estuary with the Pacific Ocean, on an outgoing tide in order for the recycled water to be dispersed before the tide changes and re-enters the Broadwater estuary. Rapid population growth has placed increasing demands on the city's recycled water release system, and an investigation of the capacity of the Broadwater to assimilate a greater volume of recycled water over a longer release period was undertaken in 2007. As an outcome, Gold Coast Water was granted an extension of the existing release licence from 10.5 hours per day to 13.3 hours per day from the Coombabah wastewater treatment plant (WWTP). The Seaway SmartRelease Project has been designed to optimise the release of the recycled water from the Coombabah WWTP in order to minimise the impact on the receiving estuarine water quality and maximise the cost efficiency of pumping. In order to achieve this, an optimisation study involving intensive hydrodynamic and water quality monitoring, numerical modelling and a web-based decision support system is underway. An intensive monitoring campaign provided information on water levels, currents, winds, waves, nutrients and bacterial levels within the Broadwater. These data were then used to calibrate and verify numerical models using the MIKE by DHI suite of software. The Decision Support System will then collect continually measured data such as water levels, interact with the WWTP SCADA system, run the numerical models and provide the optimal time window to release the required amount of recycled water from the WWTP within the licence specifications.

  14. Sixth Form Colleges: Isomorphism, Adaptation and the New Education Market

    ERIC Educational Resources Information Center

    Stoten, David William

    2014-01-01

    The English education sector is undergoing major restructuring with the coalition government placing great emphasis on two key principles of policy: deregulation and marketisation. This development follows on two decades in which the British state has sought to raise performance levels and reduce costs through a variety of policies that are drawn…

  15. Location, timing and extent of wildfire vary by cause of ignition

    USGS Publications Warehouse

    Syphard, Alexandra D.; Keeley, Jon E.

    2015-01-01

    The increasing extent of wildfires has prompted investigation into alternative fire management approaches to complement the traditional strategies of fire suppression and fuels manipulation. Wildfire prevention through ignition reduction is an approach with potential for success, but ignitions result from a variety of causes. If some ignition sources result in higher levels of area burned, then ignition prevention programmes could be optimised to target these distributions in space and time. We investigated the most common ignition causes in two southern California sub-regions, where humans are responsible for more than 95% of all fires, and asked whether these causes exhibited distinct spatial or intra-annual temporal patterns, or resulted in different extents of fire in 10-29-year periods, depending on sub-region. Different ignition causes had distinct spatial patterns and those that burned the most area tended to occur in autumn months. Both the number of fires and area burned varied according to cause of ignition, but the cause of the most numerous fires was not always the cause of the greatest area burned. In both sub-regions, power line ignitions were one of the top two causes of area burned: the other major causes were arson in one sub-region and power equipment in the other. Equipment use also caused the largest number of fires in both sub-regions. These results have important implications for understanding why, where and how ignitions are caused, and in turn, how to develop strategies to prioritise and focus fire prevention efforts. Fire extent has increased tremendously in southern California, and because most fires are caused by humans, ignition reduction offers a potentially powerful management strategy, especially if optimised to reflect the distinct spatial and temporal distributions in different ignition causes.

  16. Methylene blue not ferrocene: Optimal reporters for electrochemical detection of protease activity.

    PubMed

    González-Fernández, Eva; Avlonitis, Nicolaos; Murray, Alan F; Mount, Andrew R; Bradley, Mark

    2016-10-15

    Electrochemical peptide-based biosensors are attracting significant attention for the detection and analysis of proteins. Here we report the optimisation and evaluation of an electrochemical biosensor for the detection of protease activity using self-assembled monolayers (SAMs) on gold surfaces, using trypsin as a model protease. The principle of detection was the specific proteolytic cleavage of redox-tagged peptides by trypsin, which causes the release of the redox reporter, resulting in a decrease of the peak current as measured by square wave voltammetry. A systematic enhancement of detection was achieved through optimisation of the properties of the redox-tagged peptide; this included for the first time a side-by-side study of the applicability of two of the most commonly applied redox reporters used for developing electrochemical biosensors, ferrocene and methylene blue, along with the effect of changing both the nature of the spacer and the composition of the SAM. Methylene blue-tagged peptides combined with a polyethylene-glycol (PEG) based spacer were shown to be the best platform for trypsin detection, leading to the highest fidelity signals (characterised by the highest sensitivity (signal gain) and a much more stable background than that registered when using ferrocene as a reporter). A ternary SAM (T-SAM) configuration, which included a PEG-based dithiol, minimised the non-specific adsorption of other proteins and was sensitive towards trypsin in the clinically relevant range, with a Limit of Detection (LoD) of 250 pM. Kinetic analysis of the electrochemical response with time showed a good fit to a Michaelis-Menten surface cleavage model, enabling the extraction of values for kcat and KM. Fitting to this model enabled quantitative determination of the solution concentration of trypsin across the entire measurement range.
Studies using an enzyme inhibitor and a range of real world possible interferents demonstrated a selective response to trypsin cleavage. This indicates that a PEG-based peptide, employing methylene blue as redox reporter, and deposited on an electrode as a ternary SAM configuration, is a suitable platform to develop clinically-relevant and quantitative electrochemical peptide-based protease biosensing. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
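
    The kinetic analysis mentioned — fitting a Michaelis-Menten model to extract Vmax (hence kcat) and KM — can be sketched by linearising v = Vmax·S/(KM + S) into the Lineweaver-Burk form 1/v = (KM/Vmax)(1/S) + 1/Vmax and fitting a straight line. The data below are synthetic, not the paper's measurements:

```python
import numpy as np

Vmax_true, Km_true = 4.0, 0.5
S = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 5.0])   # substrate concentrations
v = Vmax_true * S / (Km_true + S)               # noiseless initial rates

# Lineweaver-Burk linearisation: 1/v = (Km/Vmax) * (1/S) + 1/Vmax
slope, intercept = np.polyfit(1.0 / S, 1.0 / v, 1)
Vmax_fit = 1.0 / intercept
Km_fit = slope * Vmax_fit
```

In practice a direct nonlinear fit to the Michaelis-Menten curve is preferred with noisy data, since the double-reciprocal transform inflates error at low substrate concentrations; the linearisation is shown here only because it needs nothing beyond a straight-line fit.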

  17. Integrating the Principles of Socioecology and Critical Pedagogy for Health Promotion Health Literacy Interventions.

    PubMed

    Dawkins-Moultin, Lenna; McDonald, Andrea; McKyer, Lisako

    2016-01-01

    While health literacy research has experienced tremendous growth in the last two decades, the field still struggles to devise interventions that lead to lasting change. Most health literacy interventions are at the individual level and focus on resolving clinician-patient communication difficulties. As a result, the interventions use a deficit model that treats health literacy as a patient problem that needs to be fixed or circumvented. We propose that public health health literacy interventions integrate the principles of socioecology and critical pedagogy to develop interventions that build capacity and empower individuals and communities. Socioecology operates on the premise that health outcomes hinge on the interplay between individuals and their environment. Critical pedagogy assumes education is inherently political, and that the ultimate goal of education is social change. Integrating these two approaches will provide a useful frame in which to develop interventions that move beyond the individual level.

  18. Maximisation Principles in Foodwebs and Daisyworlds

    NASA Astrophysics Data System (ADS)

    Ackland, G. J.; Gallagher, I. D.

    2005-12-01

    Using computer simulation we investigate whether the steady-state time-averaged state of a self-organising system with many internal degrees of freedom can be described by optimising a single quantity. Our open systems follow evolutionary dynamics, hence the conservation laws and energy-based state probabilities which underpin Hamiltonian dynamics do not apply. We find that these dynamics observe a novel optimality principle: the system self-organises to a state which maximises the sustainable amount of replicating objects. We have studied a number of mathematical models of evolving replicating systems: daisyworlds [1], the logistic map and generalised Lotka-Volterra foodwebs [2]. Each is characterised by being (1) "open" - resources flow into and out of the system; (2) "self-regulating" - the inflow/outflow of resources is not fixed externally; (3) "evolving" - the increase in population at the next timestep depends on the population at the current timestep. These properties violate the assumptions made in deriving optimality principles such as free energy minimisation and maximum/minimum entropy production, so it is unsurprising that those principles are not observed. The absence of a Hamiltonian for ecosystems is particularly problematic for coupled models of life and the environment; moreover, there is ambiguity in defining an entropy for an ecosystem. By considering large and small species within the 2D daisyworld model we show that the appropriate measure comes from the interaction with the rest of the system, not the information-theoretic entropy of the daisy field. We introduce evolution within the classic Lotka-Volterra model for interaction between species in an ecosystem. Generalisation to many species is straightforward, but the resulting network is usually unstable. By restricting the number of links between species it is possible to form a stable network by evolution, allowing some species to go extinct.
This method can be used to generate arbitrarily large networks, from which a treelike structure of trophic levels emerges, but typically the number of connections is much smaller than in real ecosystems. Here, we show that applying evolution to the strength of the links, rather than simply to their existence, stabilises the entire network and generates a power-law distribution of link strengths. The network dynamics are chaotic, but as a whole tend towards maximising the use of resources. If the dynamics are linearised to remove the chaos, the scale-free link strengths also disappear. [1] Maximisation Principles and Daisyworld, G.J. Ackland, J. Theor. Biol. 227, 121 (2004). [2] Stabilization of large generalized Lotka-Volterra foodwebs by evolutionary feedback, G.J. Ackland and I.D. Gallagher, Phys. Rev. Lett. 93, 158701 (2004).
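
    The Lotka-Volterra dynamics underlying these foodweb models can be sketched for the classic two-species case, dx/dt = x(a − by), dy/dt = y(−c + dx). At the equilibrium (c/d, a/b) both derivatives vanish, so a forward-Euler integrator started there stays put — a simple consistency check on the implementation:

```python
def lv_step(x, y, a, b, c, d, dt):
    """One forward-Euler step of the two-species Lotka-Volterra system."""
    dx = x * (a - b * y)      # prey growth minus predation
    dy = y * (-c + d * x)     # predator decay plus feeding
    return x + dt * dx, y + dt * dy

a, b, c, d = 1.0, 0.5, 0.8, 0.4
x, y = c / d, a / b           # equilibrium point: both derivatives are zero here
for _ in range(100):
    x, y = lv_step(x, y, a, b, c, d, 0.01)
```

The generalised many-species form studied in [2] replaces the two coupling constants with an interaction matrix, and it is the evolution of that matrix's entries (the link strengths) that the abstract describes.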

  19. Multi-Optimisation Consensus Clustering

    NASA Astrophysics Data System (ADS)

    Li, Jian; Swift, Stephen; Liu, Xiaohui

    Ensemble Clustering has been developed to provide an alternative way of obtaining more stable and accurate clustering results. It aims to avoid the biases of individual clustering algorithms. However, it is still a challenge to develop an efficient and robust method for Ensemble Clustering. Based on an existing ensemble clustering method, Consensus Clustering (CC), this paper introduces an advanced Consensus Clustering algorithm called Multi-Optimisation Consensus Clustering (MOCC), which utilises an optimised Agreement Separation criterion and a Multi-Optimisation framework to improve the performance of CC. Fifteen different data sets are used for evaluating the performance of MOCC. The results reveal that MOCC can generate more accurate clustering results than the original CC algorithm.
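
    A common building block of Consensus Clustering is the co-association matrix: the fraction of base clusterings in which each pair of items shares a cluster, which can then be thresholded or re-clustered to give the consensus partition. A minimal sketch of that building block (the agreement-separation criterion and multi-optimisation framework of MOCC go well beyond this):

```python
def co_association(labelings, n):
    """Fraction of base clusterings in which each pair of items co-clusters."""
    m = [[0.0] * n for _ in range(n)]
    for labels in labelings:
        for i in range(n):
            for j in range(n):
                if labels[i] == labels[j]:
                    m[i][j] += 1.0 / len(labelings)
    return m

# Three base clusterings of 4 items (cluster labels are arbitrary per run)
runs = [[0, 0, 1, 1], [1, 1, 0, 0], [0, 0, 0, 1]]
consensus = co_association(runs, 4)
```

Items 0 and 1 co-cluster in every run, so their consensus value is 1.0 regardless of the arbitrary labels each base algorithm assigned; this label-invariance is what makes the co-association representation convenient for combining heterogeneous clusterings.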

  20. Power generation based on biomass by combined fermentation and gasification--a new concept derived from experiments and modelling.

    PubMed

    Methling, Torsten; Armbrust, Nina; Haitz, Thilo; Speidel, Michael; Poboss, Norman; Braun-Unkhoff, Marina; Dieter, Heiko; Kempter-Regel, Brigitte; Kraaij, Gerard; Schliessmann, Ursula; Sterr, Yasemin; Wörner, Antje; Hirth, Thomas; Riedel, Uwe; Scheffknecht, Günter

    2014-10-01

    A new concept is proposed for combined fermentation (two-stage high-load fermenter) and gasification (two-stage fluidised bed gasifier with CO2 separation) of sewage sludge and wood, and the subsequent utilisation of the biogenic gases in a hybrid power plant, consisting of a solid oxide fuel cell and a gas turbine. The development and optimisation of the important processes of the new concept (fermentation, gasification, utilisation) are reported in detail. For the gas production, process parameters were experimentally and numerically investigated to achieve high conversion rates of biomass. For the product gas utilisation, important combustion properties (laminar flame speed, ignition delay time) were analysed numerically to evaluate machinery operation (reliability, emissions). Furthermore, the coupling of the processes was numerically analysed and optimised by means of integration of heat and mass flows. The high simulated electrical efficiency of 42%, including the conversion of raw biomass, is promising for future power generation from biomass. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Employing multi-GPU power for molecular dynamics simulation: an extension of GALAMOST

    NASA Astrophysics Data System (ADS)

    Zhu, You-Liang; Pan, Deng; Li, Zhan-Wei; Liu, Hong; Qian, Hu-Jun; Zhao, Yang; Lu, Zhong-Yuan; Sun, Zhao-Yan

    2018-04-01

    We describe the algorithm for employing multi-GPU power on the basis of Message Passing Interface (MPI) domain decomposition in a molecular dynamics code, GALAMOST, which is designed for the coarse-grained simulation of soft matter. The multi-GPU version is developed based on our previous single-GPU version. In multi-GPU runs, each GPU takes charge of one domain and runs the single-GPU code path. The communication between neighbouring domains follows an algorithm similar to that of the CPU-based LAMMPS code, but is optimised specifically for GPUs. We employ a memory-saving design which enlarges the maximum system size on the same device. An optimisation algorithm is employed to prolong the update period of the neighbour list. We demonstrate good performance of multi-GPU runs in simulations of Lennard-Jones liquid, dissipative particle dynamics liquid, polymer and nanoparticle composite, and two-patch particles on a workstation. Good scaling across many cluster nodes is presented for two-patch particles.
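
    The domain-decomposition pattern described — each device owning one spatial domain and exchanging boundary data with its neighbours — can be sketched serially in 1D, with ghost cells standing in for the MPI halo communication. This is a toy model of the idea, not GALAMOST's implementation:

```python
def exchange_ghosts(domains):
    """Fill each domain's ghost cells from its periodic neighbours' edge cells.
    domains[i] = [left_ghost, interior..., right_ghost]."""
    n = len(domains)
    for i, d in enumerate(domains):
        d[0] = domains[(i - 1) % n][-2]    # left ghost <- left neighbour's last interior cell
        d[-1] = domains[(i + 1) % n][1]    # right ghost <- right neighbour's first interior cell

# Two domains covering a periodic 1D field [10, 20, 30, 40]; 0 marks unfilled ghosts
domains = [[0, 10, 20, 0], [0, 30, 40, 0]]
exchange_ghosts(domains)
```

In the parallel setting each "domain" lives on a different GPU and the two assignments become MPI send/receive pairs; after the exchange, every device can compute short-ranged forces for its interior particles without further communication until the neighbour list is rebuilt.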

  2. Influence of the Size of Cohorts in Adaptive Design for Nonlinear Mixed Effects Models: An Evaluation by Simulation for a Pharmacokinetic and Pharmacodynamic Model for a Biomarker in Oncology

    PubMed Central

    Lestini, Giulia; Dumont, Cyrielle; Mentré, France

    2015-01-01

    Purpose In this study we aimed to evaluate adaptive designs (ADs) by clinical trial simulation for a pharmacokinetic-pharmacodynamic model in oncology and to compare them with one-stage designs, i.e. when no adaptation is performed, using wrong prior parameters. Methods We evaluated two one-stage designs, ξ0 and ξ*, optimised for prior and true population parameters, Ψ0 and Ψ*, and several ADs (two-, three- and five-stage). All designs had 50 patients. For ADs, the first cohort design was ξ0. The next cohort design was optimised using prior information updated from the previous cohort. Optimal design was based on the determinant of the Fisher information matrix using PFIM. Design evaluation was performed by clinical trial simulations using data simulated from Ψ*. Results Estimation results of two-stage ADs and ξ* were close and much better than those obtained with ξ0. The balanced two-stage AD performed better than two-stage ADs with different cohort sizes. Three-and five-stage ADs were better than two-stage with small first cohort, but not better than the balanced two-stage design. Conclusions Two-stage ADs are useful when prior parameters are unreliable. In case of small first cohort, more adaptations are needed but these designs are complex to implement. PMID:26123680

  3. Influence of the Size of Cohorts in Adaptive Design for Nonlinear Mixed Effects Models: An Evaluation by Simulation for a Pharmacokinetic and Pharmacodynamic Model for a Biomarker in Oncology.

    PubMed

    Lestini, Giulia; Dumont, Cyrielle; Mentré, France

    2015-10-01

    In this study we aimed to evaluate adaptive designs (ADs) by clinical trial simulation for a pharmacokinetic-pharmacodynamic model in oncology and to compare them with one-stage designs, i.e., when no adaptation is performed, using wrong prior parameters. We evaluated two one-stage designs, ξ0 and ξ*, optimised for prior and true population parameters, Ψ0 and Ψ*, and several ADs (two-, three- and five-stage). All designs had 50 patients. For ADs, the first cohort design was ξ0. The next cohort design was optimised using prior information updated from the previous cohort. Optimal design was based on the determinant of the Fisher information matrix using PFIM. Design evaluation was performed by clinical trial simulations using data simulated from Ψ*. Estimation results of two-stage ADs and ξ* were close and much better than those obtained with ξ0. The balanced two-stage AD performed better than two-stage ADs with different cohort sizes. Three- and five-stage ADs were better than two-stage with small first cohort, but not better than the balanced two-stage design. Two-stage ADs are useful when prior parameters are unreliable. In case of a small first cohort, more adaptations are needed, but these designs are complex to implement.
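
    Optimal design via the determinant of the Fisher information matrix (D-optimality), as used above through PFIM, can be illustrated in the simplest setting: for a straight-line model the FIM is proportional to XᵀX, and spreading observations to the extremes of the design range maximises its determinant. This toy sketch only conveys the criterion, not the nonlinear mixed-effects computation PFIM performs:

```python
import numpy as np

def d_criterion(times):
    """det(X'X) for a straight-line model y = b0 + b1*t sampled at `times`."""
    X = np.column_stack([np.ones(len(times)), np.asarray(times, float)])
    return np.linalg.det(X.T @ X)

clustered = d_criterion([0.4, 0.5, 0.5, 0.6])   # samples bunched mid-range
spread = d_criterion([0.0, 0.0, 1.0, 1.0])      # samples at the design extremes
```

Adaptive designs exploit exactly this comparison: after each cohort the parameter estimates are updated and the sampling times for the next cohort are chosen to maximise the (updated) determinant.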

  4. A pragmatic multi-centre randomised controlled trial of fluid loading and level of dependency in high-risk surgical patients undergoing major elective surgery: trial protocol

    PubMed Central

    2010-01-01

    Background Patients undergoing major elective or urgent surgery are at high risk of death or significant morbidity. Measures to reduce this morbidity and mortality include pre-operative optimisation and use of higher levels of dependency care after surgery. We propose a pragmatic multi-centre randomised controlled trial of level of dependency and pre-operative fluid therapy in high-risk surgical patients undergoing major elective surgery. Methods/Design A multi-centre randomised controlled trial with a 2 × 2 factorial design. The first randomisation is to pre-operative fluid therapy or standard regimen and the second randomisation is to routine intensive care versus high dependency care during the early post-operative period. We intend to recruit 204 patients undergoing major elective and urgent abdominal and thoraco-abdominal surgery who fulfil high-risk surgical criteria. The primary outcome for the comparison of level of care is cost-effectiveness at six months and for the comparison of fluid optimisation is the number of hospital days after surgery. Discussion We believe that the results of this study will be invaluable in determining the future care and clinical resource utilisation for this group of patients and thus will have a major impact on clinical practice. Trial Registration Trial registration number - ISRCTN32188676 PMID:20398378

  5. From SED HI concept to Pleiades FM detection unit measurements

    NASA Astrophysics Data System (ADS)

    Renard, Christophe; Dantes, Didier; Neveu, Claude; Lamard, Jean-Luc; Oudinot, Matthieu; Materne, Alex

    2017-11-01

    The first flight model PLEIADES high resolution instrument under Thales Alenia Space development, on behalf of CNES, is currently in the integration and test phases. Based on the SED HI detection unit concept, the PLEIADES detection unit has been fully qualified before integration at telescope level. The main radiometric performances have been measured on the engineering and first flight models. This paper presents the performance results obtained on both models. After a review of the SED HI concept and the design and performances of the main elements (charge coupled detectors, focal plane and video processing unit), detection unit radiometric performances are presented and compared to the instrument specifications for the panchromatic and multispectral bands. The performances treated are the following: video signal characteristics; dark signal level and dark signal non-uniformity; photo-response non-uniformity; non-linearity and differential non-linearity; temporal and spatial noises with respect to system definitions. The PLEIADES detection unit allows tuning of different functions: reference and sampling time positioning, anti-blooming level, gain value, and TDI line number. These parameters are presented with their associated optimisation criteria for achieving system radiometric performances, and with their sensitivities on radiometric performances. All the results of the measurements performed by Thales Alenia Space on the PLEIADES detection units demonstrate the high potential of the SED HI concept for Earth high-resolution observation systems, allowing optimised performances at instrument and satellite levels.

  6. Educational Development in the State of Kuwait during the Years 1974/75 and 1976/76.

    ERIC Educational Resources Information Center

    Ministry of Education (Kuwait).

    Four main levels comprise the educational ladder in Kuwait: kindergarten, primary, intermediate, and secondary. Ministry of Education plans to improve education on these levels and in the institutes which offer two years of specialized training beyond secondary school are outlined in the document. The principles of separate but equal education for…

  7. Neural reuse leads to associative connections between concrete (physical) and abstract (social) concepts and motives.

    PubMed

    Wang, Yimeng; Bargh, John A

    2016-01-01

    Consistent with neural reuse theory, empirical tests of the related "scaffolding" principle of abstract concept development show that higher-level concepts "reuse" and are built upon fundamental motives such as survival, safety, and consumption. This produces mutual influence between the two levels, with far-ranging impacts from consumer behavior to political attitudes.

  8. Cell population heterogeneity and evolution towards drug resistance in cancer: Biological and mathematical assessment, theoretical treatment optimisation.

    PubMed

    Chisholm, Rebecca H; Lorenzi, Tommaso; Clairambault, Jean

    2016-11-01

    Drug-induced drug resistance in cancer has been attributed to diverse biological mechanisms at the individual cell or cell population scale, relying on stochastically or epigenetically varying expression of phenotypes at the single cell level, and on the adaptability of tumours at the cell population level. We focus on intra-tumour heterogeneity, namely between-cell variability within cancer cell populations, to account for drug resistance. To shed light on such heterogeneity, we review evolutionary mechanisms that encompass the great evolution that has designed multicellular organisms, as well as smaller windows of evolution on the time scale of human disease. We also present mathematical models used to predict drug resistance in cancer and optimal control methods that can circumvent it in combined therapeutic strategies. Plasticity in cancer cells, i.e., partial reversal to a stem-like status in individual cells and resulting adaptability of cancer cell populations, may be viewed as backward evolution making cancer cell populations resistant to drug insult. This reversible plasticity is captured by mathematical models that incorporate between-cell heterogeneity through continuous phenotypic variables. Such models have the benefit of being compatible with optimal control methods for the design of optimised therapeutic protocols involving combinations of cytotoxic and cytostatic treatments with epigenetic drugs and immunotherapies. Gathering knowledge from cancer and evolutionary biology with physiologically based mathematical models of cell population dynamics should provide oncologists with a rationale to design optimised therapeutic strategies to circumvent drug resistance, that still remains a major pitfall of cancer therapeutics. This article is part of a Special Issue entitled "System Genetics" Guest Editor: Dr. Yudong Cai and Dr. Tao Huang. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Protonation states of intermediates in the reaction mechanism of [NiFe] hydrogenase studied by computational methods.

    PubMed

    Dong, Geng; Ryde, Ulf

    2016-06-01

    The [NiFe] hydrogenases catalyse the reversible conversion of H2 to protons and electrons. The active site consists of a Fe ion with one carbon monoxide, two cyanide, and two cysteine (Cys) ligands. The latter two bridge to a Ni ion, which has two additional terminal Cys ligands. It has been suggested that one of the Cys residues is protonated during the reaction mechanism. We have used combined quantum mechanical and molecular mechanics (QM/MM) geometry optimisations, large QM calculations with 817 atoms, and QM/MM free energy simulations, using the TPSS and B3LYP methods with basis sets extrapolated to the quadruple zeta level to determine which of the four Cys residues is more favourable to protonate for four putative states in the reaction mechanism, Ni-SIa, Ni-R, Ni-C, and Ni-L. The calculations show that for all states, the terminal Cys-546 residue is most easily protonated by 14-51 kJ/mol, owing to a more favourable hydrogen-bond pattern around this residue in the protein.

  10. Optimising physical activity engagement during youth sport: a self-determination theory approach.

    PubMed

    Fenton, Sally A M; Duda, Joan L; Barrett, Timothy

    2016-10-01

    Research suggests participation in youth sport does not guarantee physical activity (PA) guidelines are met. Studies indicate few children achieve recommended levels of moderate-to-vigorous physical activity (MVPA) during their youth sport involvement, and habitual levels of MVPA are below the recommended 60 min per day. Informed by self-determination theory, this study examined whether the coach-created social environment and related player motivation predict variability in objectively measured MVPA within the youth sport setting. Seventy three male youth sport footballers (Mean age = 11.66 ± 1.62) completed a multisection questionnaire assessing their perceptions of the social environment created in youth sport (autonomy supportive and controlling) and motivation towards their football participation (autonomous and controlled). Intensity of PA during youth sport was measured using accelerometers (GT3X, ActiGraph). Results supported a model in which perceptions of autonomy support significantly and positively predicted autonomous motivation towards football, which in turn significantly and positively predicted youth sport MVPA (% time). A significant indirect effect was observed for perceptions of autonomy support on youth sport %MVPA via autonomous motivation. Results have implications for optimising MVPA engagement during youth sport and increasing daily MVPA towards recommended and health-enhancing levels on youth sport days.
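
    The indirect effect reported — autonomy support acting on MVPA via autonomous motivation — is conventionally estimated as the product of two regression coefficients (a·b). A toy product-of-coefficients sketch on synthetic noiseless data, using simple regressions without the covariate adjustment a full mediation model would include (not the study's data or model):

```python
import numpy as np

support = np.linspace(1.0, 5.0, 20)        # hypothetical autonomy-support scores
motivation = 2.0 * support + 1.0           # mediator: autonomous motivation (path a = 2)
mvpa = 3.0 * motivation + 5.0              # outcome: %MVPA (path b = 3)

a = np.polyfit(support, motivation, 1)[0]  # path a: regress mediator on predictor
b = np.polyfit(motivation, mvpa, 1)[0]     # path b: regress outcome on mediator
indirect = a * b                           # product-of-coefficients indirect effect
```

With real data, the path-b regression would also include the predictor as a covariate, and the significance of a·b is usually assessed by bootstrapping rather than a point estimate.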

  11. Development of Low-Fat Soft Dough Biscuits Using Carbohydrate-Based Fat Replacers

    PubMed Central

    Singh, Gurmukh; Kumbhar, B. K.

    2013-01-01

    Experiments were conducted to develop low-fat soft dough biscuits using carbohydrate-based fat replacers (maltodextrin and guar gum). A central composite rotatable design was used to optimise the level of sugar 24–36%, composite fat (fat 10.5–24.5%, maltodextrin 10.4–24%, and guar gum 0.1–0.5%), ammonium bicarbonate 0.5–2.5%, and water 20–24% for production of low-fat biscuits. Diameter (P < 0.01) and stress-strain ratio (P < 0.05) decreased significantly with increase in the amount of sugar. There was a significant decrease (P < 0.01) in spread ratio at high amount of water. Hardness was significantly affected by the interactions of ammonium bicarbonate with sugar (P < 0.05) and fat (P < 0.1). The optimum level of ingredients obtained for low-fat biscuits was sugar 31.7 g, fat 13.55 g, maltodextrin 21.15 g, guar gum 0.3 g, ammonium bicarbonate 2.21 g, and water 21 mL based on 100 g flour. The fat level in the optimised low-fat biscuit formulation was found to be 8.48% as compared to 22.65% in control; therefore, the reduction in fat was 62.5%. PMID:26904603
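The stated fat reduction follows directly from the two fat levels in the abstract; a quick arithmetic check:

```python
# Fat levels (% of biscuit) as reported in the abstract
control_fat = 22.65
low_fat = 8.48

# Relative reduction in fat content
reduction = (control_fat - low_fat) / control_fat * 100
print(f"Fat reduction: {reduction:.1f}%")  # consistent with the reported ~62.5%
```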

  12. Comparison of two kinds of interface, based on guided navigation or usability principles, for improving the adoption of computerized decision support systems: application to the prescription of antibiotics

    PubMed Central

    Tsopra, Rosy; Jais, Jean-Philippe; Venot, Alain; Duclos, Catherine

    2014-01-01

    Context It is important to consider the way in which information is presented by the interfaces of clinical decision support systems, to favor the adoption of these systems by physicians. Interface design can focus on decision processes (guided navigation) or usability principles. Objective The aim of this study was to compare these two approaches in terms of perceived usability, accuracy rate, and confidence in the system. Materials and methods We displayed clinical practice guidelines for antibiotic treatment via two types of interface, which we compared in a crossover design. General practitioners were asked to provide responses for 10 clinical cases and the System Usability Scale (SUS) for each interface. We assessed SUS scores, the number of correct responses, and the confidence level for each interface. Results SUS score and percentage confidence were significantly higher for the interface designed according to usability principles (81 vs 51, p=0.00004, and 88.8% vs 80.7%, p=0.004). The percentage of correct responses was similar for the two interfaces. Discussion/conclusion The interface designed according to usability principles was perceived to be more usable and inspired greater confidence among physicians than the guided navigation interface. Consideration of usability principles in the construction of an interface—in particular ‘effective information presentation’, ‘consistency’, ‘efficient interactions’, ‘effective use of language’, and ‘minimizing cognitive load’—seemed to improve perceived usability and confidence in the system. PMID:24008427
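The SUS scores compared above (81 vs 51) come from the standard System Usability Scale scoring rule (Brooke, 1996): each of the ten 1–5 Likert items contributes (score − 1) if positively worded (odd items) or (5 − score) if negatively worded (even items), and the sum is multiplied by 2.5 to give a 0–100 score. A minimal sketch; the responses below are illustrative, not data from the study:

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items are positively worded and contribute (score - 1);
    even-numbered items are negatively worded and contribute (5 - score).
    """
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    # i is 0-based, so an even index corresponds to an odd-numbered item
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5  # scale the 0-40 raw sum to 0-100

# Best possible answers: 5 on positive items, 1 on negative items
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```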

  13. 5-Point programme for sustainable plant protection.

    PubMed

    Frische, Tobias; Egerer, Sina; Matezki, Steffen; Pickl, Christina; Wogram, Jörn

    2018-01-01

    This position paper intends to stimulate a profound rethinking of contemporary agricultural practice. We criticise the current intensity of chemical plant protection in Germany as ecologically unsustainable and thus threatening the achievement of key targets of environmental protection and nature conservation policies. In the first part of the paper, we provide background information on the use of plant protection products (PPP) in German agriculture, the role of agricultural policy, European pesticide legislation, the principles of and framework for environmental risk assessment and risk management of PPP, as well as environmental effects of PPP. The second part is presented against the backdrop of the European "Sustainable Use Directive" (2009/128/EC). This directive requires that "Member States shall adopt National Action Plans to set up their quantitative objectives, targets, measures, and timetables to reduce risks and impacts of pesticide use on human health and the environment and to encourage the development and introduction of integrated pest management and of alternative approaches or techniques to reduce dependency on the use of pesticides." Reflecting on the corresponding debate in Germany, we suggest the following five key principles for a sustainable use of PPP and provide recommendations for their implementation: (1) minimising use; (2) identifying, quantifying, and communicating risks; (3) optimising risk management; (4) compensating for unavoidable effects; (5) internalising external costs.

  14. High intensity positron source at HFR: Basic concept, scoring and design optimisation

    NASA Astrophysics Data System (ADS)

    Zeman, A.; Tuček, K.; Debarberis, L.; Hogenbirk, A.

    2012-01-01

Recent applications of positron beam techniques in various fields of research have led to an increasing demand for the high intensity positron sources required for advanced applications, particularly in materials science. Considerable efforts are being made worldwide to design and set up high intensity positron sources and beam systems based on several principles. Such positron sources could be used in fundamental and applied research experiments, as well as in industrial applications, especially in the field of condensed matter characterisation at the nanometre scale. Phenomena involving positrons are also important in other applied science fields such as medicine, biology, physics, and energy. However, such studies are often limited by the relative lack of suitable positron sources. Results from the recently completed Exploratory Research Project "HIPOS" are discussed in this paper, which describes the principles behind a very high intensity positron beam experimental facility based on a reactor source. Details of a proposed concept that uses nuclear reactions [(n, γ) and (γ, pair)] within a designed positron generator at the High Flux Reactor (HFR) in Petten are also discussed. The HIPOS source has been designed to produce slow positrons with an intensity of the order of 10¹⁰ e⁺/s.

  15. ICRP Publication 125: Radiological Protection in Security Screening.

    PubMed

    Cool, D A; Lazo, E; Tattersall, P; Simeonov, G; Niu, S

    2014-07-01

The use of technologies to provide security screening for individuals and objects has been increasing rapidly, in keeping with the significant increase in security concerns worldwide. Within the spectrum of technologies, the use of ionising radiation to provide backscatter and transmission screening capabilities has also increased. The Commission has previously made a number of statements related to the general topic of deliberate exposures of individuals in non-medical settings. This report provides advice on how the radiological protection principles recommended by the Commission should be applied within the context of security screening. More specifically, the principles of justification, optimisation of protection, and dose limitation for planned exposure situations are directly applicable to the use of ionising radiation in security screening. In addition, several specific topics are considered in this report, including the situation in which individuals may be exposed because they are concealed (‘stowaways’) in a cargo container or conveyance that may be subject to screening. The Commission continues to recommend that careful justification of screening should be considered before decisions are made to employ the technology. If a decision is made that its use is justified, the framework for protection as a planned exposure situation should be employed, including optimisation of protection with the use of dose constraints and the appropriate provisions for authorisation and inspection.

  16. Overview of the JET results in support to ITER

    DOE PAGES

    Litaudon, X.; Abduallev, S.; Abhangi, M.; ...

    2017-06-15

Here, the 2014–2016 JET results are reviewed in the light of their significance for optimising the ITER research plan for active and non-active operation. More than 60 h of plasma operation with ITER first wall materials has successfully taken place since their installation in 2011. New multi-machine scaling of the type I-ELM divertor energy flux density to ITER is supported by first principles modelling. ITER-relevant disruption experiments and first principles modelling are reported with a set of three disruption mitigation valves mimicking the ITER setup. Insights into the L–H power threshold in deuterium and hydrogen are given, stressing the importance of the magnetic configurations and the recent measurements of fine-scale structures in the edge radial electric field. Dimensionless scans of the core and pedestal confinement provide new information to elucidate the importance of the first wall material for fusion performance. H-mode plasmas at ITER triangularity (H = 1 at βN ~ 1.8 and n/nGW ~ 0.6) have been sustained at 2 MA for 5 s. The ITER neutronics codes have been validated on high performance experiments. Prospects for the coming D–T campaign and the 14 MeV neutron calibration strategy are reviewed.

  17. Forecasting of Radiation Belts: Results From the PROGRESS Project.

    NASA Astrophysics Data System (ADS)

    Balikhin, M. A.; Arber, T. D.; Ganushkina, N. Y.; Walker, S. N.

    2017-12-01

The overall goal of the PROGRESS project, funded under the EU Horizon 2020 programme, is to combine first-principles-based models with systems science methodologies to achieve reliable forecasts of the geo-space particle radiation environment. PROGRESS incorporates three themes: propagation of the solar wind to L1, forecasting of geomagnetic indices, and forecasting of fluxes of energetic electrons within the magnetosphere. One of the important aspects of the PROGRESS project is the development of statistical wave models for the magnetospheric waves that affect the dynamics of energetic electrons, such as lower band chorus, hiss and equatorial noise. The error reduction ratio (ERR) concept has been used to optimise the set of solar wind and geomagnetic parameters for the organisation of statistical wave models for these emissions. The resulting sets of parameters and statistical wave models are presented and discussed. However, the ERR analysis also indicates that the combination of solar wind and geomagnetic parameters accounts for only part of the variance of the emissions under investigation (lower band chorus, hiss and equatorial noise). In addition, advances in the forecasting of fluxes of energetic electrons, exploiting empirical models and the first-principles IMPTAM model, achieved by the PROGRESS project are presented.
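The error reduction ratio mentioned here is a standard systems-identification measure: after orthogonalising the candidate regressors, each ERR is the fraction of output variance explained by one orthogonal regressor. A toy sketch of the idea, not the PROGRESS implementation; the data are synthetic:

```python
import numpy as np

def error_reduction_ratios(y, regressors):
    """ERR of each candidate regressor via Gram-Schmidt orthogonalisation.

    ERR_i = (w_i . y)^2 / ((w_i . w_i) (y . y)), where w_i is the i-th
    regressor orthogonalised against all earlier ones.  The sum of the
    ERRs is the fraction of output variance the selected terms explain.
    """
    ws, errs = [], []
    yy = y @ y
    for p in regressors:
        w = p.astype(float)
        for wj in ws:                            # Gram-Schmidt step
            w = w - (wj @ p) / (wj @ wj) * wj
        ws.append(w)
        errs.append((w @ y) ** 2 / ((w @ w) * yy))
    return errs

rng = np.random.default_rng(0)
p1, p2 = rng.normal(size=200), rng.normal(size=200)
y = 2.0 * p1 + 0.1 * p2          # output dominated by the first regressor
errs = error_reduction_ratios(y, [p1, p2])
print(errs)  # ERR of p1 close to 1; for a noise-free model the ERRs sum to 1
```

Ranking candidate inputs by ERR is what lets a study like this pick the solar wind and geomagnetic parameters that matter most for each wave type.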

  18. Overview of the JET results in support to ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Litaudon, X.; Abduallev, S.; Abhangi, M.

Here, the 2014–2016 JET results are reviewed in the light of their significance for optimising the ITER research plan for active and non-active operation. More than 60 h of plasma operation with ITER first wall materials has successfully taken place since their installation in 2011. New multi-machine scaling of the type I-ELM divertor energy flux density to ITER is supported by first principles modelling. ITER-relevant disruption experiments and first principles modelling are reported with a set of three disruption mitigation valves mimicking the ITER setup. Insights into the L–H power threshold in deuterium and hydrogen are given, stressing the importance of the magnetic configurations and the recent measurements of fine-scale structures in the edge radial electric field. Dimensionless scans of the core and pedestal confinement provide new information to elucidate the importance of the first wall material for fusion performance. H-mode plasmas at ITER triangularity (H = 1 at βN ~ 1.8 and n/nGW ~ 0.6) have been sustained at 2 MA for 5 s. The ITER neutronics codes have been validated on high performance experiments. Prospects for the coming D–T campaign and the 14 MeV neutron calibration strategy are reviewed.

  19. Comparison of different coatings in solid-phase microextraction for the determination of organochlorine pesticides in ground water.

    PubMed

    Pérez-Trujillo, J P; Frías, S; Conde, J E; Rodríguez-Delgado, M A

    2002-07-19

A solid-phase microextraction (SPME) procedure using three commercialised fibres (Carbowax-divinylbenzene, Carboxen-polydimethylsiloxane and divinylbenzene-Carboxen-polydimethylsiloxane) is presented for the determination of a selected group of organochlorine compounds in water samples. The extraction performances for these compounds were compared using fibres with two and three coatings. The optimal experimental procedures for the adsorption and desorption of the pesticides were determined. The limits of detection with the divinylbenzene-Carboxen-polydimethylsiloxane fibre, at levels below ng l⁻¹, were similar to or lower than values presented in the literature for several of these compounds using the polydimethylsiloxane fibre. The advantages of using this fibre, such as no salt addition, are discussed. Finally, the optimised procedures were applied successfully to the determination of these compounds in polluted ground water samples.

  20. IgG Subclasses Targeting the Flagella of Salmonella enterica Serovar Typhimurium Can Mediate Phagocytosis and Bacterial Killing

    PubMed Central

    Goh, Yun Shan; Armour, Kathryn L; Clark, Michael R; Grant, Andrew J; Mastroeni, Pietro

    2016-01-01

Invasive non-typhoidal Salmonella (iNTS) are a common cause of invasive disease in immunocompromised individuals and in children. Multi-drug resistance poses challenges to disease control, and there is a critical need for effective vaccines. Flagellin is an attractive vaccine candidate owing to its surface exposure and high epitope copy number, but its potential as a target for opsonophagocytic antibodies is unclear. We examined the effect of targeting flagella with different subclasses of IgG on the interaction between Salmonella Typhimurium and a human phagocyte-like cell line, THP-1. We tagged the FliC flagellar protein with a foreign CD52 mimotope (TSSPSAD) and opsonised the bacteria with a panel of humanised CD52 antibodies with the same antigen-binding V-region but different constant regions. We found that IgG binding to flagella increases bacterial phagocytosis and reduces viable intracellular bacterial numbers. Opsonisation with IgG3 resulted in the highest level of bacterial uptake and the greatest reduction in the intracellular load of viable bacteria, followed by IgG1, IgG4, and IgG2. Taken together, our data provide proof-of-principle evidence that targeting flagella with antibodies can increase the antibacterial function of host cells, with IgG3 being the most potent subclass. These data will assist the rational design of urgently needed, optimised vaccines against iNTS disease. PMID:27366588

  1. [Optimising azathioprine treatment: determination of thiopurine methyltransferase activity and thiopurine metabolites].

    PubMed

    Alvarez Beltran, M; Infante Pina, D; Tormo Carnicé, R; Segarra Cantón, O; Redecillas Ferreiro, S

    2009-02-01

Individualised doses of azathioprine (AZA) may be prescribed by monitoring the activity of the enzyme thiopurine methyltransferase (TPMT). Measurement of the thiopurine metabolites of AZA, 6-thioguanine nucleotides (6-TGN) and 6-methylmercaptopurine (6-MMP), has also been reported as a new marker of AZA activity. The aims were to describe the TPMT phenotype in our population and to establish a relationship between thiopurine metabolites, therapeutic activity and adverse effects. Data on TPMT were retrospectively collected from 107 patients, and 6-TGN and 6-MMP levels from 18 patients currently on treatment with AZA (Crohn's disease 5, ulcerative colitis 5, autoimmune hepatitis 5). The mean value of TPMT activity was 20.19 U/ml. None of the patients had a TPMT activity < 5 U/ml. Of the 18 patients on treatment, 13 showed sub-therapeutic levels of 6-TGN (<235 pmol/8×10⁸ red blood cells). Clinical remission was maintained in 45% of patients. Mean levels of 6-TGN in patients with clinical remission were 259 pmol/8×10⁸ red blood cells versus 209 pmol/8×10⁸ red blood cells in non-responders (p=0.37). There was an inverse relationship (r=-0.28) between TPMT activity and 6-TGN levels. Toxic effects occurred in 6 of 18 patients, with leukopenia in 5 and hyperamylasaemia in 1. Determination of TPMT and monitoring of thiopurine metabolites allow AZA treatment to be optimised, although further studies are necessary to establish therapeutic effectiveness and toxicity ranges.

  2. Improving linear transport infrastructure efficiency by automated learning and optimised predictive maintenance techniques (INFRALERT)

    NASA Astrophysics Data System (ADS)

    Jiménez-Redondo, Noemi; Calle-Cordón, Alvaro; Kandler, Ute; Simroth, Axel; Morales, Francisco J.; Reyes, Antonio; Odelius, Johan; Thaduri, Aditya; Morgado, Joao; Duarte, Emmanuele

    2017-09-01

The ongoing H2020 project INFRALERT aims to increase rail and road infrastructure capacity, in the current context of increased transport demand, by developing and deploying solutions to optimise the planning of maintenance interventions. It includes two real-world pilots, for road and railway infrastructure. INFRALERT is developing an ICT platform (the expert-based Infrastructure Management System, eIMS) which follows a modular approach comprising several expert-based toolkits. This paper presents the methodologies and preliminary results of the toolkits for (i) nowcasting and forecasting of asset condition, (ii) alert generation, (iii) RAMS & LCC analysis and (iv) decision support. Results from these toolkits applied to a meshed road network in Portugal under the jurisdiction of Infraestruturas de Portugal (IP) are presented, showing the capabilities of the approaches.

  3. Organic Rankine Cycle for Residual Heat to Power Conversion in Natural Gas Compressor Station. Part I: Modelling and Optimisation Framework

    NASA Astrophysics Data System (ADS)

    Chaczykowski, Maciej

    2016-06-01

A basic organic Rankine cycle (ORC) and two variants of regenerative ORC have been considered for the recovery of exhaust heat from a natural gas compressor station. A modelling framework for ORC systems is presented, and optimisation of the systems was carried out with turbine power output as the variable to be maximised. The determination of ORC system design parameters was accomplished by means of a genetic algorithm. The study was aimed at estimating the thermodynamic potential of different ORC configurations with several working fluids employed. This first part of the paper describes the ORC equipment models, which are employed to build an NLP (nonlinear programming) formulation to tackle design problems representative of waste energy recovery on gas turbines driving natural gas pipeline compressors.

  4. Fabrication of Organic Radar Absorbing Materials: A Report on the TIF Project

    DTIC Science & Technology

    2005-05-01

    thickness, permittivity and permeability. The ability to measure the permittivity and permeability is an essential requirement for designing an optimised...absorber. Good optimisation codes are required in order to achieve the best possible absorber designs. In this report, the results from a...through measurement of their conductivity and permittivity at microwave frequencies. Methods were then developed for optimising the design of

  5. Separable projection integrals for higher-order correlators of the cosmic microwave sky: Acceleration by factors exceeding 100

    NASA Astrophysics Data System (ADS)

    Briggs, J. P.; Pennycook, S. J.; Fergusson, J. R.; Jäykkä, J.; Shellard, E. P. S.

    2016-04-01

    We present a case study describing efforts to optimise and modernise "Modal", the simulation and analysis pipeline used by the Planck satellite experiment for constraining general non-Gaussian models of the early universe via the bispectrum (or three-point correlator) of the cosmic microwave background radiation. We focus on one particular element of the code: the projection of bispectra from the end of inflation to the spherical shell at decoupling, which defines the CMB we observe today. This code involves a three-dimensional inner product between two functions, one of which requires an integral, on a non-rectangular domain containing a sparse grid. We show that by employing separable methods this calculation can be reduced to a one-dimensional summation plus two integrations, reducing the overall dimensionality from four to three. The introduction of separable functions also solves the issue of the non-rectangular sparse grid. This separable method can become unstable in certain scenarios and so the slower non-separable integral must be calculated instead. We present a discussion of the optimisation of both approaches. We demonstrate significant speed-ups of ≈100×, arising from a combination of algorithmic improvements and architecture-aware optimisations targeted at improving thread and vectorisation behaviour. The resulting MPI/OpenMP hybrid code is capable of executing on clusters containing processors and/or coprocessors, with strong-scaling efficiency of 98.6% on up to 16 nodes. We find that a single coprocessor outperforms two processor sockets by a factor of 1.3× and that running the same code across a combination of both microarchitectures improves performance-per-node by a factor of 3.38×. By making bispectrum calculations competitive with those for the power spectrum (or two-point correlator) we are now able to consider joint analysis for cosmological science exploitation of new data.
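The separable trick described above can be seen in miniature: when the integrand factorises, a three-dimensional inner product collapses into a product of one-dimensional sums, turning O(N³) work into O(N). An illustrative numpy sketch, not the Modal pipeline itself, which additionally handles the inner integral and the non-rectangular sparse grid:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 40
f, g, h = rng.normal(size=N), rng.normal(size=N), rng.normal(size=N)

# Full 3D inner product of the separable function F[i,j,k] = f[i]*g[j]*h[k]
full = np.einsum("i,j,k->", f, g, h)      # O(N^3) if evaluated naively

# Separable evaluation: three independent 1D sums
separable = f.sum() * g.sum() * h.sum()   # O(N)

assert np.isclose(full, separable)
```

The same factorisation is what makes the bispectrum projection competitive with power spectrum calculations, as the abstract notes.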

  6. Improved power steering with double and triple ring waveguide systems: the impact of the operating frequency.

    PubMed

    Kok, H P; de Greef, M; Borsboom, P P; Bel, A; Crezee, J

    2011-01-01

Regional hyperthermia systems with 3D power steering have been introduced to improve tumour temperatures. The 3D 70-MHz AMC-8 system has two rings of four waveguides. The aim of this study is to evaluate whether T90 will improve by using a higher operating frequency and whether further improvement is possible by adding a third ring. Optimised specific absorption rate (SAR) distributions were evaluated for a centrally located target in tissue-equivalent phantoms, and temperature optimisation was performed for five cervical carcinoma patients with constraints on normal tissue temperatures. The resulting T90 and the thermal iso-effect dose (i.e. the number of equivalent minutes at 43°C) were evaluated and compared to the 2D 70-MHz AMC-4 system with a single ring of four waveguides. FDTD simulations were performed at 2.5 × 2.5 × 5 mm³ resolution. The applied frequencies were 70, 100, 120, 130, 140 and 150 MHz. Optimised SAR distributions in phantoms showed an optimum at 140 MHz. For the patient simulations, an optimal increase in T90 was observed at 130 MHz. For a two-ring system at 70 MHz the gain in T90 was about 0.5°C compared to the AMC-4 system, averaged over the five patients. At 130 MHz the average gain in T90 was ~1.5°C and ~2°C for two- and three-ring systems, respectively. This implies an improvement of the thermal iso-effect dose by factors of ~12 and ~30, respectively. Simulations showed that a 130-MHz two-ring waveguide system yields significantly higher tumour temperatures compared to 70-MHz single-ring and double-ring waveguide systems. Temperatures were further improved with a 130-MHz triple-ring system.
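The thermal iso-effect dose (equivalent minutes at 43°C, often written CEM43) used above is conventionally computed with the Sapareto–Dewey formula: CEM43 = Σ t·R^(43−T), with R = 0.5 at or above 43°C and R = 0.25 below it. A minimal sketch of the standard formula; the exact improvement factors in the study depend on the simulated temperature distributions:

```python
def cem43(segments):
    """Cumulative equivalent minutes at 43 degrees C (Sapareto-Dewey).

    segments: iterable of (minutes, temperature_in_C) pairs.
    R = 0.5 at or above the 43 C breakpoint, 0.25 below it.
    """
    total = 0.0
    for minutes, temp in segments:
        r = 0.5 if temp >= 43.0 else 0.25
        total += minutes * r ** (43.0 - temp)
    return total

# 60 min at 41 C is far less effective than 60 min at 44 C
print(cem43([(60, 41.0)]))  # 3.75 equivalent minutes
print(cem43([(60, 44.0)]))  # 120.0 equivalent minutes
```

The steep temperature dependence is why a T90 gain of one or two degrees multiplies the thermal dose by an order of magnitude or more.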

  7. Clinical characterisation of pneumonia caused by atypical pathogens combining classic and novel predictors.

    PubMed

    Masiá, M; Gutiérrez, F; Padilla, S; Soldán, B; Mirete, C; Shum, C; Hernández, I; Royo, G; Martin-Hidalgo, A

    2007-02-01

    The aim of this study was to characterise community-acquired pneumonia (CAP) caused by atypical pathogens by combining distinctive clinical and epidemiological features and novel biological markers. A population-based prospective study of consecutive patients with CAP included investigation of biomarkers of bacterial infection, e.g., procalcitonin, C-reactive protein and lipopolysaccharide-binding protein (LBP) levels. Clinical, radiological and laboratory data for patients with CAP caused by atypical pathogens were compared by univariate and multivariate analysis with data for patients with typical pathogens and patients from whom no organisms were identified. Two predictive scoring models were developed with the most discriminatory variables from multivariate analysis. Of 493 patients, 94 had CAP caused by atypical pathogens. According to multivariate analysis, patients with atypical pneumonia were more likely to have normal white blood cell counts, have repetitive air-conditioning exposure, be aged <65 years, have elevated aspartate aminotransferase levels, have been exposed to birds, and have lower serum levels of LBP. Two different scoring systems were developed that predicted atypical pathogens with sensitivities of 35.2% and 48.8%, and specificities of 93% and 91%, respectively. The combination of selected patient characteristics and laboratory data identified up to half of the cases of atypical pneumonia with high specificity, which should help clinicians to optimise initial empirical therapy for CAP.
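The sensitivity and specificity figures quoted for the two scoring models follow the usual confusion-matrix definitions; a minimal sketch with illustrative counts, not the study data:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts only: 100 atypical cases, 100 typical cases
sens, spec = sensitivity_specificity(tp=35, fn=65, tn=93, fp=7)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")
```

A high-specificity, modest-sensitivity model like those described identifies a subset of atypical pneumonia cases with few false positives, which suits its intended role of guiding initial empirical therapy.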

  8. Optimisation of an analytical method and results from the inter-laboratory comparison of the migration of regulated substances from food packaging into the new mandatory European Union simulant for dry foodstuffs.

    PubMed

    Jakubowska, Natalia; Beldì, Giorgia; Peychès Bach, Aurélie; Simoneau, Catherine

    2014-01-01

This paper presents the outcome of the development, optimisation and validation at European Union level of an analytical method for using poly(2,6-diphenyl-p-phenylene oxide; PPPO), which is stipulated in Regulation (EU) No. 10/2011 as food simulant E, for testing specific migration from plastics into dry foodstuffs. Two methods were developed for fortifying, respectively, PPPO and a low-density polyethylene (LDPE) film with surrogate substances that are relevant to food contact. A protocol for cleaning the PPPO and an efficient analytical method were developed for the quantification of butylhydroxytoluene (BHT), benzophenone (BP), diisobutylphthalate (DiBP), bis(2-ethylhexyl) adipate (DEHA) and 1,2-cyclohexanedicarboxylic acid, diisononyl ester (DINCH) from PPPO. A protocol for a migration test from plastics using small migration cells was also developed. The method was validated by an inter-laboratory comparison (ILC) with 16 national reference laboratories for food contact materials in the European Union. This allowed data to be obtained, for the first time, on the precision and laboratory performance of both migration and quantification. The validation ILC was successful, even taking into account the complexity of the exercise. The method performance was 7-9% repeatability standard deviation (rSD) for most substances (regardless of concentration), with 12% rSD for the high level of BHT and for DiBP at very low levels. The reproducibility standard deviation results for the 16 European Union laboratories were in the range of 20-30% for quantification from PPPO (for the three concentration levels of the five substances) and 15-40% for migration experiments from the fortified plastic at 60°C for 10 days and subsequent quantification.
Considering the lack of data previously available in the literature, this work has demonstrated that the validation of a method is possible both for migration from a film and for quantification into a corresponding simulant for specific migration.

  9. Optimisation of contained Nicotiana tabacum cultivation for the production of recombinant protein pharmaceuticals.

    PubMed

    Colgan, Richard; Atkinson, Christopher J; Paul, Matthew; Hassan, Sally; Drake, Pascal M W; Sexton, Amy L; Santa-Cruz, Simon; James, David; Hamp, Keith; Gutteridge, Colin; Ma, Julian K-C

    2010-04-01

Nicotiana tabacum is emerging as a crop of choice for the production of recombinant protein pharmaceuticals. Although there is significant commercial expertise in tobacco farming, different cultivation practices are likely to be needed when the objective is to optimise protein expression, yield and extraction, rather than the traditional focus on biomass and alkaloid production. Moreover, pharmaceutical transgenic tobacco plants are likely to be grown initially within a controlled environment, the parameters for which have yet to be established. Here, the growth characteristics and functional recombinant protein yields for two separate transgenic tobacco plant lines were investigated. The impacts of temperature, day length, compost nitrogen content, radiation and plant density were examined. Temperature was the only environmental variable to affect IgG concentration in the plants, with higher yields observed in plants grown at lower temperature. In contrast, temperature, supplementary radiation and plant density all affected the total soluble protein yield in the same plants. Transgenic plants expressing a second recombinant protein (cyanovirin-N) responded differently from the IgG transgenic plants to elevated temperature, showing an increase in cyanovirin-N concentration, although the effect of the environmental variables on total soluble protein yields was the same as for the IgG plants. Planting density and radiation levels were important factors affecting the variability of the two recombinant protein yields in the transgenic plants. Phenotypic differences were observed between the two transgenic plant lines and non-transformed N. tabacum, but the effect of different growing conditions was consistent across the three lines. Temperature, day length, radiation intensity and planting density all had a significant impact on biomass production.
Taken together, the data suggest that recombinant protein yield is not affected substantially by environmental factors other than growth temperature. Overall productivity is therefore correlated with biomass production, although other factors such as purification burden, extractability, protein stability and quality also need to be considered in the optimal design of cultivation conditions.

  10. Descriptive analysis of a 1:1 physiotherapy outpatient intervention post primary lumbar discectomy: one arm of a small-scale parallel randomised controlled trial across two UK sites.

    PubMed

    Rushton, A; Calcutt, A; Heneghan, N; Heap, A; White, L; Calvert, M; Goodwin, P

    2016-11-09

There is a lack of high-quality evidence for physiotherapy post lumbar discectomy. Substantial heterogeneity in treatment effects may be explained by variation in the quality, administration and components of interventions. An optimised physiotherapy intervention may reduce heterogeneity and improve patient benefit. The objective was to describe, analyse and evaluate an optimised 1:1 physiotherapy outpatient intervention for patients following primary lumbar discectomy, to provide preliminary insights. A descriptive analysis of the intervention was embedded within an external pilot and feasibility trial at two UK spinal centres. Participants aged ≥18 years, post primary single-level lumbar discectomy, were recruited. The intervention encompassed education, advice, mobility and core stability exercises, progressive exercise, and encouragement of early return to work/activity. Patients received ≤8 sessions over ≤8 weeks, starting 4 weeks post surgery (baseline). Blinded outcome assessment at baseline and 12 weeks (post intervention) included the Roland Morris Disability Questionnaire. STarT Back data were collected at baseline. Statistical analyses summarised participant characteristics and preplanned descriptive analyses. Thematic analysis grouped related data. Twenty-two of 29 allocated participants received the intervention. STarT Back categorised 16 (55%) participants as 'not at low risk'. Physiotherapists identified reasons for caution for 8 (36%) participants, most commonly a risk of overdoing activity (n=4, 18%). There was no relationship between STarT Back categorisation and the physiotherapists' evaluation of caution. Physiotherapists identified 154 problems (mean (SD) 5.36 (2.63)). Participants 'not at low risk' and/or requiring caution presented with more problems and required more sessions (mean (SD) 3.14 (1.16)). Patients present differently and therefore require tailored interventions. These differences may be identified using clinical reasoning and outcome data. ISRCTN33808269; post results. 

  11. Heterologous expression of Aspergillus terreus fructosyltransferase in Kluyveromyces lactis.

    PubMed

    Spohner, Sebastian C; Czermak, Peter

    2016-06-25

    Fructo-oligosaccharides are prebiotic and hypocaloric sweeteners that are usually extracted from chicory. They can also be produced from sucrose using fructosyltransferases, but the only commercial enzyme suitable for this purpose is Pectinex Ultra, which is produced with Aspergillus aculeatus. Here we used the yeast Kluyveromyces lactis to express a secreted recombinant fructosyltransferase from the inulin-producing fungus Aspergillus terreus. A synthetic codon-optimised version of the putative β-fructofuranosidase ATEG 04996 (XP 001214174.1) from A. terreus NIH2624 was secreted as a functional protein into the extracellular medium. At 60°C, the purified A. terreus enzyme generated the same pattern of oligosaccharides as Pectinex Ultra, but at lower temperatures it also produced oligomers with up to seven units. We achieved activities of up to 986.4 U/mL in high-level expression experiments, which is better than previous reports of optimised Aspergillus spp. fermentations. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Why we should understand the patient experience: clinical empathy and medicines optimisation.

    PubMed

    Jubraj, Barry; Barnett, Nina L; Grimes, Lesley; Varia, Sneha; Chater, Angel; Auyeung, Vivian

    2016-10-01

    To critically discuss the need for pharmacists to underpin their consultations with appropriate 'clinical empathy' as part of effective medicines optimisation. Use of literature around empathy, consultation and pharmacy practice to develop a case for greater clinical empathy in pharmacy consultations. Clinical empathy is defined from the literature and applied to pharmacy consultations, with a comparison to empathy in other clinical professions. Historical barriers to the embedding of clinical empathy into pharmacy consultations are also explored. We challenge the pharmacy profession to consider how clinical empathy should underpin consultations with a series of introspective questions and provide some sample questions to support pharmacy consultations. We also make the case for appropriate education and professional development of consultation skills at undergraduate and postgraduate level. We contend that patients' relationships with practitioners are critical, and a lack of empathy can impact the effectiveness of care. © 2016 Royal Pharmaceutical Society.

  13. Achieving an optimal allocation of resources for animal health surveillance, intervention and disease mitigation.

    PubMed

    Hasler, B; Delabouglise, A; Babo Martins, S

    2017-04-01

    The primary role of animal health economics is to inform decision-making by determining optimal investments for animal health. Animal health surveillance produces information to guide interventions. Consequently, investments in surveillance and intervention must be evaluated together. This article explores the different theoretical frameworks and methods developed to assess and optimise the spending of resources in surveillance and intervention and their technical interdependence. The authors present frameworks that define the relationship between health investment and losses due to disease, and the relationship between surveillance and intervention resources. Surveillance and intervention are usually considered as technical substitutes, since increased investments in surveillance reduce the level of intervention resources required to reach the same benefit. The authors also discuss approaches used to quantify externalities and non-monetary impacts. Finally, they describe common economic evaluation types, including optimisation, acceptability and least-cost studies.

  14. A Method for Decentralised Optimisation in Networks

    NASA Astrophysics Data System (ADS)

    Saramäki, Jari

    2005-06-01

    We outline a method for distributed Monte Carlo optimisation of computational problems in networks of agents, such as peer-to-peer networks of computers. The optimisation and messaging procedures are inspired by gossip protocols and epidemic data dissemination, and are decentralised, i.e. no central overseer is required. In the outlined method, each agent follows simple local rules and seeks for better solutions to the optimisation problem by Monte Carlo trials, as well as by querying other agents in its local neighbourhood. With proper network topology, good solutions spread rapidly through the network for further improvement. Furthermore, the system retains its functionality even in realistic settings where agents are randomly switched on and off.
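    The scheme described above can be illustrated with a minimal Python sketch. This is not the authors' protocol: a toy 1-D objective and a fully connected agent population are assumed, standing in for a peer-to-peer network. Each agent improves its candidate solution by a local Monte Carlo trial and by copying any better solution found when querying a few random neighbours, with no central overseer:

```python
import random

def gossip_optimise(n_agents=20, neighbours=3, rounds=200, seed=1):
    """Decentralised Monte Carlo minimisation sketch: every agent holds a
    candidate solution, perturbs it locally (accepting improvements), and
    gossips with random neighbours, adopting any better solution it sees."""
    rng = random.Random(seed)
    f = lambda x: (x - 3.0) ** 2              # toy objective to minimise
    xs = [rng.uniform(-10, 10) for _ in range(n_agents)]
    for _ in range(rounds):
        for i in range(n_agents):
            # local Monte Carlo trial: keep the perturbation if it improves
            trial = xs[i] + rng.gauss(0, 0.5)
            if f(trial) < f(xs[i]):
                xs[i] = trial
            # query a few random neighbours; copy a better solution if found
            for j in rng.sample(range(n_agents), neighbours):
                if f(xs[j]) < f(xs[i]):
                    xs[i] = xs[j]
    return min(xs, key=f)

best = gossip_optimise()
```

Because better solutions are copied on contact, they spread through the population in the epidemic fashion the abstract describes; restricting queries to a local neighbourhood graph would model a real network topology.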

  15. Thermal buckling optimisation of composite plates using firefly algorithm

    NASA Astrophysics Data System (ADS)

    Kamarian, S.; Shakeri, M.; Yas, M. H.

    2017-07-01

    Composite plates play a very important role in engineering applications, especially in the aerospace industry. Thermal buckling of such components is of great importance and must be known to achieve an appropriate design. This paper deals with stacking sequence optimisation of laminated composite plates for maximising the critical buckling temperature using a powerful meta-heuristic called the firefly algorithm (FA), which is based on the flashing behaviour of fireflies. The main objective of the present work was to show the ability of FA in the optimisation of composite structures. The performance of FA is compared with results reported in previously published works using other algorithms; the comparison shows the efficiency of FA in stacking sequence optimisation of laminated composite structures.
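    The core FA update can be sketched in a few lines of Python. This is an illustrative sketch on a continuous toy objective (the sphere function stands in for the buckling-temperature fitness; parameter values are assumptions, not those of the paper): brightness is the negative objective, each firefly moves toward every brighter one with attractiveness decaying with squared distance, plus a cooled random step:

```python
import math
import random

def firefly_minimise(f, dim=2, n=15, iters=100, alpha0=0.2, beta0=1.0,
                     gamma=0.2, seed=2):
    """Minimal firefly algorithm: each firefly moves toward every brighter
    one (attractiveness beta decays with squared distance r2), plus a
    random step whose size alpha decays over iterations."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-2, 2) for _ in range(dim)] for _ in range(n)]
    for t in range(iters):
        alpha = alpha0 * (0.97 ** t)          # cooling of the random step
        for i in range(n):
            for j in range(n):
                if f(xs[j]) < f(xs[i]):       # firefly j is brighter
                    r2 = sum((a - b) ** 2 for a, b in zip(xs[i], xs[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    xs[i] = [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                             for a, b in zip(xs[i], xs[j])]
    return min(xs, key=f)

# toy stand-in for the buckling objective: minimise the sphere function
best = firefly_minimise(lambda x: sum(v * v for v in x))
```

For stacking sequence optimisation the continuous positions would be mapped to a discrete set of ply angles, a detail omitted here.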

  16. Distributed convex optimisation with event-triggered communication in networked systems

    NASA Astrophysics Data System (ADS)

    Liu, Jiayun; Chen, Weisheng

    2016-12-01

    This paper studies the distributed convex optimisation problem over directed networks. Motivated by practical considerations, we propose a novel distributed zero-gradient-sum optimisation algorithm with event-triggered communication, in which communication and control updates occur only at discrete instants when a predefined triggering condition is satisfied. Compared with time-driven distributed optimisation algorithms, the proposed algorithm therefore has the advantages of lower energy consumption and lower communication cost. Based on Lyapunov approaches, we show that the proposed algorithm makes the system states converge exponentially fast to the solution of the problem and that Zeno behaviour is excluded. Finally, a simulation example is given to illustrate the effectiveness of the proposed algorithm.
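    The event-triggering idea can be shown with a much simpler stand-in than the paper's zero-gradient-sum algorithm. In the hedged sketch below, agents jointly minimise sum_i (x - c_i)^2 by gradient steps toward their own centre plus a soft consensus term computed from neighbours' last broadcast states; an agent re-broadcasts only when its state has drifted more than a threshold eps since its last broadcast, so messages occur at discrete triggering instants. This soft-consensus variant only drives the average state to the optimum (the mean of the centres), unlike the exact algorithm in the paper:

```python
def event_triggered_consensus(centres, steps=300, eta=0.05, eps=1e-3):
    """Simplified event-triggered distributed optimisation sketch:
    gradient steps toward each agent's own centre plus consensus toward
    neighbours' last *broadcast* values; broadcasts happen only when the
    triggering condition |x_i - xb_i| > eps fires."""
    n = len(centres)
    x = list(centres)            # local states
    xb = list(centres)           # last broadcast values
    broadcasts = 0
    for _ in range(steps):
        for i in range(n):
            consensus = sum(xb[j] - x[i] for j in range(n) if j != i)
            x[i] += eta * (consensus - 2.0 * (x[i] - centres[i]))
        for i in range(n):
            if abs(x[i] - xb[i]) > eps:   # event-triggering condition
                xb[i] = x[i]
                broadcasts += 1
    return x, broadcasts

states, msgs = event_triggered_consensus([0.0, 2.0, 4.0])
```

The broadcast count `msgs` stays well below the `n * steps` messages a time-driven scheme would send, which is exactly the saving the abstract claims.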

  17. Who Really Answers the Questions? Using Glasser's Quality School Model in an Undergraduate Classroom

    ERIC Educational Resources Information Center

    Logan, Jennifer; Plumlee, Gerald L.

    2012-01-01

    The authors discuss the effectiveness of the Quality School model and active learning in an undergraduate classroom setting. They compare performance levels of students in two course sections of Principles of Macroeconomics and two sections of Managerial Communications. Students are given an opportunity to help shape the structure of the…

  18. Functional testing of topical skin formulations using an optimised ex vivo skin organ culture model.

    PubMed

    Sidgwick, G P; McGeorge, D; Bayat, A

    2016-07-01

    A number of equivalent-skin models are available for investigation of the ex vivo effect of topical application of drugs and cosmeceuticals onto skin; however, many have drawbacks. With the March 2013 ban on animal models for cosmetic testing of products or ingredients for sale in the EU, their utility for testing toxicity and effect on skin becomes more relevant. The aim of this study was to demonstrate proof of principle that altered expression of key gene and protein markers could be quantified in an optimised whole tissue biopsy culture model. Topical formulations containing green tea catechins (GTC) were investigated in a skin biopsy culture model (n = 11). Punch biopsies were harvested at 3, 7 and 10 days, and analysed using qRT-PCR, histology and HPLC to determine gene and protein expression, and transdermal delivery of compounds of interest. Reduced gene expression of α-SMA, fibronectin, mast cell tryptase, mast cell chymase, TGF-β1, CTGF and PAI-1 was observed after 7 and 10 days compared with treated controls (p < 0.05). Histological analysis indicated a reduction in mast cell tryptase and chymase positive cell numbers in treated biopsies compared with untreated controls at day 7 and day 10 (p < 0.05). Determination of transdermal uptake indicated that GTCs were detected in the biopsies. This model could be adapted to study a range of different topical formulations in both normal and diseased skin, negating the requirement for animal models in this context, prior to study in a clinical trial environment.

  19. A Bayesian Approach for Sensor Optimisation in Impact Identification

    PubMed Central

    Mallardo, Vincenzo; Sharif Khodaei, Zahra; Aliabadi, Ferri M. H.

    2016-01-01

    This paper presents a Bayesian approach for optimising the position of sensors aimed at impact identification in composite structures under operational conditions. The uncertainty in the sensor data has been represented by statistical distributions of the recorded signals. An optimisation strategy based on the genetic algorithm is proposed to find the best sensor combination for locating impacts on composite structures. A Bayesian-based objective function is adopted in the optimisation procedure as an indicator of the performance of meta-models developed for different sensor combinations to locate various impact events. To represent a real structure under operational load and to increase the reliability of the Structural Health Monitoring (SHM) system, the probability of malfunctioning sensors is included in the optimisation. The reliability and the robustness of the procedure are tested with experimental and numerical examples. Finally, the proposed optimisation algorithm is applied to a composite stiffened panel for both uniform and non-uniform probabilities of impact occurrence. PMID:28774064
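    A genetic algorithm over sensor combinations can be sketched as follows. This is an illustrative toy, not the paper's method: a simple worst-case coverage objective on a 1-D structure stands in for the Bayesian meta-model objective, and all names and parameters are assumptions. Individuals are sets of sensor indices; elitist selection, set-union crossover and swap mutation evolve the population:

```python
import random

def ga_sensor_placement(n_candidates=30, n_sensors=4, pop=40, gens=60, seed=3):
    """Illustrative GA for sensor-combination selection: fitness rewards
    layouts whose worst-case distance from any candidate impact point to
    the nearest sensor is small."""
    rng = random.Random(seed)
    points = [i / (n_candidates - 1) for i in range(n_candidates)]

    def fitness(ind):
        # negative worst-case distance to the nearest sensor
        return -max(min(abs(p - points[s]) for s in ind) for p in points)

    def crossover(a, b):
        # child draws its sensors from the union of both parents
        return frozenset(rng.sample(list(a | b), n_sensors))

    def mutate(ind):
        # drop one sensor and refill from the candidate pool
        ind = set(ind)
        ind.discard(rng.choice(sorted(ind)))
        while len(ind) < n_sensors:
            ind.add(rng.randrange(n_candidates))
        return frozenset(ind)

    population = [frozenset(rng.sample(range(n_candidates), n_sensors))
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        elite = population[: pop // 4]
        population = elite + [mutate(crossover(rng.choice(elite),
                                               rng.choice(elite)))
                              for _ in range(pop - len(elite))]
    best = max(population, key=fitness)
    return best, -fitness(best)

best_set, worst_gap = ga_sensor_placement()
```

Replacing `fitness` with a probabilistic objective (e.g. expected localisation error under sensor-failure scenarios) would move this sketch toward the Bayesian formulation the paper uses.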

  20. Optimisation of active suspension control inputs for improved vehicle handling performance

    NASA Astrophysics Data System (ADS)

    Čorić, Mirko; Deur, Joško; Kasać, Josip; Tseng, H. Eric; Hrovat, Davor

    2016-11-01

    Active suspension is commonly considered under the framework of vertical vehicle dynamics control aimed at improvements in ride comfort. This paper uses a collocation-type control variable optimisation tool to investigate to what extent the fully active suspension (FAS) application can be broadened to the task of vehicle handling/cornering control. The optimisation approach is first applied solely to FAS actuator configurations and three types of double lane-change manoeuvres. The obtained optimisation results are used to gain insights into the different control mechanisms that FAS uses to improve handling performance in terms of path-following error reduction. For the same manoeuvres the FAS performance is compared with the performance of different active steering and active differential actuators. The optimisation study is finally extended to combined FAS and active front- and/or rear-steering configurations to investigate whether they can use their complementary control authorities (over the vertical and lateral vehicle dynamics, respectively) to further improve the handling performance.

  1. Structural-electrical coupling optimisation for radiating and scattering performances of active phased array antenna

    NASA Astrophysics Data System (ADS)

    Wang, Congsi; Wang, Yan; Wang, Zhihai; Wang, Meng; Yuan, Shuai; Wang, Weifeng

    2018-04-01

    Calculating and reducing the radar cross section (RCS) of an active phased array antenna (APAA) are both difficult and complicated, and balancing radiating and scattering performance while reducing the RCS remains an open problem. This paper therefore develops a structure and scattering array factor coupling model of the APAA based on the phase errors of the radiated elements generated by structural distortion and installation error of the array. To obtain the optimal radiating and scattering performance, an integrated optimisation model is built to optimise the installation height of all the radiated elements in the normal direction of the array, in which the particle swarm optimisation method is adopted and the gain loss and scattering array factor are selected as the fitness function. The simulation indicates that the proposed coupling model and integrated optimisation method can effectively decrease the RCS while simultaneously guaranteeing the necessary radiating performance, which demonstrates important application value in the engineering design and structural evaluation of APAA.
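    Particle swarm optimisation itself is generic and easy to sketch. In the hedged example below a toy quadratic stands in for the paper's combined gain-loss/scattering fitness over element installation heights (all parameter values are assumptions): each particle keeps a velocity blending inertia, attraction toward its personal best and attraction toward the swarm's global best:

```python
import random

def pso_minimise(f, dim=4, n=20, iters=150, w=0.7, c1=1.5, c2=1.5, seed=4):
    """Generic particle swarm optimisation sketch: velocity update mixes
    inertia (w), cognitive pull toward the personal best (c1) and social
    pull toward the global best (c2)."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [list(x) for x in xs]
    gbest = list(min(pbest, key=f))
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = list(xs[i])
                if f(xs[i]) < f(gbest):
                    gbest = list(xs[i])
    return gbest

# e.g. four element "heights" minimising a quadratic around 0.1
best = pso_minimise(lambda h: sum((v - 0.1) ** 2 for v in h))
```

In the paper's setting, `f` would evaluate the coupling model (gain loss plus scattering array factor) for a vector of element installation heights.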

  2. [Strategy and collaboration between medicinal chemists and pharmaceutical scientists for drug delivery systems].

    PubMed

    Mano, Takashi

    2013-01-01

    In order to successfully apply drug delivery systems (DDS) to new chemical entities (NCEs), collaboration between medicinal chemists and formulation scientists is critical for efficient drug discovery. Formulation scientists have to use 'language' that medicinal chemists understand to help promote mutual understanding, and medicinal chemists and formulation scientists have to set up strategies to use suitable DDS technologies at the discovery phase of the programmes to ensure successful transfer into the development phase. In this review, strategies of solubilisation formulation for oral delivery, inhalation delivery, nasal delivery and bioconjugation are all discussed. For example, for oral drug delivery, multiple initiatives can be proposed to improve the process of selecting an optimal delivery option for an NCE. From a technical perspective, formulation scientists have to explain the scope and limitations of formulations, as some DDS technologies might be applicable only to limited chemical spaces. Other limitations could be the administered dose and the cost, time and resources for formulation development and manufacturing. Since DDS selection is best placed as part of lead optimisation, formulation scientists need to be involved in discovery projects at the lead selection and optimisation stages. The key to success in their collaboration is to facilitate communication between these two areas of expertise at both a strategic and scientific level. Also, it would be beneficial for medicinal chemists and formulation scientists to set common goals to improve the process of collaboration and build long-term partnerships to improve DDS.

  3. A methodology for the optimisation of a mm-wave scanner

    NASA Astrophysics Data System (ADS)

    Stec, L. Zoë; Podd, Frank J. W.; Peyton, Anthony J.

    2016-10-01

    The need to detect non-metallic items under clothing to prevent terrorism at transport hubs is becoming vital. Millimetre wave technology is able to penetrate clothing, yet still interacts with objects concealed underneath. This paper considers active illumination using multiple transmitter and receiver antennas. The positioning of these antennas must achieve full body coverage, whilst minimising the number of antenna elements and the number of required measurements. It sets out a rapid simulation methodology, based on the Kirchhoff equations, to explore different scenarios for scanner architecture optimisation. The paper assumes that the electromagnetic waves used are at lower frequencies (say, 10-30 GHz) where the body temperature does not need to be considered. This range allows better penetration of clothing than higher frequencies, yet still provides adequate resolution. Since passengers vary greatly in shape and size, the system needs to work well with a range of body morphologies. Thus we have used two very differently shaped avatars to test the portal simulations. This simulation tool allows many different avatars to be generated quickly. Findings from these simulations indicated that the dimensions of the avatar did indeed have an effect on the pattern of illumination, and that the data for each antenna pair can easily be combined to compare different antenna geometries for a given portal architecture, resulting in useful insights into antenna placement. The data generated could be analysed both quantitatively and qualitatively, at various levels of scale.

  4. Optimisation and validation of the analytical procedure for the determination of acrylamide in coffee by LC-MS/MS with SPE clean up.

    PubMed

    Gielecińska, Iwona; Mojska, Hanna

    2013-01-01

    Numerous studies have demonstrated acrylamide to be both neurotoxic and carcinogenic. At present it is widely recognised that acrylamide is mainly formed through the Maillard reaction from free asparagine and reducing sugars. The major sources of dietary acrylamide are potato products, processed cereals and coffee. The aim was to optimise and validate an analytical method for determining acrylamide in coffee by liquid chromatography and tandem mass spectrometry (LC-MS/MS) using SPE clean-up. Analytical separation of acrylamide from roasted coffee was performed by liquid chromatography using a Hypercarb column followed by LC-MS/MS analysis, with 2,3,3-d3 acrylamide as an internal standard. The method was based on two purification steps: the first with hexane and Carrez solutions, to remove fat and to precipitate proteins, respectively; and the second with a solid-phase extraction (SPE) column, which proved to be efficient in the elimination of the main chromatographic interferences. The limit of quantification (LOQ) for measuring acrylamide in coffee was 50 microg/kg. The described method demonstrates satisfactory precision (RSD = 2.5%), repeatability (RSD = 9.2%) and accuracy (mean recovery 97.4%). Our results confirm that LC-MS/MS with SPE clean-up is selective and suitable for the determination of acrylamide in coffee. Indeed, this method meets the criteria of EU Commission Recommendations (No. 2007/331/EC and No. 2010/307/EU) on the monitoring of acrylamide levels in food.

  5. Shape Optimisation of Holes in Loaded Plates by Minimisation of Multiple Stress Peaks

    DTIC Science & Technology

    2015-04-01

    UNCLASSIFIED. Shape Optimisation of Holes in Loaded Plates by Minimisation of Multiple Stress Peaks, by Witold Waldman and Manfred... The report presents a method for minimising the peak tangential stresses on multiple segments around the boundary of a hole in a uniaxially-loaded or biaxially-loaded plate, for aerospace structural applications.

  6. Modelling and Analysis of a New Piezoelectric Dynamic Balance Regulator

    PubMed Central

    Du, Zhe; Mei, Xue-Song; Xu, Mu-Xun

    2012-01-01

    In this paper, a new piezoelectric dynamic balance regulator, which can be used in motorised spindle systems, is presented. The dynamic balancing adjustment mechanism is driven by an in-plane bending vibration of an annular piezoelectric stator excited by a high-frequency sinusoidal input voltage. This device has a different construction, characteristics and operating principle than a conventional balance regulator. In this work, a dynamic model of the regulator is first developed using a detailed analytical method. Thereafter, MATLAB is employed to numerically simulate the relations between the dominant parameters and the characteristics of the regulator based on the dynamic model. Finally, experimental measurements are used to verify the validity of the dynamic model. Consequently, the mathematical model presented and analysed in this paper can be used as a tool for optimising the design of a piezoelectric dynamic balance regulator during steady-state operation. PMID:23202182

  7. Detecting and locating light atoms from high-resolution STEM images: The quest for a single optimal design.

    PubMed

    Gonnissen, J; De Backer, A; den Dekker, A J; Sijbers, J; Van Aert, S

    2016-11-01

    In the present paper, the optimal detector design is investigated for both detecting and locating light atoms from high-resolution scanning transmission electron microscopy (HR STEM) images. The principles of detection theory are used to quantify the probability of error for the detection of light atoms from HR STEM images. To determine the optimal experiment design for locating light atoms, use is made of the so-called Cramér-Rao Lower Bound (CRLB). It is investigated whether a single optimal design can be found for both the detection and location problems. Furthermore, the incoming electron dose is optimised for both research goals, and it is shown that picometre-range precision is feasible for the estimation of the atom positions when using an appropriate incoming electron dose under the optimal detector settings for detecting light atoms. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Diagnosing Allergic Contact Dermatitis Through Elimination, Perception, Detection and Deduction.

    PubMed

    Pongpairoj, Korbkarn; Puangpet, Pailin; Thaiwat, Supitchaya; McFadden, John P

    2017-10-01

    Several authors have commented upon the skills of detection required in making a diagnosis of allergic contact dermatitis. Here, we emphasise the search for clues in a systematic manner. We describe four stages as part of a systematic method for diagnosing allergic contact dermatitis. Firstly, elimination (or inclusion) of non-allergic diagnoses. Secondly, perception: the pre-patch test diagnosis and the 'three scenarios' principle. Thirdly, detection: optimising the sensitivity of the patch test process. Fourthly, deduction: diagnosing allergic contact dermatitis by associating the dermatitis with the allergen exposure. We further compare and contrast the pre-patch test history and examination with the markedly different one ('microhistory' and 'microexamination') used after patch testing. The importance of knowledge of contact dermatitis literature is emphasised with a review of recent publications. Finally, we also highlight the use of contact allergy profiling as an investigative tool in the diagnosis of allergic contact dermatitis.

  9. Technique for measurement of energy loss of proton in target medium

    NASA Astrophysics Data System (ADS)

    Khadke, U. V.

    2018-05-01

    Energy loss (EL) of charged particles in a target medium needs special attention when measurements must be repeated over periods of a couple of days. It is imperative to ensure that the measurements are not affected by long-term drifts of the accelerator beam energy and the associated electronic modules. For one such situation, the measurement of the EL of a proton beam in a thick target, we optimised and standardised the technique for measuring the most probable energy loss of 24.774 MeV protons in an aluminium target of thickness 330 mg/cm2. This paper describes the method we developed to ensure that our EL measurements were free from the effects of drifts in any associated electronic modules, together with the details of the energy spectrometer and the basic principle and technique for energy loss measurements in a target medium.

  10. An AC electroosmotic micropump for circular chromatographic applications.

    PubMed

    Debesset, S; Hayden, C J; Dalton, C; Eijkel, J C T; Manz, A

    2004-08-01

    Flow rates of up to 50 microm s(-1) have been successfully achieved in a closed-loop channel using an AC electroosmotic pump. The AC electroosmotic pump is made of an interdigitated array of unequal width electrodes located at the bottom of a channel, with an AC voltage applied between the small and the large electrodes. The flow rate was found to increase linearly with the applied voltage and to decrease linearly with the applied frequency. The pump is expected to be suitable for circular chromatography for the following reasons: the driving forces are distributed over the channel length and the pumping direction is set by the direction of the interdigitated electrodes. Pumping in a closed-loop channel can be achieved by arranging the electrode pattern in a circle. In addition the inherent working principle of AC electroosmotic pumping enables the independent optimisation of the channel height or the flow velocity.

  11. Safety and governance issues for neonatal transport services.

    PubMed

    Ratnavel, Nandiran

    2009-08-01

    Neonatal transport is a subspecialty within the field of neonatology. Transport services are developing rapidly in the United Kingdom (UK) with network demographics and funding patterns leading to a broad spectrum of service provision. Applying principles of clinical governance and safety to such a diverse landscape of transport services is challenging but finally receiving much needed attention. To understand issues of risk management associated with this branch of retrieval medicine one needs to look at the infrastructure of transport teams, arrangements for governance, risk identification, incident reporting, feedback and learning from experience. One also needs to look at audit processes, training, communication and ways of team working. Adherence to current recommendations for equipment and vehicle design are vital. The national picture for neonatal transport is evolving. This is an excellent time to start benchmarking and sharing best practice with a view to optimising safety and reducing risk.

  12. Conclusions: the future of antimicrobial therapy - Augmentin and beyond.

    PubMed

    Ball, Peter

    2007-12-01

    Since most infectious microorganisms inevitably develop resistance to any agents used to combat them, there has been a constant need to produce improved, more potent, antimicrobials. At least in part, the emergence and spread of resistant organisms has been provoked by inappropriate over-use of antibacterials. In the last decade, many fewer new antibacterials have been developed but overall prescribing has continued to increase. Consensus prescribing principles have now been defined with the aim of optimising therapy and preventing further increases in, or even to prompt a reduction in, the prevalence of resistance to antibacterial agents. Whilst it is important to encourage continued development of new classes of antibacterials, it is also vital to make the best use of available agents. The development of new dosages and formulations of amoxicillin/clavulanate allows this agent to continue to fill the important role in therapy which it has occupied, and continues to occupy, 25 years after it was launched.

  13. Optimisation techniques in vaginal cuff brachytherapy.

    PubMed

    Tuncel, N; Garipagaoglu, M; Kizildag, A U; Andic, F; Toy, A

    2009-11-01

    The aim of this study was to explore whether an in-house dosimetry protocol and optimisation method are able to produce a homogeneous dose distribution in the target volume, and how often optimisation is required in vaginal cuff brachytherapy. Treatment planning was carried out for 109 fractions in 33 patients who underwent high dose rate iridium-192 (Ir-192) brachytherapy using Fletcher ovoids. Dose prescription and normalisation were performed to catheter-oriented lateral dose points (dps) within a range of 90-110% of the prescribed dose. The in-house vaginal apex point (Vk), alternative vaginal apex point (Vk'), International Commission on Radiation Units and Measurements (ICRU) rectal point (Rg) and bladder point (Bl) doses were calculated. Time-position optimisations were made considering dps, Vk and Rg doses. Keeping the Vk dose higher than 95% and the Rg dose less than 85% of the prescribed dose was intended. Target dose homogeneity, optimisation frequency and the relationship between prescribed dose, Vk, Vk', Rg and ovoid diameter were investigated. The mean target dose was 99 ± 7.4% of the prescription dose. Optimisation was required in 92 out of 109 (83%) fractions. Ovoid diameter had a significant effect on Rg (p = 0.002), Vk (p = 0.018), Vk' (p = 0.034), minimum dps (p = 0.021) and maximum dps (p < 0.001). Rg, Vk and Vk' doses with 2.5 cm diameter ovoids were significantly higher than with 2 cm and 1.5 cm ovoids. Catheter-oriented dose point normalisation provided a homogeneous dose distribution, with a mean dose of 99 ± 7.4% within the target volume, requiring time-position optimisation.

  14. Accounting 202, 302.

    ERIC Educational Resources Information Center

    Manitoba Dept. of Education, Winnipeg.

    This teaching guide consists of guidelines for conducting two secondary-level introductory accounting courses. Intended for vocational business education students, the courses are designed to introduce financial principles and practices important to personal and business life, to promote development of clerical and bookkeeping skills sufficient…

  15. Investigation of Wind Turbine Rotor Concepts for Offshore Wind Farms

    NASA Astrophysics Data System (ADS)

    Ceyhan, Özlem; Grasso, Francesco

    2014-06-01

    Current plans for offshore wind energy development call for further reduction of the cost of energy. In order to contribute to this goal, several wind turbine rotor concepts have been investigated. Assuming that future offshore wind turbines will operate only in offshore wind farms, the rotor concepts are evaluated not only for their stand-alone performance and their potential for reducing loads, but also for their performance in an offshore wind farm. To do so, the 10 MW reference wind turbine designed in the Innwind.EU project is chosen as the baseline. Several rotor parameters have been modified and their influences investigated for offshore wind turbine design purposes. This investigation is carried out as a conceptual parametric study. All concepts are evaluated numerically with the BOT (Blade optimisation tool) software at wind turbine level and with the Farmflow software at wind farm level, for two wind farm layouts. Finally, all concepts are compared with each other in terms of their advantages and disadvantages.

  16. BIANCA (Brain Intensity AbNormality Classification Algorithm): A new tool for automated segmentation of white matter hyperintensities.

    PubMed

    Griffanti, Ludovica; Zamboni, Giovanna; Khan, Aamira; Li, Linxin; Bonifacio, Guendalina; Sundaresan, Vaanathi; Schulz, Ursula G; Kuker, Wilhelm; Battaglini, Marco; Rothwell, Peter M; Jenkinson, Mark

    2016-11-01

    Reliable quantification of white matter hyperintensities of presumed vascular origin (WMHs) is increasingly needed, given the presence of these MRI findings in patients with several neurological and vascular disorders, as well as in healthy elderly subjects. We present BIANCA (Brain Intensity AbNormality Classification Algorithm), a fully automated, supervised method for WMH detection, based on the k-nearest neighbour (k-NN) algorithm. Relative to previous k-NN based segmentation methods, BIANCA offers options for weighting the spatial information and for local spatial intensity averaging, as well as for the choice of the number and location of the training points. BIANCA is multimodal and highly flexible, so the user can adapt the tool to their protocol and specific needs. We optimised and validated BIANCA on two datasets with different MRI protocols and patient populations (a "predominantly neurodegenerative" and a "predominantly vascular" cohort). BIANCA was first optimised on a subset of images for each dataset in terms of overlap and volumetric agreement with a manually segmented WMH mask. The correlation between the volumes extracted with BIANCA (using the optimised set of options), the volumes extracted from the manual masks, and visual ratings showed that BIANCA is a valid alternative to manual segmentation. The optimised set of options was then applied to the whole cohorts, and the resulting WMH volume estimates showed good correlations with visual ratings and with age. Finally, we performed a reproducibility test to evaluate the robustness of BIANCA, and compared BIANCA's performance against existing methods. Our findings suggest that BIANCA, which will be freely available as part of the FSL package, is a reliable method for automated WMH segmentation in large cross-sectional cohort studies. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
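    The k-NN classification idea at the core of the method can be sketched in a few lines of Python. This is a minimal illustration, not BIANCA itself (which adds spatial weighting, local intensity averaging and training-point selection), and the feature values below are hypothetical:

```python
from collections import Counter

def knn_classify(train, query, k=3):
    """Minimal k-nearest-neighbour classifier: each training item is
    (feature_vector, label); a query voxel receives the majority label
    among its k nearest training points (squared Euclidean distance)."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda t: dist(t[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# hypothetical voxel features: (normalised intensity, spatial coordinate)
train = [((0.90, 0.10), "WMH"), ((0.85, 0.20), "WMH"), ((0.80, 0.15), "WMH"),
         ((0.30, 0.60), "normal"), ((0.20, 0.70), "normal"),
         ((0.35, 0.50), "normal")]
label = knn_classify(train, (0.88, 0.12))
```

In BIANCA's multimodal setting the feature vector would combine intensities from several MRI sequences with (optionally weighted) spatial coordinates.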

  17. A support vector machine for predicting defibrillation outcomes from waveform metrics.

    PubMed

    Howe, Andrew; Escalona, Omar J; Di Maio, Rebecca; Massot, Bertrand; Cromie, Nick A; Darragh, Karen M; Adgey, Jennifer; McEneaney, David J

    2014-03-01

    Algorithms to predict shock success based on VF waveform metrics could significantly enhance resuscitation by optimising the timing of defibrillation. The aim was to investigate robust methods of predicting defibrillation success in VF cardiac arrest patients using a support vector machine (SVM) optimisation approach. Frequency-domain (AMSA, dominant frequency and median frequency) and time-domain (slope and RMS amplitude) VF waveform metrics were calculated in a 4.1 s window prior to defibrillation. Conventional prediction test validity of each waveform parameter was assessed, with AUC>0.6 as the criterion for inclusion as a corroborative attribute processed by the SVM classification model. The latter used a Gaussian radial-basis-function (RBF) kernel, and the error penalty factor C was fixed to 1. A two-fold cross-validation resampling technique was employed. A total of 41 patients had 115 defibrillation instances. The AMSA, slope and RMS waveform metrics met the test-validity criterion (AUC>0.6) for predicting termination of VF and return to organised rhythm. Predictive accuracy of the optimised SVM design for termination of VF was 81.9% (± 1.24 SD); positive and negative predictivity were 84.3% (± 1.98 SD) and 77.4% (± 1.24 SD) respectively; sensitivity and specificity were 87.6% (± 2.69 SD) and 71.6% (± 9.38 SD) respectively. AMSA, slope and RMS were the best frequency- and time-domain VF waveform predictors of termination of VF according to the test-validity assessment. This a priori selection can be used in a simplified optimised SVM design that combines the predictive attributes of these VF waveform metrics for improved prediction accuracy and generalisation performance, without requiring the definition of any threshold value on the waveform metrics. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
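    The classifier setup described above — an RBF-kernel SVM with C fixed to 1, evaluated by two-fold cross-validation — can be sketched as below. The data here are synthetic stand-ins for the AMSA, slope and RMS metrics, not the study's patient data.

    ```python
    # Minimal sketch of the abstract's classifier design: RBF-kernel SVM,
    # C = 1, two-fold cross-validation. Features are synthetic stand-ins
    # for the [AMSA, slope, RMS] waveform metrics.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)

    # One row per defibrillation instance, columns ~ [AMSA, slope, RMS]
    n = 115
    X = rng.normal(size=(n, 3))
    # Synthetic label: shock success loosely driven by the three metrics
    y = (X @ np.array([1.0, 0.8, 0.6]) + rng.normal(scale=0.5, size=n) > 0).astype(int)

    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    scores = cross_val_score(clf, X, y, cv=2)   # two-fold cross-validation
    mean_accuracy = scores.mean()
    ```

    Because the SVM learns a combined decision boundary over all three metrics, no per-metric threshold needs to be defined, which is the advantage the abstract highlights.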

  18. Simultaneous data pre-processing and SVM classification model selection based on a parallel genetic algorithm applied to spectroscopic data of olive oils.

    PubMed

    Devos, Olivier; Downey, Gerard; Duponchel, Ludovic

    2014-04-01

    Classification is an important task in chemometrics. For several years now, support vector machines (SVMs) have proven to be powerful for infrared spectral data classification. However, such methods require optimisation of parameters in order to control the risk of overfitting and the complexity of the decision boundary. Furthermore, it is established that the prediction ability of classification models can be improved by using pre-processing to remove unwanted variance from the spectra. In this paper we propose a new methodology based on a genetic algorithm (GA) for the simultaneous optimisation of SVM parameters and pre-processing (GENOPT-SVM). The method has been tested for discriminating the geographical origin of Italian olive oil (Ligurian and non-Ligurian) on the basis of near-infrared (NIR) or mid-infrared (FTIR) spectra. Different classification models (PLS-DA, SVM with mean-centred data, GENOPT-SVM) were tested and statistically compared using McNemar's test. For both datasets, SVM with optimised pre-processing gave models with higher accuracy than those obtained with PLS-DA on pre-processed data. In the case of the NIR dataset, most of this accuracy improvement (86.3% compared with 82.8% for PLS-DA) occurred using only a single pre-processing step. For the FTIR dataset, three optimised pre-processing steps were required to obtain an SVM model with a significant accuracy improvement (82.2%) over the PLS-DA model (78.6%). Furthermore, this study demonstrates that even SVM models have to be developed on the basis of well-corrected spectral data in order to achieve higher classification rates. Copyright © 2013 Elsevier Ltd. All rights reserved.
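    A toy version of the simultaneous search can be hand-rolled as below: each chromosome encodes a pre-processing choice together with the SVM parameters C and gamma, and fitness is cross-validated accuracy. The pre-processing options, data and GA settings here are simplified stand-ins, not those of GENOPT-SVM.

    ```python
    # Toy genetic algorithm in the spirit of GENOPT-SVM: jointly select a
    # pre-processing step and the SVM hyperparameters (C, gamma).
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    X = rng.normal(size=(120, 20))                  # stand-in "spectra"
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

    PREPROC = {
        0: lambda A: A,                              # none
        1: lambda A: A - A.mean(axis=0),             # mean centring
        2: lambda A: np.gradient(A, axis=1),         # crude derivative
    }

    def fitness(chrom):
        """Chromosome = [preproc index, log10(C), log10(gamma)]."""
        p, log_c, log_g = chrom
        Xp = PREPROC[int(p) % 3](X)
        clf = SVC(kernel="rbf", C=10 ** log_c, gamma=10 ** log_g)
        return cross_val_score(clf, Xp, y, cv=3).mean()

    pop = [np.array([rng.integers(3), rng.uniform(-2, 2), rng.uniform(-4, 0)])
           for _ in range(8)]
    for _ in range(5):                               # a few generations
        pop.sort(key=fitness, reverse=True)
        parents = pop[:4]                            # elitist selection
        children = []
        for _ in range(4):                           # crossover + mutation
            a, b = rng.choice(4, 2, replace=False)
            child = (parents[a] + parents[b]) / 2
            child[1:] += rng.normal(scale=0.2, size=2)
            children.append(child)
        pop = parents + children

    best = max(pop, key=fitness)
    ```

    Encoding the pre-processing choice in the same chromosome as the hyperparameters is what makes the optimisation "simultaneous": the GA can trade a different pre-treatment against a different (C, gamma) pair within one search.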

  19. Production of biosolid fuels from municipal sewage sludge: Technical and economic optimisation.

    PubMed

    Wzorek, Małgorzata; Tańczuk, Mariusz

    2015-08-01

    The article presents a technical and economic analysis of the production of fuels from municipal sewage sludge. The analysis involved the production of two types of fuel composition: sewage sludge with sawdust (PBT fuel) and sewage sludge with meat-and-bone meal (PBM fuel). A technology for the production line of these sewage-sludge fuels was proposed and analysed. The main objective of the study was to find the optimal production capacity. The optimisation analysis was performed for technical and economic parameters adopted under Polish conditions. The objective function was set as the maximum of the net present value index, and the optimisation procedure was carried out for fuel production line input capacities from 0.5 to 3 t h⁻¹, using a search step of 0.5 t h⁻¹. On the basis of the technical and economic assumptions, economic efficiency indexes of the investment were determined for the case of optimal line productivity. The results of the optimisation analysis show that under appropriate conditions, such as the prices of components and of the produced fuels, the production of fuels from sewage sludge can be profitable. In the case of PBT fuel, the calculated economic indexes show the best profitability for plant capacities above 1.5 t h⁻¹, while production of PBM fuel is most beneficial at the maximum searched capacity of 3.0 t h⁻¹. Sensitivity analyses carried out during the investigation show that the influence of both technical and economic assumptions on the location of the maximum of the objective function (net present value) is significant. © The Author(s) 2015.
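    The search procedure described above — evaluating net present value over a capacity grid of 0.5 to 3 t h⁻¹ in steps of 0.5 and taking the maximiser — can be sketched as follows. The cost and revenue model and the discount rate here are invented placeholders, not the paper's figures.

    ```python
    # Toy sketch of the grid-search optimisation: maximise NPV over the
    # searched capacity range. All economic parameters are assumed.
    import numpy as np

    def npv(capacity_tph, years=10, rate=0.08):
        """NPV of a fuel-production line at a given input capacity (t/h)."""
        capex = 1.2e6 * capacity_tph ** 0.8           # economies of scale (assumed)
        annual_cash = 0.35e6 * capacity_tph - 0.15e6  # revenue minus fixed costs
        discount = sum(1 / (1 + rate) ** t for t in range(1, years + 1))
        return -capex + annual_cash * discount

    capacities = np.arange(0.5, 3.01, 0.5)            # search step 0.5 t/h
    best = max(capacities, key=npv)                   # capacity maximising NPV
    ```

    With these placeholder parameters the marginal NPV is positive over the whole grid, so the optimum sits at the upper bound of the searched range, mirroring the PBM-fuel result in the abstract; a steeper cost curve would move the maximum inside the grid, as for PBT fuel.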

  20. Concurrent enterprise: a conceptual framework for enterprise supply-chain network activities

    NASA Astrophysics Data System (ADS)

    Addo-Tenkorang, Richard; Helo, Petri T.; Kantola, Jussi

    2017-04-01

    Supply-chain management (SCM) in manufacturing industries has evolved significantly over the years. Recently, research has increasingly focused on the development of integrated solutions: a collaborative optimisation of geographical, just-in-time (JIT), quality (customer demand/satisfaction) and return-on-investment (profit) aspects of organisational management and planning through 'best practice' business-process management concepts and applications, employing system tools such as enterprise resource planning (ERP)-SCM information technology (IT) enablers to enhance enterprise integrated product development and concurrent engineering principles. This article draws on three main organisation-theory applications in positioning its assumptions. It proposes a feasible industry-specific framework not currently included within the SCOR model's level-four implementation level, nor within other existing SCM integration reference models such as the MIT Process Handbook's Process Interchange Format (PIF) and the TOVE project, and one which could also be replicated in other SCs. The wider focus of this paper's contribution, however, is a framework complementary to the SCC's SCOR reference model. Quantitative empirical closed-ended questionnaires, in addition to the main data collected from a qualitative empirical real-life industrial pilot case study, were used to propose a conceptual concurrent-enterprise framework for SCM network activities. The research adopts a design structure matrix (DSM) simulation approach to propose an optimal enterprise SCM-networked, value-adding, customised master-data-management platform/portal for efficient SCM network information exchange and an effective supply-chain (SC) network systems-design team structure. Furthermore, social network theory analysis is employed, in triangulation with statistical correlation analysis, to assess the frequency, importance, degree of collaboration, mutual trust, and roles and responsibilities within the enterprise SCM network for systems product development (PD) design teams' technical communication, alongside extensive literature reviews.
