Sample records for optimise rough set

  1. Using Optimisation Techniques to Granulise Rough Set Partitions

    NASA Astrophysics Data System (ADS)

    Crossingham, Bodie; Marwala, Tshilidzi

    2007-11-01

    This paper presents an approach to optimise rough set partition sizes using various optimisation techniques. Three optimisation techniques are implemented to perform the granularisation process, namely genetic algorithm (GA), hill climbing (HC) and simulated annealing (SA). These optimisation methods maximise the classification accuracy of the rough sets. The proposed rough set partition method is tested on a set of demographic properties of individuals obtained from the South African antenatal survey. The three techniques are compared in terms of their computational time, accuracy and number of rules produced when applied to the Human Immunodeficiency Virus (HIV) data set. The optimised methods' results are compared to a well-known non-optimised discretisation method, equal-width-bin partitioning (EWB). The accuracies achieved after optimising the partitions using GA, HC and SA are 66.89%, 65.84% and 65.48% respectively, compared to 59.86% for EWB. In addition to providing the plausibilities of the estimated HIV status, the rough sets also provide linguistic rules describing how the demographic parameters drive the risk of HIV.
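
    As a rough illustration of the granularisation idea described above (not the authors' exact GA/HC/SA implementation), the sketch below hill-climbs the cut points of a single-attribute partition so that a majority-vote rule per partition cell maximises classification accuracy. The data, the number of cut points and the helper names are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-in data: one continuous attribute and a binary outcome
    # (the paper uses several demographic attributes from an antenatal survey).
    x = rng.uniform(0, 1, 500)
    y = (x + rng.normal(0, 0.2, 500) > 0.5).astype(int)

    def accuracy(cuts):
        """Classification accuracy of a majority-vote rule per partition cell."""
        bins = np.digitize(x, np.sort(cuts))
        correct = 0
        for b in np.unique(bins):
            cell = y[bins == b]
            correct += max(cell.sum(), len(cell) - cell.sum())  # majority class
        return correct / len(y)

    # Simple hill climbing over the cut points (one of the three optimisers compared).
    cuts = rng.uniform(0, 1, 3)              # 3 random cut points -> 4 partitions
    best = accuracy(cuts)
    for _ in range(2000):
        cand = np.clip(cuts + rng.normal(0, 0.05, cuts.shape), 0, 1)
        a = accuracy(cand)
        if a >= best:
            cuts, best = cand, a

    print("optimised cut points:", np.sort(cuts), "accuracy:", round(best, 3))
    ```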

  2. Optimisation of dispersion parameters of Gaussian plume model for CO₂ dispersion.

    PubMed

    Liu, Xiong; Godbole, Ajit; Lu, Cheng; Michal, Guillaume; Venton, Philip

    2015-11-01

    Carbon capture and storage (CCS) and enhanced oil recovery (EOR) projects entail the possibility of accidental release of carbon dioxide (CO2) into the atmosphere. To quantify the spread of CO2 following such a release, the 'Gaussian' dispersion model is often used to estimate the resulting CO2 concentration levels in the surroundings. The Gaussian model enables quick estimates of the concentration levels. However, the traditionally recommended values of the 'dispersion parameters' in the Gaussian model may not be directly applicable to CO2 dispersion. This paper presents an optimisation technique to obtain the dispersion parameters in order to achieve a quick estimation of CO2 concentration levels in the atmosphere following CO2 blowouts. The optimised dispersion parameters enable the Gaussian model to produce quick estimates of CO2 concentration levels, precluding the need to set up and run much more complicated models. Computational fluid dynamics (CFD) models were employed to produce reference CO2 dispersion profiles for various atmospheric stability classes (ASCs), 'source strengths' and degrees of ground roughness. The performance of the CFD models was validated against the 'Kit Fox' field measurements, involving dispersion over a flat horizontal terrain with both low and high roughness regions. An optimisation model employing a genetic algorithm (GA) to determine the best dispersion parameters in the Gaussian plume model was set up. Optimum values of the dispersion parameters for different ASCs that can be used in the Gaussian plume model for predicting CO2 dispersion were obtained.
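
    The standard Gaussian plume equation and a simple least-squares fit of the dispersion parameters can be sketched as follows. This is a minimal stand-in for the paper's GA-based optimisation against CFD reference profiles: the reference data are synthetic, and the source strength, wind speed and release height are hypothetical values.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    Q, u, H = 1.0, 3.0, 2.0      # source strength, wind speed, release height (assumed)

    def plume(y, z, sig_y, sig_z):
        """Standard Gaussian plume concentration with ground reflection."""
        return (Q / (2 * np.pi * u * sig_y * sig_z)
                * np.exp(-y**2 / (2 * sig_y**2))
                * (np.exp(-(z - H)**2 / (2 * sig_z**2))
                   + np.exp(-(z + H)**2 / (2 * sig_z**2))))

    # Synthetic "reference" profile standing in for the CFD results of the paper.
    yy, zz = np.meshgrid(np.linspace(-50, 50, 21), np.linspace(0, 20, 11))
    c_ref = plume(yy, zz, 12.0, 6.0)

    def sse(p):
        sig_y, sig_z = np.abs(p)
        return np.sum((plume(yy, zz, sig_y, sig_z) - c_ref) ** 2)

    # Simple local optimiser in place of the paper's genetic algorithm.
    fit = minimize(sse, x0=[5.0, 5.0], method="Nelder-Mead")
    print("fitted sigma_y, sigma_z:", np.round(np.abs(fit.x), 2))
    ```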

  3. Structure zone diagram and particle incorporation of nickel brush plated composite coatings

    PubMed Central

    Isern, L.; Impey, S.; Almond, H.; Clouser, S. J.; Endrino, J. L.

    2017-01-01

    This work studies the deposition of aluminium-incorporated nickel coatings by brush electroplating, focusing on the electroplating setup and processing parameters. The setup was optimised in order to increase the volume of particle incorporation. The optimised design focused on increasing the plating solution flow to avoid sedimentation, and as a result the particle transport experienced a three-fold increase when compared with the traditional setup. The influence of bath load, current density and the brush material used was investigated. Both current density and brush material have a significant impact on the morphology and composition of the coatings. Higher current densities and non-abrasive brushes produce rough, particle-rich samples. Different combinations of these two parameters influence the surface characteristics differently, as illustrated in a Structure Zone Diagram. Finally, surfaces featuring crevices and peaks incorporate between 3.5 and 20 times more particles than smoother coatings. The presence of such features has been quantified using average surface roughness Ra and Abbott-Firestone curves. The combination of optimised setup and rough surface increased the particle content of the composite to 28 at.%. PMID:28300159

  4. Structure zone diagram and particle incorporation of nickel brush plated composite coatings

    NASA Astrophysics Data System (ADS)

    Isern, L.; Impey, S.; Almond, H.; Clouser, S. J.; Endrino, J. L.

    2017-03-01

    This work studies the deposition of aluminium-incorporated nickel coatings by brush electroplating, focusing on the electroplating setup and processing parameters. The setup was optimised in order to increase the volume of particle incorporation. The optimised design focused on increasing the plating solution flow to avoid sedimentation, and as a result the particle transport experienced a three-fold increase when compared with the traditional setup. The influence of bath load, current density and the brush material used was investigated. Both current density and brush material have a significant impact on the morphology and composition of the coatings. Higher current densities and non-abrasive brushes produce rough, particle-rich samples. Different combinations of these two parameters influence the surface characteristics differently, as illustrated in a Structure Zone Diagram. Finally, surfaces featuring crevices and peaks incorporate between 3.5 and 20 times more particles than smoother coatings. The presence of such features has been quantified using average surface roughness Ra and Abbott-Firestone curves. The combination of optimised setup and rough surface increased the particle content of the composite to 28 at.%.

  5. Structure zone diagram and particle incorporation of nickel brush plated composite coatings.

    PubMed

    Isern, L; Impey, S; Almond, H; Clouser, S J; Endrino, J L

    2017-03-16

    This work studies the deposition of aluminium-incorporated nickel coatings by brush electroplating, focusing on the electroplating setup and processing parameters. The setup was optimised in order to increase the volume of particle incorporation. The optimised design focused on increasing the plating solution flow to avoid sedimentation, and as a result the particle transport experienced a three-fold increase when compared with the traditional setup. The influence of bath load, current density and the brush material used was investigated. Both current density and brush material have a significant impact on the morphology and composition of the coatings. Higher current densities and non-abrasive brushes produce rough, particle-rich samples. Different combinations of these two parameters influence the surface characteristics differently, as illustrated in a Structure Zone Diagram. Finally, surfaces featuring crevices and peaks incorporate between 3.5 and 20 times more particles than smoother coatings. The presence of such features has been quantified using average surface roughness Ra and Abbott-Firestone curves. The combination of optimised setup and rough surface increased the particle content of the composite to 28 at.%.

  6. Optimization of Surface Roughness and Wall Thickness in Dieless Incremental Forming Of Aluminum Sheet Using Taguchi

    NASA Astrophysics Data System (ADS)

    Hamedon, Zamzuri; Kuang, Shea Cheng; Jaafar, Hasnulhadi; Azhari, Azmir

    2018-03-01

    Incremental sheet forming is a versatile sheet metal forming process in which a sheet metal is formed into its final shape by a series of localised deformations without a specialised die. However, it still has many shortcomings that need to be overcome, such as geometric accuracy, surface roughness, formability and forming speed. This project focuses on minimising the surface roughness of aluminium sheet and improving its thickness uniformity in incremental sheet forming via optimisation of wall angle, feed rate and step size. In addition, the effect of wall angle, feed rate and step size on the surface roughness and thickness uniformity of the aluminium sheet was investigated. From the results, it was observed that surface roughness and thickness uniformity varied inversely due to the formation of surface waviness. Increasing the feed rate and decreasing the step size produced a lower surface roughness, while a more uniform thickness reduction was obtained by reducing the wall angle and step size. Using Taguchi analysis, the optimum parameters for minimum surface roughness and uniform thickness reduction of the aluminium sheet were determined. The findings of this project help to reduce the time needed to optimise surface roughness and thickness uniformity in incremental sheet forming.
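
    A minimal sketch of the Taguchi analysis step (smaller-the-better signal-to-noise ratios averaged per factor level) is shown below. The L9 array layout and roughness values are hypothetical, not the project's measurements.

    ```python
    import numpy as np

    # Hypothetical L9 orthogonal array: 3 factors (wall angle, feed rate, step size), 3 levels each.
    L9 = np.array([[1, 1, 1], [1, 2, 2], [1, 3, 3],
                   [2, 1, 2], [2, 2, 3], [2, 3, 1],
                   [3, 1, 3], [3, 2, 1], [3, 3, 2]])

    # Hypothetical measured surface roughness (um) for each of the 9 runs.
    Ra = np.array([2.1, 1.8, 1.6, 2.4, 1.9, 2.2, 2.8, 2.5, 2.3])

    # Smaller-the-better signal-to-noise ratio.
    sn = -10 * np.log10(Ra ** 2)

    # Mean S/N ratio per factor level; the level with the highest mean is preferred.
    for f, name in enumerate(["wall angle", "feed rate", "step size"]):
        means = [sn[L9[:, f] == lvl].mean() for lvl in (1, 2, 3)]
        print(name, [round(m, 2) for m in means])
    ```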

  7. Optimisation of wire-cut EDM process parameter by Grey-based response surface methodology

    NASA Astrophysics Data System (ADS)

    Kumar, Amit; Soota, Tarun; Kumar, Jitendra

    2018-03-01

    Wire electric discharge machining (WEDM) is one of the advanced machining processes. Response surface methodology coupled with Grey relational analysis has been proposed and used to optimise the machining parameters of WEDM. A face-centred cubic design is used for conducting experiments on high speed steel (HSS) M2 grade workpiece material. The regression model of significant factors such as pulse-on time, pulse-off time, peak current and wire feed is considered for optimising the response variables: material removal rate (MRR), surface roughness and kerf width. The optimal combination of machining parameters was obtained using the Grey relational grade. ANOVA is applied to determine the significance of the input parameters in optimising the Grey relational grade.
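
    The grey relational grade used to rank parameter combinations can be computed roughly as follows. The response values, the distinguishing coefficient and the equal response weights are illustrative assumptions, not the paper's data.

    ```python
    import numpy as np

    # Hypothetical responses for 9 WEDM runs: MRR (larger-better), Ra and kerf (smaller-better).
    mrr  = np.array([4.1, 5.2, 6.0, 4.8, 5.5, 6.3, 5.0, 5.8, 6.5])
    ra   = np.array([2.4, 2.1, 2.6, 2.0, 2.3, 2.8, 1.9, 2.2, 2.7])
    kerf = np.array([0.32, 0.30, 0.35, 0.29, 0.31, 0.36, 0.28, 0.30, 0.34])

    def normalise(x, larger_is_better):
        return ((x - x.min()) / (x.max() - x.min()) if larger_is_better
                else (x.max() - x) / (x.max() - x.min()))

    def grey_coefficient(x_norm, zeta=0.5):
        delta = 1.0 - x_norm                  # deviation from the ideal sequence
        return (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

    coeffs = np.vstack([grey_coefficient(normalise(mrr, True)),
                        grey_coefficient(normalise(ra, False)),
                        grey_coefficient(normalise(kerf, False))])
    grade = coeffs.mean(axis=0)               # grey relational grade per run (equal weights)
    print("best run:", int(grade.argmax()) + 1, "grade:", round(grade.max(), 3))
    ```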

  8. Laser polishing of 3D printed mesoscale components

    NASA Astrophysics Data System (ADS)

    Bhaduri, Debajyoti; Penchev, Pavel; Batal, Afif; Dimov, Stefan; Soo, Sein Leung; Sten, Stella; Harrysson, Urban; Zhang, Zhenxue; Dong, Hanshan

    2017-05-01

    Laser polishing of various engineered materials such as glass, silica, steel, nickel and titanium alloys, has attracted considerable interest in the last 20 years due to its superior flexibility, operating speed and capability for localised surface treatment compared to conventional mechanical based methods. The paper initially reports results from process optimisation experiments aimed at investigating the influence of laser fluence and pulse overlap parameters on resulting workpiece surface roughness following laser polishing of planar 3D printed stainless steel (SS316L) specimens. A maximum reduction in roughness of over 94% (from ∼3.8 to ∼0.2 μm Sa) was achieved at the optimised settings (fluence of 9 J/cm2 and overlap factors of 95% and 88-91% along beam scanning and step-over directions respectively). Subsequent analysis using both X-ray photoelectron spectroscopy (XPS) and glow discharge optical emission spectroscopy (GDOES) confirmed the presence of surface oxide layers (predominantly consisting of Fe and Cr phases) up to a depth of ∼0.5 μm when laser polishing was performed under normal atmospheric conditions. Conversely, formation of oxide layers was negligible when operating in an inert argon gas environment. The microhardness of the polished specimens was primarily influenced by the input thermal energy, with greater sub-surface hardness (up to ∼60%) recorded in the samples processed with higher energy density. Additionally, all of the polished surfaces were free of the scratch marks, pits, holes, lumps and irregularities that were prevalent on the as-received stainless steel samples. The optimised laser polishing technology was consequently implemented for serial finishing of structured 3D printed mesoscale SS316L components. This led to substantial reductions in areal Sa and St parameters by 75% (0.489-0.126 μm) and 90% (17.71-1.21 μm) respectively, without compromising the geometrical accuracy of the native 3D printed samples.

  9. Understanding the role of monolayers in retarding evaporation from water storage bodies

    NASA Astrophysics Data System (ADS)

    Fellows, Christopher M.; Coop, Paul A.; Lamb, David W.; Bradbury, Ronald C.; Schiretz, Helmut F.; Woolley, Andrew J.

    2015-03-01

    A 'barrier model' for the retardation of evaporation by monomolecular films does not explain the effect of air velocity on relative evaporation rates in the presence and absence of such films. An alternative mechanism attributes the reduced evaporation to a reduction of surface roughness, which in turn increases the effective vapour pressure of water above the surface. Evaporation suppression effectiveness under field conditions should be predictable from measurements of the surface dilational modulus of monolayers, and research directed at optimising this mechanism should be more fruitful than research aimed at optimising a monolayer to provide an impermeable barrier.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKemmish, Laura K., E-mail: laura.mckemmish@gmail.com; Research School of Chemistry, Australian National University, Canberra

    Algorithms for the efficient calculation of two-electron integrals in the newly developed mixed ramp-Gaussian basis sets are presented, alongside a Fortran90 implementation of these algorithms, RAMPITUP. These new basis sets have significant potential to (1) give some speed-up (estimated at up to 20% for large molecules in fully optimised code) to general-purpose Hartree-Fock (HF) and density functional theory quantum chemistry calculations, replacing all-Gaussian basis sets, and (2) give very large speed-ups for calculations of core-dependent properties, such as electron density at the nucleus, NMR parameters, relativistic corrections, and total energies, replacing the current use of Slater basis functions or very large specialised all-Gaussian basis sets for these purposes. This initial implementation already demonstrates roughly 10% speed-ups in HF/R-31G calculations compared to HF/6-31G calculations for large linear molecules, demonstrating the promise of this methodology, particularly for the second application. As well as the reduction in the total primitive number in R-31G compared to 6-31G, this timing advantage can be attributed to the significant reduction in the number of mathematically complex intermediate integrals after modelling each ramp-Gaussian basis-function-pair as a sum of ramps on a single atomic centre.

  11. Rough set classification based on quantum logic

    NASA Astrophysics Data System (ADS)

    Hassan, Yasser F.

    2017-11-01

    By combining the advantages of quantum computing and soft computing, the paper shows that rough sets can be used with quantum logic for classification and recognition systems. We suggest a new definition of rough set theory as quantum logic theory. Rough approximations are essential elements in rough set theory; the quantum rough set model for set-valued data directly constructs set approximations based on a kind of quantum similarity relation presented here. Theoretical analyses demonstrate that the new quantum rough set model yields a new type of decision rule with less redundancy, which can be used to give accurate classification using principles of quantum superposition and non-linear quantum relations. To our knowledge, this is the first attempt to define rough sets in a quantum representation rather than a logical or set-theoretic one. Experiments on data sets have demonstrated that the proposed model is more accurate than traditional rough sets in terms of finding optimal classifications.

  12. Surface correlations of hydrodynamic drag for transitionally rough engineering surfaces

    NASA Astrophysics Data System (ADS)

    Thakkar, Manan; Busse, Angela; Sandham, Neil

    2017-02-01

    Rough surfaces are usually characterised by a single equivalent sand-grain roughness height scale that typically needs to be determined from laboratory experiments. Recently, this method has been complemented by a direct numerical simulation approach, whereby representative surfaces can be scanned and the roughness effects computed over a range of Reynolds numbers. This development raises the prospect over the coming years of having enough data for different types of rough surfaces to be able to relate surface characteristics to roughness effects, such as the roughness function that quantifies the downward displacement of the logarithmic law of the wall. In the present contribution, we use simulation data for 17 irregular surfaces at the same friction Reynolds number, at which they are in the transitionally rough regime. All surfaces are scaled to the same physical roughness height. Mean streamwise velocity profiles show a wide range of roughness function values, while the velocity defect profiles show a good collapse. Profile peaks of the turbulent kinetic energy also vary depending on the surface. We then consider which surface properties are important and how new properties can be incorporated into an empirical model, the accuracy of which can then be tested. Optimised models with several roughness parameters are systematically developed for the roughness function and profile peak turbulent kinetic energy. In determining the roughness function, besides the known parameters of solidity (or frontal area ratio) and skewness, it is shown that the streamwise correlation length and the root-mean-square roughness height are also significant. The peak turbulent kinetic energy is determined by the skewness and root-mean-square roughness height, along with the mean forward-facing surface angle and spanwise effective slope. The results suggest the feasibility of relating rough-wall flow properties (throughout the range from hydrodynamically smooth to fully rough) to surface parameters.
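
    A minimal sketch of fitting an empirical multi-parameter roughness-function model by least squares is given below. The surface statistics, the linear model form and the coefficients are synthetic placeholders, not the authors' optimised model.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical surface statistics for 17 surfaces: solidity, skewness,
    # streamwise correlation length and rms roughness height (all non-dimensional).
    n = 17
    X = np.column_stack([rng.uniform(0.05, 0.4, n),   # solidity
                         rng.uniform(-1, 1, n),       # skewness
                         rng.uniform(0.5, 3, n),      # streamwise correlation length
                         rng.uniform(0.02, 0.1, n)])  # rms roughness height

    # Hypothetical roughness-function values Delta U+ for the same surfaces.
    dU = 2 + 8*X[:, 0] + 1.5*X[:, 1] - 0.5*X[:, 2] + 20*X[:, 3] + rng.normal(0, 0.2, n)

    # Least-squares fit of a linear model Delta U+ = b0 + b . x
    A = np.column_stack([np.ones(n), X])
    coef, *_ = np.linalg.lstsq(A, dU, rcond=None)
    print("fitted coefficients:", np.round(coef, 2))
    ```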

  13. Model fitting for small skin permeability data sets: hyperparameter optimisation in Gaussian Process Regression.

    PubMed

    Ashrafi, Parivash; Sun, Yi; Davey, Neil; Adams, Roderick G; Wilkinson, Simon C; Moss, Gary Patrick

    2018-03-01

    The aim of this study was to investigate how to improve predictions from Gaussian Process models by optimising the model hyperparameters. Optimisation methods, including Grid Search, Conjugate Gradient, Random Search, Evolutionary Algorithm and Hyper-prior, were evaluated and applied to previously published data. Data sets were also altered in a structured manner to reduce their size while retaining the range, or 'chemical space', of the key descriptors, to assess the effect of the data range on model quality. The Hyper-prior Smoothbox kernel resulted in the best models for the majority of data sets, and these exhibited significantly better performance than benchmark quantitative structure-permeability relationship (QSPR) models. When the data sets were systematically reduced in size, the different optimisation methods generally retained their statistical quality, whereas benchmark QSPR models performed poorly. The design of the data set, and possibly also the approach to validation of the model, is critical in the development of improved models. The size of the data set, if carefully controlled, was not generally a significant factor for these models, and models of excellent statistical quality could be produced from substantially smaller data sets. © 2018 Royal Pharmaceutical Society.
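
    A minimal sketch of Gaussian Process hyperparameter optimisation on a small data set is shown below, using scikit-learn's marginal-likelihood maximisation with random restarts as a stand-in for the optimisation strategies compared in the paper; the synthetic data and kernel choice are assumptions.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

    rng = np.random.default_rng(2)

    # Synthetic stand-in for a small permeability data set (target vs. two descriptors).
    X = rng.uniform(-1, 1, (40, 2))
    y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.1, 40)

    # Maximise the log marginal likelihood over kernel hyperparameters,
    # with random restarts to reduce the risk of poor local optima.
    kernel = ConstantKernel() * RBF(length_scale=[1.0, 1.0]) + WhiteKernel()
    gp = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=10, random_state=0)
    gp.fit(X, y)
    print("optimised kernel:", gp.kernel_)
    print("log marginal likelihood:", round(gp.log_marginal_likelihood_value_, 2))
    ```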

  14. BluePyOpt: Leveraging Open Source Software and Cloud Infrastructure to Optimise Model Parameters in Neuroscience.

    PubMed

    Van Geit, Werner; Gevaert, Michael; Chindemi, Giuseppe; Rössert, Christian; Courcol, Jean-Denis; Muller, Eilif B; Schürmann, Felix; Segev, Idan; Markram, Henry

    2016-01-01

    At many scales in neuroscience, appropriate mathematical models take the form of complex dynamical systems. Parameterizing such models to conform to the multitude of available experimental constraints is a global non-linear optimisation problem with a complex fitness landscape, requiring numerical techniques to find suitable approximate solutions. Stochastic optimisation approaches, such as evolutionary algorithms, have been shown to be effective, but often the setting up of such optimisations and the choice of a specific search algorithm and its parameters is non-trivial, requiring domain-specific expertise. Here we describe BluePyOpt, a Python package targeted at the broad neuroscience community to simplify this task. BluePyOpt is an extensible framework for data-driven model parameter optimisation that wraps and standardizes several existing open-source tools. It simplifies the task of creating and sharing these optimisations, and the associated techniques and knowledge. This is achieved by abstracting the optimisation and evaluation tasks into various reusable and flexible discrete elements according to established best-practices. Further, BluePyOpt provides methods for setting up both small- and large-scale optimisations on a variety of platforms, ranging from laptops to Linux clusters and cloud-based compute infrastructures. The versatility of the BluePyOpt framework is demonstrated by working through three representative neuroscience specific use cases.
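
    A minimal sketch of the underlying idea, data-driven parameter fitting with a simple evolution strategy, is given below. It does not use the BluePyOpt API; the toy model, fitness function and strategy settings are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Toy "model": exponential decay with two free parameters (amplitude, time constant),
    # standing in for a neuron model constrained by experimental data.
    t = np.linspace(0, 1, 50)
    target = 1.2 * np.exp(-t / 0.3)                  # experimental trace to match

    def fitness(p):
        a, tau = p
        return np.sum((a * np.exp(-t / tau) - target) ** 2)

    # Minimal (mu, lambda) evolution strategy; real frameworks wrap far more machinery.
    mu, lam, sigma = 5, 20, 0.2
    pop = rng.uniform(0.1, 2.0, (mu, 2))
    for gen in range(100):
        parents = pop[rng.integers(0, mu, lam)]
        offspring = np.clip(parents + rng.normal(0, sigma, (lam, 2)), 1e-3, None)
        scores = np.array([fitness(p) for p in offspring])
        pop = offspring[np.argsort(scores)[:mu]]     # keep the mu best
        sigma *= 0.98                                # slowly tighten the search

    print("best parameters (a, tau):", np.round(pop[0], 3))
    ```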

  15. Optimisation Of Cutting Parameters Of Composite Material Laser Cutting Process By Taguchi Method

    NASA Astrophysics Data System (ADS)

    Lokesh, S.; Niresh, J.; Neelakrishnan, S.; Rahul, S. P. Deepak

    2018-03-01

    The aim of this work is to develop a laser cutting process model that can predict the relationship between the process input parameters and the resultant surface roughness and kerf width characteristics. The research conducted is based on Design of Experiment (DOE) analysis. Response Surface Methodology (RSM) is used in this work; it is one of the most practical and effective techniques for developing a process model. Although RSM has been used for the optimisation of the laser process, this research investigates laser cutting of materials such as composite wood (veneer) to determine the best laser cutting conditions using the RSM process. The input parameters evaluated are focal length, power supply and cutting speed, the output responses being kerf width, surface roughness and temperature. To efficiently optimise and customise the kerf width and surface roughness characteristics, a machine laser cutting process model using a Taguchi L9 orthogonal methodology was proposed.

  16. BluePyOpt: Leveraging Open Source Software and Cloud Infrastructure to Optimise Model Parameters in Neuroscience

    PubMed Central

    Van Geit, Werner; Gevaert, Michael; Chindemi, Giuseppe; Rössert, Christian; Courcol, Jean-Denis; Muller, Eilif B.; Schürmann, Felix; Segev, Idan; Markram, Henry

    2016-01-01

    At many scales in neuroscience, appropriate mathematical models take the form of complex dynamical systems. Parameterizing such models to conform to the multitude of available experimental constraints is a global non-linear optimisation problem with a complex fitness landscape, requiring numerical techniques to find suitable approximate solutions. Stochastic optimisation approaches, such as evolutionary algorithms, have been shown to be effective, but often the setting up of such optimisations and the choice of a specific search algorithm and its parameters is non-trivial, requiring domain-specific expertise. Here we describe BluePyOpt, a Python package targeted at the broad neuroscience community to simplify this task. BluePyOpt is an extensible framework for data-driven model parameter optimisation that wraps and standardizes several existing open-source tools. It simplifies the task of creating and sharing these optimisations, and the associated techniques and knowledge. This is achieved by abstracting the optimisation and evaluation tasks into various reusable and flexible discrete elements according to established best-practices. Further, BluePyOpt provides methods for setting up both small- and large-scale optimisations on a variety of platforms, ranging from laptops to Linux clusters and cloud-based compute infrastructures. The versatility of the BluePyOpt framework is demonstrated by working through three representative neuroscience specific use cases. PMID:27375471

  17. On Fuzzy Sets and Rough Sets from the Perspective of Indiscernibility

    NASA Astrophysics Data System (ADS)

    Chakraborty, Mihir K.

    The category theoretic approach of Obtułowicz to Pawlak's rough sets has been reintroduced in a somewhat modified form. A generalization is rendered to this approach that has been motivated by the notion of rough membership function. Thus, a link is established between rough sets and L-fuzzy sets for some special lattices. It is shown that a notion of indistinguishability lies at the root of vagueness. This observation in turn gives a common ground to the theories of rough sets and fuzzy sets.
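
    The rough membership function that links rough sets to (L-)fuzzy memberships can be illustrated with a small sketch computing equivalence classes under indiscernibility and the ratio |[x] ∩ X| / |[x]|. The toy information system and attribute values below are hypothetical.

    ```python
    from collections import defaultdict

    # Toy information system: objects described by two attributes.
    objects = {
        "o1": ("red",  "small"), "o2": ("red",  "small"),
        "o3": ("red",  "large"), "o4": ("blue", "large"),
        "o5": ("blue", "large"), "o6": ("blue", "small"),
    }
    X = {"o1", "o3", "o4"}                   # the concept to approximate

    # Equivalence classes of the indiscernibility relation over both attributes.
    classes = defaultdict(set)
    for obj, desc in objects.items():
        classes[desc].add(obj)

    def rough_membership(obj):
        eq = classes[objects[obj]]
        return len(eq & X) / len(eq)         # |[x] ∩ X| / |[x]|

    lower = {o for o in objects if rough_membership(o) == 1.0}
    upper = {o for o in objects if rough_membership(o) > 0.0}
    print("lower approximation:", lower)
    print("upper approximation:", upper)
    print("membership of o2:", rough_membership("o2"))
    ```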

  18. Evolving optimised decision rules for intrusion detection using particle swarm paradigm

    NASA Astrophysics Data System (ADS)

    Sivatha Sindhu, Siva S.; Geetha, S.; Kannan, A.

    2012-12-01

    The aim of this article is to construct a practical intrusion detection system (IDS) that properly analyses the statistics of network traffic patterns and classifies them as normal or anomalous. The objective is to show that the choice of effective network traffic features and a proficient machine-learning paradigm enhances the detection accuracy of the IDS. In this article, a rule-based approach with a family of six decision tree classifiers, namely Decision Stump, C4.5, Naive Bayes Tree, Random Forest, Random Tree and Representative Tree, is introduced to perform the detection of anomalous network patterns. In particular, the proposed swarm optimisation-based approach selects the instances that compose the training set, and an optimised decision tree operating over this training set produces classification rules with improved coverage, classification capability and generalisation ability. Experiments with the Knowledge Discovery and Data Mining (KDD) data set, which contains information on traffic patterns during normal and intrusive behaviour, show that the proposed algorithm produces optimised decision rules and outperforms other machine-learning algorithms.

  19. Application of Rough Sets to Information Retrieval.

    ERIC Educational Resources Information Center

    Miyamoto, Sadaaki

    1998-01-01

    Develops a method of rough retrieval, an application of the rough set theory to information retrieval. The aim is to: (1) show that rough sets are naturally applied to information retrieval in which categorized information structure is used; and (2) show that a fuzzy retrieval scheme is induced from the rough retrieval. (AEF)

  20. Optimising the Collaborative Practice of Nurses in Primary Care Settings Using a Knowledge Translation Approach

    ERIC Educational Resources Information Center

    Oelke, Nelly; Wilhelm, Amanda; Jackson, Karen

    2016-01-01

    The role of nurses in primary care is poorly understood and many are not working to their full scope of practice. Building on previous research, this knowledge translation (KT) project's aim was to facilitate nurses' capacity to optimise their practice in these settings. A Summit engaging Alberta stakeholders in a deliberative discussion was the…

  1. A shrinking hypersphere PSO for engineering optimisation problems

    NASA Astrophysics Data System (ADS)

    Yadav, Anupam; Deep, Kusum

    2016-03-01

    Many real-world and engineering design problems can be formulated as constrained optimisation problems (COPs). Swarm intelligence techniques are a good approach to solving COPs. In this paper an efficient shrinking hypersphere-based particle swarm optimisation (SHPSO) algorithm is proposed for constrained optimisation. The proposed SHPSO is designed in such a way that the movement of each particle is set under the influence of shrinking hyperspheres. A parameter-free approach is used to handle the constraints. The performance of SHPSO is compared against state-of-the-art algorithms on a set of 24 benchmark problems. An exhaustive comparison of the results is provided statistically as well as graphically. Moreover, three engineering design problems, namely welded beam design, compression spring design and pressure vessel design, are solved using SHPSO and the results are compared with state-of-the-art algorithms.
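
    A minimal sketch of a constrained PSO with a parameter-free feasibility rule (feasible beats infeasible; among infeasible, smaller violation wins) is given below. It is a plain PSO, not the shrinking-hypersphere variant, and the toy problem and coefficients are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def f(x):               # objective: minimise x0^2 + x1^2
        return x[0]**2 + x[1]**2

    def violation(x):       # constraint g(x) = 1 - x0 - x1 <= 0
        return max(1.0 - x[0] - x[1], 0.0)

    def better(a, b):
        """Parameter-free feasibility rule: feasible beats infeasible,
        otherwise compare violations, otherwise compare objectives."""
        va, vb = violation(a), violation(b)
        if va == 0 and vb == 0:
            return f(a) < f(b)
        if va == 0 or vb == 0:
            return va == 0
        return va < vb

    n, dim, w, c1, c2 = 30, 2, 0.7, 1.5, 1.5
    x = rng.uniform(-2, 2, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    gbest = min(pbest, key=lambda p: (violation(p), f(p))).copy()

    for _ in range(200):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
        x = x + v
        for i in range(n):
            if better(x[i], pbest[i]):
                pbest[i] = x[i]
            if better(pbest[i], gbest):
                gbest = pbest[i].copy()

    print("best point:", np.round(gbest, 3), "objective:", round(f(gbest), 4))
    ```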

  2. The Study of Imperfection in Rough Set on the Field of Engineering and Education

    NASA Astrophysics Data System (ADS)

    Sheu, Tian-Wei; Liang, Jung-Chin; You, Mei-Li; Wen, Kun-Li

    Rough set theory overlaps with many other theories, especially fuzzy set theory, evidence theory and Boolean reasoning methods, and the rough set methodology has found many real-life applications, such as medical data analysis, finance, banking, engineering, voice recognition and image processing. To date, there has been little research on the imperfection of rough sets. Hence, the main purpose of this paper is to study the imperfection of rough sets in the fields of engineering and education. First, we review the mathematical model of rough sets and give two examples to illustrate our approach: the weighting of influence factors in a muzzle noise suppressor, and the weighting of evaluation factors in English learning. Second, we apply Matlab to develop a complete human-machine interface toolbox to support the complex calculations and to verify the large data sets. Finally, some suggestions are given for future research.

  3. Semantic distance as a critical factor in icon design for in-car infotainment systems.

    PubMed

    Silvennoinen, Johanna M; Kujala, Tuomo; Jokinen, Jussi P P

    2017-11-01

    In-car infotainment systems require icons that enable fluent cognitive information processing and safe interaction while driving. An important issue is how to find an optimised set of icons for different functions in terms of semantic distance. In an optimised icon set, every icon needs to be semantically as close as possible to the function it visually represents and semantically as far as possible from the other functions represented concurrently. In three experiments (N = 21 each), semantic distances of 19 icons to four menu functions were studied with preference rankings, verbal protocols, and the primed product comparisons method. The results show that the primed product comparisons method can be efficiently utilised for finding an optimised set of icons for time-critical applications out of a larger set of icons. The findings indicate the benefits of the novel methodological perspective into the icon design for safety-critical contexts in general. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Optimisation study of a vehicle bumper subsystem with fuzzy parameters

    NASA Astrophysics Data System (ADS)

    Farkas, L.; Moens, D.; Donders, S.; Vandepitte, D.

    2012-10-01

    This paper deals with the design and optimisation for crashworthiness of a vehicle bumper subsystem, which is a key scenario for vehicle component design. The automotive manufacturers and suppliers have to find optimal design solutions for such subsystems that comply with the conflicting requirements of the regulatory bodies regarding functional performance (safety and repairability) and regarding the environmental impact (mass). For the bumper design challenge, an integrated methodology for multi-attribute design engineering of mechanical structures is set up. The integrated process captures the various tasks that are usually performed manually, this way facilitating the automated design iterations for optimisation. Subsequently, an optimisation process is applied that takes the effect of parametric uncertainties into account, such that the system level of failure possibility is acceptable. This optimisation process is referred to as possibility-based design optimisation and integrates the fuzzy FE analysis applied for the uncertainty treatment in crash simulations. This process is the counterpart of the reliability-based design optimisation used in a probabilistic context with statistically defined parameters (variabilities).

  5. Design Optimisation of a Magnetic Field Based Soft Tactile Sensor

    PubMed Central

    Raske, Nicholas; Kow, Junwai; Alazmani, Ali; Ghajari, Mazdak; Culmer, Peter; Hewson, Robert

    2017-01-01

    This paper investigates the design optimisation of a magnetic field based soft tactile sensor, comprised of a magnet and Hall effect module separated by an elastomer. The aim was to minimise sensitivity of the output force with respect to the input magnetic field; this was achieved by varying the geometry and material properties. Finite element simulations determined the magnetic field and structural behaviour under load. Genetic programming produced phenomenological expressions describing these responses. Optimisation studies constrained by a measurable force and stable loading conditions were conducted; these produced Pareto sets of designs from which the optimal sensor characteristics were selected. The optimisation demonstrated a compromise between sensitivity and the measurable force, a fabricated version of the optimised sensor validated the improvements made using this methodology. The approach presented can be applied in general for optimising soft tactile sensor designs over a range of applications and sensing modes. PMID:29099787

  6. Multi-Optimisation Consensus Clustering

    NASA Astrophysics Data System (ADS)

    Li, Jian; Swift, Stephen; Liu, Xiaohui

    Ensemble Clustering has been developed to provide an alternative way of obtaining more stable and accurate clustering results. It aims to avoid the biases of individual clustering algorithms. However, it is still a challenge to develop an efficient and robust method for Ensemble Clustering. Based on an existing ensemble clustering method, Consensus Clustering (CC), this paper introduces an advanced Consensus Clustering algorithm called Multi-Optimisation Consensus Clustering (MOCC), which utilises an optimised Agreement Separation criterion and a Multi-Optimisation framework to improve the performance of CC. Fifteen different data sets are used for evaluating the performance of MOCC. The results reveal that MOCC can generate more accurate clustering results than the original CC algorithm.
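
    A minimal sketch of the basic consensus-clustering idea, combining many base clusterings through a co-association matrix and re-clustering it, is shown below. It does not implement the MOCC agreement-separation criterion, and the data and parameter choices are assumptions (older scikit-learn versions use affinity= instead of metric=).

    ```python
    import numpy as np
    from sklearn.cluster import KMeans, AgglomerativeClustering
    from sklearn.datasets import make_blobs

    X, _ = make_blobs(n_samples=150, centers=3, random_state=0)
    n = len(X)

    # Ensemble of base clusterings with different seeds and k values.
    runs = 20
    coassoc = np.zeros((n, n))
    for seed in range(runs):
        k = np.random.RandomState(seed).randint(2, 6)
        labels = KMeans(n_clusters=k, n_init=5, random_state=seed).fit_predict(X)
        coassoc += (labels[:, None] == labels[None, :])
    coassoc /= runs

    # Consensus partition: cluster the co-association (similarity) matrix.
    consensus = AgglomerativeClustering(n_clusters=3, metric="precomputed",
                                        linkage="average").fit_predict(1 - coassoc)
    print("consensus cluster sizes:", np.bincount(consensus))
    ```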

  7. On the design and optimisation of new fractal antenna using PSO

    NASA Astrophysics Data System (ADS)

    Rani, Shweta; Singh, A. P.

    2013-10-01

    An optimisation technique for a newly shaped fractal structure using particle swarm optimisation with curve fitting is presented in this article. The aim of the particle swarm optimisation is to find the geometry of the antenna for the required user-defined frequency. To assess the effectiveness of the presented method, a set of representative numerical simulations has been carried out and the results are compared with measurements from experimental prototypes built according to the design specifications coming from the optimisation procedure. The proposed fractal antenna resonates in the 5.8 GHz industrial, scientific and medical band, which is suitable for wireless telemedicine applications. The antenna characteristics have been studied using extensive numerical simulations and are experimentally verified. The antenna exhibits well-defined radiation patterns over the band.

  8. Intelligent System Development Using a Rough Sets Methodology

    NASA Technical Reports Server (NTRS)

    Anderson, Gray T.; Shelton, Robert O.

    1997-01-01

    The purpose of this research was to examine the potential of the rough sets technique for developing intelligent models of complex systems from limited information. Rough sets are a simple but promising technology for extracting easily understood rules from data. The rough set methodology has been shown to perform well when used with a large set of exemplars, but its performance with sparse data sets is less certain. The difficulty is that rules will be developed based on just a few examples, each of which might have a large amount of noise associated with it. The question then becomes: what is the probability of a useful rule being developed from such limited information? One nice feature of rough sets is that in unusual situations the technique can give an answer of 'I don't know'. That is, if a case arises that is different from the cases the rough set rules were developed on, the methodology can recognise this and alert human operators. It can also be trained to do this when the desired action is unknown because conflicting examples apply to the same set of inputs. This summer's project was to look at combining rough set theory with statistical theory to develop confidence limits for rules developed by rough sets. Often it is important not to make a certain type of mistake (e.g., false positives or false negatives), so the rules must be biased toward preventing a catastrophic error rather than giving the most likely course of action. A method to determine the best course of action in the light of such constraints was examined. The resulting technique was tested with files containing electrical power line 'signatures' from the space shuttle and with decompression sickness data.
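
    One way to attach statistical confidence limits to a rule learned from only a few examples is a binomial (Wilson) interval on the rule's accuracy. The sketch below illustrates that general idea with hypothetical counts; it is not the specific method developed in the project.

    ```python
    from math import sqrt

    def wilson_interval(successes, n, z=1.96):
        """95% Wilson score interval for a rule's accuracy estimated from n examples."""
        if n == 0:
            return (0.0, 1.0)          # no evidence: effectively 'I don't know'
        p = successes / n
        denom = 1 + z**2 / n
        centre = (p + z**2 / (2 * n)) / denom
        half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
        return (max(0.0, centre - half), min(1.0, centre + half))

    # Hypothetical rule supported by an equivalence class with only 7 matching examples,
    # 6 of which carry the predicted decision.
    low, high = wilson_interval(6, 7)
    print(f"rule accuracy 6/7 = {6/7:.2f}, 95% CI = ({low:.2f}, {high:.2f})")
    ```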

  9. A Method for Decentralised Optimisation in Networks

    NASA Astrophysics Data System (ADS)

    Saramäki, Jari

    2005-06-01

    We outline a method for distributed Monte Carlo optimisation of computational problems in networks of agents, such as peer-to-peer networks of computers. The optimisation and messaging procedures are inspired by gossip protocols and epidemic data dissemination, and are decentralised, i.e. no central overseer is required. In the outlined method, each agent follows simple local rules and seeks for better solutions to the optimisation problem by Monte Carlo trials, as well as by querying other agents in its local neighbourhood. With proper network topology, good solutions spread rapidly through the network for further improvement. Furthermore, the system retains its functionality even in realistic settings where agents are randomly switched on and off.

  10. Flu Diagnosis System Using Jaccard Index and Rough Set Approaches

    NASA Astrophysics Data System (ADS)

    Efendi, Riswan; Azah Samsudin, Noor; Mat Deris, Mustafa; Guan Ting, Yip

    2018-04-01

    The Jaccard index and rough set approaches have frequently been implemented in decision support systems across various domains. Both approaches are appropriate for categorical data analysis. This paper presents the application of set operations to flu diagnosis systems based on two different approaches, the Jaccard index and rough sets. Both approaches are established using set operation concepts, namely intersection and subset. A step-by-step procedure is demonstrated for each approach in diagnosing flu. The similarity and dissimilarity indexes between conditional symptoms and the decision are measured using the Jaccard approach. Additionally, the rough set is used to build decision support rules; these rules are established through redundant data analysis and elimination of unclassified elements. A number of data sets are used to work through the step-by-step procedure for each approach. The results show that rough sets can be used to support the Jaccard approach in establishing decision support rules. Additionally, the Jaccard index is the better approach for investigating the worst condition of patients, while the patients who definitely or possibly have flu can be determined using the rough set approach. The rules may improve the performance of medical diagnosis systems and make preliminary flu diagnosis easier for inexperienced doctors and patients.
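
    The Jaccard similarity between a patient's symptom set and a condition's symptom profile can be computed directly with set operations, as in the short sketch below; the symptom sets are hypothetical, not the paper's data.

    ```python
    def jaccard(a, b):
        """Similarity between two sets: |a ∩ b| / |a ∪ b|."""
        return len(a & b) / len(a | b) if (a | b) else 0.0

    # Hypothetical symptom sets (illustrative only).
    flu_profile = {"fever", "cough", "headache", "muscle pain"}
    patient_1   = {"fever", "cough", "sore throat"}
    patient_2   = {"headache"}

    for name, symptoms in [("patient 1", patient_1), ("patient 2", patient_2)]:
        print(name, "similarity to flu profile:", round(jaccard(symptoms, flu_profile), 2))
    ```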

  11. Optimal type 2 diabetes mellitus management: the randomised controlled OPTIMISE benchmarking study: baseline results from six European countries.

    PubMed

    Hermans, Michel P; Brotons, Carlos; Elisaf, Moses; Michel, Georges; Muls, Erik; Nobels, Frank

    2013-12-01

    Micro- and macrovascular complications of type 2 diabetes have an adverse impact on survival, quality of life and healthcare costs. The OPTIMISE (OPtimal Type 2 dIabetes Management Including benchmarking and Standard trEatment) trial comparing physicians' individual performances with a peer group evaluates the hypothesis that benchmarking, using assessments of change in three critical quality indicators of vascular risk: glycated haemoglobin (HbA1c), low-density lipoprotein-cholesterol (LDL-C) and systolic blood pressure (SBP), may improve quality of care in type 2 diabetes in the primary care setting. This was a randomised, controlled study of 3980 patients with type 2 diabetes. Six European countries participated in the OPTIMISE study (NCT00681850). Quality of care was assessed by the percentage of patients achieving pre-set targets for the three critical quality indicators over 12 months. Physicians were randomly assigned to receive either benchmarked or non-benchmarked feedback. All physicians received feedback on six of their patients' modifiable outcome indicators (HbA1c, fasting glycaemia, total cholesterol, high-density lipoprotein-cholesterol (HDL-C), LDL-C and triglycerides). Physicians in the benchmarking group additionally received information on levels of control achieved for the three critical quality indicators compared with colleagues. At baseline, the percentage of evaluable patients (N = 3980) achieving pre-set targets was 51.2% (HbA1c; n = 2028/3964); 34.9% (LDL-C; n = 1350/3865); 27.3% (systolic blood pressure; n = 911/3337). OPTIMISE confirms that target achievement in the primary care setting is suboptimal for all three critical quality indicators. This represents an unmet but modifiable need to revisit the mechanisms and management of improving care in type 2 diabetes. OPTIMISE will help to assess whether benchmarking is a useful clinical tool for improving outcomes in type 2 diabetes.

  12. rPM6 parameters for phosphorus and sulphur-containing open-shell molecules

    NASA Astrophysics Data System (ADS)

    Saito, Toru; Takano, Yu

    2018-03-01

    In this article, we introduce a reparameterisation of PM6 (rPM6) for phosphorus and sulphur to achieve a better description of open-shell species containing the two elements. Two sets of parameters have been optimised separately using our training sets. The performance of the spin-unrestricted rPM6 (UrPM6) method with the optimised parameters is evaluated on 14 radical species containing either a phosphorus or a sulphur atom, in comparison with the original UPM6 and spin-unrestricted density functional theory (UDFT) methods. The standard UPM6 calculations fail to describe the adiabatic singlet-triplet energy gaps correctly and may cause significant structural mismatches with UDFT-optimised geometries. Leaving aside three difficult cases, tests on 11 open-shell molecules strongly indicate the superior performance of UrPM6, which provides much better agreement with the results of UDFT methods for geometric and electronic properties.

  13. A target recognition method for maritime surveillance radars based on hybrid ensemble selection

    NASA Astrophysics Data System (ADS)

    Fan, Xueman; Hu, Shengliang; He, Jingbo

    2017-11-01

    In order to improve the generalisation ability of the maritime surveillance radar, a novel ensemble selection technique, termed Optimisation and Dynamic Selection (ODS), is proposed. During the optimisation phase, the non-dominated sorting genetic algorithm II for multi-objective optimisation is used to find the Pareto front, i.e. a set of ensembles of classifiers representing different tradeoffs between the classification error and diversity. During the dynamic selection phase, the meta-learning method is used to predict whether a candidate ensemble is competent enough to classify a query instance based on three different aspects, namely, feature space, decision space and the extent of consensus. The classification performance and time complexity of ODS are compared against nine other ensemble methods using a self-built full polarimetric high resolution range profile data-set. The experimental results clearly show the effectiveness of ODS. In addition, the influence of the selection of diversity measures is studied concurrently.

  14. Application of preprocessing filtering on Decision Tree C4.5 and rough set theory

    NASA Astrophysics Data System (ADS)

    Chan, Joseph C. C.; Lin, Tsau Y.

    2001-03-01

    This paper compares two artificial intelligence methods, the Decision Tree C4.5 and Rough Set Theory, on stock market data. The Decision Tree C4.5 is reviewed alongside Rough Set Theory. An enhanced window application is developed to facilitate pre-processing filtering by introducing feature (attribute) transformations, which allow users to input formulas and create new attributes. The application also produces three varieties of data set with delaying, averaging and summation. The results demonstrate the improvement from pre-processing by applying feature (attribute) transformations with Decision Tree C4.5. Moreover, the comparison between Decision Tree C4.5 and Rough Set Theory is based on clarity, automation, accuracy, dimensionality, raw data and speed, supported by the rule sets generated by both algorithms on three different sets of data.

  15. Performance of Ti-multilayer coated tool during machining of MDN431 alloyed steel

    NASA Astrophysics Data System (ADS)

    Badiger, Pradeep V.; Desai, Vijay; Ramesh, M. R.

    2018-04-01

    Turbine forgings and other components are required to have high resistance to corrosion and oxidation, which is why they are highly alloyed with Ni and Cr. Midhani manufactures one such material, MDN431, a hard-to-machine steel with high hardness and strength. PVD-coated inserts provide an answer to this problem through state-of-the-art coating of the WC tool. Machinability studies were carried out on MDN431 steel using uncoated and Ti-multilayer coated WC tool inserts with the Taguchi optimisation technique. In the present investigation, speed (398-625 rpm), feed (0.093-0.175 mm/rev) and depth of cut (0.2-0.4 mm) were varied according to a Taguchi L9 orthogonal array, and the cutting forces and surface roughness (Ra) were subsequently measured. Optimisation of the obtained results was carried out using the Taguchi technique for cutting forces and surface roughness. Linear-fit regression analysis was carried out for each combination of input variables using the Taguchi technique. The experimental results were compared and the developed model was found to be adequate, as supported by proof trials. For the uncoated insert, cutting force and surface roughness depend linearly on speed, feed and depth of cut, whereas for the coated insert they depend inversely on speed and depth of cut. The machined surfaces for coated and uncoated inserts during machining of MDN431 were studied using an optical profilometer.

  16. Granularity refined by knowledge: contingency tables and rough sets as tools of discovery

    NASA Astrophysics Data System (ADS)

    Zytkow, Jan M.

    2000-04-01

    Contingency tables represent data in a granular way and are a well-established tool for inductive generalisation of knowledge from data. We show that the basic concepts of rough sets, such as concept approximation, indiscernibility and reduct, can be expressed in the language of contingency tables. We further demonstrate the relevance to rough set theory of the additional probabilistic information available in contingency tables, in particular statistical tests of significance and predictive strength applied to contingency tables. Tests of both types can help the evaluation mechanisms used in inductive generalisation based on rough sets. Granularity of attributes can be improved in feedback with knowledge discovered in data. We demonstrate how 49er's facilities for (1) contingency table refinement, (2) column and row grouping based on correspondence analysis, and (3) the search for equivalence relations between attributes improve both the granularisation of attributes and the quality of knowledge. Finally, we demonstrate the limitations of knowledge viewed as concept approximation, which is the focus of rough sets. Transcending that focus and reorienting towards predictive knowledge, and towards the related distinction between possible and impossible (or statistically improbable) situations, will be very useful in expanding the rough sets approach to more expressive forms of knowledge.
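
    A significance test and a simple predictive-strength measure on a contingency table can be sketched as follows; the table values are hypothetical.

    ```python
    import numpy as np
    from scipy.stats import chi2_contingency

    # Hypothetical 2x3 contingency table: rows = attribute value, columns = class.
    table = np.array([[30, 10,  5],
                      [ 8, 25, 22]])

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")

    # Cramér's V as a simple measure of predictive strength.
    n = table.sum()
    v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))
    print("Cramér's V =", round(v, 3))
    ```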

  17. Rough Evaluation Structure: Application of Rough Set Theory to Generate Simple Rules for Inconsistent Preference Relation

    NASA Astrophysics Data System (ADS)

    Gehrmann, Andreas; Nagai, Yoshimitsu; Yoshida, Osamu; Ishizu, Syohei

    Since management decision-making has become complex and the preferences of decision-makers frequently become inconsistent, multi-attribute decision-making problems have been studied. To represent inconsistent preference relations, the concept of an evaluation structure was introduced; evaluation structures allow simple rules to be generated that represent inconsistent preference relations. Rough set theory for preference relations has also been studied, and the concept of approximation introduced. One of the main aims of this paper is to introduce the concept of a rough evaluation structure for representing inconsistent preference relations. We apply rough set theory to the evaluation structure and develop a method for generating simple rules for inconsistent preference relations. In this paper, we introduce the concepts of a totally ordered information system, similarity classes of a preference relation, and upper and lower approximations of preference relations. We also show the properties of the rough evaluation structure and provide a simple example. As an application of the rough evaluation structure, we analyse a questionnaire survey of customer preferences about audio players.

  18. Is ICRP guidance on the use of reference levels consistent?

    PubMed

    Hedemann-Jensen, Per; McEwan, Andrew C

    2011-12-01

    In ICRP 103, which has replaced ICRP 60, it is stated that no fundamental changes have been introduced compared with ICRP 60. This is true except that reference levels appear to be applied inconsistently in emergency and existing exposure situations, both in ICRP 103 and in the related publications ICRP 109 and ICRP 111. ICRP 103 emphasises that focus should be on the residual doses after the implementation of protection strategies in emergency and existing exposure situations. If possible, the result of an optimised protection strategy should bring the residual dose below the reference level. Thus the reference level represents the maximum acceptable residual dose after an optimised protection strategy has been implemented. It is not an 'off-the-shelf item' that can be set independently of the prevailing situation; it should be determined as part of the process of optimising the protection strategy, otherwise protection would be sub-optimised. However, ICRP 103 introduces some inconsistent concepts, e.g. paragraph 279 states: 'All exposures above or below the reference level should be subject to optimisation of protection, and particular attention should be given to exposures above the reference level'. If, in fact, all exposures above and below reference levels are subject to the process of optimisation, reference levels appear superfluous. It could be considered that if optimisation of protection below a fixed reference level is necessary, then the reference level has been set too high at the outset. Up until the last phase of the preparation of ICRP 103, the concept of a dose constraint was recommended to constrain the optimisation of protection in all types of exposure situations. In the final phase, the term 'dose constraint' was changed to 'reference level' for emergency and existing exposure situations. However, it seems that in ICRP 103 it was not fully recognised that dose constraints and reference levels are conceptually different. The use of reference levels in radiological protection is reviewed. It is concluded that the recommendations in ICRP 103 and related ICRP publications appear inconsistent regarding the use of reference levels in existing and emergency exposure situations.

  19. The 5C Concept and 5S Principles in Inflammatory Bowel Disease Management

    PubMed Central

    Hibi, Toshifumi; Panaccione, Remo; Katafuchi, Miiko; Yokoyama, Kaoru; Watanabe, Kenji; Matsui, Toshiyuki; Matsumoto, Takayuki; Travis, Simon; Suzuki, Yasuo

    2017-01-01

    Background and Aims: The international Inflammatory Bowel Disease [IBD] Expert Alliance initiative [2012–2015] served as a platform to define and support areas of best practice in IBD management to help improve outcomes for all patients with IBD. Methods: During the programme, IBD specialists from around the world established by consensus two best practice charters: the 5S Principles and the 5C Concept. Results: The 5S Principles were conceived to provide health care providers with key guidance for improving clinical practice based on best management approaches. They comprise the following categories: Stage the disease; Stratify patients; Set treatment goals; Select appropriate treatment; and Supervise therapy. Optimised management of patients with IBD based on the 5S Principles can be achieved most effectively within an optimised clinical care environment. Guidance on optimising the clinical care setting in IBD management is provided through the 5C Concept, which encompasses: Comprehensive IBD care; Collaboration; Communication; Clinical nurse specialists; and Care pathways. Together, the 5C Concept and 5S Principles provide structured recommendations on organising the clinical care setting and developing best-practice approaches in IBD management. Conclusions: Consideration and application of these two dimensions could help health care providers optimise their IBD centres and collaborate more effectively with their multidisciplinary team colleagues and patients, to provide improved IBD care in daily clinical practice. Ultimately, this could lead to improved outcomes for patients with IBD. PMID:28981622

  20. Global Topology Optimisation

    DTIC Science & Technology

    2016-10-31

    statistical physics. Sec. IV includes several examples of the application of the stochastic method, including matching of a shape to a fixed design, and...an important part of any future application of this method. Second, re-initialization of the level set can lead to small but significant movements of...of engineering design problems [6, 17]. However, many of the relevant applications involve non-convex optimisation problems with multiple locally

  1. Assessment of physiological performance and perception of pushing different wheelchairs on indoor modular units simulating a surface roughness often encountered in under-resourced settings.

    PubMed

    Sasaki, Kotaro; Rispin, Karen

    2017-01-01

    In under-resourced settings where motorised wheelchairs are rarely available, manual wheelchair users with limited upper-body strength and function need to rely on assisting pushers for their mobility. Because travelling surfaces in under-resourced settings are often unpaved and rough, wheelchair pushers can experience high physiological loading. In order to evaluate pushers' physiological loading and to improve wheelchair designs, we built indoor modular units that simulate rough surface conditions and tested the hypothesis that pushing different wheelchairs would result in different physiological performance and perceptions of difficulty on the simulated rough surface. Eighteen healthy subjects pushed two different types of paediatric wheelchairs (Moti-Go, manufactured by Motivation, and KidChair, by Hope Haven) fitted with a 50-kg dummy on the rough and smooth surfaces at self-selected speeds. Oxygen uptake, travelling distance over 6 minutes, and ratings of difficulty were obtained. The results supported our hypothesis, showing that pushing Moti-Go on the rough surface was physiologically less demanding than pushing KidChair, but on the smooth surface the two wheelchairs did not differ significantly. These results indicate that wheelchair designs intended to improve pushers' performance in under-resourced settings should be evaluated on rough surfaces.

  2. A Dual Hesitant Fuzzy Multigranulation Rough Set over Two-Universe Model for Medical Diagnoses

    PubMed Central

    Zhang, Chao; Li, Deyu; Yan, Yan

    2015-01-01

    In medical science, disease diagnosis is a difficult task for medical experts, who must deal with large amounts of uncertain medical information. Moreover, different medical experts may hold views of the medical knowledge base that differ slightly from one another. Thus, to address the problems of uncertain data analysis and group decision making in disease diagnosis, we propose a new rough set model, the dual hesitant fuzzy multigranulation rough set over two universes, obtained by combining dual hesitant fuzzy set and multigranulation rough set theories. Within the framework of our study, both the definition and some basic properties of the proposed model are presented. Finally, we give a general approach that is applied to a decision-making problem in disease diagnosis, and the effectiveness of the approach is demonstrated by a numerical example. PMID:26858772
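    For readers unfamiliar with rough sets, the following minimal Python sketch illustrates only the classical (Pawlak) lower and upper approximations that the dual hesitant fuzzy multigranulation model above generalises; the toy patient table and attribute names are invented and do not come from the paper.

```python
# Minimal sketch of classical (Pawlak) rough set approximations.
# This is NOT the dual hesitant fuzzy multigranulation model of the paper;
# it only illustrates the lower/upper approximation idea it generalises.
from collections import defaultdict

def equivalence_classes(universe, attributes, table):
    """Group objects that are indiscernible on the chosen attributes."""
    classes = defaultdict(set)
    for x in universe:
        key = tuple(table[x][a] for a in attributes)
        classes[key].add(x)
    return list(classes.values())

def approximations(universe, attributes, table, target):
    """Return (lower, upper) approximations of a target set of objects."""
    lower, upper = set(), set()
    for block in equivalence_classes(universe, attributes, table):
        if block <= target:          # block entirely inside the target
            lower |= block
        if block & target:           # block overlaps the target
            upper |= block
    return lower, upper

# Hypothetical toy diagnosis table: symptoms recorded for four patients.
table = {
    "p1": {"fever": "high", "cough": "yes"},
    "p2": {"fever": "high", "cough": "yes"},
    "p3": {"fever": "low",  "cough": "no"},
    "p4": {"fever": "low",  "cough": "yes"},
}
sick = {"p1", "p4"}
low, up = approximations(set(table), ["fever", "cough"], table, sick)
print(low, up)   # the boundary region up - low captures the uncertainty
```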

  3. An Efficient Soft Set-Based Approach for Conflict Analysis

    PubMed Central

    Sutoyo, Edi; Mungad, Mungad; Hamid, Suraya; Herawan, Tutut

    2016-01-01

    Conflict analysis has been used as an important tool in economic, business, governmental and political disputes, games, management negotiations and military operations. Many formal mathematical models have been proposed to handle conflict situations, and one of the most popular is rough set theory, which has been used successfully because of its ability to handle vagueness in conflict data sets. However, computational time remains an issue when determining the certainty, coverage and strength of conflict situations. In this paper, we present an alternative approach to handling conflict situations based on ideas from soft set theory. The novelty of the proposed approach is that, unlike rough set theory, which uses decision rules, it is based on the concept of co-occurrence of parameters in soft set theory. We illustrate the proposed approach by means of a tutorial example of voting analysis in conflict situations. Furthermore, we apply the proposed approach to a real-world dataset of political conflict in the Indonesian Parliament. We show that the proposed approach reduces computational time by up to 3.9% compared with rough set theory. PMID:26928627
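    The co-occurrence-of-parameters idea can be pictured with a small hedged sketch: a binary soft set represented as an agents-by-parameters matrix whose column products count co-occurrences. The matrix, the agreement measure and all values below are illustrative only and are not the decision measures used in the paper.

```python
# Minimal sketch of parameter co-occurrence in a binary soft set
# (agents x parameters), as a stand-in for the idea described above.
# The voting data and the exact decision measures of the paper are not
# reproduced here; names and values are purely illustrative.
import numpy as np

# Rows: agents (e.g. parliament members); columns: parameters (e.g. issues);
# 1 means the agent supports / exhibits the parameter.
F = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
    [0, 1, 1, 1],
])

# Co-occurrence matrix: entry (i, j) counts agents having both parameters.
cooc = F.T @ F

# A simple normalised "agreement" between two parameters i and j:
# co-occurrences divided by the number of agents supporting either one.
def agreement(i, j):
    union = np.count_nonzero(F[:, i] | F[:, j])
    return cooc[i, j] / union if union else 0.0

print(cooc)
print(agreement(0, 3))  # parameters 0 and 3 co-occur in 2 of their 3 supporters
```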

  4. An Efficient Soft Set-Based Approach for Conflict Analysis.

    PubMed

    Sutoyo, Edi; Mungad, Mungad; Hamid, Suraya; Herawan, Tutut

    2016-01-01

    Conflict analysis has been used as an important tool in economic, business, governmental and political disputes, games, management negotiations and military operations. Many formal mathematical models have been proposed to handle conflict situations, and one of the most popular is rough set theory, which has been used successfully because of its ability to handle vagueness in conflict data sets. However, computational time remains an issue when determining the certainty, coverage and strength of conflict situations. In this paper, we present an alternative approach to handling conflict situations based on ideas from soft set theory. The novelty of the proposed approach is that, unlike rough set theory, which uses decision rules, it is based on the concept of co-occurrence of parameters in soft set theory. We illustrate the proposed approach by means of a tutorial example of voting analysis in conflict situations. Furthermore, we apply the proposed approach to a real-world dataset of political conflict in the Indonesian Parliament. We show that the proposed approach reduces computational time by up to 3.9% compared with rough set theory.

  5. Cost optimisation and minimisation of the environmental impact through life cycle analysis of the waste water treatment plant of Bree (Belgium).

    PubMed

    De Gussem, K; Wambecq, T; Roels, J; Fenu, A; De Gueldre, G; Van De Steene, B

    2011-01-01

    An ASM2da model of the full-scale waste water treatment plant of Bree (Belgium) was built and showed very good agreement with reference operational data. This basic model was extended to include an accurate calculation of the environmental footprint and operational costs (energy consumption, dosing of chemicals and sludge treatment). Two optimisation strategies were compared: lowest cost while meeting the effluent consent versus lowest environmental footprint. Six optimisation scenarios were studied, namely (i) implementation of an online control system based on ammonium and nitrate sensors, (ii) implementation of a control on MLSS concentration, (iii) evaluation of the internal recirculation flow, (iv) the oxygen set point, (v) installation of mixing in the aeration tank, and (vi) evaluation of the nitrate setpoint for post-denitrification. Both the cost-based and the environmental impact or Life Cycle Assessment (LCA) based optimisation approaches are able to significantly lower the cost and environmental footprint. However, the LCA approach has some advantages over cost minimisation of an existing full-scale plant: it tends to choose control settings that are more logical, resulting in safer operation of the plant with lower risk of breaching the consents, and it yields a better effluent at a slightly increased cost.

  6. Single tube genotyping of sickle cell anaemia using PCR-based SNP analysis

    PubMed Central

    Waterfall, Christy M.; Cobb, Benjamin D.

    2001-01-01

    Allele-specific amplification (ASA) is a generally applicable technique for the detection of known single nucleotide polymorphisms (SNPs), deletions, insertions and other sequence variations. Conventionally, two reactions are required to determine the zygosity of DNA in a two-allele system, along with significant upstream optimisation to define the specific test conditions. Here, we combine single tube bi-directional ASA with a ‘matrix-based’ optimisation strategy, speeding up the whole process in a reduced reaction set. We use sickle cell anaemia as our model SNP system, a genetic disease that is currently screened using ASA methods. Discriminatory conditions were rapidly optimised enabling the unambiguous identification of DNA from homozygous sickle cell patients (HbS/S), heterozygous carriers (HbA/S) or normal DNA in a single tube. Simple downstream mathematical analyses based on product yield across the optimisation set allow an insight into the important aspects of priming competition and component interactions in this competitive PCR. This strategy can be applied to any polymorphism, defining specific conditions using a multifactorial approach. The inherent simplicity and low cost of this PCR-based method validates bi-directional ASA as an effective tool in future clinical screening and pharmacogenomic research where more expensive fluorescence-based approaches may not be desirable. PMID:11726702

  7. Single tube genotyping of sickle cell anaemia using PCR-based SNP analysis.

    PubMed

    Waterfall, C M; Cobb, B D

    2001-12-01

    Allele-specific amplification (ASA) is a generally applicable technique for the detection of known single nucleotide polymorphisms (SNPs), deletions, insertions and other sequence variations. Conventionally, two reactions are required to determine the zygosity of DNA in a two-allele system, along with significant upstream optimisation to define the specific test conditions. Here, we combine single tube bi-directional ASA with a 'matrix-based' optimisation strategy, speeding up the whole process in a reduced reaction set. We use sickle cell anaemia as our model SNP system, a genetic disease that is currently screened using ASA methods. Discriminatory conditions were rapidly optimised enabling the unambiguous identification of DNA from homozygous sickle cell patients (HbS/S), heterozygous carriers (HbA/S) or normal DNA in a single tube. Simple downstream mathematical analyses based on product yield across the optimisation set allow an insight into the important aspects of priming competition and component interactions in this competitive PCR. This strategy can be applied to any polymorphism, defining specific conditions using a multifactorial approach. The inherent simplicity and low cost of this PCR-based method validates bi-directional ASA as an effective tool in future clinical screening and pharmacogenomic research where more expensive fluorescence-based approaches may not be desirable.

  8. Vehicle trajectory linearisation to enable efficient optimisation of the constant speed racing line

    NASA Astrophysics Data System (ADS)

    Timings, Julian P.; Cole, David J.

    2012-06-01

    A driver model is presented that is capable of optimising the trajectory of a simple dynamic nonlinear vehicle, at constant forward speed, so that progression along a predefined track is maximised as a function of time. In doing so, the model is able to operate a vehicle continually at its lateral-handling limit, maximising vehicle performance. The technique forms part of the solution to the motor racing objective of minimising lap time. A new approach to formulating the minimum lap time problem is motivated by the need for a more computationally efficient and robust tool-set for understanding on-the-limit driving behaviour. This has been achieved through set-point-dependent linearisation of the vehicle model and coupling of the vehicle-track system using an intrinsic coordinate description. Through this, the geometric vehicle trajectory is linearised relative to the track reference, leading to a new path optimisation algorithm that can be posed as a computationally efficient convex quadratic programming problem.
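    As a rough illustration of how a linearised path problem can be posed as a convex quadratic programme, the toy sketch below (using the cvxpy modelling package, assumed installed) penalises a curvature proxy of lateral offsets from a track centreline under track-edge bounds. It is a simplified stand-in with invented data, not the paper's vehicle-track formulation.

```python
# Toy convex QP in the spirit of a linearised path optimisation:
# choose lateral offsets n_k from a track centreline to minimise a
# curvature-like penalty subject to track-edge bounds. This is only a
# sketch under simplifying assumptions, not the paper's formulation.
import numpy as np
import cvxpy as cp

K = 50                       # number of stations along the track
half_width = 3.0             # metres of usable track either side
n = cp.Variable(K)           # lateral offset at each station

# Second-difference operator approximates changes in path curvature.
D2 = np.zeros((K - 2, K))
for k in range(K - 2):
    D2[k, k:k + 3] = [1.0, -2.0, 1.0]

target = 2.0 * np.sin(np.linspace(0, 2 * np.pi, K))  # hypothetical "ideal" line
objective = cp.Minimize(cp.sum_squares(D2 @ n) + 0.1 * cp.sum_squares(n - target))
constraints = [cp.abs(n) <= half_width, n[0] == 0, n[-1] == 0]

prob = cp.Problem(objective, constraints)
prob.solve()
print(prob.status, prob.value)
```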

  9. Structure and weights optimisation of a modified Elman network emotion classifier using hybrid computational intelligence algorithms: a comparative study

    NASA Astrophysics Data System (ADS)

    Sheikhan, Mansour; Abbasnezhad Arabi, Mahdi; Gharavian, Davood

    2015-10-01

    Artificial neural networks are efficient models in pattern recognition applications, but their performance is dependent on employing suitable structure and connection weights. This study used a hybrid method for obtaining the optimal weight set and architecture of a recurrent neural emotion classifier based on gravitational search algorithm (GSA) and its binary version (BGSA), respectively. By considering the features of speech signal that were related to prosody, voice quality, and spectrum, a rich feature set was constructed. To select more efficient features, a fast feature selection method was employed. The performance of the proposed hybrid GSA-BGSA method was compared with similar hybrid methods based on particle swarm optimisation (PSO) algorithm and its binary version, PSO and discrete firefly algorithm, and hybrid of error back-propagation and genetic algorithm that were used for optimisation. Experimental tests on Berlin emotional database demonstrated the superior performance of the proposed method using a lighter network structure.

  10. Computer-aided diagnosis of melanoma using border and wavelet-based texture analysis.

    PubMed

    Garnavi, Rahil; Aldeen, Mohammad; Bailey, James

    2012-11-01

    This paper presents a novel computer-aided diagnosis system for melanoma. The novelty lies in the optimised selection and integration of features derived from textural, border-based and geometrical properties of the melanoma lesion. The texture features are derived using wavelet decomposition, the border features are derived by constructing a boundary-series model of the lesion border and analysing it in the spatial and frequency domains, and the geometry features are derived from shape indexes. The optimised selection of features is achieved using the Gain-Ratio method, which is shown to be computationally efficient for the melanoma diagnosis application. Classification is performed using four classifiers, namely Support Vector Machine, Random Forest, Logistic Model Tree and Hidden Naive Bayes. The proposed diagnostic system is applied to a set of 289 dermoscopy images (114 malignant, 175 benign) partitioned into training, validation and test image sets. The system achieves an accuracy of 91.26% and an AUC value of 0.937 when 23 features are used. Other important findings include (i) the clear advantage gained in complementing texture with border and geometry features, compared to using texture information only, and (ii) the higher contribution of texture features than border-based features in the optimised feature set.
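    Gain-Ratio scoring itself is simple to state; the following sketch computes it for a single discrete feature using invented values, purely to make the selection criterion concrete (the paper's wavelet, border and geometry features are not reproduced).

```python
# Minimal sketch of Gain-Ratio scoring for a single discrete feature.
# The actual system ranks wavelet/border/geometry features; here the
# feature values and labels are purely illustrative.
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def gain_ratio(feature, labels):
    """Information gain of `feature` divided by its intrinsic value."""
    base = entropy(labels)
    values, counts = np.unique(feature, return_counts=True)
    weights = counts / counts.sum()
    cond = sum(w * entropy(labels[feature == v]) for v, w in zip(values, weights))
    gain = base - cond
    intrinsic = float(-(weights * np.log2(weights)).sum())
    return gain / intrinsic if intrinsic > 0 else 0.0

# Hypothetical discretised feature (e.g. "high/low border irregularity").
feature = np.array(["high", "high", "low", "low", "high", "low"])
labels  = np.array(["malignant", "malignant", "benign", "benign", "malignant", "benign"])
print(gain_ratio(feature, labels))   # 1.0: the feature separates the classes perfectly
```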

  11. Achieving optimal SERS through enhanced experimental design

    PubMed Central

    Fisk, Heidi; Westley, Chloe; Turner, Nicholas J.

    2016-01-01

    One of the current limitations surrounding surface‐enhanced Raman scattering (SERS) is the perceived lack of reproducibility. SERS is indeed challenging, and for analyte detection, it is vital that the analyte interacts with the metal surface. However, as this is analyte dependent, there is not a single set of SERS conditions that are universal. This means that experimental optimisation for optimum SERS response is vital. Most researchers optimise one factor at a time, where a single parameter is altered first before going onto optimise the next. This is a very inefficient way of searching the experimental landscape. In this review, we explore the use of more powerful multivariate approaches to SERS experimental optimisation based on design of experiments and evolutionary computational methods. We particularly focus on colloidal‐based SERS rather than thin film preparations as a result of their popularity. © 2015 The Authors. Journal of Raman Spectroscopy published by John Wiley & Sons, Ltd. PMID:27587905

  12. Achieving optimal SERS through enhanced experimental design.

    PubMed

    Fisk, Heidi; Westley, Chloe; Turner, Nicholas J; Goodacre, Royston

    2016-01-01

    One of the current limitations surrounding surface-enhanced Raman scattering (SERS) is the perceived lack of reproducibility. SERS is indeed challenging, and for analyte detection, it is vital that the analyte interacts with the metal surface. However, as this is analyte dependent, there is not a single set of SERS conditions that are universal. This means that experimental optimisation for optimum SERS response is vital. Most researchers optimise one factor at a time, where a single parameter is altered first before going onto optimise the next. This is a very inefficient way of searching the experimental landscape. In this review, we explore the use of more powerful multivariate approaches to SERS experimental optimisation based on design of experiments and evolutionary computational methods. We particularly focus on colloidal-based SERS rather than thin film preparations as a result of their popularity. © 2015 The Authors. Journal of Raman Spectroscopy published by John Wiley & Sons, Ltd.

  13. Optimisation of decentralisation for effective Disaster Risk Reduction (DRR) through the case study of Indonesia

    NASA Astrophysics Data System (ADS)

    Grady, A.; Makarigakis, A.; Gersonius, B.

    2015-09-01

    This paper investigates how to optimise decentralisation for effective disaster risk reduction (DRR) in developing states. There is currently limited literature on empirical analysis of decentralisation for DRR. This paper evaluates decentralised governance for DRR in the case study of Indonesia and provides recommendations for its optimisation. Wider implications are drawn to optimise decentralisation for DRR in developing states more generally. A framework to evaluate the institutional and policy setting was developed which necessitated the use of a gap analysis, desk study and field investigation. Key challenges to decentralised DRR include capacity gaps at lower levels, low compliance with legislation, disconnected policies, issues in communication and coordination and inadequate resourcing. DRR authorities should lead coordination and advocacy on DRR. Sustainable multistakeholder platforms and civil society organisations should fill the capacity gap at lower levels. Dedicated and regulated resources for DRR should be compulsory.

  14. Dietary changes needed to reach nutritional adequacy without increasing diet cost according to income: An analysis among French adults.

    PubMed

    Maillot, Matthieu; Vieux, Florent; Delaere, Fabien; Lluch, Anne; Darmon, Nicole

    2017-01-01

    To explore the dietary changes needed to achieve nutritional adequacy across income levels at constant energy and diet cost. Individual diet modelling was used to design iso-caloric, nutritionally adequate optimised diets for each observed diet in a sample of adult normo-reporters aged ≥20 years (n = 1,719) from the Individual and National Dietary Survey (INCA2), 2006-2007. Diet cost was estimated from mean national food prices (2006-2007). A first set of free-cost models explored the impact of optimisation on the variation of diet cost. A second set of iso-cost models explored the dietary changes induced by the optimisation with cost set equal to the observed one. Analyses of dietary changes were conducted by income quintiles, adjusting for energy intake, sociodemographic and socioeconomic variables, and smoking status. The cost of observed diets increased with increasing income quintiles. In free-cost models, the optimisation increased diet cost on average (+0.22 ± 1.03 euros/d) and within each income quintile, with no significant difference between quintiles, but with systematic increases for observed costs lower than 3.85 euros/d. In iso-cost models, it was possible to design nutritionally adequate diets whatever the initial observed cost. On average, the optimisation at iso-cost increased fruits and vegetables (+171 g/day), starchy foods (+121 g/d), water and beverages (+91 g/d), and dairy products (+20 g/d), and decreased the other food groups (e.g. mixed dishes and salted snacks), leading to increased total diet weight (+300 g/d). Those changes were mostly similar across income quintiles, but lower-income individuals needed to introduce significantly more fruit and vegetables than higher-income ones. In France, the dietary changes needed to reach nutritional adequacy without increasing cost are similar regardless of income, but may be more difficult to implement when the budget for food is lower than 3.85 euros/d.
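    The kind of constrained diet optimisation described above can be pictured, in a highly simplified form, as a linear programme. The sketch below uses scipy.optimize.linprog with invented foods, nutrient contents, prices and requirements; the study's actual models include many more nutrients and a different objective.

```python
# Tiny sketch of iso-cost diet optimisation as a linear programme:
# choose food quantities meeting nutrient floors at (or below) a fixed cost.
# Food names, nutrient contents, prices and targets are all invented;
# the real models in the study are far richer (energy, many nutrients,
# deviation-from-observed-diet objectives).
import numpy as np
from scipy.optimize import linprog

# Columns: foods; rows: nutrient content per 100 g (protein g, fibre g).
nutrients = np.array([
    [8.0, 2.0, 1.0, 3.5],   # protein
    [1.0, 3.0, 2.5, 0.0],   # fibre
])
needs = np.array([60.0, 25.0])               # daily requirements
prices = np.array([0.30, 0.20, 0.15, 0.50])  # euros per 100 g
budget = 3.85                                # euros/day, the threshold cited above

# Objective: minimise total amount of food (a crude proxy), with nutrient
# floors and a cost ceiling expressed as inequality constraints A_ub x <= b_ub.
A_ub = np.vstack([-nutrients, prices])
b_ub = np.concatenate([-needs, [budget]])
res = linprog(c=np.ones(4), A_ub=A_ub, b_ub=b_ub, bounds=[(0, 10)] * 4)
print(res.status, res.x)   # quantities in 100 g units, if a feasible diet exists
```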

  15. Optimisation by hierarchical search

    NASA Astrophysics Data System (ADS)

    Zintchenko, Ilia; Hastings, Matthew; Troyer, Matthias

    2015-03-01

    Finding optimal values for a set of variables relative to a cost function gives rise to some of the hardest problems in physics, computer science and applied mathematics. Although often very simple in their formulation, these problems have a complex cost function landscape which prevents currently known algorithms from efficiently finding the global optimum. Countless techniques have been proposed to partially circumvent this problem, but an efficient method is yet to be found. We present a heuristic, general purpose approach to potentially improve the performance of conventional algorithms or special purpose hardware devices by optimising groups of variables in a hierarchical way. We apply this approach to problems in combinatorial optimisation, machine learning and other fields.

  16. The 5C Concept and 5S Principles in Inflammatory Bowel Disease Management.

    PubMed

    Hibi, Toshifumi; Panaccione, Remo; Katafuchi, Miiko; Yokoyama, Kaoru; Watanabe, Kenji; Matsui, Toshiyuki; Matsumoto, Takayuki; Travis, Simon; Suzuki, Yasuo

    2017-10-27

    The international Inflammatory Bowel Disease [IBD] Expert Alliance initiative [2012-2015] served as a platform to define and support areas of best practice in IBD management to help improve outcomes for all patients with IBD. During the programme, IBD specialists from around the world established by consensus two best practice charters: the 5S Principles and the 5C Concept. The 5S Principles were conceived to provide health care providers with key guidance for improving clinical practice based on best management approaches. They comprise the following categories: Stage the disease; Stratify patients; Set treatment goals; Select appropriate treatment; and Supervise therapy. Optimised management of patients with IBD based on the 5S Principles can be achieved most effectively within an optimised clinical care environment. Guidance on optimising the clinical care setting in IBD management is provided through the 5C Concept, which encompasses: Comprehensive IBD care; Collaboration; Communication; Clinical nurse specialists; and Care pathways. Together, the 5C Concept and 5S Principles provide structured recommendations on organising the clinical care setting and developing best-practice approaches in IBD management. Consideration and application of these two dimensions could help health care providers optimise their IBD centres and collaborate more effectively with their multidisciplinary team colleagues and patients, to provide improved IBD care in daily clinical practice. Ultimately, this could lead to improved outcomes for patients with IBD. Copyright © 2017 European Crohn’s and Colitis Organisation (ECCO). Published by Oxford University Press. All rights reserved. For permissions, please email: journals.permissions@oup.com

  17. Hailstone classifier based on Rough Set Theory

    NASA Astrophysics Data System (ADS)

    Wan, Huisong; Jiang, Shuming; Wei, Zhiqiang; Li, Jian; Li, Fengjiao

    2017-09-01

    Rough Set Theory was used to construct a hailstone classifier. First, a database of radar image features was built. This involved converting the base data returned by the Doppler radar into a viewable bitmap format. Then, through image processing, colour, texture, shape and other dimensional features were extracted and saved as a feature database to support the subsequent work. Second, using Rough Set Theory, a hailstone classification machine was built to achieve automatic classification of the hailstone samples.

  18. Uncertainty Modeling for Database Design using Intuitionistic and Rough Set Theory

    DTIC Science & Technology

    2009-01-01

    Definition. An intuitionistic rough relation R is a subset of the cross product P(D1) × P(D2) × · · · × P(Dm) × Dµ × Dv. For a specific relation, R...that aj ∈ dij for all j. The interpretation space is the cross product D1 × D2 × · · · × Dm × Dµ × Dv, but is limited for a given relation R to the set...systems, Journal of Information Science 11 (1985), 77–87. [7] T. Beaubouef and F. Petry, Rough Querying of Crisp Data in Relational Databases, Third

  19. Impact of Surface Roughness and Soil Texture on Mineral Dust Emission Fluxes Modeling

    NASA Technical Reports Server (NTRS)

    Menut, Laurent; Perez, Carlos; Haustein, Karsten; Bessagnet, Bertrand; Prigent, Catherine; Alfaro, Stephane

    2013-01-01

    Dust production models (DPM) used to estimate vertical fluxes of mineral dust aerosols over arid regions need accurate data on soil and surface properties. The Laboratoire Inter-Universitaire des Systemes Atmospheriques (LISA) data set was developed for Northern Africa, the Middle East, and East Asia. This regional data set was built through dedicated field campaigns and include, among others, the aerodynamic roughness length, the smooth roughness length of the erodible fraction of the surface, and the dry (undisturbed) soil size distribution. Recently, satellite-derived roughness length and high-resolution soil texture data sets at the global scale have emerged and provide the opportunity for the use of advanced schemes in global models. This paper analyzes the behavior of the ERS satellite-derived global roughness length and the State Soil Geographic data base-Food and Agriculture Organization of the United Nations (STATSGO-FAO) soil texture data set (based on wet techniques) using an advanced DPM in comparison to the LISA data set over Northern Africa and the Middle East. We explore the sensitivity of the drag partition scheme (a critical component of the DPM) and of the dust vertical fluxes (intensity and spatial patterns) to the roughness length and soil texture data sets. We also compare the use of the drag partition scheme to a widely used preferential source approach in global models. Idealized experiments with prescribed wind speeds show that the ERS and STATSGO-FAO data sets provide realistic spatial patterns of dust emission and friction velocity thresholds in the region. Finally, we evaluate a dust transport model for the period of March to July 2011 with observed aerosol optical depths from Aerosol Robotic Network sites. Results show that ERS and STATSGO-FAO provide realistic simulations in the region.

  20. The Research of Tax Text Categorization based on Rough Set

    NASA Astrophysics Data System (ADS)

    Liu, Bin; Xu, Guang; Xu, Qian; Zhang, Nan

    To address the problem of effective categorization of text data in the taxation system, this paper first analyses the text data and the key issue of size calculation, and then designs a text categorization scheme based on a rough set model.

  1. Automated model optimisation using the Cylc workflow engine (Cyclops v1.0)

    NASA Astrophysics Data System (ADS)

    Gorman, Richard M.; Oliver, Hilary J.

    2018-06-01

    Most geophysical models include many parameters that are not fully determined by theory, and can be tuned to improve the model's agreement with available data. We might attempt to automate this tuning process in an objective way by employing an optimisation algorithm to find the set of parameters that minimises a cost function derived from comparing model outputs with measurements. A number of algorithms are available for solving optimisation problems, in various programming languages, but interfacing such software to a complex geophysical model simulation presents certain challenges. To tackle this problem, we have developed an optimisation suite (Cyclops) based on the Cylc workflow engine that implements a wide selection of optimisation algorithms from the NLopt Python toolbox (Johnson, 2014). The Cyclops optimisation suite can be used to calibrate any modelling system that has itself been implemented as a (separate) Cylc model suite, provided it includes computation and output of the desired scalar cost function. A growing number of institutions are using Cylc to orchestrate complex distributed suites of interdependent cycling tasks within their operational forecast systems, and in such cases application of the optimisation suite is particularly straightforward. As a test case, we applied the Cyclops to calibrate a global implementation of the WAVEWATCH III (v4.18) third-generation spectral wave model, forced by ERA-Interim input fields. This was calibrated over a 1-year period (1997), before applying the calibrated model to a full (1979-2016) wave hindcast. The chosen error metric was the spatial average of the root mean square error of hindcast significant wave height compared with collocated altimeter records. We describe the results of a calibration in which up to 19 parameters were optimised.
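    Outside any Cylc workflow, the core of such a calibration loop is simply an NLopt optimiser driving a scalar cost function. The sketch below uses the NLopt Python bindings named in the abstract, but replaces the expensive wave-model run with a cheap stand-in cost so that only the driver logic is visible; the parameter names, bounds and starting point are invented.

```python
# Minimal, workflow-free sketch of driving an NLopt derivative-free
# optimiser over a model cost function. In Cyclops the cost evaluation is a
# whole Cylc suite run (wave hindcast + RMSE against altimetry); here it is
# replaced by a cheap stand-in function so the driver logic is visible.
import nlopt
import numpy as np

def cost(params, grad):
    # grad is unused by derivative-free algorithms but required by the API.
    # Stand-in for: run the wave model with `params`, return the Hs RMSE.
    return float(np.sum((params - np.array([1.2, 0.38])) ** 2))

opt = nlopt.opt(nlopt.LN_BOBYQA, 2)       # a derivative-free local algorithm
opt.set_lower_bounds([0.5, 0.1])
opt.set_upper_bounds([2.0, 1.0])
opt.set_min_objective(cost)
opt.set_xtol_rel(1e-4)

x0 = np.array([1.0, 0.5])                 # initial parameter guess
x_best = opt.optimize(x0)
print(x_best, opt.last_optimum_value())
```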

  2. Optimal coordinated voltage control in active distribution networks using backtracking search algorithm

    PubMed Central

    Tengku Hashim, Tengku Juhana; Mohamed, Azah

    2017-01-01

    The growing interest in distributed generation (DG) in recent years has led to increasing numbers of generators being connected to distribution systems. The integration of DGs in a distribution system results in a network known as an active distribution network, due to the existence of bidirectional power flow in the system. Voltage rise is one of the most important technical issues to be addressed when DGs exist in an active distribution network. This paper presents the application of the backtracking search algorithm (BSA), a relatively new optimisation technique, to determine the optimal settings of coordinated voltage control in a distribution system. The coordinated voltage control considers power factor, on-load tap-changer and generation curtailment control to manage the voltage rise issue. A multi-objective function is formulated to minimise total losses and voltage deviation in the distribution system. The proposed BSA is compared with particle swarm optimisation (PSO) to evaluate its effectiveness in determining the optimal settings of power factor, tap-changer and the percentage of active power generation to be curtailed. The load flow algorithm from MATPOWER is integrated in the MATLAB environment to solve the multi-objective optimisation problem. Both the BSA and PSO techniques have been tested on a radial 13-bus distribution system, and the results show that the BSA performs better than PSO by providing a better fitness value and convergence rate. PMID:28991919

  3. Multi-objective optimisation of wastewater treatment plant control to reduce greenhouse gas emissions.

    PubMed

    Sweetapple, Christine; Fu, Guangtao; Butler, David

    2014-05-15

    This study investigates the potential of control strategy optimisation for the reduction of operational greenhouse gas emissions from wastewater treatment in a cost-effective manner, and demonstrates that significant improvements can be realised. A multi-objective evolutionary algorithm, NSGA-II, is used to derive sets of Pareto optimal operational and control parameter values for an activated sludge wastewater treatment plant, with objectives including minimisation of greenhouse gas emissions, operational costs and effluent pollutant concentrations, subject to legislative compliance. Different problem formulations are explored, to identify the most effective approach to emissions reduction, and the sets of optimal solutions enable identification of trade-offs between conflicting objectives. It is found that multi-objective optimisation can facilitate a significant reduction in greenhouse gas emissions without the need for plant redesign or modification of the control strategy layout, but there are trade-offs to consider: most importantly, if operational costs are not to be increased, reduction of greenhouse gas emissions is likely to incur an increase in effluent ammonia and total nitrogen concentrations. Design of control strategies for a high effluent quality and low costs alone is likely to result in an inadvertent increase in greenhouse gas emissions, so it is of key importance that effects on emissions are considered in control strategy development and optimisation. Copyright © 2014 Elsevier Ltd. All rights reserved.
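    The central operation in NSGA-II is non-dominated sorting of candidate solutions. The sketch below extracts just the first Pareto front for two minimisation objectives (for example, emissions and cost) from invented objective values; it is not the plant model or the full algorithm used in the study.

```python
# Minimal sketch of the non-dominated (Pareto) filtering at the heart of
# NSGA-II, for two minimisation objectives such as (GHG emissions, cost).
# Objective values are invented; the real study evaluates each candidate
# control parameter set with a full activated-sludge plant simulation.
import numpy as np

def pareto_front(points):
    """Return indices of points not dominated by any other point
    (minimisation in every objective)."""
    keep = []
    for i, p in enumerate(points):
        dominated = any(
            np.all(q <= p) and np.any(q < p)
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

candidates = np.array([
    [120.0, 4.1],   # [emissions, cost] for each control parameter set
    [110.0, 4.6],
    [150.0, 3.8],
    [115.0, 4.0],
    [130.0, 4.5],
])
front = pareto_front(candidates)
print(front, candidates[front])   # the trade-off curve discussed above
```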

  4. Optimal coordinated voltage control in active distribution networks using backtracking search algorithm.

    PubMed

    Tengku Hashim, Tengku Juhana; Mohamed, Azah

    2017-01-01

    The growing interest in distributed generation (DG) in recent years has led to increasing numbers of generators being connected to distribution systems. The integration of DGs in a distribution system results in a network known as an active distribution network, due to the existence of bidirectional power flow in the system. Voltage rise is one of the most important technical issues to be addressed when DGs exist in an active distribution network. This paper presents the application of the backtracking search algorithm (BSA), a relatively new optimisation technique, to determine the optimal settings of coordinated voltage control in a distribution system. The coordinated voltage control considers power factor, on-load tap-changer and generation curtailment control to manage the voltage rise issue. A multi-objective function is formulated to minimise total losses and voltage deviation in the distribution system. The proposed BSA is compared with particle swarm optimisation (PSO) to evaluate its effectiveness in determining the optimal settings of power factor, tap-changer and the percentage of active power generation to be curtailed. The load flow algorithm from MATPOWER is integrated in the MATLAB environment to solve the multi-objective optimisation problem. Both the BSA and PSO techniques have been tested on a radial 13-bus distribution system, and the results show that the BSA performs better than PSO by providing a better fitness value and convergence rate.

  5. Correlation of bond strength with surface roughness using a new roughness measurement technique.

    PubMed

    Winkler, M M; Moore, B K

    1994-07-01

    The correlation between shear bond strength and surface roughness was investigated using new surface measurement methods. Bonding agents and associated resin composites were applied to set amalgam after mechanically roughening its surface. Surface treatments were none (as set against glass), 80-grit, and 600-grit abrasive paper. Surface roughness (Ra), measured both parallel and perpendicular to the direction of the polishing scratches, and true profile length were recorded. A knife-edge was applied (rate = 2.54 mm/min) at the bonding agent/amalgam interface of each sample until failure. Coefficients of determination for mean bond strength versus either roughness (Ra) or profile length were significantly higher for measurements taken parallel to the scratches than for those taken perpendicular to them. The shear bond strength to set amalgam of a PENTA-containing adhesive system (L.D. Caulk Division) was not significantly different from that of a PENTA-free adhesive (3M Dental Products Division), even though PENTA has been reported to increase bond strength to nonprecious metals. The shear bond strength of resin composite to amalgam is correlated with surface roughness when roughness is measured parallel to the polishing scratches. This correlation is significantly lower when surface roughness is measured in the typical manner, perpendicular to the polishing scratches.
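    The two roughness descriptors mentioned above have simple definitions, sketched below for a sampled profile z(x); the instrument, sampling and materials of the study are not reproduced, and the trace is synthetic.

```python
# Sketch of the two roughness descriptors discussed above, computed from a
# sampled surface profile z(x). The measurement details of the study are not
# reproduced; this only shows the definitions.
import numpy as np

def roughness_Ra(z):
    """Arithmetic average roughness: mean absolute deviation from the mean line."""
    z = np.asarray(z, dtype=float)
    return float(np.mean(np.abs(z - z.mean())))

def profile_length(x, z):
    """True (developed) length of the profile, summed segment by segment."""
    dx, dz = np.diff(x), np.diff(z)
    return float(np.sum(np.hypot(dx, dz)))

# Hypothetical trace, e.g. heights in micrometres at 1 um spacing.
x = np.arange(0, 200, 1.0)
z = 0.8 * np.sin(x / 5.0) + 0.1 * np.random.default_rng(0).normal(size=x.size)
print(roughness_Ra(z), profile_length(x, z))
```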

  6. Detailed systematic analysis of recruitment strategies in randomised controlled trials in patients with an unscheduled admission to hospital

    PubMed Central

    Rooshenas, Leila; Fairhurst, Katherine; Rees, Jonathan; Gamble, Carrol; Blazeby, Jane M

    2018-01-01

    Objectives To examine the design and findings of recruitment studies in randomised controlled trials (RCTs) involving patients with an unscheduled hospital admission (UHA), to consider how to optimise recruitment in future RCTs of this nature. Design Studies within the ORRCA database (Online Resource for Recruitment Research in Clinical TriAls; www.orrca.org.uk) that reported on recruitment to RCTs involving UHAs in patients >18 years were included. Extracted data included trial clinical details, and the rationale and main findings of the recruitment study. Results Of 3114 articles populating ORRCA, 39 recruitment studies were eligible, focusing on 68 real and 13 hypothetical host RCTs. Four studies were prospectively planned investigations of recruitment interventions, one of which was a nested RCT. Most recruitment papers were reports of recruitment experiences from one or more ‘real’ RCTs (n=24) or studies using hypothetical RCTs (n=11). Rationales for conducting recruitment studies included limited time for informed consent (IC) and patients being too unwell to provide IC. Methods to optimise recruitment included providing patients with trial information in the prehospital setting, technology to allow recruiters to cover multiple sites, screening logs to uncover recruitment barriers, and verbal rather than written information and consent. Conclusion There is a paucity of high-quality research into recruitment in RCTs involving UHAs with only one nested randomised study evaluating a recruitment intervention. Among the remaining studies, methods to optimise recruitment focused on how to improve information provision in the prehospital setting and use of screening logs. Future research in this setting should focus on the prospective evaluation of the well-developed interventions to optimise recruitment. PMID:29420230

  7. Detailed systematic analysis of recruitment strategies in randomised controlled trials in patients with an unscheduled admission to hospital.

    PubMed

    Rowlands, Ceri; Rooshenas, Leila; Fairhurst, Katherine; Rees, Jonathan; Gamble, Carrol; Blazeby, Jane M

    2018-02-02

    To examine the design and findings of recruitment studies in randomised controlled trials (RCTs) involving patients with an unscheduled hospital admission (UHA), to consider how to optimise recruitment in future RCTs of this nature. Studies within the ORRCA database (Online Resource for Recruitment Research in Clinical TriAls; www.orrca.org.uk) that reported on recruitment to RCTs involving UHAs in patients >18 years were included. Extracted data included trial clinical details, and the rationale and main findings of the recruitment study. Of 3114 articles populating ORRCA, 39 recruitment studies were eligible, focusing on 68 real and 13 hypothetical host RCTs. Four studies were prospectively planned investigations of recruitment interventions, one of which was a nested RCT. Most recruitment papers were reports of recruitment experiences from one or more 'real' RCTs (n=24) or studies using hypothetical RCTs (n=11). Rationales for conducting recruitment studies included limited time for informed consent (IC) and patients being too unwell to provide IC. Methods to optimise recruitment included providing patients with trial information in the prehospital setting, technology to allow recruiters to cover multiple sites, screening logs to uncover recruitment barriers, and verbal rather than written information and consent. There is a paucity of high-quality research into recruitment in RCTs involving UHAs with only one nested randomised study evaluating a recruitment intervention. Among the remaining studies, methods to optimise recruitment focused on how to improve information provision in the prehospital setting and use of screening logs. Future research in this setting should focus on the prospective evaluation of the well-developed interventions to optimise recruitment. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  8. USE OF ROUGH SETS AND SPECTRAL DATA FOR BUILDING PREDICTIVE MODELS OF REACTION RATE CONSTANTS

    EPA Science Inventory

    A model for predicting the log of the rate constants for alkaline hydrolysis of organic esters has been developed with the use of gas-phase min-infrared library spectra and a rule-building software system based on the mathematical theory of rough sets. A diverse set of 41 esters ...

  9. Multi-objective thermodynamic optimisation of supercritical CO2 Brayton cycles integrated with solar central receivers

    NASA Astrophysics Data System (ADS)

    Vasquez Padilla, Ricardo; Soo Too, Yen Chean; Benito, Regano; McNaughton, Robbie; Stein, Wes

    2018-01-01

    In this paper, optimisation of supercritical CO2 Brayton cycles integrated with a solar receiver, which provides the heat input to the cycle, was performed. Four S-CO2 Brayton cycle configurations were analysed and optimum operating conditions were obtained using a multi-objective thermodynamic optimisation. Four different sets, each including two objective parameters, were considered individually. The individual multi-objective optimisations were performed using the Non-dominated Sorting Genetic Algorithm. The effect of reheating, solar receiver pressure drop and cycle parameters on the overall exergy and cycle thermal efficiency was analysed. The results showed that, for all configurations, the overall exergy efficiency of the solarised systems reached its maximum between 700°C and 750°C, and that the optimum value is adversely affected by the solar receiver pressure drop. In addition, the optimum cycle high pressure was in the range of 24.2-25.9 MPa, depending on the configuration and reheat condition.

  10. A rough set approach for determining weights of decision makers in group decision making.

    PubMed

    Yang, Qiang; Du, Ping-An; Wang, Yong; Liang, Bin

    2017-01-01

    This study presents a novel approach for determining the weights of decision makers (DMs) based on rough group decisions in multiple attribute group decision-making (MAGDM) problems. First, we construct a rough group decision matrix from all DMs' decision matrices on the basis of rough set theory. We then derive a positive ideal solution (PIS) based on the average matrix of the rough group decision, and negative ideal solutions (NISs) based on the lower and upper limit matrices of the rough group decision. Next, we obtain the weight of each group member and the priority order of alternatives using the relative closeness method, which depends on the distances from each individual group member's decision to the PIS and NISs. Through comparisons with existing methods and an online business manager selection example, the proposed method is shown to provide more insight into the subjectivity and vagueness of DMs' evaluations and selections.
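    The relative-closeness step can be sketched compactly: each decision maker's matrix is scored by its distances to ideal solutions, and the scores are normalised into weights. The sketch below is a simplification with a single average-based PIS, one NIS and Euclidean distances, not the rough lower/upper-limit construction of the paper; all numbers are invented.

```python
# Simplified sketch of deriving decision-maker weights from relative
# closeness to ideal solutions. The paper builds the ideals from rough
# lower/upper limit matrices; here a single average-based PIS and a single
# NIS with Euclidean distances stand in for that construction.
import numpy as np

decisions = np.array([          # one 3x2 decision matrix per decision maker
    [[0.7, 0.6], [0.5, 0.8], [0.6, 0.4]],
    [[0.9, 0.5], [0.4, 0.7], [0.5, 0.5]],
    [[0.6, 0.7], [0.6, 0.9], [0.7, 0.3]],
])

pis = decisions.mean(axis=0)                    # positive ideal: group average
nis = decisions.min(axis=0)                     # a crude negative ideal

d_pos = np.linalg.norm(decisions - pis, axis=(1, 2))
d_neg = np.linalg.norm(decisions - nis, axis=(1, 2))

closeness = d_neg / (d_pos + d_neg)             # nearer the PIS => larger closeness
weights = closeness / closeness.sum()           # normalise into DM weights
print(weights)
```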

  11. Measuring uncertainty by extracting fuzzy rules using rough sets and extracting fuzzy rules under uncertainty and measuring definability using rough sets

    NASA Technical Reports Server (NTRS)

    Worm, Jeffrey A.; Culas, Donald E.

    1991-01-01

    Computers are not designed to handle terms where uncertainty is present. To deal with uncertainty, techniques other than classical logic must be developed. This paper examines the concepts of statistical analysis, the Dempster-Shafer theory, rough set theory, and fuzzy set theory to solve this problem. The fundamentals of these theories are combined to provide the possible optimal solution. By incorporating principles from these theories, a decision-making process may be simulated by extracting two sets of fuzzy rules: certain rules and possible rules. From these rules a corresponding measure of how much we believe these rules is constructed. From this, the idea of how much a fuzzy diagnosis is definable in terms of its fuzzy attributes is studied.

  12. The influence of surface roughness of deserts on the July circulation - A numerical study

    NASA Technical Reports Server (NTRS)

    Sud, Y. C.; Smith, W. E.

    1985-01-01

    The effect of the low surface roughness characteristics of deserts on atmospheric circulation in July is examined using numerical simulations with the GCM of the Goddard Laboratory for Atmospheric Science (GLAS). Identical sets of simulations were carried out with the model starting from the initial state of the atmosphere on June 15, for the years 1979 and 1980. The first simulation included a surface roughness factor of 45 cm, and the second set had a surface roughness factor of 0.02 cm for desert regions, and 45 cm for all other land. A comparative analysis of the numerical data was carried out in order to study the variations for the desert regions. It is shown that rainfall in the Sahara desert was reduced significantly in the data set with the nonuniform surface roughness factor in comparison with the other data set. The inter-tropical convergence zone (ITCZ) moved southward to about 15 degrees, which was close to its observed location at about 10 degrees N. In other deserts, the North American Great Plains, Rajputana in India, and the Central Asian desert, no similar changes were observed. Detailed contour maps of the weather conditions in the different desert regions are provided.

  13. Priority of road maintenance management based on halda reading range on NAASRA method

    NASA Astrophysics Data System (ADS)

    Surbakti, M.; Doan, A.

    2018-02-01

    Road pavement constantly experiences stress and strain due to the traffic load passing over it, which can damage the pavement. Early detection and repair of this damage can therefore prevent more severe deterioration that could develop into pavement failure. A road condition survey is one of the earliest means of detecting initial pavement damage. In this context, driving comfort is the most important factor for the driver in assessing road conditions, and it is affected by the level of road surface roughness. One of the methods developed to determine road roughness is measurement using the NAASRA method, in which the roughness of the road is an accumulation of its average unevenness, with the halda generally set to a 100 m reading interval. However, with this 100-metre setting, in some places the final roughness value is too large or too small, which affects the prioritisation of road maintenance. This motivates the present study, which compares halda settings of 50 m and 200 m against the general setting above. The study uses the International Roughness Index (IRI) to characterise road condition in terms of driving discomfort, with IRI scores obtained from direct field surveys using the NAASRA Roughometer. The results show a significant difference between readings with the halda set at 100 m and those with the halda set at 50 m and 200 m. This may lead to differences in handling priorities, which in turn may affect the sustainability of road network maintenance management (Sustainable Road Management).

  14. Dietary changes needed to reach nutritional adequacy without increasing diet cost according to income: An analysis among French adults

    PubMed Central

    Maillot, Matthieu; Vieux, Florent; Delaere, Fabien; Lluch, Anne; Darmon, Nicole

    2017-01-01

    Objective To explore the dietary changes needed to achieve nutritional adequacy across income levels at constant energy and diet cost. Materials and methods Individual diet modelling was used to design iso-caloric, nutritionally adequate optimised diets for each observed diet in a sample of adult normo-reporters aged ≥20 years (n = 1,719) from the Individual and National Dietary Survey (INCA2), 2006–2007. Diet cost was estimated from mean national food prices (2006–2007). A first set of free-cost models explored the impact of optimisation on the variation of diet cost. A second set of iso-cost models explored the dietary changes induced by the optimisation with cost set equal to the observed one. Analyses of dietary changes were conducted by income quintiles, adjusting for energy intake, sociodemographic and socioeconomic variables, and smoking status. Results The cost of observed diets increased with increasing income quintiles. In free-cost models, the optimisation increased diet cost on average (+0.22 ± 1.03 euros/d) and within each income quintile, with no significant difference between quintiles, but with systematic increases for observed costs lower than 3.85 euros/d. In iso-cost models, it was possible to design nutritionally adequate diets whatever the initial observed cost. On average, the optimisation at iso-cost increased fruits and vegetables (+171 g/day), starchy foods (+121 g/d), water and beverages (+91 g/d), and dairy products (+20 g/d), and decreased the other food groups (e.g. mixed dishes and salted snacks), leading to increased total diet weight (+300 g/d). Those changes were mostly similar across income quintiles, but lower-income individuals needed to introduce significantly more fruit and vegetables than higher-income ones. Conclusions In France, the dietary changes needed to reach nutritional adequacy without increasing cost are similar regardless of income, but may be more difficult to implement when the budget for food is lower than 3.85 euros/d. PMID:28358837

  15. SASS Applied to Optimum Work Roll Profile Selection in the Hot Rolling of Wide Steel

    NASA Astrophysics Data System (ADS)

    Nolle, Lars

    The quality of steel strip produced in a wide strip rolling mill depends heavily on the careful selection of initial ground work roll profiles for each of the mill stands in the finishing train. In the past, these profiles were determined by human experts, based on their knowledge and experience. In previous work, the profiles were successfully optimised using a self-organising migration algorithm (SOMA). In this research, SASS, a novel heuristic optimisation algorithm that has only one control parameter, has been used to find the optimum profiles for a simulated rolling mill. The resulting strip quality produced using the profiles found by SASS is compared with results from previous work and the quality produced using the original profile specifications. The best set of profiles found by SASS clearly outperformed the original set and performed as well as SOMA, without the need to find a suitable set of control parameters.

  16. Towards optimal experimental tests on the reality of the quantum state

    NASA Astrophysics Data System (ADS)

    Knee, George C.

    2017-02-01

    The Barrett-Cavalcanti-Lal-Maroney (BCLM) argument stands as the most effective means of demonstrating the reality of the quantum state. Its advantages include being derived from very few assumptions, and a robustness to experimental error. Finding the best way to implement the argument experimentally is an open problem, however, and involves cleverly choosing sets of states and measurements. I show that techniques from convex optimisation theory can be leveraged to numerically search for these sets, which then form a recipe for experiments that allow for the strongest statements about the ontology of the wavefunction to be made. The optimisation approach presented is versatile, efficient and can take account of the finite errors present in any real experiment. I find significantly improved low-cardinality sets which are guaranteed partially optimal for a BCLM test in low Hilbert space dimension. I further show that mixed states can be more optimal than pure states.

  17. Midbond basis functions for weakly bound complexes

    NASA Astrophysics Data System (ADS)

    Shaw, Robert A.; Hill, J. Grant

    2018-06-01

    Weakly bound systems present a difficult problem for conventional atom-centred basis sets due to large separations, necessitating the use of large, computationally expensive bases. This can be remedied by placing a small number of functions in the region between molecules in the complex. We present compact sets of optimised midbond functions for a range of complexes involving noble gases, alkali metals and small molecules for use in high accuracy coupled-cluster calculations, along with a more robust procedure for their optimisation. It is shown that excellent results are possible with double-zeta quality orbital basis sets when a few midbond functions are added, improving both the interaction energy and the equilibrium bond lengths of a series of noble gas dimers by 47% and 8%, respectively. When used in conjunction with explicitly correlated methods, near complete basis set limit accuracy is readily achievable at a fraction of the cost that using a large basis would entail. General purpose auxiliary sets are developed to allow explicitly correlated midbond function studies to be carried out, making it feasible to perform very high accuracy calculations on weakly bound complexes.

  18. Gene selection for tumor classification using neighborhood rough sets and entropy measures.

    PubMed

    Chen, Yumin; Zhang, Zunjun; Zheng, Jianzhong; Ma, Ying; Xue, Yu

    2017-03-01

    With the development of bioinformatics, tumor classification from gene expression data has become an important and useful technology for cancer diagnosis. Since gene expression data often contain thousands of genes but only a small number of samples, gene selection becomes a key step in tumor classification. Attribute reduction from rough set theory has been successfully applied to gene selection, as it is data driven and requires no additional information. However, the traditional rough set method deals with discrete data only. Gene expression data containing real-valued or noisy values are usually subjected to discretisation preprocessing, which may result in poor classification accuracy. In this paper, we propose a novel gene selection method based on the neighborhood rough set model, which can handle real-valued data while maintaining the original gene classification information. Moreover, this paper introduces an entropy measure within the framework of neighborhood rough sets for tackling the uncertainty and noise of gene expression data. Using this measure enables the discovery of compact gene subsets. Finally, a gene selection algorithm is designed based on neighborhood granules and the entropy measure. Experiments on two gene expression data sets show that the proposed gene selection method is effective at improving the accuracy of tumor classification. Copyright © 2017 Elsevier Inc. All rights reserved.
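    The neighborhood rough set idea for real-valued data can be sketched as a dependency measure: a sample lies in the positive region if all samples within a radius delta share its label. The sketch below shows only this basic dependency, not the paper's entropy measure or gene-selection algorithm, and the expression values are invented.

```python
# Sketch of a neighbourhood rough set dependency measure for real-valued
# features: a sample is in the lower approximation of its class if every
# sample within radius delta shares its label. This is only the basic
# dependency idea; the paper couples it with an entropy measure and a
# greedy gene-selection algorithm, which are not reproduced here.
import numpy as np

def neighborhood_dependency(X, y, delta=0.3):
    X = np.asarray(X, dtype=float)
    consistent = 0
    for i in range(len(X)):
        dist = np.linalg.norm(X - X[i], axis=1)
        neighbours = dist <= delta            # includes the sample itself
        if np.all(y[neighbours] == y[i]):
            consistent += 1
    return consistent / len(X)                # fraction in the positive region

# Hypothetical expression values for two genes and binary tumour labels.
X = np.array([[0.10, 0.20], [0.15, 0.25], [0.80, 0.90], [0.85, 0.95], [0.50, 0.55]])
y = np.array([0, 0, 1, 1, 0])
print(neighborhood_dependency(X, y, delta=0.3))
```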

  19. Standard Sizes for Rough-Dimension Exports to Europe and Japan

    Treesearch

    Philip A. Araman

    1987-01-01

    In this article, European and Japanese standard-sized rough dimension products are described, and their apparent sizes are listed. One set of proposed standard sizes of rough dimension that could be manufactured in the United States for these markets is presented. Also, the benefits of the production and sale of standard sizes of export rough dimension are highlighted...

  20. Optimisation of Fabric Reinforced Polymer Composites Using a Variant of Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Axinte, Andrei; Taranu, Nicolae; Bejan, Liliana; Hudisteanu, Iuliana

    2017-12-01

    Fabric reinforced polymeric composites are high-performance materials with a rather complex fabric geometry. Modelling this type of material is therefore a cumbersome task, especially when efficient use is targeted. One of the most important issues in the design process is the optimisation of the individual laminae and of the laminated structure as a whole. To this end, a parametric model of the material has been defined, emphasising the many geometric variables that need to be correlated in the complex process of optimisation. The input parameters involved in this work include the widths and heights of the tows and the laminate stacking sequence, which are discrete variables, while the gaps between adjacent tows and the height of the neat matrix are continuous variables. This work is one of the first attempts to use a Genetic Algorithm (GA) to optimise the geometrical parameters of satin-reinforced multi-layer composites. Given the mixed type of the input parameters involved, an original software package called SOMGA (Satin Optimisation with a Modified Genetic Algorithm) has been conceived and utilised in this work. The main goal is to find the best possible solution to the problem of designing a composite material that is able to withstand a given set of external, in-plane loads. The optimisation is performed using a fitness function that can analyse and compare the mechanical behaviour of different fabric reinforced composites, with the results correlated with the ultimate strains, which demonstrate the efficiency of the composite structure.

  1. Spectral Analysis and Experimental Modeling of Ice Accretion Roughness

    NASA Technical Reports Server (NTRS)

    Orr, D. J.; Breuer, K. S.; Torres, B. E.; Hansman, R. J., Jr.

    1996-01-01

    A self-consistent scheme for relating wind tunnel ice accretion roughness to the resulting enhancement of heat transfer is described. First, a spectral technique of quantitative analysis of early ice roughness images is reviewed. The image processing scheme uses a spectral estimation technique (SET) which extracts physically descriptive parameters by comparing scan lines from the experimentally-obtained accretion images to a prescribed test function. Analysis using this technique for both streamwise and spanwise directions of data from the NASA Lewis Icing Research Tunnel (IRT) are presented. An experimental technique is then presented for constructing physical roughness models suitable for wind tunnel testing that match the SET parameters extracted from the IRT images. The icing castings and modeled roughness are tested for enhancement of boundary layer heat transfer using infrared techniques in a "dry" wind tunnel.

  2. Optimised analytical models of the dielectric properties of biological tissue.

    PubMed

    Salahuddin, Saqib; Porter, Emily; Krewer, Finn; O' Halloran, Martin

    2017-05-01

    The interaction of electromagnetic fields with the human body is quantified by the dielectric properties of biological tissues. These properties are incorporated into complex numerical simulations using parametric models such as Debye and Cole-Cole, for the computational investigation of electromagnetic wave propagation within the body. These parameters can be acquired through a variety of optimisation algorithms to achieve an accurate fit to measured data sets. A number of different optimisation techniques have been proposed, but these are often limited by the requirement for initial value estimations or by the large overall error (often up to several percentage points). In this work, a novel two-stage genetic algorithm proposed by the authors is applied to optimise the multi-pole Debye parameters for 54 types of human tissues. The performance of the two-stage genetic algorithm has been examined through a comparison with five other existing algorithms. The experimental results demonstrate that the two-stage genetic algorithm produces an accurate fit to a range of experimental data and efficiently out-performs all other optimisation algorithms under consideration. Accurate values of the three-pole Debye models for 54 types of human tissues, over 500 MHz to 20 GHz, are also presented for reference. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
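
    The two-stage genetic algorithm itself is not reproduced here. As a hedged sketch of the fitting task it addresses, the fragment below fits a three-pole Debye model to a synthetic complex-permittivity data set using SciPy's generic differential-evolution global optimiser; the frequency span mirrors the abstract, but the 'measured' data and the parameter bounds are invented for illustration.

```python
import numpy as np
from scipy.optimize import differential_evolution


def debye_3pole(freq_hz, eps_inf, d1, t1, d2, t2, d3, t3):
    """Complex relative permittivity of a three-pole Debye model."""
    w = 2.0 * np.pi * freq_hz
    return (eps_inf
            + d1 / (1.0 + 1j * w * t1)
            + d2 / (1.0 + 1j * w * t2)
            + d3 / (1.0 + 1j * w * t3))


# Synthetic 'measurement' standing in for a tissue data set over 500 MHz - 20 GHz.
rng = np.random.default_rng(0)
freqs = np.logspace(np.log10(0.5e9), np.log10(20e9), 80)
true_params = (4.0, 35.0, 8e-12, 10.0, 80e-12, 5.0, 1e-9)
measured = debye_3pole(freqs, *true_params) \
    + 0.2 * (rng.standard_normal(80) + 1j * rng.standard_normal(80))


def misfit(p):
    # Least-squares error on the complex permittivity (real and imaginary parts together).
    return float(np.mean(np.abs(debye_3pole(freqs, *p) - measured) ** 2))


bounds = [(1, 10),                        # eps_inf
          (1, 60), (1e-12, 50e-12),       # pole 1: amplitude, relaxation time
          (1, 60), (10e-12, 500e-12),     # pole 2
          (1, 60), (0.1e-9, 10e-9)]       # pole 3

result = differential_evolution(misfit, bounds, maxiter=300, seed=0, polish=True)
print("fitted parameters:", result.x)
print("residual:", result.fun)
```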

  3. Advantages of Task-Specific Multi-Objective Optimisation in Evolutionary Robotics.

    PubMed

    Trianni, Vito; López-Ibáñez, Manuel

    2015-01-01

    The application of multi-objective optimisation to evolutionary robotics is receiving increasing attention. A survey of the literature reveals the different possibilities it offers to improve the automatic design of efficient and adaptive robotic systems, and points to the successful demonstrations available for both task-specific and task-agnostic approaches (i.e., with or without reference to the specific design problem to be tackled). However, the advantages of multi-objective approaches over single-objective ones have not been clearly spelled out and experimentally demonstrated. This paper fills this gap for task-specific approaches: starting from well-known results in multi-objective optimisation, we discuss how to tackle commonly recognised problems in evolutionary robotics. In particular, we show that multi-objective optimisation (i) allows evolving a more varied set of behaviours by exploring multiple trade-offs of the objectives to optimise, (ii) supports the evolution of the desired behaviour through the introduction of objectives as proxies, (iii) avoids the premature convergence to local optima possibly introduced by multi-component fitness functions, and (iv) solves the bootstrap problem exploiting ancillary objectives to guide evolution in the early phases. We present an experimental demonstration of these benefits in three different case studies: maze navigation in a single robot domain, flocking in a swarm robotics context, and a strictly collaborative task in collective robotics.

  4. The Tungsten Inert GAS (TIG) Process of Welding Aluminium in Microgravity: Technical and Economic Considerations

    NASA Astrophysics Data System (ADS)

    Ferretti, S.; Amadori, K.; Boccalatte, A.; Alessandrini, M.; Freddi, A.; Persiani, F.; Poli, G.

    2002-01-01

    The UNIBO team, composed of students and professors of the University of Bologna along with technicians and engineers from Alenia Space Division and Siad Italargon Division, took part in the 3rd Student Parabolic Flight Campaign of the European Space Agency in 2000. It won the student competition and went on to take part in the Professional Parabolic Flight Campaign of May 2001. The experiment focused on "dendritic growth in aluminium alloy weldings", and investigated topics related to the welding process of aluminium in microgravity. The purpose of the research is to optimise the process and to define the areas of interest that could be improved by new conceptual designs. The team performed accurate tests in microgravity to determine which phenomena have the greatest impact on the quality of the welds with respect to penetration, surface roughness and the microstructures that are formed during solidification. Various parameters were considered in the economic-technical optimisation, such as the type of electrode and its tip angle. Ground and space tests have determined the optimum chemical composition of the electrodes to offer the longest life while maintaining the shape of the point. Additionally, the power consumption has been optimised; this offers opportunities for promoting the product to the customer as well as being environmentally friendly. Tests performed on the Al-Li alloys showed a significant influence of some physical phenomena such as the Marangoni effect and thermal diffusion; predictions have been made on the basis of observations of the thermal flux seen in the stereophotos. Space transportation today is a key element in the construction of space stations and future planetary bases, because the volumes available for launch to space are directly related to the payload capacity of rockets or the Space Shuttle. The research performed gives engineers the opportunity to consider completely new concepts for designing structures for space applications. In fact, once the optimised parameters are defined for welding in space, it could be possible to weld different parts directly in orbit to obtain much larger sizes and volumes, for example for space tourism habitation modules. The second relevant aspect is technology transfer obtained by the optimisation of the TIG process on aluminium, which is often used in the automotive industry as well as in mass production markets.

  5. A rough set approach for determining weights of decision makers in group decision making

    PubMed Central

    Yang, Qiang; Du, Ping-an; Wang, Yong; Liang, Bin

    2017-01-01

    This study presents a novel approach for determining the weights of decision makers (DMs) based on rough group decisions in multiple attribute group decision-making (MAGDM) problems. First, we construct a rough group decision matrix from all DMs' decision matrices on the basis of rough set theory. We then derive a positive ideal solution (PIS) founded on the average matrix of the rough group decision, and negative ideal solutions (NISs) founded on the lower and upper limit matrices of the rough group decision. The weight of each group member and the priority order of the alternatives are then obtained using the relative closeness method, which depends on the distances from each individual group member's decision to the PIS and the NISs. Through comparisons with existing methods and an online business manager selection example, the proposed method is shown to provide more insight into the subjectivity and vagueness of DMs' evaluations and selections. PMID:28234974
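
    The abstract only outlines the rough-boundary construction; the fragment below is a TOPSIS-style simplification, under invented toy data, of the relative-closeness step: each decision maker's matrix is compared against the group-average matrix (the PIS) and the lower/upper limit matrices (the NISs), and the closeness scores are normalised into decision-maker weights. The paper's exact distance and closeness formulas may differ.

```python
import numpy as np

# Hypothetical decision matrices: 3 decision makers, 4 alternatives x 3 criteria.
dm = np.array([
    [[7, 6, 8], [5, 7, 6], [8, 5, 7], [6, 6, 6]],
    [[8, 5, 7], [6, 6, 6], [7, 6, 8], [5, 7, 5]],
    [[6, 7, 9], [5, 6, 7], [8, 6, 6], [6, 5, 6]],
], dtype=float)

pis = dm.mean(axis=0)        # positive ideal: element-wise average of the group decision
nis_low = dm.min(axis=0)     # negative ideals: lower and upper limit matrices
nis_up = dm.max(axis=0)


def dist(a, b):
    return float(np.linalg.norm(a - b))


closeness = []
for k in range(dm.shape[0]):
    d_plus = dist(dm[k], pis)                              # distance to the PIS
    d_minus = dist(dm[k], nis_low) + dist(dm[k], nis_up)   # distance to the NISs
    closeness.append(d_minus / (d_plus + d_minus))         # larger = closer to consensus

weights = np.array(closeness) / sum(closeness)
print("decision-maker weights:", np.round(weights, 3))
```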

  6. Heat Transfer Measurements on Surfaces with Natural Ice Castings and Modeled Roughness

    NASA Technical Reports Server (NTRS)

    Breuer, Kenneth S.; Torres, Benjamin E.; Orr, D. J.; Hansman, R. John

    1997-01-01

    An experimental method is described to measure and compare the convective heat transfer coefficient of natural and simulated ice accretion roughness and to provide a rational means for determining accretion-related enhanced heat transfer coefficients. The natural ice accretion roughness was a sample casting made from accretions at the NASA Lewis Icing Research Tunnel (IRT). One of these castings was modeled using a Spectral Estimation Technique (SET) to produce three roughness element patterns that simulate the actual accretion. All four samples were tested in a flat-plate boundary layer at an angle of attack in a "dry" wind tunnel test. The convective heat transfer coefficient was measured using infrared thermography. It is shown that, despite some problems in the current data set, the method shows considerable promise in determining roughness-induced heat transfer coefficients, and that, in addition to the roughness height and spacing in the flow direction, the concentration and spacing of elements in the spanwise direction are important parameters.

  7. A computational intelligent approach to multi-factor analysis of violent crime information system

    NASA Astrophysics Data System (ADS)

    Liu, Hongbo; Yang, Chao; Zhang, Meng; McLoone, Seán; Sun, Yeqing

    2017-02-01

    Various scientific studies have explored the causes of violent behaviour from different perspectives, with psychological tests, in particular, applied to the analysis of crime factors. The relationship between bi-factors has also been extensively studied including the link between age and crime. In reality, many factors interact to contribute to criminal behaviour and as such there is a need to have a greater level of insight into its complex nature. In this article we analyse violent crime information systems containing data on psychological, environmental and genetic factors. Our approach combines elements of rough set theory with fuzzy logic and particle swarm optimisation to yield an algorithm and methodology that can effectively extract multi-knowledge from information systems. The experimental results show that our approach outperforms alternative genetic algorithm and dynamic reduct-based techniques for reduct identification and has the added advantage of identifying multiple reducts and hence multi-knowledge (rules). Identified rules are consistent with classical statistical analysis of violent crime data and also reveal new insights into the interaction between several factors. As such, the results are helpful in improving our understanding of the factors contributing to violent crime and in highlighting the existence of hidden and intangible relationships between crime factors.
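
    The full hybrid of rough sets, fuzzy logic and particle swarm optimisation is beyond a short sketch, but the rough-set quantity that such reduct searches typically maximise, the dependency degree of the decision on a condition-attribute subset, can be illustrated on a toy information system (all attribute names and values below are invented):

```python
from collections import defaultdict

# Toy information system: each row maps condition attributes to a decision.
rows = [
    {"age": "young", "stress": "high", "env": "urban", "violent": "yes"},
    {"age": "young", "stress": "low",  "env": "urban", "violent": "no"},
    {"age": "old",   "stress": "high", "env": "rural", "violent": "no"},
    {"age": "young", "stress": "high", "env": "rural", "violent": "yes"},
    {"age": "old",   "stress": "low",  "env": "urban", "violent": "no"},
]


def dependency(attrs, decision="violent"):
    """Fraction of objects in the positive region of the decision w.r.t. attrs."""
    blocks = defaultdict(list)
    for i, r in enumerate(rows):
        blocks[tuple(r[a] for a in attrs)].append(i)   # indiscernibility classes
    positive = 0
    for members in blocks.values():
        if len({rows[i][decision] for i in members}) == 1:   # consistent class
            positive += len(members)
    return positive / len(rows)


print(dependency(["age", "stress"]))   # 1.0 -> candidate reduct
print(dependency(["age"]))             # 0.4 -> dropping 'stress' loses information
```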

  8. Distributed learning and multi-objectivity in traffic light control

    NASA Astrophysics Data System (ADS)

    Brys, Tim; Pham, Tong T.; Taylor, Matthew E.

    2014-01-01

    Traffic jams and suboptimal traffic flows are ubiquitous in modern societies, and they create enormous economic losses each year. Delays at traffic lights alone account for roughly 10% of all delays in US traffic. As most traffic light scheduling systems currently in use are static, set up by human experts rather than being adaptive, the interest in machine learning approaches to this problem has increased in recent years. Reinforcement learning (RL) approaches are often used in these studies, as they require little pre-existing knowledge about traffic flows. Distributed constraint optimisation approaches (DCOP) have also been shown to be successful, but are limited to cases where the traffic flows are known. The distributed coordination of exploration and exploitation (DCEE) framework was recently proposed to introduce learning in the DCOP framework. In this paper, we present a study of DCEE and RL techniques in a complex simulator, illustrating the particular advantages of each, comparing them against standard isolated traffic actuated signals. We analyse how learning and coordination behave under different traffic conditions, and discuss the multi-objective nature of the problem. Finally we evaluate several alternative reward signals in the best performing approach, some of these taking advantage of the correlation between the problem-inherent objectives to improve performance.

  9. Prediction of road traffic death rate using neural networks optimised by genetic algorithm.

    PubMed

    Jafari, Seyed Ali; Jahandideh, Sepideh; Jahandideh, Mina; Asadabadi, Ebrahim Barzegari

    2015-01-01

    Road traffic injuries (RTIs) are recognised as a major public health problem at global, regional and national levels. Prediction of the road traffic death rate is therefore helpful in its management. On this basis, we used an artificial neural network model optimised through a genetic algorithm to predict mortality. A five-fold cross-validation procedure on a data set covering a total of 178 countries was used to verify the performance of the models, and the best-fit model was selected according to the root mean square error (RMSE). The genetic algorithm, which had not previously been applied to mortality prediction to this extent, showed high performance: the lowest RMSE obtained was 0.0808. These satisfactory results can be attributed to the use of the genetic algorithm as a powerful optimiser that selects the best set of input features to feed into the neural networks. Seven factors were identified with high accuracy as having the strongest effect on the road traffic mortality rate. The results indicate that the model is promising and may play a useful role in developing better methods for assessing the influence of road traffic mortality risk factors.
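
    As a hedged sketch of the model-evaluation step implied above (not the authors' pipeline or data), the fragment below computes a five-fold cross-validated RMSE for a candidate input-feature subset feeding a small neural network; a genetic algorithm would then minimise this score over binary feature masks. The data are synthetic stand-ins for the 178-country data set.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)

# Synthetic stand-in data: 178 'countries', 12 candidate risk factors, mortality target.
X = rng.normal(size=(178, 12))
y = X[:, [0, 2, 5]] @ np.array([0.5, -0.3, 0.2]) + rng.normal(0, 0.1, 178)


def cv_rmse(feature_mask):
    """Five-fold cross-validated RMSE for a candidate feature subset (the GA's fitness)."""
    cols = np.flatnonzero(feature_mask)
    if cols.size == 0:
        return np.inf
    errs = []
    for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
        model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
        model.fit(X[train][:, cols], y[train])
        pred = model.predict(X[test][:, cols])
        errs.append(np.sqrt(np.mean((pred - y[test]) ** 2)))
    return float(np.mean(errs))


# A GA would minimise cv_rmse over binary feature masks; two masks are scored here.
print(cv_rmse(np.array([1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0])))   # informative subset
print(cv_rmse(np.ones(12, dtype=int)))                            # all features
```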

  10. Markovian queue optimisation analysis with an unreliable server subject to working breakdowns and impatient customers

    NASA Astrophysics Data System (ADS)

    Liou, Cheng-Dar

    2015-09-01

    This study investigates an infinite-capacity Markovian queue with a single unreliable service station, in which customers may balk (not enter) and renege (leave the queue after entering). The unreliable service station can undergo working breakdowns even if no customers are in the system. The matrix-analytic method is used to compute the steady-state probabilities for the number of customers, the rate matrix and the stability condition of the system. A single-objective model for cost and a bi-objective model for cost and expected waiting time are derived for the system to fit practical applications. The particle swarm optimisation algorithm is implemented to find the optimal combinations of parameters that minimise the cost. Two different approaches to identifying the Pareto optimal set are used and compared: the epsilon-constraint method and the non-dominated sorting genetic algorithm. The comparison supports the use of the traditional epsilon-constraint method, which is computationally faster and permits a direct sensitivity analysis of the solution under constraint or parameter perturbation. The Pareto front and the set of non-dominated solutions are obtained and illustrated. Decision makers can use these to improve the quality of their decision making.
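
    The queueing cost model is not reproduced here; the sketch below only illustrates the epsilon-constraint mechanics on an invented bi-objective toy problem: a 'waiting time' stand-in is bounded by a swept epsilon while a 'cost' stand-in is minimised, tracing an approximation of the Pareto front.

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in objectives over a single service-rate-like variable mu in [1, 10]:
# f1 mimics an operating cost, f2 mimics an expected waiting time.  Both are
# invented for illustration and are not the paper's cost model.
f1 = lambda mu: 2.0 * mu + 15.0 / mu
f2 = lambda mu: 1.0 / (mu - 0.9)

pareto = []
for eps in np.linspace(0.15, 1.0, 8):          # sweep the bound on the second objective
    res = minimize(
        lambda x: f1(x[0]),
        x0=[5.0],
        bounds=[(1.0, 10.0)],
        constraints=[{"type": "ineq", "fun": lambda x, e=eps: e - f2(x[0])}],
        method="SLSQP",
    )
    if res.success:
        mu = res.x[0]
        pareto.append((round(float(f1(mu)), 3), round(float(f2(mu)), 3)))

print("epsilon-constraint approximation of the Pareto front:", pareto)
```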

  11. Joint optimisation of arbitrage profits and battery life degradation for grid storage application of battery electric vehicles

    NASA Astrophysics Data System (ADS)

    Kies, Alexander

    2018-02-01

    To meet European decarbonisation targets by 2050, the electrification of the transport sector is mandatory. Most electric vehicles rely on lithium-ion batteries, because they have a higher energy/power density and longer life span than other practical batteries such as zinc-carbon batteries. Electric vehicles can thus provide energy storage to support the system integration of generation from highly variable renewable sources, such as wind and photovoltaics (PV). However, charging and discharging cause batteries to degrade progressively, reducing their capacity. In this study, we investigate the impact of the joint optimisation of arbitrage revenue and battery degradation for electric vehicle batteries in a simplified setting, where historical prices allow for market participation by battery electric vehicle owners. It is shown that the joint optimisation of both leads to stronger gains than the sum of both optimisation strategies, and that including battery degradation in the model avoids states of charge close to the maximum at times. It can be concluded that degradation is an important aspect to consider in power system models that incorporate any kind of lithium-ion battery storage.

  12. 3D printed fluidics with embedded analytic functionality for automated reaction optimisation

    PubMed Central

    Capel, Andrew J; Wright, Andrew; Harding, Matthew J; Weaver, George W; Li, Yuqi; Harris, Russell A; Edmondson, Steve; Goodridge, Ruth D

    2017-01-01

    Additive manufacturing or ‘3D printing’ is being developed as a novel manufacturing process for the production of bespoke micro- and milliscale fluidic devices. When coupled with online monitoring and optimisation software, this offers an advanced, customised method for performing automated chemical synthesis. This paper reports the use of two additive manufacturing processes, stereolithography and selective laser melting, to create multifunctional fluidic devices with embedded reaction monitoring capability. The selectively laser melted parts are the first published examples of multifunctional 3D printed metal fluidic devices. These devices allow high temperature and pressure chemistry to be performed in solvent systems destructive to the majority of devices manufactured via stereolithography, polymer jetting and fused deposition modelling processes previously utilised for this application. These devices were integrated with commercially available flow chemistry, chromatographic and spectroscopic analysis equipment, allowing automated online and inline optimisation of the reaction medium. This set-up allowed the optimisation of two reactions, a ketone functional group interconversion and a fused polycyclic heterocycle formation, via spectroscopic and chromatographic analysis. PMID:28228852

  13. Characterisation and optimisation of flexible transfer lines for liquid helium. Part I: Experimental results

    NASA Astrophysics Data System (ADS)

    Dittmar, N.; Haberstroh, Ch.; Hesse, U.; Krzyzowski, M.

    2016-04-01

    The transfer of liquid helium (LHe) into mobile dewars or transport vessels is a common and unavoidable process at LHe decant stations. During this transfer, appreciable amounts of LHe evaporate due to heat leak and pressure drop. The helium gas generated in this way needs to be collected and reliquefied, which requires a large amount of electrical energy. Therefore, the design of transfer lines used at LHe decant stations has been optimised to establish LHe transfer with minor evaporation losses, which increases the overall efficiency and capacity of LHe decant stations. This paper presents the experimental results achieved during the thermohydraulic optimisation of a flexible LHe transfer line. An extensive measurement campaign with a set of dedicated transfer lines equipped with pressure and temperature sensors led to unique experimental data on this specific transfer process. The experimental results cover the heat leak, the pressure drop, the transfer rate, the outlet quality, and the cool-down and warm-up behaviour of the examined transfer lines. Based on the obtained results, the design of the considered flexible transfer line has been optimised, featuring reduced heat leak and pressure drop.

  14. Design of a prototype flow microreactor for synthetic biology in vitro.

    PubMed

    Boehm, Christian R; Freemont, Paul S; Ces, Oscar

    2013-09-07

    As a reference platform for in vitro synthetic biology, we have developed a prototype flow microreactor for enzymatic biosynthesis. We report the design, implementation, and computer-aided optimisation of a three-step model pathway within a microfluidic reactor. A packed bed format was shown to be optimal for enzyme compartmentalisation after experimental evaluation of several approaches. The specific substrate conversion efficiency could be significantly improved by an optimised parameter set obtained through computational modelling. Our microreactor design provides a platform to explore new in vitro synthetic biology solutions for industrial biosynthesis.

  15. Performance review of the ROMI-RIP rough mill simulator

    Treesearch

    Edward Thomas; Urs Buehlmann

    2003-01-01

    The USDA Forest Service's ROMI-RIP version 2.0 (RR2) rough mill rip-first simulation program was validated in a recent study. The validation study found that when RR2 was set to search for optimum yield without considering actual rough mill strip solutions, it produced yields that were as much as 7 percent higher (71.1% versus 64.0%) than the actual rough mill....

  16. Automatic optimisation of gamma dose rate sensor networks: The DETECT Optimisation Tool

    NASA Astrophysics Data System (ADS)

    Helle, K. B.; Müller, T. O.; Astrup, P.; Dyve, J. E.

    2014-05-01

    Fast delivery of comprehensive information on the radiological situation is essential for decision-making in nuclear emergencies. Most national radiological agencies in Europe employ gamma dose rate sensor networks to monitor radioactive pollution of the atmosphere. Sensor locations were often chosen using regular grids or according to administrative constraints. Nowadays, however, the choice can be based on more realistic risk assessment, as it is possible to simulate potential radioactive plumes. To support sensor planning, we developed the DETECT Optimisation Tool (DOT) within the scope of the EU FP 7 project DETECT. It evaluates the gamma dose rates that a proposed set of sensors might measure in an emergency and uses this information to optimise the sensor locations. The gamma dose rates are taken from a comprehensive library of simulations of atmospheric radioactive plumes from 64 source locations. These simulations cover the whole European Union, so the DOT allows evaluation and optimisation of sensor networks for all EU countries, as well as evaluation of fencing sensors around possible sources. Users can choose from seven cost functions to evaluate the capability of a given monitoring network for early detection of radioactive plumes or for the creation of dose maps. The DOT is implemented as a stand-alone easy-to-use JAVA-based application with a graphical user interface and an R backend. Users can run evaluations and optimisations, and display, store and download the results. The DOT runs on a server and can be accessed via common web browsers; it can also be installed locally.
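
    The DOT's plume library and its seven cost functions are not detailed in the abstract; purely as an illustration of the evaluate-and-optimise idea, the sketch below greedily places sensors so as to maximise the number of randomly generated stand-in plumes detected above a dose-rate threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in plume library: dose rates of 64 simulated plumes at 200 candidate
# sensor locations (random values; the real tool uses dispersion simulations).
n_plumes, n_sites = 64, 200
dose = rng.lognormal(mean=-2.0, sigma=1.5, size=(n_plumes, n_sites))
threshold = 0.5            # detection threshold in arbitrary dose-rate units


def detected(site_set):
    """Number of plumes seen above threshold by at least one chosen sensor."""
    if not site_set:
        return 0
    return int(np.sum(dose[:, sorted(site_set)].max(axis=1) >= threshold))


chosen = set()
for _ in range(10):        # place 10 sensors greedily, one at a time
    best_site = max(set(range(n_sites)) - chosen,
                    key=lambda s: detected(chosen | {s}))
    chosen.add(best_site)

print("chosen sites:", sorted(chosen))
print("plumes detected:", detected(chosen), "of", n_plumes)
```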

  17. In the Context of Multiple Intelligences Theory, Intelligent Data Analysis of Learning Styles Was Based on Rough Set Theory

    ERIC Educational Resources Information Center

    Narli, Serkan; Ozgen, Kemal; Alkan, Huseyin

    2011-01-01

    The present study aims to identify the relationship between individuals' multiple intelligence areas and their learning styles with mathematical clarity using the concept of rough sets which is used in areas such as artificial intelligence, data reduction, discovery of dependencies, prediction of data significance, and generating decision…

  18. A Mathematical Approach in Evaluating Biotechnology Attitude Scale: Rough Set Data Analysis

    ERIC Educational Resources Information Center

    Narli, Serkan; Sinan, Olcay

    2011-01-01

    Individuals' thoughts and attitudes towards biotechnology have been investigated in many countries. A Likert-type scale is the most commonly used scale to measure attitude. However, a weakness of a Likert-type scale is that different responses may produce the same score. The rough set method has been proposed to address this shortcoming. A…

  19. An integrated modelling and multicriteria analysis approach to managing nitrate diffuse pollution: 2. A case study for a chalk catchment in England.

    PubMed

    Koo, B K; O'Connell, P E

    2006-04-01

    The site-specific land use optimisation methodology, suggested by the authors in the first part of this two-part paper, has been applied to the River Kennet catchment at Marlborough, Wiltshire, UK, for a case study. The Marlborough catchment (143 km²) is an agriculture-dominated rural area over a deep chalk aquifer that is vulnerable to nitrate pollution from agricultural diffuse sources. For evaluation purposes, the catchment was discretised into a network of 1 km × 1 km grid cells. For each of the arable-land grid cells, seven land use alternatives (four arable-land alternatives and three grassland alternatives) were evaluated for their environmental and economic potential. For environmental evaluation, nitrate leaching rates of land use alternatives were estimated using SHETRAN simulations and groundwater pollution potential was evaluated using the DRASTIC index. For economic evaluation, economic gross margins were estimated using a simple agronomic model based on nitrogen response functions and agricultural land classification grades. In order to see whether the site-specific optimisation is efficient at the catchment scale, land use optimisation was carried out for four optimisation schemes (i.e. using four sets of criterion weights). Consequently, four land use scenarios were generated and the site-specifically optimised land use scenario was evaluated as the best compromise solution between long-term nitrate pollution and agronomy at the catchment scale.

  20. Optimisation of novel method for the extraction of steviosides from Stevia rebaudiana leaves.

    PubMed

    Puri, Munish; Sharma, Deepika; Barrow, Colin J; Tiwary, A K

    2012-06-01

    Stevioside, a diterpene glycoside, is well known for its intense sweetness and is used as a non-caloric sweetener. Its potential widespread use requires an easy and effective extraction method. Enzymatic extraction of stevioside from Stevia rebaudiana leaves with cellulase, pectinase and hemicellulase, using various parameters, such as concentration of enzyme, incubation time and temperature, was optimised. Hemicellulase was observed to give the highest stevioside yield (369.23±0.11 μg) in 1 h in comparison to cellulase (359±0.30 μg) and pectinases (333±0.55 μg). Extraction from leaves under optimised conditions showed a remarkable increase in the yield (35 times) compared with a control experiment. The extraction conditions were further optimised using response surface methodology (RSM). A central composite design (CCD) was used for experimental design and analysis of the results to obtain optimal extraction conditions. Based on RSM analysis, a temperature of 51-54 °C, a time of 36-45 min and the cocktail of pectinase, cellulase and hemicellulase, set at 2% each, gave the best results. Under the optimised conditions, the experimental values were in close agreement with the prediction model and resulted in a three times yield enhancement of stevioside. The isolated stevioside was characterised through 1H-NMR spectroscopy, by comparison with a stevioside standard. Copyright © 2011 Elsevier Ltd. All rights reserved.

  1. Dynamic least-cost optimisation of wastewater system remedial works requirements.

    PubMed

    Vojinovic, Z; Solomatine, D; Price, R K

    2006-01-01

    In recent years, there has been increasing concern about wastewater system failure and the identification of an optimal set of remedial works requirements. Several methodologies have been developed and applied in asset management activities by water companies worldwide, but often with limited success. To fill the gap, several research projects have been undertaken to explore algorithms for optimising remedial works requirements, but mostly for drinking water supply systems; very limited work has been carried out for wastewater assets. Some of the major deficiencies of commonly used methods lie in one or more of the following aspects: inadequate representation of system complexity, incorporation of a dynamic model into the decision-making loop, the choice of an appropriate optimisation technique and experience in applying that technique. This paper addresses these issues and discusses a new approach to the optimisation of wastewater system remedial works requirements. It is proposed that the search for the optimal solution is performed by a global optimisation tool (with various random search algorithms) while system performance is simulated by a hydrodynamic pipe network model. The work on assembling all required elements and developing appropriate interface protocols between the two tools, aimed at decoding potential remedial solutions into the pipe network model and calculating the corresponding scenario costs, is currently underway.

  2. Advantages of Task-Specific Multi-Objective Optimisation in Evolutionary Robotics

    PubMed Central

    Trianni, Vito; López-Ibáñez, Manuel

    2015-01-01

    The application of multi-objective optimisation to evolutionary robotics is receiving increasing attention. A survey of the literature reveals the different possibilities it offers to improve the automatic design of efficient and adaptive robotic systems, and points to the successful demonstrations available for both task-specific and task-agnostic approaches (i.e., with or without reference to the specific design problem to be tackled). However, the advantages of multi-objective approaches over single-objective ones have not been clearly spelled out and experimentally demonstrated. This paper fills this gap for task-specific approaches: starting from well-known results in multi-objective optimisation, we discuss how to tackle commonly recognised problems in evolutionary robotics. In particular, we show that multi-objective optimisation (i) allows evolving a more varied set of behaviours by exploring multiple trade-offs of the objectives to optimise, (ii) supports the evolution of the desired behaviour through the introduction of objectives as proxies, (iii) avoids the premature convergence to local optima possibly introduced by multi-component fitness functions, and (iv) solves the bootstrap problem exploiting ancillary objectives to guide evolution in the early phases. We present an experimental demonstration of these benefits in three different case studies: maze navigation in a single robot domain, flocking in a swarm robotics context, and a strictly collaborative task in collective robotics. PMID:26295151

  3. Rough Play: One of the Most Challenging Behaviors

    ERIC Educational Resources Information Center

    Carlson, Frances M.

    2011-01-01

    Most children engage in rough play, and research demonstrates its physical, social, emotional, and cognitive value. Early childhood education settings have the responsibility to provide children with what best serves their developmental needs. One of the best ways teachers can support rough play is by modeling it for children. When adults model…

  4. Hybrid real-code ant colony optimisation for constrained mechanical design

    NASA Astrophysics Data System (ADS)

    Pholdee, Nantiwat; Bureerat, Sujin

    2016-01-01

    This paper proposes a hybrid meta-heuristic based on integrating a local search simplex downhill (SDH) method into the search procedure of real-code ant colony optimisation (ACOR). This hybridisation leads to five hybrid algorithms where a Monte Carlo technique, a Latin hypercube sampling technique (LHS) and a translational propagation Latin hypercube design (TPLHD) algorithm are used to generate an initial population. Also, two numerical schemes for selecting an initial simplex are investigated. The original ACOR and its hybrid versions along with a variety of established meta-heuristics are implemented to solve 17 constrained test problems where a fuzzy set theory penalty function technique is used to handle design constraints. The comparative results show that the hybrid algorithms are the top performers. Using the TPLHD technique gives better results than the other sampling techniques. The hybrid optimisers are a powerful design tool for constrained mechanical design problems.
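
    ACOR itself is not reproduced here; the sketch below only illustrates the hybridisation pattern named above, a population-based global step (a crude sampling loop standing in for ACOR) periodically refined by SciPy's Nelder-Mead downhill-simplex local search, on an unconstrained toy objective.

```python
import numpy as np
from scipy.optimize import minimize


def objective(x):
    """Rosenbrock function as a stand-in for a constrained design objective."""
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2))


rng = np.random.default_rng(1)
dim, pop_size = 4, 30
pop = rng.uniform(-2, 2, size=(pop_size, dim))        # Monte Carlo style initial population

for it in range(50):
    # Crude global step standing in for ACOR: resample around the current elite.
    scores = np.array([objective(x) for x in pop])
    elite = pop[np.argsort(scores)[:5]]
    pop = elite[rng.integers(0, 5, pop_size)] + rng.normal(0, 0.3, size=(pop_size, dim))

    if it % 10 == 0:
        # Periodic downhill-simplex refinement of the current best, re-injected into the population.
        best = pop[np.argmin([objective(x) for x in pop])]
        res = minimize(objective, best, method="Nelder-Mead",
                       options={"maxiter": 200, "xatol": 1e-6, "fatol": 1e-6})
        pop[0] = res.x

best = min(pop, key=objective)
print("best point:", np.round(best, 3), "objective:", round(objective(best), 5))
```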

  5. Dwell time-based stabilisation of switched delay systems using free-weighting matrices

    NASA Astrophysics Data System (ADS)

    Koru, Ahmet Taha; Delibaşı, Akın; Özbay, Hitay

    2018-01-01

    In this paper, we present a quasi-convex optimisation method to minimise an upper bound on the dwell time for stability of switched delay systems. Piecewise Lyapunov-Krasovskii functionals are introduced, and the upper bound on the derivative of the Lyapunov functionals is estimated by the free-weighting matrices method to investigate the non-switching stability of each candidate subsystem. A sufficient condition on the dwell time is then derived to guarantee the asymptotic stability of the switched delay system. Once these conditions are represented by a set of linear matrix inequalities (LMIs), the dwell time optimisation problem can be formulated as a standard quasi-convex optimisation problem. Numerical examples are given to illustrate the improvements over previously obtained dwell time bounds. Using the results obtained in the stability case, we present a nonlinear minimisation algorithm to synthesise dwell-time-minimising controllers. The algorithm solves the problem by successive linearisation of the nonlinear conditions.

  6. Defect inspection in hot slab surface: multi-source CCD imaging based fuzzy-rough sets method

    NASA Astrophysics Data System (ADS)

    Zhao, Liming; Zhang, Yi; Xu, Xiaodong; Xiao, Hong; Huang, Chao

    2016-09-01

    To provide an accurate surface defect inspection method and to make robust, automated delineation of image regions of interest (ROI) a reality on the production line, a multi-source CCD imaging based fuzzy-rough sets method is proposed for hot slab surface quality assessment. The applicability of the presented method and the devised system extends to surface quality inspection for strip, billet and slab surfaces, among others. In this work we exploit the complementary advantages of two common machine vision (MV) systems: line-array CCD traditional scanning imaging (LS-imaging) and area-array CCD laser three-dimensional (3D) scanning imaging (AL-imaging). By establishing a fuzzy-rough set model in the detection system, the seeds for relative fuzzy connectedness (RFC) delineation of the ROI can be placed adaptively; the model introduces upper and lower approximation sets for ROI definition, and the boundary region is then delineated by an RFC competitive region classification mechanism. For the first time, a multi-source CCD imaging based fuzzy-rough sets strategy is attempted for CC-slab surface defect inspection, allowing AI algorithms and powerful ROI delineation strategies to be applied automatically in the MV inspection field.

  7. Surface roughness measurement in the submicrometer range using laser scattering

    NASA Astrophysics Data System (ADS)

    Wang, S. H.; Quan, Chenggen; Tay, C. J.; Shang, H. M.

    2000-06-01

    A technique for measuring surface roughness in the submicrometer range is developed. The principle of the method is based on laser scattering from a rough surface. A telecentric optical setup that uses a laser diode as a light source is used to record the light field scattered from the surface of a rough object. The light intensity distribution of the scattered band, which is correlated to the surface roughness, is recorded by a linear photodiode array and analyzed using a single-chip microcomputer. Several sets of test surfaces prepared by different machining processes are measured and a method for the evaluation of surface roughness is proposed.

  8. An intelligent knowledge mining model for kidney cancer using rough set theory.

    PubMed

    Durai, M A Saleem; Acharjya, D P; Kannan, A; Iyengar, N Ch Sriman Narayana

    2012-01-01

    Medical diagnosis processes vary in the degree to which they attempt to deal with different complicating aspects of diagnosis, such as the relative importance of symptoms, varied symptom patterns and the relations between diseases themselves. The rough set approach has two major advantages over other methods. First, it can handle different types of data, such as categorical and numerical data. Second, it does not make any assumption such as a probability distribution function in stochastic modeling or a membership grade function in fuzzy set theory. It involves pattern recognition through logical computational rules rather than approximating them through smooth mathematical functional forms. In this paper we use rough set theory as a data mining tool to derive useful patterns and rules for kidney cancer diagnosis. In particular, the historical data of twenty-five research hospitals and medical colleges are used for validation, and the results show the practical viability of the proposed approach.

  9. Development of a Mobile-Optimised Website to Support Students with Special Needs Transitioning from Primary to Secondary Settings

    ERIC Educational Resources Information Center

    Chambers, Dianne; Coffey, Anne

    2013-01-01

    With an increasing number of students with special needs being included in regular classroom environments, consideration of, and planning for, a smooth transition between different school settings is important for parents, classroom teachers and school administrators. The transition between primary and secondary school can be difficult for…

  10. Subject Specific Optimisation of the Stiffness of Footwear Material for Maximum Plantar Pressure Reduction.

    PubMed

    Chatzistergos, Panagiotis E; Naemi, Roozbeh; Healy, Aoife; Gerth, Peter; Chockalingam, Nachiappan

    2017-08-01

    Current selection of cushioning materials for therapeutic footwear and orthoses is based on empirical and anecdotal evidence. The aim of this investigation is to assess the biomechanical properties of carefully selected cushioning materials and to establish the basis for patient-specific material optimisation. For this purpose, bespoke cushioning materials with qualitatively similar mechanical behaviour but different stiffness were produced. Healthy volunteers were asked to stand and walk on materials with varying stiffness and their capacity for pressure reduction was assessed. Mechanical testing using a surrogate heel model was employed to investigate the effect of loading on optimum stiffness. Results indicated that optimising the stiffness of cushioning materials improved pressure reduction during standing and walking by at least 16 and 19% respectively. Moreover, the optimum stiffness was strongly correlated to body mass (BM) and body mass index (BMI), with stiffer materials needed in the case of people with higher BM or BMI. Mechanical testing confirmed that optimum stiffness increases with the magnitude of compressive loading. For the first time, this study provides quantitative data to support the importance of stiffness optimisation in cushioning materials and sets the basis for methods to inform optimum material selection in the clinic.

  11. Orbital optimisation in the perfect pairing hierarchy: applications to full-valence calculations on linear polyacenes

    NASA Astrophysics Data System (ADS)

    Lehtola, Susi; Parkhill, John; Head-Gordon, Martin

    2018-03-01

    We describe the implementation of orbital optimisation for the models in the perfect pairing hierarchy. Orbital optimisation, which is generally necessary to obtain reliable results, is pursued at perfect pairing (PP) and perfect quadruples (PQ) levels of theory for applications on linear polyacenes, which are believed to exhibit strong correlation in the π space. While local minima and σ-π symmetry breaking solutions were found for PP orbitals, no such problems were encountered for PQ orbitals. The PQ orbitals are used for single-point calculations at PP, PQ and perfect hextuples (PH) levels of theory, both only in the π subspace, as well as in the full σπ valence space. It is numerically demonstrated that the inclusion of single excitations is necessary also when optimised orbitals are used. PH is found to yield good agreement with previously published density matrix renormalisation group data in the π space, capturing over 95% of the correlation energy. Full-valence calculations made possible by our novel, efficient code reveal that strong correlations are weaker when larger basis sets or active spaces are employed than in previous calculations. The largest full-valence PH calculations presented correspond to a (192e,192o) problem.

  12. An improved PSO-SVM model for online recognition defects in eddy current testing

    NASA Astrophysics Data System (ADS)

    Liu, Baoling; Hou, Dibo; Huang, Pingjie; Liu, Banteng; Tang, Huayi; Zhang, Wubo; Chen, Peihua; Zhang, Guangxin

    2013-12-01

    Accurate and rapid recognition of defects is essential for the structural integrity and health monitoring of in-service devices using eddy current (EC) non-destructive testing. This paper introduces a novel model-free method comprising three main modules: a signal pre-processing module, a classifier module and an optimisation module. In the signal pre-processing module, a two-stage differential structure is proposed to suppress the lift-off fluctuation that could contaminate the EC signal. In the classifier module, a multi-class support vector machine (SVM) based on the one-against-one strategy is utilised for its good accuracy. In the optimisation module, the optimal parameters of the classifier are obtained by an improved particle swarm optimisation (IPSO) algorithm. The proposed IPSO technique improves the convergence performance of the basic PSO through the following strategies: nonlinear processing of the inertia weight, and the introduction of a black-hole mechanism and a simulated annealing model with extremum disturbance. The good generalisation ability of the IPSO-SVM model has been validated by adding additional specimens to the testing set. Experiments show that the proposed algorithm achieves higher recognition accuracy and efficiency than other well-known classifiers, and its superiority is more pronounced with a smaller training set, which supports online application.
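
    The paper's full IPSO (black-hole mechanism and simulated-annealing extremum disturbance) is not reproduced here; the sketch below only illustrates the nonlinear inertia-weight idea within a basic particle swarm optimiser, with a toy objective standing in for the SVM cross-validation error.

```python
import numpy as np

rng = np.random.default_rng(2)


def cost(x):
    """Toy objective standing in for the SVM cross-validation error."""
    return float(np.sum(x ** 2))


dim, n_particles, iters = 5, 20, 100
pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

w_max, w_min, c1, c2 = 0.9, 0.4, 2.0, 2.0
for t in range(iters):
    # Nonlinear (quadratic) decay of the inertia weight instead of a linear ramp.
    w = w_min + (w_max - w_min) * (1.0 - t / iters) ** 2
    r1 = rng.random((n_particles, dim))
    r2 = rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([cost(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved] = pos[improved]
    pbest_val[improved] = vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best objective found:", round(float(pbest_val.min()), 6))
```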

  13. Demonstrating the suitability of genetic algorithms for driving microbial ecosystems in desirable directions.

    PubMed

    Vandecasteele, Frederik P J; Hess, Thomas F; Crawford, Ronald L

    2007-07-01

    The functioning of natural microbial ecosystems is determined by biotic interactions, which are in turn influenced by abiotic environmental conditions. Direct experimental manipulation of such conditions can be used to purposefully drive ecosystems toward exhibiting desirable functions. When a set of environmental conditions can be manipulated to be present at a discrete number of levels, finding the right combination of conditions to obtain the optimal desired effect becomes a typical combinatorial optimisation problem. Genetic algorithms are a class of robust and flexible search and optimisation techniques from the field of computer science that may be very suitable for such a task. To verify this idea, datasets containing growth levels of the total microbial community of four different natural microbial ecosystems in response to all possible combinations of a set of five chemical supplements were obtained. Subsequently, the ability of a genetic algorithm to search this parameter space for combinations of supplements driving the microbial communities to high levels of growth was compared to that of a random search, a local search, and a hill-climbing algorithm, three intuitive alternative optimisation approaches. The results indicate that a genetic algorithm is very suitable for driving microbial ecosystems in desirable directions, which opens opportunities for both fundamental ecological research and industrial applications.

  14. Effect of slurry composition on the chemical mechanical polishing of thin diamond films

    PubMed Central

    Werrell, Jessica M.; Mandal, Soumen; Thomas, Evan L. H.; Brousseau, Emmanuel B.; Lewis, Ryan; Borri, Paola; Davies, Philip R.; Williams, Oliver A.

    2017-01-01

    Nanocrystalline diamond (NCD) thin films grown by chemical vapour deposition have an intrinsic surface roughness, which hinders the development and performance of the films’ various applications. Traditional methods of diamond polishing are not effective on NCD thin films. Films either shatter due to the combination of wafer bow and high mechanical pressures or produce uneven surfaces, which has led to the adaptation of the chemical mechanical polishing (CMP) technique for NCD films. This process is poorly understood and in need of optimisation. To compare the effect of slurry composition and pH upon polishing rates, a series of NCD thin films have been polished for three hours using a Logitech Ltd. Tribo CMP System in conjunction with a polyester/polyurethane polishing cloth and six different slurries. The reduction in surface roughness was measured hourly using an atomic force microscope. The final surface chemistry was examined using X-ray photoelectron spectroscopy and a scanning electron microscope. It was found that of all the various properties of the slurries, including pH and composition, the particle size was the determining factor for the polishing rate, with the smaller particles polishing at a greater rate than the larger ones. PMID:29057022

  15. Effect of slurry composition on the chemical mechanical polishing of thin diamond films

    NASA Astrophysics Data System (ADS)

    Werrell, Jessica M.; Mandal, Soumen; Thomas, Evan L. H.; Brousseau, Emmanuel B.; Lewis, Ryan; Borri, Paola; Davies, Philip R.; Williams, Oliver A.

    2017-12-01

    Nanocrystalline diamond (NCD) thin films grown by chemical vapour deposition have an intrinsic surface roughness, which hinders the development and performance of the films' various applications. Traditional methods of diamond polishing are not effective on NCD thin films. Films either shatter due to the combination of wafer bow and high mechanical pressures or produce uneven surfaces, which has led to the adaptation of the chemical mechanical polishing (CMP) technique for NCD films. This process is poorly understood and in need of optimisation. To compare the effect of slurry composition and pH upon polishing rates, a series of NCD thin films have been polished for three hours using a Logitech Ltd. Tribo CMP System in conjunction with a polyester/polyurethane polishing cloth and six different slurries. The reduction in surface roughness was measured hourly using an atomic force microscope. The final surface chemistry was examined using X-ray photoelectron spectroscopy and a scanning electron microscope. It was found that of all the various properties of the slurries, including pH and composition, the particle size was the determining factor for the polishing rate, with the smaller particles polishing at a greater rate than the larger ones.

  16. Comparison of the genetic algorithm and incremental optimisation routines for a Bayesian inverse modelling based network design

    NASA Astrophysics Data System (ADS)

    Nickless, A.; Rayner, P. J.; Erni, B.; Scholes, R. J.

    2018-05-01

    The design of an optimal network of atmospheric monitoring stations for the observation of carbon dioxide (CO2) concentrations can be obtained by applying an optimisation algorithm to a cost function based on minimising the posterior uncertainty in the CO2 fluxes obtained from a Bayesian inverse modelling solution. Two candidate optimisation methods were assessed: an evolutionary algorithm, the genetic algorithm (GA), and a deterministic algorithm, the incremental optimisation (IO) routine. This paper assessed the ability of the IO routine, in comparison with the more computationally demanding GA routine, to optimise the placement of a five-member network of CO2 monitoring sites located in South Africa. The comparison considered the reduction in uncertainty of the overall flux estimate, the spatial similarity of the solutions, and the computational requirements. Although the IO routine failed to find the solution with the global maximum uncertainty reduction, its solution had only fractionally lower uncertainty reduction than the GA solution, and required only a quarter of the computational resources used by the smallest specified GA configuration. The GA solution set showed more inconsistency when the number of iterations or the population size was small, and more so for a complex prior flux covariance matrix. When the GA terminated with a sub-optimal solution, that solution was similar in fitness to the best available solution. Two additional scenarios were considered, with the objective of creating circumstances in which the GA might outperform the IO. The first scenario considered an established network, where the optimisation was required to add five stations to an existing five-member network. In the second scenario the optimisation was based only on the uncertainty reduction within a subregion of the domain. The GA was able to find a better solution than the IO under both scenarios, but with only a marginal improvement in the uncertainty reduction. These results suggest that, for the network design problem, resources would be better spent on improving the prior estimates of the flux uncertainties than on running a complex evolutionary optimisation algorithm. The authors recommend that, if time and computational resources allow, multiple optimisation techniques be used as part of a comprehensive suite of sensitivity tests when performing such an optimisation exercise. This will provide a selection of best solutions which can be ranked based on their utility and practicality.

  17. Measuring Skew in Average Surface Roughness as a Function of Surface Preparation

    NASA Technical Reports Server (NTRS)

    Stahl, Mark

    2015-01-01

    Characterizing surface roughness is important for predicting optical performance. Better measurement of surface roughness reduces polishing time, saves money and allows the science requirements to be better defined. This study characterized the statistics of average surface roughness as a function of polishing time. Average surface roughness was measured at 81 locations using a Zygo white light interferometer at regular intervals during the polishing process. Each data set was fitted to normal and largest extreme value (LEV) distributions and then tested for goodness of fit. We show that the skew in the average roughness data changes as a function of polishing time.

  18. Surface roughness of composite resin veneer after application of herbal and non-herbal toothpaste

    NASA Astrophysics Data System (ADS)

    Nuraini, S.; Herda, E.; Irawan, B.

    2017-08-01

    The aim of this study was to find out the surface roughness of composite resin veneer after brushing. In this study, 24 specimens of composite resin veneer are divided into three subgroups: brushed without toothpaste, brushed with non-herbal toothpaste, and brushed with herbal toothpaste. Brushing was performed for one set of 5,000 strokes and continued for a second set of 5,000 strokes. Roughness of composite resin veneer was determined using a Surface Roughness Tester. The results were statistically analyzed using Kruskal-Wallis nonparametric test and Post Hoc Mann-Whitney. The results indicate that the highest difference among the Ra values occurred within the subgroup that was brushed with the herbal toothpaste. In conclusion, the herbal toothpaste produced a rougher surface on composite resin veneer compared to non-herbal toothpaste.

  19. Characterization of Ice Roughness From Simulated Icing Encounters

    NASA Technical Reports Server (NTRS)

    Anderson, David N.; Shin, Jaiwon

    1997-01-01

    Detailed measurements of the size of roughness elements on ice accreted on models in the NASA Lewis Icing Research Tunnel (IRT) were made in a previous study. Only limited data from that study have been published, but included were the roughness element height, diameter and spacing. In the present study, the height and spacing data were found to correlate with the element diameter, and the diameter was found to be a function primarily of the non-dimensional parameters freezing fraction and accumulation parameter. The width of the smooth zone which forms at the leading edge of the model was found to decrease with increasing accumulation parameter. Although preliminary, the success of these correlations suggests that it may be possible to develop simple relationships between ice roughness and icing conditions for use in ice-accretion-prediction codes. These codes now require an ice-roughness estimate to determine convective heat transfer. Studies using a 7.6-cm-diameter cylinder and a 53.3-cm-chord NACA 0012 airfoil were also performed in which a 1/2-min icing spray at an initial set of conditions was followed by a 9-1/2-min spray at a second set of conditions. The resulting ice shape was compared with that from a full 10-min spray at the second set of conditions. The initial ice accumulation appeared to have no effect on the final ice shape. From this result, it would appear the accreting ice is affected very little by the initial roughness or shape features.

  20. Can We Make Definite Categorization of Student Attitudes? A Rough Set Approach to Investigate Students' Implicit Attitudinal Typologies toward Living Things

    ERIC Educational Resources Information Center

    Narli, Serkan; Yorek, Nurettin; Sahin, Mehmet; Usak, Muhammet

    2010-01-01

    This study investigates the possibility of analyzing educational data using the theory of rough sets which is mostly employed in the fields of data analysis and data mining. Data were collected using an open-ended conceptual understanding test of the living things administered to first-year high school students. The responses of randomly selected…

  1. BIANCA (Brain Intensity AbNormality Classification Algorithm): A new tool for automated segmentation of white matter hyperintensities.

    PubMed

    Griffanti, Ludovica; Zamboni, Giovanna; Khan, Aamira; Li, Linxin; Bonifacio, Guendalina; Sundaresan, Vaanathi; Schulz, Ursula G; Kuker, Wilhelm; Battaglini, Marco; Rothwell, Peter M; Jenkinson, Mark

    2016-11-01

    Reliable quantification of white matter hyperintensities of presumed vascular origin (WMHs) is increasingly needed, given the presence of these MRI findings in patients with several neurological and vascular disorders, as well as in elderly healthy subjects. We present BIANCA (Brain Intensity AbNormality Classification Algorithm), a fully automated, supervised method for WMH detection, based on the k-nearest neighbour (k-NN) algorithm. Relative to previous k-NN based segmentation methods, BIANCA offers different options for weighting the spatial information, local spatial intensity averaging, and different options for the choice of the number and location of the training points. BIANCA is multimodal and highly flexible so that the user can adapt the tool to their protocol and specific needs. We optimised and validated BIANCA on two datasets with different MRI protocols and patient populations (a "predominantly neurodegenerative" and a "predominantly vascular" cohort). BIANCA was first optimised on a subset of images for each dataset in terms of overlap and volumetric agreement with a manually segmented WMH mask. The correlation between the volumes extracted with BIANCA (using the optimised set of options), the volumes extracted from the manual masks and visual ratings showed that BIANCA is a valid alternative to manual segmentation. The optimised set of options was then applied to the whole cohorts and the resulting WMH volume estimates showed good correlations with visual ratings and with age. Finally, we performed a reproducibility test, to evaluate the robustness of BIANCA, and compared BIANCA performance against existing methods. Our findings suggest that BIANCA, which will be freely available as part of the FSL package, is a reliable method for automated WMH segmentation in large cross-sectional cohort studies. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
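
    BIANCA's actual feature construction and options are richer than can be shown here; purely as a hedged illustration of the underlying k-NN idea, the sketch below classifies voxels from intensity features plus down-weighted spatial coordinates with scikit-learn, the neighbour vote fraction acting as a lesion 'probability' that could be thresholded into a WMH mask. All feature values are synthetic.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(4)

# Synthetic per-voxel training features: FLAIR and T1 intensities plus spatial
# coordinates, with a manual lesion label (1) or non-lesion label (0).
n = 2000
flair = np.concatenate([rng.normal(0.6, 0.1, n // 2), rng.normal(0.9, 0.1, n // 2)])
t1 = np.concatenate([rng.normal(0.5, 0.1, n // 2), rng.normal(0.4, 0.1, n // 2)])
xyz = rng.uniform(0, 1, (n, 3))
labels = np.array([0] * (n // 2) + [1] * (n // 2))

spatial_weight = 0.3                         # down-weights location relative to intensity
X = np.column_stack([flair, t1, spatial_weight * xyz])

clf = KNeighborsClassifier(n_neighbors=15).fit(X, labels)

# The fraction of lesion-labelled neighbours serves as a per-voxel lesion
# 'probability' that can be thresholded into a binary mask.
new_voxels = np.column_stack([[0.88, 0.62], [0.41, 0.52],
                              spatial_weight * rng.uniform(0, 1, (2, 3))])
print(clf.predict_proba(new_voxels)[:, 1])
```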

  2. Optimisation of solar synoptic observations

    NASA Astrophysics Data System (ADS)

    Klvaña, Miroslav; Sobotka, Michal; Švanda, Michal

    2012-09-01

    The development of instrumental and computer technologies is connected with steadily increasing needs for archiving large data volumes. The current trend to meet this requirement includes data compression and growth of storage capacities. This approach, however, has technical and practical limits. A further reduction of the archived data volume can be achieved by optimising the archiving, which consists in selecting data without losing the useful information. We describe a method of optimised archiving of solar images, based on the selection of images that contain new information. The new information content is evaluated by means of the analysis of changes detected in the images. We present characteristics of different kinds of image changes and divide them into fictitious changes, which have a disturbing effect, and real changes, which provide new information. In block diagrams describing the selection and archiving, we demonstrate the influence of clouds, the recording of images during an active event on the Sun (including a period before the event onset), and the archiving of the long-term history of solar activity. The described optimisation technique is not suitable for helioseismology, because it does not conserve a uniform time step in the archived sequence and removes the information about solar oscillations. In the case of long-term synoptic observations, optimised archiving can save a large amount of storage capacity. The actual saving will depend on the setting of the change-detection sensitivity and on the capability to exclude fictitious changes.

  3. Setting up a proper power spectral density (PSD) and autocorrelation analysis for material and process characterization

    NASA Astrophysics Data System (ADS)

    Rutigliani, Vito; Lorusso, Gian Francesco; De Simone, Danilo; Lazzarino, Frederic; Rispens, Gijsbert; Papavieros, George; Gogolides, Evangelos; Constantoudis, Vassilios; Mack, Chris A.

    2018-03-01

    Power spectral density (PSD) analysis is playing more and more a critical role in the understanding of line-edge roughness (LER) and linewidth roughness (LWR) in a variety of applications across the industry. It is an essential step to get an unbiased LWR estimate, as well as an extremely useful tool for process and material characterization. However, the PSD estimate can be affected by both random and systematic artifacts caused by image acquisition and measurement settings, which could irremediably alter its information content. In this paper, we report on the impact of various setting parameters (smoothing image processing filters, pixel size, and SEM noise levels) on the PSD estimate. We also discuss the use of the PSD analysis tool in a variety of cases. Looking beyond the basic roughness estimate, we use PSD and autocorrelation analysis to characterize resist blur[1], as well as low and high frequency roughness contents, and we apply this technique to guide the EUV material stack selection. Our results clearly indicate that, if properly used, the PSD methodology is a very sensitive tool to investigate material and process variations.
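
    As a hedged illustration of the unbiased-roughness idea discussed above, the sketch below estimates the PSD of a synthetic edge profile, subtracts a flat noise floor estimated from the high-frequency tail, and integrates the PSD to recover the roughness variance. The pixel size, profile model and noise level are all assumptions, not values from the paper.

```python
# Sketch: PSD of a line-edge profile with a noise-floor correction.
import numpy as np
from scipy.signal import periodogram

pixel_nm = 1.0                      # sampling step along the line edge (nm)
n = 4096
rng = np.random.default_rng(2)

# synthetic edge: correlated roughness (simple AR(1)) plus white SEM noise
true_edge = np.zeros(n)
for i in range(1, n):
    true_edge[i] = 0.95 * true_edge[i - 1] + rng.normal(0, 0.3)
measured = true_edge + rng.normal(0, 0.5, n)

freq, psd = periodogram(measured, fs=1.0 / pixel_nm)

# estimate the noise floor from the high-frequency tail and subtract it
noise_floor = psd[freq > 0.4 * freq.max()].mean()
psd_unbiased = np.clip(psd - noise_floor, 0, None)

# Parseval: the integral of the one-sided PSD gives the roughness variance
sigma_biased = np.sqrt(np.trapz(psd, freq))
sigma_unbiased = np.sqrt(np.trapz(psd_unbiased, freq))
print(f"biased 1-sigma roughness   : {sigma_biased:.3f} nm")
print(f"unbiased 1-sigma roughness : {sigma_unbiased:.3f} nm")
```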

  4. Rough set approach for accident chains exploration.

    PubMed

    Wong, Jinn-Tsai; Chung, Yi-Shih

    2007-05-01

    This paper presents a novel non-parametric methodology--rough set theory--for accident occurrence exploration. The rough set theory allows researchers to analyze accidents in multiple dimensions and to model accident occurrence as factor chains. Factor chains are composed of driver characteristics, trip characteristics, driver behavior and environment factors that imply typical accident occurrence. A real-world database (2003 Taiwan single auto-vehicle accidents) is used as an example to demonstrate the proposed approach. The results show that although most accident patterns are unique, some accident patterns are significant and worth noting. Student drivers who are young and less experienced exhibit a relatively high possibility of being involved in off-road accidents on roads with a speed limit between 51 and 79 km/h under normal driving circumstances. Notably, for bump-into-facility accidents, wet surface is a distinctive environmental factor.

  5. Convection from Hemispherical and Conical Model Ice Roughness Elements in Stagnation Region Flows

    NASA Technical Reports Server (NTRS)

    Hughes, Michael T.; Shannon, Timothy A.; McClain, Stephen T.; Vargas, Mario; Broeren, Andy

    2016-01-01

    To improve ice accretion prediction codes, more data regarding ice roughness and its effects on convective heat transfer are required. The Vertical Icing Studies Tunnel (VIST) at NASA Glenn Research Center was used to model realistic ice roughness in the stagnation region of a NACA 0012 airfoil. In the VIST, a test plate representing the leading 2% chord of the airfoil was subjected to flows of 7.62 m/s (25 ft/s), 12.19 m/s (40 ft/s), and 16.76 m/s (55 ft/s). The test plate was fitted with multiple surfaces or sets of roughness panels, each with a different representation of ice roughness. The sets of roughness panels were constructed using two element distribution patterns that were created based on a laser scan of an iced airfoil acquired in the Icing Research Tunnel at NASA Glenn. For both roughness patterns, surfaces were constructed using plastic hemispherical elements, plastic conical elements, and aluminum conical elements. Infrared surface thermometry data from tests run in the VIST were used to calculate area averaged heat transfer coefficient values. The values from the roughness surfaces were compared to the smooth control surface, showing convective enhancement as high as 400% in some cases. The data gathered during this study will ultimately be used to improve the physical modeling in LEWICE or other ice accretion codes and produce predictions of in-flight ice accretion on aircraft surfaces with greater confidence.

  6. a Rough Set Decision Tree Based Mlp-Cnn for Very High Resolution Remotely Sensed Image Classification

    NASA Astrophysics Data System (ADS)

    Zhang, C.; Pan, X.; Zhang, S. Q.; Li, H. P.; Atkinson, P. M.

    2017-09-01

    Recent advances in remote sensing have witnessed a great amount of very high resolution (VHR) imagery acquired at sub-metre spatial resolution. These VHR remotely sensed data pose enormous challenges for processing, analysing and classifying them effectively, due to their high spatial complexity and heterogeneity. Although many computer-aided classification methods based on machine learning approaches have been developed over the past decades, most of them were developed for pixel-level spectral differentiation, e.g. the Multi-Layer Perceptron (MLP), and are unable to exploit the abundant spatial detail within VHR images. This paper introduces a rough set model as a general framework to objectively characterise the uncertainty in CNN classification results, and to further partition them into correctness and incorrectness on the map. The correctly classified regions of the CNN were trusted and maintained, whereas the misclassified areas were reclassified using a decision tree with both CNN and MLP. The effectiveness of the proposed rough set decision tree based MLP-CNN was tested using an urban area at Bournemouth, United Kingdom. The MLP-CNN, well capturing the complementarity between CNN and MLP through the rough set based decision tree, achieved the best classification performance both visually and numerically. Therefore, this research paves the way to fully automatic and effective VHR image classification.

  7. Modeling Mode Choice Behavior Incorporating Household and Individual Sociodemographics and Travel Attributes Based on Rough Sets Theory

    PubMed Central

    Chen, Xuewu; Wei, Ming; Wu, Jingxian; Hou, Xianyao

    2014-01-01

    Most traditional mode choice models are based on the principle of random utility maximization derived from econometric theory. Alternatively, mode choice modeling can be regarded as a pattern recognition problem reflected from the explanatory variables of determining the choices between alternatives. The paper applies the knowledge discovery technique of rough sets theory to model travel mode choices incorporating household and individual sociodemographics and travel information, and to identify the significance of each attribute. The study uses the detailed travel diary survey data of Changxing county which contains information on both household and individual travel behaviors for model estimation and evaluation. The knowledge is presented in the form of easily understood IF-THEN statements or rules which reveal how each attribute influences mode choice behavior. These rules are then used to predict travel mode choices from information held about previously unseen individuals and the classification performance is assessed. The rough sets model shows high robustness and good predictive ability. The most significant condition attributes identified to determine travel mode choices are gender, distance, household annual income, and occupation. Comparative evaluation with the MNL model also proves that the rough sets model gives superior prediction accuracy and coverage on travel mode choice modeling. PMID:25431585

  8. Evaluation and optimisation of preparative semi-automated electrophoresis systems for Illumina library preparation.

    PubMed

    Quail, Michael A; Gu, Yong; Swerdlow, Harold; Mayho, Matthew

    2012-12-01

    Size selection can be a critical step in preparation of next-generation sequencing libraries. Traditional methods employing gel electrophoresis lack reproducibility, are labour intensive, do not scale well and employ hazardous interchelating dyes. In a high-throughput setting, solid-phase reversible immobilisation beads are commonly used for size-selection, but result in quite a broad fragment size range. We have evaluated and optimised the use of two semi-automated preparative DNA electrophoresis systems, the Caliper Labchip XT and the Sage Science Pippin Prep, for size selection of Illumina sequencing libraries. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Measuring skew in average surface roughness as a function of surface preparation

    NASA Astrophysics Data System (ADS)

    Stahl, Mark T.

    2015-08-01

    Characterizing surface roughness is important for predicting optical performance. Better measurement of surface roughness reduces polishing time, saves money and allows the science requirements to be better defined. This study characterized the statistics of average surface roughness as a function of polishing time. Average surface roughness was measured at 81 locations using a Zygo® white light interferometer at regular intervals during the polishing process. Each data set was fit to normal and largest extreme value (LEV) distributions and then tested for goodness of fit. We show that the skew in the average data changes as a function of polishing time.
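
    A minimal sketch of the distribution-fitting step described above: fit normal and largest-extreme-value (Gumbel) models to a set of Ra measurements and compare goodness of fit with a Kolmogorov-Smirnov test. The 81 synthetic values only mimic the measurement grid; they are not the study's data.

```python
# Fit normal and LEV (Gumbel) distributions to roughness data and compare fits.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
ra = rng.gumbel(loc=2.0, scale=0.3, size=81)     # synthetic Ra values (nm)

# fit both candidate distributions by maximum likelihood
mu, sigma = stats.norm.fit(ra)
loc_g, scale_g = stats.gumbel_r.fit(ra)

# Kolmogorov-Smirnov goodness of fit against each fitted model
ks_norm = stats.kstest(ra, 'norm', args=(mu, sigma))
ks_gumbel = stats.kstest(ra, 'gumbel_r', args=(loc_g, scale_g))

print(f"sample skew            : {stats.skew(ra):.3f}")
print(f"normal fit   KS p-value: {ks_norm.pvalue:.3f}")
print(f"Gumbel (LEV) KS p-value: {ks_gumbel.pvalue:.3f}")
```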

  10. Fuzzy-Rough Nearest Neighbour Classification

    NASA Astrophysics Data System (ADS)

    Jensen, Richard; Cornelis, Chris

    A new fuzzy-rough nearest neighbour (FRNN) classification algorithm is presented in this paper, as an alternative to Sarkar's fuzzy-rough ownership function (FRNN-O) approach. By contrast to the latter, our method uses the nearest neighbours to construct lower and upper approximations of decision classes, and classifies test instances based on their membership to these approximations. In the experimental analysis, we evaluate our approach with both classical fuzzy-rough approximations (based on an implicator and a t-norm), as well as with the recently introduced vaguely quantified rough sets. Preliminary results are very good, and in general FRNN outperforms FRNN-O, as well as the traditional fuzzy nearest neighbour (FNN) algorithm.
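
    The sketch below is a minimal, illustrative FRNN-style classifier: the k most similar training instances are used to build lower and upper approximation memberships with the Lukasiewicz implicator and t-norm, and the test instance is assigned to the class with the highest average membership. The similarity measure and the choice of k are assumptions, not necessarily those used by the authors.

```python
# Minimal fuzzy-rough nearest-neighbour (FRNN) sketch.
import numpy as np

def similarity(a, b, ranges):
    """Fuzzy similarity in [0, 1]: 1 minus mean per-feature normalised distance."""
    return 1.0 - np.mean(np.abs(a - b) / ranges)

def frnn_predict(X_train, y_train, x, k=5):
    ranges = X_train.max(axis=0) - X_train.min(axis=0) + 1e-12
    sims = np.array([similarity(x, xi, ranges) for xi in X_train])
    nn = np.argsort(-sims)[:k]                    # k most similar training points
    best_class, best_score = None, -1.0
    for c in np.unique(y_train):
        member = (y_train[nn] == c).astype(float)  # crisp class membership C(x)
        lower = np.min(np.minimum(1.0, 1.0 - sims[nn] + member))   # implicator
        upper = np.max(np.maximum(0.0, sims[nn] + member - 1.0))   # t-norm
        score = 0.5 * (lower + upper)
        if score > best_score:
            best_class, best_score = c, score
    return best_class

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(3, 1, (30, 2))])
y = np.array([0] * 30 + [1] * 30)
print(frnn_predict(X, y, np.array([2.8, 3.1])))   # expected: class 1
```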

  11. Consideration of plant behaviour in optimal servo-compensator design

    NASA Astrophysics Data System (ADS)

    Moase, W. H.; Manzie, C.

    2016-07-01

    Where the most prevalent optimal servo-compensator formulations penalise the behaviour of an error system, this paper considers the problem of additionally penalising the actual states and inputs of the plant. Doing so has the advantage of enabling the penalty function to better resemble an economic cost. This is especially true of problems where control effort needs to be sensibly allocated across weakly redundant inputs or where one wishes to use penalties to soft-constrain certain states or inputs. It is shown that, although the resulting cost function grows unbounded as its horizon approaches infinity, it is possible to formulate an equivalent optimisation problem with a bounded cost. The resulting optimisation problem is similar to those in earlier studies but has an additional 'correction term' in the cost function, and a set of equality constraints that arise when there are redundant inputs. A numerical approach to solve the resulting optimisation problem is presented, followed by simulations on a micro-macro positioner that illustrate the benefits of the proposed servo-compensator design approach.

  12. Nonlinear predictive control of a boiler-turbine unit: A state-space approach with successive on-line model linearisation and quadratic optimisation.

    PubMed

    Ławryńczuk, Maciej

    2017-03-01

    This paper details development of a Model Predictive Control (MPC) algorithm for a boiler-turbine unit, which is a nonlinear multiple-input multiple-output process. The control objective is to follow set-point changes imposed on two state (output) variables and to satisfy constraints imposed on three inputs and one output. In order to obtain a computationally efficient control scheme, the state-space model is successively linearised on-line for the current operating point and used for prediction. In consequence, the future control policy is easily calculated from a quadratic optimisation problem. For state estimation the extended Kalman filter is used. It is demonstrated that the MPC strategy based on constant linear models does not work satisfactorily for the boiler-turbine unit whereas the discussed algorithm with on-line successive model linearisation gives practically the same trajectories as the truly nonlinear MPC controller with nonlinear optimisation repeated at each sampling instant. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  13. Automation of route identification and optimisation based on data-mining and chemical intuition.

    PubMed

    Lapkin, A A; Heer, P K; Jacob, P-M; Hutchby, M; Cunningham, W; Bull, S D; Davidson, M G

    2017-09-21

    Data-mining of Reaxys and network analysis of the combined literature and in-house reactions set were used to generate multiple possible reaction routes to convert a bio-waste feedstock, limonene, into a pharmaceutical API, paracetamol. The network analysis of data provides a rich knowledge-base for generation of the initial reaction screening and development programme. Based on the literature and the in-house data, an overall flowsheet for the conversion of limonene to paracetamol was proposed. Each individual reaction-separation step in the sequence was simulated as a combination of the continuous flow and batch steps. The linear model generation methodology allowed us to identify the reaction steps requiring further chemical optimisation. The generated model can be used for global optimisation and generation of environmental and other performance indicators, such as cost indicators. However, the identified further challenge is to automate model generation to evolve optimal multi-step chemical routes and optimal process configurations.

  14. A hybrid credibility-based fuzzy multiple objective optimisation to differential pricing and inventory policies with arbitrage consideration

    NASA Astrophysics Data System (ADS)

    Ghasemy Yaghin, R.; Fatemi Ghomi, S. M. T.; Torabi, S. A.

    2015-10-01

    In most markets, price differentiation mechanisms enable manufacturers to offer different prices for their products or services in different customer segments; however, the perfect price discrimination is usually impossible for manufacturers. The importance of accounting for uncertainty in such environments spurs an interest to develop appropriate decision-making tools to deal with uncertain and ill-defined parameters in joint pricing and lot-sizing problems. This paper proposes a hybrid bi-objective credibility-based fuzzy optimisation model including both quantitative and qualitative objectives to cope with these issues. Taking marketing and lot-sizing decisions into account simultaneously, the model aims to maximise the total profit of manufacturer and to improve service aspects of retailing simultaneously to set different prices with arbitrage consideration. After applying appropriate strategies to defuzzify the original model, the resulting non-linear multi-objective crisp model is then solved by a fuzzy goal programming method. An efficient stochastic search procedure using particle swarm optimisation is also proposed to solve the non-linear crisp model.

  15. On twelve types of covering-based rough sets.

    PubMed

    Safari, Samira; Hooshmandasl, Mohammad Reza

    2016-01-01

    Covering approximation spaces are a generalization of equivalence-based rough set theories. In this paper, we consider twelve types of covering-based approximation operators, obtained by combining four types of covering lower approximation operators and three types of covering upper approximation operators. We then study the properties of these new pairs and show that they have most of the properties common among existing covering approximation pairs. Finally, the relation between these new pairs is studied.
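
    To make the notion concrete, the sketch below implements one representative pair of covering-based approximation operators (lower approximation as the union of blocks contained in the target set, upper approximation as the union of blocks intersecting it); the paper studies twelve such pairs, and this is only an illustration of the general idea.

```python
# One representative pair of covering-based approximation operators.
def lower_approximation(cover, target):
    """Union of covering blocks entirely contained in the target set."""
    result = set()
    for block in cover:
        if block <= target:
            result |= block
    return result

def upper_approximation(cover, target):
    """Union of covering blocks that intersect the target set."""
    result = set()
    for block in cover:
        if block & target:
            result |= block
    return result

universe = {1, 2, 3, 4, 5, 6}
cover = [frozenset({1, 2}), frozenset({2, 3, 4}), frozenset({4, 5}), frozenset({5, 6})]
assert set().union(*cover) == universe          # a valid covering of the universe

X = {1, 2, 4, 5}
print("lower:", sorted(lower_approximation(cover, X)))   # [1, 2, 4, 5]
print("upper:", sorted(upper_approximation(cover, X)))   # [1, 2, 3, 4, 5, 6]
```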

  16. Hybrid machine learning technique for forecasting Dhaka stock market timing decisions.

    PubMed

    Banik, Shipra; Khodadad Khan, A F M; Anwer, Mohammad

    2014-01-01

    Forecasting the stock market has been a difficult task for applied researchers owing to the nature of the data, which are very noisy and time varying. Nevertheless, several empirical studies have addressed this problem, and a number of researchers have efficiently applied machine learning techniques to forecast stock markets. This paper studies stock prediction for the use of investors. Investors typically incur losses because of uncertain investment objectives and poorly understood assets. This paper proposes a rough set model, a neural network model, and a hybrid neural network and rough set model to find optimal buy and sell times of a share on the Dhaka stock exchange. Experimental findings demonstrate that our proposed hybrid model has higher precision than the single rough set model and the neural network model. We believe these findings will help stock investors decide on optimal buy and/or sell times on the Dhaka stock exchange.

  17. Hybrid Machine Learning Technique for Forecasting Dhaka Stock Market Timing Decisions

    PubMed Central

    Banik, Shipra; Khodadad Khan, A. F. M.; Anwer, Mohammad

    2014-01-01

    Forecasting the stock market has been a difficult task for applied researchers owing to the nature of the data, which are very noisy and time varying. Nevertheless, several empirical studies have addressed this problem, and a number of researchers have efficiently applied machine learning techniques to forecast stock markets. This paper studies stock prediction for the use of investors. Investors typically incur losses because of uncertain investment objectives and poorly understood assets. This paper proposes a rough set model, a neural network model, and a hybrid neural network and rough set model to find optimal buy and sell times of a share on the Dhaka stock exchange. Experimental findings demonstrate that our proposed hybrid model has higher precision than the single rough set model and the neural network model. We believe these findings will help stock investors decide on optimal buy and/or sell times on the Dhaka stock exchange. PMID:24701205

  18. Reservoir optimisation using El Niño information. Case study of Daule Peripa (Ecuador)

    NASA Astrophysics Data System (ADS)

    Gelati, Emiliano; Madsen, Henrik; Rosbjerg, Dan

    2010-05-01

    The optimisation of water resources systems requires the ability to produce runoff scenarios that are consistent with available climatic information. We approach stochastic runoff modelling with a Markov-modulated autoregressive model with exogenous input, which belongs to the class of Markov-switching models. The model assumes runoff parameterisation to be conditioned on a hidden climatic state following a Markov chain, whose state transition probabilities depend on climatic information. This approach allows stochastic modeling of non-stationary runoff, as runoff anomalies are described by a mixture of autoregressive models with exogenous input, each one corresponding to a climate state. We calibrate the model on the inflows of the Daule Peripa reservoir located in western Ecuador, where the occurrence of El Niño leads to anomalously heavy rainfall caused by positive sea surface temperature anomalies along the coast. El Niño - Southern Oscillation (ENSO) information is used to condition the runoff parameterisation. Inflow predictions are realistic, especially at the occurrence of El Niño events. The Daule Peripa reservoir serves a hydropower plant and a downstream water supply facility. Using historical ENSO records, synthetic monthly inflow scenarios are generated for the period 1950-2007. These scenarios are used as input to perform stochastic optimisation of the reservoir rule curves with a multi-objective Genetic Algorithm (MOGA). The optimised rule curves are assumed to be the reservoir base policy. ENSO standard indices are currently forecasted at monthly time scale with nine-month lead time. These forecasts are used to perform stochastic optimisation of reservoir releases at each monthly time step according to the following procedure: (i) nine-month inflow forecast scenarios are generated using ENSO forecasts; (ii) a MOGA is set up to optimise the upcoming nine monthly releases; (iii) the optimisation is carried out by simulating the releases on the inflow forecasts, and by applying the base policy on a subsequent synthetic inflow scenario in order to account for long-term costs; (iv) the optimised release for the first month is implemented; (v) the state of the system is updated and (i), (ii), (iii), and (iv) are iterated for the following time step. The results highlight the advantages of using a climate-driven stochastic model to produce inflow scenarios and forecasts for reservoir optimisation, showing potential improvements with respect to the current management. Dynamic programming was used to find the best possible release time series given the inflow observations, in order to benchmark any possible operational improvement.
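
    A toy sketch of the model class described above is given below: a two-state Markov-switching AR(1) generator for monthly inflow anomalies, where the probability of switching into the wet state depends on an exogenous climate index through a logistic link. All parameter values and the link function are illustrative assumptions, not the calibrated Daule Peripa model.

```python
# Two-state Markov-switching AR(1) inflow-anomaly generator (illustrative only).
import numpy as np

def simulate_inflow_anomaly(enso, params, rng):
    """enso   : monthly climate-index values (exogenous input)
    params : 'gamma0' (per-state intercepts) and 'gamma1' control the logistic
             probability of being in the wet state next month; 'states' holds the
             AR coefficient 'phi', input gain 'beta' and noise std 'sigma'."""
    n = len(enso)
    anomalies = np.zeros(n)
    state = 0
    for t in range(1, n):
        # transition probability into the wet state depends on the current state
        # and grows with the climate index (simple logistic link)
        g0 = params['gamma0'][state]
        p_wet = 1.0 / (1.0 + np.exp(-(g0 + params['gamma1'] * enso[t])))
        state = 1 if rng.random() < p_wet else 0
        p = params['states'][state]
        anomalies[t] = (p['phi'] * anomalies[t - 1]
                        + p['beta'] * enso[t]
                        + rng.normal(0.0, p['sigma']))
    return anomalies

rng = np.random.default_rng(5)
enso = np.sin(np.linspace(0, 8 * np.pi, 240)) + rng.normal(0, 0.3, 240)
params = {'gamma0': [-1.5, 0.5], 'gamma1': 2.0,
          'states': [{'phi': 0.6, 'beta': 0.1, 'sigma': 0.5},    # normal state
                     {'phi': 0.7, 'beta': 0.8, 'sigma': 1.2}]}   # El Nino state
scenario = simulate_inflow_anomaly(enso, params, rng)
print("months simulated:", len(scenario), "| max anomaly:", round(float(scenario.max()), 2))
```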

  19. Optimisation of rocker sole footwear for prevention of first plantar ulcer: comparison of group-optimised and individually-selected footwear designs.

    PubMed

    Preece, Stephen J; Chapman, Jonathan D; Braunstein, Bjoern; Brüggemann, Gert-Peter; Nester, Christopher J

    2017-01-01

    Appropriate footwear for individuals with diabetes but no ulceration history could reduce the risk of first ulceration. However, individuals who deem themselves at low risk are unlikely to seek out bespoke footwear which is personalised. Therefore, our primary aim was to investigate whether group-optimised footwear designs, which could be prefabricated and delivered in a retail setting, could achieve appropriate pressure reduction, or whether footwear selection must be on a patient-by-patient basis. A second aim was to compare responses to footwear design between healthy participants and people with diabetes in order to understand the transferability of previous footwear research, performed in healthy populations. Plantar pressures were recorded from 102 individuals with diabetes, considered at low risk of ulceration. This cohort included 17 individuals with peripheral neuropathy. We also collected data from 66 healthy controls. Each participant walked in 8 rocker shoe designs (4 apex positions × 2 rocker angles). ANOVA analysis was then used to understand the effect of two design features and descriptive statistics used to identify the group-optimised design. Using 200 kPa as a target, this group-optimised design was then compared to the design identified as the best for each participant (using plantar pressure data). Peak plantar pressure increased significantly as apex position was moved distally and rocker angle reduced (p < 0.001). The group-optimised design incorporated an apex at 52% of shoe length, a 20° rocker angle and an apex angle of 95°. With this design 71-81% of peak pressures were below the 200 kPa threshold, both in the full cohort of individuals with diabetes and also in the neuropathic subgroup. Importantly, only small increases (<5%) in this proportion were observed when participants wore footwear which was individually selected. In terms of optimised footwear designs, healthy participants demonstrated the same response as participants with diabetes, despite having lower plantar pressures. This is the first study demonstrating that a group-optimised, generic rocker shoe might perform almost as well as footwear selected on a patient by patient basis in a low risk patient group. This work provides a starting point for clinical evaluation of generic versus personalised pressure reducing footwear.

  20. Investigation of surface porosity measurements and compaction pressure as means to ensure consistent contact angle determinations.

    PubMed

    Holm, René; Borkenfelt, Simon; Allesø, Morten; Andersen, Jens Enevold Thaulov; Beato, Stefania; Holm, Per

    2016-02-10

    A compound's wettability is critical for a number of central processes including disintegration, dispersion, solubilisation and dissolution. It is therefore an important optimisation parameter both in drug discovery and as guidance for formulation selection and optimisation. The wettability of a compound is determined by its contact angle with a liquid, which in the present study was measured using the sessile drop method applied to a disc compact of the compound. Precise determination of the contact angle is important if it is to be used either to rank compounds or to select excipients, e.g. to increase the wetting from a solid dosage form. Since the surface roughness of the compact has been suggested to influence the measurement, this study investigated whether the surface quality, in terms of surface porosity, had an influence on the measured contact angle. A correlation with surface porosity was observed; however, for six out of seven compounds similar results were obtained by applying a standard pressure (866 MPa) to the discs during their preparation. The data presented in this work therefore suggest that a constant high pressure should be sufficient for most compounds when determining the contact angle. Only in special cases where compounds have poor compressibility would there be a need for a surface-quality-control step before the contact angle determination. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. A Transport Equation Approach to Modeling the Influence of Surface Roughness on Boundary Layer Transition

    NASA Astrophysics Data System (ADS)

    Langel, Christopher Michael

    A computational investigation has been performed to better understand the impact of surface roughness on the flow over a contaminated surface. This thesis highlights the implementation and development of the roughness amplification model in the flow solver OVERFLOW-2. The model, originally proposed by Dassler, Kozulovic, and Fiala, introduces an additional scalar field roughness amplification quantity. This value is explicitly set at rough wall boundaries using surface roughness parameters and local flow quantities. This additional transport equation allows non-local effects of surface roughness to be accounted for downstream of rough sections. This roughness amplification variable is coupled with the Langtry-Menter model and used to modify the criteria for transition. Results from flat plate test cases show good agreement with experimental transition behavior on the flow over varying sand grain roughness heights. Additional validation studies were performed on a NACA 0012 airfoil with leading edge roughness. The computationally predicted boundary layer development demonstrates good agreement with experimental results. New tests using varying roughness configurations are being carried out at the Texas A&M Oran W. Nicks Low Speed Wind Tunnel to provide further calibration of the roughness amplification method. An overview and preliminary results are provided of this concurrent experimental investigation.

  2. Effect of truncated cone roughness element density on hydrodynamic drag

    NASA Astrophysics Data System (ADS)

    Womack, Kristofer; Schultz, Michael; Meneveau, Charles

    2017-11-01

    An experimental study was conducted on rough-wall, turbulent boundary layer flow with roughness elements whose idealized shape model barnacles that cause hydrodynamic drag in many applications. Varying planform densities of truncated cone roughness elements were investigated. Element densities studied ranged from 10% to 79%. Detailed turbulent boundary layer velocity statistics were recorded with a two-component LDV system on a three-axis traverse. Hydrodynamic roughness length (z0) and skin-friction coefficient (Cf) were determined and compared with the estimates from existing roughness element drag prediction models including Macdonald et al. (1998) and other recent models. The roughness elements used in this work model idealized barnacles, so implications of this data set for ship powering are considered. This research was supported by the Office of Naval Research and by the Department of Defense (DoD) through the National Defense Science & Engineering Graduate Fellowship (NDSEG) Program.
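
    Although the abstract does not spell out how z0 and Cf were obtained, a standard approach is to fit the rough-wall log law to the measured mean-velocity profile. The sketch below does this on synthetic data, neglecting the displacement height for simplicity, and should be read as an assumption-laden illustration rather than the authors' procedure.

```python
# Estimate roughness length z0 and skin-friction coefficient Cf by fitting the
# rough-wall log law U(z) = (u*/kappa) * ln(z / z0) to a mean-velocity profile.
import numpy as np
from scipy.optimize import curve_fit

KAPPA = 0.40                       # von Karman constant

def log_law(z, u_star, z0):
    return (u_star / KAPPA) * np.log(z / z0)

# synthetic "measured" profile in the log region (heights in m, velocity in m/s)
z = np.linspace(0.02, 0.15, 12)
rng = np.random.default_rng(6)
u = log_law(z, 0.25, 0.001) + rng.normal(0, 0.05, z.size)

(u_star, z0), _ = curve_fit(log_law, z, u, p0=[0.2, 1e-3], bounds=([0, 1e-6], [2, 0.05]))
U_e = 5.0                                            # free-stream velocity (m/s)
cf = 2.0 * (u_star / U_e) ** 2

print(f"u* = {u_star:.3f} m/s, z0 = {z0*1000:.2f} mm, Cf = {cf:.4f}")
```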

  3. A new fiber optic sensor for inner surface roughness measurement

    NASA Astrophysics Data System (ADS)

    Xu, Xiaomei; Liu, Shoubin; Hu, Hong

    2009-11-01

    In order to measure inner surface roughness of small holes nondestructively, a new fiber optic sensor is researched and developed. Firstly, a new model for surface roughness measurement is proposed, which is based on intensity-modulated fiber optic sensors and scattering modeling of rough surfaces. Secondly, a fiber optical measurement system is designed and set up. Under the help of new techniques, the fiber optic sensor can be miniaturized. Furthermore, the use of micro prism makes the light turn 90 degree, so the inner side surface roughness of small holes can be measured. Thirdly, the fiber optic sensor is gauged by standard surface roughness specimens, and a series of measurement experiments have been done. The measurement results are compared with those obtained by TR220 Surface Roughness Instrument and Form Talysurf Laser 635, and validity of the developed fiber optic sensor is verified. Finally, precision and influence factors of the fiber optic sensor are analyzed.

  4. Optimisation of groundwater level monitoring networks using geostatistical modelling based on the Spartan family variogram and a genetic algorithm method

    NASA Astrophysics Data System (ADS)

    Parasyris, Antonios E.; Spanoudaki, Katerina; Kampanis, Nikolaos A.

    2016-04-01

    Groundwater level monitoring networks provide essential information for water resources management, especially in areas with significant groundwater exploitation for agricultural and domestic use. Given the high maintenance costs of these networks, the development of tools which can be used by regulators for efficient network design is essential. In this work, a monitoring network optimisation tool is presented. The network optimisation tool couples geostatistical modelling based on the Spartan family variogram with a genetic algorithm method and is applied to Mires basin in Crete, Greece, an area of high socioeconomic and agricultural interest, which suffers from groundwater overexploitation leading to a dramatic decrease of groundwater levels. The purpose of the optimisation tool is to determine which wells to exclude from the monitoring network because they add little or no beneficial information to groundwater level mapping of the area. Unlike previous relevant investigations, the network optimisation tool presented here uses Ordinary Kriging with the recently-established non-differentiable Spartan variogram for groundwater level mapping, which, based on a previous geostatistical study in the area, leads to optimal groundwater level mapping. Seventy boreholes operate in the area for groundwater abstraction and water level monitoring. The Spartan variogram gives overall the most accurate groundwater level estimates, followed closely by the power-law model. The geostatistical model is coupled to an integer genetic algorithm method programmed in MATLAB 2015a. The algorithm is used to find the set of wells whose removal leads to the minimum error between the original water level mapping using all the available wells in the network and the groundwater level mapping using the reduced well network (error is defined as the 2-norm of the difference between the original mapping matrix with 70 wells and the mapping matrix of the reduced well network). The solution to the optimization problem (the best wells to retain in the monitoring network) depends on the total number of wells removed; this number is a management decision. The water level monitoring network of Mires basin has been optimized 6 times by removing 5, 8, 12, 15, 20 and 25 wells from the original network. In order to achieve the optimum solution in the minimum possible computational time, a stall generations criterion was set for each optimisation scenario. An improvement made to the classic genetic algorithm was to vary the mutation and crossover fractions according to the change in the mean fitness value. This introduces randomness into reproduction when the solution converges, in order to avoid local minima, or more directed reproduction (a higher crossover fraction) when the mean fitness value is still changing markedly. The choice of the integer genetic algorithm in MATLAB 2015a restricts the use of custom selection and crossover-mutation functions. Therefore, custom population and crossover-mutation-selection functions were created to set the initial population type to custom and to allow the mutation and crossover probabilities to change according to the convergence of the genetic algorithm, thus achieving higher accuracy. The application of the network optimisation tool to Mires basin indicates that 25 wells can be removed with a relatively small deterioration of the groundwater level map.
The results indicate the robustness of the network optimisation tool: Wells were removed from high well-density areas while preserving the spatial pattern of the original groundwater level map. Varouchakis, E. A. and D. T. Hristopulos (2013). "Improvement of groundwater level prediction in sparsely gauged basins using physical laws and local geographic features as auxiliary variables." Advances in Water Resources 52: 34-49.
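
    The search described above can be sketched in a few lines: a toy genetic algorithm chooses which wells to drop so that the interpolated water-level map changes least in the 2-norm sense. Inverse-distance weighting stands in for the Ordinary Kriging / Spartan-variogram mapping, and the population sizes, operators and data below are all illustrative assumptions.

```python
# Toy GA for selecting wells to remove while minimising the 2-norm map error.
import numpy as np

rng = np.random.default_rng(7)
n_wells, n_remove = 30, 5
wells = rng.uniform(0, 10, (n_wells, 2))                 # well coordinates (km)
levels = 50 + 3 * np.sin(wells[:, 0]) + rng.normal(0, 0.5, n_wells)

gx, gy = np.meshgrid(np.linspace(0, 10, 40), np.linspace(0, 10, 40))
grid = np.column_stack([gx.ravel(), gy.ravel()])

def idw_map(keep):
    """Inverse-distance-weighted map using only the retained wells."""
    d = np.linalg.norm(grid[:, None, :] - wells[keep][None, :, :], axis=2) + 1e-9
    w = 1.0 / d**2
    return (w @ levels[keep]) / w.sum(axis=1)

full_map = idw_map(np.arange(n_wells))

def fitness(removed):
    keep = np.setdiff1d(np.arange(n_wells), removed)
    return -np.linalg.norm(full_map - idw_map(keep))      # maximise = minimise error

def random_individual():
    return rng.choice(n_wells, size=n_remove, replace=False)

population = [random_individual() for _ in range(40)]
for _ in range(60):                                       # generations
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:10]
    children = []
    while len(children) < 30:
        a, b = rng.choice(len(parents), 2, replace=False)
        pool = np.union1d(parents[a], parents[b])          # crossover: merge parents
        child = rng.choice(pool, size=n_remove, replace=False)
        if rng.random() < 0.2:                             # mutation: swap one well
            child[rng.integers(n_remove)] = rng.integers(n_wells)
        # discard children with duplicate wells and draw a fresh individual instead
        children.append(child if len(np.unique(child)) == n_remove else random_individual())
    population = parents + children

best = max(population, key=fitness)
print("wells to remove:", sorted(best.tolist()), "| map error:", round(-fitness(best), 3))
```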

  5. A centre-free approach for resource allocation with lower bounds

    NASA Astrophysics Data System (ADS)

    Obando, Germán; Quijano, Nicanor; Rakoto-Ravalontsalama, Naly

    2017-09-01

    Since the complexity and scale of systems are continuously increasing, there is a growing interest in developing distributed algorithms that are capable of addressing information constraints, especially for solving optimisation and decision-making problems. In this paper, we propose a novel method to solve distributed resource allocation problems that include lower bound constraints. The optimisation process is carried out by a set of agents that use a communication network to coordinate their decisions. Convergence and optimality of the method are guaranteed under some mild assumptions related to the convexity of the problem and the connectivity of the underlying graph. Finally, we compare our approach with other techniques reported in the literature, and we present some engineering applications.

  6. Chemical etching of stainless steel 301 for improving performance of electrochemical capacitors in aqueous electrolyte

    NASA Astrophysics Data System (ADS)

    Jeżowski, P.; Nowicki, M.; Grzeszkowiak, M.; Czajka, R.; Béguin, F.

    2015-04-01

    The main purpose of the study was to increase the surface roughness of stainless steel 301 current collectors by etching, in order to improve the electrochemical performance of electrical double-layer capacitors (EDLC) in 1 mol L-1 lithium sulphate electrolyte. Etching was realized in 1:3:30 (HNO3:HCl:H2O) solution with times varying up to 10 min. For the considered 15 μm thick foil and a mass loss around 0.4 wt.%, pitting was uniform, with diameter of pits ranging from 100 to 300 nm. Atomic force microscopy (AFM) showed an increase of average surface roughness (Ra) from 5 nm for the as-received stainless steel foil to 24 nm for the pitted material. Electrochemical impedance spectroscopy realized on EDLCs with coated electrodes either on as-received or pitted foil in 1 mol L-1 Li2SO4 gave equivalent distributed resistance (EDR) of 8 Ω and 2 Ω, respectively, demonstrating a substantial improvement of collector/electrode interface after pitting. Correlatively, the EDLCs with pitted collector displayed a better charge propagation and low ohmic losses even at relatively high current of 20 A g-1. Hence, chemical pitting of stainless steel current collectors is an appropriate method for optimising the performance of EDLCs in neutral aqueous electrolyte.

  7. Improving fMRI reliability in presurgical mapping for brain tumours.

    PubMed

    Stevens, M Tynan R; Clarke, David B; Stroink, Gerhard; Beyea, Steven D; D'Arcy, Ryan Cn

    2016-03-01

    Functional MRI (fMRI) is becoming increasingly integrated into clinical practice for presurgical mapping. Current efforts are focused on validating data quality, with reliability being a major factor. In this paper, we demonstrate the utility of a recently developed approach that uses receiver operating characteristic-reliability (ROC-r) to: (1) identify reliable versus unreliable data sets; (2) automatically select processing options to enhance data quality; and (3) automatically select individualised thresholds for activation maps. Presurgical fMRI was conducted in 16 patients undergoing surgical treatment for brain tumours. Within-session test-retest fMRI was conducted, and ROC-reliability of the patient group was compared to a previous healthy control cohort. Individually optimised preprocessing pipelines were determined to improve reliability. Spatial correspondence was assessed by comparing the fMRI results to intraoperative cortical stimulation mapping, in terms of the distance to the nearest active fMRI voxel. The average ROC-r reliability for the patients was 0.58±0.03, as compared to 0.72±0.02 in healthy controls. For the patient group, this increased significantly to 0.65±0.02 by adopting optimised preprocessing pipelines. Co-localisation of the fMRI maps with cortical stimulation was significantly better for more reliable versus less reliable data sets (8.3±0.9 vs 29±3 mm, respectively). We demonstrated ROC-r analysis for identifying reliable fMRI data sets, choosing optimal postprocessing pipelines, and selecting patient-specific thresholds. Data sets with higher reliability also showed closer spatial correspondence to cortical stimulation. ROC-r can thus identify poor fMRI data at time of scanning, allowing for repeat scans when necessary. ROC-r analysis provides optimised and automated fMRI processing for improved presurgical mapping. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  8. Analysis of Multilayered Printed Circuit Boards using Computed Tomography

    DTIC Science & Technology

    2014-05-01

    complex PCBs that present a challenge for any testing or fault analysis. Set-to-work testing and fault analysis of any electronic circuit require...Electronic Warfare and Radar Division in December 2010. He is currently in Electro-Optic Countermeasures Group. Samuel works on embedded system design...and software optimisation of complex electro-optical systems, including the set to work and characterisation of these systems. He has a Bachelor of

  9. Integrating professionalism teaching into undergraduate medical education in the UK setting.

    PubMed

    Goldie, John

    2008-06-01

    This paper examines how professionalism teaching might be integrated into undergraduate medical education in the United Kingdom setting. It advocates adopting an outcome-based approach to curriculum planning, using the Scottish Deans' Medical Curriculum Group's (SDMCG) outcomes as a starting point. In discussing the curricular content, potential learning methods and strategies, theoretical considerations are explored. Student selection, assessment and strategies for optimising the educational environment are also considered.

  10. Developing Intervention Strategies to Optimise Body Composition in Early Childhood in South Africa

    PubMed Central

    Tomaz, Simone A.; Stone, Matthew; Hinkley, Trina; Jones, Rachel A.; Louw, Johann; Twine, Rhian; Kahn, Kathleen; Norris, Shane A.

    2017-01-01

    Purpose. The purpose of this research was to collect data to inform intervention strategies to optimise body composition in South African preschool children. Methods. Data were collected in urban and rural settings. Weight status, physical activity, and gross motor skill assessments were conducted with 341 3–6-year-old children, and 55 teachers and parents/caregivers participated in focus groups. Results. Overweight and obesity were a concern in low-income urban settings (14%), but levels of physical activity and gross motor skills were adequate across all settings. Focus group findings from urban and rural settings indicated that teachers would welcome input on leading activities to promote physical activity and gross motor skill development. Teachers and parents/caregivers were also positive about young children being physically active. Recommendations for potential intervention strategies include a teacher-training component, parent/child activity mornings, and a home-based component for parents/caregivers. Conclusion. The findings suggest that an intervention focussed on increasing physical activity and improving gross motor skills per se is largely not required but that contextually relevant physical activity and gross motor skills may still be useful for promoting healthy weight and a vehicle for engaging with teachers and parents/caregivers for promoting other child outcomes, such as cognitive development. PMID:28194417

  11. Rough flows and homogenization in stochastic turbulence

    NASA Astrophysics Data System (ADS)

    Bailleul, I.; Catellier, R.

    2017-10-01

    We provide in this work a tool-kit for the study of homogenisation of random ordinary differential equations, in the form of a user-friendly black box based on the technology of rough flows. We illustrate the use of this setting on the example of stochastic turbulence.

  12. Surface roughness manifestations of deep-seated landslide processes

    NASA Astrophysics Data System (ADS)

    Booth, A. M.; Roering, J. J.; Lamb, M. P.

    2012-12-01

    In many mountainous drainage basins, deep-seated landslides evacuate large volumes of sediment from small surface areas, leaving behind a strong topographic signature that sets landscape roughness over a range of spatial scales. At long spatial wavelengths of hundreds to thousands of meters, landslides tend to inhibit channel incision and limit topographic relief, effectively smoothing the topography at this length scale. However, at short spatial wavelengths on the order of meters, deformation of deep-seated landslides generates surface roughness that allows expert mappers or automated algorithms to distinguish landslides from the surrounding terrain. Here, we directly connect the characteristic spatial wavelengths and amplitudes of this fine scale surface roughness to the underlying landslide deformation processes. We utilize the two-dimensional wavelet transform with high-resolution, airborne LiDAR-derived digital elevation models to systematically document the characteristic length scales and amplitudes of different kinematic units within slow moving earthflows, a common type of deep-seated landslide. In earthflow source areas, discrete slumped blocks generate high surface roughness, reflecting an extensional deformation regime. In earthflow transport zones, where material translates with minimal surface deformation, roughness decreases as other surface processes quickly smooth short wavelength features. In earthflow depositional toes, compression folds and thrust faults again increase short wavelength surface roughness. When an earthflow becomes inactive, roughness in all of these kinematic zones systematically decreases with time, allowing relative dating of earthflow deposits. We also document how each of these roughness expressions depends on earthflow velocity, using sub-pixel change detection software (COSI-Corr) and pairs of orthorectified aerial photographs to determine spatially extensive landslide surface displacements. In source areas, the wavelength of slumped blocks tends to correlate with velocity as predicted by a simple sliding block model, but the amplitude is insensitive to velocity, suggesting that landslide depth rather than velocity sets this characteristic block amplitude. In both transport zones and depositional toes, the amplitude of the surface roughness is higher where the longitudinal gradient in velocity is higher, confirming that differential movement generates and maintains this fine scale roughness.

  13. Selection of representative embankments based on rough set - fuzzy clustering method

    NASA Astrophysics Data System (ADS)

    Bin, Ou; Lin, Zhi-xiang; Fu, Shu-yan; Gao, Sheng-song

    2018-02-01

    A prerequisite for the comprehensive evaluation of embankment safety is the selection of representative unit embankments; on the basis of dividing the levee into units, the influencing factors and the classification of the unit embankments are drafted. Based on rough set-fuzzy clustering, the influence factors of the unit embankments are measured with quantitative and qualitative indexes. A fuzzy similarity matrix of the standard embankments is constructed, and the fuzzy equivalence matrix is then calculated from the fuzzy similarity matrix by the square method. By setting a threshold on the fuzzy equivalence matrix, the unit embankments are clustered, and the representative unit embankments are selected from the resulting classes.
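
    The square method and threshold clustering mentioned above can be sketched as follows: the fuzzy similarity matrix is repeatedly composed with itself under max-min composition until it becomes a fuzzy equivalence matrix, and a lambda-cut then yields the classes. The similarity values below are made up for illustration.

```python
# Square (transitive-closure) method and lambda-cut clustering for a fuzzy relation.
import numpy as np

def max_min_compose(a, b):
    """Max-min composition of two fuzzy relations."""
    return np.max(np.minimum(a[:, :, None], b[None, :, :]), axis=1)

def transitive_closure(r):
    """Repeatedly square the relation until it stops changing."""
    while True:
        r2 = max_min_compose(r, r)
        if np.allclose(r2, r):
            return r2
        r = r2

def lambda_cut_clusters(r_eq, lam):
    """Group items whose equivalence degree is at least lambda."""
    n = r_eq.shape[0]
    unassigned, clusters = set(range(n)), []
    while unassigned:
        i = unassigned.pop()
        members = {i} | {j for j in unassigned if r_eq[i, j] >= lam}
        unassigned -= members
        clusters.append(sorted(members))
    return clusters

similarity = np.array([[1.0, 0.8, 0.3, 0.2],
                       [0.8, 1.0, 0.4, 0.3],
                       [0.3, 0.4, 1.0, 0.7],
                       [0.2, 0.3, 0.7, 1.0]])
equivalence = transitive_closure(similarity)
print(lambda_cut_clusters(equivalence, lam=0.7))   # e.g. [[0, 1], [2, 3]]
```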

  14. Sequential projection pursuit for optimised vibration-based damage detection in an experimental wind turbine blade

    NASA Astrophysics Data System (ADS)

    Hoell, Simon; Omenzetter, Piotr

    2018-02-01

    To advance the concept of smart structures in large systems, such as wind turbines (WTs), it is desirable to be able to detect structural damage early while using minimal instrumentation. Data-driven vibration-based damage detection methods can be competitive in that respect because global vibrational responses encompass the entire structure. Multivariate damage sensitive features (DSFs) extracted from acceleration responses enable changes in a structure to be detected via statistical methods. However, even though such DSFs contain information about the structural state, they may not be optimised for the damage detection task. This paper addresses this shortcoming by exploring a DSF projection technique specialised for statistical structural damage detection. High dimensional initial DSFs are projected onto a low-dimensional space for improved damage detection performance and simultaneous reduction of the computational burden. The technique is based on sequential projection pursuit, where the projection vectors are optimised one by one using an advanced evolutionary strategy. The approach is applied to laboratory experiments with a small-scale WT blade under wind-like excitations. Autocorrelation function coefficients calculated from acceleration signals are employed as DSFs. The optimal numbers of projection vectors are identified with the help of a fast forward selection procedure. To benchmark the proposed method, selections of original DSFs as well as principal component analysis scores from these features are additionally investigated. The optimised DSFs are tested for damage detection on previously unseen data from the healthy state and a wide range of damage scenarios. It is demonstrated that using selected subsets of the initial and transformed DSFs improves damage detectability compared to the full set of features. Furthermore, superior results can be achieved by projecting autocorrelation coefficients onto just a single optimised projection vector.

  15. Determination of optimal ultrasound planes for the initialisation of image registration during endoscopic ultrasound-guided procedures.

    PubMed

    Bonmati, Ester; Hu, Yipeng; Gibson, Eli; Uribarri, Laura; Keane, Geri; Gurusami, Kurinchi; Davidson, Brian; Pereira, Stephen P; Clarkson, Matthew J; Barratt, Dean C

    2018-06-01

    Navigation of endoscopic ultrasound (EUS)-guided procedures of the upper gastrointestinal (GI) system can be technically challenging due to the small fields-of-view of ultrasound and optical devices, as well as the anatomical variability and limited number of orienting landmarks during navigation. Co-registration of an EUS device and a pre-procedure 3D image can enhance the ability to navigate. However, the fidelity of this contextual information depends on the accuracy of registration. The purpose of this study was to develop and test the feasibility of a simulation-based planning method for pre-selecting patient-specific EUS-visible anatomical landmark locations to maximise the accuracy and robustness of a feature-based multimodality registration method. A registration approach was adopted in which landmarks are registered to anatomical structures segmented from the pre-procedure volume. The predicted target registration errors (TREs) of EUS-CT registration were estimated using simulated visible anatomical landmarks and a Monte Carlo simulation of landmark localisation error. The optimal planes were selected based on the 90th percentile of TREs, which provide a robust and more accurate EUS-CT registration initialisation. The method was evaluated by comparing the accuracy and robustness of registrations initialised using optimised planes versus non-optimised planes using manually segmented CT images and simulated ([Formula: see text]) or retrospective clinical ([Formula: see text]) EUS landmarks. The results show a lower 90th percentile TRE when registration is initialised using the optimised planes compared with a non-optimised initialisation approach (p value [Formula: see text]). The proposed simulation-based method to find optimised EUS planes and landmarks for EUS-guided procedures may have the potential to improve registration accuracy. Further work will investigate applying the technique in a clinical setting.

  16. An IDS Alerts Aggregation Algorithm Based on Rough Set Theory

    NASA Astrophysics Data System (ADS)

    Zhang, Ru; Guo, Tao; Liu, Jianyi

    2018-03-01

    Within a system in which several IDSs have been deployed, a great number of alerts can be triggered by a single security event, making real alerts harder to find. To deal with redundant alerts, we propose a scheme based on rough set theory. Using basic concepts from rough set theory, the importance of the attributes in the alerts is calculated first. With the resulting attribute importance, we can compute the similarity of two alerts, which is compared with a pre-defined threshold to determine whether the two alerts can be aggregated or not. The time interval must also be taken into consideration. The allowed time interval is computed individually for each type of alert, since different types of alerts may have different time gaps between consecutive alerts. At the end of this paper, we apply the proposed scheme to the DARPA98 dataset, and the experimental results show that our scheme can efficiently reduce the redundancy of alerts, so that administrators of the security system can avoid wasting time on useless alerts.
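
    A minimal sketch of the aggregation rule described above: alert similarity is an attribute-importance-weighted match score, and two alerts are merged when the score exceeds a threshold and their timestamps fall within the allowed interval. The importance weights would come from the rough-set analysis; here they are fixed by hand, and the attribute names are illustrative.

```python
# Importance-weighted alert similarity with a threshold and time-gap check.
from datetime import datetime

IMPORTANCE = {'signature': 0.5, 'src_ip': 0.2, 'dst_ip': 0.2, 'dst_port': 0.1}

def similarity(alert_a, alert_b):
    """Weighted fraction of matching attributes (weights sum to 1)."""
    return sum(w for attr, w in IMPORTANCE.items() if alert_a[attr] == alert_b[attr])

def can_aggregate(alert_a, alert_b, sim_threshold=0.7, max_gap_s=60):
    gap = abs((alert_a['time'] - alert_b['time']).total_seconds())
    return similarity(alert_a, alert_b) >= sim_threshold and gap <= max_gap_s

a1 = {'signature': 'portscan', 'src_ip': '10.0.0.5', 'dst_ip': '10.0.0.9',
      'dst_port': 22, 'time': datetime(2018, 3, 1, 12, 0, 0)}
a2 = {'signature': 'portscan', 'src_ip': '10.0.0.5', 'dst_ip': '10.0.0.9',
      'dst_port': 80, 'time': datetime(2018, 3, 1, 12, 0, 30)}
print(round(similarity(a1, a2), 2), can_aggregate(a1, a2))   # 0.9 True
```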

  17. The prefabricated building risk decision research of DM technology on the basis of Rough Set

    NASA Astrophysics Data System (ADS)

    Guo, Z. L.; Zhang, W. B.; Ma, L. H.

    2017-08-01

    With resource crises and increasingly serious pollution, green building has been strongly advocated by most countries and has become a new building style in the construction field. Compared with traditional building, prefabricated building has its own irreplaceable advantages but is influenced by many uncertainties. So far, most scholars around the world have studied it through qualitative research. This paper expounds the significance of prefabricated building. On the basis of existing research methods, combined with rough set theory, it redefines the factors that affect prefabricated building risk. Moreover, it quantifies the risk factors and establishes an expert knowledge base through assessment. The risk factors are then reduced by removing redundant attributes and attribute values, finally forming the simplest decision rules. These simplest decision rules, based on the data-mining (DM) technology of rough set theory, provide prefabricated building with a new, controllable decision-making method.

  18. Response surface methodology to optimise Accelerated Solvent Extraction of steviol glycosides from Stevia rebaudiana Bertoni leaves.

    PubMed

    Jentzer, Jean-Baptiste; Alignan, Marion; Vaca-Garcia, Carlos; Rigal, Luc; Vilarem, Gérard

    2015-01-01

    Following the approval of steviol glycosides as a food additive in Europe in December 2011, large-scale stevia cultivation will have to be developed within the EU. Thus there is a need to increase the efficiency of stevia evaluation through germplasm enhancement and agronomic improvement programs. To address the need for faster and reproducible sample throughput, conditions for automated extraction of dried stevia leaves using Accelerated Solvent Extraction were optimised. A response surface methodology was used to investigate the influence of three factors: extraction temperature, static time and cycle number on the stevioside and rebaudioside A extraction yields. The model showed that all the factors had an individual influence on the yield. Optimum extraction conditions were set at 100 °C, 4 min and 1 cycle, which yielded 91.8% ± 3.4% of total extractable steviol glycosides analysed. An additional optimisation was achieved by reducing the grind size of the leaves giving a final yield of 100.8% ± 3.3%. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Multi-objective ACO algorithms to minimise the makespan and the total rejection cost on BPMs with arbitrary job weights

    NASA Astrophysics Data System (ADS)

    Jia, Zhao-hong; Pei, Ming-li; Leung, Joseph Y.-T.

    2017-12-01

    In this paper, we investigate the batch-scheduling problem with rejection on parallel machines with non-identical job sizes and arbitrary job rejection weights. If a job is rejected, the corresponding penalty has to be paid. Our objective is to minimise the makespan of the processed jobs and the total rejection cost of the rejected jobs. Based on the selected multi-objective optimisation approaches, two problems, P1 and P2, are considered. In P1, the two objectives are linearly combined into a single objective. In P2, the two objectives are simultaneously minimised and the Pareto non-dominated solution set is to be found. Based on ant colony optimisation (ACO), two algorithms, called LACO and PACO, are proposed to address the two problems, respectively. Two different objective-oriented pheromone matrices and heuristic information are designed. Additionally, a local optimisation algorithm is adopted to improve the solution quality. Finally, simulated experiments are conducted, and the comparative results verify the effectiveness and efficiency of the proposed algorithms, especially on large-scale instances.
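
    For problem P2, the Pareto filtering step can be illustrated with a few lines of code: keep only the schedules whose (makespan, total rejection cost) pair is not dominated by any other candidate. The candidate values below are invented for illustration.

```python
# Pareto non-dominated filtering over (makespan, total rejection cost) pairs.
def dominates(a, b):
    """a dominates b if it is no worse in both objectives and differs from b."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def pareto_front(solutions):
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions)]

candidates = [(120, 35), (100, 50), (100, 45), (140, 20), (110, 60)]
print(sorted(pareto_front(candidates)))   # [(100, 45), (120, 35), (140, 20)]
```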

  20. Variability estimation of urban wastewater biodegradable fractions by respirometry.

    PubMed

    Lagarde, Fabienne; Tusseau-Vuillemin, Marie-Hélène; Lessard, Paul; Héduit, Alain; Dutrop, François; Mouchel, Jean-Marie

    2005-11-01

    This paper presents a methodology for assessing the variability of biodegradable chemical oxygen demand (COD) fractions in urban wastewaters. Thirteen raw wastewater samples from combined and separate sewers feeding the same plant were characterised, and two optimisation procedures were applied in order to evaluate the variability in biodegradable fractions and related kinetic parameters. Through an overall optimisation on all the samples, a unique kinetic parameter set was obtained with a three-substrate model including an adsorption stage. This method required powerful numerical treatment, but alleviated the identifiability problem compared with the usual sample-to-sample optimisation. The results showed that the fractionation of samples collected in the combined sewer was much more variable (standard deviation of 70% of the mean values) than the fractionation of the separate sewer samples, and that the slowly biodegradable COD fraction was the most significant fraction (45% of the total COD on average). Because these samples were collected under various rain conditions, the standard deviations obtained here on the combined sewer biodegradable fractions could be used as a first estimation of the variability of this type of sewer system.

  1. A rough set-based measurement model study on high-speed railway safety operation.

    PubMed

    Hu, Qizhou; Tan, Minjia; Lu, Huapu; Zhu, Yun

    2018-01-01

    To address the safety problems of high-speed railway operation and management, a new method is needed that builds on rough set theory and uncertainty measurement theory and carefully considers every factor of high-speed railway operation that contributes to the measurement indexes of safe operation. After a detailed analysis of the factors that influence high-speed railway operation safety, a rough measurement model is constructed to describe the operation process. On this basis, the paper groups the safety-influencing factors of high-speed railway operation into 16 measurement indexes covering staff, vehicle, equipment and environment, and thereby provides a further reasonable and effective theoretical method for the multiple-attribute measurement of operation safety in high-speed railway. Using operation data from 10 pivotal railway lines in China, the paper calculates the operation safety value with both the rough set-based measurement model and a value function model (a reference model for calculating the safety value). The results show that the safety-value curve obtained with the proposed method has smaller error and greater stability than that of the value function method, which verifies its feasibility and effectiveness.
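    As general background for rough set-based measurement of this kind (a textbook construction, not the paper's specific model), the sketch below computes Pawlak lower and upper approximations of a target concept from a toy decision table; the railway lines, attributes and values are invented for illustration.

        from collections import defaultdict

        def approximations(table, condition_attrs, target):
            """Pawlak lower/upper approximations of a target set of objects.

            table: object name -> dict of attribute values (toy data below).
            condition_attrs: attributes defining the indiscernibility relation.
            target: set of object names forming the concept to approximate.
            """
            blocks = defaultdict(set)
            for name, attrs in table.items():
                blocks[tuple(attrs[a] for a in condition_attrs)].add(name)
            lower = set().union(*[b for b in blocks.values() if b <= target])
            upper = set().union(*[b for b in blocks.values() if b & target])
            return lower, upper

        # Hypothetical mini decision table for three railway lines.
        lines = {
            "line1": {"staff": "good", "equipment": "ok"},
            "line2": {"staff": "good", "equipment": "ok"},
            "line3": {"staff": "poor", "equipment": "worn"},
        }
        # line1 and line2 are indiscernible, so the target {"line1"} has an
        # empty lower approximation and an upper approximation {line1, line2}.
        print(approximations(lines, ["staff", "equipment"], {"line1"}))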

  2. On the Effects of Surface Roughness on Boundary Layer Transition

    NASA Technical Reports Server (NTRS)

    Choudhari, Meelan M.; Li, Fei; Chang, Chau-Lyan; Edwards, Jack

    2009-01-01

    Surface roughness can influence laminar-turbulent transition in many different ways. This paper outlines selected analyses performed at the NASA Langley Research Center, ranging in speed from subsonic to hypersonic Mach numbers and highlighting the beneficial as well as adverse roles of the surface roughness in technological applications. The first theme pertains to boundary-layer tripping on the forebody of a hypersonic airbreathing configuration via a spanwise periodic array of trip elements, with the goal of understanding the physical mechanisms underlying roughness-induced transition in a high-speed boundary layer. The effect of an isolated, finite amplitude roughness element on a supersonic boundary layer is considered next. The other set of flow configurations examined herein corresponds to roughness based laminar flow control in subsonic and supersonic swept wing boundary layers. A common theme to all of the above configurations is the need to apply higher fidelity, physics based techniques to develop reliable predictions of roughness effects on laminar-turbulent transition.

  3. Measuring Skew in Average Surface Roughness as a Function of Surface Preparation

    NASA Technical Reports Server (NTRS)

    Stahl, Mark T.

    2015-01-01

    Characterizing surface roughness is important for predicting optical performance. Better measurement of surface roughness reduces grinding, saving both time and money, and allows the science requirements to be better defined. In this study, various materials are polished from a fine grind to a fine polish. Each sample's RMS surface roughness is measured at 81 locations in a 9x9 square grid using a Zygo white light interferometer at regular intervals during the polishing process. Each data set is fit with various standard distributions and tested for goodness of fit. We show that the skew in the RMS data changes as a function of polishing time.

  4. A facile approach for reducing the working voltage of Au/TiO2/Au nanostructured memristors by enhancing the local electric field

    NASA Astrophysics Data System (ADS)

    Arab Bafrani, Hamidreza; Ebrahimi, Mahdi; Bagheri Shouraki, Saeed; Moshfegh, Alireza Z.

    2018-01-01

    Memristor devices have attracted tremendous interest due to different applications ranging from nonvolatile data storage to neuromorphic computing units. Exploring the role of surface roughness at the bottom electrode (BE)/active layer interface provides useful guidelines for the optimization of memristor switching performance. This study focuses on the effect of the surface roughness of the BE on the switching characteristics of Au/TiO2/Au three-layer memristor devices. An optimized wet-etching treatment was used to modify the surface roughness of the Au BE; the measurements indicate that the roughness of the Au BE is affected by both the duration and the solution concentration of the wet-etching process. We then fabricated arrays of TiO2-based nanostructured memristors sandwiched between two sets of cross-bar Au electrode lines (junction area 900 μm²). The results revealed a reduction in the working voltages in the current-voltage characteristics of the devices as the surface roughness at the Au (BE)/TiO2 active layer interface was increased. The set voltage of the device (Vset) significantly decreased from 2.26 V to 1.93 V when the interface roughness was increased from 4.2 nm to 13.1 nm. The present work provides information for better understanding the switching mechanism of titanium-dioxide-based devices, and it can be inferred that enhancing the roughness of the Au BE/TiO2 active layer interface leads to a localized non-uniform electric field distribution that plays a vital role in reducing the energy consumption of the device.

  5. Water Quality Assessment in the Harbin Reach of the Songhuajiang River (China) Based on a Fuzzy Rough Set and an Attribute Recognition Theoretical Model

    PubMed Central

    An, Yan; Zou, Zhihong; Li, Ranran

    2014-01-01

    A large number of parameters are acquired during practical water quality monitoring. If all the parameters are used in water quality assessment, the computational complexity will definitely increase. In order to reduce the input space dimensions, a fuzzy rough set was introduced to perform attribute reduction. Then, an attribute recognition theoretical model and entropy method were combined to assess water quality in the Harbin reach of the Songhuajiang River in China. A dataset consisting of ten parameters was collected from January to October in 2012. Fuzzy rough set was applied to reduce the ten parameters to four parameters: BOD5, NH3-N, TP, and F. coli (Reduct A). Considering that DO is a usual parameter in water quality assessment, another reduct, including DO, BOD5, NH3-N, TP, TN, F, and F. coli (Reduct B), was obtained. The assessment results of Reduct B show a good consistency with those of Reduct A, and this means that DO is not always necessary to assess water quality. The results with attribute reduction are not exactly the same as those without attribute reduction, which can be attributed to the α value decided by subjective experience. The assessment results gained by the fuzzy rough set obviously reduce computational complexity, and are acceptable and reliable. The model proposed in this paper enhances the water quality assessment system. PMID:24675643
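    The assessment above combines an attribute recognition model with the entropy method for weighting parameters. As a small, self-contained illustration of the entropy-weighting step only (the monitoring values below are made up, not the Songhuajiang data), a common formulation is:

        import numpy as np

        def entropy_weights(data):
            """Entropy weights for columns of a samples x parameters matrix of
            non-negative monitoring values (higher divergence -> higher weight)."""
            p = data / data.sum(axis=0)                     # column-wise proportions
            n = data.shape[0]
            with np.errstate(divide="ignore", invalid="ignore"):
                logs = np.where(p > 0, np.log(p), 0.0)      # zero entries contribute nothing
            e = -(p * logs).sum(axis=0) / np.log(n)         # normalised entropy per parameter
            d = 1.0 - e                                     # degree of divergence
            return d / d.sum()                              # weights sum to 1

        # Three hypothetical samples of three water-quality parameters.
        samples = np.array([[4.1, 0.5, 0.08],
                            [3.8, 0.9, 0.12],
                            [5.0, 0.4, 0.05]])
        print(entropy_weights(samples))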

  6. A three-way approach for protein function classification

    PubMed Central

    2017-01-01

    The knowledge of protein functions plays an essential role in understanding biological cells and has a significant impact on human life in areas such as personalized medicine, better crops and improved therapeutic interventions. Due to the expense and inherent difficulty of biological experiments, intelligent methods are generally relied upon for automatic assignment of functions to proteins. The technological advancements in the field of biology are improving our understanding of biological processes and are regularly resulting in new features and characteristics that better describe the role of proteins. These anticipated features cannot simply be neglected or overlooked in the design of more effective classification techniques. A key issue in this context, that is not being sufficiently addressed, is how to build effective classification models and approaches for protein function prediction by incorporating and taking advantage of the ever evolving biological information. In this article, we propose a three-way decision making approach which provides provisions for seeking and incorporating future information. We considered probabilistic rough sets based models such as Game-Theoretic Rough Sets (GTRS) and Information-Theoretic Rough Sets (ITRS) for inducing three-way decisions. An architecture of protein functions classification with probabilistic rough sets based three-way decisions is proposed and explained. Experiments are carried out on the Saccharomyces cerevisiae species dataset obtained from the Uniprot database with the corresponding functional classes extracted from the Gene Ontology (GO) database. The results indicate that as the level of biological information increases, the number of deferred cases is reduced while maintaining a similar level of accuracy. PMID:28234929

  7. A three-way approach for protein function classification.

    PubMed

    Ur Rehman, Hafeez; Azam, Nouman; Yao, JingTao; Benso, Alfredo

    2017-01-01

    The knowledge of protein functions plays an essential role in understanding biological cells and has a significant impact on human life in areas such as personalized medicine, better crops and improved therapeutic interventions. Due to the expense and inherent difficulty of biological experiments, intelligent methods are generally relied upon for automatic assignment of functions to proteins. The technological advancements in the field of biology are improving our understanding of biological processes and are regularly resulting in new features and characteristics that better describe the role of proteins. These anticipated features cannot simply be neglected or overlooked in the design of more effective classification techniques. A key issue in this context, that is not being sufficiently addressed, is how to build effective classification models and approaches for protein function prediction by incorporating and taking advantage of the ever evolving biological information. In this article, we propose a three-way decision making approach which provides provisions for seeking and incorporating future information. We considered probabilistic rough sets based models such as Game-Theoretic Rough Sets (GTRS) and Information-Theoretic Rough Sets (ITRS) for inducing three-way decisions. An architecture of protein functions classification with probabilistic rough sets based three-way decisions is proposed and explained. Experiments are carried out on the Saccharomyces cerevisiae species dataset obtained from the Uniprot database with the corresponding functional classes extracted from the Gene Ontology (GO) database. The results indicate that as the level of biological information increases, the number of deferred cases is reduced while maintaining a similar level of accuracy.

  8. Comparison between performances of three types of manual wheelchairs often distributed in low-resource settings.

    PubMed

    Rispin, Karen; Wee, Joy

    2015-07-01

    This study was conducted to compare the performance of three types of chairs in a low-resource setting. The larger goal was to provide information which will enable more effective use of limited funds by wheelchair manufacturers and suppliers in low-resource settings. The Motivation Rough Terrain and Whirlwind Rough Rider were compared in six skills tests which participants completed in one wheelchair type and then a day later in the other. A hospital-style folding transport wheelchair was also included in one test. For all skills, participants rated the ease or difficulty on a visual analogue scale. For all tracks, distance traveled and the physiological cost index were recorded. Data were analyzed using repeated measures analysis of variance. The Motivation wheelchair outperformed Whirlwind wheelchair on rough and smooth tracks, and in some metrics on the tight spaces track. Motivation and Whirlwind wheelchairs significantly outperformed the hospital transport wheelchair in all metrics on the rough track skills test. This comparative study provides data that are valuable for manufacturers and for those who provide wheelchairs to users. The comparison with the hospital-style transport chair confirms the cost to users of inappropriate wheelchair provision. Implications for Rehabilitation For those with compromised lower limb function, wheelchairs are essential to enable full participation and improved quality of life. Therefore, provision of wheelchairs which effectively enable mobility in the cultures and environments in which people with disabilities live is crucial. This includes low-resource settings where the need for appropriate seating is especially urgent. A repeated measures study to measure wheelchair performances in everyday skills in the setting where wheelchairs are used gives information on the quality of mobility provided by those wheelchairs. This study highlights differences in the performance of three types of wheelchairs often distributed in low-resource settings. This information can improve mobility for wheelchair users in those settings by enabling wheelchair manufacturers to optimize wheelchair design and providers to optimize the use of limited funds.

  9. Development of a core outcome set for effectiveness trials aimed at optimising prescribing in older adults in care homes.

    PubMed

    Millar, Anna N; Daffu-O'Reilly, Amrit; Hughes, Carmel M; Alldred, David P; Barton, Garry; Bond, Christine M; Desborough, James A; Myint, Phyo K; Holland, Richard; Poland, Fiona M; Wright, David

    2017-04-12

    Prescribing medicines for older adults in care homes is known to be sub-optimal. Whilst trials testing interventions to optimise prescribing in this setting have been published, heterogeneity in outcome reporting has hindered comparison of interventions, thus limiting evidence synthesis. The aim of this study was to develop a core outcome set (COS), a list of outcomes which should be measured and reported, as a minimum, for all effectiveness trials involving optimising prescribing in care homes. The COS was developed as part of the Care Homes Independent Pharmacist Prescribing Study (CHIPPS). A long-list of outcomes was identified through a review of published literature and stakeholder input. Outcomes were reviewed and refined prior to entering a two-round online Delphi exercise and then distributed via a web link to the CHIPPS Management Team, a multidisciplinary team including pharmacists, doctors and Patient Public Involvement representatives (amongst others), who comprised the Delphi panel. The Delphi panellists (n = 19) rated the importance of outcomes on a 9-point Likert scale from 1 (not important) to 9 (critically important). Consensus for an outcome being included in the COS was defined as ≥70% participants scoring 7-9 and <15% scoring 1-3. Exclusion was defined as ≥70% scoring 1-3 and <15% 7-9. Individual and group scores were fed back to participants alongside the second questionnaire round, which included outcomes for which no consensus had been achieved. A long-list of 63 potential outcomes was identified. Refinement of this long-list of outcomes resulted in 29 outcomes, which were included in the Delphi questionnaire (round 1). Following both rounds of the Delphi exercise, 13 outcomes (organised into seven overarching domains: medication appropriateness, adverse drug events, prescribing errors, falls, quality of life, all-cause mortality and admissions to hospital (and associated costs)) met the criteria for inclusion in the final COS. We have developed a COS for effectiveness trials aimed at optimising prescribing in older adults in care homes using robust methodology. Widespread adoption of this COS will facilitate evidence synthesis between trials. Future work should focus on evaluating appropriate tools for these key outcomes to further reduce heterogeneity in outcome measurement in this context.
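    The consensus rule quoted above (inclusion when at least 70% of panellists score an outcome 7-9 and fewer than 15% score it 1-3, exclusion under the mirrored condition) can be expressed compactly. The sketch below is an illustrative implementation of that rule with an invented set of panel scores; it is not the CHIPPS analysis code.

        def delphi_status(scores):
            """Apply the stated consensus rule to one outcome's 1-9 Likert ratings."""
            n = len(scores)
            high = sum(1 for s in scores if 7 <= s <= 9) / n   # share scoring 7-9
            low = sum(1 for s in scores if 1 <= s <= 3) / n    # share scoring 1-3
            if high >= 0.70 and low < 0.15:
                return "include"
            if low >= 0.70 and high < 0.15:
                return "exclude"
            return "no consensus"

        # Hypothetical panel of ten ratings for one candidate outcome.
        print(delphi_status([9, 8, 7, 8, 9, 7, 8, 6, 9, 7]))   # -> 'include'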

  10. Measuring uncertainty by extracting fuzzy rules using rough sets

    NASA Technical Reports Server (NTRS)

    Worm, Jeffrey A.

    1991-01-01

    Despite the advancements in the computer industry in the past 30 years, there is still one major deficiency. Computers are not designed to handle terms where uncertainty is present. To deal with uncertainty, techniques other than classical logic must be developed. The methods of statistical analysis, Dempster-Shafer theory, rough set theory, and fuzzy set theory are examined as ways to solve this problem. The fundamentals of these theories are combined to possibly provide the optimal solution. By incorporating principles from these theories, a decision making process may be simulated by extracting two sets of fuzzy rules: certain rules and possible rules. From these rules, a corresponding measure of how strongly each rule is believed is constructed. From this, the degree to which a fuzzy diagnosis is definable in terms of a set of fuzzy attributes is studied.

  11. Fabrication of anti-adhesion surfaces on aluminium substrates of rubber plastic moulds using electrolysis plasma treatment

    NASA Astrophysics Data System (ADS)

    Meng, Jianbing; Dong, Xiaojuan; Wei, Xiuting; Yin, Zhanmin

    2015-04-01

    An anti-adhesion surface with a water contact angle of 167° was fabricated on aluminium samples of rubber plastic moulds by electrolysis plasma treatment using mixed electrolytes of C6H5O7(NH4)3 and Na2SO4, followed by fluorination. To optimise the fabrication conditions, several important processing parameters such as the discharge voltage, discharge time, and concentrations of the supporting electrolyte and the stearic acid ethanol solution were examined systematically. Scanning electron microscopy (SEM) of the surface morphology revealed micrometre-scale pits and protrusions, with numerous nanometre-scale mastoids contained in the protrusions. These binary micro/nano-scale structures, which are similar to the micro-structures of soil-burrowing animals, play a critical role in achieving low adhesion properties. In addition, the anti-adhesion behaviour of the resulting samples was analysed with an atomic force microscope (AFM), a Fourier-transform infrared spectrophotometer (FTIR), an electron probe micro-analyser (EPMA), an optical contact angle meter, a digital Vickers microhardness (Hv) tester, and an electronic universal testing machine. The results show that the electrolysis plasma treatment does not require complex processing parameters, uses a simple device, and is an environmentally friendly and effective method. Under the optimised conditions, the contact angle (CA) of the modified anti-adhesion surface is up to 167°, the sliding angle (SA) is less than 2°, and the roughness of the sample surface is only 0.409 μm. Moreover, the adhesion force and Hv are 0.9 kN and 385, respectively.

  12. Avoidance of speckle noise in laser vibrometry by the use of kurtosis ratio: Application to mechanical fault diagnostics

    NASA Astrophysics Data System (ADS)

    Vass, J.; Šmíd, R.; Randall, R. B.; Sovka, P.; Cristalli, C.; Torcianti, B.

    2008-04-01

    This paper presents a statistical technique to enhance vibration signals measured by laser Doppler vibrometry (LDV). The method has been optimised for LDV signals measured on bearings of universal electric motors and applied to quality control of washing machines. Inherent problems of LDV are addressed, particularly the speckle noise occurring when rough surfaces are measured. The presence of speckle noise is detected using a new scalar indicator kurtosis ratio (KR), specifically designed to quantify the amount of random impulses generated by this noise. The KR is a ratio of the standard kurtosis and a robust estimate of kurtosis, thus indicating the outliers in the data. Since it is inefficient to reject the signals affected by the speckle noise, an algorithm for selecting an undistorted portion of a signal is proposed. The algorithm operates in the time domain and is thus fast and simple. The algorithm includes band-pass filtering and segmentation of the signal, as well as thresholding of the KR computed for each filtered signal segment. Algorithm parameters are discussed in detail and instructions for optimisation are provided. Experimental results demonstrate that speckle noise is effectively avoided in severely distorted signals, thus improving the signal-to-noise ratio (SNR) significantly. Typical faults are finally detected using squared envelope analysis. It is also shown that the KR of the band-pass filtered signal is related to the spectral kurtosis (SK).
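    As an illustration of a kurtosis-ratio style indicator, the sketch below contrasts the standard moment-based kurtosis of a segment with a robust, quantile-based (Moors-type) estimate. The paper's exact robust estimator is not stated here, so this surrogate, the test signal and the impulse amplitudes are assumptions for demonstration only.

        import numpy as np
        from scipy.stats import kurtosis

        def kurtosis_ratio(segment):
            """Ratio of standard (Pearson) kurtosis to a quantile-based robust
            kurtosis estimate; large values indicate impulsive outliers."""
            standard = kurtosis(segment, fisher=False)
            q = np.percentile(segment, [12.5, 25, 37.5, 62.5, 75, 87.5])
            robust = ((q[5] - q[3]) + (q[2] - q[0])) / (q[4] - q[1])  # Moors' estimator
            return standard / robust

        rng = np.random.default_rng(0)
        clean = rng.normal(size=2048)
        noisy = clean.copy()
        noisy[::200] += 25.0                       # speckle-like impulses
        print(kurtosis_ratio(clean), kurtosis_ratio(noisy))   # noisy >> clean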

  13. Effect of saliva and blood contamination after etching upon the shear bond strength between composite resin and enamel

    NASA Astrophysics Data System (ADS)

    Armadi, A. S.; Usman, M.; Suprastiwi, E.

    2017-08-01

    The aim of this study was to find out the surface roughness of composite resin veneer after brushing. In this study, 24 specimens of composite resin veneer are divided into three subgroups: brushed without toothpaste, brushed with non-herbal toothpaste, and brushed with herbal toothpaste. Brushing was performed for one set of 5,000 strokes and continued for a second set of 5,000 strokes. Roughness of composite resin veneer was determined using a Surface Roughness Tester. The results were statistically analyzed using Kruskal-Wallis nonparametric test and Post Hoc Mann-Whitney. The results indicate that the highest difference among the Ra values occurred within the subgroup that was brushed with the herbal toothpaste. In conclusion, the herbal toothpaste produced a rougher surface on composite resin veneer compared to non-herbal toothpaste.

  14. Simulation for Rough Mill Options

    Treesearch

    Janice K. Wiedenbeck

    1992-01-01

    How is rough mill production affected by lumber length? Lumber grade? Cutting quality? Cutting sizes? How would equipment purchase plans be prioritized? How do personnel shifts affect system productivity? What effect would a reduction in machine set-up time have on material flow? Simulation modeling is being widely used in many industries to provide valuable insight...

  15. Benchmarking performance measurement and lean manufacturing in the rough mill

    Treesearch

    Dan Cumbo; D. Earl Kline; Matthew S. Bumgardner

    2006-01-01

    Lean manufacturing represents a set of tools and a stepwise strategy for achieving smooth, predictable product flow, maximum product flexibility, and minimum system waste. While lean manufacturing principles have been successfully applied to some components of the secondary wood products value stream (e.g., moulding, turning, assembly, and finishing), the rough mill is...

  16. Cross-Modal Perception of Noise-in-Music: Audiences Generate Spiky Shapes in Response to Auditory Roughness in a Novel Electroacoustic Concert Setting

    PubMed Central

    Liew, Kongmeng; Lindborg, PerMagnus; Rodrigues, Ruth; Styles, Suzy J.

    2018-01-01

    Noise has become integral to electroacoustic music aesthetics. In this paper, we define noise as sound that is high in auditory roughness, and examine its effect on cross-modal mapping between sound and visual shape in participants. In order to preserve the ecological validity of contemporary music aesthetics, we developed Rama, a novel interface, for presenting experimentally controlled blocks of electronically generated sounds that varied systematically in roughness, and actively collected data from audience interaction. These sounds were then embedded as musical drones within the overall sound design of a multimedia performance with live musicians. Audience members listened to these sounds and collectively voted to create the shape of a visual graphic, presented as part of the audio–visual performance. The results of the concert setting were replicated in a controlled laboratory environment to corroborate the findings. Results show a consistent effect of auditory roughness on shape design, with rougher sounds corresponding to spikier shapes. We discuss the implications, as well as evaluate the audience interface. PMID:29515494

  17. A Granular Self-Organizing Map for Clustering and Gene Selection in Microarray Data.

    PubMed

    Ray, Shubhra Sankar; Ganivada, Avatharam; Pal, Sankar K

    2016-09-01

    A new granular self-organizing map (GSOM) is developed by integrating the concept of a fuzzy rough set with the SOM. While training the GSOM, the weights of a winning neuron and the neighborhood neurons are updated through a modified learning procedure. The neighborhood is newly defined using the fuzzy rough sets. The clusters (granules) evolved by the GSOM are presented to a decision table as its decision classes. Based on the decision table, a method of gene selection is developed. The effectiveness of the GSOM is shown in both clustering samples and developing an unsupervised fuzzy rough feature selection (UFRFS) method for gene selection in microarray data. While the superior results of the GSOM, as compared with the related clustering methods, are provided in terms of β -index, DB-index, Dunn-index, and fuzzy rough entropy, the genes selected by the UFRFS are not only better in terms of classification accuracy and a feature evaluation index, but also statistically more significant than the related unsupervised methods. The C-codes of the GSOM and UFRFS are available online at http://avatharamg.webs.com/software-code.

  18. Cross-Modal Perception of Noise-in-Music: Audiences Generate Spiky Shapes in Response to Auditory Roughness in a Novel Electroacoustic Concert Setting.

    PubMed

    Liew, Kongmeng; Lindborg, PerMagnus; Rodrigues, Ruth; Styles, Suzy J

    2018-01-01

    Noise has become integral to electroacoustic music aesthetics. In this paper, we define noise as sound that is high in auditory roughness, and examine its effect on cross-modal mapping between sound and visual shape in participants. In order to preserve the ecological validity of contemporary music aesthetics, we developed Rama, a novel interface, for presenting experimentally controlled blocks of electronically generated sounds that varied systematically in roughness, and actively collected data from audience interaction. These sounds were then embedded as musical drones within the overall sound design of a multimedia performance with live musicians. Audience members listened to these sounds and collectively voted to create the shape of a visual graphic, presented as part of the audio-visual performance. The results of the concert setting were replicated in a controlled laboratory environment to corroborate the findings. Results show a consistent effect of auditory roughness on shape design, with rougher sounds corresponding to spikier shapes. We discuss the implications, as well as evaluate the audience interface.

  19. Analysis of the shrinkage at the thick plate part using response surface methodology

    NASA Astrophysics Data System (ADS)

    Hatta, N. M.; Azlan, M. Z.; Shayfull, Z.; Roselina, S.; Nasir, S. M.

    2017-09-01

    Injection moulding is a well-known manufacturing process, especially for producing plastic products. To ensure final product quality, many precautions must be taken, such as setting the process parameters correctly at the initial stage. If these parameters are set up wrongly, defects may occur, and one of the best-known defects in the injection moulding process is shrinkage. To overcome this problem, the parameter settings must be adjusted optimally at this precautionary stage, and this paper focuses on analysing shrinkage in a thick plate part by optimising the parameters with the help of Response Surface Methodology (RSM) and ANOVA analysis. In previous studies, the parameter found to be most influential in minimising shrinkage of the moulded part was packing pressure. Therefore, with reference to the previous literature, packing pressure was selected as a parameter setting for this study along with three other parameters: melt temperature, cooling time and mould temperature. The analysis of the process was obtained from simulations in Autodesk Moldflow Insight (AMI) software, and the material used for the moulded part was Acrylonitrile Butadiene Styrene (ABS). The results show that the shrinkage can be minimised, and the significant parameters were found to be packing pressure, mould temperature and melt temperature.
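    As an illustration of the response-surface step (a generic second-order polynomial fit, not the study's AMI simulation data), the sketch below fits shrinkage against two of the parameters named above, packing pressure and melt temperature, using invented values from a 3x3 factorial design.

        import numpy as np

        # Hypothetical design: packing pressure (MPa) x melt temperature (degC),
        # with invented shrinkage responses (%).
        levels_p = [60.0, 70.0, 80.0]
        levels_t = [230.0, 240.0, 250.0]
        X = np.array([[pp, tt] for pp in levels_p for tt in levels_t])
        y = np.array([2.0, 2.1, 2.3, 1.7, 1.8, 2.0, 1.6, 1.6, 1.9])

        p, t = X[:, 0], X[:, 1]
        # Design matrix for a full quadratic response surface: 1, p, t, p*t, p^2, t^2.
        A = np.column_stack([np.ones_like(p), p, t, p * t, p**2, t**2])
        coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

        def predict(pp, mt):
            """Predicted shrinkage at an untried parameter setting."""
            return coeffs @ np.array([1.0, pp, mt, pp * mt, pp**2, mt**2])

        print(predict(75.0, 235.0))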

  20. Efficient and portable acceleration of quantum chemical many-body methods in mixed floating point precision using OpenACC compiler directives

    NASA Astrophysics Data System (ADS)

    Eriksen, Janus J.

    2017-09-01

    It is demonstrated how the non-proprietary OpenACC standard of compiler directives may be used to compactly and efficiently accelerate the rate-determining steps of two of the most routinely applied many-body methods of electronic structure theory, namely the second-order Møller-Plesset (MP2) model in its resolution-of-the-identity approximated form and the (T) triples correction to the coupled cluster singles and doubles model (CCSD(T)). By means of compute directives as well as the use of optimised device math libraries, the operations involved in the energy kernels have been ported to graphics processing unit (GPU) accelerators, and the associated data transfers correspondingly optimised to such a degree that the final implementations (using either double and/or single precision arithmetics) are capable of scaling to as large systems as allowed for by the capacity of the host central processing unit (CPU) main memory. The performance of the hybrid CPU/GPU implementations is assessed through calculations on test systems of alanine amino acid chains using one-electron basis sets of increasing size (ranging from double- to pentuple-ζ quality). For all but the smallest problem sizes of the present study, the optimised accelerated codes (using a single multi-core CPU host node in conjunction with six GPUs) are found to be capable of reducing the total time-to-solution by at least an order of magnitude over optimised, OpenMP-threaded CPU-only reference implementations.

  1. Collaborative development for setup, execution, sharing and analytics of complex NMR experiments.

    PubMed

    Irvine, Alistair G; Slynko, Vadim; Nikolaev, Yaroslav; Senthamarai, Russell R P; Pervushin, Konstantin

    2014-02-01

    Factory settings of NMR pulse sequences are rarely ideal for every scenario in which they are utilised. The optimisation of NMR experiments has for many years been performed locally, with implementations often specific to an individual spectrometer. Furthermore, these optimised experiments are normally retained solely for the use of an individual laboratory, spectrometer or even single user. Here we introduce a web-based service that provides a database for the deposition, annotation and optimisation of NMR experiments. The application uses a Wiki environment to enable the collaborative development of pulse sequences. It also provides a flexible mechanism to automatically generate NMR experiments from deposited sequences. Multidimensional NMR experiments of proteins and other macromolecules consume significant resources, in terms of both spectrometer time and effort required to analyse the results. Systematic analysis of simulated experiments can enable optimal allocation of NMR resources for structural analysis of proteins. Our web-based application (http://nmrplus.org) provides all the necessary information, including the auxiliaries (waveforms, decoupling sequences, etc.), for analysis of experiments by accurate numerical simulation of multidimensional NMR experiments. The online database of the NMR experiments, together with a systematic evaluation of their sensitivity, provides a framework for selection of the most efficient pulse sequences. The development of such a framework provides a basis for the collaborative optimisation of pulse sequences by the NMR community, with the benefits of this collective effort being available to the whole community. Copyright © 2013 Elsevier Inc. All rights reserved.

  2. Prediction of multi performance characteristics of wire EDM process using grey ANFIS

    NASA Astrophysics Data System (ADS)

    Kumanan, Somasundaram; Nair, Anish

    2017-09-01

    Superalloys are used to fabricate components in ultra-supercritical power plants. These hard-to-machine materials are processed using non-traditional machining methods such as wire-cut electrical discharge machining (wire EDM), which requires careful attention. This paper details the multi-performance optimisation of the wire EDM process using a grey ANFIS approach. Experiments are designed to establish the performance characteristics of wire EDM, such as surface roughness, material removal rate, wire wear rate and geometric tolerances. The control parameters are pulse-on time, pulse-off time, current, voltage, flushing pressure, wire tension, table feed and wire speed. Grey relational analysis is employed to optimise the multiple objectives. Analysis of variance of the grey grades is used to identify the critical parameters. A regression model is developed and used to generate datasets for training the proposed adaptive neuro-fuzzy inference system. The developed prediction model is tested for its prediction ability.
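    As an illustration of the grey relational analysis step used to fold several responses into a single grade (not the authors' grey ANFIS code), the sketch below normalises two invented wire-EDM responses, one larger-is-better and one smaller-is-better, and ranks the experiments by their grey relational grade.

        import numpy as np

        def grey_relational_grades(responses, larger_is_better, zeta=0.5):
            """Grey relational grade per experiment (rows) over several responses (columns)."""
            X = np.asarray(responses, dtype=float)
            norm = np.empty_like(X)
            for j, lib in enumerate(larger_is_better):
                col = X[:, j]
                if lib:   # e.g. material removal rate
                    norm[:, j] = (col - col.min()) / (col.max() - col.min())
                else:     # e.g. surface roughness, wire wear rate
                    norm[:, j] = (col.max() - col) / (col.max() - col.min())
            delta = np.abs(1.0 - norm)                          # deviation from the ideal sequence
            coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
            return coeff.mean(axis=1)                           # grade per experiment

        # Hypothetical responses for three experiments.
        mrr = [8.2, 10.5, 9.1]      # larger is better
        ra = [2.4, 3.1, 2.0]        # smaller is better
        grades = grey_relational_grades(np.column_stack([mrr, ra]), [True, False])
        print(grades.argmax(), grades)   # experiment with the highest grade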

  3. The Backscattering Phase Function for a Sphere with a Two-Scale Relief of Rough Surface

    NASA Astrophysics Data System (ADS)

    Klass, E. V.

    2017-12-01

    The backscattering of light from spherical surfaces characterized by one and two-scale roughness reliefs has been investigated. The analysis is performed using the three-dimensional Monte-Carlo program POKS-RG (geometrical-optics approximation), which makes it possible to take into account the roughness of objects under study by introducing local geometries of different levels. The geometric module of the program is aimed at describing objects by equations of second-order surfaces. One-scale roughness is set as an ensemble of geometric figures (convex or concave halves of ellipsoids or cones). The two-scale roughness is modeled by convex halves of ellipsoids, with surface containing ellipsoidal pores. It is shown that a spherical surface with one-scale convex inhomogeneities has a flatter backscattering phase function than a surface with concave inhomogeneities (pores). For a sphere with two-scale roughness, the dependence of the backscattering intensity is found to be determined mostly by the lower-level inhomogeneities. The influence of roughness on the dependence of the backscattering from different spatial regions of spherical surface is analyzed.

  4. Social Connections for Older People with Intellectual Disability in Ireland: Results from Wave One of IDS-TILDA

    ERIC Educational Resources Information Center

    McCausland, Darren; McCallion, Philip; Cleary, Eimear; McCarron, Mary

    2016-01-01

    Background: The literature on influences of community versus congregated settings raises questions about how social inclusion can be optimised for people with intellectual disability. This study examines social contacts for older people with intellectual disability in Ireland, examining differences in social connection for adults with intellectual…

  5. High-level ab initio studies of NO(X2Π)-O2(X3Σg -) van der Waals complexes in quartet states

    NASA Astrophysics Data System (ADS)

    Grein, Friedrich

    2018-05-01

    Geometry optimisations were performed on nine different structures of NO(X2Π)-O2(X3Σg-) van der Waals complexes in their quartet states, using the explicitly correlated RCCSD(T)-F12b method with basis sets up to the cc-pVQZ-F12 level. For the most stable configurations, counterpoise-corrected optimisations as well as extrapolations to the complete basis set (CBS) were performed. The X structure in the 4A‧ state was found to be most stable, with a CBS binding energy of -157 cm-1. The slipped tilted structures with N closer to O2 (Slipt-N), as well as the slipped parallel structure with O of NO closer to O2 (Slipp-O) in 4A″ states have binding energies of about -130 cm-1. C2v and linear complexes are less stable. According to calculated harmonic frequencies, the X isomer is bound. Isotropic hyperfine coupling constants of the complex are compared with those of the monomers.

  6. 3D fault curvature and fractal roughness: Insights for rupture dynamics and ground motions using a Discontinous Galerkin method

    NASA Astrophysics Data System (ADS)

    Ulrich, Thomas; Gabriel, Alice-Agnes

    2017-04-01

    Natural fault geometries are subject to a large degree of uncertainty. Their geometrical structure is not directly observable and may only be inferred from surface traces, or geophysical measurements. Most studies aiming at assessing the potential seismic hazard of natural faults rely on idealised shaped models, based on observable large-scale features. Yet, real faults are wavy at all scales, their geometric features presenting similar statistical properties from the micro to the regional scale. Dynamic rupture simulations aim to capture the observed complexity of earthquake sources and ground-motions. From a numerical point of view, incorporating rough faults in such simulations is challenging - it requires optimised codes able to run efficiently on high-performance computers and simultaneously handle complex geometries. Physics-based rupture dynamics hosted by rough faults appear to be much closer to source models inverted from observation in terms of complexity. Moreover, the simulated ground-motions present many similarities with observed ground-motions records. Thus, such simulations may foster our understanding of earthquake source processes, and help deriving more accurate seismic hazard estimates. In this presentation, the software package SeisSol (www.seissol.org), based on an ADER-Discontinuous Galerkin scheme, is used to solve the spontaneous dynamic earthquake rupture problem. The usage of tetrahedral unstructured meshes naturally allows for complicated fault geometries. However, SeisSol's high-order discretisation in time and space is not particularly suited for small-scale fault roughness. We will demonstrate modelling conditions under which SeisSol resolves rupture dynamics on rough faults accurately. The strong impact of the geometric gradient of the fault surface on the rupture process is then shown in 3D simulations. Following, the benefits of explicitly modelling fault curvature and roughness, in distinction to prescribing heterogeneous initial stress conditions on a planar fault, is demonstrated. Furthermore, we show that rupture extend, rupture front coherency and rupture speed are highly dependent on the initial amplitude of stress acting on the fault, defined by the normalized prestress factor R, the ratio of the potential stress drop over the breakdown stress drop. The effects of fault complexity are particularly pronounced for lower R. By low-pass filtering a rough fault at several cut-off wavelengths, we then try to capture rupture complexity using a simplified fault geometry. We find that equivalent source dynamics can only be obtained using a scarcely filtered fault associated with a reduced stress level. To investigate the wavelength-dependent roughness effect, the fault geometry is bandpass-filtered over several spectral ranges. We show that geometric fluctuations cause rupture velocity fluctuations of similar length scale. The impact of fault geometry is especially pronounced when the rupture front velocity is near supershear. Roughness fluctuations significantly smaller than the rupture front characteristic dimension (cohesive zone size) affect only macroscopic rupture properties, thus, posing a minimum length scale limiting the required resolution of 3D fault complexity. Lastly, the effect of fault curvature and roughness on the simulated ground-motions is assessed. 
Despite employing a simple linear slip weakening friction law, the simulated ground-motions compare well with estimates from ground motions prediction equations, even at relatively high frequencies.

  7. Modeling and evaluating of surface roughness prediction in micro-grinding on soda-lime glass considering tool characterization

    NASA Astrophysics Data System (ADS)

    Cheng, Jun; Gong, Yadong; Wang, Jinsheng

    2013-11-01

    Current research on micro-grinding mainly focuses on the optimal processing technology for different materials. However, the material removal mechanism in micro-grinding is the basis for achieving a high-quality processed surface. Therefore, this paper proposes a novel method for predicting surface roughness in the micro-grinding of hard brittle materials that accounts for the grain protrusion topography of the micro-grinding tool. The differences in material removal mechanism between the conventional grinding process and the micro-grinding process are analyzed. Topography characterization was performed on micro-grinding tools fabricated by electroplating. Models of grain density generation and grain interval are built, and a new model for predicting micro-grinding surface roughness is developed. To verify the precision and applicability of the proposed surface roughness prediction model, an orthogonal micro-grinding experiment on soda-lime glass was designed and conducted. A series of micro-machined surfaces of the brittle material with roughness ranging from 78 nm to 0.98 μm was achieved. The experimental roughness results show clear agreement with the predicted roughness, and the component variable describing the size effect in the prediction model is calculated to be 1.5×10⁷ by an inverse method based on the experimental results. The proposed model builds a distribution set to account for grain densities at different protrusion heights, and the micro-grinding tools used in the experiment are characterized on the basis of this distribution set. The close agreement between the predictions of the proposed model and the experimental measurements demonstrates the effectiveness of the model. The proposed method thus provides theoretical and experimental reference for the material removal mechanism in the micro-grinding of soda-lime glass.

  8. Comparison of two metrological approaches for the prediction of human haptic perception

    NASA Astrophysics Data System (ADS)

    Neumann, Annika; Frank, Daniel; Vondenhoff, Thomas; Schmitt, Robert

    2016-06-01

    Haptic perception is regarded as a key component of customer appreciation and acceptance for various products. The prediction of customers’ haptic perception is of interest both during product development and production phases. This paper presents the results of a multivariate analysis between perceived roughness and texture related surface measurements, to examine whether perceived roughness can be accurately predicted using technical measurements. Studies have shown that standardized measurement parameters, such as the roughness coefficients (e.g. Rz or Ra), do not show a one-dimensional linear correlation with the human perception (of roughness). Thus, an alternative measurement method was compared to standard measurements of roughness, in regard to its capability of predicting perceived roughness through technical measurements. To estimate perceived roughness, an experimental study was conducted in which 102 subjects evaluated four sets of 12 different geometrical surface structures regarding their relative perceived roughness. The two different metrological procedures were examined in relation to their capability to predict the perceived roughness of the subjects stated within the study. The standardized measurements of the surface roughness were made using a structured light 3D-scanner. As an alternative method, surface induced vibrations were measured by a finger-like sensor during robot-controlled traverse over a surface. The presented findings provide a better understanding of the predictability of human haptic perception using technical measurements.

  9. The role of connectedness in haptic object perception.

    PubMed

    Plaisier, Myrthe A; van Polanen, Vonne; Kappers, Astrid M L

    2017-03-02

    We can efficiently detect whether there is a rough object among a set of smooth objects using our sense of touch. We can also quickly determine the number of rough objects in our hand. In this study, we investigated whether the perceptual processing of rough and smooth objects is influenced if these objects are connected. In Experiment 1, participants were asked to identify whether there were exactly two rough target spheres among smooth distractor spheres, while we recorded their response times. The spheres were connected to form pairs: rough spheres were paired together and smooth spheres were paired together ('within pairs arrangement'), or a rough and a smooth sphere were connected ('between pairs arrangement'). Participants responded faster when the spheres in a pair were identical. In Experiment 2, we found that the advantage for within pairs arrangements was not driven by feature saliency. Overall our results show that haptic information is processed faster when targets were connected together compared to when targets were connected to distractors.

  10. Robust distributed model predictive control of linear systems with structured time-varying uncertainties

    NASA Astrophysics Data System (ADS)

    Zhang, Langwen; Xie, Wei; Wang, Jingcheng

    2017-11-01

    In this work, synthesis of robust distributed model predictive control (MPC) is presented for a class of linear systems subject to structured time-varying uncertainties. By decomposing a global system into smaller dimensional subsystems, a set of distributed MPC controllers, instead of a centralised controller, are designed. To ensure the robust stability of the closed-loop system with respect to model uncertainties, distributed state feedback laws are obtained by solving a min-max optimisation problem. The design of robust distributed MPC is then transformed into solving a minimisation optimisation problem with linear matrix inequality constraints. An iterative online algorithm with adjustable maximum iteration is proposed to coordinate the distributed controllers to achieve a global performance. The simulation results show the effectiveness of the proposed robust distributed MPC algorithm.

  11. ROMI-RIP: Rough Mill RIP-first simulator user's guide

    Treesearch

    R. Edward Thomas

    1995-01-01

    The ROugh Mill RIP-first simulator (ROMI-RIP) is a computer software package for IBM compatible personal computers that simulates current industrial practices for gang-ripping lumber. This guide shows the user how to set and examine the results of simulations regarding current or proposed mill practices. ROMI-RIP accepts cutting bills with up to 300 different part...

  12. Improving Vector Evaluated Particle Swarm Optimisation by Incorporating Nondominated Solutions

    PubMed Central

    Lim, Kian Sheng; Ibrahim, Zuwairie; Buyamin, Salinda; Ahmad, Anita; Naim, Faradila; Ghazali, Kamarul Hawari; Mokhtar, Norrima

    2013-01-01

    The Vector Evaluated Particle Swarm Optimisation algorithm is widely used to solve multiobjective optimisation problems. This algorithm optimises one objective using a swarm of particles where their movements are guided by the best solution found by another swarm. However, the best solution of a swarm is only updated when a newly generated solution has better fitness than the best solution at the objective function optimised by that swarm, yielding poor solutions for the multiobjective optimisation problems. Thus, an improved Vector Evaluated Particle Swarm Optimisation algorithm is introduced by incorporating the nondominated solutions as the guidance for a swarm rather than using the best solution from another swarm. In this paper, the performance of improved Vector Evaluated Particle Swarm Optimisation algorithm is investigated using performance measures such as the number of nondominated solutions found, the generational distance, the spread, and the hypervolume. The results suggest that the improved Vector Evaluated Particle Swarm Optimisation algorithm has impressive performance compared with the conventional Vector Evaluated Particle Swarm Optimisation algorithm. PMID:23737718

  13. Improving Vector Evaluated Particle Swarm Optimisation by incorporating nondominated solutions.

    PubMed

    Lim, Kian Sheng; Ibrahim, Zuwairie; Buyamin, Salinda; Ahmad, Anita; Naim, Faradila; Ghazali, Kamarul Hawari; Mokhtar, Norrima

    2013-01-01

    The Vector Evaluated Particle Swarm Optimisation algorithm is widely used to solve multiobjective optimisation problems. This algorithm optimises one objective using a swarm of particles where their movements are guided by the best solution found by another swarm. However, the best solution of a swarm is only updated when a newly generated solution has better fitness than the best solution at the objective function optimised by that swarm, yielding poor solutions for the multiobjective optimisation problems. Thus, an improved Vector Evaluated Particle Swarm Optimisation algorithm is introduced by incorporating the nondominated solutions as the guidance for a swarm rather than using the best solution from another swarm. In this paper, the performance of improved Vector Evaluated Particle Swarm Optimisation algorithm is investigated using performance measures such as the number of nondominated solutions found, the generational distance, the spread, and the hypervolume. The results suggest that the improved Vector Evaluated Particle Swarm Optimisation algorithm has impressive performance compared with the conventional Vector Evaluated Particle Swarm Optimisation algorithm.
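    The improvement described in the two records above hinges on maintaining the set of nondominated solutions to guide each swarm. As a minimal illustration of that bookkeeping (not the authors' VEPSO implementation), the sketch below filters a list of objective vectors down to the Pareto-optimal subset for a minimisation problem, using invented values.

        def nondominated(points):
            """Return the objective vectors not dominated by any other
            (all objectives minimised)."""
            def dominates(a, b):
                return (all(x <= y for x, y in zip(a, b))
                        and any(x < y for x, y in zip(a, b)))
            return [p for p in points
                    if not any(dominates(q, p) for q in points if q is not p)]

        # Hypothetical archive of two-objective solutions.
        archive = [(1.0, 5.0), (2.0, 2.0), (3.0, 1.0), (2.5, 2.5), (4.0, 4.0)]
        print(nondominated(archive))   # [(1.0, 5.0), (2.0, 2.0), (3.0, 1.0)]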

  14. Dynamic mortar finite element method for modeling of shear rupture on frictional rough surfaces

    NASA Astrophysics Data System (ADS)

    Tal, Yuval; Hager, Bradford H.

    2017-09-01

    This paper presents a mortar-based finite element formulation for modeling the dynamics of shear rupture on rough interfaces governed by slip-weakening and rate and state (RS) friction laws, focusing on the dynamics of earthquakes. The method utilizes the dual Lagrange multipliers and the primal-dual active set strategy concepts, together with a consistent discretization and linearization of the contact forces and constraints, and the friction laws to obtain a semi-smooth Newton method. The discretization of the RS friction law involves a procedure to condense out the state variables, thus eliminating the addition of another set of unknowns into the system. Several numerical examples of shear rupture on frictional rough interfaces demonstrate the efficiency of the method and examine the effects of the different time discretization schemes on the convergence, energy conservation, and the time evolution of shear traction and slip rate.

  15. Optimisation of Ferrochrome Addition Using Multi-Objective Evolutionary and Genetic Algorithms for Stainless Steel Making via AOD Converter

    NASA Astrophysics Data System (ADS)

    Behera, Kishore Kumar; Pal, Snehanshu

    2018-03-01

    This paper describes a new approach towards optimum utilisation of the ferrochrome added during stainless steel making in the AOD converter. The objective of the optimisation is to enhance the end-blow chromium content of the steel and reduce the ferrochrome addition during refining. A thermodynamics-based mathematical model was developed and used to compute the optimum trade-off between ferrochrome addition and end-blow chromium content of stainless steel using a predator-prey genetic algorithm trained on 100 datasets, considering different input and output variables such as the oxygen, argon and nitrogen blowing rates, blowing duration, initial bath temperature, chromium and carbon content, and the weight of ferrochrome added during refining. The optimisation is performed within constraints imposed on the input parameters, whose values must fall within certain ranges. The analysis of the Pareto fronts generates a set of feasible optimal solutions between the two conflicting objectives that provides an effective guideline for better ferrochrome utilisation. It is found that, beyond a certain critical range, further addition of ferrochrome does not affect the chromium percentage of the steel. A single-variable response analysis is performed to study the variation and interaction effects of the individual input parameters on the output variables.

  16. Optimising the production of succinate and lactate in Escherichia coli using a hybrid of artificial bee colony algorithm and minimisation of metabolic adjustment.

    PubMed

    Tang, Phooi Wah; Choon, Yee Wen; Mohamad, Mohd Saberi; Deris, Safaai; Napis, Suhaimi

    2015-03-01

    Metabolic engineering is a research field that focuses on the design of models for metabolism, and uses computational procedures to suggest genetic manipulation. It aims to improve the yield of particular chemical or biochemical products. Several traditional metabolic engineering methods are commonly used to increase the production of a desired target, but the products are always far below their theoretical maximums. Using numeral optimisation algorithms to identify gene knockouts may stall at a local minimum in a multivariable function. This paper proposes a hybrid of the artificial bee colony (ABC) algorithm and the minimisation of metabolic adjustment (MOMA) to predict an optimal set of solutions in order to optimise the production rate of succinate and lactate. The dataset used in this work was from the iJO1366 Escherichia coli metabolic network. The experimental results include the production rate, growth rate and a list of knockout genes. From the comparative analysis, ABCMOMA produced better results compared to previous works, showing potential for solving genetic engineering problems. Copyright © 2014 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.

  17. A novel swarm intelligence algorithm for finding DNA motifs.

    PubMed

    Lei, Chengwei; Ruan, Jianhua

    2009-01-01

    Discovering DNA motifs from co-expressed or co-regulated genes is an important step towards deciphering complex gene regulatory networks and understanding gene functions. Despite significant improvement in the last decade, it still remains one of the most challenging problems in computational molecular biology. In this work, we propose a novel motif finding algorithm that finds consensus patterns using a population-based stochastic optimisation technique called Particle Swarm Optimisation (PSO), which has been shown to be effective in optimising difficult multidimensional problems in continuous domains. We propose to use a word dissimilarity graph to remap the neighborhood structure of the solution space of DNA motifs, and propose a modification of the naive PSO algorithm to accommodate discrete variables. In order to improve efficiency, we also propose several strategies for escaping from local optima and for automatically determining the termination criteria. Experimental results on simulated challenge problems show that our method is both more efficient and more accurate than several existing algorithms. Applications to several sets of real promoter sequences also show that our approach is able to detect known transcription factor binding sites, and outperforms two of the most popular existing algorithms.

  18. Calibration of phoswich-based lung counting system using realistic chest phantom.

    PubMed

    Manohari, M; Mathiyarasu, R; Rajagopal, V; Meenakshisundaram, V; Indira, R

    2011-03-01

    A phoswich detector housed inside a low-background steel room and coupled with state-of-the-art pulse shape discrimination (PSD) electronics has recently been established at the Radiological Safety Division of IGCAR for in vivo monitoring of actinides. The various parameters of the PSD electronics were optimised to achieve efficient background reduction in the low-energy regions. The PSD with optimised parameters reduced the steel room background from 9.5 to 0.28 cps in the 17 keV region and from 5.8 to 0.3 cps in the 60 keV region. The figure of merit for the timing spectrum of the system is 3.0. The true signal loss due to PSD was found to be less than 2%. The phoswich system was calibrated with the Lawrence Livermore National Laboratory realistic chest phantom loaded with a (241)Am-activity-tagged lung set. Calibration factors for varying chest wall composition and chest wall thickness, expressed in terms of muscle-equivalent chest wall thickness, were established. The (241)Am activity in the JAERI phantom, which was received as part of an IAEA inter-comparison exercise, was estimated. This paper presents the optimisation of the PSD electronics and the salient results of the calibration.

  19. Implementing large-scale programmes to optimise the health workforce in low- and middle-income settings: a multicountry case study synthesis.

    PubMed

    Gopinathan, Unni; Lewin, Simon; Glenton, Claire

    2014-12-01

    To identify factors affecting the implementation of large-scale programmes to optimise the health workforce in low- and middle-income countries. We conducted a multicountry case study synthesis. Eligible programmes were identified through consultation with experts and using Internet searches. Programmes were selected purposively to match the inclusion criteria. Programme documents were gathered via Google Scholar and PubMed and from key informants. The SURE Framework - a comprehensive list of factors that may influence the implementation of health system interventions - was used to organise the data. Thematic analysis was used to identify the key issues that emerged from the case studies. Programmes from Brazil, Ethiopia, India, Iran, Malawi, Venezuela and Zimbabwe were selected. Key system-level factors affecting the implementation of the programmes were related to health worker training and continuing education, management and programme support structures, the organisation and delivery of services, community participation, and the sociopolitical environment. Existing weaknesses in health systems may undermine the implementation of large-scale programmes to optimise the health workforce. Changes in the roles and responsibilities of cadres may also, in turn, impact the health system throughout. © 2014 John Wiley & Sons Ltd.

  20. Rough set soft computing cancer classification and network: one stone, two birds.

    PubMed

    Zhang, Yue

    2010-07-15

    Gene expression profiling provides tremendous information to help unravel the complexity of cancer. The selection of the most informative genes from huge noise for cancer classification has taken centre stage, along with predicting the function of such identified genes and the construction of direct gene regulatory networks at different system levels with a tuneable parameter. A new study by Wang and Gotoh described a novel Variable Precision Rough Sets-rooted robust soft computing method to successfully address these problems and has yielded some new insights. The significance of this progress and its perspectives will be discussed in this article.

  1. Rough Set Approach to Incomplete Multiscale Information System

    PubMed Central

    Yang, Xibei; Qi, Yong; Yu, Dongjun; Yu, Hualong; Song, Xiaoning; Yang, Jingyu

    2014-01-01

    Multiscale information system is a new knowledge representation system for expressing the knowledge with different levels of granulations. In this paper, by considering the unknown values, which can be seen everywhere in real world applications, the incomplete multiscale information system is firstly investigated. The descriptor technique is employed to construct rough sets at different scales for analyzing the hierarchically structured data. The problem of unravelling decision rules at different scales is also addressed. Finally, the reduct descriptors are formulated to simplify decision rules, which can be derived from different scales. Some numerical examples are employed to substantiate the conceptual arguments. PMID:25276852
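
    As background, the sketch below computes classical single-scale rough-set lower and upper approximations on a complete toy decision table; the multiscale and incomplete-data extensions described in the abstract build on exactly this construction. The table and attribute names are illustrative assumptions.

      # Classical rough-set lower/upper approximations from an equivalence relation.
      from collections import defaultdict

      def partition(objects, attrs):
          """Group objects into equivalence classes by their values on `attrs`."""
          blocks = defaultdict(list)
          for name, row in objects.items():
              blocks[tuple(row[a] for a in attrs)].append(name)
          return list(blocks.values())

      def approximations(objects, attrs, target):
          lower, upper = set(), set()
          for block in partition(objects, attrs):
              if set(block) <= target:
                  lower |= set(block)      # block certainly inside the concept
              if set(block) & target:
                  upper |= set(block)      # block possibly inside the concept
          return lower, upper

      # Toy decision table: condition attributes 'a', 'b'; concept X = {x1, x2}.
      table = {
          "x1": {"a": 1, "b": 0},
          "x2": {"a": 1, "b": 0},
          "x3": {"a": 1, "b": 1},
          "x4": {"a": 0, "b": 1},
      }
      print(approximations(table, ["a", "b"], {"x1", "x2"}))
      # lower = {'x1', 'x2'}, upper = {'x1', 'x2'}; the boundary is upper - lower.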

  2. Simultaneous feature selection and parameter optimisation using an artificial ant colony: case study of melting point prediction.

    PubMed

    O'Boyle, Noel M; Palmer, David S; Nigsch, Florian; Mitchell, John Bo

    2008-10-29

    We present a novel feature selection algorithm, Winnowing Artificial Ant Colony (WAAC), that performs simultaneous feature selection and model parameter optimisation for the development of predictive quantitative structure-property relationship (QSPR) models. The WAAC algorithm is an extension of the modified ant colony algorithm of Shen et al. (J Chem Inf Model 2005, 45: 1024-1029). We test the ability of the algorithm to develop a predictive partial least squares model for the Karthikeyan dataset (J Chem Inf Model 2005, 45: 581-590) of melting point values. We also test its ability to perform feature selection on a support vector machine model for the same dataset. Starting from an initial set of 203 descriptors, the WAAC algorithm selected a PLS model with 68 descriptors which has an RMSE on an external test set of 46.6 degrees C and R2 of 0.51. The number of components chosen for the model was 49, which was close to optimal for this feature selection. The selected SVM model has 28 descriptors (cost of 5, epsilon of 0.21) and an RMSE of 45.1 degrees C and R2 of 0.54. This model outperforms a kNN model (RMSE of 48.3 degrees C, R2 of 0.47) for the same data and has similar performance to a Random Forest model (RMSE of 44.5 degrees C, R2 of 0.55). However it is much less prone to bias at the extremes of the range of melting points as shown by the slope of the line through the residuals: -0.43 for WAAC/SVM, -0.53 for Random Forest. With a careful choice of objective function, the WAAC algorithm can be used to optimise machine learning and regression models that suffer from overfitting. Where model parameters also need to be tuned, as is the case with support vector machine and partial least squares models, it can optimise these simultaneously. The moving probabilities used by the algorithm are easily interpreted in terms of the best and current models of the ants, and the winnowing procedure promotes the removal of irrelevant descriptors.

  3. Validation and optimisation of an ICD-10-coded case definition for sepsis using administrative health data

    PubMed Central

    Jolley, Rachel J; Jetté, Nathalie; Sawka, Keri Jo; Diep, Lucy; Goliath, Jade; Roberts, Derek J; Yipp, Bryan G; Doig, Christopher J

    2015-01-01

    Objective Administrative health data are important for health services and outcomes research. We optimised and validated in intensive care unit (ICU) patients an International Classification of Disease (ICD)-coded case definition for sepsis, and compared this with an existing definition. We also assessed the definition's performance in non-ICU (ward) patients. Setting and participants All adults (aged ≥18 years) admitted to a multisystem ICU with general medicosurgical ICU care from one of three tertiary care centres in the Calgary region in Alberta, Canada, between 1 January 2009 and 31 December 2012 were included. Research design Patient medical records were randomly selected and linked to the discharge abstract database. In ICU patients, we validated the Canadian Institute for Health Information (CIHI) ICD-10-CA (Canadian Revision)-coded definition for sepsis and severe sepsis against a reference standard medical chart review, and optimised this algorithm through examination of other conditions apparent in sepsis. Measures Sensitivity (Sn), specificity (Sp), positive predictive value (PPV) and negative predictive value (NPV) were calculated. Results Sepsis was present in 604 of 1001 ICU patients (60.4%). The CIHI ICD-10-CA-coded definition for sepsis had Sn (46.4%), Sp (98.7%), PPV (98.2%) and NPV (54.7%); and for severe sepsis had Sn (47.2%), Sp (97.5%), PPV (95.3%) and NPV (63.2%). The optimised ICD-coded algorithm for sepsis increased Sn by 25.5% and NPV by 11.9% with slightly lowered Sp (85.4%) and PPV (88.2%). For severe sepsis both Sn (65.1%) and NPV (70.1%) increased, while Sp (88.2%) and PPV (85.6%) decreased slightly. Conclusions This study demonstrates that sepsis is highly undercoded in administrative data, thus under-ascertaining the true incidence of sepsis. The optimised ICD-coded definition has a higher validity with higher Sn and should be preferentially considered if used for surveillance purposes. PMID:26700284
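
    The validity measures reported above follow directly from the 2x2 agreement between the coded definition and the chart-review reference standard; the sketch below shows the computation. The counts are approximate back-calculations from the reported percentages, given here only for illustration.

      # Sensitivity, specificity, PPV and NPV from a 2x2 validation table.
      def validity(tp, fp, fn, tn):
          return {
              "sensitivity": tp / (tp + fn),   # coded positive among true sepsis cases
              "specificity": tn / (tn + fp),   # coded negative among non-sepsis cases
              "ppv":         tp / (tp + fp),   # true sepsis among coded positives
              "npv":         tn / (tn + fn),   # non-sepsis among coded negatives
          }

      # Illustrative counts only (roughly consistent with the reported percentages).
      print(validity(tp=280, fp=5, fn=324, tn=392))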

  4. ATLAS software configuration and build tool optimisation

    NASA Astrophysics Data System (ADS)

    Rybkin, Grigory; Atlas Collaboration

    2014-06-01

    The ATLAS software code base is over 6 million lines organised in about 2000 packages. It makes use of some 100 external software packages, is developed by more than 400 developers and used by more than 2500 physicists from over 200 universities and laboratories in 6 continents. To meet the challenge of configuration and building of this software, the Configuration Management Tool (CMT) is used. CMT expects each package to describe its build targets, build and environment setup parameters, and dependencies on other packages in a text file called requirements, and each project (group of packages) to describe its policies and dependencies on other projects in a text project file. Based on the effective set of configuration parameters read from the requirements files of dependent packages and project files, CMT commands build the packages, generate the environment for their use, or query the packages. The main focus was on build time performance, which was optimised within several approaches: reduction of the number of reads of requirements files, which are now read once per package by a CMT build command that generates cached requirements files for subsequent CMT build commands; introduction of more fine-grained build parallelism at package task level, i.e., dependent applications and libraries are compiled in parallel; code optimisation of CMT commands used for build; and introduction of package-level build parallelism, i.e., the build of independent packages is parallelised. By default, CMT launches NUMBER-OF-PROCESSORS build commands in parallel. The other focus was on CMT commands optimisation in general, which made them approximately 2 times faster. CMT can generate a cached requirements file for the environment setup command, which is especially useful for deployment on distributed file systems like AFS or CERN VMFS. The use of parallelism, caching and code optimisation reduced software build time and environment setup time significantly (by several times), increased the efficiency of multi-core computing resource utilisation, and considerably improved the software developer and user experience.

  5. ROMI 4.0: Rough mill simulator 4.0 users manual

    Treesearch

    R. Edward Thomas; Timo Grueneberg; Urs Buehlmann

    2015-01-01

    The Rough MIll simulator (ROMI Version 4.0) is a computer software package for personal computers (PCs) that simulates current industrial practices for rip-first, chop-first, and rip and chop-first lumber processing. This guide shows how to set up the software; design, implement, and execute simulations; and examine the results. ROMI 4.0 accepts cutting bills with as...

  6. The fission yeast cytokinetic contractile ring regulates septum shape and closure

    PubMed Central

    Thiyagarajan, Sathish; Munteanu, Emilia Laura; Arasada, Rajesh; Pollard, Thomas D.; O'Shaughnessy, Ben

    2015-01-01

    ABSTRACT During cytokinesis, fission yeast and other fungi and bacteria grow a septum that divides the cell in two. In fission yeast closure of the circular septum hole by the β-glucan synthases (Bgs) and other glucan synthases in the plasma membrane is tightly coupled to constriction of an actomyosin contractile ring attached to the membrane. It is unknown how septum growth is coordinated over scales of several microns to maintain septum circularity. Here, we documented the shapes of ingrowing septum edges by measuring the roughness of the edges, a measure of the deviation from circularity. The roughness was small, with spatial correlations indicative of spatially coordinated growth. We hypothesized that Bgs-mediated septum growth is mechanosensitive and coupled to contractile ring tension. A mathematical model showed that ring tension then generates almost circular septum edges by adjusting growth rates in a curvature-dependent fashion. The model reproduced experimental roughness statistics and showed that septum synthesis sets the mean closure rate. Our results suggest that the fission yeast cytokinetic ring tension does not set the constriction rate but regulates septum closure by suppressing roughness produced by inherently stochastic molecular growth processes. PMID:26240178

  7. The fission yeast cytokinetic contractile ring regulates septum shape and closure.

    PubMed

    Thiyagarajan, Sathish; Munteanu, Emilia Laura; Arasada, Rajesh; Pollard, Thomas D; O'Shaughnessy, Ben

    2015-10-01

    During cytokinesis, fission yeast and other fungi and bacteria grow a septum that divides the cell in two. In fission yeast closure of the circular septum hole by the β-glucan synthases (Bgs) and other glucan synthases in the plasma membrane is tightly coupled to constriction of an actomyosin contractile ring attached to the membrane. It is unknown how septum growth is coordinated over scales of several microns to maintain septum circularity. Here, we documented the shapes of ingrowing septum edges by measuring the roughness of the edges, a measure of the deviation from circularity. The roughness was small, with spatial correlations indicative of spatially coordinated growth. We hypothesized that Bgs-mediated septum growth is mechanosensitive and coupled to contractile ring tension. A mathematical model showed that ring tension then generates almost circular septum edges by adjusting growth rates in a curvature-dependent fashion. The model reproduced experimental roughness statistics and showed that septum synthesis sets the mean closure rate. Our results suggest that the fission yeast cytokinetic ring tension does not set the constriction rate but regulates septum closure by suppressing roughness produced by inherently stochastic molecular growth processes. © 2015. Published by The Company of Biologists Ltd.

  8. Using Machine-Learning and Visualisation to Facilitate Learner Interpretation of Source Material

    ERIC Educational Resources Information Center

    Wolff, Annika; Mulholland, Paul; Zdrahal, Zdenek

    2014-01-01

    This paper describes an approach for supporting inquiry learning from source materials, realised and tested through a tool-kit. The approach is optimised for tasks that require a student to make interpretations across sets of resources, where opinions and justifications may be hard to articulate. We adopt a dialogue-based approach to learning…

  9. Design of optimised backstepping controller for the synchronisation of chaotic Colpitts oscillator using shark smell algorithm

    NASA Astrophysics Data System (ADS)

    Fouladi, Ehsan; Mojallali, Hamed

    2018-01-01

    In this paper, an adaptive backstepping controller has been tuned to synchronise two chaotic Colpitts oscillators in a master-slave configuration. The parameters of the controller are determined using shark smell optimisation (SSO) algorithm. Numerical results are presented and compared with those of particle swarm optimisation (PSO) algorithm. Simulation results show better performance in terms of accuracy and convergence for the proposed optimised method compared to PSO optimised controller or any non-optimised backstepping controller.

  10. Active-passive synergy for interpreting ocean L-band emissivity: Results from the CAROLS airborne campaigns

    NASA Astrophysics Data System (ADS)

    Martin, A. C. H.; Boutin, J.; Hauser, D.; Dinnat, E. P.

    2014-08-01

    The impact of the ocean surface roughness on the ocean L-band emissivity is investigated using simultaneous airborne measurements from an L-band radiometer (CAROLS) and from a C-band scatterometer (STORM) acquired in the Gulf of Biscay (off-the French Atlantic coasts) in November 2010. Two synergetic approaches are used to investigate the impact of surface roughness on the L-band brightness temperature (Tb). First, wind derived from the scatterometer measurements is used to analyze the roughness contribution to Tb as a function of wind and compare it with the one simulated by SMOS and Aquarius roughness models. Then residuals from this mean relationship are analyzed in terms of mean square slope derived from the STORM instrument. We show improvement of new radiometric roughness models derived from SMOS and Aquarius satellite measurements in comparison with prelaunch models. Influence of wind azimuth on Tb could not be evidenced from our data set. However, we point out the importance of taking into account large roughness scales (>20 cm) in addition to small roughness scale (5 cm) rapidly affected by wind to interpret radiometric measurements far from nadir. This was made possible thanks to simultaneous estimates of large and small roughness scales using STORM at small (7-16°) and large (30°) incidence angles.

  11. Approach to the determination of the contact angle in hydrophobic samples with simultaneous correction of the effect of the roughness

    NASA Astrophysics Data System (ADS)

    Domínguez, Noemí; Castilla, Pau; Linzoain, María Eugenia; Durand, Géraldine; García, Cristina; Arasa, Josep

    2018-04-01

    This work presents the validation study of a method developed to measure contact angles with a confocal device in a set of hydrophobic samples. The use of this device allows the evaluation of the roughness of the surface and the determination of the contact angle in the same area of the sample. Furthermore, a theoretical evaluation of the impact of the roughness of a nonsmooth surface in the calculation of the contact angle when it is not taken into account according to Wenzel's model is also presented.
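
    The roughness correction mentioned above relies on Wenzel's relation cos(theta_apparent) = r cos(theta_Young), where r is the ratio of true to projected surface area. A minimal sketch of recovering the intrinsic angle is given below; the numerical values are illustrative assumptions.

      # Recover the Young (intrinsic) contact angle from a measured apparent angle
      # using Wenzel's model.
      import math

      def young_angle_from_wenzel(theta_apparent_deg, roughness_ratio):
          cos_young = math.cos(math.radians(theta_apparent_deg)) / roughness_ratio
          cos_young = max(-1.0, min(1.0, cos_young))   # clamp numerical overshoot
          return math.degrees(math.acos(cos_young))

      # e.g. an apparent angle of 120 deg on a surface with r = 1.15
      print(young_angle_from_wenzel(120.0, 1.15))       # ~115.8 deg intrinsic angle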

  12. How can general paediatric training be optimised in highly specialised tertiary settings? Twelve tips from an interview-based study of trainees.

    PubMed

    Al-Yassin, Amina; Long, Andrew; Sharma, Sanjiv; May, Joanne

    2017-01-01

    Both general and subspecialty paediatric trainees undertake attachments in highly specialised tertiary hospitals. Trainee feedback suggests that mismatches in expectations between trainees and supervisors and a perceived lack of educational opportunities may lead to trainee dissatisfaction in such settings. With the 'Shape of Training' review (reshaping postgraduate training in the UK to focus on more general themes), this issue is likely to become more apparent. We wished to explore the factors that contribute to a positive educational environment and training experience and identify how this may be improved in highly specialised settings. General paediatric trainees working at all levels in subspecialty teams at a tertiary hospital were recruited (n=12). Semistructured interviews were undertaken to explore the strengths and weaknesses of training in such a setting and how this could be optimised. Appreciative inquiry methodology was used to identify areas of perceived best practice and consider how these could be promoted and disseminated. Twelve best practice themes were identified: (1) managing expectations by acknowledging the challenges; (2) educational contracting to identify learning needs and opportunities; (3) creative educational supervision; (4) centralised teaching events; (5) signposting learning opportunities; (6) curriculum-mapped pan-hospital teaching programmes; (7) local faculty groups with trainee representation; (8) interprofessional learning; (9) pastoral support systems; (10) crossover weeks to increase clinical exposure; (11) adequate clinical supervision; and (12) rota design to include teaching and clinic time. Tertiary settings have strengths, as well as challenges, for general paediatric training. Twelve trainee-generated tips have been identified to capitalise on the educational potential within these settings. Trainee feedback is essential to diagnose and improve educational environments and appreciative inquiry is a useful tool for this purpose.

  13. How can general paediatric training be optimised in highly specialised tertiary settings? Twelve tips from an interview-based study of trainees

    PubMed Central

    Al-Yassin, Amina; Long, Andrew; Sharma, Sanjiv; May, Joanne

    2017-01-01

    Objectives Both general and subspecialty paediatric trainees undertake attachments in highly specialised tertiary hospitals. Trainee feedback suggests that mismatches in expectations between trainees and supervisors and a perceived lack of educational opportunities may lead to trainee dissatisfaction in such settings. With the ‘Shape of Training’ review (reshaping postgraduate training in the UK to focus on more general themes), this issue is likely to become more apparent. We wished to explore the factors that contribute to a positive educational environment and training experience and identify how this may be improved in highly specialised settings. Methods General paediatric trainees working at all levels in subspecialty teams at a tertiary hospital were recruited (n=12). Semistructured interviews were undertaken to explore the strengths and weaknesses of training in such a setting and how this could be optimised. Appreciative inquiry methodology was used to identify areas of perceived best practice and consider how these could be promoted and disseminated. Results Twelve best practice themes were identified: (1) managing expectations by acknowledging the challenges; (2) educational contracting to identify learning needs and opportunities; (3) creative educational supervision; (4) centralised teaching events; (5) signposting learning opportunities; (6) curriculum-mapped pan-hospital teaching programmes; (7) local faculty groups with trainee representation; (8) interprofessional learning; (9) pastoral support systems; (10) crossover weeks to increase clinical exposure; (11) adequate clinical supervision; and (12) rota design to include teaching and clinic time. Conclusions Tertiary settings have strengths, as well as challenges, for general paediatric training. Twelve trainee-generated tips have been identified to capitalise on the educational potential within these settings. Trainee feedback is essential to diagnose and improve educational environments and appreciative inquiry is a useful tool for this purpose. PMID:29637130

  14. Evolvable rough-block-based neural network and its biomedical application to hypoglycemia detection system.

    PubMed

    San, Phyo Phyo; Ling, Sai Ho; Nuryani; Nguyen, Hung

    2014-08-01

    This paper focuses on the hybridization technology using rough sets concepts and neural computing for decision and classification purposes. Based on the rough set properties, the lower region and boundary region are defined to partition the input signal to a consistent (predictable) part and an inconsistent (random) part. In this way, the neural network is designed to deal only with the boundary region, which mainly consists of an inconsistent part of applied input signal causing inaccurate modeling of the data set. Owing to different characteristics of neural network (NN) applications, the same structure of conventional NN might not give the optimal solution. Based on the knowledge of application in this paper, a block-based neural network (BBNN) is selected as a suitable classifier due to its ability to evolve internal structures and adaptability in dynamic environments. This architecture will systematically incorporate the characteristics of application to the structure of hybrid rough-block-based neural network (R-BBNN). A global training algorithm, hybrid particle swarm optimization with wavelet mutation is introduced for parameter optimization of proposed R-BBNN. The performance of the proposed R-BBNN algorithm was evaluated by an application to the field of medical diagnosis using real hypoglycemia episodes in patients with Type 1 diabetes mellitus. The performance of the proposed hybrid system has been compared with some of the existing neural networks. The comparison results indicated that the proposed method has improved classification performance and results in early convergence of the network.

  15. Fingerprinting the type of line edge roughness

    NASA Astrophysics Data System (ADS)

    Fernández Herrero, A.; Pflüger, M.; Scholze, F.; Soltwisch, V.

    2017-06-01

    Lamellar gratings are widely used diffractive optical elements and are prototypes of structural elements in integrated electronic circuits. EUV scatterometry is very sensitive to structure details and imperfections, which makes it suitable for the characterization of nanostructured surfaces. As compared to X-ray methods, EUV scattering allows for steeper angles of incidence, which is highly preferable for the investigation of small measurement fields on semiconductor wafers. For the control of the lithographic manufacturing process, a rapid in-line characterization of nanostructures is indispensable. Numerous studies on the determination of regular geometry parameters of lamellar gratings from optical and Extreme Ultraviolet (EUV) scattering also investigated the impact of roughness on the respective results. The challenge is to appropriately model the influence of structure roughness on the diffraction intensities used for the reconstruction of the surface profile. The impact of roughness was already studied analytically but for gratings with a periodic pseudoroughness, because of practical restrictions of the computational domain. Our investigation aims at a better understanding of the scattering caused by line roughness. We designed a set of nine lamellar Si-gratings to be studied by EUV scatterometry. It includes one reference grating with no artificial roughness added, four gratings with a periodic roughness distribution, two with a prevailing line edge roughness (LER) and another two with line width roughness (LWR), and four gratings with a stochastic roughness distribution (two with LER and two with LWR). We show that the type of line roughness has a strong impact on the diffuse scatter angular distribution. Our experimental results are not described well by the present modelling approach based on small, periodically repeated domains.

  16. Optimising the Inflammatory Bowel Disease Unit to Improve Quality of Care: Expert Recommendations.

    PubMed

    Louis, Edouard; Dotan, Iris; Ghosh, Subrata; Mlynarsky, Liat; Reenaers, Catherine; Schreiber, Stefan

    2015-08-01

    The best care setting for patients with inflammatory bowel disease [IBD] may be in a dedicated unit. Whereas not all gastroenterology units have the same resources to develop dedicated IBD facilities and services, there are steps that can be taken by any unit to optimise patients' access to interdisciplinary expert care. A series of pragmatic recommendations relating to IBD unit optimisation have been developed through discussion among a large panel of international experts. Suggested recommendations were extracted through systematic search of published evidence and structured requests for expert opinion. Physicians [n = 238] identified as IBD specialists by publications or clinical focus on IBD were invited for discussion and recommendation modification [Barcelona, Spain; 2014]. Final recommendations were voted on by the group. Participants also completed an online survey to evaluate their own experience related to IBD units. A total of 60% of attendees completed the survey, with 15% self-classifying their centre as a dedicated IBD unit. Only half of respondents indicated that they had a defined IBD treatment algorithm in place. Key recommendations included the need to develop a multidisciplinary team covering specifically-defined specialist expertise in IBD, to instil processes that facilitate cross-functional communication and to invest in shared care models of IBD management. Optimising the setup of IBD units will require progressive leadership and willingness to challenge the status quo in order to provide better quality of care for our patients. IBD units are an important step towards harmonising care for IBD across Europe and for establishing standards for disease management programmes. © European Crohn’s and Colitis Organisation 2015.

  17. Optimising the Inflammatory Bowel Disease Unit to Improve Quality of Care: Expert Recommendations

    PubMed Central

    Dotan, Iris; Ghosh, Subrata; Mlynarsky, Liat; Reenaers, Catherine; Schreiber, Stefan

    2015-01-01

    Introduction: The best care setting for patients with inflammatory bowel disease [IBD] may be in a dedicated unit. Whereas not all gastroenterology units have the same resources to develop dedicated IBD facilities and services, there are steps that can be taken by any unit to optimise patients’ access to interdisciplinary expert care. A series of pragmatic recommendations relating to IBD unit optimisation have been developed through discussion among a large panel of international experts. Methods: Suggested recommendations were extracted through systematic search of published evidence and structured requests for expert opinion. Physicians [n = 238] identified as IBD specialists by publications or clinical focus on IBD were invited for discussion and recommendation modification [Barcelona, Spain; 2014]. Final recommendations were voted on by the group. Participants also completed an online survey to evaluate their own experience related to IBD units. Results: A total of 60% of attendees completed the survey, with 15% self-classifying their centre as a dedicated IBD unit. Only half of respondents indicated that they had a defined IBD treatment algorithm in place. Key recommendations included the need to develop a multidisciplinary team covering specifically-defined specialist expertise in IBD, to instil processes that facilitate cross-functional communication and to invest in shared care models of IBD management. Conclusions: Optimising the setup of IBD units will require progressive leadership and willingness to challenge the status quo in order to provide better quality of care for our patients. IBD units are an important step towards harmonising care for IBD across Europe and for establishing standards for disease management programmes. PMID:25987349

  18. Fabrication of anti-adhesion surfaces on aluminium substrates of rubber plastic moulds using electrolysis plasma treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, Jianbing, E-mail: jianbingmeng@126.com; Dong, Xiaojuan; Wei, Xiuting

    An anti-adhesion surface with a water contact angle of 167° was fabricated on aluminium samples of rubber plastic moulds by electrolysis plasma treatment using mixed electrolytes of C6H5O7(NH4)3 and Na2SO4, followed by fluorination. To optimise the fabrication conditions, several important processing parameters such as the discharge voltage, discharge time, and concentrations of the supporting electrolyte and stearic acid ethanol solution were examined systematically. Scanning electron microscopy (SEM) of the surface morphology revealed micrometre-scale pits and protrusions, with numerous nanometre-scale mastoids contained in the protrusions. These binary micro/nano-scale structures, which are similar to the micro-structures of soil-burrowing animals, play a critical role in achieving low adhesion properties. In addition, the anti-adhesion behaviour of the resulting samples was analysed by atomic force microscopy (AFM), Fourier-transform infrared spectrophotometry (FTIR), electron probe micro-analysis (EPMA), optical contact angle measurement, digital Vickers microhardness (Hv) testing, and electronic universal testing. The results show that the electrolysis plasma treatment does not require complex processing parameters, uses a simple device, and is an environment-friendly and effective method. Under the optimised conditions, the contact angle (CA) of the modified anti-adhesion surface is up to 167°, the sliding angle (SA) is less than 2°, and the roughness of the sample surface is only 0.409 μm. Moreover, the adhesion force and Hv are 0.9 kN and 385, respectively.

  19. Optimizing the Determination of Roughness Parameters for Model Urban Canopies

    NASA Astrophysics Data System (ADS)

    Huq, Pablo; Rahman, Auvi

    2018-05-01

    We present an objective optimization procedure to determine the roughness parameters for very rough boundary-layer flow over model urban canopies. For neutral stratification the mean velocity profile above a model urban canopy is described by the logarithmic law together with the set of roughness parameters of displacement height d, roughness length z_0 , and friction velocity u_* . Traditionally, values of these roughness parameters are obtained by fitting the logarithmic law through (all) the data points comprising the velocity profile. The new procedure generates unique velocity profiles from subsets or combinations of the data points of the original velocity profile, after which all possible profiles are examined. Each of the generated profiles is fitted to the logarithmic law for a sequence of values of d, with the representative value of d obtained from the minima of the summed least-squares errors for all the generated profiles. The representative values of z_0 and u_* are identified by the peak in the bivariate histogram of z_0 and u_* . The methodology has been verified against laboratory datasets of flow above model urban canopies.
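
    A minimal version of the fitting step described above is sketched below: the neutral logarithmic law u(z) = (u*/kappa) ln((z - d)/z0) is fitted for a sequence of candidate displacement heights d, and the candidate with the smallest summed least-squares error is retained. The velocity profile and the candidate range are illustrative assumptions, and the full procedure (generated profile subsets and the bivariate histogram of z0 and u*) is not reproduced.

      # Fit the neutral logarithmic law by scanning candidate displacement heights d.
      import numpy as np

      KAPPA = 0.4  # von Karman constant

      def fit_log_law(z, u, d_candidates):
          best = None
          for d in d_candidates:
              mask = z > d
              x = np.log(z[mask] - d)                 # u = (u*/k)*x - (u*/k)*ln(z0)
              slope, intercept = np.polyfit(x, u[mask], 1)
              err = np.sum((np.polyval([slope, intercept], x) - u[mask]) ** 2)
              u_star = KAPPA * slope
              z0 = np.exp(-intercept / slope)
              if best is None or err < best[0]:
                  best = (err, d, z0, u_star)
          return best                                  # (sse, d, z0, u*)

      z = np.array([0.08, 0.10, 0.13, 0.17, 0.22, 0.30])   # measurement heights (m)
      u = np.array([1.9, 2.1, 2.35, 2.6, 2.85, 3.15])      # mean velocities (m/s), illustrative
      print(fit_log_law(z, u, d_candidates=np.linspace(0.0, 0.06, 61)))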

  20. Non-Contact Surface Roughness Measurement by Implementation of a Spatial Light Modulator

    PubMed Central

    Aulbach, Laura; Salazar Bloise, Félix; Lu, Min; Koch, Alexander W.

    2017-01-01

    The surface structure, especially the roughness, has a significant influence on numerous parameters, such as friction and wear, and therefore estimates the quality of technical systems. In the last decades, a broad variety of surface roughness measurement methods were developed. A destructive measurement procedure or the lack of feasibility of online monitoring are the crucial drawbacks of most of these methods. This article proposes a new non-contact method for measuring the surface roughness that is straightforward to implement and easy to extend to online monitoring processes. The key element is a liquid-crystal-based spatial light modulator, integrated in an interferometric setup. By varying the imprinted phase of the modulator, a correlation between the imprinted phase and the fringe visibility of an interferogram is measured, and the surface roughness can be derived. This paper presents the theoretical approach of the method and first simulation and experimental results for a set of surface roughnesses. The experimental results are compared with values obtained by an atomic force microscope and a stylus profiler. PMID:28294990

  1. A rough set-based association rule approach implemented on a brand trust evaluation model

    NASA Astrophysics Data System (ADS)

    Liao, Shu-Hsien; Chen, Yin-Ju

    2017-09-01

    In commerce, businesses use branding to differentiate their product and service offerings from those of their competitors. The brand incorporates a set of product or service features that are associated with that particular brand name and identifies the product/service segmentation in the market. This study proposes a new data mining approach, a rough set-based association rule induction, implemented on a brand trust evaluation model. In addition, it presents one way to deal with data uncertainty when analysing ratio-scale data, while creating predictive if-then rules that generalise data values to the retail region. As such, this study applies the proposed algorithms to analyse brand trust recall for alcoholic beverages. Finally, a discussion and conclusions are presented, with further managerial implications.

  2. Physiological effects and optimisation of nasal assist-control ventilation for patients with chronic obstructive pulmonary disease in respiratory failure

    PubMed Central

    Girault, C.; Chevron, V.; Richard, J. C.; Daudenthun, I.; Pasquis, P.; Leroy, J.; Bonmarchand, G.

    1997-01-01

    BACKGROUND: A study was undertaken to investigate the effects of non-invasive assist-control ventilation (ACV) by nasal mask on respiratory physiological parameters and comfort in acute on chronic respiratory failure (ACRF). METHODS: Fifteen patients with chronic obstructive pulmonary disease (COPD) were prospectively and randomly assigned to two non-invasive ventilation (NIV) sequences in spontaneous breathing (SB) and ACV mode. ACV settings were always optimised and therefore subsequently adjusted according to patient's tolerance and air leaks. RESULTS: ACV significantly decreased all the total inspiratory work of breathing (WOBinsp) parameters, pressure time product, and oesophageal pressure variation in comparison with SB mode. The ACV mode also resulted in a significant reduction in surface diaphragmatic electromyographic activity to 36% of the control values and significantly improved the breathing pattern. SB did not change the arterial blood gas tensions from baseline values whereas ACV significantly improved the PaO2 from a mean (SD) of 8.45 (2.95) kPa to 13.31 (2.15) kPa, the PaCO2 from 9.52 (1.61) kPa to 7.39 (1.39) kPa, and the pH from 7.32 (0.03) to 7.40 (0.07). The respiratory comfort was significantly lower with ACV than with SB. CONCLUSIONS: This study shows that the clinical benefit of non-invasive ACV in the management of ACRF in patients with COPD results in a reduced inspiratory muscle activity providing an improvement in breathing pattern and gas exchange. Despite respiratory discomfort, the muscle rest provided appears sufficient when ACV settings are optimised. PMID:9337827

  3. Algorithme intelligent d'optimisation d'un design structurel de grande envergure

    NASA Astrophysics Data System (ADS)

    Dominique, Stephane

    The implementation of an automated decision support system in the field of design and structural optimisation can give a significant advantage to any industry working on mechanical designs. Indeed, by providing solution ideas to a designer or by upgrading existing design solutions while the designer is not at work, the system may reduce the project cycle time, or allow more time to produce a better design. This thesis presents a new approach to automate a design process based on Case-Based Reasoning (CBR), in combination with a new genetic algorithm named Genetic Algorithm with Territorial core Evolution (GATE). This approach was developed in order to reduce the operating cost of the process. However, as the system implementation cost is quite expensive, the approach is better suited for large-scale design problems, and particularly for design problems that the designer plans to solve for many different specification sets. First, the CBR process uses a databank filled with every known solution to similar design problems. Then, the closest solutions to the current problem in terms of specifications are selected. After this, during the adaptation phase, an artificial neural network (ANN) interpolates amongst known solutions to produce an additional solution to the current problem using the current specifications as inputs. Each solution produced and selected by the CBR is then used to initialize the population of an island of the genetic algorithm. The algorithm will optimise the solution further during the refinement phase. Using progressive refinement, the algorithm starts using only the most important variables for the problem. Then, as the optimisation progresses, the remaining variables are gradually introduced, layer by layer. The genetic algorithm that is used is a new algorithm specifically created during this thesis to solve optimisation problems from the field of mechanical device structural design. The algorithm is named GATE, and is essentially a real-number genetic algorithm that prevents new individuals from being born too close to previously evaluated solutions. The restricted area becomes smaller or larger during the optimisation to allow global or local search when necessary. Also, a new search operator named Substitution Operator is incorporated in GATE. This operator allows an ANN surrogate model to guide the algorithm toward the most promising areas of the design space. The suggested CBR approach and GATE were tested on several simple test problems, as well as on the industrial problem of designing a gas turbine engine rotor's disc. These results are compared to other results obtained for the same problems by many other popular optimisation algorithms, such as (depending on the problem) gradient algorithms, a binary genetic algorithm, a real-number genetic algorithm, a genetic algorithm using multiple-parent crossovers, a differential evolution genetic algorithm, the Hooke & Jeeves generalized pattern search method and POINTER from the software I-SIGHT 3.5. Results show that GATE is quite competitive, giving the best results for 5 of the 6 constrained optimisation problems. GATE also provided the best results of all on a problem produced by a Maximum Set Gaussian landscape generator. Finally, GATE provided a disc 4.3% lighter than the best other tested algorithm (POINTER) for the gas turbine engine rotor's disc problem. One drawback of GATE is a lesser efficiency for highly multimodal unconstrained problems, for which it gave quite poor results relative to its implementation cost. 
To conclude, according to the preliminary results obtained during this thesis, the suggested CBR process, combined with GATE, seems to be a very good candidate to automate and accelerate the structural design of mechanical devices, potentially reducing significantly the cost of industrial preliminary design processes.
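
    The territorial idea attributed to GATE above can be illustrated with a simplified real-coded genetic algorithm in which offspring falling too close to any previously evaluated solution are rejected. The sketch below is not the thesis implementation; the radius, retry cap, operators and objective are assumptions.

      # Simplified real-coded GA with a "territorial" restriction around evaluated points.
      import math
      import random

      def territorial_ga(f, dim, bounds, pop=20, gens=40, radius=0.05, mut=0.1):
          lo, hi = bounds
          span = hi - lo
          archive = []                                    # every evaluated point

          def too_close(x):
              return any(math.dist(x, y) < radius * span for y in archive)

          def evaluate(x):
              archive.append(x)
              return f(x)

          population = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
          fitness = [evaluate(x) for x in population]

          for _ in range(gens):
              children = []
              while len(children) < pop:
                  cand = None
                  for _ in range(50):                     # retry cap so the loop cannot stall
                      a, b = random.sample(range(pop), 2)
                      w = random.random()
                      cand = [w * population[a][d] + (1 - w) * population[b][d] for d in range(dim)]
                      cand = [min(max(c + random.gauss(0, mut * span), lo), hi) for c in cand]
                      if not too_close(cand):             # enforce the territorial restriction
                          break
                  children.append(cand)
              child_fit = [evaluate(c) for c in children]
              merged = sorted(zip(population + children, fitness + child_fit), key=lambda t: t[1])
              population = [list(x) for x, _ in merged[:pop]]   # elitist survivor selection
              fitness = [v for _, v in merged[:pop]]
          return population[0], fitness[0]

      print(territorial_ga(lambda x: sum(v * v for v in x), dim=4, bounds=(-5.0, 5.0)))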

  4. Rough Set Soft Computing Cancer Classification and Network: One Stone, Two Birds

    PubMed Central

    Zhang, Yue

    2010-01-01

    Gene expression profiling provides tremendous information to help unravel the complexity of cancer. The selection of the most informative genes from huge noise for cancer classification has taken centre stage, along with predicting the function of such identified genes and the construction of direct gene regulatory networks at different system levels with a tuneable parameter. A new study by Wang and Gotoh described a novel Variable Precision Rough Sets-rooted robust soft computing method to successfully address these problems and has yielded some new insights. The significance of this progress and its perspectives will be discussed in this article. PMID:20706619

  5. Rough sets and Laplacian score based cost-sensitive feature selection

    PubMed Central

    Yu, Shenglong

    2018-01-01

    Cost-sensitive feature selection learning is an important preprocessing step in machine learning and data mining. Recently, most existing cost-sensitive feature selection algorithms are heuristic algorithms, which evaluate the importance of each feature individually and select features one by one. Obviously, these algorithms do not consider the relationship among features. In this paper, we propose a new algorithm for minimal cost feature selection called the rough sets and Laplacian score based cost-sensitive feature selection. The importance of each feature is evaluated by both rough sets and Laplacian score. Compared with heuristic algorithms, the proposed algorithm takes into consideration the relationship among features with locality preservation of Laplacian score. We select a feature subset with maximal feature importance and minimal cost when cost is undertaken in parallel, where the cost is given by three different distributions to simulate different applications. Different from existing cost-sensitive feature selection algorithms, our algorithm simultaneously selects out a predetermined number of “good” features. Extensive experimental results show that the approach is efficient and able to effectively obtain the minimum cost subset. In addition, the results of our method are more promising than the results of other cost-sensitive feature selection algorithms. PMID:29912884
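
    The locality-preservation part of the method rests on the Laplacian score, which rates a feature by how smoothly it varies over a nearest-neighbour similarity graph (smaller is better). The sketch below computes the score alone; the rough-set importance and the cost term of the paper are omitted, and the graph parameters and toy data are assumptions.

      # Laplacian score of each feature over a k-NN heat-kernel graph.
      import numpy as np

      def laplacian_score(X, k=5, t=None):
          n = X.shape[0]
          d2 = np.square(X[:, None, :] - X[None, :, :]).sum(-1)   # pairwise squared distances
          if t is None:
              t = d2.mean()                                        # heat-kernel width
          S = np.zeros((n, n))
          for i in range(n):
              nbrs = np.argsort(d2[i])[1:k + 1]                    # k nearest neighbours (skip self)
              S[i, nbrs] = np.exp(-d2[i, nbrs] / t)
          S = np.maximum(S, S.T)                                   # symmetrise the graph
          D = np.diag(S.sum(axis=1))
          L = D - S
          ones = np.ones(n)
          scores = []
          for r in range(X.shape[1]):
              f = X[:, r]
              f_tilde = f - (f @ D @ ones) / (ones @ D @ ones)     # remove the weighted mean
              scores.append((f_tilde @ L @ f_tilde) / (f_tilde @ D @ f_tilde))
          return np.array(scores)

      rng = np.random.default_rng(0)
      X = rng.normal(size=(40, 6))
      X[:, 0] = np.repeat([0.0, 3.0], 20) + 0.05 * X[:, 0]         # feature 0 follows a 2-cluster structure
      print(laplacian_score(X))                                    # feature 0 should score lowest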

  6. City traffic flow breakdown prediction based on fuzzy rough set

    NASA Astrophysics Data System (ADS)

    Yang, Xu; Da-wei, Hu; Bing, Su; Duo-jia, Zhang

    2017-05-01

    In city traffic management, traffic breakdown is a very important issue, which is defined as a speed drop of a certain amount within a dense traffic situation. In order to predict city traffic flow breakdown accurately, in this paper, we propose a novel city traffic flow breakdown prediction algorithm based on fuzzy rough set. Firstly, we illustrate the city traffic flow breakdown problem, in which three definitions are given, that is, 1) Pre-breakdown flow rate, 2) Rate, density, and speed of the traffic flow breakdown, and 3) Duration of the traffic flow breakdown. Moreover, we define a hazard function to represent the probability of the breakdown ending at a given time point. Secondly, as there are many redundant and irrelevant attributes in city flow breakdown prediction, we propose an attribute reduction algorithm using the fuzzy rough set. Thirdly, we discuss how to predict the city traffic flow breakdown based on attribute reduction and SVM classifier. Finally, experiments are conducted by collecting data from I-405 Freeway, which is located at Irvine, California. Experimental results demonstrate that the proposed algorithm is able to achieve lower average error rate of city traffic flow breakdown prediction.
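
    One plausible reading of the fuzzy-rough attribute reduction step is a dependency-based greedy search (QuickReduct-style), sketched below: attributes are added while the fuzzy-rough positive-region dependency of the decision on the selected attributes keeps improving. The similarity relation, implicator and toy data are assumptions, not the authors' exact formulation.

      # Greedy fuzzy-rough attribute reduction based on the dependency degree.
      import numpy as np

      def similarity(col):
          rng = float(col.max() - col.min()) or 1.0
          return 1.0 - np.abs(col[:, None] - col[None, :]) / rng   # fuzzy similarity in [0, 1]

      def dependency(X, y, attrs):
          R = np.minimum.reduce([similarity(X[:, a]) for a in attrs])  # t-norm = min
          gamma = 0.0
          for i in range(len(y)):
              same = (y == y[i])
              # fuzzy lower approximation of x_i's own decision class
              gamma += np.min(np.where(same, 1.0, 1.0 - R[i]))
          return gamma / len(y)

      def quickreduct(X, y):
          remaining, reduct, best = set(range(X.shape[1])), [], -1.0
          while remaining:
              scores = {a: dependency(X, y, reduct + [a]) for a in remaining}
              a, score = max(scores.items(), key=lambda kv: kv[1])
              if score <= best:                                    # no further improvement
                  break
              reduct.append(a)
              remaining.remove(a)
              best = score
          return reduct, best

      rng = np.random.default_rng(1)
      X = rng.random((30, 5))
      y = (X[:, 0] + 0.2 * X[:, 2] > 0.6).astype(int)   # decision mainly driven by attributes 0 and 2
      print(quickreduct(X, y))                          # attribute 0 should typically be chosen first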

  7. Rough sets and Laplacian score based cost-sensitive feature selection.

    PubMed

    Yu, Shenglong; Zhao, Hong

    2018-01-01

    Cost-sensitive feature selection learning is an important preprocessing step in machine learning and data mining. Recently, most existing cost-sensitive feature selection algorithms are heuristic algorithms, which evaluate the importance of each feature individually and select features one by one. Obviously, these algorithms do not consider the relationship among features. In this paper, we propose a new algorithm for minimal cost feature selection called the rough sets and Laplacian score based cost-sensitive feature selection. The importance of each feature is evaluated by both rough sets and Laplacian score. Compared with heuristic algorithms, the proposed algorithm takes into consideration the relationship among features with locality preservation of Laplacian score. We select a feature subset with maximal feature importance and minimal cost when cost is undertaken in parallel, where the cost is given by three different distributions to simulate different applications. Different from existing cost-sensitive feature selection algorithms, our algorithm simultaneously selects out a predetermined number of "good" features. Extensive experimental results show that the approach is efficient and able to effectively obtain the minimum cost subset. In addition, the results of our method are more promising than the results of other cost-sensitive feature selection algorithms.

  8. ROMI-3: Rough-Mill Simulator Version 3.0: User's Guide

    Treesearch

    Joel M. Weiss; R. Edward Thomas; R. Edward Thomas

    2005-01-01

    ROMI-3 Rough-Mill Simulator is a software package that simulates current industrial practices for rip-first and chop-first lumber processing. This guide shows the user how to set up and examine the results of simulations of current or proposed mill practices. ROMI-3 accepts cutting bills with as many as 600 combined solid and/or panel part sizes. Plots of processed...

  9. Role of roughness parameters on the tribology of randomly nano-textured silicon surface.

    PubMed

    Gualtieri, E; Pugno, N; Rota, A; Spagni, A; Lepore, E; Valeri, S

    2011-10-01

    This experimental work aims to contribute to knowledge of the relationship between surface roughness parameters and the tribological properties of lubricated surfaces; it is well known that these surface properties are strictly related, but a complete comprehension of such correlations is still far from being reached. For this purpose, a mechanical polishing procedure was optimized in order to induce different, but well controlled, morphologies on Si(100) surfaces. The use of different abrasive papers and slurries enabled the formation of a wide spectrum of topographical irregularities (from the submicro- to the nano-scale) and a broad range of surface profiles. An AFM-based morphological and topographical campaign was carried out to characterize each silicon rough surface through a set of parameters. Samples were subsequently water lubricated and tribologically characterized through ball-on-disk tribometer measurements. The wettability of each surface was investigated by measuring the water droplet contact angle, which revealed a hydrophilic character for all the surfaces, even if no clear correlation with roughness emerged. Nevertheless, this observation is informative, as it allows us to exclude the possibility that the differences in surface profile affect lubrication. It is therefore possible to link the dynamic friction coefficient of rough Si samples exclusively to the appropriate set of surface roughness parameters that can exhaustively describe both height amplitude variations (Ra, Rdq) and profile periodicity (Rsk, Rku, Ic), which influence asperity-asperity interactions and hydrodynamic lift in different ways. For this main reason they cannot be treated separately, but only with a dependent approach, through which it was possible to explain even counterintuitive results: the unexpected decrease of the friction coefficient with increasing Ra is explained by a more substantial increase of the kurtosis Rku.
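
    For reference, the amplitude parameters named above, such as Ra, the skewness Rsk and the kurtosis Rku, are simple moments of the height deviations from the mean line; the RMS roughness Rq is included below as the normaliser. A sketch of their computation from a sampled 1-D profile follows, with illustrative values (the slope parameter Rdq and the correlation length Ic are not computed here).

      # Standard amplitude roughness parameters from a sampled 1-D profile.
      import numpy as np

      def roughness_params(z):
          z = np.asarray(z, dtype=float)
          zc = z - z.mean()                       # height deviations from the mean line
          Ra = np.mean(np.abs(zc))                # arithmetic mean deviation
          Rq = np.sqrt(np.mean(zc ** 2))          # RMS roughness
          Rsk = np.mean(zc ** 3) / Rq ** 3        # skewness of the height distribution
          Rku = np.mean(zc ** 4) / Rq ** 4        # kurtosis (peakedness) of the distribution
          return {"Ra": Ra, "Rq": Rq, "Rsk": Rsk, "Rku": Rku}

      profile_nm = [12.0, 8.5, 15.2, 9.8, 11.4, 14.1, 7.9, 10.6, 13.3, 9.2]   # illustrative heights
      print(roughness_params(profile_nm))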

  10. Comparative evaluation of topographical data of dental implant surfaces applying optical interferometry and scanning electron microscopy.

    PubMed

    Kournetas, N; Spintzyk, S; Schweizer, E; Sawada, T; Said, F; Schmid, P; Geis-Gerstorfer, J; Eliades, G; Rupp, F

    2017-08-01

    Comparability of topographical data of implant surfaces in literature is low and their clinical relevance often equivocal. The aim of this study was to investigate the ability of scanning electron microscopy and optical interferometry to assess statistically similar 3-dimensional roughness parameter results and to evaluate these data based on predefined criteria regarded relevant for a favorable biological response. Four different commercial dental screw-type implants (NanoTite Certain Prevail, TiUnite Brånemark Mk III, XiVE S Plus and SLA Standard Plus) were analyzed by stereo scanning electron microscopy and white light interferometry. Surface height, spatial and hybrid roughness parameters (Sa, Sz, Ssk, Sku, Sal, Str, Sdr) were assessed from raw and filtered data (Gaussian 50μm and 5μm cut-off-filters), respectively. Data were statistically compared by one-way ANOVA and Tukey-Kramer post-hoc test. For a clinically relevant interpretation, a categorizing evaluation approach was used based on predefined threshold criteria for each roughness parameter. The two methods exhibited predominantly statistical differences. Dependent on roughness parameters and filter settings, both methods showed variations in rankings of the implant surfaces and differed in their ability to discriminate the different topographies. Overall, the analyses revealed scale-dependent roughness data. Compared to the pure statistical approach, the categorizing evaluation resulted in much more similarities between the two methods. This study suggests to reconsider current approaches for the topographical evaluation of implant surfaces and to further seek after proper experimental settings. Furthermore, the specific role of different roughness parameters for the bioresponse has to be studied in detail in order to better define clinically relevant, scale-dependent and parameter-specific thresholds and ranges. Copyright © 2017 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.

  11. Gene Selection and Cancer Classification: A Rough Sets Based Approach

    NASA Astrophysics Data System (ADS)

    Sun, Lijun; Miao, Duoqian; Zhang, Hongyun

    Identification of informative gene subsets responsible for discerning between available samples of gene expression data is an important task in bioinformatics. Reducts, from rough sets theory, corresponding to a minimal set of essential genes for discerning samples, are an efficient tool for gene selection. Due to the computational complexity of the existing reduct algorithms, feature ranking is usually used to narrow down the gene space as a first step, and top-ranked genes are selected. In this paper, we define a novel criterion for scoring genes, based on the expression level difference between classes and each gene's contribution to classification, and present an algorithm for generating all possible reducts from the informative genes. The algorithm takes the whole attribute set into account and finds short reducts with a significant reduction in computational complexity. An exploration of this approach on benchmark gene expression data sets demonstrates that it is successful in selecting highly discriminative genes and that the classification accuracy is impressive.

  12. Entropy Based Feature Selection for Fuzzy Set-Valued Information Systems

    NASA Astrophysics Data System (ADS)

    Ahmed, Waseem; Sufyan Beg, M. M.; Ahmad, Tanvir

    2018-06-01

    In Set-valued Information Systems (SIS), several objects contain more than one value for some attributes. Tolerance relation used for handling SIS sometimes leads to loss of certain information. To surmount this problem, fuzzy rough model was introduced. However, in some cases, SIS may contain some real or continuous set-values. Therefore, the existing fuzzy rough model for handling Information system with fuzzy set-values needs some changes. In this paper, Fuzzy Set-valued Information System (FSIS) is proposed and fuzzy similarity relation for FSIS is defined. Yager's relative conditional entropy was studied to find the significance measure of a candidate attribute of FSIS. Later, using these significance values, three greedy forward algorithms are discussed for finding the reduct and relative reduct for the proposed FSIS. An experiment was conducted on a sample population of the real dataset and a comparison of classification accuracies of the proposed FSIS with the existing SIS and single-valued Fuzzy Information Systems was made, which demonstrated the effectiveness of proposed FSIS.

  13. Optimised layout and roadway support planning with integrated intelligent software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kouniali, S.; Josien, J.P.; Piguet, J.P.

    1996-12-01

    Experience with knowledge-based systems for layout planning and roadway support dimensioning has been available in European coal mining since 1985. The systems SOUT (support choice and dimensioning, 1989), SOUT 2, PLANANK (planning of bolt support), EXOS (layout planning diagnosis, 1994) and SOUT 3 (1995) have been developed in close cooperation by CdF, INERIS, EMN (France) and RAG, DMT, TH Aachen (Germany); ISLSP (Integrated Software for Layout and Support Planning) development is in progress (completion scheduled for July 1996). This new software technology, in combination with conventional programming systems, numerical models and existing databases, turned out to be suited to setting up an intelligent decision aid for layout and roadway support planning. The system enhances the reliability of planning and optimises the safety-to-cost ratio for (1) deformation forecasts for roadways in seam and surrounding rocks, with consideration of the general position of the roadway in the rock mass (zones of increased pressure, position of operating and mined panels); (2) support dimensioning; (3) yielding arches, rigid arches, porch sets, rigid rings, yielding rings and bolting/shotcreting for drifts; (4) yielding arches, rigid arches and porch sets for roadways in seam; and (5) bolt support for gateroads (assessment of exclusion criteria and calculation of the bolting pattern) and bolting of face-end zones (feasibility and safety assessment; stability guarantee).

  14. Evaluation of efficacy of metal artefact reduction technique using contrast media in Computed Tomography

    NASA Astrophysics Data System (ADS)

    Yusob, Diana; Zukhi, Jihan; Aziz Tajuddin, Abd; Zainon, Rafidah

    2017-05-01

    The aim of this study was to evaluate the efficacy of metal artefact reduction using contrast media in Computed Tomography (CT) imaging. A water-based abdomen phantom of diameter 32 cm (adult body size) was fabricated using polymethyl methacrylate (PMMA) material. Three different contrast agents (iodine, barium and gadolinium) were filled into small PMMA tubes and placed inside the water-based PMMA adult abdomen phantom. An orthopedic metal screw was placed in each small PMMA tube separately. Two types of orthopedic metal screw (stainless steel and titanium alloy) were scanned separately. The orthopedic metal screws were scanned with single-energy CT at 120 kV and dual-energy CT with fast kV-switching between 80 kV and 140 kV. The scan modes were set automatically using the current modulation care4Dose setting, and the scans were set at different pitch and slice thickness values. The use of the contrast media technique on orthopedic metal screws was optimised by using a pitch of 0.60 mm and a slice thickness of 5.0 mm. The use of contrast media can reduce the metal streaking artefacts on CT images, enhance the CT images surrounding the implants, and has potential use in improving diagnostic performance in patients with severe metallic artefacts. These results are valuable for imaging protocol optimisation in clinical applications.

  15. Methods of increasing the performance of radionuclide generators used in nuclear medicine: daughter nuclide build-up optimisation, elution-purification-concentration integration, and effective control of radionuclidic purity.

    PubMed

    Le, Van So; Do, Zoe Phuc-Hien; Le, Minh Khoi; Le, Vicki; Le, Natalie Nha-Truc

    2014-06-10

    Methods of increasing the performance of radionuclide generators used in nuclear medicine radiotherapy and SPECT/PET imaging were developed and detailed, with 99Mo/99mTc and 68Ge/68Ga radionuclide generators as case studies. Optimisation methods for the daughter nuclide build-up versus stand-by time and/or specific activity, using mean progress functions, were developed to increase generator performance. As a result of this optimisation, the separation of the daughter nuclide from its parent should be performed at a defined optimal time to avoid deterioration in the specific activity of the daughter nuclide and wasted stand-by time of the generator, while keeping the daughter nuclide yield reasonably high. A new characteristic parameter of the formation-decay kinetics of the parent/daughter nuclide system was found and used effectively in the practice of generator production and utilisation. A method of "early elution schedule" was also developed to increase the daughter nuclide production yield and specific radioactivity, thus saving the cost of the generator and improving the quality of the daughter radionuclide solution. These newly developed optimisation methods, in combination with a recently developed integrated elution-purification-concentration system for radionuclide generators, are the most suitable way to operate the generator effectively, on the basis of economic use and of improving the quality and specific activity of the produced daughter radionuclides. All these features benefit the economic use of the generator, the quality of labelling/scans, and the cost of nuclear medicine procedures. Besides, a new method of setting up a quality control protocol for post-delivery testing of radionuclidic purity has been developed, based on the relationship between the gamma-ray spectrometric detection limit, the required limit of impure radionuclide activity and its measurement certainty, with respect to optimising the decay/measurement time and the product sample activity used for quality control. The optimisation ensures a given certainty of measurement of the specific impure radionuclide and avoids wasting the useful amount of valuable purified/concentrated daughter nuclide product. This is important for the spectrometric measurement of very low activities of impure radionuclide contamination in radioisotope products of much higher activity used in medical imaging and targeted radiotherapy.
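    For a simple parent/daughter pair, the trade-off between stand-by time and daughter yield can be made concrete with the classical build-up relation from the Bateman equations; the time of maximum daughter activity is t_max = ln(λd/λp)/(λd − λp). The snippet below uses rounded textbook half-lives for 99Mo/99mTc and is only a schematic illustration of this kind of build-up calculation, not the authors' mean-progress-function method:

    ```python
    # Schematic build-up calculation for a parent/daughter generator (e.g. 99Mo/99mTc).
    # Half-lives are rounded literature values; this is not the paper's formalism.
    import math

    T_HALF_PARENT = 65.94    # h, 99Mo (approximate)
    T_HALF_DAUGHTER = 6.01   # h, 99mTc (approximate)

    lam_p = math.log(2) / T_HALF_PARENT
    lam_d = math.log(2) / T_HALF_DAUGHTER

    def daughter_activity(t, parent_activity_0=1.0):
        """Daughter activity (relative units) grown in since the last elution."""
        return (lam_d / (lam_d - lam_p)) * parent_activity_0 * (
            math.exp(-lam_p * t) - math.exp(-lam_d * t))

    # Time of maximum daughter activity from the Bateman solution.
    t_max = math.log(lam_d / lam_p) / (lam_d - lam_p)
    print(f"build-up peaks at ~{t_max:.1f} h; relative activity there: "
          f"{daughter_activity(t_max):.2f}")
    ```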

  16. Path integrals with higher order actions: Application to realistic chemical systems

    NASA Astrophysics Data System (ADS)

    Lindoy, Lachlan P.; Huang, Gavin S.; Jordan, Meredith J. T.

    2018-02-01

    Quantum thermodynamic parameters can be determined using path integral Monte Carlo (PIMC) simulations. These simulations, however, become computationally demanding as the quantum nature of the system increases, although their efficiency can be improved by using higher order approximations to the thermal density matrix, specifically the action. Here we compare the standard, primitive approximation to the action (PA) and three higher order approximations, the Takahashi-Imada action (TIA), the Suzuki-Chin action (SCA) and the Chin action (CA). The resulting PIMC methods are applied to two realistic potential energy surfaces, for H2O and HCN-HNC, both of which are spectroscopically accurate and contain three-body interactions. We further numerically optimise, for each potential, the SCA parameter and the two free parameters in the CA, obtaining more significant improvements in efficiency than seen previously in the literature. For both H2O and HCN-HNC, accounting for all required potential and force evaluations, the optimised CA formalism is approximately twice as efficient as the TIA formalism and approximately an order of magnitude more efficient than the PA. The optimised SCA formalism shows similar efficiency gains to the CA for HCN-HNC but has similar efficiency to the TIA for H2O at low temperature. In the H2O and HCN-HNC systems, the optimal value of the a1 CA parameter is approximately 1/3, corresponding to an equal weighting of all force terms in the thermal density matrix, and, similarly to previous studies, the optimal α parameter in the SCA was ≈0.31. Importantly, a poor choice of parameter significantly degrades the performance of the SCA and CA methods. In particular, for the CA, setting a1 = 0 is not efficient: the reduction in convergence efficiency is not offset by the lower number of force evaluations. We also find that the harmonic approximation to the CA parameters, whilst providing a fourth order approximation to the action, is not optimal for these realistic potentials: numerical optimisation leads to better approximate cancellation of the fifth order terms, with the deviation between the harmonic and numerically optimised parameters more marked in the more quantum H2O system. This suggests that numerically optimising the CA or SCA parameters, which can be done at high temperature, will be important in fully realising the efficiency gains of these formalisms for realistic potentials.
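    As a point of reference for the actions compared above, the primitive factorisation and the Takahashi-Imada correction can be written compactly; the expressions below are the standard textbook forms (one particle of mass m, imaginary-time step τ = β/P), included only to fix notation and not taken from this paper:

    ```latex
    % Primitive approximation (PA): symmetric Trotter split with \tau = \beta/P
    e^{-\tau(\hat{T}+\hat{V})} \approx
      e^{-\tau\hat{V}/2}\, e^{-\tau\hat{T}}\, e^{-\tau\hat{V}/2} + \mathcal{O}(\tau^{3})

    % Takahashi-Imada action (TIA): same split, but with an effective potential that
    % cancels the leading commutator error, giving a fourth-order trace
    \hat{V}_{\mathrm{TIA}} = \hat{V}
      + \frac{\tau^{2}\hbar^{2}}{24\,m}\,\left|\nabla V\right|^{2}
    ```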

  17. The OECD Handbook for Innovative Learning Environments. Educational Research and Innovation

    ERIC Educational Resources Information Center

    OECD Publishing, 2017

    2017-01-01

    How might we know whether our schools or system are set up to optimise learning? How can we find out whether we are getting the most from technology? How can we evaluate our innovation or think through whether our change initiative will bring about its desired results? Teachers and educational leaders who grapple with such questions will find this…

  18. Model of head-neck joint fast movements in the frontal plane.

    PubMed

    Pedrocchi, A; Ferrigno, G

    2004-06-01

    The objective of this work is to develop a model representing the physiological systems driving fast head movements in the frontal plane. All the contributions occurring mechanically in head movement are considered: damping, stiffness, the physiological limit of the range of motion, the gravitational field, and muscular torques due to voluntary activation as well as to the stretch reflex depending on fusal afferences. Model parameters are partly derived from the literature, where possible, whereas undetermined block parameters are determined by optimising the model output to fit real kinematics data acquired by a motion capture system in specific experimental set-ups. The optimisation for parameter identification is performed by genetic algorithms. Results show that the model represents fast head movements very well over the whole range of inclination in the frontal plane. Such a model could be proposed as a tool for transforming kinematics data on head movements into 'neural equivalent data', especially for assessing head control disorders and properly planning the rehabilitation process. In addition, the use of genetic algorithms seems to fit the problem of parameter identification well, allowing the use of a very simple experimental set-up and granting model robustness.
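    Parameter identification by a genetic algorithm, as used here, amounts to searching parameter space for the vector that minimises the mismatch between simulated and measured kinematics. The deliberately minimal GA below assumes user-supplied parameter bounds, a simulate() function and measured data; it is a generic sketch, not the authors' model or toolchain:

    ```python
    # Minimal GA sketch for fitting model parameters to measured kinematics.
    # `simulate(params)` and `measured` are placeholders for the user's model and data.
    import random

    def fitness(params, simulate, measured):
        sim = simulate(params)
        return -sum((s - m) ** 2 for s, m in zip(sim, measured))  # negative SSE

    def ga_identify(bounds, simulate, measured, pop_size=40, generations=100,
                    mutation_rate=0.1, seed=0):
        rng = random.Random(seed)
        dim = len(bounds)
        pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
        for _ in range(generations):
            scored = sorted(pop, key=lambda p: fitness(p, simulate, measured),
                            reverse=True)
            parents = scored[: pop_size // 2]          # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = rng.sample(parents, 2)
                cut = rng.randrange(1, dim) if dim > 1 else 1
                child = a[:cut] + b[cut:]              # one-point crossover
                for i, (lo, hi) in enumerate(bounds):  # bounded random mutation
                    if rng.random() < mutation_rate:
                        child[i] = rng.uniform(lo, hi)
                children.append(child)
            pop = parents + children
        return max(pop, key=lambda p: fitness(p, simulate, measured))
    ```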

  19. Shear Model Development of Limestone Joints with Incorporating Variations of Basic Friction Coefficient and Roughness Components During Shearing

    NASA Astrophysics Data System (ADS)

    Mehrishal, Seyedahmad; Sharifzadeh, Mostafa; Shahriar, Korosh; Song, Jae-Jon

    2017-04-01

    In relation to the shearing of rock joints, the precise and continuous evaluation of asperity interlocking, dilation, and basic friction properties has been the most important task in the modeling of shear strength. In this paper, in order to investigate these controlling factors, two types of limestone joint samples were prepared and CNL direct shear tests were performed on these joints under various shear conditions. One set of samples was travertine and the other onyx marble; slickensided surfaces, surfaces ground to #80, and rough surfaces were tested. Direct shear experiments conducted on slickensided and ground surfaces of limestone indicated that, by increasing the applied normal stress under different shearing rates, the basic friction coefficient decreased. Moreover, in the shear tests under constant normal stress and shearing rate, the basic friction coefficient remained constant for the different contact sizes. The second series of direct shear experiments in this research was conducted on tension joint samples to evaluate the effect of surface roughness on the shear behavior of rough joints. This paper deals with dilation and roughness interlocking using a method that characterizes the surface roughness of the joint based on a fundamental combined surface roughness concept. The application of stress-dependent basic friction and quantitative roughness parameters in the continuous modeling of the shear behavior of rock joints is an important aspect of this research.

  20. Temporal Stability of Surface Roughness Effects on Radar Based Soil Moisture Retrieval During the Corn Growth Cycle

    NASA Technical Reports Server (NTRS)

    Joseph, A.T.; Lang, R.; O'Neill, P.E.; van der Velde, R.; Gish, T.

    2008-01-01

    A representative soil surface roughness parameterization needed for the retrieval of soil moisture from active microwave satellite observations is difficult to obtain through either in-situ measurements or remote sensing-based inversion techniques. Typically, for the retrieval of soil moisture, temporal variations in surface roughness are assumed to be negligible. Although previous investigations have suggested that this assumption might be reasonable for natural vegetation covers (Moran et al. 2002, Thoma et al. 2006), in-situ measurements over plowed agricultural fields (Callens et al. 2006) have shown that the soil surface roughness can change considerably over time. This paper reports on the temporal stability of surface roughness effects on radar observations and on soil moisture retrieved from these radar observations, collected once a week during a corn growth cycle (May 10th - October 2002). The data set employed was collected during the Optimizing Production Inputs for Economic and Environmental Enhancement (OPE3) field campaign covering the 2002 corn growth cycle and consists of dual-polarized (HH and VV) L-band (1.6 GHz) observations acquired at view angles of 15, 35, and 55 degrees. Cross-polarized L-band radar data were also collected as part of this experiment, but are not used in the analysis reported here. After accounting for vegetation effects on the radar observations, time-invariant optimum roughness parameters were determined using the Integral Equation Method (IEM) and radar observations acquired over bare soil and cropped conditions (the complete radar data set includes the entire corn growth cycle). The optimum roughness parameters, soil moisture retrieval uncertainty, temporal distribution of retrieval errors and its relationship with weather conditions (e.g. rainfall and wind speed) have been analyzed. It is shown that over the corn growth cycle, temporal roughness variations due to weathering by rain are responsible for almost 50% of the soil moisture retrieval uncertainty, depending on the sensing configuration. The effects of surface roughness variations are found to be smallest for observations acquired at a view angle of 55 degrees and HH polarization. A possible explanation for this result is that at 55 degrees and HH polarization the effect of vertical surface height changes on the observed radar response is limited, because the microwaves travel parallel to the incident plane and as a result do not interact directly with vertically oriented soil structures.

  1. Effect of Blade-surface Finish on Performance of a Single-stage Axial-flow Compressor

    NASA Technical Reports Server (NTRS)

    Moses, Jason J; Serovy, George K.

    1951-01-01

    A set of modified NACA 5509-34 rotor and stator blades was investigated with rough-machined, hand-filed, and highly polished surface finishes over a range of weight flows at six equivalent tip speeds from 672 to 1092 feet per second to determine the effect of blade-surface finish on the performance of a single-stage axial-flow compressor. Surface-finish effects decreased with increasing compressor speed and with decreasing flow at a given speed. In general, finishing blade surfaces below the roughness that may be considered aerodynamically smooth on the basis of an admissible-roughness formula will have no effect on compressor performance.

  2. Signal Processing Methods for Liquid Rocket Engine Combustion Spontaneous Stability and Rough Combustion Assessments

    NASA Technical Reports Server (NTRS)

    Kenny, R. Jeremy; Casiano, Matthew; Fischbach, Sean; Hulka, James R.

    2012-01-01

    Liquid rocket engine combustion stability assessments are traditionally broken into three categories: dynamic stability, spontaneous stability, and rough combustion. This work focuses on comparing the spontaneous stability and rough combustion assessments for several liquid engine programs. The techniques used are those developed at Marshall Space Flight Center (MSFC) for the J-2X Workhorse Gas Generator program. Stability assessment data from the Integrated Powerhead Demonstrator (IPD), FASTRAC, and Common Extensible Cryogenic Engine (CECE) programs are compared against previously processed J-2X Gas Generator data. Prior metrics for spontaneous stability assessments are updated based on the compilation of all data sets.

  3. Float polishing of optical materials.

    PubMed

    Bennett, J M; Shaffer, J J; Shibano, Y; Namba, Y

    1987-02-15

    The float-polishing technique has been studied to determine its suitability for producing supersmooth surfaces on optical materials, yielding a roughness of <2 Å rms. An attempt was made to polish six different materials including fused quartz, Zerodur, and sapphire. The low surface roughness was achieved on fused quartz, Zerodur, and Corning experimental glass-ceramic materials, and a surface roughness of <1 Å rms was obtained on O-cut single-crystal sapphire. Presumably, similar surface finishes can also be obtained on CerVit and ULE quartz, which could not be polished satisfactorily in this set of experiments because of a mismatch between sample mounting and machine configuration.

  4. Gender differences in visuospatial planning: an eye movements study.

    PubMed

    Cazzato, Valentina; Basso, Demis; Cutini, Simone; Bisiacchi, Patrizia

    2010-01-20

    Gender studies report a male advantage in several visuospatial abilities. Only a few studies, however, have evaluated differences in visuospatial planning behaviour with regard to gender. This study was aimed at exploring whether gender may affect the choice of cognitive strategies in a visuospatial planning task and whether oculomotor measures could assist in disentangling the cognitive processes involved. A computerised task based on the travelling salesperson problem paradigm, the Maps test, was used to investigate these issues. Participants were required to optimise the time and space of a path travelling among a set of sub-goals in a spatially constrained environment. Behavioural results suggest that there are no gender differences in the initial visual processing of the stimuli, but rather during the execution of the plan, with males showing a shorter execution time and a higher path length optimisation than females. Males often changed heuristics during the execution, while females seemed to prefer a constant strategy. Moreover, better performance in behavioural and oculomotor measures seemed to suggest that males are more able than females in either the optimisation of spatial features or the realisation of the planned scheme. Despite inconclusive findings, the results support previous research and provide insight into the level of cognitive processing involved in navigation and planning tasks with regard to the influence of gender.

  5. Optimising μCT imaging of the middle and inner cat ear.

    PubMed

    Seifert, H; Röher, U; Staszyk, C; Angrisani, N; Dziuba, D; Meyer-Lindenberg, A

    2012-04-01

    This study's aim was to determine the optimal scan parameters for imaging the middle and inner ear of the cat with micro-computed tomography (μCT). In addition, the study set out to assess whether adequate image quality can be obtained to use μCT in diagnostics and research on cat ears. For the optimisation, μCT imaging of two cat skull preparations was performed using 36 different scanning protocols. The μCT scans were evaluated by four experienced experts with regard to image quality and detail detectability. By compiling a ranking of the results, the best possible scan parameters could be determined. From a third cat's skull, a μCT scan using these optimised scan parameters and a comparative clinical CT scan were acquired. Afterwards, histological specimens of the ears were produced and compared to the μCT images. The comparison shows that the osseous structures are depicted in detail. Although soft tissues cannot be differentiated, the osseous structures serve as a valuable spatial orientation for relevant nerves and muscles. Clinical CT can depict many anatomical structures that can also be seen on μCT images, but these appear much less sharp and also less detailed than with μCT. © 2011 Blackwell Verlag GmbH.

  6. Fractures in sport: Optimising their management and outcome

    PubMed Central

    Robertson, Greg AJ; Wood, Alexander M

    2015-01-01

    Fractures in sport are a specialised cohort of fracture injuries, occurring in a high-functioning population, in which the goals are rapid restoration of function and return to play with the minimal symptom profile possible. While the general principles of fracture management, namely accurate fracture reduction, appropriate immobilisation and timely rehabilitation, guide the treatment of these injuries, management of fractures in athletic populations can differ significantly from that in the general population, due to the need to facilitate a rapid return to high-demand activities. However, despite fractures comprising up to 10% of all sporting injuries, dedicated research into the management and outcome of sport-related fractures is limited. In order to assess the optimal methods of treating such injuries, and so allow optimisation of their outcome, the evidence for the management of each specific sport-related fracture type requires assessment and analysis. We present and review the current evidence directing management of fractures in athletes with an aim to promote valid innovative methods and optimise the outcome of such injuries. From this, key recommendations are provided for the management of the common fracture types seen in the athlete. Six case reports are also presented to illustrate the management planning and application of sport-focussed fracture management in the clinical setting. PMID:26716081

  7. Simulation studies promote technological development of radiofrequency phased array hyperthermia.

    PubMed

    Wust, P; Seebass, M; Nadobny, J; Deuflhard, P; Mönich, G; Felix, R

    1996-01-01

    A treatment planning program package for radiofrequency hyperthermia has been developed. It consists of software modules for processing three-dimensional computerized tomography (CT) data sets, manual segmentation, generation of tetrahedral grids, numerical calculation and optimisation of three-dimensional E-field distributions using a volume-surface integral equation algorithm as well as temperature distributions using an adaptive multilevel finite-elements code, and graphical tools for simultaneous representation of CT data and simulation results. Heat treatments are limited by hot spots in healthy tissues caused by E-field maxima at electrical interfaces (bone/muscle). In order to reduce or avoid hot spots, suitable objective functions are derived from power deposition patterns and temperature distributions, and are utilised to optimise antenna parameters (phases, amplitudes). The simulation and optimisation tools have been applied to estimate the improvements that could be reached by upgrades of the clinically used SIGMA-60 applicator (consisting of a single ring of four antenna pairs). The investigated upgrades are an increased number of antennas and channels (a triple ring of 3 x 8 antennas) and variation of antenna inclination. A significant improvement of index temperatures (1-2 degrees C) is achieved by upgrading the single ring to a triple ring with free phase selection for every antenna or antenna pair. Antenna amplitudes and inclinations proved to be less important parameters.

  8. Comparing approaches for using climate projections in assessing water resources investments for systems with multiple stakeholder groups

    NASA Astrophysics Data System (ADS)

    Hurford, Anthony; Harou, Julien

    2015-04-01

    Climate change has challenged conventional methods of planning water resources infrastructure investment, which rely on the stationarity of time-series data, and it is not clear how best to use projections of future climatic conditions. Many-objective simulation-optimisation and trade-off analysis using evolutionary algorithms has been proposed as an approach to addressing complex planning problems with multiple conflicting objectives. The search for promising assets and policies can be carried out across a range of climate projections, to identify the configurations of infrastructure investment shown by model simulation to be robust under diverse future conditions. Climate projections can be used in different ways within a simulation model to represent the range of possible future conditions and to understand how optimal investments vary according to the different hydrological conditions. We compare two approaches: optimising over an ensemble of different 20-year flow and PET time-series projections, and optimising separately for individual future scenarios built synthetically from the original ensemble. Comparing the trade-off curves and surfaces generated by the two approaches helps in understanding the limits and benefits of optimising under different sets of conditions. The comparison is made for the Tana Basin in Kenya, where climate change combined with multiple conflicting objectives of water management and infrastructure investment makes decision-making particularly challenging.

  9. Warpage optimisation on the moulded part with straight-drilled and conformal cooling channels using response surface methodology (RSM) and glowworm swarm optimisation (GSO)

    NASA Astrophysics Data System (ADS)

    Hazwan, M. H. M.; Shayfull, Z.; Sharif, S.; Nasir, S. M.; Zainal, N.

    2017-09-01

    In the injection moulding process, quality and productivity are notably important and must be controlled for each product type produced. Quality is measured as the extent of warpage of the moulded parts, while productivity is measured by the duration of the moulding cycle time. To control quality, many researchers have introduced various optimisation approaches which have been proven to enhance the quality of the moulded part produced. To improve the productivity of the injection moulding process, some researchers have proposed the application of conformal cooling channels, which have been proven to reduce the moulding cycle time. Therefore, this paper presents an alternative optimisation approach, Response Surface Methodology (RSM) with Glowworm Swarm Optimisation (GSO), applied to a moulded part with straight-drilled and conformal cooling channel moulds. This study examined the warpage of the moulded parts before and after the optimisation for both cooling channel types. A front panel housing was selected as the specimen, and the performance of the proposed optimisation approach was analysed on conventional straight-drilled cooling channels compared with Milled Groove Square Shape (MGSS) conformal cooling channels by simulation analysis using Autodesk Moldflow Insight (AMI) 2013. Based on the results, melt temperature is the most significant factor contributing to warpage, which was improved by 39.1% after optimisation for the straight-drilled cooling channels, while cooling time is the most significant factor contributing to warpage for the MGSS conformal cooling channels, for which warpage was improved by 38.7% after optimisation. In addition, the findings show that applying the optimisation to the conformal cooling channels offers better quality and productivity of the moulded part produced.
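    The first step of an RSM study of this kind is to fit a second-order polynomial to the response (here warpage) simulated at designed settings of the process factors, after which the fitted surface is handed to the optimiser (here GSO). The two-factor least-squares fit below is a generic illustration of that step; the factor names and data values are placeholders, not the paper's design or results:

    ```python
    # Fitting a two-factor second-order response surface (e.g. warpage vs. process
    # settings) by least squares. Factor names and data are illustrative placeholders.
    import numpy as np

    def fit_quadratic_surface(x1, x2, y):
        """Coefficients of y ~ b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2."""
        X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
        coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coeffs

    # e.g. x1 = melt temperature, x2 = cooling time (coded units), y = simulated warpage
    x1 = np.array([-1, -1, 1, 1, 0, 0, -1, 1, 0], dtype=float)
    x2 = np.array([-1, 1, -1, 1, -1, 1, 0, 0, 0], dtype=float)
    y = np.array([0.52, 0.47, 0.44, 0.40, 0.49, 0.43, 0.50, 0.42, 0.45])  # toy values
    print(fit_quadratic_surface(x1, x2, y))
    ```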

  10. Effect of laser parameters on surface roughness of laser modified tool steel after thermal cyclic loading

    NASA Astrophysics Data System (ADS)

    Lau Sheng, Annie; Ismail, Izwan; Nur Aqida, Syarifah

    2018-03-01

    This study presents the effects of laser parameters on the surface roughness of laser-modified tool steel after thermal cyclic loading. A pulsed-mode Nd:YAG laser was used to perform the laser surface modification process on AISI H13 tool steel samples. Samples were then subjected to thermal cyclic loading experiments, which involved alternate immersion in molten aluminium (800°C) and water (27°C) for 553 cycles. A full factorial design of experiment (DOE) was developed to perform the investigation. The factors for the DOE are the laser parameters, namely overlap rate (η), pulse repetition frequency (fPRF) and peak power (Ppeak), while the response is the surface roughness after thermal cyclic loading. The results indicate that the surface roughness of the laser-modified surface after thermal cyclic loading is significantly affected by the laser parameter settings.

  11. Optimized auxiliary basis sets for density fitted post-Hartree-Fock calculations of lanthanide containing molecules

    NASA Astrophysics Data System (ADS)

    Chmela, Jiří; Harding, Michael E.

    2018-06-01

    Optimised auxiliary basis sets for lanthanide atoms (Ce to Lu) for four basis sets of the Karlsruhe error-balanced segmented contracted def2 series (SVP, TZVP, TZVPP and QZVPP) are reported. These auxiliary basis sets enable the use of the resolution-of-the-identity (RI) approximation in post-Hartree-Fock methods such as second-order perturbation theory (MP2) and coupled cluster (CC) theory. The auxiliary basis sets are tested on an enlarged set of about a hundred molecules, where the test criterion is the size of the RI error in MP2 calculations. Our tests also show that the same auxiliary basis sets can be used together with different effective core potentials. With these auxiliary basis sets, calculations of MP2 and CC quality can now be performed efficiently on medium-sized molecules containing lanthanides.

  12. Casemix Funding Optimisation: Working Together to Make the Most of Every Episode.

    PubMed

    Uzkuraitis, Carly; Hastings, Karen; Torney, Belinda

    2010-10-01

    Eastern Health, a large public Victorian Healthcare network, conducted a WIES optimisation audit across the casemix-funded sites for separations in the 2009/2010 financial year. The audit was conducted using existing staff resources and resulted in a significant increase in casemix funding at a minimal cost. The audit showcased the skill set of existing staff and resulted in enormous benefits to the coding and casemix team by demonstrating the value of the combination of skills that makes clinical coders unique. The development of an internal web-based application allowed accurate and timely reporting of the audit results, providing the basis for a restructure of the coding and casemix service, along with approval for additional staffing resources and inclusion of a regular auditing program to focus on the creation of high quality data for research, health services management and financial reimbursement.

  13. Quantitative structure activity relationships from optimised ab initio bond lengths: steroid binding affinity and antibacterial activity of nitrofuran derivatives

    NASA Astrophysics Data System (ADS)

    Smith, P. J.; Popelier, P. L. A.

    2004-02-01

    The present day abundance of cheap computing power enables the use of quantum chemical ab initio data in Quantitative Structure-Activity Relationships (QSARs). Optimised bond lengths are a new such class of descriptors, which we have successfully used previously in representing electronic effects in medicinal and ecological QSARs (enzyme inhibitory activity, hydrolysis rate constants and pKas). Here we use AM1 and HF/3-21G* bond lengths in conjunction with Partial Least Squares (PLS) and a Genetic Algorithm (GA) to predict the Corticosteroid-Binding Globulin (CBG) binding activity of the classic steroid data set, and the antibacterial activity of nitrofuran derivatives. The current procedure, which does not require molecular alignment, produces good r2 and q2 values. Moreover, it highlights regions in the common steroid skeleton deemed relevant to the active regions of the steroids and nitrofuran derivatives.

  14. Shape and energy consistent pseudopotentials for correlated electron systems

    PubMed Central

    Needs, R. J.

    2017-01-01

    A method is developed for generating pseudopotentials for use in correlated-electron calculations. The paradigms of shape and energy consistency are combined and defined in terms of correlated-electron wave-functions. The resulting energy consistent correlated electron pseudopotentials (eCEPPs) are constructed for H, Li–F, Sc–Fe, and Cu. Their accuracy is quantified by comparing the relaxed molecular geometries and dissociation energies which they provide with all electron results, with all quantities evaluated using coupled cluster singles, doubles, and triples calculations. Errors inherent in the pseudopotentials are also compared with those arising from a number of approximations commonly used with pseudopotentials. The eCEPPs provide a significant improvement in optimised geometries and dissociation energies for small molecules, with errors for the latter being an order-of-magnitude smaller than for Hartree-Fock-based pseudopotentials available in the literature. Gaussian basis sets are optimised for use with these pseudopotentials. PMID:28571391

  15. Analysis of optimisation method for a two-stroke piston ring using the Finite Element Method and the Simulated Annealing Method

    NASA Astrophysics Data System (ADS)

    Kaliszewski, M.; Mazuro, P.

    2016-09-01

    The Simulated Annealing Method is tested for optimising the geometry of a sealing piston ring. The aim of the optimisation is to develop a ring geometry which exerts the demanded pressure on the cylinder while being bent to fit the cylinder. A method of FEM analysis of an arbitrary piston ring geometry is applied in ANSYS software. The demanded pressure function (based on formulae presented by A. Iskra) as well as the objective function are introduced. A geometry definition constructed from polynomials in a radial coordinate system is presented and discussed. A possible application of the Simulated Annealing Method to a piston ring optimisation task is proposed and visualised. Difficulties leading to a possible lack of convergence of the optimisation are presented. An example of an unsuccessful optimisation performed in APDL is discussed. A possible line of further improvement of the optimisation is proposed.
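    The core of such a Simulated Annealing loop is independent of the FEM model: propose a perturbed ring geometry, evaluate how far its contact pressure deviates from the demanded distribution, and accept worse candidates with a temperature-dependent probability. The sketch below assumes a user-supplied pressure_error() objective (e.g. wrapping an FEM call) and is a generic illustration only, not the APDL implementation discussed in the paper:

    ```python
    # Generic simulated-annealing sketch for a geometry optimisation task.
    # `pressure_error(geometry)` is assumed to return the deviation of the computed
    # contact pressure from the demanded distribution (e.g. via an FEM call).
    import math
    import random

    def simulated_annealing(initial_geometry, pressure_error, step=0.01,
                            t_start=1.0, t_end=1e-3, cooling=0.95,
                            iters_per_t=50, seed=0):
        rng = random.Random(seed)
        current = list(initial_geometry)
        best = list(current)
        current_err = best_err = pressure_error(current)
        temperature = t_start
        while temperature > t_end:
            for _ in range(iters_per_t):
                candidate = [g + rng.gauss(0.0, step) for g in current]
                err = pressure_error(candidate)
                # Accept improvements always, deteriorations with Boltzmann probability.
                if err < current_err or rng.random() < math.exp(
                        (current_err - err) / temperature):
                    current, current_err = candidate, err
                    if err < best_err:
                        best, best_err = list(candidate), err
            temperature *= cooling
        return best, best_err
    ```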

  16. A study of lateral fall-off (penumbra) optimisation for pencil beam scanning (PBS) proton therapy

    NASA Astrophysics Data System (ADS)

    Winterhalter, C.; Lomax, A.; Oxley, D.; Weber, D. C.; Safai, S.

    2018-01-01

    The lateral fall-off is crucial for sparing organs at risk in proton therapy. It is therefore of high importance to minimize the penumbra for pencil beam scanning (PBS). Three optimisation approaches are investigated: edge-collimated uniformly weighted spots (collimation), pencil beam optimisation of uncollimated pencil beams (edge-enhancement) and the optimisation of edge collimated pencil beams (collimated edge-enhancement). To deliver energies below 70 MeV, these strategies are evaluated in combination with the following pre-absorber methods: field specific fixed thickness pre-absorption (fixed), range specific, fixed thickness pre-absorption (automatic) and range specific, variable thickness pre-absorption (variable). All techniques are evaluated by Monte Carlo simulated square fields in a water tank. For a typical air gap of 10 cm, without pre-absorber collimation reduces the penumbra only for water equivalent ranges between 4-11 cm by up to 2.2 mm. The sharpest lateral fall-off is achieved through collimated edge-enhancement, which lowers the penumbra down to 2.8 mm. When using a pre-absorber, the sharpest fall-offs are obtained when combining collimated edge-enhancement with a variable pre-absorber. For edge-enhancement and large air gaps, it is crucial to minimize the amount of material in the beam. For small air gaps however, the superior phase space of higher energetic beams can be employed when more material is used. In conclusion, collimated edge-enhancement combined with the variable pre-absorber is the recommended setting to minimize the lateral penumbra for PBS. Without collimator, it would be favourable to use a variable pre-absorber for large air gaps and an automatic pre-absorber for small air gaps.

  17. Optimisation of a double-centrifugation method for preparation of canine platelet-rich plasma.

    PubMed

    Shin, Hyeok-Soo; Woo, Heung-Myong; Kang, Byung-Jae

    2017-06-26

    Platelet-rich plasma (PRP) has been expected to benefit regenerative medicine because of its growth factors. However, there is considerable variability in the recovery and yield of platelets and in the concentration of growth factors in PRP preparations. The aim of this study was to identify the optimal relative centrifugal force and spin time for the preparation of PRP from canine blood using a double-centrifugation tube method. Whole blood samples were collected in citrate blood collection tubes from 12 healthy beagles. For the first centrifugation step, 10 different run conditions were compared to determine which condition produced optimal recovery of platelets. Once the optimal condition was identified, platelet-containing plasma prepared using that condition was subjected to a second centrifugation to pellet the platelets. For the second centrifugation, 12 different run conditions were compared to identify the centrifugal force and spin time producing maximal pellet recovery and concentration increase. Growth factor levels were estimated by using ELISA to measure platelet-derived growth factor-BB (PDGF-BB) concentrations in optimised CaCl2-activated platelet fractions. The highest platelet recovery rate and yield were obtained by first centrifuging whole blood at 1000 g for 5 min and then centrifuging the recovered platelet-enriched plasma at 1500 g for 15 min. This protocol recovered 80% of platelets from whole blood, increased the platelet concentration six-fold, and produced the highest concentration of PDGF-BB in activated fractions. We have described an optimised double-centrifugation tube method for the preparation of PRP from canine blood. This optimised method does not require particularly expensive equipment or high technical ability and can readily be carried out in a veterinary clinical setting.

  18. Optimisation of reconstruction-reprojection-based motion correction for cardiac SPECT.

    PubMed

    Kangasmaa, Tuija S; Sohlberg, Antti O

    2014-07-01

    Cardiac motion is a challenging cause of image artefacts in myocardial perfusion SPECT. A wide range of motion correction methods have been developed over the years, and so far automatic algorithms based on the reconstruction-reprojection principle have proved to be the most effective. However, these methods have not been fully optimised in terms of their free parameters and implementational details. Two slightly different implementations of reconstruction-reprojection-based motion correction were optimised for effective, good-quality motion correction and then compared with each other. The first of these methods (Method 1) was the traditional reconstruction-reprojection motion correction algorithm, where the motion correction is done in projection space, whereas the second algorithm (Method 2) performed motion correction in reconstruction space. The parameters that were optimised include the type of cost function (squared difference, normalised cross-correlation and mutual information) used to compare measured and reprojected projections, and the number of iterations needed. The methods were tested with motion-corrupted projection datasets, which were generated by adding three different types of motion (lateral shift, vertical shift and vertical creep) to motion-free cardiac perfusion SPECT studies. Method 2 performed slightly better overall than Method 1, but the difference between the two implementations was small. The execution time for Method 2 was much longer than for Method 1, which limits its clinical usefulness. The mutual information cost function gave clearly the best results for all three motion sets for both correction methods. Three iterations were sufficient for a good-quality correction using Method 1. The traditional reconstruction-reprojection-based method with three update iterations and the mutual information cost function is a good option for motion correction in clinical myocardial perfusion SPECT.
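    Of the cost functions compared, mutual information can be estimated from a joint histogram of the measured and reprojected projections; the snippet below is a standard histogram-based estimator written with NumPy, shown only to make the cost function concrete, and is not the authors' implementation:

    ```python
    # Histogram-based mutual information between two images (e.g. a measured and a
    # reprojected projection). Standard estimator; not the paper's implementation.
    import numpy as np

    def mutual_information(img_a, img_b, bins=64):
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        p_ab = joint / joint.sum()                 # joint probability
        p_a = p_ab.sum(axis=1, keepdims=True)      # marginal of img_a
        p_b = p_ab.sum(axis=0, keepdims=True)      # marginal of img_b
        nz = p_ab > 0                              # avoid log(0)
        return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))
    ```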

  19. Nurse strategies for optimising patient participation in nursing care.

    PubMed

    Sahlsten, Monika J M; Larsson, Inga E; Sjöström, Björn; Plos, Kaety A E

    2009-09-01

    THE STUDY'S RATIONALE: Patient participation is an essential factor in nursing care and medical treatment and a legal right in many countries. Despite this, patients have experienced insufficient participation, inattention and neglect regarding their problems and may respond with dependence, passivity or taciturnity. Accordingly, nurses' strategies for optimising patient participation in nursing care are an important question for the nursing profession. The aim was to explore Registered Nurses' strategies to stimulate and optimise patient participation in nursing care. The objective was to identify ward nurses' supporting practices. A qualitative research approach was applied. Three focus groups with experienced Registered Nurses providing inpatient somatic care (n = 16) were carried out. These nurses were recruited from three hospitals in West Sweden. The data were analysed using a content analysis technique. The ethics of scientific work were adhered to. According to national Swedish legislation, no formal permit from an ethics committee was required. The participants gave informed consent after verbal and written information. Nurse strategies for optimising patient participation in nursing care were identified as three categories: 'Building close co-operation', 'Getting to know the person' and 'Reinforcing self-care capacity', and their 10 subcategories. The strategies point to a process of emancipation of the patient's potential by finding his/her own inherent knowledge, values, motivation and goals and linking these to actions. Nurses need to strive to guide the patient towards attaining meaningful experiences, discoveries, learning and development. The strategies are important and useful for balancing the asymmetry in the nurse-patient relationship in daily nursing practice, and also in quality assurance to evaluate and improve patient participation and in education. However, further verification of the findings is recommended by means of replication or other studies in different clinical settings. © 2009 The Authors. Journal compilation © 2009 Nordic College of Caring Science.

  20. Production of biosolid fuels from municipal sewage sludge: Technical and economic optimisation.

    PubMed

    Wzorek, Małgorzata; Tańczuk, Mariusz

    2015-08-01

    The article presents a technical and economic analysis of the production of fuels from municipal sewage sludge. The analysis involved the production of two types of fuel composition: sewage sludge with sawdust (PBT fuel) and sewage sludge with meat and bone meal (PBM fuel). The technology of the production line for these sewage fuels was proposed and analysed. The main objective of the study is to find the optimal production capacity. The optimisation analysis was performed for the adopted technical and economic parameters under Polish conditions. The objective function was set as the maximum of the net present value index, and the optimisation procedure was carried out for fuel production line input capacities from 0.5 to 3 t h(-1), using a search step of 0.5 t h(-1). On the basis of the technical and economic assumptions, economic efficiency indexes of the investment were determined for the case of optimal line productivity. The results of the optimisation analysis show that under appropriate conditions, such as the prices of components and of the produced fuels, the production of fuels from sewage sludge can be profitable. In the case of PBT fuel, the calculated economic indexes show the best profitability for a plant capacity over 1.5 t h(-1) output, while the production of PBM fuel is beneficial for a plant with the maximum of the searched capacities: 3.0 t h(-1). Sensitivity analyses carried out during the investigation show that the influence of both technical and economic assumptions on the location of the maximum of the objective function (net present value) is significant. © The Author(s) 2015.
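    The search described, stepping the line capacity from 0.5 to 3 t/h in 0.5 t/h increments and keeping the capacity with the highest net present value, can be sketched as below. The cash-flow model is a deliberately crude placeholder; the paper's technical and economic assumptions are not reproduced here:

    ```python
    # Sketch of the capacity search: evaluate NPV at each candidate throughput and
    # keep the best. The cash-flow model below is an illustrative placeholder only.

    def npv(cash_flows, discount_rate):
        """Net present value of a list of yearly cash flows (year 0 first)."""
        return sum(cf / (1.0 + discount_rate) ** year
                   for year, cf in enumerate(cash_flows))

    def best_capacity(cash_flow_model, capacities, discount_rate=0.08):
        """Return the capacity (t/h) with the highest NPV under the given model."""
        return max(capacities, key=lambda c: npv(cash_flow_model(c), discount_rate))

    capacities = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]  # t/h, search step 0.5 t/h
    # Toy model: capital cost in year 0, then 15 years of operating cash flow.
    toy_model = lambda c: [-1.5e6 * c] + [4.0e5 * c - 1.0e5] * 15
    print(best_capacity(toy_model, capacities))
    ```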

  1. Local Table Condensation in Rough Set Approach for Jumping Emerging Pattern Induction

    NASA Astrophysics Data System (ADS)

    Terlecki, Pawel; Walczak, Krzysztof

    This paper extends the rough set approach for JEP induction based on the notion of a condensed decision table. The original transaction database is transformed to a relational form and patterns are induced by means of local reducts. The transformation employs an item aggregation obtained by coloring a graph that reflects conflicts among items. For efficiency reasons we propose to perform this preprocessing locally, i.e. at the transaction level, to achieve a higher dimensionality gain. A special maintenance strategy is also used to avoid graph rebuilds. Both the global and the local approach have been tested and discussed for dense and synthetically generated sparse datasets.

  2. Modification of surface morphology of Ti6Al4V alloy manufactured by Laser Sintering

    NASA Astrophysics Data System (ADS)

    Draganovská, Dagmar; Ižariková, Gabriela; Guzanová, Anna; Brezinová, Janette; Koncz, Juraj

    2016-06-01

    The paper deals with the evaluation of the relation between roughness parameters of Ti6Al4V alloy produced by DMLS and modified by abrasive blasting. Two types of blasting abrasive were used - white corundum and Zirblast - at three levels of air pressure. The effect of pressure on the values of the individual roughness parameters and the influence of the blasting media on those parameters for samples blasted by white corundum and Zirblast were evaluated by ANOVA. Based on the measured values, the correlation matrix was computed and the statistical significance of the correlations between the monitored parameters was determined from it. The correlation coefficients were also determined.

  3. Exploring KM Features of High-Performance Companies

    NASA Astrophysics Data System (ADS)

    Wu, Wei-Wen

    2007-12-01

    To react to an increasingly competitive business environment, many companies emphasize the importance of knowledge management (KM). Exploring and learning the KM features of high-performance companies is a favorable way to do so. However, finding the critical KM features of high-performance companies is a qualitative analysis problem. To handle this kind of problem, the rough set approach is suitable because it is based on data-mining techniques that discover knowledge without rigorous statistical assumptions. Thus, this paper explored the KM features of high-performance companies using the rough set approach. The results show that high-performance companies stress the importance of both tacit and explicit knowledge, and consider incentives and evaluations essential to implementing KM.

  4. Recruitment and retention. The Derby Theatre Project experience.

    PubMed

    Ainsworth, David

    2003-10-01

    The National Theatre Project was set up in March 2001 by the Modernisation Agency to improve the patient and carer experience, improve employee satisfaction, optimise theatre utilisation and reduce cancelled operations. This is the second article in the series where David Ainsworth, manager of a pilot site project in Derby, describes issues around the Theatre Project. This month the focus is on recruitment, retention and staff morale.

  5. Comparative histomorphometry and resonance frequency analysis of implants with moderately rough surfaces in a loaded animal model.

    PubMed

    Al-Nawas, B; Groetz, K A; Goetz, H; Duschner, H; Wagner, W

    2008-01-01

    Test of favourable conditions for osseointegration with respect to optimum bone-implant contact (BIC) in a loaded animal model. The varied parameters were surface roughness and surface topography of commercially available dental implants. Thirty-two implants of six types of macro and microstructure were included in the study (total 196). The different types were: minimally rough control: Branemark machined Mk III; oxidized surface: TiUnite MkIII and MkIV; ZL Ticer; blasted and etched surface: Straumann SLA; rough control: titanium plasma sprayed (TPS). Sixteen beagle dogs were implanted with the whole set of the above implants. After a healing period of 8 weeks, implants were loaded for 3 months. For the evaluation of the BIC areas, adequately sectioned biopsies were visualized by subsurface scans with confocal laser scanning microscopy (CLSM). The primary statistical analysis testing BIC of the moderately rough implants (mean 56.1+/-13.0%) vs. the minimally rough and the rough controls (mean 53.9+/-11.2%) does not reveal a significant difference (P=0.57). Mean values of 50-70% BIC were found for all implant types. Moderately rough oxidized implants show a median BIC, which is 8% higher than their minimally rough turned counterpart. The intraindividual difference between the TPS and the blasted and etched counterparts revealed no significant difference. The turned and the oxidized implants show median values of the resonance frequency [implant stability quotients (ISQ)] over 60; the nonself-tapping blasted and etched and TPS implants show median values below 60. In conclusion, the benefit of rough surfaces relative to minimally rough ones in this loaded animal model was confirmed histologically. The comparison of different surface treatment modalities revealed no significant differences between the modern moderately rough surfaces. Resonance frequency analysis seems to be influenced in a major part by the transducer used, thus prohibiting the comparison of different implant systems.

  6. Analysis of accuracy in photogrammetric roughness measurements

    NASA Astrophysics Data System (ADS)

    Olkowicz, Marcin; Dąbrowski, Marcin; Pluymakers, Anne

    2017-04-01

    Regarding permeability, one of the most important features of shale gas reservoirs is the effective aperture of cracks opened during hydraulic fracturing, both propped and unpropped. In a propped fracture, the aperture is controlled mostly by proppant size and its embedment, and fracture surface roughness has only a minor influence. In contrast, in an unpropped fracture the aperture is controlled by the fracture roughness and the wall displacement. To measure fracture surface roughness, we have used the photogrammetric method since it is time- and cost-efficient. To estimate the accuracy of this method we compare the photogrammetric measurements with reference measurements taken with a White Light Interferometer (WLI). Our photogrammetric setup is based on a high-resolution 50 Mpx camera combined with a focus stacking technique. The first step for the photogrammetric measurements is to determine the optimal camera positions and lighting: we compare multiple scans of one sample, taken with different settings of lighting and camera positions, with the reference WLI measurement. The second step is to measure all studied fractures with the parameters that produced the best results in the first step. To compare the photogrammetric and WLI measurements, we regridded both data sets onto a regular 10 μm grid, determined the best fit, and then calculated the difference between the measurements. The first results of the comparison show that for 90% of the measured points the absolute vertical distance between WLI and photogrammetry is less than 10 μm, while the mean absolute vertical distance is 5 μm. This proves that our setup can be used for fracture roughness measurements in shales.

  7. Multidisciplinary design optimisation of a recurve bow based on applications of the autogenetic design theory and distributed computing

    NASA Astrophysics Data System (ADS)

    Fritzsche, Matthias; Kittel, Konstantin; Blankenburg, Alexander; Vajna, Sándor

    2012-08-01

    The focus of this paper is to present a method of multidisciplinary design optimisation based on the autogenetic design theory (ADT), which provides methods that are partially implemented in the optimisation software described here. The main thesis of the ADT is that biological evolution and the process of developing products are largely similar, i.e. procedures from biological evolution can be transferred into product development. In order to fulfil requirements and boundary conditions of any kind (which may change at any time), both biological evolution and product development look for appropriate solution possibilities in a certain area, and try to optimise those that are actually promising by varying parameters and combinations of these solutions. As the time necessary for multidisciplinary design optimisations is a critical aspect in product development, ways to distribute the optimisation process with effective use of otherwise unused computing capacity can reduce the optimisation time drastically. Finally, a practical example shows how ADT methods and distributed optimisation are applied to improve a product.

  8. Evaluation of carboxymethyl moringa gum as nanometric carrier.

    PubMed

    Rimpy; Abhishek; Ahuja, Munish

    2017-10-15

    In the present study, carboxymethylation of Moringa oleifera gum was carried out by reaction with monochloroacetic acid. The modified gum was characterised employing Fourier-transform infrared spectroscopy, differential scanning calorimetry, X-ray diffraction, scanning electron microscopy, and rheology. The carboxymethyl modification of moringa gum was found to increase its degree of crystallinity, reduce its viscosity and swelling, increase its surface roughness and render it more anionic. The interaction between carboxymethyl moringa gum and chitosan was optimised by a 2-factor, 3-level central composite experimental design to prepare polyelectrolyte nanoparticles using ofloxacin as a model drug. The optimal calculated parameters were found to be carboxymethyl moringa gum 0.016% (w/v) and chitosan 0.012% (w/v), which provided polyelectrolyte nanoparticles with an average particle size of 231 nm and a zeta potential of 28 mV. Carboxymethyl moringa gum-chitosan polyelectrolyte nanoparticles showed sustained in vitro release of ofloxacin up to 6 h, which followed first-order kinetics, with the mechanism of release being erosion of the polymer matrix. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Counterintuitive effects of substrate roughness on PDCs

    NASA Astrophysics Data System (ADS)

    Andrews, B. J.; Manga, M.

    2012-12-01

    We model dilute pyroclastic density currents (PDCs) using scaled, warm, particle-laden density currents in a 6 m long, 0.6 m wide, 1.8 m tall air-filled tank. In this set of experiments, we run currents over substrates with characteristic roughness scales, hr, ranging over ~3 orders of magnitude from smooth, through 250 μm sandpaper, to 0.1-, 1-, 2-, 5-, and 10-cm hemispheres. As substrate roughness increases, runout distance increases until a critical roughness height, hrc, is reached; further increases in roughness height decrease runout. The critical roughness height appears to be 0.25-0.5 htb, the thickness of the turbulent lower layer of the density currents. The dependence of runout on hr is most likely the result of increases in substrate roughness decreasing the average current velocity and converting that energy into increased turbulence intensity. Small values of hr thus result in increased runout as sedimentation is inhibited by the increased turbulence intensity. At larger values of hr, current behavior is controlled by much larger decreases in average current velocity, even though sedimentation decreases. Scaling our experiments up to the size of real volcanic eruptions suggests that landscapes must have a characteristic roughness hr>10 m to reduce the runout of natural PDCs; smaller roughness scales can increase runout. Comparison of the relevant bulk (Reynolds number, densimetric and thermal Richardson numbers, excess buoyant thermal energy density) and turbulent (Stokes and settling numbers) parameters between our experiments and natural dilute PDCs indicates that we are accurately modeling at least the large-scale behaviors and dynamics of dilute PDCs.

  10. Identifying Degenerative Brain Disease Using Rough Set Classifier Based on Wavelet Packet Method.

    PubMed

    Cheng, Ching-Hsue; Liu, Wei-Xiang

    2018-05-28

    Population aging has become a worldwide phenomenon, which causes many serious problems, and the medical issues related to degenerative brain disease have gradually become a concern. Magnetic Resonance Imaging is one of the most advanced methods for medical imaging and is especially suitable for brain scans. From the literature, although automatic segmentation methods are less laborious and time-consuming, they are restricted to several specific types of images; in addition, hybrid segmentation techniques improve on the shortcomings of single segmentation methods. Therefore, this study proposed a hybrid segmentation combined with a rough set classifier and the wavelet packet method to identify degenerative brain disease. The proposed method is a three-stage image processing method intended to enhance the accuracy of brain disease classification. In the first stage, this study used the proposed hybrid segmentation algorithms to segment the brain ROI (region of interest). In the second stage, the wavelet packet was used to conduct the image decomposition and calculate the feature values. In the final stage, the rough set classifier was utilized to identify the degenerative brain disease. For verification and comparison, two experiments were employed to verify the effectiveness of the proposed method and to compare it with the TV-seg (total variation segmentation) algorithm, the Discrete Cosine Transform, and the listed classifiers. Overall, the results indicated that the proposed method outperforms the listed methods.
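    The second stage, decomposing the segmented ROI with a wavelet packet and computing feature values, can be illustrated with PyWavelets; the energy-per-node features below are a common choice and an assumption on our part, not necessarily the features used in the paper:

    ```python
    # Wavelet-packet energy features for a segmented ROI, using PyWavelets.
    # The choice of wavelet, level and energy features is an illustrative assumption.
    import numpy as np
    import pywt

    def wavelet_packet_energies(roi, wavelet="db1", level=2):
        """Return {node_path: normalised energy} for all nodes at the given level."""
        wp = pywt.WaveletPacket2D(data=roi, wavelet=wavelet, maxlevel=level)
        nodes = wp.get_level(level, order="natural")
        energies = {node.path: float(np.sum(np.square(node.data))) for node in nodes}
        total = sum(energies.values()) or 1.0
        return {path: e / total for path, e in energies.items()}

    # Example: features for a dummy 64x64 array; real use would pass the segmented ROI.
    features = wavelet_packet_energies(np.random.rand(64, 64))
    ```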

  11. Effects of setting under air pressure on the number of surface pores and irregularities of dental investment materials.

    PubMed

    Tourah, Anita; Moshaverinia, Alireza; Chee, Winston W

    2014-02-01

    Surface roughness and irregularities are important properties of dental investment materials that can affect the fit of a restoration. Whether setting under air pressure affects the surface irregularities of gypsum-bonded and phosphate-bonded investment materials is unknown. The purpose of this study was to investigate the effect of air pressure on the pore size and surface irregularities of investment materials immediately after pouring. Three dental investments, 1 gypsum-bonded investment and 2 phosphate-bonded investments, were investigated. They were vacuum mixed according to the manufacturers' recommendations, then poured into a ringless casting system. The prepared specimens were divided into 2 groups: 1 bench setting and the other placed in a pressure pot at 172 kPa. After 45 minutes of setting, the rings were removed and the investments were cut at a right angle to the long axis with a diamond disk. The surfaces of the investments were steam cleaned, dried with an air spray, and observed with a stereomicroscope. A profilometer was used to evaluate the surface roughness (μm) of the castings. The number of surface pores was counted for 8 specimens from each group and the means and standard deviations were reported. Two-way ANOVA was used to compare the data. Specimens that set under atmospheric air pressure had a significantly higher number of pores than specimens that set under increased pressure (P<.05). No statistically significant differences for surface roughness were found (P=.078). Also, no significant difference was observed among the 3 different types of materials tested (P>.05). Specimens set under positive pressure in a pressure chamber presented fewer surface bubbles than specimens set under atmospheric pressure. Positive pressure is effective and, therefore, is recommended for both gypsum-bonded and phosphate-bonded investment materials. Copyright © 2014 Editorial Council for the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.

  12. Simultaneous feature selection and parameter optimisation using an artificial ant colony: case study of melting point prediction

    PubMed Central

    O'Boyle, Noel M; Palmer, David S; Nigsch, Florian; Mitchell, John BO

    2008-01-01

    Background We present a novel feature selection algorithm, Winnowing Artificial Ant Colony (WAAC), that performs simultaneous feature selection and model parameter optimisation for the development of predictive quantitative structure-property relationship (QSPR) models. The WAAC algorithm is an extension of the modified ant colony algorithm of Shen et al. (J Chem Inf Model 2005, 45: 1024–1029). We test the ability of the algorithm to develop a predictive partial least squares model for the Karthikeyan dataset (J Chem Inf Model 2005, 45: 581–590) of melting point values. We also test its ability to perform feature selection on a support vector machine model for the same dataset. Results Starting from an initial set of 203 descriptors, the WAAC algorithm selected a PLS model with 68 descriptors which has an RMSE on an external test set of 46.6°C and R2 of 0.51. The number of components chosen for the model was 49, which was close to optimal for this feature selection. The selected SVM model has 28 descriptors (cost of 5, ε of 0.21) and an RMSE of 45.1°C and R2 of 0.54. This model outperforms a kNN model (RMSE of 48.3°C, R2 of 0.47) for the same data and has similar performance to a Random Forest model (RMSE of 44.5°C, R2 of 0.55). However it is much less prone to bias at the extremes of the range of melting points as shown by the slope of the line through the residuals: -0.43 for WAAC/SVM, -0.53 for Random Forest. Conclusion With a careful choice of objective function, the WAAC algorithm can be used to optimise machine learning and regression models that suffer from overfitting. Where model parameters also need to be tuned, as is the case with support vector machine and partial least squares models, it can optimise these simultaneously. The moving probabilities used by the algorithm are easily interpreted in terms of the best and current models of the ants, and the winnowing procedure promotes the removal of irrelevant descriptors. PMID:18959785

  13. Explicit reference governor for linear systems

    NASA Astrophysics Data System (ADS)

    Garone, Emanuele; Nicotra, Marco; Ntogramatzidis, Lorenzo

    2018-06-01

    The explicit reference governor is a constrained control scheme that was originally introduced for generic nonlinear systems. This paper presents two explicit reference governor strategies that are specifically tailored for the constrained control of linear time-invariant systems subject to linear constraints. Both strategies are based on the idea of maintaining the system states within an invariant set which is entirely contained in the constraints. This invariant set can be constructed by exploiting either the Lyapunov inequality or modal decomposition. To improve the performance, we show that the two strategies can be combined by choosing at each time instant the least restrictive set. Numerical simulations illustrate that the proposed scheme achieves performances that are comparable to optimisation-based reference governors.
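
    A minimal sketch of the Lyapunov-based idea is given below for a toy discrete-time system: the applied reference is advanced toward the desired one only as fast as the Lyapunov level set around the current equilibrium remains inside the constraint. The system matrices, constraint and gain are made-up values, not the paper's examples.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Stable closed-loop dynamics x+ = A x + B v and one linear constraint c^T x <= d
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [0.1]])
c = np.array([1.0, 0.0]); d = 1.0

P = solve_discrete_lyapunov(A.T, np.eye(2))        # A^T P A - P = -I
Pinv_c = np.linalg.solve(P, c)

def steady_state(v):
    return np.linalg.solve(np.eye(2) - A, B.ravel() * v)

def gamma_max(v):
    """Largest Lyapunov level set around the steady state that stays in c^T x <= d."""
    margin = d - c @ steady_state(v)
    return (margin ** 2) / (c @ Pinv_c) if margin > 0 else 0.0

def erg_step(x, v, r, k=0.05):
    """Move the applied reference v toward r as fast as the safety margin allows."""
    e = x - steady_state(v)
    rho = max(gamma_max(v) - e @ P @ e, 0.0)        # dynamic safety margin
    return v + k * rho * np.sign(r - v)

x, v, r = np.zeros(2), 0.0, 8.0
for _ in range(300):
    v = erg_step(x, v, r)
    x = A @ x + B.ravel() * v
print(v, c @ x <= d + 1e-9)                         # v saturates near the constraint
```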

  14. Radiative transfer simulations of the two-dimensional ocean glint reflectance and determination of the sea surface roughness.

    PubMed

    Lin, Zhenyi; Li, Wei; Gatebe, Charles; Poudyal, Rajesh; Stamnes, Knut

    2016-02-20

    An optimized discrete-ordinate radiative transfer model (DISORT3) with a pseudo-two-dimensional bidirectional reflectance distribution function (BRDF) is used to simulate and validate ocean glint reflectances at an infrared wavelength (1036 nm) by matching model results with a complete set of BRDF measurements obtained from the NASA cloud absorption radiometer (CAR) deployed on an aircraft. The surface roughness is then obtained through a retrieval algorithm and is used to extend the simulation into the visible spectral range where diffuse reflectance becomes important. In general, the simulated reflectances and surface roughness information are in good agreement with the measurements, and the diffuse reflectance in the visible, ignored in current glint algorithms, is shown to be important. The successful implementation of this new treatment of ocean glint reflectance and surface roughness in DISORT3 will help improve glint correction algorithms in current and future ocean color remote sensing applications.

  15. Radiative Transfer Simulations of the Two-Dimensional Ocean Glint Reflectance and Determination of the Sea Surface Roughness

    NASA Technical Reports Server (NTRS)

    Lin, Zhenyi; Li, Wei; Gatebe, Charles; Poudyal, Rajesh; Stamnes, Knut

    2016-01-01

    An optimized discrete-ordinate radiative transfer model (DISORT3) with a pseudo-two-dimensional bidirectional reflectance distribution function (BRDF) is used to simulate and validate ocean glint reflectances at an infrared wavelength (1036 nm) by matching model results with a complete set of BRDF measurements obtained from the NASA cloud absorption radiometer (CAR) deployed on an aircraft. The surface roughness is then obtained through a retrieval algorithm and is used to extend the simulation into the visible spectral range where diffuse reflectance becomes important. In general, the simulated reflectances and surface roughness information are in good agreement with the measurements, and the diffuse reflectance in the visible, ignored in current glint algorithms, is shown to be important. The successful implementation of this new treatment of ocean glint reflectance and surface roughness in DISORT3 will help improve glint correction algorithms in current and future ocean color remote sensing applications.

  16. Quantitative three-dimensional ice roughness from scanning electron microscopy

    NASA Astrophysics Data System (ADS)

    Butterfield, Nicholas; Rowe, Penny M.; Stewart, Emily; Roesel, David; Neshyba, Steven

    2017-03-01

    We present a method for inferring surface morphology of ice from scanning electron microscope images. We first develop a novel functional form for the backscattered electron intensity as a function of ice facet orientation; this form is parameterized using smooth ice facets of known orientation. Three-dimensional representations of rough surfaces are retrieved at approximately micrometer resolution using Gauss-Newton inversion within a Bayesian framework. Statistical analysis of the resulting data sets permits characterization of ice surface roughness with a much higher statistical confidence than previously possible. A survey of results in the range -39°C to -29°C shows that characteristics of the roughness (e.g., Weibull parameters) are sensitive not only to the degree of roughening but also to the symmetry of the roughening. These results suggest that roughening characteristics obtained by remote sensing and in situ measurements of atmospheric ice clouds can potentially provide more facet-specific information than has previously been appreciated.

  17. Cleaning of optical surfaces by capacitively coupled RF discharge plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yadav, P. K., E-mail: praveenyadav@rrcat.gov.in; Rai, S. K.; Nayak, M.

    2014-04-24

    In this paper, we report the cleaning of a carbon-capped molybdenum (Mo) thin film by an in-house developed radio frequency (RF) plasma reactor, at different powers and exposure times. Carbon-capped Mo films were exposed to oxygen plasma for different durations at three different power settings, at a constant pressure. After each exposure, the thickness of the carbon layer and the roughness of the film were determined by hard x-ray reflectivity measurements. It was observed that most of the carbon film was removed in the first 15 minutes of exposure. A high-density layer formed on top of the Mo film was also observed, and it was noted that this layer cannot be removed by successive exposures at different powers. A significant improvement in interface roughness with a slight improvement in top film roughness was observed. The surface roughness of the exposed and unexposed samples was also confirmed by atomic force microscopy measurements.

  18. Optimisation in radiotherapy. III: Stochastic optimisation algorithms and conclusions.

    PubMed

    Ebert, M

    1997-12-01

    This is the final article in a three part examination of optimisation in radiotherapy. Previous articles have established the bases and form of the radiotherapy optimisation problem, and examined certain types of optimisation algorithm, namely, those which perform some form of ordered search of the solution space (mathematical programming), and those which attempt to find the closest feasible solution to the inverse planning problem (deterministic inversion). The current paper examines algorithms which search the space of possible irradiation strategies by stochastic methods. The resulting iterative search methods move about the solution space by sampling random variates, which gradually become more constricted as the algorithm converges upon the optimal solution. This paper also discusses the implementation of optimisation in radiotherapy practice.
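
    As an illustration of the stochastic search idea (not any specific algorithm reviewed in the paper), the sketch below runs a simulated annealing loop over beam weights for a made-up quadratic dose objective; the influence matrix, prescription and cooling schedule are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: choose non-negative weights for 5 beams so that
# the dose (influence matrix D @ w) matches a prescription as closely as possible.
D = rng.random((20, 5))            # dose influence of each beam on 20 voxels
d_presc = np.full(20, 1.0)         # prescribed dose per voxel

def cost(w):
    return np.sum((D @ w - d_presc) ** 2)

w, T = np.abs(rng.normal(size=5)), 1.0
best_w, best_c = w.copy(), cost(w)
for _ in range(5000):
    cand = np.clip(w + rng.normal(scale=0.1, size=5), 0, None)   # random perturbation
    dc = cost(cand) - cost(w)
    if dc < 0 or rng.random() < np.exp(-dc / T):                 # Metropolis acceptance
        w = cand
    if cost(w) < best_c:
        best_w, best_c = w.copy(), cost(w)
    T *= 0.999                                                   # cooling schedule
print(best_c)
```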

  19. Towards a more detailed representation of high-latitude vegetation in the global land surface model ORCHIDEE (ORC-HL-VEGv1.0)

    NASA Astrophysics Data System (ADS)

    Druel, Arsène; Peylin, Philippe; Krinner, Gerhard; Ciais, Philippe; Viovy, Nicolas; Peregon, Anna; Bastrikov, Vladislav; Kosykh, Natalya; Mironycheva-Tokareva, Nina

    2017-12-01

    Simulation of vegetation-climate feedbacks in high latitudes in the ORCHIDEE land surface model was improved by the addition of three new circumpolar plant functional types (PFTs), namely non-vascular plants representing bryophytes and lichens, Arctic shrubs and Arctic C3 grasses. Non-vascular plants are assigned no stomatal conductance, very shallow roots, and can desiccate during dry episodes and become active again during wet periods, which gives them a larger phenological plasticity (i.e. adaptability and resilience to severe climatic constraints) compared to grasses and shrubs. Shrubs have a specific carbon allocation scheme, and differ from trees by their larger survival rates in winter, due to protection by snow. Arctic C3 grasses have the same equations as in the original ORCHIDEE version, but different parameter values, optimised from in situ observations of biomass and net primary productivity (NPP) in Siberia. In situ observations of living biomass and productivity from Siberia were used to calibrate the parameters of the new PFTs using a Bayesian optimisation procedure. With the new PFTs, we obtain a lower NPP by 31 % (from 55° N), as well as a lower roughness length (-41 %), transpiration (-33 %) and a higher winter albedo (by +3.6 %) due to increased snow cover. A simulation of the water balance and runoff and drainage in the high northern latitudes using the new PFTs results in an increase of fresh water discharge in the Arctic ocean by 11 % (+140 km3 yr-1), owing to less evapotranspiration. Future developments should focus on the competition between these three PFTs and boreal tree PFTs, in order to simulate their area changes in response to climate change, and the effect of carbon-nitrogen interactions.

  20. Wet-chemical passivation of atomically flat and structured silicon substrates for solar cell application

    NASA Astrophysics Data System (ADS)

    Angermann, H.; Rappich, J.; Korte, L.; Sieber, I.; Conrad, E.; Schmidt, M.; Hübener, K.; Polte, J.; Hauschild, J.

    2008-04-01

    Special sequences of wet-chemical oxidation and etching steps were optimised with respect to the etching behaviour of differently oriented silicon to prepare very smooth silicon interfaces with excellent electronic properties on mono- and poly-crystalline substrates. Surface photovoltage (SPV) and photoluminescence (PL) measurements, atomic force microscopy (AFM) and scanning electron microscopy (SEM) investigations were utilised to develop wet-chemical smoothing procedures for atomically flat and structured surfaces, respectively. Hydrogen-termination as well as passivation by wet-chemical oxides were used to inhibit surface contamination and native oxidation during the technological processing. Compared to conventional pre-treatments, significantly lower micro-roughness and densities of surface states were achieved on mono-crystalline Si(100), on evenly distributed atomic steps, such as on vicinal Si(111), on silicon wafers with randomly distributed upside pyramids, and on poly-crystalline EFG ( Edge-defined Film-fed- Growth) silicon substrates. The recombination loss at a-Si:H/c-Si interfaces prepared on c-Si substrates with randomly distributed upside pyramids was markedly reduced by an optimised wet-chemical smoothing procedure, as determined by PL measurements. For amorphous-crystalline hetero-junction solar cells (ZnO/a-Si:H(n)/c-Si(p)/Al) with textured c-Si substrates the smoothening procedure results in a significant increase of short circuit current Isc, fill factor and efficiency η. The scatter in the cell parameters for measurements on different cells is much narrower, as compared to conventional pre-treatments, indicating more well-defined and reproducible surface conditions prior to a-Si:H emitter deposition and/or a higher stability of the c-Si surface against variations in the a-Si:H deposition conditions.

  1. Determining the Effect of Material Hardness During the Hard Turning of AISI4340 Steel

    NASA Astrophysics Data System (ADS)

    Kambagowni, Venkatasubbaiah; Chitla, Raju; Challa, Suresh

    2018-05-01

    In the present manufacturing industries hardened steels are most widely used in the applications like tool design and mould design. It enhances the application range of hard turning of hardened steels in manufacturing industries. This study discusses the impact of workpiece hardness, feed and depth of cut on Arithmetic mean roughness (Ra), root mean square roughness (Rq), mean depth of roughness (Rz) and total roughness (Rt) during the hard turning. Experiments have been planned according to the Box-Behnken design and conducted on hardened AISI4340 steel at 45, 50 and 55 HRC with wiper ceramic cutting inserts. Cutting speed is kept constant during this study. The analysis of variance was used to determine the effects of the machining parameters. 3-D response surface plots drawn based on RSM were utilized to set up the input-output relationships. The results indicated that the feed rate has the most significant parameter for Ra, Rq and Rz and hardness has the most critical parameter for the Rt. Further, hardness shows its influence over all the surface roughness characteristics.

  2. Variation in bed level shear stress on surfaces sheltered by nonerodible roughness elements

    NASA Astrophysics Data System (ADS)

    Sutton, Stephen L. F.; McKenna-Neuman, Cheryl

    2008-09-01

    Direct bed level observations of surface shear stress, pressure gradient variability, turbulence intensity, and fluid flow patterns were carried out in the vicinity of cylindrical roughness elements mounted in a boundary layer wind tunnel. Paired corkscrew vortices shed from each of the elements result in elevated shear stress and increased potential for the initiation of particle transport within the far wake. While the size and shape of these trailing vortices change with the element spacing, they persist even for large roughness densities. Wake interference coincides with the impingement of the upwind horseshoe vortices upon one another at a point when their diameter approaches half the distance between the roughness elements. While the erosive capability of the horseshoe vortex has been suggested for a variety of settings, the present study shows that the fluid stress immediately beneath this coherent structure is actually small in comparison to that caused by compression of the incident flow as it is deflected around the element and attached vortex. Observations such as these are required for further refinement of models of stress partitioning on rough surfaces.

  3. Tablet potency of Tianeptine in coated tablets by near infrared spectroscopy: model optimisation, calibration transfer and confidence intervals.

    PubMed

    Boiret, Mathieu; Meunier, Loïc; Ginot, Yves-Michel

    2011-02-20

    A near infrared (NIR) method was developed for determination of tablet potency of active pharmaceutical ingredient (API) in a complex coated tablet matrix. The calibration set contained samples from laboratory and production scale batches. The reference values were obtained by high performance liquid chromatography (HPLC) and partial least squares (PLS) regression was used to establish a model. The model was challenged by calculating tablet potency of two external test sets. Root mean square errors of prediction were respectively equal to 2.0% and 2.7%. To use this model with a second spectrometer from the production field, a calibration transfer method called piecewise direct standardisation (PDS) was used. After the transfer, the root mean square error of prediction of the first test set was 2.4% compared to 4.0% without transferring the spectra. A statistical technique using bootstrap of PLS residuals was used to estimate confidence intervals of tablet potency calculations. This method requires an optimised PLS model, selection of the bootstrap number and determination of the risk. In the case of a chemical analysis, the tablet potency value will be included within the confidence interval calculated by the bootstrap method. An easy to use graphical interface was developed to easily determine if the predictions, surrounded by minimum and maximum values, are within the specifications defined by the regulatory organisation. Copyright © 2010 Elsevier B.V. All rights reserved.
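
    A minimal sketch of the residual-bootstrap idea on a PLS model is shown below, using synthetic data in place of the NIR spectra; the number of latent variables, the bootstrap count and the data are assumptions for illustration only.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)

# Synthetic stand-in for NIR calibration data: 80 tablets, 200 wavelengths, potency in %
X = rng.normal(size=(80, 200))
y = X[:, :5].sum(axis=1) * 2.0 + 100.0 + rng.normal(scale=0.8, size=80)

pls = PLSRegression(n_components=5).fit(X, y)
resid = y - pls.predict(X).ravel()

# bootstrap of PLS residuals: refit on resampled pseudo-responses and collect predictions
x_new = X[:1]
boot_preds = []
for _ in range(200):
    y_boot = pls.predict(X).ravel() + rng.choice(resid, size=len(y), replace=True)
    boot_preds.append(PLSRegression(n_components=5).fit(X, y_boot).predict(x_new).ravel()[0])

lo, hi = np.percentile(boot_preds, [2.5, 97.5])
print(pls.predict(x_new).ravel()[0], (lo, hi))     # point prediction and ~95% interval
```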

  4. Optimisation on processing parameters for minimising warpage on side arm using response surface methodology (RSM) and particle swarm optimisation (PSO)

    NASA Astrophysics Data System (ADS)

    Rayhana, N.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.; Sazli, M.; Yahya, Z. R.

    2017-09-01

    This study presents the application of an optimisation method to reduce the warpage of a side arm part. Autodesk Moldflow Insight software was used to analyse the warpage. A design of experiments (DOE) for response surface methodology (RSM) was constructed, and particle swarm optimisation (PSO) was then applied to the regression equation obtained from the RSM. The optimisation yields processing parameters that minimise warpage. Mould temperature, melt temperature, packing pressure, packing time and cooling time were selected as the variable parameters; parameter selection was based on the factors most significantly affecting warpage as reported by previous researchers. The results show that warpage was improved by 28.16% for RSM and 28.17% for PSO; the additional improvement of PSO over RSM is only 0.01%. Thus, optimisation using RSM is already sufficient to give the best combination of parameters and an optimum warpage value for the side arm part. The most significant parameter affecting warpage is packing pressure.
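
    The sketch below illustrates the RSM-then-PSO step on a made-up second-order response surface standing in for the fitted warpage model; the polynomial coefficients, swarm settings and coded factor ranges are assumptions, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(1)

def warpage_rsm(x):
    """Hypothetical second-order RSM polynomial in two coded factors
    (e.g. packing pressure and melt temperature); stands in for the fitted model."""
    x1, x2 = x[..., 0], x[..., 1]
    return 0.9 - 0.25 * x1 - 0.10 * x2 + 0.15 * x1 ** 2 + 0.08 * x2 ** 2 + 0.05 * x1 * x2

# particle swarm optimisation over the coded design space [-1, 1]^2
n, dim, w, c1, c2 = 30, 2, 0.7, 1.5, 1.5
pos = rng.uniform(-1, 1, (n, dim)); vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), warpage_rsm(pos)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -1, 1)
    val = warpage_rsm(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]

print(gbest, warpage_rsm(gbest))   # coded factor settings with minimum predicted warpage
```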

  5. Metaheuristic optimisation methods for approximate solving of singular boundary value problems

    NASA Astrophysics Data System (ADS)

    Sadollah, Ali; Yadav, Neha; Gao, Kaizhou; Su, Rong

    2017-07-01

    This paper presents a novel approximation technique based on metaheuristics and a weighted residual function (WRF) for tackling singular boundary value problems (BVPs) arising in engineering and science. With the aid of certain fundamental concepts of mathematics, Fourier series expansion, and metaheuristic optimisation algorithms, singular BVPs can be recast as an optimisation problem with the boundary conditions as constraints. The target is to minimise the WRF (i.e. the error function) constructed in the approximation of the BVP. The scheme uses the generational distance metric to evaluate the quality of the approximate solutions against exact solutions (i.e. as an error evaluator). Four test problems, including two linear and two non-linear singular BVPs, are considered in this paper to check the efficiency and accuracy of the proposed algorithm. The optimisation task is performed using three different optimisers: particle swarm optimisation, the water cycle algorithm, and the harmony search algorithm. The optimisation results obtained show that the suggested technique can be successfully applied for the approximate solution of singular BVPs.
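
    A minimal sketch of the approach, under assumed details, is given below: a trial solution built from a short Fourier sine series (satisfying the boundary conditions exactly) is tuned by a metaheuristic (here SciPy's differential evolution, rather than the three optimisers used in the paper) to minimise the weighted residual of a toy singular BVP whose exact solution is y = x^2.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical singular BVP with known solution y = x^2:
#   y'' + (1/x) y' = 4 on (0, 1],  y(0) = 0,  y(1) = 1.
xc = np.linspace(0.02, 0.98, 49)          # collocation points (avoid the singular point)
h = 1e-3                                  # step for finite-difference derivatives

def trial(x, a):
    """Trial solution that satisfies both boundary conditions exactly;
    the free coefficients a weight a short Fourier sine series."""
    s = sum(ak * np.sin((k + 1) * np.pi * x) for k, ak in enumerate(a))
    return x + x * (1.0 - x) * s

def wrf(a):
    """Weighted residual function: sum of squared ODE residuals at the collocation points."""
    y = trial(xc, a)
    d1 = (trial(xc + h, a) - trial(xc - h, a)) / (2 * h)
    d2 = (trial(xc + h, a) - 2 * y + trial(xc - h, a)) / h ** 2
    return np.sum((d2 + d1 / xc - 4.0) ** 2)

res = differential_evolution(wrf, bounds=[(-2, 2)] * 4, seed=0)
print(res.fun, np.max(np.abs(trial(xc, res.x) - xc ** 2)))   # residual and error vs y = x^2
```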

  6. Mars radar clutter and surface roughness characteristics from MARSIS data

    NASA Astrophysics Data System (ADS)

    Campbell, Bruce A.; Schroeder, Dustin M.; Whitten, Jennifer L.

    2018-01-01

    Radar sounder studies of icy, sedimentary, and volcanic settings can be affected by reflections from surface topography surrounding the sensor nadir location. These off-nadir 'clutter' returns appear at similar time delays to subsurface echoes and complicate geologic interpretation. Additionally, broadening of the radar echo in delay by surface returns sets a limit on the detectability of subsurface interfaces. We use MARSIS 4 MHz data to study variations in the nadir and off-nadir clutter echoes, from about 300 km to 1000 km altitude, R, for a wide range of surface roughness. This analysis uses a new method of characterizing ionospheric attenuation to merge observations over a range of solar zenith angle and date. Mirror-like reflections should scale as R^-2, but the observed 4 MHz nadir echoes often decline by a somewhat smaller power-law factor because MARSIS on-board processing increases the number of summed pulses with altitude. Prior predictions of the contributions from clutter suggest a steeper decline with R than the nadir echoes, but in very rough areas the ratio of off-nadir returns to nadir echoes shows instead an increase of about R^(1/2) with altitude. This is likely due in part to an increase in backscatter from the surface as the radar incidence angle at some round-trip time delay declines with increasing R. It is possible that nadir and clutter echo properties in other planetary sounding observations, including RIME and REASON flyby data for Europa, will vary in the same way with altitude, but there may be differences in the nature and scale of target roughness (e.g., icy versus rocky surfaces). We present global maps of the ionosphere- and altitude-corrected nadir echo strength, and of a 'clutter' parameter based on the ratio of off-nadir to nadir echoes. The clutter map offers a view of surface roughness at ∼75 m length scale, bridging the spatial-scale gap between SHARAD roughness estimates and MOLA-derived parameters.

  7. Examination of Routine Practice Patterns in the Hospital Information Data Warehouse: Use of OLAP and Rough Set Analysis with Clinician Feedback

    PubMed Central

    Grant, Andrew; Grant, Gwyneth; Gagné, Jean; Blanchette, Carl; Comeau, Émilie; Brodeur, Guillaume; Dionne, Jonathon; Ayite, Alphonse; Synak, Piotr; Wroblewski, Jakub; Apanowitz, Cas

    2001-01-01

    The patient-centred electronic patient record enables retrospective analysis of practice patterns as one means to assist clinicians in adjusting and improving their practice. An interrogation of the data warehouse linking test use to Diagnostic Related Group (DRG) over one year's data from the Sherbrooke University Hospital showed that one-third of patients used two-thirds of these diagnostic tests. Using rough set analysis, zones of repeated tests were demonstrated where results remained within stable limits. It was concluded that 30% of fluid and electrolyte testing was probably unnecessary. These findings led to an endorsement of changing the test request formats in the hospital information system from profiles to individual tests requiring justification.

  8. Stochastic optimisation of water allocation on a global scale

    NASA Astrophysics Data System (ADS)

    Schmitz, Oliver; Straatsma, Menno; Karssenberg, Derek; Bierkens, Marc F. P.

    2014-05-01

    Climate change, an increasing population and further economic development are expected to increase water scarcity in many regions of the world. Optimal water management strategies are required to minimise the gap between water supply and domestic, industrial and agricultural water demand. A crucial aspect of water allocation is the spatial scale of optimisation. Blue water supply peaks in the upstream parts of large catchments, whereas demands are often largest in the industrialised downstream parts. Two extremes exist in water allocation: (i) 'first come, first served', which allows the upstream water demands to be fulfilled without consideration of downstream demands, and (ii) 'all for one, one for all', which satisfies water allocation over the whole catchment. In practice, water treaties govern intermediate solutions. The objective of this study is to determine the effect of these two end members on water allocation optimisation with respect to water scarcity. We conduct this study on a global scale with the year 2100 as the temporal horizon. Water supply is calculated using the hydrological model PCR-GLOBWB, operating at a 5 arcminute resolution and a daily time step. PCR-GLOBWB is forced with temperature and precipitation fields from the Hadgem2-ES global circulation model that participated in the latest coupled model intercomparison project (CMIP5). Water demands are calculated for representative concentration pathway 6.0 (RCP 6.0) and shared socio-economic pathway scenario 2 (SSP2). To enable fast computation of the optimisation, we developed a hydrologically correct network of 1800 basin segments with an average size of 100 000 square kilometres; the maximum number of nodes in a network was 140, for the Amazon Basin. Water demands and supplies are aggregated to cubic kilometres per month per segment. A new open-source implementation was developed for the stochastic optimisation of the water allocation. We apply a genetic algorithm to each segment to estimate the set of parameters that distributes the water supply for each node. We use the Python programming language and a flexible software architecture that allows us to straightforwardly 1) exchange the process description for the nodes so that different water allocation schemes can be tested, 2) exchange the objective function, 3) apply the optimisation either to the whole catchment or to different sub-levels, and 4) use multi-core CPUs concurrently, thereby reducing computation time. We demonstrate the application of the scientific workflow to the model outputs of PCR-GLOBWB and present first results on how water scarcity depends on the choice between the two extremes in water allocation.
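
    The sketch below illustrates the per-segment genetic-algorithm idea on a toy four-segment chain: allocation fractions are evolved to minimise the total unmet demand as water is routed downstream. The supplies, demands and GA settings are invented for illustration and do not reproduce the study's network or code.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical chain of 4 river segments (upstream -> downstream), km^3 per month
supply = np.array([8.0, 2.0, 1.0, 0.5])
demand = np.array([1.0, 3.0, 4.0, 5.0])

def water_gap(f):
    """Route water down the chain; f[i] is the fraction of the water available
    at segment i that may be withdrawn locally. Returns total unmet demand."""
    inflow, gap = 0.0, 0.0
    for i in range(len(supply)):
        avail = inflow + supply[i]
        used = min(f[i] * avail, demand[i])
        gap += demand[i] - used
        inflow = avail - used
    return gap

# a minimal genetic algorithm over the allocation fractions
pop = rng.random((40, 4))
for _ in range(200):
    fit = np.array([water_gap(ind) for ind in pop])
    parents = pop[np.argsort(fit)[:20]]                       # truncation selection
    cross = rng.integers(0, 2, (20, 4)).astype(bool)          # uniform crossover
    children = np.where(cross, parents, parents[::-1])
    children = children + rng.normal(scale=0.05, size=children.shape)  # mutation
    pop = np.clip(np.vstack([parents, children]), 0, 1)

best = pop[np.argmin([water_gap(ind) for ind in pop])]
print(best, water_gap(best))
```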

  9. Laboratory evaluation of an optimised internet-based speech-in-noise test for occupational high-frequency hearing loss screening: Occupational Earcheck.

    PubMed

    Sheikh Rashid, Marya; Leensen, Monique C J; de Laat, Jan A P M; Dreschler, Wouter A

    2017-11-01

    The "Occupational Earcheck" (OEC) is a Dutch online self-screening speech-in-noise test developed for the detection of occupational high-frequency hearing loss (HFHL). This study evaluates an optimised version of the test and determines the most appropriate masking noise. The original OEC was improved by homogenisation of the speech material, and shortening the test. A laboratory-based cross-sectional study was performed in which the optimised OEC in five alternative masking noise conditions was evaluated. The study was conducted on 18 normal-hearing (NH) adults, and 15 middle-aged listeners with HFHL. The OEC in a low-pass (LP) filtered stationary background noise (test version LP 3: with a cut-off frequency of 1.6 kHz, and a noise floor of -12 dB) was the most accurate version tested. The test showed a reasonable sensitivity (93%), and specificity (94%) and test reliability (intra-class correlation coefficient: 0.84, mean within-subject standard deviation: 1.5 dB SNR, slope of psychometric function: 13.1%/dB SNR). The improved OEC, with homogenous word material in a LP filtered noise, appears to be suitable for the discrimination between younger NH listeners and older listeners with HFHL. The appropriateness of the OEC for screening purposes in an occupational setting will be studied further.

  10. Defining the Role of the Pharmacy Technician and Identifying Their Future Role in Medicines Optimisation

    PubMed Central

    Boughen, Melanie; Sutton, Jane; Fenn, Tess

    2017-01-01

    Background: Traditionally, pharmacy technicians have worked alongside pharmacists in community and hospital pharmacy. Changes within pharmacy provide opportunity for role expansion and with no apparent career pathway, there is a need to define the current pharmacy technician role and role in medicines optimisation. Aim: To capture the current roles of pharmacy technicians and identify how their future role will contribute to medicines optimisation. Methods: Following ethical approval and piloting, an online survey to ascertain pharmacy technicians’ views about their roles was undertaken. Recruitment took place in collaboration with the Association of Pharmacy Technicians UK. Data were exported to SPSS, data screened and descriptive statistics produced. Free text responses were analysed and tasks collated into categories reflecting the type of work involved in each task. Results: Responses received were 393 (28%, n = 1380). Results were organised into five groups: i.e., hospital, community, primary care, General Practitioner (GP) practice and other (which included HM Prison Service). Thirty tasks were reported as commonly undertaken in three or more settings and 206 (84.7%, n = 243) pharmacy technicians reported they would like to expand their role. Conclusions: Tasks core to hospital and community pharmacy should be considered for inclusion to initial education standards to reflect current practice. Post qualification, pharmacy technicians indicate a significant desire to expand clinically and managerially allowing pharmacists more time in patient-facing/clinical roles. PMID:28970452

  11. Defining the Role of the Pharmacy Technician and Identifying Their Future Role in Medicines Optimisation.

    PubMed

    Boughen, Melanie; Sutton, Jane; Fenn, Tess; Wright, David

    2017-07-15

    Traditionally, pharmacy technicians have worked alongside pharmacists in community and hospital pharmacy. Changes within pharmacy provide opportunity for role expansion and with no apparent career pathway, there is a need to define the current pharmacy technician role and role in medicines optimisation. To capture the current roles of pharmacy technicians and identify how their future role will contribute to medicines optimisation. Following ethical approval and piloting, an online survey to ascertain pharmacy technicians' views about their roles was undertaken. Recruitment took place in collaboration with the Association of Pharmacy Technicians UK. Data were exported to SPSS, data screened and descriptive statistics produced. Free text responses were analysed and tasks collated into categories reflecting the type of work involved in each task. Responses received were 393 (28%, n = 1380). Results were organised into five groups: i.e., hospital, community, primary care, General Practitioner (GP) practice and other (which included HM Prison Service). Thirty tasks were reported as commonly undertaken in three or more settings and 206 (84.7%, n = 243) pharmacy technicians reported they would like to expand their role. Tasks core to hospital and community pharmacy should be considered for inclusion to initial education standards to reflect current practice. Post qualification, pharmacy technicians indicate a significant desire to expand clinically and managerially allowing pharmacists more time in patient-facing/clinical roles.

  12. Improving the Fit of a Land-Surface Model to Data Using its Adjoint

    NASA Astrophysics Data System (ADS)

    Raoult, Nina; Jupp, Tim; Cox, Peter; Luke, Catherine

    2016-04-01

    Land-surface models (LSMs) are crucial components of the Earth System Models (ESMs) which are used to make coupled climate-carbon cycle projections for the 21st century. The Joint UK Land Environment Simulator (JULES) is the land-surface model used in the climate and weather forecast models of the UK Met Office. In this study, JULES is automatically differentiated using commercial software from FastOpt, resulting in an analytical gradient, or adjoint, of the model. Using this adjoint, the adJULES parameter estimation system has been developed, to search for locally optimum parameter sets by calibrating against observations. We present an introduction to the adJULES system and demonstrate its ability to improve the model-data fit using eddy covariance measurements of gross primary production (GPP) and latent heat (LE) fluxes. adJULES also has the ability to calibrate over multiple sites simultaneously. This feature is used to define new optimised parameter values for the 5 Plant Functional Types (PFTs) in JULES. The optimised PFT-specific parameters improve the performance of JULES over 90% of the FLUXNET sites used in the study. These reductions in error are shown and compared to reductions found due to site-specific optimisations. Finally, we show that calculation of the 2nd derivative of JULES allows us to produce posterior probability density functions of the parameters and how knowledge of parameter values is constrained by observations.

  13. [Evaluation of the animal-assisted therapy in Alzheimer's disease].

    PubMed

    Quibel, Clémence; Bonin, Marie; Bonnet, Magalie; Gaimard, Maryse; Mourey, France; Moesch, Isabelle; Ancet, Pierre

    Animal-assisted therapy sessions have been set up in a protected unit for patients with a dementia-related syndrome. The aim is to measure the effects of animal-assisted therapy on behavioural disorders in daily life and care. The results obtained provided some interesting areas to explore and recommendations with a view to optimising the implementation of such a system. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  14. Optimising molecular diagnostic capacity for effective control of tuberculosis in high-burden settings.

    PubMed

    Sabiiti, W; Mtafya, B; Kuchaka, D; Azam, K; Viegas, S; Mdolo, A; Farmer, E C W; Khonga, M; Evangelopoulos, D; Honeyborne, I; Rachow, A; Heinrich, N; Ntinginya, N E; Bhatt, N; Davies, G R; Jani, I V; McHugh, T D; Kibiki, G; Hoelscher, M; Gillespie, S H

    2016-08-01

    The World Health Organization's 2035 vision is to reduce tuberculosis (TB) associated mortality by 95%. While low-burden, well-equipped industrialised economies can expect to see this goal achieved, it is challenging in the low- and middle-income countries that bear the highest burden of TB. Inadequate diagnosis leads to inappropriate treatment and poor clinical outcomes. The roll-out of the Xpert(®) MTB/RIF assay has demonstrated that molecular diagnostics can produce rapid diagnosis and treatment initiation. Strong molecular services are still limited to regional or national centres. The delay in implementation is due partly to resources, and partly to the suggestion that such techniques are too challenging for widespread implementation. We have successfully implemented a molecular tool for rapid monitoring of patient treatment response to anti-tuberculosis treatment in three high TB burden countries in Africa. We discuss here the challenges facing TB diagnosis and treatment monitoring, and draw from our experience in establishing molecular treatment monitoring platforms to provide practical insights into successful optimisation of molecular diagnostic capacity in resource-constrained, high TB burden settings. We recommend a holistic health system-wide approach for molecular diagnostic capacity development, addressing human resource training, institutional capacity development, streamlined procurement systems, and engagement with the public, policy makers and implementers of TB control programmes.

  15. Optimisation of spray drying process conditions for sugar nanoporous microparticles (NPMPs) intended for inhalation.

    PubMed

    Amaro, Maria Inês; Tajber, Lidia; Corrigan, Owen I; Healy, Anne Marie

    2011-12-12

    The present study investigated the effect of the operating parameters of a laboratory spray dryer on powder characteristics, in order to optimise the production of trehalose and raffinose powders intended to be used as carriers of biomolecules for inhalation. The sugars were spray dried from 80:20 methanol:n-butyl acetate (v/v) solutions using a Büchi Mini Spray dryer B-290. A 2^4 factorial design of experiments (DOE) was undertaken. The process parameters studied were inlet temperature, gas flow rate, feed solution flow rate (pump setting) and feed concentration. The resulting powders were characterised in terms of yield, particle size (PS), residual solvent content (RSC) and outlet temperature. An additional outcome evaluated was the specific surface area (SSA) (by BET gas adsorption), and a relation between SSA and the in vitro deposition of the sugar NPMP powders was also investigated. The DOE resulted in well-fitted models. The most significant factors affecting the characteristics of the prepared NPMPs, at a 95% confidence interval, were gas flow rate (affecting yield, PS and SSA), pump setting (yield) and inlet temperature (RSC). Raffinose NPMPs presented better characteristics than trehalose NPMPs in terms of their use for inhalation, since particles with a larger surface area, resulting in a higher fine particle fraction, can be produced. Copyright © 2011 Elsevier B.V. All rights reserved.

  16. Fast Spatial Resolution Analysis of Quadratic Penalized Least-Squares Image Reconstruction With Separate Real and Imaginary Roughness Penalty: Application to fMRI.

    PubMed

    Olafsson, Valur T; Noll, Douglas C; Fessler, Jeffrey A

    2018-02-01

    Penalized least-squares iterative image reconstruction algorithms used for spatial resolution-limited imaging, such as functional magnetic resonance imaging (fMRI), commonly use a quadratic roughness penalty to regularize the reconstructed images. When used for complex-valued images, the conventional roughness penalty regularizes the real and imaginary parts equally. However, these imaging methods sometimes benefit from separate penalties for each part. The spatial smoothness from the roughness penalty on the reconstructed image is dictated by the regularization parameter(s). One method to set the parameter to a desired smoothness level is to evaluate the full width at half maximum of the reconstruction method's local impulse response. Previous work has shown that when using the conventional quadratic roughness penalty, one can approximate the local impulse response using an FFT-based calculation. However, that acceleration method cannot be applied directly for separate real and imaginary regularization. This paper proposes a fast and stable calculation for this case that also uses FFT-based calculations to approximate the local impulse responses of the real and imaginary parts. This approach is demonstrated with a quadratic image reconstruction of fMRI data that uses separate roughness penalties for the real and imaginary parts.
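
    For the conventional single-penalty case, the FFT-based (circulant) approximation of the local impulse response can be sketched as below; the smooth Gram spectrum, the first-order periodic penalty and the regularisation weight are illustrative assumptions, not the paper's fMRI model or its separate real/imaginary extension.

```python
import numpy as np

n, beta = 64, 16.0

# Hypothetical shift-invariant (circulant) model: a smooth low-pass spectrum stands in
# for the Gram operator A'A, and a first-order periodic difference penalty stands in for R.
fy, fx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
gram_spectrum = np.exp(-((fx ** 2 + fy ** 2) * (2 * np.pi * 2.0) ** 2) / 2)
penalty_spectrum = 4 - 2 * np.cos(2 * np.pi * fx) - 2 * np.cos(2 * np.pi * fy)

# local impulse response of the quadratically penalized estimator at the centre pixel:
# l = F^-1 { F(A'A e_j) / (F(A'A e_j) + beta * F(R e_j)) }
lir = np.real(np.fft.ifft2(gram_spectrum / (gram_spectrum + beta * penalty_spectrum)))
lir = np.fft.fftshift(lir) / lir.max()

profile = lir[n // 2, :]                      # central row through the impulse response
fwhm_px = np.count_nonzero(profile >= 0.5)    # crude FWHM estimate in pixels
print(fwhm_px)
```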

  17. Investigation of roughing machining simulation by using visual basic programming in NX CAM system

    NASA Astrophysics Data System (ADS)

    Hafiz Mohamad, Mohamad; Nafis Osman Zahid, Muhammed

    2018-03-01

    This paper outlines a simulation study investigating the characteristics of roughing machining simulation in 4th-axis milling processes by utilising Visual Basic programming in the NX CAM system. The selection and optimisation of the cutting orientation in rough milling operations is critical in 4th-axis machining. The main purpose of the roughing operation is to approximately shape the machined part into its finished form by removing the bulk of material from the workpiece. In this paper, the simulations are executed over a set of different cutting orientations to estimate the volume removed from the machined part; the orientation with the highest volume removal is taken as the optimum and chosen for the roughing operation. In order to run the simulation, customised software was developed to assist the routines. Operation build-up instructions in the NX CAM interface are translated into program code via the tools available in Visual Basic. The code is customised and equipped with decision-making tools to run and control the simulations, and permits integration with independent program files to execute specific operations. This paper discusses the simulation program and identifies optimum cutting orientations for roughing processes. The output of this study will broaden the simulation routines performed in NX CAM systems.

  18. Optimisation of logistics processes of energy grass collection

    NASA Astrophysics Data System (ADS)

    Bányai, Tamás.

    2010-05-01

    The collection of energy grass is a logistics-intensive process [1]. The optimal design and control of transportation and collection subprocesses is a critical point of the supply chain. To avoid decisions made solely on experience and intuition, the optimisation and analysis of collection processes based on mathematical models and methods is the scientifically sound way. Within the frame of this work, the author focuses on the optimisation possibilities of the collection processes, especially from the point of view of transportation and related warehousing operations. The optimisation methods developed in the literature [2] take into account the harvesting processes, county-specific yields, transportation distances, erosion constraints, machinery specifications, and other key variables, but the possibility of several collection points and multi-level collection was not taken into consideration. The possible areas of use of energy grass are very wide (energy generation, biogas and bio-alcohol production, paper and textile industry, industrial fibre material, foddering purposes, biological soil protection [3], etc.), so not only a single-level but also a multi-level collection system with several collection and production facilities has to be taken into consideration. The input parameters of the optimisation problem are the following: total amount of energy grass to be harvested in each region; specific facility costs of collection, warehousing and production units; specific costs of transportation resources; pre-scheduling of the harvesting process; specific transportation and warehousing costs; pre-scheduling of the processing of energy grass at each facility (exclusive warehousing). The model takes into consideration the following assumptions: (1) cooperative relations among processing and production facilities, (2) capacity constraints are not ignored, (3) the cost function of transportation is non-linear, (4) the drivers' conditions are ignored. The objective function of the optimisation is the maximisation of profit, i.e. the maximisation of the difference between revenue and cost; it trades off the income of the assigned transportation demands against the logistics costs. The constraints are the following: (1) the free capacity of the assigned transportation resource is more than the requested capacity of the transportation demand; (2) the calculated arrival time of the transportation resource at the harvesting place is not later than the requested arrival time; (3) the calculated arrival time of the transportation demand at the processing and production facility is not later than the requested arrival time; (4) one transportation demand is assigned to one transportation resource and one resource is assigned to one transportation demand. The decision variables of the optimisation problem are the set of scheduling variables and the assignment of resources to transportation demands. The evaluation parameters of the optimised system are the following: total costs of the collection process; utilisation of transportation resources and warehouses; efficiency of production and/or processing facilities. The multidimensional heuristic optimisation method is based on a genetic algorithm, while the routing sequence is optimised with an ant colony algorithm. The optimal routes are calculated with the aid of the ant colony algorithm as a subroutine of the global optimisation method, and the optimal assignment is given by the genetic algorithm. One important part of the mathematical method is the sensitivity analysis of the objective function, which shows the influence of the different input parameters. Acknowledgements This research was implemented within the frame of the project entitled "Development and operation of the Technology and Knowledge Transfer Centre of the University of Miskolc", with the support of the European Union and co-funding of the European Social Fund. References [1] P. R. Daniel: The Economics of Harvesting and Transporting Corn Stover for Conversion to Fuel Ethanol: A Case Study for Minnesota. University of Minnesota, Department of Applied Economics. 2006. http://ideas.repec.org/p/ags/umaesp/14213.html [2] T. G. Douglas, J. Brendan, D. Erin & V.-D. Becca: Energy and Chemicals from Native Grasses: Production, Transportation and Processing Technologies Considered in the Northern Great Plains. University of Minnesota, Department of Applied Economics. 2006. http://ideas.repec.org/p/ags/umaesp/13838.html [3] Homepage of energygrass. www.energiafu.hu

  19. Bed roughness of palaeo-ice streams: insights and implications for contemporary ice sheet dynamics

    NASA Astrophysics Data System (ADS)

    Falcini, Francesca; Rippin, David; Selby, Katherine; Krabbendam, Maarten

    2017-04-01

    Bed roughness is the vertical variation of elevation along a horizontal transect. It is an important control on ice stream location and dynamics, with a correspondingly important role in determining the behaviour of ice sheets. Previous studies of bed roughness have been limited to insights derived from Radio Echo Sounding (RES) profiles across parts of Antarctica and Greenland. Such an approach has been necessary due to the inaccessibility of the underlying bed. This approach has led to important insights, such as identifying a general link between smooth beds and fast ice flow, as well as rough beds and slow ice flow. However, these insights are mainly derived from relatively coarse datasets, so that links between roughness and flow are generalised and rather simplistic. Here, we explore the use of DTMs from the well-preserved footprints of palaeo-ice streams, coupled with high resolution models of palaeo-ice flow, as a tool for investigating basal controls on the behaviour of contemporary, active ice streams in much greater detail. Initially, artificial transects were set up across the Minch palaeo-ice stream (NW Scotland) to mimic RES flight lines from past studies in Antarctica. We then explored how increasing data-resolution impacted upon the roughness measurements that were derived. Our work on the Minch palaeo-ice stream indicates that different roughness signatures are associated with different glacial landforms, and we discuss the potential for using these insights to infer, from RES-based roughness measurements, the occurrence of particular landform assemblages that may exist beneath contemporary ice sheets.
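
    A minimal sketch of the roughness measure on a synthetic transect is given below: elevations are detrended in moving windows and the RMS of the residuals is taken as roughness, evaluated at a fine DTM posting and at a coarse spacing mimicking RES flight lines. The profile, window length and detrending order are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def transect_roughness(z, dx, window_m=1000.0, detrend_deg=1):
    """RMS roughness of an elevation transect: detrend each moving window with a
    low-order polynomial and return the standard deviation of the residuals."""
    n_win = max(int(window_m / dx), 4)
    rough = []
    for i in range(0, len(z) - n_win, n_win):
        seg = z[i:i + n_win]
        x = np.arange(n_win) * dx
        resid = seg - np.polyval(np.polyfit(x, seg, detrend_deg), x)
        rough.append(resid.std())
    return np.array(rough)

# synthetic profile standing in for a DTM transect across a palaeo-ice stream bed
x = np.arange(0, 50_000.0, 5.0)                       # 5 m posting, 50 km long
z = 20 * np.sin(2 * np.pi * x / 8000) + rng.normal(scale=1.5, size=x.size)

fine = transect_roughness(z, dx=5.0)                  # high-resolution DTM sampling
coarse = transect_roughness(z[::200], dx=1000.0)      # mimic ~1 km RES-style spacing
print(fine.mean(), coarse.mean())
```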

  20. CAMELOT: Computational-Analytical Multi-fidElity Low-thrust Optimisation Toolbox

    NASA Astrophysics Data System (ADS)

    Di Carlo, Marilena; Romero Martin, Juan Manuel; Vasile, Massimiliano

    2018-03-01

    Computational-Analytical Multi-fidElity Low-thrust Optimisation Toolbox (CAMELOT) is a toolbox for the fast preliminary design and optimisation of low-thrust trajectories. It solves highly complex combinatorial problems to plan multi-target missions characterised by long spirals including different perturbations. To do so, CAMELOT implements a novel multi-fidelity approach combining analytical surrogate modelling and accurate computational estimations of the mission cost. Decisions are then made using two optimisation engines included in the toolbox, a single-objective global optimiser, and a combinatorial optimisation algorithm. CAMELOT has been applied to a variety of case studies: from the design of interplanetary trajectories to the optimal de-orbiting of space debris and from the deployment of constellations to on-orbit servicing. In this paper, the main elements of CAMELOT are described and two examples, solved using the toolbox, are presented.

  1. Boundary element based multiresolution shape optimisation in electrostatics

    NASA Astrophysics Data System (ADS)

    Bandara, Kosala; Cirak, Fehmi; Of, Günther; Steinbach, Olaf; Zapletal, Jan

    2015-09-01

    We consider the shape optimisation of high-voltage devices subject to electrostatic field equations by combining fast boundary elements with multiresolution subdivision surfaces. The geometry of the domain is described with subdivision surfaces and different resolutions of the same geometry are used for optimisation and analysis. The primal and adjoint problems are discretised with the boundary element method using a sufficiently fine control mesh. For shape optimisation the geometry is updated starting from the coarsest control mesh with increasingly finer control meshes. The multiresolution approach effectively prevents the appearance of non-physical geometry oscillations in the optimised shapes. Moreover, there is no need for mesh regeneration or smoothing during the optimisation due to the absence of a volume mesh. We present several numerical experiments and one industrial application to demonstrate the robustness and versatility of the developed approach.

  2. Tail mean and related robust solution concepts

    NASA Astrophysics Data System (ADS)

    Ogryczak, Włodzimierz

    2014-01-01

    Robust optimisation might be viewed as a multicriteria optimisation problem where the objectives correspond to the scenarios, although their probabilities are unknown or imprecise. The simplest robust solution concept represents a conservative approach focused on optimisation of the worst-case scenario results. A softer concept allows one to optimise the tail mean, thus combining performances under multiple worst scenarios. We show that when considering robust models allowing the probabilities to vary only within given intervals, the tail mean represents the robust solution only for upper-bounded probabilities. For arbitrary intervals of probabilities the corresponding robust solution may be expressed by the optimisation of appropriately combined mean and tail mean criteria, thus remaining easily implementable with auxiliary linear inequalities. Moreover, we use the tail mean concept to develop linear programming implementable robust solution concepts related to risk-averse optimisation criteria.
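
    The sketch below shows how the tail mean (the mean of the k worst scenario outcomes) can be optimised with auxiliary linear inequalities, using a toy scenario matrix and SciPy's linprog; the data and tail size are invented, and the formulation is the standard one rather than necessarily the exact model of the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical scenario outcomes: rows are 6 equally likely scenarios, columns 3 options.
R = np.array([[3., 1., 2.],
              [1., 4., 0.],
              [2., 2., 2.],
              [0., 3., 4.],
              [4., 0., 1.],
              [2., 1., 3.]])
n, m = R.shape
k = 2                                     # size of the tail: mean of the 2 worst scenarios

# variables: x (m weights), t (1 auxiliary level), s (n shortfalls)
c = np.concatenate([np.zeros(m), [-1.0], np.full(n, 1.0 / k)])   # minimise -(t - mean shortfall)

# s_i >= t - R_i x   <=>   -R_i x + t - s_i <= 0
A_ub = np.hstack([-R, np.ones((n, 1)), -np.eye(n)])
b_ub = np.zeros(n)
A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1 + n))])        # weights sum to one
b_eq = np.array([1.0])
bounds = [(0, None)] * m + [(None, None)] + [(0, None)] * n

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x = res.x[:m]
print(x, np.sort(R @ x)[:k].mean())       # optimised weights and their tail mean
```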

  3. A CONCEPTUAL FRAMEWORK FOR MANAGING RADIATION DOSE TO PATIENTS IN DIAGNOSTIC RADIOLOGY USING REFERENCE DOSE LEVELS.

    PubMed

    Almén, Anja; Båth, Magnus

    2016-06-01

    The overall aim of the present work was to develop a conceptual framework for managing radiation dose in diagnostic radiology with the intention of supporting optimisation. An optimisation process was first derived. The framework for managing radiation dose, based on the derived optimisation process, was then outlined. The optimisation process starts from four stages: providing equipment, establishing methodology, performing examinations and ensuring quality, and comprises a series of activities and actions at these stages. The current system of diagnostic reference levels is an activity in the last stage, ensuring quality. The system becomes a reactive activity that only to a certain extent engages the core activity of the radiology department, performing examinations. Three reference dose levels (possible, expected and established) were assigned to the three stages in the optimisation process, excluding ensuring quality. A reasonably achievable dose range is also derived, indicating an acceptable deviation from the established dose level. A reasonable radiation dose for a single patient is within this range. The suggested framework for managing radiation dose should be regarded as one part of the optimisation process. The optimisation process constitutes a variety of complementary activities, of which managing radiation dose is only one part. This emphasises the need to take a holistic approach, integrating the optimisation process into different clinical activities. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  4. Basis for the development of sustainable optimisation indicators for activated sludge wastewater treatment plants in the Republic of Ireland.

    PubMed

    Gordon, G T; McCann, B P

    2015-01-01

    This paper describes the basis of a stakeholder-based sustainable optimisation indicator (SOI) system to be developed for small-to-medium sized activated sludge (AS) wastewater treatment plants (WwTPs) in the Republic of Ireland (ROI). Key technical publications relating to best practice plant operation, performance audits and optimisation, and indicator and benchmarking systems for wastewater services are identified. Optimisation studies were developed at a number of Irish AS WwTPs and key findings are presented. A national AS WwTP manager/operator survey was carried out to verify the applied operational findings and identify the key operator stakeholder requirements for this proposed SOI system. It was found that most plants require more consistent operational data-based decision-making, monitoring and communication structures to facilitate optimised, sustainable and continuous performance improvement. The applied optimisation and stakeholder consultation phases form the basis of the proposed stakeholder-based SOI system. This system will allow for continuous monitoring and rating of plant performance, facilitate optimised operation and encourage the prioritisation of performance improvement through tracking key operational metrics. Plant optimisation has become a major focus due to the transfer of all ROI water services to a national water utility from individual local authorities and the implementation of the EU Water Framework Directive.

  5. A meta-model based approach for rapid formability estimation of continuous fibre reinforced components

    NASA Astrophysics Data System (ADS)

    Zimmerling, Clemens; Dörr, Dominik; Henning, Frank; Kärger, Luise

    2018-05-01

    Due to their high mechanical performance, continuous fibre reinforced plastics (CoFRP) become increasingly important for load bearing structures. In many cases, manufacturing CoFRPs comprises a forming process of textiles. To predict and optimise the forming behaviour of a component, numerical simulations are applied. However, for maximum part quality, both the geometry and the process parameters must match in mutual regard, which in turn requires numerous numerically expensive optimisation iterations. In both textile and metal forming, a lot of research has focused on determining optimum process parameters, whilst regarding the geometry as invariable. In this work, a meta-model based approach on component level is proposed, that provides a rapid estimation of the formability for variable geometries based on pre-sampled, physics-based draping data. Initially, a geometry recognition algorithm scans the geometry and extracts a set of doubly-curved regions with relevant geometry parameters. If the relevant parameter space is not part of an underlying data base, additional samples via Finite-Element draping simulations are drawn according to a suitable design-table for computer experiments. Time saving parallel runs of the physical simulations accelerate the data acquisition. Ultimately, a Gaussian Regression meta-model is built from the data base. The method is demonstrated on a box-shaped generic structure. The predicted results are in good agreement with physics-based draping simulations. Since evaluations of the established meta-model are numerically inexpensive, any further design exploration (e.g. robustness analysis or design optimisation) can be performed in short time. It is expected that the proposed method also offers great potential for future applications along virtual process chains: For each process step along the chain, a meta-model can be set-up to predict the impact of design variations on manufacturability and part performance. Thus, the method is considered to facilitate a lean and economic part and process design under consideration of manufacturing effects.
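
    A minimal sketch of the meta-model step is shown below: a Gaussian process regressor (scikit-learn) is fitted to pre-sampled geometry-parameter/formability pairs and then queried cheaply for a new variant; the two geometry parameters, the stand-in 'draping simulation' and the kernel settings are assumptions for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(4)

# Hypothetical pre-sampled draping data: two geometry parameters of a doubly-curved
# region (normalised corner radius, depth) versus a shear-angle-based formability score.
X = rng.uniform(0.0, 1.0, size=(60, 2))

def draping_simulation(p):          # stand-in for the expensive FE draping result
    return 40 * p[:, 0] + 25 * p[:, 1] ** 2 - 15 * p[:, 0] * p[:, 1] + rng.normal(0, 0.5, len(p))

y = draping_simulation(X)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.3),
                              normalize_y=True, alpha=1e-2)
gp.fit(X, y)

# cheap meta-model evaluation for a new geometry variant, with predictive uncertainty
p_new = np.array([[0.35, 0.70]])
mean, std = gp.predict(p_new, return_std=True)
print(mean, std)
```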

  6. Development and optimisation of atorvastatin calcium loaded self-nanoemulsifying drug delivery system (SNEDDS) for enhancing oral bioavailability: in vitro and in vivo evaluation.

    PubMed

    Kassem, Abdulsalam M; Ibrahim, Hany M; Samy, Ahmed M

    2017-05-01

    The objective of this study was to develop and optimise a self-nanoemulsifying drug delivery system (SNEDDS) of atorvastatin calcium (ATC) for improving the dissolution rate and eventually the oral bioavailability. Ternary phase diagrams were constructed on the basis of solubility and emulsification studies. The composition of ATC-SNEDDS was optimised using the Box-Behnken optimisation design. Optimised ATC-SNEDDS was characterised for various physicochemical properties. Pharmacokinetic, pharmacodynamic and histological studies were performed in rats. Optimised ATC-SNEDDS resulted in a droplet size of 5.66 nm, zeta potential of -19.52 mV, t90 of 5.43 min and completely released ATC within 30 min irrespective of the pH of the medium. The area under the curve of optimised ATC-SNEDDS in rats was 2.34-fold higher than that of the ATC suspension. Pharmacodynamic studies revealed a significant reduction in serum lipids of rats with fatty liver. Photomicrographs showed improvement in hepatocyte structure. In this study, we confirmed that ATC-SNEDDS would be a promising approach for improving the oral bioavailability of ATC.

  7. Selection of an evaluation index for water ecological civilizations of water-shortage cities based on the grey rough set

    NASA Astrophysics Data System (ADS)

    Zhang, X. Y.; Zhu, J. W.; Xie, J. C.; Liu, J. L.; Jiang, R. G.

    2017-08-01

    According to the characteristics and existing problems of the water ecological civilization of water-shortage cities, an evaluation index system for water ecological civilization was established using a grey rough set. Covering the six aspects of water resources, water security, water environment, water ecology, water culture and water management, this study established the preliminary frame of the evaluation system, comprising 28 items, and used rough set theory to undertake optimal selection of the index system. Grey correlation theory was then used to derive the weightings, so that an integrated evaluation index system for the water ecological civilization of water-shortage cities could be constituted. Xi’an City was taken as an example, for which the results showed that 20 evaluation indexes were obtained after optimal selection from the preliminary framework. The most influential indices were the water-resource category index and the water environment category index. The leakage rate of the public water supply pipe network, the disposal, treatment and usage rate of polluted water, the urban water surface area ratio, and the water quality of the main rivers are also important. It was demonstrated that the evaluation index could provide an objective reflection of regional features and key points for the development of water ecological civilization in cities with scarce water resources, and the application example is considered to have wide applicability.

  8. Prediction of protein interaction hot spots using rough set-based multiple criteria linear programming.

    PubMed

    Chen, Ruoying; Zhang, Zhiwang; Wu, Di; Zhang, Peng; Zhang, Xinyang; Wang, Yong; Shi, Yong

    2011-01-21

    Protein-protein interactions are fundamentally important in many biological processes and there is a pressing need to understand their principles. Mutagenesis studies have found that only a small fraction of surface residues, known as hot spots, are responsible for the physical binding in protein complexes. However, revealing hot spots by mutagenesis experiments is usually time-consuming and expensive. In order to complement the experimental efforts, we propose a new computational approach in this paper to predict hot spots. Our method, Rough Set-based Multiple Criteria Linear Programming (RS-MCLP), integrates rough set theory and multiple criteria linear programming to choose dominant features and computationally predict hot spots. Our approach is benchmarked on a dataset of 904 alanine-mutated residues and the results show that our RS-MCLP method performs better than other methods, e.g., MCLP, Decision Tree, Bayes Net, and the existing HotSprint database. In addition, we reveal several biological insights based on our analysis. We find that four features (the change of accessible surface area, percentage of the change of accessible surface area, size of a residue, and atomic contacts) are critical in predicting hot spots. Furthermore, we find that three residues (Tyr, Trp, and Phe) are abundant in hot spots through analyzing the distribution of amino acids. Copyright © 2010 Elsevier Ltd. All rights reserved.
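
    The rough-set half of such a pipeline ranks features by how much each one contributes to the dependency between the condition attributes and the hot-spot label. The toy decision table below is purely illustrative and is not the authors' RS-MCLP implementation; it only sketches the standard dependency and attribute-significance calculation.

      # Sketch of rough-set dependency and attribute significance, the feature-
      # selection idea that RS-MCLP builds on (toy data, not the authors' pipeline).
      from itertools import groupby

      def partition(rows, attrs):
          """Group row indices into equivalence classes by the chosen attributes."""
          key = lambda i: tuple(rows[i][a] for a in attrs)
          idx = sorted(range(len(rows)), key=key)
          return [set(g) for _, g in groupby(idx, key=key)]

      def dependency(rows, cond_attrs, dec_attr):
          """gamma(C, D): fraction of rows in the positive region of the decision."""
          dec_classes = partition(rows, [dec_attr])
          pos = sum(len(block) for block in partition(rows, cond_attrs)
                    if any(block <= d for d in dec_classes))
          return pos / len(rows)

      # Toy decision table: each row is {feature: value, ..., 'hot_spot': label}.
      table = [
          {'dASA': 'high', 'size': 'large', 'contacts': 'many', 'hot_spot': 1},
          {'dASA': 'high', 'size': 'small', 'contacts': 'many', 'hot_spot': 1},
          {'dASA': 'low',  'size': 'large', 'contacts': 'few',  'hot_spot': 0},
          {'dASA': 'low',  'size': 'small', 'contacts': 'few',  'hot_spot': 0},
          {'dASA': 'low',  'size': 'small', 'contacts': 'many', 'hot_spot': 1},
      ]

      full = ['dASA', 'size', 'contacts']
      gamma_full = dependency(table, full, 'hot_spot')
      for a in full:
          reduced = [c for c in full if c != a]
          sig = gamma_full - dependency(table, reduced, 'hot_spot')
          print(f"significance of {a}: {sig:.2f}")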

  9. Application of Three Existing Stope Boundary Optimisation Methods in an Operating Underground Mine

    NASA Astrophysics Data System (ADS)

    Erdogan, Gamze; Yavuz, Mahmut

    2017-12-01

    The underground mine planning and design optimisation process has received little attention because of the complexity and variability of problems in underground mines. Although a number of optimisation studies and software tools are available, and some of them in particular have been implemented effectively to determine the ultimate-pit limits in open pit mines, there is still a lack of studies on the optimisation of ultimate stope boundaries in underground mines. The proposed approaches for this purpose aim at maximizing the economic profit by selecting the best possible layout under operational, technical and physical constraints. In this paper, three existing heuristic techniques, the Floating Stope Algorithm, the Maximum Value Algorithm and the Mineable Shape Optimiser (MSO), are examined for optimisation of the stope layout in a case study. Each technique is assessed in terms of applicability, algorithm capabilities and limitations, considering the underground mine planning challenges. Finally, the results are evaluated and compared.

  10. Definitions and Omissions of Heroism

    ERIC Educational Resources Information Center

    Martens, Jeffrey W.

    2005-01-01

    This article presents comments on "The Heroism of Women and Men" by Selwyn W. Becker and Alice H. Eagly. Their article specifically addressed the "cultural association of heroism with men and masculinity . . . in natural settings." Becker and Eagly evidenced roughly equivalent rates of heroism by women and men in a variety of settings. However,…

  11. Intrusion detection using rough set classification.

    PubMed

    Zhang, Lian-hua; Zhang, Guan-hua; Zhang, Jie; Bai, Ying-cai

    2004-09-01

    Recently, machine learning-based intrusion detection approaches have been the subject of extensive research because they can detect both misuse and anomalies. In this paper, rough set classification (RSC), a modern learning algorithm, is used to rank the features extracted for detecting intrusions and to generate intrusion detection models. Feature ranking is a critical step when building the model. RSC performs feature ranking before generating rules, converting the ranking into a minimal hitting set problem that is addressed using a genetic algorithm (GA). In classical approaches using the Support Vector Machine (SVM), this is done by executing many iterations, each of which removes one useless feature; compared with those methods, our method avoids many iterations. In addition, a hybrid genetic algorithm is proposed to increase the convergence speed and decrease the training time of RSC. The models generated by RSC take the form of "IF-THEN" rules, which have the advantage of interpretability. Tests and comparison of RSC with SVM on DARPA benchmark data showed that for Probe and DoS attacks both RSC and SVM yielded highly accurate results (greater than 99% accuracy on the testing set).

  12. Small Satellite Formations for Distributed Surveillance: System Design and Optimal Control Considerations (Formations de petits satellites pour une surveillance distribuee: Considerations relatives la conception de systeme et a l’optimisation de commandes)

    DTIC Science & Technology

    2009-04-01


  13. Structure activity studies of an analgesic drug tapentadol hydrochloride by spectroscopic and quantum chemical methods

    NASA Astrophysics Data System (ADS)

    Arjunan, V.; Santhanam, R.; Marchewka, M. K.; Mohan, S.; Yang, Haifeng

    2015-11-01

    Tapentadol is a novel opioid pain-relieving drug with a dual mechanism of action, having a potency between those of morphine and tramadol. Quantum chemical calculations have been carried out for tapentadol hydrochloride (TAP.Cl) to determine its properties. The geometry was optimised and the structural properties of the compound were determined from the optimised geometry by the B3LYP method using the 6-311++G(d,p), 6-31G(d,p) and cc-pVDZ basis sets. FT-IR and FT-Raman spectra were recorded in the solid phase in the regions 4000-400 and 4000-100 cm-1, respectively. Frontier molecular orbital energies, the LUMO-HOMO energy gap, ionisation potential, electron affinity, electronegativity, hardness and chemical potential are also calculated. The stability of the molecule arising from hyperconjugative interactions and charge delocalisation has been analysed using NBO analysis. The 1H and 13C nuclear magnetic resonance chemical shifts of the molecule are also analysed.

  14. Novel Approach on the Optimisation of Mid-Course Corrections Along Interplanetary Trajectories

    NASA Astrophysics Data System (ADS)

    Iorfida, Elisabetta; Palmer, Phil; Roberts, Mark

    The primer vector theory, first proposed by Lawden, defines a set of necessary conditions to characterise whether an impulsive thrust trajectory is optimal with respect to propellant usage, within a two-body problem context. If the conditions are not satisfied, one or more potential intermediate impulses are performed along the transfer arc in order to lower the overall cost. The method is based on the propagation of the state transition matrix and on the solution of a boundary value problem, which leads to considerable mathematical and computational complexity. In this paper, a different approach is introduced. It is based on a polar-coordinate transformation of the primer vector which decouples its in-plane and out-of-plane components. The out-of-plane component is solved analytically, while for the in-plane components a Hamiltonian approximation is made. The novel procedure reduces the mathematical complexity and the computational cost of Lawden's problem and also gives a different perspective on the optimisation of a transfer trajectory.

  15. Optimising Ambient Setting Bayer Derived Fly Ash Geopolymers

    PubMed Central

    Jamieson, Evan; Kealley, Catherine S.; van Riessen, Arie; Hart, Robert D.

    2016-01-01

    The Bayer process utilises high concentrations of caustic and elevated temperature to liberate alumina from bauxite, for the production of aluminium and other chemicals. Within Australia, this process results in 40 million tonnes of mineral residues (Red mud) each year. Over the same period, the energy production sector will produce 14 million tonnes of coal combustion products (Fly ash). Both industrial residues require impoundment storage, yet combining some of these components can produce geopolymers, an alternative to cement. Geopolymers derived from Bayer liquor and fly ash have been made successfully with a compressive strength in excess of 40 MPa after oven curing. However, any product from these industries would require large volume applications with robust operational conditions to maximise utilisation. To facilitate potential unconfined large-scale production, Bayer derived fly ash geopolymers have been optimised to achieve ambient curing. Fly ash from two different power stations have been successfully trialled showing the versatility of the Bayer liquor-ash combination for making geopolymers. PMID:28773513

  16. Optimising Ambient Setting Bayer Derived Fly Ash Geopolymers.

    PubMed

    Jamieson, Evan; Kealley, Catherine S; van Riessen, Arie; Hart, Robert D

    2016-05-19

    The Bayer process utilises high concentrations of caustic and elevated temperature to liberate alumina from bauxite, for the production of aluminium and other chemicals. Within Australia, this process results in 40 million tonnes of mineral residues (Red mud) each year. Over the same period, the energy production sector will produce 14 million tonnes of coal combustion products (Fly ash). Both industrial residues require impoundment storage, yet combining some of these components can produce geopolymers, an alternative to cement. Geopolymers derived from Bayer liquor and fly ash have been made successfully with a compressive strength in excess of 40 MPa after oven curing. However, any product from these industries would require large volume applications with robust operational conditions to maximise utilisation. To facilitate potential unconfined large-scale production, Bayer derived fly ash geopolymers have been optimised to achieve ambient curing. Fly ash from two different power stations have been successfully trialled showing the versatility of the Bayer liquor-ash combination for making geopolymers.

  17. Load optimised piezoelectric generator for powering battery-less TPMS

    NASA Astrophysics Data System (ADS)

    Blažević, D.; Kamenar, E.; Zelenika, S.

    2013-05-01

    The design of a piezoelectric device aimed at harvesting the kinetic energy of random vibrations on a vehicle's wheel is presented. The harvester is optimised for powering a Tire Pressure Monitoring System (TPMS). On-road experiments are performed in order to measure the frequencies and amplitudes of the wheels' vibrations; it is thus determined that the highest amplitudes occur in an aperiodic manner. Initial tests of the battery-less TPMS are performed in laboratory conditions, where tuning and system set-up optimisation are achieved. The energy obtained from the piezoelectric bimorph is managed by control electronics which convert the AC voltage to DC and condition the output voltage to make it compatible with the load (i.e. sensor electronics and transmitter). The control electronics also manage the sleep/measure/transmit cycles so that the harvested energy is used efficiently. The system is finally tested in real on-road conditions, successfully powering the pressure sensor and transmitting the data to a receiver in the car cockpit.

  18. The faint intergalactic-medium red-shifted emission balloon: future UV observations with EMCCDs

    NASA Astrophysics Data System (ADS)

    Kyne, Gillian; Hamden, Erika T.; Lingner, Nicole; Morrissey, Patrick; Nikzad, Shouleh; Martin, D. Christopher

    2016-08-01

    We present the latest developments in our joint NASA/CNES suborbital project. This project is a balloon-borne UV multi-object spectrograph, which has been designed to detect faint emission from the circumgalactic medium (CGM) around low redshift galaxies. One major change from FIREBall-1 has been the use of a delta-doped Electron Multiplying CCD (EMCCD). EMCCDs can be used in photon-counting (PC) mode to achieve extremely low readout noise (< 1 e-). Our testing initially focused on reducing clock-induced-charge (CIC) through wave shaping and well depth optimisation with the CCD Controller for Counting Photons (CCCP) from Nüvü. This optimisation also includes methods for reducing dark current, via cooling and substrate voltage adjustment. We present results of laboratory noise measurements, including dark current. Furthermore, we will briefly present some initial results from our first set of on-sky observations using a delta-doped EMCCD on the 200 inch telescope at Palomar using the Palomar Cosmic Web Imager (PCWI).

  19. Vibrational spectra and ab initio analysis of tert-butyl, trimethylsilyl, and trimethylgermyl derivatives of 3,3-dimethyl cyclopropene V. 3,3-Dimethyl-1-(trimethylgermyl)cyclopropene

    NASA Astrophysics Data System (ADS)

    De Maré, G. R.; Panchenko, Yu. N.; Abramenkov, A. V.; Baird, M. S.; Tverezovsky, V. V.; Nizovtsev, A. V.; Bolesov, I. G.

    2004-02-01

    3,3-Dimethyl-1-(trimethylgermyl)cyclopropene (I) was synthesised using a standard procedure. The IR and Raman spectra of I in the liquid phase were measured. The molecular geometry of I was optimised completely at the HF/6-31G* level. The HF/6-31G*//HF/6-31G* force field was calculated and scaled using the set of scale factors transferred from those determined previously for scaling the theoretical force fields of 3,3-dimethylbutene-1 and 1-methyl-, 1,2-dimethyl-, and 3,3-dimethylcyclopropene. The assignments of the observed vibrational bands were performed using the theoretical frequencies calculated from the scaled HF/6-31G*//HF/6-31G* force field and the ab initio values of the IR intensities, Raman cross-sections and depolarisation ratios. The theoretical spectra are given. The completely optimised structural parameters of I and its vibrational frequencies are compared with corresponding data of related molecules.

  20. Synthesis, vibrational, NMR, quantum chemical and structure-activity relation studies of 2-hydroxy-4-methoxyacetophenone.

    PubMed

    Arjunan, V; Devi, L; Subbalakshmi, R; Rani, T; Mohan, S

    2014-09-15

    The stable geometry of 2-hydroxy-4-methoxyacetophenone is optimised by DFT/B3LYP method with 6-311++G(∗∗) and cc-pVTZ basis sets. The structural parameters, thermodynamic properties and vibrational frequencies of the optimised geometry have been determined. The effects of substituents (hydroxyl, methoxy and acetyl groups) on the benzene ring vibrational frequencies are analysed. The vibrational frequencies of the fundamental modes of 2-hydroxy-4-methoxyacetophenone have been precisely assigned and analysed and the theoretical results are compared with the experimental vibrations. 1H and 13C NMR isotropic chemical shifts are calculated and assignments made are compared with the experimental values. The energies of important MO's, the total electron density and electrostatic potential of the compound are determined. Various reactivity and selectivity descriptors such as chemical hardness, chemical potential, softness, electrophilicity, nucleophilicity and the appropriate local quantities are calculated. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. Rough Set Theory based prognostication of life expectancy for terminally ill patients.

    PubMed

    Gil-Herrera, Eleazar; Yalcin, Ali; Tsalatsanis, Athanasios; Barnes, Laura E; Djulbegovic, Benjamin

    2011-01-01

    We present a novel knowledge discovery methodology that relies on Rough Set Theory to predict the life expectancy of terminally ill patients in an effort to improve the hospice referral process. Life expectancy prognostication is particularly valuable for terminally ill patients since it enables them and their families to initiate end-of-life discussions and choose the most desired management strategy for the remainder of their lives. We utilize retrospective data from 9105 patients to demonstrate the design and implementation details of a series of classifiers developed to identify potential hospice candidates. Preliminary results confirm the efficacy of the proposed methodology. We envision our work as a part of a comprehensive decision support system designed to assist terminally ill patients in making end-of-life care decisions.

  2. Access Selection Algorithm of Heterogeneous Wireless Networks for Smart Distribution Grid Based on Entropy-Weight and Rough Set

    NASA Astrophysics Data System (ADS)

    Xiang, Min; Qu, Qinqin; Chen, Cheng; Tian, Li; Zeng, Lingkang

    2017-11-01

    To improve the reliability of communication services in the smart distribution grid (SDG), an access selection algorithm for heterogeneous wireless networks based on dynamic network status and different service types is proposed. The network performance index values are obtained in real time by a multimode terminal and the variation trend of the index values is analyzed with a growth matrix. The index weights are calculated by the entropy-weight method and then modified by a rough set to obtain the final weights. Grey relational analysis is then used to rank the candidate networks and select the optimum communication network. Simulation results show that the proposed algorithm can effectively implement dynamic access selection in the heterogeneous wireless networks of an SDG and reduce the network blocking probability.
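
    The entropy-weight step can be sketched in a few lines: indices that vary more across the candidate networks receive larger weights. The index matrix below is illustrative only, and the rough-set correction of the weights described in the paper is not reproduced.

      # Sketch of the entropy-weight step for network index weighting (illustrative
      # index matrix; the rough-set modification applied in the paper is omitted).
      import numpy as np

      # Rows = candidate networks, columns = performance indices
      # (e.g. bandwidth, delay, jitter, packet loss), already measured.
      X = np.array([
          [20.0, 40.0, 5.0, 0.02],
          [10.0, 80.0, 9.0, 0.05],
          [ 5.0, 20.0, 3.0, 0.01],
      ])

      # Normalise each column to [0, 1]; all indices are treated as "benefit" type
      # here for brevity (cost-type indices such as delay would be inverted first).
      P = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
      P = P / P.sum(axis=0)                          # column-wise proportions

      n = X.shape[0]
      plogp = np.where(P > 0, P * np.log(P, where=(P > 0)), 0.0)
      entropy = -plogp.sum(axis=0) / np.log(n)       # entropy of each index
      weights = (1 - entropy) / (1 - entropy).sum()  # higher dispersion -> higher weight
      print("entropy weights:", np.round(weights, 3))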

  3. Allowable SEM noise for unbiased LER measurement

    NASA Astrophysics Data System (ADS)

    Papavieros, George; Constantoudis, Vassilios; Gogolides, Evangelos

    2018-03-01

    Recently, a novel method for the calculation of unbiased Line Edge Roughness (LER) based on Power Spectral Density (PSD) analysis has been proposed. In this paper, an alternative method utilizing the Height-Height Correlation Function (HHCF) of edges is first discussed and investigated. The HHCF-based method enables the unbiased determination of the whole triplet of LER parameters, including, besides the rms, the correlation length and the roughness exponent. The key to both methods is the sensitivity of the PSD and HHCF to noise at high frequencies and short distances, respectively. Secondly, we elaborate a testbed of synthesized SEM images with controlled LER and noise to justify the effectiveness of the proposed unbiased methods. Our main objective is to find the boundaries, in terms of noise levels and roughness characteristics, within which the method remains reliable, i.e. the maximum amount of noise for which the output results match the controlled, known inputs. At the same time, we also set the extremes of the roughness parameters for which the methods retain their accuracy.
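
    As a rough illustration of the HHCF idea (not the authors' unbiasing procedure), the sketch below computes G(r) = <(e(x+r) - e(x))^2> for a synthetic noisy edge; the plateau of G(r) approaches twice the squared rms roughness, and white SEM-like noise adds a constant offset that the unbiasing step must remove.

      # Height-height correlation function (HHCF) of a single synthetic line edge.
      import numpy as np

      def hhcf(edge, max_lag=None):
          """G(r) = <(e(x + r) - e(x))^2> for a 1-D edge profile (pixel units)."""
          edge = np.asarray(edge, dtype=float) - np.mean(edge)
          max_lag = max_lag or len(edge) // 4
          return np.array([np.mean((edge[r:] - edge[:-r]) ** 2)
                           for r in range(1, max_lag + 1)])

      rng = np.random.default_rng(1)
      n = 2048
      rho = np.exp(-1.0 / 50.0)                  # correlation length ~ 50 px
      true_edge = np.zeros(n)
      for i in range(1, n):                      # self-correlated roughness (AR(1))
          true_edge[i] = rho * true_edge[i - 1] + rng.normal(0, 0.3)
      noisy_edge = true_edge + rng.normal(0, 0.5, n)   # white SEM-like noise

      G = hhcf(noisy_edge)
      plateau = G[-100:].mean()                  # approaches 2*(sigma_LER^2 + sigma_noise^2)
      print(f"biased rms roughness: {np.sqrt(0.5 * plateau):.2f} px")
      # Unbiased estimate: model or subtract the constant 2*sigma_noise^2 offset
      # before reading off sigma, the correlation length and the roughness exponent.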

  4. Set-membership fault detection under noisy environment with application to the detection of abnormal aircraft control surface positions

    NASA Astrophysics Data System (ADS)

    El Houda Thabet, Rihab; Combastel, Christophe; Raïssi, Tarek; Zolghadri, Ali

    2015-09-01

    The paper develops a set membership detection methodology which is applied to the detection of abnormal positions of aircraft control surfaces. Robust and early detection of such abnormal positions is an important issue for early system reconfiguration and overall optimisation of aircraft design. In order to improve fault sensitivity while ensuring a high level of robustness, the method combines a data-driven characterisation of noise and a model-driven approach based on interval prediction. The efficiency of the proposed methodology is illustrated through simulation results obtained based on data recorded in several flight scenarios of a highly representative aircraft benchmark.

  5. Effects of cutting parameters and machining environments on surface roughness in hard turning using design of experiment

    NASA Astrophysics Data System (ADS)

    Mia, Mozammel; Bashir, Mahmood Al; Dhar, Nikhil Ranjan

    2016-07-01

    Hard turning is gradually replacing the time-consuming conventional turning process, which is typically followed by grinding, by producing surface quality comparable to that of grinding. The hard-turned surface roughness depends on the cutting parameters, machining environment and tool insert configuration. In this article, the variation of the surface roughness of the produced surfaces with changes in tool insert configuration, use of coolant and cutting parameters (cutting speed, feed rate) has been investigated. This investigation was performed by machining AISI 1060 steel, hardened to 56 HRC by heat treatment, using coated carbide inserts under two different machining environments. The depth of cut, fluid pressure and material hardness were kept constant. A Design of Experiments (DOE) was performed to determine the number and combinations of the different cutting parameters. A full factorial analysis was performed to examine the effects of the main factors as well as the interaction effects of factors on surface roughness. A statistical analysis of variance (ANOVA) was employed to determine the combined effect of cutting parameters, environment and tool configuration. The result of this analysis reveals that environment has the most significant impact on surface roughness, followed by feed rate and tool configuration, respectively.
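
    A full-factorial fit plus ANOVA of this kind can be sketched as below; the factor levels and the response model are random stand-ins, not the measured AISI 1060 data, so the resulting F-values and p-values are only illustrative.

      # Sketch of a full-factorial fit and ANOVA for surface roughness (Ra).
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf
      from statsmodels.stats.anova import anova_lm
      from itertools import product

      rng = np.random.default_rng(6)
      levels = {
          "speed": [58, 81, 115],          # m/min (illustrative levels)
          "feed": [0.10, 0.14, 0.18],      # mm/rev
          "env": ["dry", "coolant"],
          "tool": ["SNMM", "SNMG"],
      }
      rows = []
      for s, f, e, t in product(*levels.values()):
          ra = (1.0 + 4.0 * f - 0.002 * s
                + (0.3 if e == "dry" else 0.0)
                + (0.2 if t == "SNMM" else 0.0)
                + rng.normal(0, 0.05))      # synthetic Ra response (micrometres)
          rows.append(dict(speed=s, feed=f, env=e, tool=t, Ra=ra))
      df = pd.DataFrame(rows)

      # Main effects for environment and tool, with a speed x feed interaction term.
      model = smf.ols("Ra ~ C(speed) * C(feed) + C(env) + C(tool)", data=df).fit()
      print(anova_lm(model, typ=2))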

  6. Effects of bio-inspired microscale roughness on macroscale flow structures

    NASA Astrophysics Data System (ADS)

    Bocanegra Evans, Humberto; Hamed, Ali M.; Gorumlu, Serdar; Doosttalab, Ali; Aksak, Burak; Chamorro, Leonardo P.; Castillo, Luciano

    2016-11-01

    The interaction between rough surfaces and flows is a complex physical situation that produces rich flow phenomena. While random roughness typically increases drag, properly engineered roughness patterns may produce positive results, e.g. the dimples on a golf ball. Here we present a set of PIV measurements in an index-matched facility of the effect of a bio-inspired surface that consists of an array of mushroom-shaped micro-pillars. The experiments are carried out, under fully wetted conditions, in a flow with an adverse pressure gradient, triggering flow separation. The introduction of the micro-pillars dramatically decreases the size of the recirculation bubble; the area with backflow is reduced by approximately 60%. This suggests a positive impact on the form drag generated by the fluid. Furthermore, a negligible effect is seen on the turbulence production terms. The micro-pillars affect the flow by generating low and high pressure perturbations at the interface between the bulk and roughness layer, in a fashion comparable to that of synthetic jets. The passive approach, however, facilitates the implementation of this coating. As the mechanism does not rely on surface hydrophobicity, it is well suited for underwater applications and its functionality should not degrade over time.

  7. Assessment of Ice Shape Roughness Using a Self-Organizing Map Approach

    NASA Technical Reports Server (NTRS)

    Mcclain, Stephen T.; Kreeger, Richard E.

    2013-01-01

    Self-organizing maps (SOMs) are neural-network techniques for representing noisy, multidimensional data aligned along a lower-dimensional and nonlinear manifold. For a large set of noisy data, each element of a finite set of codebook vectors is iteratively moved in the direction of the data closest to the winning codebook vector. Through successive iterations, the codebook vectors begin to align with the trends of the higher-dimensional data. Prior investigations of ice shapes have focused on using self-organizing maps to characterize mean ice forms. The Icing Research Branch has recently acquired a high-resolution three-dimensional scanner system capable of resolving ice shape surface roughness. A method is presented for the evaluation of surface roughness variations using high-resolution surface scans based on a self-organizing map representation of the mean ice shape. The new method is demonstrated for 1) an 18-in. NACA 23012 airfoil at 2° AOA just after the initial ice coverage of the leading 5% of the suction surface of the airfoil, 2) a 21-in. NACA 0012 at 0° AOA following coverage of the leading 10% of the airfoil surface, and 3) a cold-soaked 21-in. NACA 0012 airfoil without ice. The SOM method resulted in descriptions of the statistical coverage limits and a quantitative representation of the early stages of ice roughness formation on the airfoils. Limitations of the SOM method are explored, and the uncertainty limits of the method are investigated using the non-iced NACA 0012 airfoil measurements.
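
    The codebook-alignment idea can be illustrated with a minimal one-dimensional SOM fitted to two-dimensional points; the toy arc below stands in for a mean ice shape, and the residuals of the raw points about their winning nodes play the role of the roughness measure. This is an illustration of the technique only, not the NASA processing chain.

      # Minimal 1-D self-organising map: codebook vectors iteratively pulled toward
      # the data to trace its lower-dimensional trend (toy points, not ice-scan data).
      import numpy as np

      rng = np.random.default_rng(2)

      # Noisy points scattered about a curved "mean shape" in 2-D.
      t = rng.uniform(0, np.pi, 500)
      data = np.column_stack([np.cos(t), np.sin(t)]) + rng.normal(0, 0.05, (500, 2))

      n_nodes = 20
      codebook = rng.normal(0, 0.5, (n_nodes, 2))       # initial codebook vectors

      n_iter = 2000
      for it in range(n_iter):
          lr = 0.5 * (1 - it / n_iter)                  # decaying learning rate
          radius = max(1.0, n_nodes / 2 * (1 - it / n_iter))
          x = data[rng.integers(len(data))]
          winner = np.argmin(np.linalg.norm(codebook - x, axis=1))
          # Neighbourhood function along the 1-D map topology.
          d = np.abs(np.arange(n_nodes) - winner)
          h = np.exp(-(d ** 2) / (2 * radius ** 2))
          codebook += lr * h[:, None] * (x - codebook)

      # Codebook vectors now trace the arc; deviations of raw points from their
      # winning node give a roughness-like residual.
      residuals = np.min(np.linalg.norm(data[:, None, :] - codebook[None], axis=2), axis=1)
      print(f"mean residual about the fitted shape: {residuals.mean():.3f}")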

  8. Economic impact of optimising antiretroviral treatment in human immunodeficiency virus-infected adults with suppressed viral load in Spain, by implementing the grade A-1 evidence recommendations of the 2015 GESIDA/National AIDS Plan.

    PubMed

    Ribera, Esteban; Martínez-Sesmero, José Manuel; Sánchez-Rubio, Javier; Rubio, Rafael; Pasquau, Juan; Poveda, José Luis; Pérez-Mitru, Alejandro; Roldán, Celia; Hernández-Novoa, Beatriz

    2018-03-01

    The objective of this study is to estimate the economic impact associated with the optimisation of triple antiretroviral treatment (ART) in patients with undetectable viral load according to the recommendations from the GeSIDA/PNS (2015) Consensus and their applicability in Spanish clinical practice. A pharmacoeconomic model was developed based on data from a National Hospital Prescription Survey on ART (2014) and the A-I evidence recommendations for the optimisation of ART from the GeSIDA/PNS (2015) consensus. The optimisation model took into account the willingness to optimise a particular regimen and other assumptions, and the results were validated by an expert panel in HIV infection (Infectious Disease Specialists and Hospital Pharmacists). The analysis was conducted from the NHS perspective, considering the annual wholesale price and accounting for deductions stated in the RD-Law 8/2010 and the VAT. The expert panel selected six optimisation strategies, and estimated that 10,863 (13.4%) of the 80,859 patients in Spain currently on triple ART would be candidates to optimise their ART, leading to savings of €15.9M/year (2.4% of total triple ART drug cost). The most feasible strategies (>40% of patients who are candidates for optimisation, n=4,556) would be optimisations to ATV/r+3TC therapy. These would produce savings between €653 and €4,797 per patient per year depending on baseline triple ART. Implementation of the main optimisation strategies recommended in the GeSIDA/PNS (2015) Consensus into Spanish clinical practice would lead to considerable savings, especially those based on dual therapy with ATV/r+3TC, thus contributing to the control of pharmaceutical expenditure and NHS sustainability. Copyright © 2016 Elsevier España, S.L.U. and Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica. All rights reserved.

  9. A general structure-property relationship to predict the enthalpy of vaporisation at ambient temperatures.

    PubMed

    Oberg, T

    2007-01-01

    The vapour pressure is the most important property of an anthropogenic organic compound in determining its partitioning between the atmosphere and the other environmental media. The enthalpy of vaporisation quantifies the temperature dependence of the vapour pressure and its value around 298 K is needed for environmental modelling. The enthalpy of vaporisation can be determined by different experimental methods, but estimation methods are needed to extend the current database and several approaches are available from the literature. However, these methods have limitations, such as a need for other experimental results as input data, a limited applicability domain, a lack of domain definition, and a lack of predictive validation. Here we have attempted to develop a quantitative structure-property relationship (QSPR) that has general applicability and is thoroughly validated. Enthalpies of vaporisation at 298 K were collected from the literature for 1835 pure compounds. The three-dimensional (3D) structures were optimised and each compound was described by a set of computationally derived descriptors. The compounds were randomly assigned into a calibration set and a prediction set. Partial least squares regression (PLSR) was used to estimate a low-dimensional QSPR model with 12 latent variables. The predictive performance of this model, within the domain of application, was estimated at n=560, q2Ext=0.968 and s=0.028 (log transformed values). The QSPR model was subsequently applied to a database of 100,000+ structures, after a similar 3D optimisation and descriptor generation. Reliable predictions can be reported for compounds within the previously defined applicability domain.
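
    The calibration/external-prediction split and the external q² statistic reported above can be sketched with an off-the-shelf PLS regression; the descriptors below are random stand-ins for the computed 3-D descriptors, and 12 latent variables are used simply to mirror the reported model size.

      # Sketch of the PLSR calibration / external-prediction workflow.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(3)
      n_compounds, n_descriptors = 1835, 60
      X = rng.normal(size=(n_compounds, n_descriptors))
      coef = rng.normal(size=n_descriptors)
      y = X @ coef + rng.normal(0, 1.0, n_compounds)     # synthetic log(dHvap) values

      X_cal, X_pred, y_cal, y_pred = train_test_split(X, y, test_size=0.3, random_state=0)

      pls = PLSRegression(n_components=12).fit(X_cal, y_cal)
      y_hat = pls.predict(X_pred).ravel()

      # External predictive performance, q2_ext = 1 - PRESS / SS_tot(prediction set).
      press = np.sum((y_pred - y_hat) ** 2)
      ss_tot = np.sum((y_pred - y_pred.mean()) ** 2)
      q2_ext = 1 - press / ss_tot
      print(f"q2_ext = {q2_ext:.3f}")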

  10. Parameterized Spectral Bathymetric Roughness Using the Nonequispaced Fast Fourier Transform

    NASA Astrophysics Data System (ADS)

    Fabre, David Hanks

    The ocean and acoustic modeling community has specifically asked for roughness from bathymetry. An effort has been undertaken to provide what can be thought of as the high-frequency content of bathymetry. By contrast, the low-frequency content of bathymetry is the set of contours. The two-dimensional amplitude spectrum calculated with the nonequispaced fast Fourier transform (Kunis, 2006) is exploited as the statistic to provide several parameters of roughness following the method of Fox (1996). When an area is uniformly rough, it is termed isotropically rough. When an area exhibits lineation effects (like a trough or a ridge line in the bathymetry), the term anisotropically rough is used. A predominant spatial azimuth of lineation summarizes anisotropic roughness. The power-law model fit produces a roll-off parameter that also provides insight into the roughness of the area. These four parameters give rise to several derived parameters. Algorithmic accomplishments include reviving Fox's method (1985, 1996) and improving the method with the possibly geophysically more appropriate nonequispaced fast Fourier transform. A new composite parameter, simply the overall integral length of the nonlinear parameterizing function, is used to make within-dataset comparisons. A synthetic dataset and six multibeam datasets covering practically all depth regimes have been analyzed with the tools that have been developed. Data-specific contributions include possibly discovering an aspect-ratio isotropic cutoff level (less than 1.2) and showing a range of spectral fall-off values from about -0.5 for a sandy-bottomed Gulf of Mexico area to about -1.8 for a coral reef area just outside Saipan harbor. We also rank the targeted type of dataset, the best-resolution gridded datasets, from smoothest to roughest using a factor based on the kernel dimensions and a percentage from the windowing operation, all multiplied by the overall integration length.
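
    The roll-off parameter is essentially the log-log slope of the radially averaged amplitude spectrum. The sketch below fits that slope for a synthetic gridded patch using an ordinary FFT; the nonequispaced transform used in the dissertation for scattered soundings, and the full Fox-style parameterisation, are not reproduced.

      # Radially averaged spectrum of a synthetic gridded patch and its power-law fit.
      import numpy as np

      rng = np.random.default_rng(7)
      n = 256
      kx = np.fft.fftfreq(n)
      kr = np.sqrt(kx[:, None] ** 2 + kx[None, :] ** 2)
      kr[0, 0] = kr[0, 1]                        # avoid division by zero at DC
      # Synthetic "bathymetry": white noise shaped to a power-law spectrum.
      Z = np.fft.fft2(rng.normal(size=(n, n))) / kr ** 1.2
      z = np.real(np.fft.ifft2(Z))

      amp = np.abs(np.fft.fft2(z))               # 2-D amplitude spectrum
      k, a = kr.ravel(), amp.ravel()
      bins = np.logspace(np.log10(k.min()), np.log10(0.5), 20)
      idx = np.digitize(k, bins)
      centres = np.sqrt(bins[:-1] * bins[1:])
      radial = np.array([a[idx == i].mean() if np.any(idx == i) else np.nan
                         for i in range(1, len(bins))])
      good = np.isfinite(radial)

      slope, _ = np.polyfit(np.log10(centres[good]), np.log10(radial[good]), 1)
      print(f"fitted spectral roll-off (log-log slope): {slope:.2f}")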

  11. Understanding the Models of Community Hospital rehabilitation Activity (MoCHA): a mixed-methods study

    PubMed Central

    Gladman, John; Buckell, John; Young, John; Smith, Andrew; Hulme, Clare; Saggu, Satti; Godfrey, Mary; Enderby, Pam; Teale, Elizabeth; Longo, Roberto; Gannon, Brenda; Holditch, Claire; Eardley, Heather; Tucker, Helen

    2017-01-01

    Introduction: To understand the variation in performance between community hospitals, our objectives are: to measure the relative performance (cost efficiency) of rehabilitation services in community hospitals; to identify the characteristics of community hospital rehabilitation that optimise performance; to investigate the current impact of community hospital inpatient rehabilitation for older people on secondary care and the potential impact if community hospital rehabilitation was optimised to best practice nationally; to examine the relationship between the configuration of intermediate care and secondary care bed use; and to develop toolkits for commissioners and community hospital providers to optimise performance. Methods and analysis: 4 linked studies will be performed. Study 1: cost efficiency modelling will apply econometric techniques to data sets from the National Health Service (NHS) Benchmarking Network surveys of community hospital and intermediate care. This will identify community hospitals' performance and estimate the gap between high and low performers. Analyses will determine the potential impact if the performance of all community hospitals nationally was optimised to best performance, and examine the association between community hospital configuration and secondary care bed use. Study 2: a national community hospital survey gathering detailed cost data and efficiency variables will be performed. Study 3: in-depth case studies of 3 community hospitals, 2 high and 1 low performing, will be undertaken. Case studies will gather routine hospital and local health economy data. Ward culture will be surveyed. Content and delivery of treatment will be observed. Patients and staff will be interviewed. Study 4: co-designed web-based quality improvement toolkits for commissioners and providers will be developed, including indicators of performance and the gap between local and best community hospitals performance. Ethics and dissemination: Publications will be in peer-reviewed journals, reports will be distributed through stakeholder organisations. Ethical approval was obtained from the Bradford Research Ethics Committee (reference: 15/YH/0062). PMID:28242766

  12. Optimisation of the hybrid renewable energy system by HOMER, PSO and CPSO for the study area

    NASA Astrophysics Data System (ADS)

    Khare, Vikas; Nema, Savita; Baredar, Prashant

    2017-04-01

    This study is based on simulation and optimisation of the renewable energy system of the police control room at Sagar in central India. To analyse this hybrid system, the meteorological data of solar insolation and hourly wind speeds at Sagar (longitude 78°45′ and latitude 23°50′) have been considered. The pattern of load consumption is studied and suitably modelled for optimisation of the hybrid energy system using the HOMER software. The results are compared with those of the particle swarm optimisation and chaotic particle swarm optimisation algorithms; the use of these two algorithms to optimise the hybrid system leads to higher quality results with faster convergence. Based on the optimisation results, it has been found that replacing conventional energy sources by the solar-wind hybrid renewable energy system is a feasible solution for supplying electric power as a stand-alone application at the police control room. The system is more environmentally friendly than a conventional diesel generator and reduces fuel cost by approximately 70-80% relative to it.
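
    The particle swarm step itself is compact; the sketch below minimises a deliberately simplified annual-cost function over two sizing variables (number of PV modules and wind turbines). The cost coefficients and load figure are invented placeholders, not the paper's HOMER model.

      # Minimal particle swarm optimisation of a stand-in hybrid-system sizing problem.
      import numpy as np

      def annual_cost(x):
          """Toy cost: capital cost plus a penalty for unmet load (illustrative only)."""
          pv, wt = x
          energy = 1.2 * pv + 3.5 * wt             # MWh/yr produced (placeholder yields)
          unmet = np.maximum(0.0, 120.0 - energy)   # assumed demand of 120 MWh/yr
          return 400 * pv + 1500 * wt + 5000 * unmet

      rng = np.random.default_rng(4)
      n_particles, n_iter = 30, 200
      lo, hi = np.array([0.0, 0.0]), np.array([100.0, 30.0])

      pos = rng.uniform(lo, hi, (n_particles, 2))
      vel = np.zeros_like(pos)
      pbest, pbest_val = pos.copy(), np.array([annual_cost(p) for p in pos])
      gbest = pbest[np.argmin(pbest_val)].copy()

      w, c1, c2 = 0.7, 1.5, 1.5                     # inertia and acceleration weights
      for _ in range(n_iter):
          r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
          vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
          pos = np.clip(pos + vel, lo, hi)
          vals = np.array([annual_cost(p) for p in pos])
          improved = vals < pbest_val
          pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
          gbest = pbest[np.argmin(pbest_val)].copy()

      print(f"best sizing (PV modules, wind turbines): {np.round(gbest, 1)}, "
            f"cost {annual_cost(gbest):.0f}")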

  13. Probabilistic flood inundation mapping at ungauged streams due to roughness coefficient uncertainty in hydraulic modelling

    NASA Astrophysics Data System (ADS)

    Papaioannou, George; Vasiliades, Lampros; Loukas, Athanasios; Aronica, Giuseppe T.

    2017-04-01

    Probabilistic flood inundation mapping is performed and analysed at the ungauged Xerias stream reach, Volos, Greece. The study evaluates the uncertainty introduced by the roughness coefficient values of hydraulic models in flood inundation modelling and mapping. The well-established one-dimensional (1-D) hydraulic model HEC-RAS is selected and linked to Monte Carlo simulations of hydraulic roughness. Terrestrial Laser Scanner data have been used to produce a high-quality DEM to minimise input data uncertainty and to improve the accuracy of the stream channel topography required by the hydraulic model. Initial Manning's n roughness coefficient values are based on pebble-count field surveys and empirical formulas. Various theoretical probability distributions are fitted and evaluated on their accuracy in representing the estimated roughness values. Finally, Latin Hypercube Sampling has been used to generate different sets of Manning roughness values, and flood inundation probability maps have been created with the use of Monte Carlo simulations. Historical flood extent data from an extreme flash flood event are used for validation of the method. The calibration process is based on binary wet-dry reasoning with the use of the Median Absolute Percentage Error evaluation metric. The results show that the proposed procedure supports probabilistic flood hazard mapping at ungauged rivers and provides water resources managers with valuable information for planning and implementing flood risk mitigation strategies.
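
    The sampling step can be sketched as follows: stratified uniform samples are mapped through assumed lognormal fits of Manning's n for the channel and the floodplain, and each sampled pair would parameterise one hydraulic-model run in the Monte Carlo loop. The distribution parameters below are placeholders, and the HEC-RAS call itself is only indicated by a comment.

      # Latin hypercube sampling of Manning's n for channel and floodplain roughness.
      import numpy as np
      from scipy import stats

      def latin_hypercube(n_samples, n_vars, rng):
          """Uniform LHS on [0, 1]: one stratified sample per stratum and variable."""
          u = (rng.random((n_samples, n_vars)) + np.arange(n_samples)[:, None]) / n_samples
          for j in range(n_vars):
              u[:, j] = rng.permutation(u[:, j])   # decorrelate the columns
          return u

      rng = np.random.default_rng(5)
      n_runs = 500
      u = latin_hypercube(n_runs, 2, rng)

      # Assumed lognormal fits to the field/empirical roughness estimates.
      n_channel = stats.lognorm(s=0.3, scale=0.035).ppf(u[:, 0])
      n_floodplain = stats.lognorm(s=0.4, scale=0.08).ppf(u[:, 1])

      # Each pair (n_channel[i], n_floodplain[i]) would parameterise one hydraulic
      # model run; the resulting wet/dry grids are then stacked into probability maps.
      print("first 3 sampled (channel, floodplain) n:",
            list(zip(np.round(n_channel[:3], 3), np.round(n_floodplain[:3], 3))))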

  14. Accelerated aging effects on surface hardness and roughness of lingual retainer adhesives.

    PubMed

    Ramoglu, Sabri Ilhan; Usumez, Serdar; Buyukyilmaz, Tamer

    2008-01-01

    To test the null hypothesis that accelerated aging has no effect on the surface microhardness and roughness of two light-cured lingual retainer adhesives. Ten samples of light-cured materials, Transbond Lingual Retainer (3M Unitek) and Light Cure Retainer (Reliance) were cured with a halogen light for 40 seconds. Vickers hardness and surface roughness were measured before and after accelerated aging of 300 hours in a weathering tester. Differences between mean values were analyzed for statistical significance using a t-test. The level of statistical significance was set at P < .05. The mean Vickers hardness of Transbond Lingual Retainer was 62.8 +/- 3.5 and 79.6 +/- 4.9 before and after aging, respectively. The mean Vickers hardness of Light Cure Retainer was 40.3 +/- 2.6 and 58.3 +/- 4.3 before and after aging, respectively. Differences in both groups were statistically significant (P < .001). Following aging, mean surface roughness was changed from 0.039 microm to 0.121 microm and from 0.021 microm to 0.031 microm for Transbond Lingual Retainer and Light Cure Retainer, respectively. The roughening of Transbond Lingual Retainer with aging was statistically significant (P < .05), while the change in the surface roughness of Light Cure Retainer was not (P > .05). Accelerated aging significantly increased the surface microhardness of both light-cured retainer adhesives tested. It also significantly increased the surface roughness of the Transbond Lingual Retainer.

  15. Rough case-based reasoning system for continuous casting

    NASA Astrophysics Data System (ADS)

    Su, Wenbin; Lei, Zhufeng

    2018-04-01

    Continuous casting occupies a pivotal position in the iron and steel industry. Rough set theory and case-based reasoning (CBR) were combined in the research and implementation of quality assurance for continuous casting billets, to improve the efficiency and accuracy of determining the processing parameters. An object-oriented method was applied to represent the continuous casting cases. The weights of the attributes were calculated by an algorithm based on rough set theory, and a retrieval mechanism for the continuous casting cases was designed. Some cases were adopted to test the retrieval mechanism; by analyzing the results, the influence of the retrieval attributes on determining the processing parameters was revealed. A comprehensive evaluation model was established by using attribute recognition theory. According to the features of the defects, different methods were adopted to describe the quality condition of the continuous casting billet. By using the system, the knowledge is not only inherited but also applied to adjust the processing parameters through case-based reasoning, so as to assure the quality of the continuous casting and improve the intelligence level of the continuous casting process.

  16. Optimisation of nano-silica modified self-compacting high-Volume fly ash mortar

    NASA Astrophysics Data System (ADS)

    Achara, Bitrus Emmanuel; Mohammed, Bashar S.; Fadhil Nuruddin, Muhd

    2017-05-01

    The effects of nano-silica content and superplasticizer (SP) dosage on the compressive strength, porosity and slump flow of high-volume fly ash self-consolidating mortar were investigated. A multiobjective optimisation technique using the Design-Expert software was applied to obtain a solution based on a desirability function that simultaneously optimises the variables and the responses. A desirability value of 0.811 gives the optimised solution. The experimental and predicted results showed minimal errors in all the measured responses.
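
    The desirability approach combines the individual goals into one scalar to maximise. The sketch below uses Derringer-type individual desirabilities for a maximised strength, a minimised porosity and a target slump flow; the limits and the candidate response values are illustrative, not the study's fitted response-surface models.

      # Minimal Derringer-type desirability combination for three responses.
      import numpy as np

      def d_maximise(y, lo, hi):
          return np.clip((y - lo) / (hi - lo), 0.0, 1.0)

      def d_minimise(y, lo, hi):
          return np.clip((hi - y) / (hi - lo), 0.0, 1.0)

      def d_target(y, lo, target, hi):
          left = (y - lo) / (target - lo)
          right = (hi - y) / (hi - target)
          return np.clip(np.minimum(left, right), 0.0, 1.0)

      # Predicted responses at one candidate (nano-silica %, SP dosage) setting.
      strength, porosity, slump = 52.0, 14.5, 255.0   # MPa, %, mm (illustrative)

      d1 = d_maximise(strength, lo=35.0, hi=60.0)
      d2 = d_minimise(porosity, lo=10.0, hi=20.0)
      d3 = d_target(slump, lo=240.0, target=260.0, hi=280.0)

      overall = (d1 * d2 * d3) ** (1 / 3)             # geometric-mean desirability
      print(f"individual desirabilities: {d1:.2f}, {d2:.2f}, {d3:.2f}; overall {overall:.3f}")

    The optimiser then searches the factor space for the setting that maximises the overall desirability; the reported value of 0.811 corresponds to the best such compromise found.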

  17. Multiobjective optimisation of bogie suspension to boost speed on curves

    NASA Astrophysics Data System (ADS)

    Milad Mousavi-Bideleh, Seyed; Berbyuk, Viktor

    2016-01-01

    To improve safety and the maximum admissible speed in different operational scenarios, multiobjective optimisation of the bogie suspension components of a one-car railway vehicle model is considered. The vehicle model has 50 degrees of freedom and is developed in the multibody dynamics software SIMPACK. Track shift force, running stability, and risk of derailment are selected as safety objective functions. The improved maximum admissible speeds of the vehicle on curves are determined based on track plane accelerations of up to 1.5 m/s2. To reduce the number of design parameters for optimisation and improve the computational efficiency, a global sensitivity analysis is carried out using the multiplicative dimensional reduction method (M-DRM). A multistep optimisation routine based on a genetic algorithm (GA) and MATLAB/SIMPACK co-simulation is executed at three levels. The conventional secondary and primary suspension components of the bogie are chosen as the design parameters in the first two steps, respectively. In the last step, semi-active suspension is in focus. The input electrical current to the magnetorheological yaw dampers is optimised to guarantee an appropriate safety level. Semi-active controllers are also applied and their respective effects on bogie dynamics are explored. The safety Pareto-optimised results are compared with those associated with in-service values. The global sensitivity analysis and multistep approach significantly reduced the number of design parameters and improved the computational efficiency of the optimisation. Furthermore, using the optimised values of the design parameters makes it possible to run the vehicle up to 13% faster on curves while a satisfactory safety level is guaranteed. The results obtained can be used in Pareto optimisation and active bogie suspension design problems.

  18. Advanced treatment planning using direct 4D optimisation for pencil-beam scanned particle therapy

    NASA Astrophysics Data System (ADS)

    Bernatowicz, Kinga; Zhang, Ye; Perrin, Rosalind; Weber, Damien C.; Lomax, Antony J.

    2017-08-01

    We report on development of a new four-dimensional (4D) optimisation approach for scanned proton beams, which incorporates both irregular motion patterns and the delivery dynamics of the treatment machine into the plan optimiser. Furthermore, we assess the effectiveness of this technique to reduce dose to critical structures in proximity to moving targets, while maintaining effective target dose homogeneity and coverage. The proposed approach has been tested using both a simulated phantom and a clinical liver cancer case, and allows for realistic 4D calculations and optimisation using irregular breathing patterns extracted from e.g. 4DCT-MRI (4D computed tomography-magnetic resonance imaging). 4D dose distributions resulting from our 4D optimisation can achieve almost the same quality as static plans, independent of the studied geometry/anatomy or selected motion (regular and irregular). Additionally, current implementation of the 4D optimisation approach requires less than 3 min to find the solution for a single field planned on 4DCT of a liver cancer patient. Although 4D optimisation allows for realistic calculations using irregular breathing patterns, it is very sensitive to variations from the planned motion. Based on a sensitivity analysis, target dose homogeneity comparable to static plans (D5-D95  <5%) has been found only for differences in amplitude of up to 1 mm, for changes in respiratory phase  <200 ms and for changes in the breathing period of  <20 ms in comparison to the motions used during optimisation. As such, methods to robustly deliver 4D optimised plans employing 4D intensity-modulated delivery are discussed.

  19. Retrieval of Soil Moisture and Roughness from the Polarimetric Radar Response

    NASA Technical Reports Server (NTRS)

    Sarabandi, Kamal; Ulaby, Fawwaz T.

    1997-01-01

    The main objective of this investigation was the characterization of soil moisture using imaging radars. In order to accomplish this task, a number of intermediate steps had to be undertaken. In this proposal, the theoretical, numerical, and experimental aspects of electromagnetic scattering from natural surfaces was considered with emphasis on remote sensing of soil moisture. In the general case, the microwave backscatter from natural surfaces is mainly influenced by three major factors: (1) the roughness statistics of the soil surface, (2) soil moisture content, and (3) soil surface cover. First the scattering problem from bare-soil surfaces was considered and a hybrid model that relates the radar backscattering coefficient to soil moisture and surface roughness was developed. This model is based on extensive experimental measurements of the radar polarimetric backscatter response of bare soil surfaces at microwave frequencies over a wide range of moisture conditions and roughness scales in conjunction with existing theoretical surface scattering models in limiting cases (small perturbation, physical optics, and geometrical optics models). Also a simple inversion algorithm capable of providing accurate estimates of soil moisture content and surface rms height from single-frequency multi-polarization radar observations was developed. The accuracy of the model and its inversion algorithm is demonstrated using independent data sets. Next the hybrid model for bare-soil surfaces is made fully polarimetric by incorporating the parameters of the co- and cross-polarized phase difference into the model. Experimental data in conjunction with numerical simulations are used to relate the soil moisture content and surface roughness to the phase difference statistics. For this purpose, a novel numerical scattering simulation for inhomogeneous dielectric random surfaces was developed. Finally the scattering problem of short vegetation cover above a rough soil surface was considered. A general scattering model for grass-blades of arbitrary cross section was developed and incorporated in a first order random media model. The vegetation model and the bare-soil model are combined and the accuracy of the combined model is evaluated against experimental observations from a wheat field over the entire growing season. A complete set of ground-truth data and polarimetric backscatter data were collected. Also an inversion algorithm for estimating soil moisture and surface roughness from multi-polarized multi-frequency observations of vegetation-covered ground is developed.

  20. PeTTSy: a computational tool for perturbation analysis of complex systems biology models.

    PubMed

    Domijan, Mirela; Brown, Paul E; Shulgin, Boris V; Rand, David A

    2016-03-10

    Over the last decade sensitivity analysis techniques have been shown to be very useful for analysing complex and high dimensional Systems Biology models. However, many of the currently available toolboxes have either used parameter sampling, been focused on a restricted set of model observables of interest, studied optimisation of an objective function, or have not dealt with multiple simultaneous model parameter changes where the changes can be permanent or temporary. Here we introduce our new, freely downloadable toolbox, PeTTSy (Perturbation Theory Toolbox for Systems). PeTTSy is a package for MATLAB which implements a wide array of techniques for the perturbation theory and sensitivity analysis of large and complex ordinary differential equation (ODE) based models. PeTTSy is a comprehensive modelling framework that introduces a number of new approaches and that fully addresses analysis of oscillatory systems. It examines the sensitivity of models to perturbations of parameters, where the perturbation timing, strength, length and overall shape can be controlled by the user. This can be done in a system-global setting, namely, the user can determine how many parameters to perturb, by how much and for how long. PeTTSy also offers the user the ability to explore the effect of the parameter perturbations on many different types of outputs: period, phase (timing of peak) and model solutions. PeTTSy can be employed on a wide range of mathematical models including free-running and forced oscillators and signalling systems. To enable experimental optimisation using the Fisher Information Matrix, it efficiently allows one to combine multiple variants of a model (i.e. a model with multiple experimental conditions) in order to determine the value of new experiments. It is especially useful in the analysis of large and complex models involving many variables and parameters. PeTTSy is a comprehensive tool for analysing large and complex models of regulatory and signalling systems. It allows for simulation and analysis of models under a variety of environmental conditions and for experimental optimisation of complex combined experiments. With its unique set of tools it makes a valuable addition to the current library of sensitivity analysis toolboxes. We believe that this software will be of great use to the wider biological, systems biology and modelling communities.

  1. Optimisation of insect cell growth in deep-well blocks: development of a high-throughput insect cell expression screen.

    PubMed

    Bahia, Daljit; Cheung, Robert; Buchs, Mirjam; Geisse, Sabine; Hunt, Ian

    2005-01-01

    This report describes a method to culture insect cells in 24 deep-well blocks for the routine small-scale optimisation of baculovirus-mediated protein expression experiments. Miniaturisation of this process provides the necessary reduction in resource allocation, reagents and labour to allow extensive and rapid optimisation of expression conditions, with a concomitant reduction in lead time before commencement of large-scale bioreactor experiments. This greatly simplifies the optimisation process and allows the use of liquid-handling robotics in many of the initial optimisation stages, thereby greatly increasing the throughput of the laboratory. We present several examples of the use of deep-well block expression studies in the optimisation of therapeutically relevant protein targets. We also discuss how the enhanced throughput offered by this approach can be adapted to robotic handling systems and the implications this has for the capacity to conduct multi-parallel protein expression studies.

  2. Mutual information-based LPI optimisation for radar network

    NASA Astrophysics Data System (ADS)

    Shi, Chenguang; Zhou, Jianjiang; Wang, Fei; Chen, Jun

    2015-07-01

    Radar network can offer significant performance improvement for target detection and information extraction employing spatial diversity. For a fixed number of radars, the achievable mutual information (MI) for estimating the target parameters may extend beyond a predefined threshold with full power transmission. In this paper, an effective low probability of intercept (LPI) optimisation algorithm is presented to improve LPI performance for radar network. Based on radar network system model, we first provide Schleher intercept factor for radar network as an optimisation metric for LPI performance. Then, a novel LPI optimisation algorithm is presented, where for a predefined MI threshold, Schleher intercept factor for radar network is minimised by optimising the transmission power allocation among radars in the network such that the enhanced LPI performance for radar network can be achieved. The genetic algorithm based on nonlinear programming (GA-NP) is employed to solve the resulting nonconvex and nonlinear optimisation problem. Some simulations demonstrate that the proposed algorithm is valuable and effective to improve the LPI performance for radar network.

  3. A novel global Harmony Search method based on Ant Colony Optimisation algorithm

    NASA Astrophysics Data System (ADS)

    Fouad, Allouani; Boukhetala, Djamel; Boudjema, Fares; Zenger, Kai; Gao, Xiao-Zhi

    2016-03-01

    The Global-best Harmony Search (GHS) is a stochastic optimisation algorithm recently developed, which hybridises the Harmony Search (HS) method with the concept of swarm intelligence in the particle swarm optimisation (PSO) to enhance its performance. In this article, a new optimisation algorithm called GHSACO is developed by incorporating the GHS with the Ant Colony Optimisation algorithm (ACO). Our method introduces a novel improvisation process, which is different from that of the GHS in the following aspects. (i) A modified harmony memory (HM) representation and conception. (ii) The use of a global random switching mechanism to monitor the choice between the ACO and GHS. (iii) An additional memory consideration selection rule using the ACO random proportional transition rule with a pheromone trail update mechanism. The proposed GHSACO algorithm has been applied to various benchmark functions and constrained optimisation problems. Simulation results demonstrate that it can find significantly better solutions when compared with the original HS and some of its variants.

  4. Visual grading characteristics and ordinal regression analysis during optimisation of CT head examinations.

    PubMed

    Zarb, Francis; McEntee, Mark F; Rainford, Louise

    2015-06-01

    To evaluate visual grading characteristics (VGC) and ordinal regression analysis during head CT optimisation as a potential alternative to visual grading assessment (VGA), traditionally employed to score anatomical visualisation. Patient images (n = 66) were obtained using current and optimised imaging protocols from two CT suites: a 16-slice scanner at the national Maltese centre for trauma and a 64-slice scanner in a private centre. Local resident radiologists (n = 6) performed VGA followed by VGC and ordinal regression analysis. VGC alone indicated that optimised protocols had similar image quality to current protocols. Ordinal logistic regression analysis provided an in-depth evaluation, criterion by criterion, allowing the selective implementation of the protocols. The local radiology review panel supported the implementation of optimised protocols for brain CT examinations (including trauma) in one centre, achieving radiation dose reductions ranging from 24 % to 36 %. In the second centre a 29 % reduction in radiation dose was achieved for follow-up cases. The combined use of VGC and ordinal logistic regression analysis led to clinical decisions being taken on the implementation of the optimised protocols. This improved method of image quality analysis provided the evidence to support imaging protocol optimisation, resulting in significant radiation dose savings. • There is a need for scientifically based image quality evaluation during CT optimisation. • VGC and ordinal regression analysis in combination led to better informed clinical decisions. • VGC and ordinal regression analysis led to dose reductions without compromising diagnostic efficacy.

  5. From smooth to rough, from water to air: the intertidal habitat of Northern clingfish (Gobiesox maeandricus)

    NASA Astrophysics Data System (ADS)

    Ditsche, Petra; Hicks, Madeline; Truong, Lisa; Linkem, Christina; Summers, Adam

    2017-04-01

    The Northern clingfish is a small, Eastern North Pacific fish that can attach to rough, fouled rocks in the intertidal. Their ability to attach to surfaces has been measured previously in the laboratory, and in this study, we show the roughness and fouling of the natural habitat of these fish. We introduce a new method for measuring surface roughness of natural substrates with time-limited accessibility. We expect this method to be broadly applicable in studies of animal/substrate surface interactions in habitats difficult to characterize. Our roughness measurements demonstrate that the fish's ability to attach to very coarse roughness is required in its natural environment. Some of the rocks showed even coarser roughness than the fish could attach to in the lab setting. We also characterized the clingfish's preference for other habitat descriptors such as the size of the rocks, biofilm, and Aufwuchs (macroalgae, encrusting invertebrates) cover, as well as grain size of underlying substrate. Northern clingfish seek shelter under rocks of 15-45 cm in size. These rocks have variable Aufwuchs cover, and gravel is the main underlying substrate type. In the intertidal, environmental conditions change with the tides, and for clingfish, the daily time under water (DTUW%) was a key parameter explaining distribution. Rather than location being determined by intertidal zonation, the fish required roughly 80% DTUW, a finer-scale measure of tidal inundation. We expect that this is likely because the mobility of the fish allows them to more closely track the ideal inundation in the marine intertidal.

  6. Instrumental color control for metallic coatings

    NASA Astrophysics Data System (ADS)

    Chou, W.; Han, Bing; Cui, Guihua; Rigg, Bryan; Luo, Ming R.

    2002-06-01

    This paper describes work investigating a suitable color quality control method for metallic coatings. A set of psychological experiments was carried out based upon 50 pairs of samples. The results were used to test the performance of various color difference formulae. Different techniques were developed by optimising the weights and/or the lightness parametric factors of colour differences calculated from the four measuring angles. The results show that the new techniques give a significant improvement compared to conventional techniques.

  7. Multi Robot Path Planning for Budgeted Active Perception with Self-Organising Maps

    DTIC Science & Technology

    2016-10-04

    Multi-Robot Path Planning for Budgeted Active Perception with Self-Organising Maps. Graeme Best, Jan Faigl and Robert Fitch. Abstract— We propose a...optimise paths for a multi-robot team that aims to maximally observe a set of nodes in the environment. The selected nodes are observed by visiting...regions, each node has an observation reward, and the robots are constrained by travel budgets. The SOM algorithm jointly selects and allocates nodes

  8. Identification of Text and Symbols on a Liquid Crystal Display Part 2: Contrast and Luminance Settings to Optimise Legibility

    DTIC Science & Technology

    2009-02-01


  9. Optimising the manufacture, formulation, and dose of antiretroviral drugs for more cost-efficient delivery in resource-limited settings: a consensus statement.

    PubMed

    Crawford, Keith W; Ripin, David H Brown; Levin, Andrew D; Campbell, Jennifer R; Flexner, Charles

    2012-07-01

    It is expected that funding limitations for worldwide HIV treatment and prevention in resource-limited settings will continue, and, because the need for treatment scale-up is urgent, the emphasis on value for money has become an increasing priority. The Conference on Antiretroviral Drug Optimization--a collaborative project between the Clinton Health Access Initiative, the Johns Hopkins University School of Medicine, and the Bill & Melinda Gates Foundation--brought together process chemists, clinical pharmacologists, pharmaceutical scientists, physicians, pharmacists, and regulatory specialists to explore strategies for the reduction of antiretroviral drug costs. The antiretroviral drugs discussed were prioritised for consideration on the basis of their market impact, and the objectives of the conference were framed as discussion questions generated to guide scientific assessment of potential strategies. These strategies included modifications to the synthesis of the active pharmaceutical ingredient (API) and use of cheaper sources of raw materials in the synthesis of these ingredients. Innovations in product formulation could improve bioavailability, thus requiring less API. For several antiretroviral drugs, studies show efficacy is maintained at doses below the approved dose (eg, efavirenz, lopinavir plus ritonavir, atazanavir, and darunavir). Optimising pharmacoenhancement and extending shelf life are additional strategies. The conference highlighted a range of interventions; optimum cost savings could be achieved through combining approaches. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. Comprehensive quantum chemical and spectroscopic (FTIR, FT-Raman, 1H, 13C NMR) investigations of O-desmethyltramadol hydrochloride an active metabolite in tramadol - An analgesic drug

    NASA Astrophysics Data System (ADS)

    Arjunan, V.; Santhanam, R.; Marchewka, M. K.; Mohan, S.

    2014-03-01

    O-desmethyltramadol is one of the main metabolites of tramadol, which is widely used clinically, and has analgesic activity. The FTIR and FT-Raman spectra of O-desmethyltramadol hydrochloride are recorded in the solid phase in the regions 4000-400 cm-1 and 4000-100 cm-1, respectively. The observed fundamentals are assigned to different normal modes of vibration. Theoretical studies have been performed on its hydrochloride salt. The structure of the compound has been optimised with the B3LYP method using 6-31G** and cc-pVDZ basis sets. The optimised bond lengths and bond angles are correlated with the X-ray data. The experimental wavenumbers were compared with the scaled vibrational frequencies determined by DFT methods. The IR and Raman intensities are determined with the B3LYP method using cc-pVDZ and 6-31G(d,p) basis sets. The total electron density and molecular electrostatic potential surfaces of the molecule are constructed using the B3LYP/cc-pVDZ method to display the electrostatic potential (electron + nuclei) distribution. The electronic properties, the HOMO and LUMO energies, were determined. Natural bond orbital analysis of O-desmethyltramadol hydrochloride has been performed to indicate the presence of intramolecular charge transfer. The 1H and 13C NMR chemical shifts of the molecule have been analysed.

  11. Surface roughness evaluation on mandrels and mirror shells for future X-ray telescopes

    NASA Astrophysics Data System (ADS)

    Sironi, Giorgia; Spiga, D.

    2008-07-01

    Several X-ray missions that will be operating in the near future, in particular SIMBOL-X, e-Rosita, Con-X/HXT, SVOM/XIAO and Polar-X, will be based on focusing optics manufactured by means of the Ni electroforming replication technique. This production method has already been successfully exploited for SAX, XMM and Swift-XRT. Optical surfaces for X-ray reflection have to be as smooth as possible, also at high spatial frequencies. Hence it is crucial to keep microroughness under control in order to reduce scattering effects: a high rms microroughness would degrade the angular resolution and cause a loss of effective area. Stringent requirements therefore have to be set for the surface roughness of mirror shells, depending on the specific energy range investigated, and the roughness evolution has to be carefully monitored during the subsequent steps of mirror-shell realization. This means studying the roughness evolution along the chain of mandrel, mirror shell and multilayer deposition, as well as the degradation of mandrel roughness following repeated replicas. Such a study allows inferring which phases of production are chiefly responsible for the roughness growth and can help in finding solutions to optimize the processes involved. The study presented here is carried out in the context of the technological consolidation related to SIMBOL-X, along with a systematic metrological study of mandrels and mirror shells. To monitor the roughness increase following each replica, a multi-instrumental approach was adopted: microprofiles were analysed by means of their Power Spectral Density (PSD) over the 1000-0.01 μm range of spatial wavelengths. This enables the direct comparison of roughness data taken with instruments characterized by different operative frequency ranges, in particular optical interferometers and Atomic Force Microscopes. The analysis allowed us to set realistic specifications on the mandrel roughness to be achieved, and to suggest a limit on the maximum number of replicas a mandrel can undergo before being refurbished.
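
    A minimal sketch of the kind of one-dimensional PSD computation used to compare roughness data across instruments is given below, assuming an evenly sampled height profile; it is an illustration only and not the metrology pipeline used in the study.

```python
# Minimal sketch: one-dimensional power spectral density of a sampled surface
# profile, the quantity used above to compare roughness data from instruments
# with different spatial-frequency ranges.  Not the authors' code.
import numpy as np

def profile_psd(heights, dx):
    """heights: evenly sampled surface heights (e.g. nm); dx: sampling step
    (e.g. mm).  Returns spatial frequencies and a one-sided PSD estimate."""
    h = heights - np.mean(heights)                # remove the mean level
    n = len(h)
    spectrum = np.fft.rfft(h)
    freqs = np.fft.rfftfreq(n, d=dx)              # cycles per unit length
    psd = 2.0 * (np.abs(spectrum) ** 2) * dx / n  # factor 2: one-sided spectrum
    return freqs[1:], psd[1:]                     # drop the zero-frequency bin

# Example on a synthetic rough profile.
rng = np.random.default_rng(1)
h = np.cumsum(rng.normal(size=4096)) * 0.1        # random-walk "roughness"
f, psd = profile_psd(h, dx=0.001)                 # 1 um steps, in mm
rms = np.sqrt(np.sum(psd) * (f[1] - f[0]))        # rms roughness from the PSD
print(f"rms roughness ~ {rms:.2f}")
```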

  12. Automated extraction of decision rules for leptin dynamics--a rough sets approach.

    PubMed

    Brtka, Vladimir; Stokić, Edith; Srdić, Biljana

    2008-08-01

    A significant area in the field of medical informatics is concerned with the learning of medical models from low-level data. The goals of inducing models from data are twofold: analysis of the structure of the models so as to gain new insight into the unknown phenomena, and development of classifiers or outcome predictors for unseen cases. In this paper, we employ an approach based on the indiscernibility relation and rough set theory to study certain questions concerning the design of a model based on if-then rules, from low-level data including 36 parameters, one of them leptin. To generate an easy-to-read, easy-to-interpret, and easy-to-inspect model, we have used the ROSETTA software system. The main goal of this work is to gain new insight into the phenomenon of leptin levels interplaying with other risk factors in obesity.
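
    For readers unfamiliar with the rough set machinery behind such if-then rule models, the sketch below illustrates the indiscernibility relation and the lower and upper approximations of a decision class on toy data; it is not the ROSETTA system and the attributes shown are hypothetical.

```python
# Minimal sketch of the rough set constructs used above: the indiscernibility
# relation over a set of condition attributes and the lower/upper
# approximations of a decision class.  Toy data; not the ROSETTA system.
from collections import defaultdict

# Each row: (object id, {attribute: discretised value}, decision)
table = [
    (1, {"leptin": "high", "bmi": "high"}, "risk"),
    (2, {"leptin": "high", "bmi": "high"}, "risk"),
    (3, {"leptin": "low",  "bmi": "high"}, "risk"),
    (4, {"leptin": "low",  "bmi": "high"}, "no_risk"),
    (5, {"leptin": "low",  "bmi": "low"},  "no_risk"),
]

def indiscernibility_classes(rows, attributes):
    classes = defaultdict(set)
    for oid, attrs, _ in rows:
        key = tuple(attrs[a] for a in attributes)
        classes[key].add(oid)
    return list(classes.values())

def approximations(rows, attributes, decision_value):
    target = {oid for oid, _, d in rows if d == decision_value}
    lower, upper = set(), set()
    for block in indiscernibility_classes(rows, attributes):
        if block <= target:
            lower |= block            # certainly in the class
        if block & target:
            upper |= block            # possibly in the class
    return lower, upper

low, up = approximations(table, ["leptin", "bmi"], "risk")
print("lower approximation:", sorted(low))   # objects yielding certain rules
print("boundary region:", sorted(up - low))  # source of uncertain rules
```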

  13. Impact of the ongoing Amazonian deforestation on local precipitation: A GCM simulation study

    NASA Technical Reports Server (NTRS)

    Walker, G. K.; Sud, Y. C.; Atlas, R.

    1995-01-01

    Numerical simulation experiments were conducted to delineate the influence of in situ deforestation data on episodic rainfall by comparing two ensembles of five 5-day integrations performed with a recent version of the Goddard Laboratory for Atmospheres General Circulation Model (GCM) that has a simple biosphere model (SiB). The first set, called control cases, used the standard SiB vegetation cover (comprising 12 biomes) and assumed a fully forested Amazonia, while the second set, called deforestation cases, distinguished the partially deforested regions of Amazonia as savanna. Except for this difference, all other initial and prescribed boundary conditions were kept identical in both sets of integrations. The differential analyses of these five cases show the following local effects of deforestation. (1) A discernible decrease in evapotranspiration of about 0.80 mm/d (roughly 18%) that is quite robust in the averages for 1-, 2-, and 5-day forecasts. (2) A decrease in precipitation of about 1.18 mm/d (roughly 8%) that begins to emerge even in 1-2 day averages and exhibits complex evolution that extends downstream with the winds. (3) A significant decrease in the surface drag force (as a consequence of reduced surface roughness of deforested regions) that, in turn, affects the dynamical structure of moisture convergence and circulation. The surface winds increase significantly during the first day, and thereafter the increase is well maintained even in the 2- and 5-day averages.

  14. Effect of tillage system and cumulative rainfall on multifractal parameters of soil surface microrelief

    NASA Astrophysics Data System (ADS)

    Vidal Vázquez, E.; Miranda, J. G. V.; Mirás-Avalos, J. M.; Díaz, M. C.; Paz-Ferreiro, J.

    2009-04-01

    Mathematical description of the spatial characteristics of soil surface microrelief still remains a challenge. Soil surface roughness parameters are required for modelling overland flow and erosion. The objective of this work was to evaluate the potential of multifractal analysis for describing the decay of initial surface roughness induced by natural rainfall under different soil tillage systems. Field experiments were performed on an Oxisol at Campinas, São Paulo State (Brazil). Six tillage treatments, namely, disc harrow, disc plow, chisel plow, disc harrow + disc level, disc plow + disc level and chisel plow + disc level were tested. In each plot soil surface microrelief was measured four times, with increasing amounts of natural rainfall, using a pinmeter. The sampling scheme was a square grid with 25 x 25 mm point spacing and the plot size was 1350 x 1350 mm, so that each data set consisted of 3025 individual elevation points. Duplicated measurements were taken per treatment and date, yielding a total of 48 experimental data sets. All the investigated microrelief data sets exhibited, in general, scaling properties, and the degree of multifractality showed wide differences between them. Multifractal analysis distinguishes two different patterns of soil surface microrelief: the first has features close to monofractal spectra and the second clearly indicates multifractal behavior. Both singularity spectra and generalized dimension spectra allow differentiation between soil tillage systems. In general, changes in the values of multifractal parameters under simulated rainfall showed little or no correspondence with the evolution of the vertical microrelief component described by indices such as the standard deviation of the point height measurements. Multifractal parameters provided valuable information for characterizing the spatial features of soil surface microrelief as they were able to discriminate data sets with similar values for the vertical component of roughness.
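
    A minimal sketch of a generalized dimension spectrum D_q obtained by box counting over a gridded surface measure is given below; treating the normalised elevation itself as the measure is an assumption made purely for illustration, and the moment ranges and scales used in the study are not reproduced.

```python
# Minimal sketch of a generalized dimension spectrum D_q computed by box
# counting over a gridded surface measure, illustrating the kind of
# multifractal analysis described above.  The measure here is simply the
# normalised (non-negative) elevation, which is an assumption for illustration.
import numpy as np

def generalized_dimensions(grid, qs, box_sizes):
    mu = grid - grid.min()
    mu = mu / mu.sum()                       # normalise to a probability measure
    n = grid.shape[0]
    dq = []
    for q in qs:
        log_eps, log_chi = [], []
        for s in box_sizes:                  # each s must divide n
            boxes = mu.reshape(n // s, s, n // s, s).sum(axis=(1, 3))
            p = boxes[boxes > 0]
            log_eps.append(np.log(s / n))
            log_chi.append(np.log(np.sum(p ** q)))
        slope = np.polyfit(log_eps, log_chi, 1)[0]   # tau(q)
        dq.append(slope / (q - 1) if q != 1 else np.nan)  # q = 1 skipped here
    return np.array(dq)

rng = np.random.default_rng(2)
elevation = rng.normal(size=(64, 64)).cumsum(axis=0).cumsum(axis=1)
qs = np.array([-2.0, -1.0, 0.0, 2.0, 3.0])
print(generalized_dimensions(elevation, qs, box_sizes=[2, 4, 8, 16, 32]))
```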

  15. Tribological Properties of PVD Ti/C-N Nanocoatings

    NASA Astrophysics Data System (ADS)

    Leitans, A.; Lungevics, J.; Rudzitis, J.; Filipovs, A.

    2017-04-01

    The present paper discusses and analyses tribological properties of various coatings that increase surface wear resistance. Four Ti/C-N nanocoatings with different coating deposition settings are analysed. Tribological and metrological tests on the samples are performed: 2D and 3D parameters of the surface roughness are measured with a modern profilometer, and the friction coefficient is measured with CSM Instruments equipment. The roughness parameters Ra, Sa, Sz, Str, Sds, Vmp and Vmc and the friction coefficient at 6 N load are determined during the experiment. The examined samples have many pores, which is the main reason for the relatively large values of the roughness parameters. Slight wear is identified in all four samples as well; the friction coefficient values range from 0.21 to 0.29. Wear rate values are not calculated for the investigated coatings, as no pronounced tribotracks are detected on the coating surface.
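
    The amplitude parameters named above reduce to simple statistics of the height data; the sketch below computes Ra for a profile and Sa and Sz for an areal measurement, omitting the filtering and form removal that a real profilometer applies.

```python
# Minimal sketch of the amplitude roughness parameters mentioned above:
# Ra (arithmetic mean deviation of a 2D profile), Sa (its areal counterpart)
# and Sz taken here as the peak-to-valley height of the areal data.
# Real instruments apply filtering and form removal first; omitted here.
import numpy as np

def ra(profile):
    z = profile - np.mean(profile)
    return np.mean(np.abs(z))

def sa(surface):
    z = surface - np.mean(surface)
    return np.mean(np.abs(z))

def sz(surface):
    z = surface - np.mean(surface)
    return np.max(z) - np.min(z)

rng = np.random.default_rng(3)
surface = rng.normal(scale=0.2, size=(256, 256))   # heights in micrometres
print(f"Sa = {sa(surface):.3f} um, Sz = {sz(surface):.3f} um")
print(f"Ra of one profile = {ra(surface[128]):.3f} um")
```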

  16. Reduction of Surface Roughness by Means of Laser Processing over Additive Manufacturing Metal Parts.

    PubMed

    Alfieri, Vittorio; Argenio, Paolo; Caiazzo, Fabrizia; Sergi, Vincenzo

    2016-12-31

    Optimization of processing parameters and exposure strategies is usually performed in additive manufacturing to set up the process; nevertheless, standards for roughness may not be evenly matched on a single complex part, since surface features depend on the building direction of the part. This paper aims to evaluate post processing treating via laser surface modification by means of scanning optics and beam wobbling to process metal parts resulting from selective laser melting of stainless steel in order to improve surface topography. The results are discussed in terms of roughness, geometry of the fusion zone in the cross-section, microstructural modification, and microhardness so as to assess the effects of laser post processing. The benefits of beam wobbling over linear scanning processing are shown, as heat effects in the base metal are proven to be lower.

  17. Reduction of Surface Roughness by Means of Laser Processing over Additive Manufacturing Metal Parts

    PubMed Central

    Alfieri, Vittorio; Argenio, Paolo; Caiazzo, Fabrizia; Sergi, Vincenzo

    2016-01-01

    Optimization of processing parameters and exposure strategies is usually performed in additive manufacturing to set up the process; nevertheless, standards for roughness may not be evenly matched on a single complex part, since surface features depend on the building direction of the part. This paper aims to evaluate post processing treating via laser surface modification by means of scanning optics and beam wobbling to process metal parts resulting from selective laser melting of stainless steel in order to improve surface topography. The results are discussed in terms of roughness, geometry of the fusion zone in the cross-section, microstructural modification, and microhardness so as to assess the effects of laser post processing. The benefits of beam wobbling over linear scanning processing are shown, as heat effects in the base metal are proven to be lower. PMID:28772380

  18. Effectiveness of an implementation optimisation intervention aimed at increasing parent engagement in HENRY, a childhood obesity prevention programme - the Optimising Family Engagement in HENRY (OFTEN) trial: study protocol for a randomised controlled trial.

    PubMed

    Bryant, Maria; Burton, Wendy; Cundill, Bonnie; Farrin, Amanda J; Nixon, Jane; Stevens, June; Roberts, Kim; Foy, Robbie; Rutter, Harry; Hartley, Suzanne; Tubeuf, Sandy; Collinson, Michelle; Brown, Julia

    2017-01-24

    Family-based interventions to prevent childhood obesity depend upon parents' taking action to improve diet and other lifestyle behaviours in their families. Programmes that attract and retain high numbers of parents provide an enhanced opportunity to improve public health and are also likely to be more cost-effective than those that do not. We have developed a theory-informed optimisation intervention to promote parent engagement within an existing childhood obesity prevention group programme, HENRY (Health Exercise Nutrition for the Really Young). Here, we describe a proposal to evaluate the effectiveness of this optimisation intervention in regard to the engagement of parents and cost-effectiveness. The Optimising Family Engagement in HENRY (OFTEN) trial is a cluster randomised controlled trial being conducted across 24 local authorities (approximately 144 children's centres) which currently deliver HENRY programmes. The primary outcome will be parental enrolment and attendance at the HENRY programme, assessed using routinely collected process data. Cost-effectiveness will be presented in terms of primary outcomes using acceptability curves and through eliciting the willingness to pay for the optimisation from HENRY commissioners. Secondary outcomes include the longitudinal impact of the optimisation, parent-reported infant intake of fruits and vegetables (as a proxy to compliance) and other parent-reported family habits and lifestyle. This innovative trial will provide evidence on the implementation of a theory-informed optimisation intervention to promote parent engagement in HENRY, a community-based childhood obesity prevention programme. The findings will be generalisable to other interventions delivered to parents in other community-based environments. This research meets the expressed needs of commissioners, children's centres and parents to optimise the potential impact that HENRY has on obesity prevention. A subsequent cluster randomised controlled pilot trial is planned to determine the practicality of undertaking a definitive trial to robustly evaluate the effectiveness and cost-effectiveness of the optimised intervention on childhood obesity prevention. ClinicalTrials.gov identifier: NCT02675699 . Registered on 4 February 2016.

  19. Radiation dose optimisation for conventional imaging in infants and newborns using automatic dose management software: an application of the new 2013/59 EURATOM directive.

    PubMed

    Alejo, L; Corredoira, E; Sánchez-Muñoz, F; Huerga, C; Aza, Z; Plaza-Núñez, R; Serrada, A; Bret-Zurita, M; Parrón, M; Prieto-Areyano, C; Garzón-Moll, G; Madero, R; Guibelalde, E

    2018-04-09

    Objective: The new 2013/59 EURATOM Directive (ED) demands dosimetric optimisation procedures without undue delay. The aim of this study was to optimise paediatric conventional radiology examinations by applying the ED without compromising the clinical diagnosis. Automatic dose management software (ADMS) was used to analyse 2678 studies of children from birth to 5 years of age, obtaining local diagnostic reference levels (DRLs) in terms of entrance surface air kerma. Given that the local DRL for infants and chest examinations exceeded the European Commission (EC) DRL, an optimisation was performed by decreasing the kVp and applying automatic exposure control. To assess the image quality, an analysis of high-contrast resolution (HCSR), signal-to-noise ratio (SNR) and figure of merit (FOM) was performed, as well as a blind test based on the generalised estimating equations method. For newborns and chest examinations, the local DRL exceeded the EC DRL by 113%. After the optimisation, a reduction of 54% was obtained. No significant differences were found in the image quality blind test. A decrease in SNR (-37%) and HCSR (-68%), and an increase in FOM (42%), was observed. ADMS allows the fast calculation of local DRLs and the performance of optimisation procedures in babies without delay. However, physical and clinical analyses of image quality are still needed to ensure diagnostic integrity after the optimisation process. Advances in knowledge: ADMS are useful to detect radiation protection problems and to perform optimisation procedures in paediatric conventional imaging without undue delay, as the ED requires.

  20. On the stability of von Kármán rotating-disk boundary layers with radial anisotropic surface roughness

    NASA Astrophysics Data System (ADS)

    Garrett, S. J.; Cooper, A. J.; Harris, J. H.; Özkan, M.; Segalini, A.; Thomas, P. J.

    2016-01-01

    We summarise results of a theoretical study investigating the distinct convective instability properties of steady boundary-layer flow over rough rotating disks. A generic roughness pattern of concentric circles with sinusoidal surface undulations in the radial direction is considered. The goal is to compare predictions obtained by means of two alternative, and fundamentally different, modelling approaches for surface roughness for the first time. The motivating rationale is to identify commonalities and isolate results that might potentially represent artefacts associated with the particular methodologies underlying one of the two modelling approaches. The most significant result of practical relevance obtained is that both approaches predict overall stabilising effects on the type I instability mode of rotating-disk flow. This mode leads to transition of the rotating-disk boundary layer and, more generally, the transition of boundary layers with a cross-flow profile. Stabilisation of the type I mode means that it may be possible to exploit surface roughness for laminar-flow control in boundary layers with a cross-flow component. However, we also find differences between the two sets of model predictions, some subtle and some substantial. These will represent criteria for establishing which of the two alternative approaches is more suitable to correctly describe experimental data when these become available.

  1. A Fractal Interpretation of Controlled-Source Helicopter Electromagnetic Survey Data: Seco Creek, Edwards Aquifer, TX

    NASA Astrophysics Data System (ADS)

    Decker, K. T.; Everett, M. E.

    2009-12-01

    The Edwards aquifer lies in the structurally complex Balcones fault zone and supplies water to the growing city of San Antonio. To ensure that future demands for water are met, the hydrological and geophysical properties of the aquifer must be well understood. In most settings, fracture lengths and displacements occur in power-law distributions. Fracture distribution plays an important role in determining electrical and hydraulic current flowpaths. 1-D synthetic models of the controlled-source electromagnetic (CSEM) response are developed for layered models with a fractured layer at depth, described by the roughness parameter βV (0≤βV<1) associated with the power-law length-scale dependence of electrical conductivity. A value of βV = 0 represents homogeneous, continuous media, while a value of 0<βV<1 indicates that roughness exists. The Seco Creek frequency-domain helicopter electromagnetic survey data set is analyzed by introducing the similarly defined roughness parameter βH to detect lateral roughness along survey lines. Fourier transforming the apparent resistivity as a function of position along the flight line into the wavenumber domain using a 256-point sliding window gives the power spectral density (PSD) plot for each line. The value of βH is the slope of the least-squares regression of the PSD in each 256-point window. Changes in βH with distance along the flight line are plotted. Large values of βH are found near well-known large fractures, and maps of βH produced by interpolating values of βH along survey lines suggest previously undetected structure at depth.
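
    The sliding-window estimate of βH described above can be sketched as follows: within each 256-point window the signal is Fourier transformed, and the spectral slope is taken from a least-squares fit to the PSD in log-log space. The synthetic input below stands in for apparent resistivity along a flight line; the window step and detrending choices are assumptions.

```python
# Minimal sketch of the sliding-window roughness estimate described above:
# within each 256-point window along a flight line, the signal is Fourier
# transformed, and beta_H is taken as the (negated) slope of a least-squares
# fit to the power spectral density in log-log space.  Synthetic data only.
import numpy as np

def spectral_slopes(signal, window=256, step=64):
    positions, betas = [], []
    for start in range(0, len(signal) - window + 1, step):
        seg = signal[start:start + window]
        seg = seg - np.mean(seg)
        psd = np.abs(np.fft.rfft(seg)) ** 2
        freqs = np.fft.rfftfreq(window, d=1.0)
        mask = freqs > 0                       # exclude the zero frequency
        slope, _ = np.polyfit(np.log(freqs[mask]), np.log(psd[mask] + 1e-30), 1)
        positions.append(start + window // 2)
        betas.append(-slope)                   # PSD ~ f^(-beta)
    return np.array(positions), np.array(betas)

rng = np.random.default_rng(4)
rho_apparent = np.cumsum(rng.normal(size=4096))   # stand-in for a survey line
pos, beta_h = spectral_slopes(rho_apparent)
print(beta_h[:5])
```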

  2. Grassland futures in Great Britain - Productivity assessment and scenarios for land use change opportunities.

    PubMed

    Qi, Aiming; Holland, Robert A; Taylor, Gail; Richter, Goetz M

    2018-09-01

    To optimise trade-offs provided by future changes in grassland use intensity, spatially and temporally explicit estimates of respective grassland productivities are required at the systems level. Here, we benchmark the potential national availability of grassland biomass, identify optimal strategies for its management, and investigate the relative importance of intensification over reversion (prioritising productivity versus environmental ecosystem services). Process-conservative meta-models for different grasslands were used to calculate the baseline dry matter yields (DMY; 1961-1990) at 1 km2 resolution for the whole UK. The effects of climate change, rising atmospheric [CO2] and technological progress on baseline DMYs were used to estimate future grassland productivities (up to 2050) for the low and medium CO2 emission scenarios of UKCP09. UK benchmark productivities of 12.5, 8.7 and 2.8 t/ha on temporary, permanent and rough-grazing grassland, respectively, accounted for productivity gains by 2010. By 2050, productivities under the medium emission scenario are predicted to increase to 15.5 and 9.8 t/ha on temporary and permanent grassland, respectively, but not on rough grassland. Based on surveyed grassland distributions for Great Britain in 2010, the annual availability of grassland biomass is likely to rise from 64 to 72 million tonnes by 2050. Assuming that optimal N application could close existing productivity gaps of ca. 40%, a range of management options could deliver an additional 21 million tonnes of biomass available for bioenergy. Scenarios of changes in grassland use intensity demonstrated considerable scope for maintaining or further increasing grassland production and sparing some grassland for the provision of environmental ecosystem services. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.

  3. Integration of Monte-Carlo ray tracing with a stochastic optimisation method: application to the design of solar receiver geometry.

    PubMed

    Asselineau, Charles-Alexis; Zapata, Jose; Pye, John

    2015-06-01

    A stochastic optimisation method adapted to illumination and radiative heat transfer problems involving Monte-Carlo ray-tracing is presented. A solar receiver shape optimisation case study illustrates the advantages of the method and its potential: efficient receivers are identified using a moderate computational cost.

  4. Quantification of tillage, plant cover, and cumulative rainfall effects on soil surface microrelief by statistical, geostatistical and fractal indices

    NASA Astrophysics Data System (ADS)

    Paz-Ferreiro, J.; Bertol, I.; Vidal Vázquez, E.

    2008-07-01

    Changes in soil surface microrelief with cumulative rainfall under different tillage systems and crop cover conditions were investigated in southern Brazil. Surface cover was none (fallow) or the crop succession maize followed by oats. Tillage treatments were: 1) conventional tillage on bare soil (BS), 2) conventional tillage (CT), 3) minimum tillage (MT) and 4) no tillage (NT) under maize and oats. Measurements were taken with a manual relief meter on small rectangular grids of 0.234 and 0.156 m2, throughout the growing seasons of maize and oats, respectively. Each data set consisted of 200 point height readings, the size of the smallest cells being 3×5 cm during the maize and 2×5 cm during the oats growth periods. Random Roughness (RR), Limiting Difference (LD), Limiting Slope (LS) and two fractal parameters, fractal dimension (D) and crossover length (l), were estimated from the measured microtopographic data sets. Indices describing the vertical component of soil roughness such as RR, LD and l generally decreased with cumulative rain in the BS treatment, left fallow, and in the CT and MT treatments under the maize and oats canopies. However, these indices were not substantially affected by cumulative rain in the NT treatment, whose surface was protected by previous crop residues. Roughness decay from initial values was larger in the BS treatment than in the CT and MT treatments. Moreover, roughness decay generally tended to be faster under maize than under oats. The RR and LD indices decreased quadratically, while the l index decreased exponentially in the tilled BS, CT and MT treatments. Crossover length was sensitive to differences in soil roughness conditions, allowing a description of microrelief decay due to rainfall in the tilled treatments, although better correlations between cumulative rainfall and the most commonly used indices RR and LD were obtained. At the studied scale, parameters l and D have been found to be useful in interpreting the configuration properties of the soil surface microrelief.
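
    Of the indices listed above, random roughness (RR) is the simplest to reproduce; the sketch below computes it as the standard deviation of point heights after removing a best-fit plane, which is one common convention. The LD/LS semivariogram indices and the fractal parameters require variogram fitting and are not reproduced.

```python
# Minimal sketch of the random roughness (RR) index referred to above,
# computed here as the standard deviation of the point heights after removing
# a best-fit plane (one common convention; oriented-row detrending and the
# LD/LS semivariogram indices are not reproduced).
import numpy as np

def random_roughness(heights):
    """heights: 2-D array of elevation readings on a regular grid (mm)."""
    ny, nx = heights.shape
    y, x = np.mgrid[0:ny, 0:nx]
    a = np.column_stack([x.ravel(), y.ravel(), np.ones(x.size)])
    coeffs, *_ = np.linalg.lstsq(a, heights.ravel(), rcond=None)
    trend = (a @ coeffs).reshape(ny, nx)       # best-fit plane (tillage slope)
    residual = heights - trend
    return float(np.std(residual))

rng = np.random.default_rng(5)
grid = 2.0 * np.mgrid[0:20, 0:10][0] + rng.normal(scale=4.0, size=(20, 10))
print(f"RR = {random_roughness(grid):.2f} mm")
```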

  5. With a little help from a computer: discriminating between bacterial and viral meningitis based on dominance-based rough set approach analysis

    PubMed Central

    Gowin, Ewelina; Januszkiewicz-Lewandowska, Danuta; Słowiński, Roman; Błaszczyński, Jerzy; Michalak, Michał; Wysocki, Jacek

    2017-01-01

    Differential diagnosis of bacterial and viral meningitis remains an important clinical problem. A number of methods to assist in the diagnoses of meningitis have been developed, but none of them have been found to have high specificity with 100% sensitivity. We conducted a retrospective analysis of the medical records of 148 children hospitalized in St. Joseph Children's Hospital in Poznań. In this study, we applied for the first time the original methodology of dominance-based rough set approach (DRSA) to diagnostic patterns of meningitis data and represented them by decision rules useful in discriminating between bacterial and viral meningitis. The induction algorithm is called VC-DomLEM; it has been implemented as a software package called jMAF (http://www.cs.put.poznan.pl/jblaszczynski/Site/jRS.html), based on the java Rough Set (jRS) library. In the studied group, there were 148 patients (78 boys and 70 girls), and the mean age was 85 months. We analyzed 14 attributes, of which only 4 were used to generate the 6 rules, with C-reactive protein (CRP) being the most valuable. Factors associated with bacterial meningitis were: CRP level ≥86 mg/L, number of leukocytes in cerebrospinal fluid (CSF) ≥4481 μL−1, symptoms duration no longer than 2 days, or age less than 1 month. Factors associated with viral meningitis were CRP level not higher than 19 mg/L, or CRP level not higher than 84 mg/L in a patient older than 11 months with no more than 1100 μL−1 leukocytes in CSF. We established the minimum set of attributes significant for classification of patients with meningitis. This is a new set of rules, which, although intuitively anticipated by some clinicians, has not been formally demonstrated until now. PMID:28796045

  6. With a little help from a computer: discriminating between bacterial and viral meningitis based on dominance-based rough set approach analysis.

    PubMed

    Gowin, Ewelina; Januszkiewicz-Lewandowska, Danuta; Słowiński, Roman; Błaszczyński, Jerzy; Michalak, Michał; Wysocki, Jacek

    2017-08-01

    Differential diagnosis of bacterial and viral meningitis remains an important clinical problem. A number of methods to assist in the diagnoses of meningitis have been developed, but none of them have been found to have high specificity with 100% sensitivity. We conducted a retrospective analysis of the medical records of 148 children hospitalized in St. Joseph Children's Hospital in Poznań. In this study, we applied for the first time the original methodology of dominance-based rough set approach (DRSA) to diagnostic patterns of meningitis data and represented them by decision rules useful in discriminating between bacterial and viral meningitis. The induction algorithm is called VC-DomLEM; it has been implemented as a software package called jMAF (http://www.cs.put.poznan.pl/jblaszczynski/Site/jRS.html), based on the java Rough Set (jRS) library. In the studied group, there were 148 patients (78 boys and 70 girls), and the mean age was 85 months. We analyzed 14 attributes, of which only 4 were used to generate the 6 rules, with C-reactive protein (CRP) being the most valuable. Factors associated with bacterial meningitis were: CRP level ≥86 mg/L, number of leukocytes in cerebrospinal fluid (CSF) ≥4481 μL−1, symptoms duration no longer than 2 days, or age less than 1 month. Factors associated with viral meningitis were CRP level not higher than 19 mg/L, or CRP level not higher than 84 mg/L in a patient older than 11 months with no more than 1100 μL−1 leukocytes in CSF. We established the minimum set of attributes significant for classification of patients with meningitis. This is a new set of rules, which, although intuitively anticipated by some clinicians, has not been formally demonstrated until now.
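
    For illustration only, the thresholds reported above can be written directly as a small rule-based classifier; this is not the induced VC-DomLEM rule set (its certainty measures and conflict handling are omitted) and must not be used clinically.

```python
# Minimal sketch: the thresholds reported above expressed as simple if-then
# rules.  This only illustrates the rule format; the original VC-DomLEM rule
# set, its certainty measures and its conflict handling are not reproduced,
# and this code must not be used for clinical decisions.
def classify_meningitis(crp, csf_leukocytes, symptom_days, age_months):
    # Rules suggesting bacterial meningitis
    if crp >= 86:
        return "bacterial"
    if csf_leukocytes >= 4481:
        return "bacterial"
    if symptom_days <= 2:
        return "bacterial"
    if age_months < 1:
        return "bacterial"
    # Rules suggesting viral meningitis
    if crp <= 19:
        return "viral"
    if crp <= 84 and age_months > 11 and csf_leukocytes <= 1100:
        return "viral"
    return "undetermined"

print(classify_meningitis(crp=120, csf_leukocytes=500, symptom_days=5, age_months=36))
print(classify_meningitis(crp=15, csf_leukocytes=200, symptom_days=4, age_months=24))
```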

  7. Contact stiffness of regularly patterned multi-asperity interfaces

    NASA Astrophysics Data System (ADS)

    Li, Shen; Yao, Quanzhou; Li, Qunyang; Feng, Xi-Qiao; Gao, Huajian

    2018-02-01

    Contact stiffness is a fundamental mechanical index of solid surfaces and relevant to a wide range of applications. Although the correlation between contact stiffness, contact size and load has long been explored for single-asperity contacts, our understanding of the contact stiffness of rough interfaces is less clear. In this work, the contact stiffness of hexagonally patterned multi-asperity interfaces is studied using a discrete asperity model. We confirm that the elastic interaction among asperities is critical in determining the mechanical behavior of rough contact interfaces. More importantly, in contrast to the common wisdom that the interplay of asperities is solely dictated by the inter-asperity spacing, we show that the number of asperities in contact (or equivalently, the apparent size of contact) also plays an indispensable role. Based on the theoretical analysis, we propose a new parameter for gauging the closeness of asperities. Our theoretical model is validated by a set of experiments. To facilitate the application of the discrete asperity model, we present a general equation for contact stiffness estimation of regularly rough interfaces, which is further proved to be applicable for interfaces with single-scale random roughness.

  8. Comparative Evaluation of Conventional and Accelerated Castings on Marginal Fit and Surface Roughness

    PubMed Central

    Jadhav, Vivek Dattatray; Motwani, Bhagwan K.; Shinde, Jitendra; Adhapure, Prasad

    2017-01-01

    Aims: The aim of this study was to evaluate the marginal fit and surface roughness of complete cast crowns made by a conventional and an accelerated casting technique. Settings and Design: This study was divided into three parts. In Part I, the marginal fit of full metal crowns made by both casting techniques in the vertical direction was checked, in Part II, the fit of sectional metal crowns in the horizontal direction made by both casting techniques was checked, and in Part III, the surface roughness of disc-shaped metal plate specimens made by both casting techniques was checked. Materials and Methods: A conventional technique was compared with an accelerated technique. In Part I of the study, the marginal fit of the full metal crowns as well as in Part II, the horizontal fit of sectional metal crowns made by both casting techniques was determined, and in Part III, the surface roughness of castings made with the same techniques was compared. Statistical Analysis Used: The results of the t-test and independent sample test do not indicate statistically significant differences in the marginal discrepancy detected between the two casting techniques. Results: For the marginal discrepancy and surface roughness, crowns fabricated with the accelerated technique were significantly different from those fabricated with the conventional technique. Conclusions: Accelerated casting technique showed quite satisfactory results, but the conventional technique was superior in terms of marginal fit and surface roughness. PMID:29042726

  9. Roughness Based Crossflow Transition Control for a Swept Airfoil Design Relevant to Subsonic Transports

    NASA Technical Reports Server (NTRS)

    Li, Fei; Choudhari, Meelan M.; Carpenter, Mark H.; Malik, Mujeeb R.; Eppink, Jenna; Chang, Chau-Lyan; Streett, Craig L.

    2010-01-01

    A high fidelity transition prediction methodology has been applied to a swept airfoil design at a Mach number of 0.75 and chord Reynolds number of approximately 17 million, with the dual goal of an assessment of the design for the implementation and testing of roughness based crossflow transition control and continued maturation of such methodology in the context of realistic aerodynamic configurations. Roughness based transition control involves controlled seeding of suitable, subdominant crossflow modes in order to weaken the growth of naturally occurring, linearly more unstable instability modes via a nonlinear modification of the mean boundary layer profiles. Therefore, a synthesis of receptivity, linear and nonlinear growth of crossflow disturbances, and high-frequency secondary instabilities becomes desirable to model this form of control. Because experimental data is currently unavailable for passive crossflow transition control for such high Reynolds number configurations, a holistic computational approach is used to assess the feasibility of roughness based control methodology. Potential challenges inherent to this control application as well as associated difficulties in modeling this form of control in a computational setting are highlighted. At high Reynolds numbers, a broad spectrum of stationary crossflow disturbances amplify and, while it may be possible to control a specific target mode using Discrete Roughness Elements (DREs), nonlinear interaction between the control and target modes may yield strong amplification of the difference mode that could have an adverse impact on the transition delay using spanwise periodic roughness elements.

  10. Topology optimisation for natural convection problems

    NASA Astrophysics Data System (ADS)

    Alexandersen, Joe; Aage, Niels; Andreasen, Casper Schousboe; Sigmund, Ole

    2014-12-01

    This paper demonstrates the application of the density-based topology optimisation approach for the design of heat sinks and micropumps based on natural convection effects. The problems are modelled under the assumptions of steady-state laminar flow using the incompressible Navier-Stokes equations coupled to the convection-diffusion equation through the Boussinesq approximation. In order to facilitate topology optimisation, the Brinkman approach is taken to penalise velocities inside the solid domain and the effective thermal conductivity is interpolated in order to accommodate differences in thermal conductivity of the solid and fluid phases. The governing equations are discretised using stabilised finite elements and topology optimisation is performed for two different problems using discrete adjoint sensitivity analysis. The study shows that topology optimisation is a viable approach for designing heat sink geometries cooled by natural convection and micropumps powered by natural convection.
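
    The two interpolations that the density-based approach relies on can be sketched compactly: a Brinkman inverse permeability that penalises flow in solid regions, and an effective conductivity that blends the fluid and solid values. RAMP-style convex interpolations and the parameter values below are assumptions; the paper's exact choices may differ.

```python
# Minimal sketch of the two interpolations that make density-based topology
# optimisation of conjugate natural convection possible: a Brinkman inverse
# permeability alpha(rho) that penalises velocities inside solid regions, and
# an effective thermal conductivity k(rho) blending fluid and solid values.
# RAMP-style convex interpolations are assumed; the paper's exact functions
# and parameter values may differ.
import numpy as np

def brinkman_alpha(rho, alpha_max=1e5, q=8.0):
    """rho = 1 -> fluid (alpha ~ 0), rho = 0 -> solid (alpha = alpha_max)."""
    return alpha_max * (1.0 - rho) / (1.0 + q * rho)

def effective_conductivity(rho, k_fluid=0.6, k_solid=200.0, q=8.0):
    """Interpolates between solid (rho = 0) and fluid (rho = 1) conductivity."""
    return k_solid + (k_fluid - k_solid) * rho * (1.0 + q) / (1.0 + q * rho)

rho = np.linspace(0.0, 1.0, 5)
print(np.round(brinkman_alpha(rho), 2))
print(np.round(effective_conductivity(rho), 2))
```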

  11. A supportive architecture for CFD-based design optimisation

    NASA Astrophysics Data System (ADS)

    Li, Ni; Su, Zeya; Bi, Zhuming; Tian, Chao; Ren, Zhiming; Gong, Guanghong

    2014-03-01

    Multi-disciplinary design optimisation (MDO) is one of the critical methodologies for the implementation of enterprise systems (ES). MDO requiring the analysis of fluid dynamics raises a special challenge due to its extremely intensive computation. The rapid development of computational fluid dynamics (CFD) techniques has led to a rise in their applications in various fields. Especially for the exterior design of vehicles, CFD has become one of the three main design tools, comparable to analytical approaches and wind tunnel experiments. CFD-based design optimisation is an effective way to achieve the desired performance under the given constraints. However, due to the complexity of CFD, integrating CFD analysis into an intelligent optimisation algorithm is not straightforward. It is a challenge to solve a CFD-based design problem, which usually has high dimensionality and multiple objectives and constraints. It is therefore desirable to have an integrated architecture for CFD-based design optimisation. However, our review of existing works has found that very few researchers have studied assistive tools to facilitate CFD-based design optimisation. In this paper, a multi-layer architecture and a general procedure are proposed to integrate different CFD toolsets with intelligent optimisation algorithms, parallel computing and other techniques for efficient computation. In the proposed architecture, the integration is performed either at the code level or the data level to fully utilise the capabilities of different assistive tools. Two intelligent algorithms are developed and embedded with parallel computing. These algorithms, together with the supportive architecture, lay a solid foundation for various applications of CFD-based design optimisation. To illustrate the effectiveness of the proposed architecture and algorithms, case studies on the aerodynamic shape design of a hypersonic cruising vehicle are provided; the results show that the proposed architecture and the developed algorithms perform successfully and efficiently in dealing with design optimisation involving over 200 design variables.

  12. Optimisation of SOA-REAMs for hybrid DWDM-TDMA PON applications.

    PubMed

    Naughton, Alan; Antony, Cleitus; Ossieur, Peter; Porto, Stefano; Talli, Giuseppe; Townsend, Paul D

    2011-12-12

    We demonstrate how loss-optimised, gain-saturated SOA-REAM based reflective modulators can reduce the burst to burst power variations due to differential access loss in the upstream path in carrier distributed passive optical networks by 18 dB compared to fixed linear gain modulators. We also show that the loss optimised device has a high tolerance to input power variations and can operate in deep saturation with minimal patterning penalties. Finally, we demonstrate that an optimised device can operate across the C-Band and also over a transmission distance of 80 km. © 2011 Optical Society of America

  13. Warpage analysis on thin shell part using glowworm swarm optimisation (GSO)

    NASA Astrophysics Data System (ADS)

    Zulhasif, Z.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.

    2017-09-01

    The Autodesk Moldflow Insight (AMI) software was used in this study, which focuses on the analysis of the plastic injection moulding process, relating the input parameters to the output parameters. The material used in this study is Acrylonitrile Butadiene Styrene (ABS), moulded to produce the plastic part. MATLAB software was used to find the best parameter settings. The variables selected in this study were melt temperature, packing pressure, coolant temperature and cooling time.
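
    A compact sketch of the glowworm swarm optimisation update loop on a generic minimisation problem is given below; the Moldflow warpage model and the actual process-parameter ranges are not reproduced, and all algorithm constants are illustrative.

```python
# Minimal sketch of the glowworm swarm optimisation (GSO) update loop on a
# generic minimisation problem.  The Moldflow warpage model and the process
# parameters above (melt temperature, packing pressure, coolant temperature,
# cooling time) are not reproduced; all constants below are illustrative.
import numpy as np

def gso_minimise(f, dim=2, lower=-5.0, upper=5.0, agents=30, iters=200,
                 rho=0.4, gamma=0.6, beta=0.08, step=0.05, r_s=3.0, n_t=5,
                 seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lower, upper, (agents, dim))
    luciferin = np.full(agents, 5.0)
    r_d = np.full(agents, r_s)
    for _ in range(iters):
        # Luciferin update: brighter where the (negated) objective is better.
        values = np.array([f(xi) for xi in x])
        luciferin = (1 - rho) * luciferin + gamma * (-values)
        new_x = x.copy()
        for i in range(agents):
            d = np.linalg.norm(x - x[i], axis=1)
            nbrs = np.where((d < r_d[i]) & (luciferin > luciferin[i]))[0]
            if nbrs.size:
                w = luciferin[nbrs] - luciferin[i]
                j = rng.choice(nbrs, p=w / w.sum())
                direction = (x[j] - x[i]) / (np.linalg.norm(x[j] - x[i]) + 1e-12)
                new_x[i] = np.clip(x[i] + step * direction, lower, upper)
            # Neighbourhood range update aims to keep roughly n_t neighbours.
            r_d[i] = min(r_s, max(0.0, r_d[i] + beta * (n_t - nbrs.size)))
        x = new_x
    best = np.argmin([f(xi) for xi in x])
    return x[best], f(x[best])

sphere = lambda v: float(np.sum(np.asarray(v) ** 2))
print(gso_minimise(sphere))
```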

  14. Multi-Objectivising Combinatorial Optimisation Problems by Means of Elementary Landscape Decompositions.

    PubMed

    Ceberio, Josu; Calvo, Borja; Mendiburu, Alexander; Lozano, Jose A

    2018-02-15

    In the last decade, many works in combinatorial optimisation have shown that, due to the advances in multi-objective optimisation, the algorithms from this field could be used for solving single-objective problems as well. In this sense, a number of papers have proposed multi-objectivising single-objective problems in order to use multi-objective algorithms in their optimisation. In this article, we follow up this idea by presenting a methodology for multi-objectivising combinatorial optimisation problems based on elementary landscape decompositions of their objective function. Under this framework, each of the elementary landscapes obtained from the decomposition is considered as an independent objective function to optimise. In order to illustrate this general methodology, we consider four problems from different domains: the quadratic assignment problem and the linear ordering problem (permutation domain), the 0-1 unconstrained quadratic optimisation problem (binary domain), and the frequency assignment problem (integer domain). We implemented two widely known multi-objective algorithms, NSGA-II and SPEA2, and compared their performance with that of a single-objective GA. The experiments conducted on a large benchmark of instances of the four problems show that the multi-objective algorithms clearly outperform the single-objective approaches. Furthermore, a discussion on the results suggests that the multi-objective space generated by this decomposition enhances the exploration ability, thus permitting NSGA-II and SPEA2 to obtain better results in the majority of the tested instances.

  15. Abrasive wear of resin composites as related to finishing and polishing procedures.

    PubMed

    Turssi, Cecilia P; Ferracane, Jack L; Serra, Mônica C

    2005-07-01

    Finishing and polishing procedures may cause topographical changes and introduce subsurface microcracks in dental composite restoratives. Since both of these effects may contribute toward the kinetics of wear, the purpose of this study was to assess and correlate the wear and surface roughness of minifilled and nanofilled composites finished and polished by different methods. Specimens (n=10) made of a minifilled and a nanofilled composite were finished and polished with one of the four sequences: (1) tungsten carbide burs plus Al(2)O(3)-impregnated brush (CbBr) or (2) tungsten carbide burs plus diamond-impregnated cup (CbCp), (3) diamond burs plus brush (DmBr) or (4) diamond burs plus cup (DmCp). As a control, abrasive papers were used. After surface roughness had been quantified, three-body abrasion was simulated using the OHSU wear machine. The wear facets were then scanned to measure wear depth and post-testing roughness. All sets of data were subjected to ANOVA and Tukey's tests (alpha=0.05). Pearson's correlation test was applied to check for the existence of a relationship between pre-testing roughness and wear. Significantly smoother surfaces were attained with the sequences CbBr and CbCp, whereas DmCp yielded the roughest surface. Regardless of the finishing/polishing technique, the nanofilled composite exhibited the lowest pre-testing roughness and wear. There was no correlation between the surface roughness achieved after finishing/polishing procedures and wear (p=0.3899). Nano-sized materials may have improved abrasive wear resistance over minifilled composites. The absence of correlation between wear and surface roughness produced by different finishing/polishing methods suggests that the latter negligibly influences material loss due to three-body abrasion.

  16. Optimising ICT Effectiveness in Instruction and Learning: Multilevel Transformation Theory and a Pilot Project in Secondary Education

    ERIC Educational Resources Information Center

    Mooij, Ton

    2004-01-01

    Specific combinations of educational and ICT conditions including computer use may optimise learning processes, particularly for learners at risk. This position paper asks which curricular, instructional, and ICT characteristics can be expected to optimise learning processes and outcomes, and how to best achieve this optimization. A theoretical…

  17. Effect of artificial toothbrushing and water storage on the surface roughness and micromechanical properties of tooth-colored CAD-CAM materials.

    PubMed

    Flury, Simon; Diebold, Elisabeth; Peutzfeldt, Anne; Lussi, Adrian

    2017-06-01

    Because of the different composition of resin-ceramic computer-aided design and computer-aided manufacturing (CAD-CAM) materials, their polishability and their micromechanical properties vary. Moreover, depending on the composition of the materials, their surface roughness and micromechanical properties are likely to change with time. The purpose of this in vitro study was to investigate the effect of artificial toothbrushing and water storage on the surface roughness (Ra and Rz) and the micromechanical properties, surface hardness (Vickers [VHN]) and indentation modulus (EIT), of 5 different tooth-colored CAD-CAM materials when polished with 2 different polishing systems. Specimens (n=40 per material) were cut from a composite resin (Paradigm MZ100; 3M ESPE), a feldspathic ceramic (Vitablocs Mark II; Vita Zahnfabrik), a resin nanoceramic (Lava Ultimate; 3M ESPE), a hybrid dental ceramic (Vita Enamic; Vita Zahnfabrik), and a nanocomposite resin (Ambarino High-Class; Creamed). All specimens were roughened in a standardized manner and polished either with Sof-Lex XT discs or the Vita Polishing Set Clinical. Surface roughness, VHN, and EIT were measured after polishing and after storage for 6 months (tap water, 37°C) with periodic, artificial toothbrushing. The surface roughness, VHN, and EIT results were analyzed with a nonparametric ANOVA followed by Kruskal-Wallis and exact Wilcoxon rank sum tests (α=.05). Irrespective of polishing system and of artificial toothbrushing and storage, Lava Ultimate generally showed the lowest surface roughness and Vitablocs Mark II the highest. As regards micromechanical properties, the following ranking of the CAD-CAM materials was found (from highest VHN/EIT to lowest VHN/EIT): Vitablocs Mark II > Vita Enamic > Paradigm MZ100 > Lava Ultimate > Ambarino High-Class. Irrespective of material and of artificial toothbrushing and storage, polishing with Sof-Lex XT discs resulted in lower surface roughness than the Vita Polishing Set Clinical (P≤.016). However, the polishing system generally had no influence on the micromechanical properties (P>.05). The effect of artificial toothbrushing and storage on surface roughness depended on the material and the polishing system: Ambarino High-Class was most sensitive to storage, Lava Ultimate and Vita Enamic were least sensitive. Artificial toothbrushing and storage generally resulted in a decrease in VHN and EIT for Paradigm MZ100, Lava Ultimate, and Ambarino High-Class but not for Vita Enamic and Vitablocs Mark II. Tooth-colored CAD-CAM materials with lower VHN and EIT generally showed better polishability. However, these materials were more prone to degradation by artificial toothbrushing and water storage than materials with higher VHN and EIT. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  18. Effect of repeated sterilization by different methods on strength of carbon fiber rods used in external fixator systems.

    PubMed

    Unal, Omer Kays; Poyanli, Oguz Sukru; Unal, Ulku Sur; Mutlu, Hasan Huseyin; Ozkut, Afsar Timucin; Esenkaya, Irfan

    2018-05-16

    We set out to reveal the effects of repeated sterilization, using different methods, on the carbon fiber rods of external fixator systems. We used forty-four unused, unsterilized, and identical carbon fiber rods (11 × 200 mm), randomly assigned to two groups: unsterilized (US) (4 rods) and sterilized (40 rods). The sterilized rods were divided into two groups, those sterilized in an autoclave (AC) and by hydrogen peroxide (HP). These were further divided into five subgroups based on the number of sterilization repetitions to which the rods were subjected (25, 50, 75, 100 and 200). A bending test was conducted to measure the maximum bending force (MBF), maximum deflection (MD), flexural strength (FS), maximum bending moment (MBM) and bending rigidity (BR). We also measured the surface roughness of the rods. An increase in the number of sterilization repetitions led to a decrease in MBF, MBM, FS and BR, but increased MD and surface roughness (p < 0.01). The effect of the number of sterilization repetitions was more prominent in the HP group. This study revealed that the sterilization method and the number of sterilization repetitions influence the strength of the carbon fiber rods. Increasing the number of sterilization repetitions degrades the strength and increases the surface roughness of the rods.

  19. Perspectives on condom breakage: a qualitative study of female sex workers in Bangalore, India.

    PubMed

    Gurav, Kaveri; Bradley, Janet; Chandrashekhar Gowda, G; Alary, Michel

    2014-01-01

    A qualitative study was conducted to obtain a detailed understanding of two key determinants of condom breakage - 'rough sex' and poor condom fit - identified in a recent telephone survey of female sex workers, in Bangalore, India. Transcripts from six focus-group discussions involving 35 female sex workers who reported condom breakage during the telephone survey were analysed. Rough sex in different forms, from over-exuberance to violence, was often described by sex workers as a result of clients' inebriation and use of sexual stimulants, which, they report, cause tumescence, excessive thrusting and sex that lasts longer than usual, thereby increasing the risk of condom breakage. Condom breakage in this setting is the result of a complex set of social situations involving client behaviours and power dynamics that has the potential to put the health and personal lives of sex workers at risk. These findings and their implications for programme development are discussed.

  20. [Study on the optimization of monitoring indicators of drinking water quality during health supervision].

    PubMed

    Ye, Bixiong; E, Xueli; Zhang, Lan

    2015-01-01

    To optimize the non-regular drinking water quality indices (except Giardia and Cryptosporidium) of urban drinking water. Several methods were applied, including the rate at which water quality indicators exceeded the standard, the risk of exceeding the standard, the frequency of concentrations below the detection limit, a comprehensive water quality index evaluation, and the attribute reduction algorithm of rough set theory; redundant water quality indicators were eliminated and the control factors that play a leading role in drinking water safety were identified. The optimization showed that, of 62 non-regular water quality monitoring indicators for urban drinking water, 42 could be removed by combining comprehensive evaluation with rough set attribute reduction. Optimizing the monitoring indicators, and reducing their number and the monitoring frequency, can ensure the safety of drinking water quality while lowering monitoring costs and reducing the monitoring burden on the sanitation supervision departments.
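
    As an illustration only (the indicator names, the data and the exact reduction algorithm used in the study are not reproduced here), the sketch below shows the general idea of rough-set attribute reduction: condition attributes are dropped greedily as long as the remaining ones still discern every pair of samples that have different decisions.

        def inconsistent(rows, attrs, decision):
            """True if two rows agree on attrs but differ on the decision."""
            seen = {}
            for r in rows:
                key = tuple(r[a] for a in attrs)
                if key in seen and seen[key] != r[decision]:
                    return True
                seen.setdefault(key, r[decision])
            return False

        def greedy_reduct(rows, attrs, decision):
            """Drop attributes one by one while the decision stays consistent."""
            reduct = list(attrs)
            for a in attrs:
                trial = [x for x in reduct if x != a]
                if trial and not inconsistent(rows, trial, decision):
                    reduct = trial
            return reduct

        # Hypothetical discretised monitoring records: each row is one water sample.
        rows = [
            {"turbidity": "low",  "nitrate": "low",  "arsenic": "low",  "safe": "yes"},
            {"turbidity": "low",  "nitrate": "high", "arsenic": "low",  "safe": "no"},
            {"turbidity": "high", "nitrate": "low",  "arsenic": "low",  "safe": "yes"},
            {"turbidity": "high", "nitrate": "high", "arsenic": "high", "safe": "no"},
        ]
        print(greedy_reduct(rows, ["turbidity", "nitrate", "arsenic"], "safe"))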

  1. Preference Mining Using Neighborhood Rough Set Model on Two Universes.

    PubMed

    Zeng, Kai

    2016-01-01

    Preference mining plays an important role in e-commerce and video websites for enhancing user satisfaction and loyalty. Some classical methods are not available for the cold-start problem when the user or the item is new. In this paper, we propose a new model, called parametric neighborhood rough set on two universes (NRSTU), to describe the user and item data structures. Furthermore, the neighborhood lower approximation operator is used for defining the preference rules. Then, we provide the means for recommending items to users by using these rules. Finally, we give an experimental example to show the details of NRSTU-based preference mining for cold-start problem. The parameters of the model are also discussed. The experimental results show that the proposed method presents an effective solution for preference mining. In particular, NRSTU improves the recommendation accuracy by about 19% compared to the traditional method.
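
    The following sketch, with invented users, items and a hypothetical distance threshold delta, conveys the flavour of a neighbourhood-rough-set lower approximation over two universes (users and items): an item is kept only if every user in the target user's feature-space neighbourhood liked it. It is not the NRSTU model itself, just a minimal illustration of the lower-approximation rule.

        import numpy as np

        def neighborhood(x_idx, X, delta):
            """Indices of users whose feature vectors lie within distance delta of user x_idx."""
            d = np.linalg.norm(X - X[x_idx], axis=1)
            return np.where(d <= delta)[0]

        def lower_approximation_items(x_idx, X, likes, delta):
            """Items liked by every user in the neighbourhood (a lower-approximation-style rule)."""
            nbrs = neighborhood(x_idx, X, delta)
            return [j for j in range(likes.shape[1]) if likes[nbrs, j].all()]

        # Hypothetical data: 5 users x 3 demographic features and 5 users x 4 items (1 = liked).
        X = np.array([[0.10, 0.20, 0.30],
                      [0.10, 0.25, 0.28],
                      [0.90, 0.80, 0.70],
                      [0.12, 0.22, 0.31],
                      [0.85, 0.75, 0.72]])
        likes = np.array([[1, 0, 1, 1],
                          [1, 0, 1, 0],
                          [0, 1, 0, 1],
                          [1, 0, 1, 1],
                          [0, 1, 1, 1]])
        # Recommend to user 3 only the items on which every close neighbour agrees.
        print(lower_approximation_items(3, X, likes, delta=0.1))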

  2. Research on classified real-time flood forecasting framework based on K-means cluster and rough set.

    PubMed

    Xu, Wei; Peng, Yong

    2015-01-01

    This research presents a new classified real-time flood forecasting framework. In this framework, historical floods are classified by a K-means cluster according to the spatial and temporal distribution of precipitation, the time variance of precipitation intensity and other hydrological factors. Based on the classified results, a rough set is used to extract the identification rules for real-time flood forecasting. Then, the parameters of different categories within the conceptual hydrological model are calibrated using a genetic algorithm. In real-time forecasting, the corresponding category of parameters is selected for flood forecasting according to the obtained flood information. This research tests the new classified framework on Guanyinge Reservoir and compares the framework with the traditional flood forecasting method. It finds that the performance of the new classified framework is significantly better in terms of accuracy. Furthermore, the framework can be considered in a catchment with fewer historical floods.
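
    A minimal sketch of the classification step, assuming scikit-learn is available and using invented flood descriptors: historical floods are clustered with K-means, and an incoming event is routed to the parameter set calibrated for its category. The rough-set rule extraction and the genetic-algorithm calibration used in the paper are replaced here by a nearest-centroid lookup and placeholder parameters for brevity.

        import numpy as np
        from sklearn.cluster import KMeans

        # Hypothetical features per historical flood:
        # [total rainfall (mm), peak intensity (mm/h), duration (h)]
        floods = np.array([[120, 30, 10], [130, 35, 12], [60, 10, 20],
                           [55, 12, 18], [200, 50, 8], [210, 55, 9]])
        model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(floods)

        # One calibrated parameter set per flood category (values are placeholders).
        params_by_class = {0: {"CN": 70}, 1: {"CN": 80}, 2: {"CN": 90}}

        # In real-time forecasting, route the incoming event to its category's parameters.
        incoming = np.array([[125, 33, 11]])
        category = int(model.predict(incoming)[0])
        print("use parameters:", params_by_class[category])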

  3. Processing and filtrating of driver fatigue characteristic parameters based on rough set

    NASA Astrophysics Data System (ADS)

    Ye, Wenwu; Zhao, Xuyang

    2018-05-01

    With rapid economic development, cars have become a common means of daily transportation, yet the problem of traffic safety is becoming more and more serious, and fatigue driving is one of the main causes of traffic accidents. It is therefore of great importance to study the detection of fatigue driving in order to improve traffic safety. In determining whether the driver is fatigued, characteristic quantities related to the steering-wheel angle and characteristic quantities of the driver's pulse are important indicators. Fuzzy c-means clustering is used to discretize these indices. Because the characteristic parameters are numerous and heterogeneous, a rough set is used to filter them. Finally, this paper identifies the characteristics most highly correlated with fatigue driving. It is shown that the selected characteristics are of great significance for the evaluation of fatigue driving.
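
    The discretisation step can be pictured with a minimal one-dimensional fuzzy c-means written from scratch and run on invented steering-angle statistics; each value is then assigned the level with the highest membership. This is only a sketch of the general technique, not the authors' implementation.

        import numpy as np

        def fuzzy_cmeans_1d(x, c=3, m=2.0, iters=100, seed=0):
            """Minimal 1-D fuzzy c-means; returns cluster centres and memberships."""
            rng = np.random.default_rng(seed)
            u = rng.random((len(x), c))
            u /= u.sum(axis=1, keepdims=True)
            for _ in range(iters):
                um = u ** m
                centres = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
                d = np.abs(x[:, None] - centres[None, :]) + 1e-12
                p = 2.0 / (m - 1.0)
                u = 1.0 / (d ** p * (1.0 / d ** p).sum(axis=1, keepdims=True))
            return centres, u

        # Hypothetical steering-wheel-angle standard deviations over successive time windows.
        feature = np.array([0.5, 0.6, 0.55, 2.0, 2.2, 2.1, 4.0, 4.3, 3.9])
        centres, u = fuzzy_cmeans_1d(feature)
        levels = u.argmax(axis=1)   # hard label = discretised level of the index
        print(np.round(centres, 2), levels)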

  4. High-frequency Born synthetic seismograms based on coupled normal modes

    USGS Publications Warehouse

    Pollitz, Fred F.

    2011-01-01

    High-frequency and full waveform synthetic seismograms on a 3-D laterally heterogeneous earth model are simulated using the theory of coupled normal modes. The set of coupled integral equations that describe the 3-D response are simplified into a set of uncoupled integral equations by using the Born approximation to calculate scattered wavefields and the pure-path approximation to modulate the phase of incident and scattered wavefields. This depends upon a decomposition of the aspherical structure into smooth and rough components. The uncoupled integral equations are discretized and solved in the frequency domain, and time domain results are obtained by inverse Fourier transform. Examples show the utility of the normal mode approach to synthesize the seismic wavefields resulting from interaction with a combination of rough and smooth structural heterogeneities. This approach is applied to an ∼4 Hz shallow crustal wave propagation around the site of the San Andreas Fault Observatory at Depth (SAFOD).

  5. Semi-active suspension for automotive application

    NASA Astrophysics Data System (ADS)

    Venhovens, Paul J. T.; Devlugt, Alex R.

    Theoretical considerations for evaluating semi-active damping systems, covering semi-active suspension control and Kalman filtering, are discussed together with the supporting software. Some prototype hardware developments are proposed. A significant improvement in ride comfort performance can be obtained, indicated by root mean square body acceleration values and frequency responses, using a switchable damper system with two settings. Nevertheless, the improvement is accompanied by an increase in dynamic tire load variations. The main benefit of semi-active suspensions is the potential of changing the low frequency section of the transfer function. In practice this will support the impression of extra driving stability. It is advisable to apply an adaptive control strategy, like the (extended) skyhook version, switching more to the 'comfort' setting for straight (and smooth/moderate roughness) road running and switching to 'road holding' for handling maneuvers and possibly rough roads and discrete, severe events like potholes.
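
    A minimal sketch of the two-setting switchable damper logic referred to above, i.e. the classic two-state skyhook switching rule; the damping coefficients and the sign convention are illustrative assumptions, not values from the paper.

        def skyhook_two_state(v_body, v_rel, c_soft=800.0, c_firm=2500.0):
            """Two-state skyhook law for a semi-active damper.

            v_body : absolute vertical velocity of the sprung mass [m/s]
            v_rel  : damper relative velocity, body minus wheel [m/s]
            Returns the damper force [N]; coefficients are placeholders.
            """
            c = c_firm if v_body * v_rel > 0.0 else c_soft
            return -c * v_rel

        # Body moving up while the damper extends: switch to the firm setting.
        print(skyhook_two_state(v_body=0.3, v_rel=0.2))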

  6. Reliability of clinical impact grading by healthcare professionals of common prescribing error and optimisation cases in critical care patients.

    PubMed

    Bourne, Richard S; Shulman, Rob; Tomlin, Mark; Borthwick, Mark; Berry, Will; Mills, Gary H

    2017-04-01

    To identify between and within profession-rater reliability of clinical impact grading for common critical care prescribing error and optimisation cases. To identify representative clinical impact grades for each individual case. Electronic questionnaire. 5 UK NHS Trusts. 30 Critical care healthcare professionals (doctors, pharmacists and nurses). Participants graded severity of clinical impact (5-point categorical scale) of 50 error and 55 optimisation cases. Case between and within profession-rater reliability and modal clinical impact grading. Between and within profession rater reliability analysis used linear mixed model and intraclass correlation, respectively. The majority of error and optimisation cases (both 76%) had a modal clinical severity grade of moderate or higher. Error cases: doctors graded clinical impact significantly lower than pharmacists (-0.25; P < 0.001) and nurses (-0.53; P < 0.001), with nurses significantly higher than pharmacists (0.28; P < 0.001). Optimisation cases: doctors graded clinical impact significantly lower than nurses and pharmacists (-0.39 and -0.5; P < 0.001, respectively). Within profession reliability grading was excellent for pharmacists (0.88 and 0.89; P < 0.001) and doctors (0.79 and 0.83; P < 0.001) but only fair to good for nurses (0.43 and 0.74; P < 0.001), for optimisation and error cases, respectively. Representative clinical impact grades for over 100 common prescribing error and optimisation cases are reported for potential clinical practice and research application. The between professional variability highlights the importance of multidisciplinary perspectives in assessment of medication error and optimisation cases in clinical practice and research. © The Author 2017. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  7. Optimisation of lateral car dynamics taking into account parameter uncertainties

    NASA Astrophysics Data System (ADS)

    Busch, Jochen; Bestle, Dieter

    2014-02-01

    Simulation studies on an active all-wheel-steering car show that disturbances of vehicle parameters have a high influence on lateral car dynamics. This motivates the need for a design that is robust against such parameter uncertainties. A specific parametrisation is established combining deterministic, velocity-dependent steering control parameters with partly uncertain, velocity-independent vehicle parameters for simultaneous use in a numerical optimisation process. Model-based objectives are formulated and summarised in a multi-objective optimisation problem where especially the lateral steady-state behaviour is improved by an adaptation strategy based on measurable uncertainties. The normally distributed uncertainties are generated by optimal Latin hypercube sampling, and a response-surface-based strategy helps to cut down time-consuming model evaluations, which makes it possible to use a genetic optimisation algorithm. Optimisation results are discussed in different criterion spaces and the achieved improvements confirm the validity of the proposed procedure.
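
    A small sketch of how normally distributed parameter uncertainties might be generated by Latin hypercube sampling, assuming SciPy's qmc module is available; the parameter names, means and standard deviations are invented, and plain (not optimal) LHS is used.

        import numpy as np
        from scipy.stats import norm, qmc

        # Hypothetical uncertain vehicle parameters: mass [kg] and rear cornering stiffness [N/rad].
        means = np.array([1500.0, 60000.0])
        stds = np.array([75.0, 4000.0])

        sampler = qmc.LatinHypercube(d=2, seed=1)
        unit = sampler.random(n=50)                      # stratified samples in the unit square
        samples = norm.ppf(unit, loc=means, scale=stds)  # map to normal marginals

        print(samples.mean(axis=0), samples.std(axis=0))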

  8. Distributed optimisation problem with communication delay and external disturbance

    NASA Astrophysics Data System (ADS)

    Tran, Ngoc-Tu; Xiao, Jiang-Wen; Wang, Yan-Wu; Yang, Wu

    2017-12-01

    This paper investigates the distributed optimisation problem for multi-agent systems (MASs) in the simultaneous presence of external disturbance and communication delay. To solve this problem, a two-step design scheme is introduced. In the first step, based on the internal model principle, an internal model term is constructed to compensate for the disturbance asymptotically. In the second step, a distributed optimisation algorithm is designed for the MASs under the simultaneous presence of disturbance and communication delay. Moreover, in the proposed algorithm, each agent interacts with its neighbours through the connected topology and the delay occurs during the information exchange. By utilising a Lyapunov-Krasovskii functional, delay-dependent conditions are derived for both slowly and fast time-varying delays to ensure the convergence of the algorithm to the optimal solution of the optimisation problem. Several numerical simulation examples are provided to illustrate the effectiveness of the theoretical results.

  9. An effective pseudospectral method for constraint dynamic optimisation problems with characteristic times

    NASA Astrophysics Data System (ADS)

    Xiao, Long; Liu, Xinggao; Ma, Liang; Zhang, Zeyin

    2018-03-01

    Dynamic optimisation problems with characteristic times, which arise widely in many areas, are one of the frontiers and hotspots of dynamic optimisation research. This paper considers a class of dynamic optimisation problems with constraints that depend on interior points, either fixed or variable, and presents a novel direct pseudospectral method using Legendre-Gauss (LG) collocation points for solving these problems. The formula for the state at the terminal time of each subdomain is derived, which results in a linear combination of the state at the LG points in the subdomains so as to avoid complex nonlinear integration. The sensitivities of the state at the collocation points with respect to the variable characteristic times are derived to improve the efficiency of the method. Three well-known characteristic time dynamic optimisation problems are solved and compared in detail with the methods reported in the literature. The results show the effectiveness of the proposed method.

  10. Medicines optimisation: priorities and challenges.

    PubMed

    Kaufman, Gerri

    2016-03-23

    Medicines optimisation is promoted in a guideline published in 2015 by the National Institute for Health and Care Excellence. Four guiding principles underpin medicines optimisation: aim to understand the patient's experience; ensure evidence-based choice of medicines; ensure medicines use is as safe as possible; and make medicines optimisation part of routine practice. Understanding the patient experience is important to improve adherence to medication regimens. This involves communication, shared decision making and respect for patient preferences. Evidence-based choice of medicines is important for clinical and cost effectiveness. Systems and processes for the reporting of medicines-related safety incidents have to be improved if medicines use is to be as safe as possible. Ensuring safe practice in medicines use when patients are transferred between organisations, and managing the complexities of polypharmacy are imperative. A medicines use review can help to ensure that medicines optimisation forms part of routine practice.

  11. Statistical methods for convergence detection of multi-objective evolutionary algorithms.

    PubMed

    Trautmann, H; Wagner, T; Naujoks, B; Preuss, M; Mehnen, J

    2009-01-01

    In this paper, two approaches for estimating the generation in which a multi-objective evolutionary algorithm (MOEA) shows statistically significant signs of convergence are introduced. A set-based perspective is taken where convergence is measured by performance indicators. The proposed techniques fulfill the requirements of proper statistical assessment on the one hand and efficient optimisation for real-world problems on the other hand. The first approach accounts for the stochastic nature of the MOEA by repeating the optimisation runs for increasing generation numbers and analysing the performance indicators using statistical tools. This technique results in a very robust offline procedure. Moreover, an online convergence detection method is introduced as well. This method automatically stops the MOEA when either the variance of the performance indicators falls below a specified threshold or a stagnation of their overall trend is detected. Both methods are analysed and compared for two MOEAs and on different classes of benchmark functions. It is shown that the methods successfully operate on all stated problems, needing fewer function evaluations while preserving good approximation quality at the same time.
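
    A minimal sketch of an online stopping rule of the kind described: the run is stopped when the variance of a performance indicator (for example, hypervolume) over a recent window falls below a threshold or when its overall trend stagnates. Window length, thresholds and indicator values are illustrative, not the paper's settings.

        import numpy as np

        def should_stop(indicator_history, window=10, var_tol=1e-6, trend_tol=1e-4):
            """Stop when the indicator's variance or trend over the last `window` generations is small."""
            if len(indicator_history) < window:
                return False
            recent = np.asarray(indicator_history[-window:])
            slope = np.polyfit(np.arange(window), recent, 1)[0]   # overall trend
            return recent.var() < var_tol or abs(slope) < trend_tol

        # Example: hypervolume values that level off after early improvement.
        hv = [0.20, 0.45, 0.60, 0.70, 0.75, 0.776, 0.776, 0.7761, 0.7761, 0.7761,
              0.7762, 0.7762, 0.7762, 0.7762, 0.7762]
        print(should_stop(hv))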

  12. Optimising rigid motion compensation for small animal brain PET imaging

    NASA Astrophysics Data System (ADS)

    Spangler-Bickell, Matthew G.; Zhou, Lin; Kyme, Andre Z.; De Laat, Bart; Fulton, Roger R.; Nuyts, Johan

    2016-10-01

    Motion compensation (MC) in PET brain imaging of awake small animals is attracting increased attention in preclinical studies since it avoids the confounding effects of anaesthesia and enables behavioural tests during the scan. A popular MC technique is to use multiple external cameras to track the motion of the animal’s head, which is assumed to be represented by the motion of a marker attached to its forehead. In this study we have explored several methods to improve the experimental setup and the reconstruction procedures of this method: optimising the camera-marker separation; improving the temporal synchronisation between the motion tracker measurements and the list-mode stream; post-acquisition smoothing and interpolation of the motion data; and list-mode reconstruction with appropriately selected subsets. These techniques have been tested and verified on measurements of a moving resolution phantom and brain scans of an awake rat. The proposed techniques improved the reconstructed spatial resolution of the phantom by 27% and of the rat brain by 14%. We suggest a set of optimal parameter values to use for awake animal PET studies and discuss the relative significance of each parameter choice.

  13. How to assess vision.

    PubMed

    Marsden, Janet

    2016-09-21

    Rationale and key points: An objective assessment of the patient's vision is important to assess variation from 'normal' vision in acute and community settings, to establish a baseline before examination and treatment in the emergency department, and to assess any changes during ophthalmic outpatient appointments. » Vision is one of the essential senses that permits people to make sense of the world. » Visual assessment does not only involve measuring central visual acuity; it also involves assessing the consequences of reduced vision. » Assessment of vision in children is crucial to identify issues that might affect vision and visual development, and to optimise lifelong vision. » Untreatable loss of vision is not an inevitable consequence of ageing. » Timely and repeated assessment of vision over life can reduce the incidence of falls, prevent injury and optimise independence. Reflective activity: 'How to' articles can help update your practice and ensure it remains evidence based. Apply this article to your practice. Reflect on and write a short account of: 1. How this article might change your practice when assessing people holistically. 2. How you could use this article to educate your colleagues in the assessment of vision.

  14. Medication management policy, practice and research in Australian residential aged care: Current and future directions.

    PubMed

    Sluggett, Janet K; Ilomäki, Jenni; Seaman, Karla L; Corlis, Megan; Bell, J Simon

    2017-02-01

    Eight percent of Australians aged 65 years and over receive residential aged care each year. Residents are increasingly older, frailer and have complex care needs on entry to residential aged care. Up to 63% of Australian residents of aged care facilities take nine or more medications regularly. Together, these factors place residents at high risk of adverse drug events. This paper reviews medication-related policies, practices and research in Australian residential aged care. Complex processes underpin prescribing, supply and administration of medications in aged care facilities. A broad range of policies and resources are available to assist health professionals, aged care facilities and residents to optimise medication management. These include national guiding principles, a standardised national medication chart, clinical medication reviews and facility accreditation standards. Recent Australian interventions have improved medication use in residential aged care facilities. Generating evidence for prescribing and deprescribing that is specific to residential aged care, health workforce reform, medication-related quality indicators and inter-professional education in aged care are important steps toward optimising medication use in this setting. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Relative electronic and free energies of octane's unique conformations

    NASA Astrophysics Data System (ADS)

    Kirschner, Karl N.; Heiden, Wolfgang; Reith, Dirk

    2017-06-01

    This study reports the geometries and electronic energies of n-octane's unique conformations using perturbation methods that best mimic CCSD(T) results. In total, the fully optimised minima of n-butane (2 conformations), n-pentane (4 conformations), n-hexane (12 conformations) and n-octane (96 conformations) were investigated at several different theory levels and basis sets. We find that DF-MP2.5/aug-cc-pVTZ is in very good agreement with the more expensive CCSD(T) results. At this level, we can clearly confirm the 96 stable minima which were previously found using a reparameterised density functional theory (DFT). Excellent agreement was found between their DFT results and our DF-MP2.5 perturbation results. Subsequent Gibbs free energy calculations, using scaled MP2/aug-cc-pVTZ zero-point vibrational energy and frequencies, indicate a significant temperature dependency of the relative energies, with a change in the predicted global minimum. The results of this work will be important for future computational investigations of fuel-related octane reactions and for optimisation of molecular force fields (e.g. lipids).

  16. Sensor selection cost optimisation for tracking structurally cyclic systems: a P-order solution

    NASA Astrophysics Data System (ADS)

    Doostmohammadian, M.; Zarrabi, H.; Rabiee, H. R.

    2017-08-01

    Measurements and sensing implementations impose a certain cost in sensor networks. Sensor selection cost optimisation is the problem of minimising the sensing cost of monitoring a physical (or cyber-physical) system. Consider a given set of sensors tracking the states of a dynamical system for estimation purposes. For each sensor, assume different costs to measure different (realisable) states. The idea is to assign sensors to measure states such that the global cost is minimised. The number and selection of sensor measurements must ensure observability, so that the dynamic state of the system can be tracked with bounded estimation error. The main question we address is how to select the state measurements to minimise the cost while satisfying the observability conditions. Relaxing the observability condition for structurally cyclic systems, the main contribution is a graph-theoretic approach that solves the problem in polynomial time. Note that polynomial-time algorithms are suitable for large-scale systems as their running time is upper-bounded by a polynomial expression in the size of the algorithm's input. We frame the problem as a linear sum assignment with solution complexity of ?.
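
    Setting the observability structure aside, the assignment core of the problem can be sketched with SciPy's linear_sum_assignment on an invented sensing-cost matrix; this is a toy stand-in, not the graph-theoretic P-order solution developed in the paper.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        # Hypothetical cost matrix: cost[i, j] = cost of sensor i measuring state j.
        cost = np.array([[4.0, 1.0, 3.0],
                         [2.0, 0.5, 5.0],
                         [3.0, 2.0, 2.0]])

        sensors, states = linear_sum_assignment(cost)   # minimum-cost sensor-to-state assignment
        pairs = list(zip(sensors.tolist(), states.tolist()))
        print(pairs, "total cost:", cost[sensors, states].sum())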

  17. Experimental test of an online ion-optics optimizer

    NASA Astrophysics Data System (ADS)

    Amthor, A. M.; Schillaci, Z. M.; Morrissey, D. J.; Portillo, M.; Schwarz, S.; Steiner, M.; Sumithrarachchi, Ch.

    2018-07-01

    A technique has been developed and tested to automatically adjust multiple electrostatic or magnetic multipoles on an ion optical beam line - according to a defined optimization algorithm - until an optimal tune is found. This approach simplifies the process of determining high-performance optical tunes, satisfying a given set of optical properties, for an ion optical system. The optimization approach is based on the particle swarm method and is entirely model independent, thus the success of the optimization does not depend on the accuracy of an extant ion optical model of the system to be optimized. Initial test runs of a first order optimization of a low-energy (<60 keV) all-electrostatic beamline at the NSCL show reliable convergence of nine quadrupole degrees of freedom to well-performing tunes within a reasonable number of trial solutions, roughly 500, with full beam optimization run times of roughly two hours. Improved tunes were found both for quasi-local optimizations and for quasi-global optimizations, indicating a good ability of the optimizer to find a solution with or without a well defined set of initial multipole settings.
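
    A minimal, model-independent particle swarm optimiser in the spirit of the online tuner described above; in the real setup the objective would be a measured beam-quality figure returned after setting the nine quadrupoles, whereas here a toy quadratic stands in for it.

        import numpy as np

        def pso(objective, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
            """Minimal particle swarm minimiser over box bounds."""
            rng = np.random.default_rng(seed)
            lo, hi = bounds
            x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
            v = np.zeros_like(x)
            pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
            gbest = pbest[pbest_f.argmin()].copy()
            for _ in range(iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                f = np.array([objective(p) for p in x])
                better = f < pbest_f
                pbest[better], pbest_f[better] = x[better], f[better]
                gbest = pbest[pbest_f.argmin()].copy()
            return gbest, pbest_f.min()

        # Stand-in objective: distance of nine "quadrupole settings" from an arbitrary target tune.
        target = np.linspace(-1.0, 1.0, 9)
        objective = lambda q: float(((q - target) ** 2).sum())
        best, best_f = pso(objective, (np.full(9, -2.0), np.full(9, 2.0)))
        print(np.round(best, 2), round(best_f, 6))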

  18. Rough Sets and Stomped Normal Distribution for Simultaneous Segmentation and Bias Field Correction in Brain MR Images.

    PubMed

    Banerjee, Abhirup; Maji, Pradipta

    2015-12-01

    The segmentation of brain MR images into different tissue classes is an important task for automatic image analysis technique, particularly due to the presence of intensity inhomogeneity artifact in MR images. In this regard, this paper presents a novel approach for simultaneous segmentation and bias field correction in brain MR images. It integrates judiciously the concept of rough sets and the merit of a novel probability distribution, called stomped normal (SN) distribution. The intensity distribution of a tissue class is represented by SN distribution, where each tissue class consists of a crisp lower approximation and a probabilistic boundary region. The intensity distribution of brain MR image is modeled as a mixture of finite number of SN distributions and one uniform distribution. The proposed method incorporates both the expectation-maximization and hidden Markov random field frameworks to provide an accurate and robust segmentation. The performance of the proposed approach, along with a comparison with related methods, is demonstrated on a set of synthetic and real brain MR images for different bias fields and noise levels.

  19. A broad scope knowledge based model for optimization of VMAT in esophageal cancer: validation and assessment of plan quality among different treatment centers.

    PubMed

    Fogliata, Antonella; Nicolini, Giorgia; Clivio, Alessandro; Vanetti, Eugenio; Laksar, Sarbani; Tozzi, Angelo; Scorsetti, Marta; Cozzi, Luca

    2015-10-31

    To evaluate the performance of a broad-scope, model-based optimisation process for volumetric modulated arc therapy applied to esophageal cancer. A set of 70 previously treated patients from two different institutions was selected to train a model for the prediction of dose-volume constraints. The model was built with a broad-scope purpose, aiming to be effective for different dose prescriptions and tumour localisations. It was validated on three groups of patients, from the same institutions and from another clinic that did not provide patients for the training phase. Comparison of the automated plans was done against reference cases given by the clinically accepted plans. Quantitative improvements (statistically significant for the majority of the analysed dose-volume parameters) were observed between the benchmark and the test plans. Of 624 dose-volume objectives assessed for plan evaluation, in 21 cases (3.3 %) the reference plans failed to respect the constraints while the model-based plans succeeded. Only in 3 cases (<0.5 %) did the reference plans pass the criteria while the model-based plans failed. In 5.3 % of the cases both groups of plans failed, and in the remaining cases both passed the tests. Plans were optimised using a broad-scope knowledge-based model to determine the dose-volume constraints. The results showed dosimetric improvements when compared to the benchmark data. In particular, the plans optimised for patients from the third centre, which did not participate in the training, were of superior quality. The data suggest that the new engine is reliable and could encourage its application to clinical practice.

  20. Digital radiography: are the manufacturers' settings too high? Optimisation of the Kodak digital radiography system with aid of the computed radiography dose index.

    PubMed

    Peters, Sinead E; Brennan, Patrick C

    2002-09-01

    Manufacturers offer exposure indices as a safeguard against overexposure in computed radiography, but the basis for recommended values is unclear. This study establishes an optimum exposure index to be used as a guideline for a specific CR system to minimise radiation exposure for computed mobile chest radiography, and compares this with manufacturer guidelines and current practice. An anthropomorphic phantom was employed to establish the minimum tube current-time product (mAs) consistent with acceptable image quality for mobile chest radiography images. This was found to be 2 mAs. Subsequently, 10 patients were exposed with this optimised mAs value and 10 patients were exposed with the 3.2 mAs routinely used in the department of the study. Image quality was objectively assessed using anatomical criteria. Retrospective analyses of 717 exposure indices recorded over 2 months from mobile chest examinations were performed. The optimised mAs value provided a significant reduction of the average exposure index from 1840 to 1570 (p<0.0001). This new "optimum" exposure index is substantially lower than the manufacturer guideline of 2000 and significantly lower than the exposure indices from the retrospective study (1890). Retrospective data showed a significant increase in exposure indices if the examination was performed out of hours. The data provided by this study emphasise the need for clinicians and personnel to consider establishing their own optimum exposure indices for digital investigations rather than simply accepting manufacturers' guidelines. Such an approach, along with regular monitoring of indices, may result in a substantial reduction in patient exposure.

  1. Selecting a climate model subset to optimise key ensemble properties

    NASA Astrophysics Data System (ADS)

    Herger, Nadja; Abramowitz, Gab; Knutti, Reto; Angélil, Oliver; Lehmann, Karsten; Sanderson, Benjamin M.

    2018-02-01

    End users studying impacts and risks caused by human-induced climate change are often presented with large multi-model ensembles of climate projections whose composition and size are arbitrarily determined. An efficient and versatile method that finds a subset which maintains certain key properties from the full ensemble is needed, but very little work has been done in this area. Therefore, users typically make their own somewhat subjective subset choices and commonly use the equally weighted model mean as a best estimate. However, different climate model simulations cannot necessarily be regarded as independent estimates due to the presence of duplicated code and shared development history. Here, we present an efficient and flexible tool that makes better use of the ensemble as a whole by finding a subset with improved mean performance compared to the multi-model mean while at the same time maintaining the spread and addressing the problem of model interdependence. Out-of-sample skill and reliability are demonstrated using model-as-truth experiments. This approach is illustrated with one set of optimisation criteria but we also highlight the flexibility of cost functions, depending on the focus of different users. The technique is useful for a range of applications that, for example, minimise present-day bias to obtain an accurate ensemble mean, reduce dependence in ensemble spread, maximise future spread, ensure good performance of individual models in an ensemble, reduce the ensemble size while maintaining important ensemble characteristics, or optimise several of these at the same time. As in any calibration exercise, the final ensemble is sensitive to the metric, observational product, and pre-processing steps used.

  2. SU-F-J-16: Planar KV Imaging Dose Reduction Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gershkevitsh, E; Zolotuhhin, D

    Purpose: IGRT has become an indispensable tool in modern radiotherapy, with kV imaging used in many departments due to superior image quality and lower dose when compared to MV imaging. Many departments use manufacturer-supplied protocols for imaging which are not always optimised between image quality and radiation dose (ALARA). Methods: A whole body phantom PBU-50 (Kyoto Kagaku Ltd., Japan), intended for imaging in radiology, was imaged on a Varian iX accelerator (Varian Medical Systems, USA) with the OBI 1.5 system. The manufacturer's default protocols were adapted by modifying kV and mAs values when imaging different anatomical regions of the phantom (head, thorax, abdomen, pelvis, extremities). Images with different settings were independently reviewed by two persons and their suitability for IGRT set-up correction protocols was evaluated. The suitable images with the lowest mAs were then selected. The entrance surface dose (ESD) for the manufacturer's default protocols and the modified protocols was measured with an RTI Black Piranha (RTI Group, Sweden) and compared. Image quality was also measured with a kVQC phantom (Standard Imaging, USA) for the different protocols. The modified protocols have been applied in clinical work. Results: For most cases the optimized protocols reduced the ESD on average by a factor of 3 (range 0.9-8.5). A further reduction in ESD was observed by applying the bow-tie filter designed for CBCT. The largest reduction in dose (12.2 times) was observed for the Thorax lateral protocol. The dose was slightly increased (by 10%) for the large pelvis AP protocol. Conclusion: Manufacturer's default IGRT protocols could be optimised to reduce the ESD to the patient without losing the necessary image quality for patient set-up correction. For patient set-up with planar kV imaging the bony anatomy is mostly used and optimization should focus on this aspect. Therefore, the current approach with an anthropomorphic phantom is more advantageous in optimization than standard kV quality control phantoms and SNR metrics.

  3. Are the virial masses of clusters smaller than we think?

    NASA Technical Reports Server (NTRS)

    Cowie, L. L.; Henriksen, M.; Mushotzky, R.

    1987-01-01

    The constraints that the available X-ray spectral and imaging data place on the mass distribution and mass to light ratio of rich clusters are considered. It was found for the best determined cases that the mass to light ratio is less than 125 h_50 at radii exceeding 1 h_50 Mpc. The mass to light ratio is approximately constant at radii exceeding 1 h_50 Mpc but may rise to values of roughly 200 h_50 in the central regions. The fraction of the total mass that is in baryons, primarily the hot X-ray emitting gas, is roughly 30 percent, thus setting the mass to light ratio of the dark material to roughly 70. The model that fits the X-ray data for Coma is in good agreement with the observed optical velocity dispersion vs. radius data.

  5. Assessing artificial neural networks and statistical methods for infilling missing soil moisture records

    NASA Astrophysics Data System (ADS)

    Dumedah, Gift; Walker, Jeffrey P.; Chik, Li

    2014-07-01

    Soil moisture information is critically important for water management operations including flood forecasting, drought monitoring, and groundwater recharge estimation. While an accurate and continuous record of soil moisture is required for these applications, the available soil moisture data, in practice, are typically fraught with missing values. There is a wide range of methods available for infilling hydrologic variables, but a thorough inter-comparison between statistical methods and artificial neural networks has not been made. This study examines 5 statistical methods, including monthly averages, the weighted Pearson correlation coefficient, a method based on the temporal stability of soil moisture, a weighted merging of the three methods, and a method based on the concept of rough sets. Additionally, 9 artificial neural networks are examined, broadly categorized into feedforward, dynamic, and radial basis networks. These 14 infilling methods were used to estimate missing soil moisture records and subsequently validated against known values for 13 soil moisture monitoring stations at three different soil layer depths in the Yanco region in southeast Australia. The evaluation results show that the top three highest performing methods are the nonlinear autoregressive neural network, the rough sets method, and monthly replacement. A high estimation accuracy (root mean square error (RMSE) of about 0.03 m/m) was found for the nonlinear autoregressive network, due to its regression-based dynamic network which allows feedback connections through discrete-time estimation. An equally high accuracy (0.05 m/m RMSE) in the rough sets procedure illustrates the important role of temporal persistence of soil moisture, with the capability to account for different soil moisture conditions.

  6. Digital stereo photogrammetry for grain-scale monitoring of fluvial surfaces: Error evaluation and workflow optimisation

    NASA Astrophysics Data System (ADS)

    Bertin, Stephane; Friedrich, Heide; Delmas, Patrice; Chan, Edwin; Gimel'farb, Georgy

    2015-03-01

    Grain-scale monitoring of fluvial morphology is important for the evaluation of river system dynamics. Significant progress in remote sensing and computer performance allows rapid high-resolution data acquisition, however, applications in fluvial environments remain challenging. Even in a controlled environment, such as a laboratory, the extensive acquisition workflow is prone to the propagation of errors in digital elevation models (DEMs). This is valid for both of the common surface recording techniques: digital stereo photogrammetry and terrestrial laser scanning (TLS). The optimisation of the acquisition process, an effective way to reduce the occurrence of errors, is generally limited by the use of commercial software. Therefore, the removal of evident blunders during post processing is regarded as standard practice, although this may introduce new errors. This paper presents a detailed evaluation of a digital stereo-photogrammetric workflow developed for fluvial hydraulic applications. The introduced workflow is user-friendly and can be adapted to various close-range measurements: imagery is acquired with two Nikon D5100 cameras and processed using non-proprietary "on-the-job" calibration and dense scanline-based stereo matching algorithms. Novel ground truth evaluation studies were designed to identify the DEM errors, which resulted from a combination of calibration errors, inaccurate image rectifications and stereo-matching errors. To ensure optimum DEM quality, we show that systematic DEM errors must be minimised by ensuring a good distribution of control points throughout the image format during calibration. DEM quality is then largely dependent on the imagery utilised. We evaluated the open access multi-scale Retinex algorithm to facilitate the stereo matching, and quantified its influence on DEM quality. Occlusions, inherent to any roughness element, are still a major limiting factor to DEM accuracy. We show that a careful selection of the camera-to-object and baseline distance reduces errors in occluded areas and that realistic ground truths help to quantify those errors.

  7. Active optics: off axis aspherics generation for high contrast imaging

    NASA Astrophysics Data System (ADS)

    Hugot, E.; Laslandes, M.; Ferrari, M.; Vives, S.; Moindrot, S.; El Hadi, K.; Dohlen, K.

    2017-11-01

    Active optics methods, based on elasticity theory, allow the aspherisation of optical surfaces by stress polishing but also active aspherisation in situ. Research in this field will impact the final performance and the final cost of any telescope or instrument. The stress polishing method is well suited for the superpolishing of aspheric components for astronomy. Its principle relies on spherical polishing with a full-sized tool of a warped substrate, which becomes aspherical once unwarped. The main advantage of this technique is the very high optical quality obtained both on form and on high spatial frequency errors. Furthermore, the roughness can be decreased down to a few angstroms, thanks to classical polishing with a large pitch tool, providing a substantial gain in the final scientific performance, for instance in the contrast of coronagraphic images, but also in polishing time and cost. Stress polishing is based on elasticity theory, and requires an optimised deformation system able to provide the right aspherical form on the optical surface during polishing. The optical quality of the deformation is validated using extensive finite element analysis, allowing an estimation of residuals and an optimisation of the warping harness. We describe here the work realised on stress polishing of toric mirrors for VLT-SPHERE and then our current work on off-axis aspherics (OAA) for the ASPIICS-Proba3 mission for solar coronagraphy. The ASPIICS optical design by Vives et al. is a three-mirror anastigmat including a concave off-axis hyperboloid and a convex off-axis parabola (OAP). We are developing a prototype in order to demonstrate the feasibility of this type of surface, using a multi-mode warping harness (Lemaitre et al.). Furthermore, we present our work on a variable OAP, i.e. the possibility of adjusting the shape of a simple OAP in situ with a minimal number of actuators, typically one actuator per optical mode (focus, coma and astigmatism). Applications for future space telescopes and instrumentation are discussed.

  8. Optimised collision avoidance for an ultra-close rendezvous with a failed satellite based on the Gauss pseudospectral method

    NASA Astrophysics Data System (ADS)

    Chu, Xiaoyu; Zhang, Jingrui; Lu, Shan; Zhang, Yao; Sun, Yue

    2016-11-01

    This paper presents a trajectory planning algorithm to optimise the collision avoidance of a chasing spacecraft operating in an ultra-close proximity to a failed satellite. The complex configuration and the tumbling motion of the failed satellite are considered. The two-spacecraft rendezvous dynamics are formulated based on the target body frame, and the collision avoidance constraints are detailed, particularly concerning the uncertainties. An optimisation solution of the approaching problem is generated using the Gauss pseudospectral method. A closed-loop control is used to track the optimised trajectory. Numerical results are provided to demonstrate the effectiveness of the proposed algorithms.

  9. Improving target coverage and organ-at-risk sparing in intensity-modulated radiotherapy for cervical oesophageal cancer using a simple optimisation method.

    PubMed

    Lu, Jia-Yang; Cheung, Michael Lok-Man; Huang, Bao-Tian; Wu, Li-Li; Xie, Wen-Jia; Chen, Zhi-Jian; Li, De-Rui; Xie, Liang-Xi

    2015-01-01

    To assess the performance of a simple optimisation method for improving target coverage and organ-at-risk (OAR) sparing in intensity-modulated radiotherapy (IMRT) for cervical oesophageal cancer. For 20 selected patients, clinically acceptable original IMRT plans (Original plans) were created, and two optimisation methods were adopted to improve the plans: 1) a base dose function (BDF)-based method, in which the treatment plans were re-optimised based on the original plans, and 2) a dose-controlling structure (DCS)-based method, in which the original plans were re-optimised by assigning additional constraints for hot and cold spots. The Original, BDF-based and DCS-based plans were compared with regard to target dose homogeneity, conformity, OAR sparing, planning time and monitor units (MUs). Dosimetric verifications were performed and delivery times were recorded for the BDF-based and DCS-based plans. The BDF-based plans provided significantly superior dose homogeneity and conformity compared with both the DCS-based and Original plans. The BDF-based method further reduced the doses delivered to the OARs by approximately 1-3%. The re-optimisation time was reduced by approximately 28%, but the MUs and delivery time were slightly increased. All verification tests were passed and no significant differences were found. The BDF-based method for the optimisation of IMRT for cervical oesophageal cancer can achieve significantly better dose distributions with better planning efficiency at the expense of slightly more MUs.

  10. A waste characterisation procedure for ADM1 implementation based on degradation kinetics.

    PubMed

    Girault, R; Bridoux, G; Nauleau, F; Poullain, C; Buffet, J; Steyer, J-P; Sadowski, A G; Béline, F

    2012-09-01

    In this study, a procedure accounting for degradation kinetics was developed to split the total COD of a substrate into each input state variable required for Anaerobic Digestion Model n°1. The procedure is based on the combination of batch experimental degradation tests ("anaerobic respirometry") and numerical interpretation of the results obtained (optimisation of the ADM1 input state variable set). The effects of the main operating parameters, such as the substrate to inoculum ratio in batch experiments and the origin of the inoculum, were investigated. Combined with biochemical fractionation of the total COD of substrates, this method enabled determination of an ADM1-consistent input state variable set for each substrate with affordable identifiability. The substrate to inoculum ratio in the batch experiments and the origin of the inoculum influenced input state variables. However, based on results modelled for a CSTR fed with the substrate concerned, these effects were not significant. Indeed, if the optimal ranges of these operational parameters are respected, uncertainty in COD fractionation is mainly limited to temporal variability of the properties of the substrates. As the method is based on kinetics and is easy to implement for a wide range of substrates, it is a very promising way to numerically predict the effect of design parameters on the efficiency of an anaerobic CSTR. This method thus promotes the use of modelling for the design and optimisation of anaerobic processes. Copyright © 2012 Elsevier Ltd. All rights reserved.

  11. [Strategy and collaboration between medicinal chemists and pharmaceutical scientists for drug delivery systems].

    PubMed

    Mano, Takashi

    2013-01-01

    In order to successfully apply drug delivery systems (DDS) to new chemical entities (NCEs), collaboration between medicinal chemists and formulation scientists is critical for efficient drug discovery. Formulation scientists have to use 'language' that medicinal chemists understand to help promote mutual understanding, and medicinal chemists and formulation scientists have to set up strategies to use suitable DDS technologies at the discovery phase of the programmes to ensure successful transfer into the development phase. In this review, strategies of solubilisation formulation for oral delivery, inhalation delivery, nasal delivery and bioconjugation are all discussed. For example, for oral drug delivery, multiple initiatives can be proposed to improve the process to select an optimal delivery option for an NCE. From a technical perspective, formulation scientists have to explain the scope and limitations of formulations as some DDS technologies might be applicable only to limited chemical spaces. Other limitations could be the administered dose and, cost, time and resources for formulation development and manufacturing. Since DDS selection is best placed as part of lead-optimisation, formulation scientists need to be involved in discovery projects at lead selection and optimisation stages. The key to success in their collaboration is to facilitate communication between these two areas of expertise at both a strategic and scientific level. Also, it would be beneficial for medicinal chemists and formulation scientists to set common goals to improve the process of collaboration and build long term partnerships to improve DDS.

  12. Computational intelligence-based polymerase chain reaction primer selection based on a novel teaching-learning-based optimisation.

    PubMed

    Cheng, Yu-Huei

    2014-12-01

    Specific primers play an important role in polymerase chain reaction (PCR) experiments, and therefore it is essential to find specific primers of outstanding quality. Unfortunately, many PCR constraints must be simultaneously inspected, which makes specific primer selection difficult and time-consuming. This paper introduces a novel computational intelligence-based method, Teaching-Learning-Based Optimisation, to select specific and feasible primers. Specified PCR product lengths of 150-300 bp and 500-800 bp with three melting temperature formulae (Wallace's formula, Bolton and McCarthy's formula and SantaLucia's formula) were used. The authors calculate the optimal frequency to estimate the quality of primer selection based on a total of 500 runs for 50 random nucleotide sequences of 'Homo species' retrieved from the National Center for Biotechnology Information. The method was then fairly compared with the genetic algorithm (GA) and memetic algorithm (MA) for primer selection in the literature. The results show that the method easily found suitable primers corresponding to the set primer constraints and performed better than the GA and the MA. Furthermore, the method was also compared with the common tool Primer3 with respect to method type, primer presentation, parameter settings, speed and memory usage. In conclusion, it is an interesting primer selection method and a valuable tool for automatic high-throughput analysis. In the future, the usage of the primers in the wet lab needs to be validated carefully to increase the reliability of the method.
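
    A minimal continuous Teaching-Learning-Based Optimisation loop (teacher phase followed by learner phase) on a toy objective; real primer selection works on a discrete primer encoding and scores melting temperature, GC content and the other PCR constraints, so this only sketches the optimiser itself.

        import numpy as np

        def tlbo(objective, bounds, pop=20, iters=100, seed=0):
            """Minimal continuous Teaching-Learning-Based Optimisation (minimisation)."""
            rng = np.random.default_rng(seed)
            lo, hi = bounds
            X = rng.uniform(lo, hi, size=(pop, len(lo)))
            f = np.array([objective(x) for x in X])
            for _ in range(iters):
                # Teacher phase: move the class toward the best learner, away from the mean.
                teacher, Tf = X[f.argmin()], rng.integers(1, 3)
                X_new = np.clip(X + rng.random(X.shape) * (teacher - Tf * X.mean(axis=0)), lo, hi)
                f_new = np.array([objective(x) for x in X_new])
                keep = f_new < f
                X[keep], f[keep] = X_new[keep], f_new[keep]
                # Learner phase: each learner moves relative to a random classmate.
                partners = rng.permutation(pop)
                step = np.where((f < f[partners])[:, None], X - X[partners], X[partners] - X)
                X_new = np.clip(X + rng.random(X.shape) * step, lo, hi)
                f_new = np.array([objective(x) for x in X_new])
                keep = f_new < f
                X[keep], f[keep] = X_new[keep], f_new[keep]
            return X[f.argmin()], f.min()

        # Toy stand-in for a primer scoring function (real use would score Tm, GC%, length, ...).
        score = lambda x: float((x ** 2).sum())
        best, best_f = tlbo(score, (np.full(4, -5.0), np.full(4, 5.0)))
        print(np.round(best, 3), round(best_f, 6))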

  13. Comprehensive quantum chemical and spectroscopic (FTIR, FT-Raman, 1H, 13C NMR) investigations of O-desmethyltramadol hydrochloride an active metabolite in tramadol--an analgesic drug.

    PubMed

    Arjunan, V; Santhanam, R; Marchewka, M K; Mohan, S

    2014-03-25

    O-desmethyltramadol is one of the main metabolites of tramadol, which is widely used clinically, and has analgesic activity. The FTIR and FT-Raman spectra of O-desmethyltramadol hydrochloride were recorded in the solid phase in the regions 4000-400 cm(-1) and 4000-100 cm(-1), respectively. The observed fundamentals are assigned to different normal modes of vibration. Theoretical studies have been performed on its hydrochloride salt. The structure of the compound has been optimised with the B3LYP method using 6-31G(**) and cc-pVDZ basis sets. The optimised bond lengths and bond angles are correlated with the X-ray data. The experimental wavenumbers were compared with the scaled vibrational frequencies determined by DFT methods. The IR and Raman intensities are determined with the B3LYP method using cc-pVDZ and 6-31G(d,p) basis sets. The total electron density and molecular electrostatic potential surfaces of the molecule are constructed by using the B3LYP/cc-pVDZ method to display the electrostatic potential (electron+nuclei) distribution. The electronic properties, HOMO and LUMO energies, were determined. Natural bond orbital analysis of O-desmethyltramadol hydrochloride has been performed to indicate the presence of intramolecular charge transfer. The (1)H and (13)C NMR chemical shifts of the molecule have been analysed. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. Machining of bone: Analysis of cutting force and surface roughness by turning process.

    PubMed

    Noordin, M Y; Jiawkok, N; Ndaruhadi, P Y M W; Kurniawan, D

    2015-11-01

    There are millions of orthopedic surgeries and dental implantation procedures performed every year globally. Most of them involve machining of bones and cartilage. However, theoretical and analytical study of bone machining is lagging behind its practice and implementation. This study views bone machining as a machining process, with bovine bone as the workpiece material. The turning process, which forms the basis of the drilling process actually used, was investigated experimentally. The focus is on evaluating the effects of three machining parameters, that is, cutting speed, feed, and depth of cut, on the machining responses, that is, the cutting forces and surface roughness resulting from the turning process. Response surface methodology was used to quantify the relation between the machining parameters and the machining responses. The turning process was carried out at various cutting speeds (29-156 m/min), depths of cut (0.03-0.37 mm), and feeds (0.023-0.11 mm/rev). Empirical models of the resulting cutting force and surface roughness as functions of cutting speed, depth of cut, and feed were developed. Observation using the developed empirical models found that, within the range of machining parameters evaluated, the most influential machining parameter for the cutting force is depth of cut, followed by feed and cutting speed. The lowest cutting force was obtained at the lowest cutting speed, lowest depth of cut, and highest feed setting. For surface roughness, feed is the most significant machining condition, followed by cutting speed, with depth of cut showing no effect. The finest surface finish was obtained at the lowest cutting speed and feed setting. © IMechE 2015.
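
    The response-surface step can be sketched as an ordinary least-squares fit of a reduced second-order model (linear plus interaction terms) to invented turning trials; the numbers below are illustrative, not the study's measurements.

        import numpy as np

        # Hypothetical turning trials on bovine bone: cutting speed [m/min], depth of cut [mm],
        # feed [mm/rev] and the measured cutting force [N]; all values are invented.
        X = np.array([[29, 0.03, 0.023], [29, 0.37, 0.110], [156, 0.03, 0.110],
                      [156, 0.37, 0.023], [92, 0.20, 0.066], [92, 0.20, 0.023],
                      [92, 0.37, 0.066], [29, 0.20, 0.066], [156, 0.20, 0.066],
                      [92, 0.03, 0.066]])
        y = np.array([8.0, 46.0, 12.0, 40.0, 24.0, 18.0, 38.0, 26.0, 22.0, 9.0])

        def terms(x):
            v, d, f = x
            return [1.0, v, d, f, v * d, v * f, d * f]   # linear + interaction response surface

        A = np.array([terms(row) for row in X])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        predict = lambda x: float(np.array(terms(x)) @ coef)
        print(round(predict([92, 0.20, 0.066]), 1))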

  15. Remote sensing of soil moisture content over bare fields at 1.4 GHz frequency

    NASA Technical Reports Server (NTRS)

    Wang, J. R.; Choudhury, B. J.

    1980-01-01

    A simple method of estimating the moisture content (W) of a bare soil from the observed brightness temperature (T_B) at 1.4 GHz is discussed. The method is based on a radiative transfer model calculation, which has been successfully used in the past to account for many observational results, with some modifications to take into account the effect of surface roughness. Besides the measured T_B's, the three additional inputs required by the method are the effective soil thermodynamic temperature, the precise relation between W and the smooth field brightness temperature T_B, and a parameter specifying the surface roughness characteristics. The soil effective temperature can be readily measured, and the procedures for estimating the surface roughness parameter and obtaining the relation between W and the smooth field brightness temperature are discussed in detail. Dual polarized radiometric measurements at an off-nadir incident angle are sufficient to estimate both the surface roughness parameter and W, provided that the relation between W and the smooth field brightness temperature at the same angle is known. The method of W estimation is demonstrated with two sets of experimental data, one from a controlled field experiment with a mobile tower and the other from aircraft overflights. The results from both data sets are encouraging when the estimated W's are compared with the acquired ground truth of W's in the top 2 cm layer. An offset between the estimated and the measured W's exists in the results of the analyses, but that can be accounted for by the presently poor knowledge of the relationship between W and the smooth field brightness temperature for various types of soils. An approach to quantify this relationship for different soils and thus improve the method of W estimation is suggested.

  16. Fabrication of Organic Radar Absorbing Materials: A Report on the TIF Project

    DTIC Science & Technology

    2005-05-01

    thickness, permittivity and permeability. The ability to measure the permittivity and permeability is an essential requirement for designing an optimised...absorber. Good optimisation codes are also required in order to achieve the best possible absorber designs. In this report, the results from a...through measurement of their conductivity and permittivity at microwave frequencies. Methods were then developed for optimising the design of

  17. Soil roughness, slope and surface storage relationship for impervious areas

    NASA Astrophysics Data System (ADS)

    Borselli, Lorenzo; Torri, Dino

    2010-11-01

    Summary: The study of the relationships between surface roughness, local slope gradient and the maximum volume of water storage in surface depressions is a fundamental element in the development of hydrological models to be used in soil and water conservation strategies. Good estimates of the maximum volume of water storage are important for runoff assessment during rainfall events. Some attempts to link surface storage to parameters such as indices of surface roughness and, more rarely, local gradient have been proposed by several authors, with empirical equations often conflicting with one another and usually based on a narrow range of slope gradients. This suggests care in selecting any of the proposed equations or models and invites one to verify the existence of more realistic experimental relationships, based on physical models of the surfaces and valid for a larger range of gradients. The aim of this study is to develop such a relation for predicting/estimating the maximum volume of water that a soil surface, with given roughness characteristics and local slope gradient, can store. Experimental work was carried out in order to reproduce reliable rough surfaces able to maintain the following properties during the experimental activity: (a) an impervious surface, to avoid biased storage determination; (b) stable, non-erodible surfaces, to avoid changes of retention volume during tests; (c) absence of hydrophobic behaviour. To meet conditions (a)-(c), we generated physical surfaces with various roughness magnitudes using plasticine (an emulsion of non-expansible clay and oil). The plasticine surface, reproducing surfaces of arable soils, was then wetted and dusted with a very fine timber sawdust. This reduced the natural hydrophobic behaviour of the plasticine to an undetectable value. Storage experiments were conducted with plasticine rough surfaces on top of large rigid polystyrene plates inclined at different slope gradients: 2%, 5%, 10%, 20%, 30%. Roughness data collected on the generated plasticine surfaces were successfully compared with roughness data collected on real soil surfaces for similar conditions. A set of roughness indices was computed for each surface using roughness profiles measured with a laser profile meter. The roughness indices included quantiles of the Abbott-Firestone curve, which is used in surface metrology for industrial applications to characterize surface roughness in a non-parametric approach (Whitehouse, 1994). Storage data were fitted with an empirical equation (a double negative exponential of roughness and slope). Several roughness indices turned out to be well related to storage. The best results were obtained using the Abbott-Firestone curve parameter P100. Besides this storage empirical model (SEM), a geometrical model was also developed, trying to give a more physical basis to the results obtained so far. Depression geometry was approximated with spherical cups. A general physical model was derived (the storage cup model - SCM). The cup approximation identifies where roughness elevation comes in and how it relates to slope gradient in defining depression volume. Moreover, the exponential decay used for assessing the slope effect on storage volume in the empirical model of Eqs. (8) and (9) emerges as consistent with the distribution of cup sizes.
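
    As a sketch of the curve-fitting step: the paper's exact double-negative-exponential equation is not reproduced here, so an assumed stand-in form with a negative-exponential dependence on roughness and on slope is fitted to invented storage observations using SciPy's curve_fit.

        import numpy as np
        from scipy.optimize import curve_fit

        def storage(X, a, b, c):
            """Assumed illustrative form: storage rises with roughness and decays with slope."""
            R, S = X
            return a * (1.0 - np.exp(-b * R)) * np.exp(-c * S)

        # Invented observations: roughness index [mm], slope [%], max depression storage [mm].
        R = np.array([2, 5, 10, 15, 2, 5, 10, 15, 2, 5, 10, 15], dtype=float)
        S = np.array([2, 2, 2, 2, 10, 10, 10, 10, 30, 30, 30, 30], dtype=float)
        V = np.array([0.6, 1.3, 2.0, 2.3, 0.5, 1.0, 1.6, 1.8, 0.3, 0.6, 1.0, 1.1])

        popt, _ = curve_fit(storage, (R, S), V, p0=[3.0, 0.2, 0.02])
        print(np.round(popt, 3))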

  18. Dark Energy Survey Year 1 Results: galaxy mock catalogues for BAO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avila, S.; et al.

    Mock catalogues are a crucial tool in the analysis of galaxy survey data, both for the accurate computation of covariance matrices and for the optimisation of analysis methodology and validation of data sets. In this paper, we present a set of 1800 galaxy mock catalogues designed to match the Dark Energy Survey Year-1 BAO sample (Crocce et al. 2017) in abundance, observational volume, redshift distribution and uncertainty, and redshift-dependent clustering. The simulated samples were built upon HALOGEN (Avila et al. 2015) halo catalogues, based on a 2LPT density field with an exponential bias. For each of them, a lightcone is constructed by the superposition of snapshots in the redshift range 0.45...

  19. Thermal buckling optimisation of composite plates using firefly algorithm

    NASA Astrophysics Data System (ADS)

    Kamarian, S.; Shakeri, M.; Yas, M. H.

    2017-07-01

    Composite plates play a very important role in engineering applications, especially in the aerospace industry. The thermal buckling of such components is of great importance and must be known to achieve an appropriate design. This paper deals with stacking sequence optimisation of laminated composite plates for maximising the critical buckling temperature using a powerful meta-heuristic algorithm called the firefly algorithm (FA), which is based on the flashing behaviour of fireflies. The main objective of the present work is to show the ability of FA in the optimisation of composite structures. The performance of FA is compared with results reported in previously published works using other algorithms, which demonstrates the efficiency of FA in stacking sequence optimisation of laminated composite structures.
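
    For readers unfamiliar with the optimiser, a minimal firefly algorithm is sketched below on a continuous toy objective. The stacking-sequence problem itself would additionally require a discrete encoding of ply angles and a critical-buckling-temperature evaluation, neither of which is reproduced here.

```python
import numpy as np

def firefly_minimise(objective, dim, n_fireflies=20, n_iter=100,
                     beta0=1.0, gamma=1.0, alpha=0.2, bounds=(-5.0, 5.0), seed=0):
    """Minimal firefly algorithm for minimising a continuous objective."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_fireflies, dim))
    f = np.array([objective(xi) for xi in x])
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if f[j] < f[i]:                          # firefly j is brighter (better)
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)   # attractiveness decays with distance
                    x[i] += beta * (x[j] - x[i]) + alpha * rng.uniform(-0.5, 0.5, dim)
                    x[i] = np.clip(x[i], lo, hi)
                    f[i] = objective(x[i])
        alpha *= 0.97                                     # slowly reduce the random walk component
    best = np.argmin(f)
    return x[best], f[best]

if __name__ == "__main__":
    sphere = lambda v: float(np.sum(v ** 2))              # toy objective, not a buckling model
    x_best, f_best = firefly_minimise(sphere, dim=4)
    print(x_best, f_best)
```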

  20. Distributed convex optimisation with event-triggered communication in networked systems

    NASA Astrophysics Data System (ADS)

    Liu, Jiayun; Chen, Weisheng

    2016-12-01

    This paper studies the distributed convex optimisation problem over directed networks. Motivated by practical considerations, we propose a novel distributed zero-gradient-sum optimisation algorithm with event-triggered communication. As a result, communication and control updates occur only at discrete instants when a predefined condition is satisfied. Thus, compared with time-driven distributed optimisation algorithms, the proposed algorithm has the advantages of lower energy consumption and lower communication cost. Based on Lyapunov approaches, we show that the proposed algorithm drives the system states to the solution of the problem exponentially fast and that Zeno behaviour is excluded. Finally, a simulation example is given to illustrate the effectiveness of the proposed algorithm.
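
    A rough discrete-time sketch of the idea, assuming quadratic local objectives f_i(x) = a_i (x - c_i)^2 / 2, an undirected ring network and a simple state-error trigger with a decaying threshold, is given below. The paper's continuous-time dynamics, directed graphs and exact triggering condition are not reproduced; the sketch only illustrates how event-triggered broadcasting limits communication while the agents still reach the global optimum.

```python
import numpy as np

# quadratic local objectives f_i(x) = 0.5 * a[i] * (x - c[i])**2
a = np.array([1.0, 2.0, 0.5, 1.5])
c = np.array([4.0, -1.0, 2.0, 0.0])
n = len(a)
ring = [(i, (i + 1) % n) for i in range(n)]          # undirected ring network

x = c.copy()                  # zero-gradient-sum style init: each agent starts at its own minimiser
x_hat = x.copy()              # last broadcast states
step, n_steps = 0.05, 2000
broadcasts = 0

for k in range(n_steps):
    threshold = 0.5 * 0.995 ** k                     # decaying trigger threshold (assumed schedule)
    for i in range(n):                               # event check: broadcast only when needed
        if abs(x[i] - x_hat[i]) > threshold:
            x_hat[i] = x[i]
            broadcasts += 1
    # consensus-type update weighted by the inverse local curvature (1 / a[i]); the a-weighted sum
    # of the states is preserved, so consensus implies the global optimum
    for i, j in ring:
        diff = x_hat[j] - x_hat[i]
        x[i] += step * diff / a[i]
        x[j] -= step * diff / a[j]

x_star = np.sum(a * c) / np.sum(a)                   # analytic minimiser of the summed objective
print("agents:", np.round(x, 3), "optimum:", round(x_star, 3), "broadcasts:", broadcasts)
```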

  1. Electric cars. Advantages and disadvantages

    NASA Astrophysics Data System (ADS)

    Gelmanova, Z. S.; Zhabalova, G. G.; Sivyakova, G. A.; Lelikova, O. N.; Onishchenko, O. N.; Smailova, A. A.; Kamarova, S. N.

    2018-05-01

    The article considers the positive and negative aspects of the use of electric vehicles. A rough calculation of the energy efficiency and average cost per month was made. Priorities for avoiding the existing problems in the electric vehicle market were also set.

  2. Aerodynamics and thermal physics of helicopter ice accretion

    NASA Astrophysics Data System (ADS)

    Han, Yiqiang

    Ice accretion on aircraft introduces significant loss in airfoil performance. A reduced lift-to-drag ratio reduces the vehicle's capability to maintain altitude and also limits its maneuverability. Current ice accretion performance degradation modeling approaches are calibrated only to a limited envelope of liquid water content, impact velocity, temperature, and water droplet size; consequently, inaccurate aerodynamic performance degradations are estimated. The reduced ice accretion prediction capabilities in the glaze ice regime are primarily due to a lack of knowledge of the surface roughness induced by ice accretion. A comprehensive understanding of ice roughness effects on airfoil heat transfer, ice accretion shapes, and ultimately aerodynamic performance is critical for the design of ice protection systems. Surface roughness effects on both heat transfer and aerodynamic performance degradation on airfoils have been experimentally evaluated. Novel techniques, such as ice molding and casting methods and transient heat transfer measurement using non-intrusive thermal imaging, were developed at the Adverse Environment Rotor Test Stand (AERTS) facility at Penn State. A novel heat transfer scaling method specifically for the turbulent flow regime was also conceived. A heat transfer scaling parameter, labeled the Coefficient of Stanton and Reynolds Number (CSR = St_x/Re_x^(-0.2)), has been validated against reference data found in the literature for rough flat plates with Reynolds number (Re) up to 1×10^7, for rough cylinders with Re ranging from 3×10^4 to 4×10^6, and for turbine blades with Re from 7.5×10^5 to 7×10^6. This is the first time that the effect of Reynolds number has been shown to be successfully eliminated from heat transfer magnitudes measured on rough surfaces. Analytical models for ice roughness distribution, heat transfer prediction, and aerodynamic performance degradation due to ice accretion have also been developed. The ice roughness prediction model was developed based on a set of 82 experimental measurements and also compared to existing prediction tools. Two reference predictions found in the literature yielded 76% and 54% discrepancy with respect to experimental testing, whereas the proposed ice roughness prediction model resulted in a 31% minimum accuracy in prediction. It must be noted that the accuracy of the proposed model is within the ice shape reproduction uncertainty of icing facilities. Based on the new ice roughness prediction model and the CSR heat transfer scaling method, an icing heat transfer model was developed. The approach achieved high accuracy in heat transfer prediction compared to experiments conducted at the AERTS facility. The discrepancy between predictions and experimental results was within +/-15%, which was within the measurement uncertainty range of the facility. By combining both the ice roughness and heat transfer predictions, and incorporating the modules into an existing ice prediction tool (LEWICE), improved prediction capability was obtained, especially for the glaze regime. With the available ice shapes accreted at the AERTS facility and additional experiments found in the literature, 490 sets of experimental ice shapes and corresponding aerodynamics testing data were available. A physics-based performance degradation empirical tool was developed and achieved a mean absolute deviation of 33% when compared to the entire experimental dataset, whereas 60% to 243% discrepancies were observed using legacy drag penalty prediction tools.
Rotor torque predictions coupling Blade Element Momentum Theory and the proposed drag performance degradation tool were conducted on a total of 17 validation cases. The coupled prediction tool achieved a 10% prediction error for clean rotor conditions and a 16% error for iced rotor conditions. It was shown that additional roughness elements could affect the measured drag by up to 25% during experimental testing, emphasizing the need for realistic ice structures during aerodynamic modeling and testing for ice accretion.
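
    The scaling parameter quoted above, CSR = St_x / Re_x^(-0.2) (equivalently St_x * Re_x^0.2), is straightforward to evaluate; the snippet below does so for illustrative flow conditions, with all property values assumed rather than taken from the AERTS experiments.

```python
def stanton_number(h, rho, u, cp):
    """Local Stanton number St = h / (rho * u * cp)."""
    return h / (rho * u * cp)

def csr(stanton, reynolds):
    """Coefficient of Stanton and Reynolds number: CSR = St / Re**(-0.2) = St * Re**0.2."""
    return stanton * reynolds ** 0.2

if __name__ == "__main__":
    # assumed illustrative values, not measured data
    rho, u, cp, mu = 1.2, 50.0, 1005.0, 1.8e-5   # air density, speed, heat capacity, viscosity
    x, h = 0.1, 250.0                            # surface position [m], heat transfer coefficient [W/m^2K]
    re_x = rho * u * x / mu
    st_x = stanton_number(h, rho, u, cp)
    print(f"Re_x = {re_x:.3g}, St_x = {st_x:.3g}, CSR = {csr(st_x, re_x):.3g}")
```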

  3. An Analysis of Quality in the Modular Housing Industry.

    DTIC Science & Technology

    1991-12-01

    finishing, Station 5, installs rough plumbing and applies the first coat of drywall joint compound. The unit continues to ceiling/roof setting, Station...with joint compound and drywall or plywood plates. Rigid waferboard, oriented strand board, or plywood is used for exterior wall sheathing to...completed and tested, the second coat of joint compound is placed, and windows and doors are set. Insulation, exterior sheathing, roof sheathing

  4. Knowledge mining from clinical datasets using rough sets and backpropagation neural network.

    PubMed

    Nahato, Kindie Biredagn; Harichandran, Khanna Nehemiah; Arputharaj, Kannan

    2015-01-01

    The availability of clinical datasets and knowledge mining methodologies encourages researchers to pursue research in extracting knowledge from clinical datasets. Different data mining techniques have been used for mining rules, and mathematical models have been developed to assist the clinician in decision making. The objective of this research is to build a classifier that will predict the presence or absence of a disease by learning from a minimal set of attributes extracted from the clinical dataset. In this work, a rough set indiscernibility relation method with a backpropagation neural network (RS-BPNN) is used. This work has two stages. The first stage is the handling of missing values to obtain a smooth data set and the selection of appropriate attributes from the clinical dataset by the indiscernibility relation method. The second stage is classification using a backpropagation neural network on the selected reducts of the dataset. The classifier has been tested with hepatitis, Wisconsin breast cancer, and Statlog heart disease datasets obtained from the University of California at Irvine (UCI) machine learning repository. The accuracy obtained from the proposed method is 97.3%, 98.6%, and 90.4% for hepatitis, breast cancer, and heart disease, respectively. The proposed system provides an effective classification model for clinical datasets.
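
    A simplified stand-in for the two-stage pipeline is sketched below with scikit-learn: mutual-information feature selection approximates the rough-set reduct step (the actual indiscernibility-relation reduction is not reproduced) and an MLP trained by backpropagation plays the classifier role, demonstrated on the UCI Wisconsin breast cancer data bundled with scikit-learn.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Stage 1 (stand-in for the reduct): keep the 10 most informative attributes.
# Stage 2: backpropagation neural network trained on the reduced attribute set.
model = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=10),
    MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0),
)

scores = cross_val_score(model, X, y, cv=5)
print(f"5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```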

  5. Optimising operational amplifiers by evolutionary algorithms and gm/Id method

    NASA Astrophysics Data System (ADS)

    Tlelo-Cuautle, E.; Sanabria-Borbon, A. C.

    2016-10-01

    The evolutionary algorithm called the non-dominated sorting genetic algorithm (NSGA-II) is applied herein to the optimisation of operational transconductance amplifiers. NSGA-II is accelerated by applying the gm/Id method to estimate reduced search spaces associated with the widths (W) and lengths (L) of the metal-oxide-semiconductor field-effect transistors (MOSFETs), and to guarantee appropriate bias-level conditions. In addition, we introduce an integer encoding for the W/L sizes of the MOSFETs to avoid a post-processing step for rounding off their values to multiples of the integrated circuit fabrication technology. Finally, from the feasible solutions generated by NSGA-II, we introduce a second optimisation stage to guarantee that the final feasible W/L size solutions support process, voltage and temperature (PVT) variations. The optimisation results lead us to conclude that the gm/Id method and integer encoding are quite useful for accelerating the convergence of the evolutionary algorithm NSGA-II, while the second optimisation stage guarantees robustness of the feasible solutions to PVT variations.
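
    The integer-encoding idea can be illustrated with a small decoder that maps integer genes to transistor widths and lengths that are exact multiples of the fabrication grid, so no rounding-off step is needed afterwards; the grid value and the size ranges below are assumptions chosen only for illustration.

```python
import numpy as np

GRID = 0.09e-6          # assumed fabrication grid (90 nm), purely illustrative
W_RANGE = (2, 200)      # allowed width in grid units (assumed bounds)
L_RANGE = (2, 20)       # allowed length in grid units (assumed bounds)

def decode(genes):
    """Map integer genes (pairs of grid counts) to W/L sizes in metres.

    genes has shape (n_transistors, 2): column 0 = width counts, column 1 = length counts.
    Because the genes are integers, every decoded size is a multiple of GRID by construction.
    """
    g = np.asarray(genes, dtype=int)
    w = np.clip(g[:, 0], *W_RANGE) * GRID
    l = np.clip(g[:, 1], *L_RANGE) * GRID
    return w, l

def random_individual(n_transistors, rng):
    """Random integer chromosome of the kind an evolutionary loop would evolve."""
    w = rng.integers(W_RANGE[0], W_RANGE[1] + 1, n_transistors)
    l = rng.integers(L_RANGE[0], L_RANGE[1] + 1, n_transistors)
    return np.column_stack([w, l])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    genes = random_individual(5, rng)
    W, L = decode(genes)
    print(np.column_stack([W, L]))
```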

  6. A Bayesian Approach for Sensor Optimisation in Impact Identification

    PubMed Central

    Mallardo, Vincenzo; Sharif Khodaei, Zahra; Aliabadi, Ferri M. H.

    2016-01-01

    This paper presents a Bayesian approach for optimising the position of sensors aimed at impact identification in composite structures under operational conditions. The uncertainty in the sensor data has been represented by statistical distributions of the recorded signals. An optimisation strategy based on the genetic algorithm is proposed to find the best sensor combination for locating impacts on composite structures. A Bayesian-based objective function is adopted in the optimisation procedure as an indicator of the performance of meta-models developed for different sensor combinations to locate various impact events. To represent a real structure under operational load and to increase the reliability of the Structural Health Monitoring (SHM) system, the probability of malfunctioning sensors is included in the optimisation. The reliability and the robustness of the procedure are tested with experimental and numerical examples. Finally, the proposed optimisation algorithm is applied to a composite stiffened panel for both uniform and non-uniform probabilities of impact occurrence. PMID:28774064

  7. Optimisation of active suspension control inputs for improved vehicle handling performance

    NASA Astrophysics Data System (ADS)

    Čorić, Mirko; Deur, Joško; Kasać, Josip; Tseng, H. Eric; Hrovat, Davor

    2016-11-01

    Active suspension is commonly considered within the framework of vertical vehicle dynamics control aimed at improvements in ride comfort. This paper uses a collocation-type control variable optimisation tool to investigate the extent to which the fully active suspension (FAS) application can be broadened to the task of vehicle handling/cornering control. The optimisation approach is first applied to FAS-only actuator configurations and three types of double lane-change manoeuvres. The obtained optimisation results are used to gain insights into the different control mechanisms used by FAS to improve the handling performance in terms of path-following error reduction. For the same manoeuvres, the FAS performance is compared with the performance of different active steering and active differential actuators. The optimisation study is finally extended to combined FAS and active front- and/or rear-steering configurations to investigate whether they can use their complementary control authorities (over the vertical and lateral vehicle dynamics, respectively) to further improve the handling performance.

  8. Structural-electrical coupling optimisation for radiating and scattering performances of active phased array antenna

    NASA Astrophysics Data System (ADS)

    Wang, Congsi; Wang, Yan; Wang, Zhihai; Wang, Meng; Yuan, Shuai; Wang, Weifeng

    2018-04-01

    It is well known that calculating and reducing the radar cross section (RCS) of an active phased array antenna (APAA) are both difficult and complicated, and balancing the radiating and scattering performance while the RCS is reduced remains unresolved. Therefore, this paper develops a structure and scattering array factor coupling model of the APAA based on the phase errors of the radiated elements generated by structural distortion and installation error of the array. To obtain the optimal radiating and scattering performance, an integrated optimisation model is built to optimise the installation height of all the radiated elements in the normal direction of the array, in which the particle swarm optimisation method is adopted and the gain loss and scattering array factor are selected as the fitness function. The simulation indicates that the proposed coupling model and integrated optimisation method can effectively decrease the RCS while simultaneously guaranteeing the necessary radiating performance, which demonstrates an important application value in the engineering design and structural evaluation of APAA.
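
    A minimal global-best particle swarm loop of the kind described, searching over element installation heights, is sketched below; the quadratic toy fitness merely stands in for the paper's combined gain-loss and scattering-array-factor objective, which is not reproduced.

```python
import numpy as np

def pso(fitness, dim, n_particles=30, n_iter=200, bounds=(-1e-3, 1e-3),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise `fitness` with a standard global-best particle swarm."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))      # candidate element heights [m]
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
    g = np.argmin(pbest_f)
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([fitness(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = np.argmin(pbest_f)
        if pbest_f[g] < gbest_f:
            gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    return gbest, gbest_f

if __name__ == "__main__":
    target = np.linspace(-0.5e-3, 0.5e-3, 16)        # assumed "ideal" heights, illustrative only
    toy_fitness = lambda h: float(np.sum((h - target) ** 2))
    best, value = pso(toy_fitness, dim=16)
    print(value)
```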

  9. DryLab® optimised two-dimensional high performance liquid chromatography for differentiation of ephedrine and pseudoephedrine based methamphetamine samples.

    PubMed

    Andrighetto, Luke M; Stevenson, Paul G; Pearson, James R; Henderson, Luke C; Conlan, Xavier A

    2014-11-01

    In-silico optimised two-dimensional high performance liquid chromatographic (2D-HPLC) separations of a model methamphetamine seizure sample are described, where an excellent match between simulated and real separations was observed. Targeted separation of model compounds was completed with significantly reduced method development time. This separation was completed in the heart-cutting mode of 2D-HPLC, where C18 columns were used in both dimensions, taking advantage of the selectivity difference of methanol and acetonitrile as the mobile phases. This method development protocol is most significant when optimising the separation of chemically similar compounds, as it eliminates potentially hours of trial-and-error injections to identify the optimised experimental conditions. After only four screening injections, the gradient profile for both 2D-HPLC dimensions could be optimised via simulations, ensuring the baseline resolution of the diastereomers (ephedrine and pseudoephedrine) in 9.7 min. Depending on which diastereomer is present, the potential synthetic pathway can be categorized.

  10. Improving controllable adhesion on both rough and smooth surfaces with a hybrid electrostatic/gecko-like adhesive

    PubMed Central

    Ruffatto, Donald; Parness, Aaron; Spenko, Matthew

    2014-01-01

    This paper describes a novel, controllable adhesive that combines the benefits of electrostatic adhesives with gecko-like directional dry adhesives. When working in combination, the two technologies create a positive feedback cycle whose adhesion, depending on the surface type, is often greater than the sum of its parts. The directional dry adhesive brings the electrostatic adhesive closer to the surface, increasing its effect. Similarly, the electrostatic adhesion helps engage more of the directional dry adhesive fibrillar structures, particularly on rough surfaces. This paper presents the new hybrid adhesive's manufacturing process and compares its performance to three other adhesive technologies manufactured using a similar process: reinforced PDMS, electrostatic and directional dry adhesion. Tests were performed on a set of ceramic tiles with varying roughness to quantify its effect on shear adhesive force. The relative effectiveness of the hybrid adhesive increases as the surface roughness is increased. Experimental data are also presented for different substrate materials to demonstrate the enhanced performance achieved with the hybrid adhesive. Results show that the hybrid adhesive provides up to 5.1× greater adhesion than the electrostatic adhesive or directional dry adhesive technologies alone. PMID:24451392

  11. The Improvement of the Closed Bounded Volume (CBV) Evaluation Methods to Compute a Feasible Rough Machining Area Based on Faceted Models

    NASA Astrophysics Data System (ADS)

    Hadi Sutrisno, Himawan; Kiswanto, Gandjar; Istiyanto, Jos

    2017-06-01

    The rough machining stage is aimed at shaping a workpiece towards its final form. This process takes up a large proportion of the total machining time because it removes the bulk of the material. For certain models, rough machining has limitations, especially on surfaces such as turbine blades and impellers. CBV evaluation is one of the concepts used to detect areas admissible for machining. While previous research detected the CBV area using a pair of normal vectors, in this research the process is simplified by detecting the CBV area with a slicing line for each point cloud formed. The method consists of three steps: (1) triangulation of the CAD design model; (2) generation of CC points from the point cloud; (3) a slicing-line evaluation of each point cloud position (under the CBV or outside the CBV). The result of this evaluation method can be used as a tool for orientation set-up at each CC point position of the feasible areas in rough machining.

  12. Shape Optimisation of Holes in Loaded Plates by Minimisation of Multiple Stress Peaks

    DTIC Science & Technology

    2015-04-01

    Shape Optimisation of Holes in Loaded Plates by Minimisation of Multiple Stress Peaks, Witold Waldman and Manfred...minimising the peak tangential stresses on multiple segments around the boundary of a hole in a uniaxially-loaded or biaxially-loaded plate. It is based...Executive Summary: Aerospace

  13. Navigating catastrophes: Local but not global optimisation allows for macro-economic navigation of crises

    NASA Astrophysics Data System (ADS)

    Harré, Michael S.

    2013-02-01

    Two aspects of modern economic theory have dominated the recent discussion on the state of the global economy: Crashes in financial markets and whether or not traditional notions of economic equilibrium have any validity. We have all seen the consequences of market crashes: plummeting share prices, businesses collapsing and considerable uncertainty throughout the global economy. This seems contrary to what might be expected of a system in equilibrium where growth dominates the relatively minor fluctuations in prices. Recent work from within economics as well as by physicists, psychologists and computational scientists has significantly improved our understanding of the more complex aspects of these systems. With this interdisciplinary approach in mind, a behavioural economics model of local optimisation is introduced and three general properties are proven. The first is that under very specific conditions local optimisation leads to a conventional macro-economic notion of a global equilibrium. The second is that if both global optimisation and economic growth are required then under very mild assumptions market catastrophes are an unavoidable consequence. Third, if only local optimisation and economic growth are required then there is sufficient parametric freedom for macro-economic policy makers to steer an economy around catastrophes without overtly disrupting local optimisation.

  14. Optimisation techniques in vaginal cuff brachytherapy.

    PubMed

    Tuncel, N; Garipagaoglu, M; Kizildag, A U; Andic, F; Toy, A

    2009-11-01

    The aim of this study was to explore whether an in-house dosimetry protocol and optimisation method are able to produce a homogeneous dose distribution in the target volume, and how often optimisation is required in vaginal cuff brachytherapy. Treatment planning was carried out for 109 fractions in 33 patients who underwent high dose rate iridium-192 (Ir(192)) brachytherapy using Fletcher ovoids. Dose prescription and normalisation were performed to catheter-oriented lateral dose points (dps) within a range of 90-110% of the prescribed dose. The in-house vaginal apex point (Vk), alternative vaginal apex point (Vk'), International Commission on Radiation Units and Measurements (ICRU) rectal point (Rg) and bladder point (Bl) doses were calculated. Time-position optimisations were made considering dps, Vk and Rg doses. Keeping the Vk dose higher than 95% and the Rg dose less than 85% of the prescribed dose was intended. Target dose homogeneity, optimisation frequency and the relationship between prescribed dose, Vk, Vk', Rg and ovoid diameter were investigated. The mean target dose was 99+/-7.4% of the prescription dose. Optimisation was required in 92 out of 109 (83%) fractions. Ovoid diameter had a significant effect on Rg (p = 0.002), Vk (p = 0.018), Vk' (p = 0.034), minimum dps (p = 0.021) and maximum dps (p<0.001). Rg, Vk and Vk' doses with 2.5 cm diameter ovoids were significantly higher than with 2 cm and 1.5 cm ovoids. Catheter-oriented dose point normalisation provided a homogeneous dose distribution with a 99+/-7.4% mean dose within the target volume, requiring time-position optimisation.

  15. In vitro fertilisation in a small unit in the NHS

    PubMed Central

    Bromwich, Peter; Walker, Andrew; Kennedy, Stephen; Wiley, Mary; Little, David; Ross, Caroline; Sargent, Ian; Bellinger, Joan; O'Reilly, Helen; Lopez-Bernal, Andres; Brice, Amy L; Barlow, David

    1988-01-01

    In vitro fertilisation is one of the most effective new treatments for infertility, but financial restrictions have made it impossible for it to be widely carried out in the National Health Service. We report on the establishment of a small, largely self funded, unit that was set up with the help of the local health service management. All cycles are programmed so that most work is carried out during the working week; oocyte recoveries are performed as outpatient procedures without general anaesthesia and guided by ultrasound. Roughly a tenth of treatment cycles and roughly a fifth of embryo transfers resulted in a clinical pregnancy. PMID:3126964

  16. Effect of preventive (beta blocker) treatment, behavioural migraine management, or their combination on outcomes of optimised acute treatment in frequent migraine: randomised controlled trial.

    PubMed

    Holroyd, Kenneth A; Cottrell, Constance K; O'Donnell, Francis J; Cordingley, Gary E; Drew, Jana B; Carlson, Bruce W; Himawan, Lina

    2010-09-29

    To determine if the addition of preventive drug treatment (β blocker), brief behavioural migraine management, or their combination improves the outcome of optimised acute treatment in the management of frequent migraine. Randomised placebo controlled trial over 16 months from July 2001 to November 2005. Two outpatient sites in Ohio, USA. 232 adults (mean age 38 years; 79% female) with a diagnosis of migraine with or without aura according to International Headache Society classification of headache disorders criteria, who recorded at least three migraines with disability per 30 days (mean 5.5 migraines/30 days) during an optimised run-in of acute treatment. Addition of one of four preventive treatments to optimised acute treatment: β blocker (n=53), matched placebo (n=55), behavioural migraine management plus placebo (n=55), or behavioural migraine management plus β blocker (n=69). The primary outcome was change in migraines/30 days; secondary outcomes included change in migraine days/30 days and change in migraine specific quality of life scores. Mixed model analysis showed statistically significant (P≤0.05) differences in outcomes among the four added treatments for both the primary outcome (migraines/30 days) and the two secondary outcomes (change in migraine days/30 days and change in migraine specific quality of life scores). The addition of combined β blocker and behavioural migraine management (-3.3 migraines/30 days, 95% confidence interval -3.2 to -3.5), but not the addition of β blocker alone (-2.1 migraines/30 days, -1.9 to -2.2) or behavioural migraine management alone (-2.2 migraines/30 days, -2.0 to -2.4), improved outcomes compared with optimised acute treatment alone (-2.1 migraines/30 days, -1.9 to -2.2). For a clinically significant reduction (≥50%) in migraines/30 days, the number needed to treat for optimised acute treatment plus combined β blocker and behavioural migraine management was 3.1 compared with optimised acute treatment alone, 2.6 compared with optimised acute treatment plus β blocker, and 3.1 compared with optimised acute treatment plus behavioural migraine management. Results were consistent for the two secondary outcomes, and at both month 10 (the primary endpoint) and month 16. The addition of combined β blocker plus behavioural migraine management, but not the addition of β blocker alone or behavioural migraine management alone, improved outcomes of optimised acute treatment. Combined β blocker treatment and behavioural migraine management may improve outcomes in the treatment of frequent migraine. Clinical trials NCT00910689.

  17. Comparison of Tropical and Extratropical Gust Factors Using Observed and Simulated Data

    NASA Astrophysics Data System (ADS)

    Edwards, R. P.; Schroeder, J. L.

    2011-12-01

    Questions of whether differences exist between tropical cyclone (TC) and extratropical (ET) wind have been the subject of considerable debate. This study focuses on the behavior of the gust factor (GF), the ratio of a peak wind speed of a certain duration to a mean wind speed of a certain duration, for three types of data: TC, ET, and simulated. For this project, the Universal Spectrum, a normalized, averaged spectrum for wind, was un-normalized and used to create simulated wind speed time series at a variety of wind speeds. Additional time series were created after modifying the spectrum to simulate the additional low-frequency energy observed in the TC wind spectrum as well as the reduction of high-frequency energy caused by a mechanical anemometer. The TC and ET data used for this study were collected by Texas Tech University's mobile towers as part of various field efforts since 1998. Before comparisons were made, the database was divided into four roughness regimes based on the roughness length to ensure that differences observed in the turbulence statistics were not caused by differences in upstream terrain. The mean GF for the TC data set (open roughness regime), 1.49, was slightly higher than the ET value of 1.44 (Table 1). The distributions of GFs from each data type show similarities in shape between the base-simulated and ET data sets and between the TC and modified-simulated data sets (Figure 1). These similarities are expected given the spectral similarities between the TC and modified-simulated data sets, namely additional low-frequency energy relative to the ET and base-simulated data. These findings suggest that the higher amount of low-frequency energy present in the tropical wind spectrum is partially responsible for the resulting higher GF for the tropical cyclone data. However, the modest increase in GF from the base to the modified simulated data suggests that there are more factors at work.
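
    As a reminder of the definition used above, the gust factor is the ratio of a short-duration peak to a longer-duration mean; the sketch below computes a 3-s peak over 10-min mean GF from a 1-Hz wind record, with the synthetic record and the averaging durations chosen only for illustration.

```python
import numpy as np

def gust_factor(speed, fs=1.0, gust_s=3.0, mean_s=600.0):
    """GF = (peak gust_s-average speed) / (mean_s-average speed) for a record sampled at fs Hz."""
    speed = np.asarray(speed, dtype=float)
    n_gust = max(int(round(gust_s * fs)), 1)
    n_mean = max(int(round(mean_s * fs)), 1)
    kernel = np.ones(n_gust) / n_gust
    gusts = np.convolve(speed[:n_mean], kernel, mode="valid")   # running gust-duration means
    return gusts.max() / speed[:n_mean].mean()

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    t = np.arange(600)                                          # 10 min of 1-Hz data (synthetic)
    u = 10.0 + 1.5 * np.sin(2 * np.pi * t / 120.0) + rng.normal(0.0, 1.2, t.size)
    print(f"GF = {gust_factor(u):.2f}")
```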

  18. On-machine precision preparation and dressing of ball-headed diamond wheel for the grinding of fused silica

    NASA Astrophysics Data System (ADS)

    Chen, Mingjun; Li, Ziang; Yu, Bo; Peng, Hui; Fang, Zhen

    2013-09-01

    In the grinding of high quality fused silica parts with complex surfaces or structures using a small-diameter ball-headed metal-bonded diamond wheel, the existing dressing methods are not suitable for dressing the ball-headed diamond wheel precisely, because they are either on-line, in-process dressing methods that may cause collision problems, or they do not consider the effects of tool setting error and electrode wear. An on-machine precision preparation and dressing method is proposed for the ball-headed diamond wheel based on electrical discharge machining. Using this method, a cylindrical diamond wheel with small diameter is manufactured into a hemispherical-headed form. The obtained ball-headed diamond wheel is dressed after several grinding passes to recover the geometrical accuracy and sharpness lost through wheel wear. A tool setting method based on a high precision optical system is presented to reduce the wheel centre setting error and dimension error. The effect of electrode tool wear is investigated by electrical dressing experiments, and an electrode tool wear compensation model is established based on the experimental results, which show that the wear ratio coefficient K' tends to a constant value as the feed length of the electrode increases, with a mean value of 0.156. Grinding experiments on fused silica are carried out on a test bench to evaluate the performance of the preparation and dressing method. The experimental results show that the surface roughness of the finished workpiece is 0.03 μm. The effect of the grinding parameters and dressing frequency on the surface roughness is investigated based on the measurement results. This research provides an on-machine preparation and dressing method for ball-headed metal-bonded diamond wheels used in the grinding of fused silica, addressing both the tool setting method and the effect of electrode tool wear.

  19. The Use of Mathematical Modelling for Improving the Tissue Engineering of Organs and Stem Cell Therapy.

    PubMed

    Lemon, Greg; Sjoqvist, Sebastian; Lim, Mei Ling; Feliu, Neus; Firsova, Alexandra B; Amin, Risul; Gustafsson, Ylva; Stuewer, Annika; Gubareva, Elena; Haag, Johannes; Jungebluth, Philipp; Macchiarini, Paolo

    2016-01-01

    Regenerative medicine is a multidisciplinary field where continued progress relies on the incorporation of a diverse set of technologies from a wide range of disciplines within medicine, science and engineering. This review describes how one such technique, mathematical modelling, can be utilised to improve the tissue engineering of organs and stem cell therapy. Several case studies, taken from research carried out by our group, ACTREM, demonstrate the utility of mechanistic mathematical models to help aid the design and optimisation of protocols in regenerative medicine.

  20. Optimal port-based teleportation

    NASA Astrophysics Data System (ADS)

    Mozrzymas, Marek; Studziński, Michał; Strelchuk, Sergii; Horodecki, Michał

    2018-05-01

    The deterministic port-based teleportation (dPBT) protocol is a scheme in which a quantum state is guaranteed to be transferred to another system without unitary correction. We characterise the best achievable performance of dPBT when both the resource state and the measurement are optimised. Surprisingly, the best possible fidelity for an arbitrary number of ports and dimension of the teleported state is given by the largest eigenvalue of a particular matrix, the Teleportation Matrix. It encodes the relationship between a certain set of Young diagrams and emerges as the optimal solution to the relevant semidefinite programme.

  1. COAL UTILITY EVIRONMENTAL COST (CUECOST) WORKBOOK USER'S MANUAL

    EPA Science Inventory

    The document is a user's manual for the Coal Utility Environmental Cost (CUECost) workbook (an interrelated set of spreadsheets) and documents its development and the validity of the methods used to estimate installed capital and annualized costs. The CUECost workbook produces rough-or...

  2. A density based algorithm to detect cavities and holes from planar points

    NASA Astrophysics Data System (ADS)

    Zhu, Jie; Sun, Yizhong; Pang, Yueyong

    2017-12-01

    Delaunay-based shape reconstruction algorithms are widely used to approximate the shape of a set of planar points. However, these algorithms cannot ensure the optimality of the reconstructed cavity and hole boundaries. This inadequate reconstruction can be primarily attributed to the lack of an efficient mathematical formulation for the two structures (hole and cavity). In this paper, we develop an efficient algorithm for generating cavities and holes from planar points. The algorithm yields the final boundary through an iterative removal process on the Delaunay triangulation. Our algorithm is divided into two main steps, namely rough and refined shape reconstruction. The rough shape reconstruction performed by the algorithm is controlled by a relative parameter. Based on the rough result, the refined shape reconstruction aims to detect holes and pure cavities. A cavity or hole is conceptualized as a structure in which a low-density region is surrounded by a high-density region. With this structure, cavities and holes are characterized by a mathematical formulation, the compactness of a point, formed by the length variation of the edges incident to the point in the Delaunay triangulation. The boundaries of cavities and holes are then found by locating a gradient change in compactness across the point set. The experimental comparison with other shape reconstruction approaches shows that the proposed algorithm accurately yields the boundaries of cavities and holes for varying point set densities and distributions.
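
    A rough sketch of the 'compactness of point' idea is given below using SciPy's Delaunay triangulation: for each point, the lengths of its incident edges are collected and their variation summarised. The exact formulation and the iterative boundary-removal step of the paper are not reproduced, and the variation measure used here (coefficient of variation) is an assumption.

```python
import numpy as np
from scipy.spatial import Delaunay

def incident_edge_lengths(points):
    """Map each point index to the lengths of Delaunay edges incident to it."""
    tri = Delaunay(points)
    edges = set()
    for simplex in tri.simplices:                       # each triangle contributes three edges
        for a, b in ((0, 1), (1, 2), (0, 2)):
            edges.add(tuple(sorted((simplex[a], simplex[b]))))
    lengths = {i: [] for i in range(len(points))}
    for i, j in edges:
        d = float(np.linalg.norm(points[i] - points[j]))
        lengths[i].append(d)
        lengths[j].append(d)
    return lengths

def compactness(points):
    """Per-point edge-length variation: low values suggest dense, regular neighbourhoods."""
    lengths = incident_edge_lengths(points)
    return np.array([np.std(lengths[i]) / np.mean(lengths[i]) for i in range(len(points))])

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    pts = rng.random((400, 2))
    pts = pts[np.linalg.norm(pts - 0.5, axis=1) > 0.2]  # punch a hole to create a low-density region
    c = compactness(pts)
    print("median compactness:", np.median(c), "max:", c.max())
```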

  3. A summary of measured hydraulic data for the series of steady and unsteady flow experiments over patterned roughness

    USGS Publications Warehouse

    Collins, Dannie L.; Flynn, Kathleen M.

    1979-01-01

    This report summarizes and makes available to other investigators the measured hydraulic data collected during a series of experiments designed to study the effect of patterned bed roughness on steady and unsteady open-channel flow. The patterned effect of the roughness was obtained by clear-cut mowing of designated areas of an otherwise fairly dense coverage of coastal Bermuda grass approximately 250 mm high. All experiments were conducted in the Flood Plain Simulation Facility during the period of October 7 through December 12, 1974. Data from 18 steady flow experiments and 10 unsteady flow experiments are summarized. Measured data included are ground-surface elevations, grass heights and densities, water-surface elevations and point velocities for all experiments. Additional tables of water-surface elevations and measured point velocities are included for the clear-cut areas for most experiments. One complete set of average water-surface elevations and one complete set of measured point velocities are tabulated for each steady flow experiment. Time series data, on a 2-minute time interval, are tabulated for both water-surface elevations and point velocities for each unsteady flow experiment. All data collected, including individual records of water-surface elevations for the steady flow experiments, have been stored on computer disk storage and can be retrieved using the computer programs listed in the attachment to this report. (Kosco-USGS)

  4. H2RM: A Hybrid Rough Set Reasoning Model for Prediction and Management of Diabetes Mellitus.

    PubMed

    Ali, Rahman; Hussain, Jamil; Siddiqi, Muhammad Hameed; Hussain, Maqbool; Lee, Sungyoung

    2015-07-03

    Diabetes is a chronic disease characterized by high blood glucose level that results either from a deficiency of insulin produced by the body, or the body's resistance to the effects of insulin. Accurate and precise reasoning and prediction models greatly help physicians to improve diagnosis, prognosis and treatment procedures of different diseases. Though numerous models have been proposed to solve issues of diagnosis and management of diabetes, they have the following drawbacks: (1) they are restricted to one type of diabetes; (2) they lack understandability and explanatory power in their techniques and decisions; (3) they are limited either to prediction or to management over structured contents; and (4) they lack competence for the dimensionality and vagueness of patient data. To overcome these issues, this paper proposes a novel hybrid rough set reasoning model (H2RM) that resolves problems of inaccurate prediction and management of type-1 diabetes mellitus (T1DM) and type-2 diabetes mellitus (T2DM). For verification of the proposed model, experimental data from fifty patients, acquired from a local hospital in semi-structured format, is used. First, the data is transformed into structured format and then used for mining prediction rules. Rough set theory (RST) based techniques and algorithms are used to mine the prediction rules. During the online execution phase of the model, these rules are used to predict T1DM and T2DM for new patients. Furthermore, the proposed model assists physicians to manage diabetes using knowledge extracted from online diabetes guidelines. Correlation-based trend analysis techniques are used to manage diabetic observations. Experimental results demonstrate that the proposed model outperforms the existing methods with 95.9% average and balanced accuracies.

  5. H2RM: A Hybrid Rough Set Reasoning Model for Prediction and Management of Diabetes Mellitus

    PubMed Central

    Ali, Rahman; Hussain, Jamil; Siddiqi, Muhammad Hameed; Hussain, Maqbool; Lee, Sungyoung

    2015-01-01

    Diabetes is a chronic disease characterized by high blood glucose level that results either from a deficiency of insulin produced by the body, or the body’s resistance to the effects of insulin. Accurate and precise reasoning and prediction models greatly help physicians to improve diagnosis, prognosis and treatment procedures of different diseases. Though numerous models have been proposed to solve issues of diagnosis and management of diabetes, they have the following drawbacks: (1) they are restricted to one type of diabetes; (2) they lack understandability and explanatory power in their techniques and decisions; (3) they are limited either to prediction or to management over structured contents; and (4) they lack competence for the dimensionality and vagueness of patient data. To overcome these issues, this paper proposes a novel hybrid rough set reasoning model (H2RM) that resolves problems of inaccurate prediction and management of type-1 diabetes mellitus (T1DM) and type-2 diabetes mellitus (T2DM). For verification of the proposed model, experimental data from fifty patients, acquired from a local hospital in semi-structured format, is used. First, the data is transformed into structured format and then used for mining prediction rules. Rough set theory (RST) based techniques and algorithms are used to mine the prediction rules. During the online execution phase of the model, these rules are used to predict T1DM and T2DM for new patients. Furthermore, the proposed model assists physicians to manage diabetes using knowledge extracted from online diabetes guidelines. Correlation-based trend analysis techniques are used to manage diabetic observations. Experimental results demonstrate that the proposed model outperforms the existing methods with 95.9% average and balanced accuracies. PMID:26151207

  6. Impact of the ongoing Amazonian deforestation on local precipitation: A GCM simulation study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, G.K.; Sud, Y.C.; Atlas, R.

    1995-03-01

    Numerical simulation experiments were conducted to delineate the influence of in situ deforestation data on episodic rainfall by comparing two ensembles of five 5-day integrations performed with a recent version of the Goddard Laboratory for Atmospheres GCM that has a simple biosphere model (SiB). The first set, called control cases, used the standard SiB vegetation cover (comprising 12 biomes) and assumed a fully forested Amazonia, while the second set, called deforestation cases, distinguished the partially deforested regions of Amazonia as savanna. Except for this difference, all other initial and prescribed boundary conditions were kept identical in both sets of integrations. The differential analyses of these five cases show the following local effects of deforestation. (1) A discernible decrease in evapotranspiration of about 0.80 mm d⁻¹ (roughly 18%) that is quite robust in the averages for 1-, 2-, and 5-day forecasts. (2) A decrease in precipitation of about 1.18 mm d⁻¹ (roughly 8%) that begins to emerge even in 1-2-day averages and exhibits complex evolution that extends downstream with the winds. A larger decrease in precipitation as compared to evapotranspiration produces some drying and warming. The precipitation differences are consistent with the decrease in atmospheric moisture flux convergence and are consistent with earlier simulation studies of local climate change due to large-scale deforestation. (3) A significant decrease in the surface drag force (as a consequence of reduced surface roughness of deforested regions) that, in turn, affects the dynamical structure of moisture convergence and circulation. The surface winds increase significantly during the first day, and thereafter the increase is well maintained even in the 2- and 5-day averages. 34 refs., 9 figs., 2 tabs.

  7. Designing synthetic networks in silico: a generalised evolutionary algorithm approach.

    PubMed

    Smith, Robert W; van Sluijs, Bob; Fleck, Christian

    2017-12-02

    Evolution has led to the development of biological networks that are shaped by environmental signals. Elucidating, understanding and then reconstructing important network motifs is one of the principal aims of Systems & Synthetic Biology. Consequently, previous research has focused on finding optimal network structures and reaction rates that respond to pulses or produce stable oscillations. In this work we present a generalised in silico evolutionary algorithm that simultaneously finds network structures and reaction rates (genotypes) that can satisfy multiple defined objectives (phenotypes). The key step in our approach is to translate a schema/binary-based description of biological networks into systems of ordinary differential equations (ODEs). The ODEs can then be solved numerically to provide dynamic information about an evolved network's functionality. Initially we benchmark algorithm performance by finding optimal networks that can recapitulate concentration time-series data and by performing parameter optimisation on the oscillatory dynamics of the Repressilator. We go on to show the utility of our algorithm by finding new designs for robust synthetic oscillators, and by performing multi-objective optimisation to find a set of oscillators and feed-forward loops that are optimal at balancing different system properties. In sum, our results not only confirm and build on previous observations but also provide new designs of synthetic oscillators for experimental construction. In this work we have presented and tested an evolutionary algorithm that can design a biological network to produce a desired output. Given that previous designs of synthetic networks have been limited to subregions of network- and parameter-space, the use of our evolutionary optimisation algorithm will enable Synthetic Biologists to construct new systems with the potential to display a wider range of complex responses.
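
    The key decoding step, translating a binary/schema network description into a system of ODEs that can be integrated numerically, can be sketched as follows. The Hill-type rate laws, the three-gene repression ring (a Repressilator-like genotype) and all rate constants are assumptions used only to show the genotype-to-ODE translation, not the authors' exact formulation.

```python
import numpy as np
from scipy.integrate import solve_ivp

def make_ode(interactions, beta=10.0, k=1.0, n=3.0, deg=1.0):
    """Decode an interaction matrix into an ODE right-hand side.

    interactions[i, j] = +1 if species j activates species i, -1 if it represses it, 0 otherwise.
    Each nonzero edge contributes a Hill-type production term; every species decays linearly.
    Parameters are chosen so that the toy repression ring below oscillates.
    """
    interactions = np.asarray(interactions)

    def rhs(t, x):
        dx = -deg * x
        for i in range(len(x)):
            for j in range(len(x)):
                if interactions[i, j] > 0:            # activation edge
                    dx[i] += beta * x[j] ** n / (k ** n + x[j] ** n)
                elif interactions[i, j] < 0:          # repression edge
                    dx[i] += beta * k ** n / (k ** n + x[j] ** n)
        return dx

    return rhs

if __name__ == "__main__":
    # assumed genotype: a three-gene repression ring (gene 0 -| gene 1 -| gene 2 -| gene 0)
    genotype = np.array([[0, 0, -1],
                         [-1, 0, 0],
                         [0, -1, 0]])
    sol = solve_ivp(make_ode(genotype), (0.0, 60.0), [0.1, 0.2, 0.3], max_step=0.1)
    # a simple score an evolutionary loop might use: amplitude of the last species
    amplitude = sol.y[2].max() - sol.y[2].min()
    print(f"oscillation amplitude: {amplitude:.3f}")
```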

  8. Metric optimisation for analogue forecasting by simulated annealing

    NASA Astrophysics Data System (ADS)

    Bliefernicht, J.; Bárdossy, A.

    2009-04-01

    It is well known that weather patterns tend to recur from time to time. This property of the atmosphere is used by analogue forecasting techniques. They have a long history in weather forecasting and there are many applications predicting hydrological variables at the local scale for different lead times. The basic idea of the technique is to identify past weather situations which are similar (analogue) to the predicted one and to take the local conditions of the analogues as the forecast. But the forecast performance of the analogue method depends on user-defined criteria like the choice of the distance function and the size of the predictor domain. In this study we propose a new methodology for optimising both criteria by minimising the forecast error with simulated annealing. The performance of the methodology is demonstrated for the probability forecast of daily areal precipitation. It is compared with a traditional analogue forecasting algorithm, which is used operationally as an element of a hydrological forecasting system. The study is performed for several meso-scale catchments located in the Rhine basin in Germany. The methodology is validated by a jack-knife method in a perfect prognosis framework for a period of 48 years (1958-2005). The predictor variables are derived from the NCEP/NCAR reanalysis data set. The Brier skill score and the economic value are determined to evaluate the forecast skill and value of the technique. In this presentation we describe the concept of the optimisation algorithm and the outcome of the comparison. It will also be demonstrated how a decision maker should apply a probability forecast to maximise the economic benefit from it.
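
    The optimisation idea, simulated annealing over the weights of the distance function used to pick analogues, can be sketched on synthetic data as below; a real application would replace the toy predictor/predictand arrays with reanalysis fields and areal precipitation, and the cooling schedule is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(4)

# synthetic archive: 500 "days" of 5 predictor variables and one predictand (e.g. areal precipitation)
X = rng.normal(size=(500, 5))
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + 0.3 * rng.normal(size=500)   # only some predictors matter

def analogue_error(weights):
    """Leave-one-out error of a 1-nearest-analogue forecast under a weighted Euclidean distance."""
    w = np.abs(weights)
    err = 0.0
    for i in range(len(X)):
        d = np.sqrt(((X - X[i]) ** 2 * w).sum(axis=1))
        d[i] = np.inf                                            # exclude the target day itself
        err += (y[d.argmin()] - y[i]) ** 2
    return err / len(X)

def simulated_annealing(n_iter=300, t0=1.0, cooling=0.99):
    w = np.ones(X.shape[1])
    best_w, best_e = w.copy(), analogue_error(w)
    cur_e, temp = best_e, t0
    for _ in range(n_iter):
        cand = w + rng.normal(0.0, 0.2, size=w.shape)            # perturb the metric weights
        e = analogue_error(cand)
        if e < cur_e or rng.random() < np.exp((cur_e - e) / temp):
            w, cur_e = cand, e
            if e < best_e:
                best_w, best_e = cand.copy(), e
        temp *= cooling
    return best_w, best_e

if __name__ == "__main__":
    w, e = simulated_annealing()
    print("weights:", np.round(np.abs(w), 2), "LOO error:", round(e, 3))
```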

  9. Antiretroviral Therapy Optimisation without Genotype Resistance Testing: A Perspective on Treatment History Based Models

    PubMed Central

    Prosperi, Mattia C. F.; Rosen-Zvi, Michal; Altmann, André; Zazzi, Maurizio; Di Giambenedetto, Simona; Kaiser, Rolf; Schülter, Eugen; Struck, Daniel; Sloot, Peter; van de Vijver, David A.; Vandamme, Anne-Mieke; Sönnerborg, Anders

    2010-01-01

    Background: Although genotypic resistance testing (GRT) is recommended to guide combination antiretroviral therapy (cART), funding and/or facilities to perform GRT may not be available in low to middle income countries. Since treatment history (TH) impacts response to subsequent therapy, we investigated a set of statistical learning models to optimise cART in the absence of GRT information. Methods and Findings: The EuResist database was used to extract 8-week and 24-week treatment change episodes (TCE) with GRT and additional clinical, demographic and TH information. Random Forest (RF) classification was used to predict 8- and 24-week success, defined as undetectable HIV-1 RNA, comparing nested models including (i) GRT+TH and (ii) TH without GRT, using multiple cross-validation and area under the receiver operating characteristic curve (AUC). Virological success was achieved in 68.2% and 68.0% of TCE at 8- and 24-weeks (n = 2,831 and 2,579), respectively. RF (i) and (ii) showed comparable performances, with an average (st.dev.) AUC 0.77 (0.031) vs. 0.757 (0.035) at 8-weeks, 0.834 (0.027) vs. 0.821 (0.025) at 24-weeks. Sensitivity analyses, carried out on a data subset that included antiretroviral regimens commonly used in low to middle income countries, confirmed our findings. Training on subtype B and validation on non-B isolates resulted in a decline of performance for models (i) and (ii). Conclusions: Treatment history-based RF prediction models are comparable to GRT-based for classification of virological outcome. These results may be relevant for therapy optimisation in areas where availability of GRT is limited. Further investigations are required in order to account for different demographics, subtypes and different therapy switching strategies. PMID:21060792
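
    The modelling step, Random Forest classification of treatment success evaluated by cross-validated AUC, is easy to reproduce in outline. The sketch below uses a synthetic feature matrix as a stand-in for the EuResist treatment-change episodes, so only the workflow (not the reported AUCs) is illustrated.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# synthetic stand-in for TCE features: treatment history, demographics, (optionally) genotype
X, y = make_classification(n_samples=2800, n_features=40, n_informative=12,
                           weights=[0.32, 0.68], random_state=0)

rf = RandomForestClassifier(n_estimators=300, random_state=0)

# multiple cross-validation with AUC as the performance measure, mirroring the study design
auc = cross_val_score(rf, X, y, cv=10, scoring="roc_auc")
print(f"AUC: {auc.mean():.3f} +/- {auc.std():.3f}")
```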

  10. Maximum volume cuboids for arbitrarily shaped in-situ rock blocks as determined by discontinuity analysis—A genetic algorithm approach

    NASA Astrophysics Data System (ADS)

    Ülker, Erkan; Turanboy, Alparslan

    2009-07-01

    The block stone industry is one of the main commercial uses of rock. The economic potential of any block quarry depends on the recovery rate, which is defined as the total volume of useful rough blocks extractable from a fixed rock volume in relation to the total volume of moved material. The natural fracture system, the rock type(s) and the extraction method used directly influence the recovery rate. The major aims of this study are to establish a theoretical framework for optimising the extraction process in marble quarries for a given fracture system, and to predict the recovery rate of the excavated blocks. We have developed a new approach that takes into consideration only the fracture structure for maximum block recovery in block quarries. The complete model uses a linear approach based on basic geometric features of discontinuities for 3D models, a tree structure (TS) for individual investigation and, finally, a genetic algorithm (GA) for the obtained cuboid volume(s). We tested our new model in a selected marble quarry in the town of İscehisar (AFYONKARAHİSAR—TURKEY).

  11. Synthetically modified nano-cellulose for the removal of chromium: a green nanotech perspective.

    PubMed

    Jain, Priyanka; Varshney, Shilpa; Srivastava, Shalini

    2017-02-01

    Existing processes for the decontamination of heavy metals from water are found to be cost-prohibitive and energy-intensive, which runs counter to the concept of sustainable development. Green nanotechnology for water purification for ecosystem management, agriculture and industry is emerging as a leading global priority and offers advantages over the current state of water purification. Herein, the diafunctionalised polyaniline-modified nanocellulose composite sorbent (PANI-NCC) has been used to introduce amine and imine functionalities for the removal of trivalent and hexavalent chromium from water bodies. The fabricated nanobiomaterial has been authenticated by modern spectroscopic and microscopic techniques. The modified PANI-NCC is rod-like in shape and ~60 nm in size. The roughness and crystallinity index were also quantified and found to be 49.67 nm and 84.18%, respectively. The optimised experimental conditions provide efficient removal of trivalent [Cr(III)] (47.06 mg/g; 94.12%) and hexavalent [Cr(VI)] (48.92 mg/g; 97.84%) chromium from synthetic waste water. The fabricated nanobiosorbent is deemed a potent biosorbent for technological development to remove toxic metals from real environmental water samples.

  12. Effect of ion-implantation on surface characteristics of nickel titanium and titanium molybdenum alloy arch wires.

    PubMed

    Krishnan, Manu; Saraswathy, Seema; Sukumaran, Kalathil; Abraham, Kurian Mathew

    2013-01-01

    To evaluate the changes in surface roughness and frictional features of ion-implanted nickel titanium (NiTi) and titanium molybdenum alloy (TMA) arch wires compared with their conventional types in an in-vitro laboratory setup. Ion-implanted NiTi and low-friction TMA arch wires were assessed for surface roughness with scanning electron microscopy (SEM) and 3-dimensional (3D) optical profilometry. Frictional forces were studied in a universal testing machine. Surface roughness of the arch wires was determined as Root Mean Square (RMS) values in nanometers and Frictional Forces (FF) in grams. Mean values of RMS and FF were compared by Student's t test and one-way analysis of variance (ANOVA). SEM images showed a smooth topography for the ion-implanted versions. 3D optical profilometry demonstrated a reduction of RMS values by 58.43% for ion-implanted NiTi (795.95 to 330.87 nm) and 48.90% for the TMA group (463.28 to 236.35 nm) relative to controls. Nonetheless, the corresponding decrease in FF was only 29.18% for NiTi and 22.04% for TMA, suggesting partial correction of surface roughness and a disproportionate reduction in frictional forces with ion-implantation. Though the reductions were highly significant at P < 0.001, the relation between surface roughness and frictional forces remained inconclusive even after ion-implantation. The study showed that ion-implantation can significantly reduce the surface roughness of NiTi and TMA wires but could not achieve a similar reduction in frictional forces. This can be attributed to the inherent differences in stiffness and surface reactivity of NiTi and TMA wires when used in combination with stainless steel brackets, which needs further investigation.

  13. Surface changes of metal alloys and high-strength ceramics after ultrasonic scaling and intraoral polishing.

    PubMed

    Yoon, Hyung-In; Noh, Hyo-Mi; Park, Eun-Jin

    2017-06-01

    This study was to evaluate the effect of repeated ultrasonic scaling and surface polishing with intraoral polishing kits on the surface roughness of three different restorative materials. A total of 15 identical discs were fabricated with three different materials. The ultrasonic scaling was conducted for 20 seconds on the test surfaces. Subsequently, a multi-step polishing with recommended intraoral polishing kit was performed for 30 seconds. The 3D profiler and scanning electron microscopy were used to investigate surface integrity before scaling (pristine), after scaling, and after surface polishing for each material. Non-parametric Friedman and Wilcoxon signed rank sum tests were employed to statistically evaluate surface roughness changes of the pristine, scaled, and polished specimens. The level of significance was set at 0.05. Surface roughness values before scaling (pristine), after scaling, and polishing of the metal alloys were 3.02±0.34 µm, 2.44±0.72 µm, and 3.49±0.72 µm, respectively. Surface roughness of lithium disilicate increased from 2.35±1.05 µm (pristine) to 28.54±9.64 µm (scaling), and further increased after polishing (56.66±9.12 µm, P <.05). The zirconia showed the most increase in roughness after scaling (from 1.65±0.42 µm to 101.37±18.75 µm), while its surface roughness decreased after polishing (29.57±18.86 µm, P <.05). Ultrasonic scaling significantly changed the surface integrities of lithium disilicate and zirconia. Surface polishing with multi-step intraoral kit after repeated scaling was only effective for the zirconia, while it was not for lithium disilicate.

  14. Influence of oscillating and rotary cutting instruments with electric and turbine handpieces on tooth preparation surfaces.

    PubMed

    Geminiani, Alessandro; Abdel-Azim, Tamer; Ercoli, Carlo; Feng, Changyong; Meirelles, Luiz; Massironi, Domenico

    2014-07-01

    Rotary and nonrotary cutting instruments are used to produce specific characteristics on the axial and marginal surfaces of teeth being prepared for fixed restorations. Oscillating instruments have been suggested for tooth preparation, but no comparative surface roughness data are available. To compare the surface roughness of simulated tooth preparations produced by oscillating instruments versus rotary cutting instruments with turbine and electric handpieces. Different grit rotary cutting instruments were used to prepare Macor specimens (n=36) with 2 handpieces. The surface roughness obtained with rotary cutting instruments was compared with that produced by oscillating cutting instruments. The instruments used were as follows: coarse, then fine-grit rotary cutting instruments with a turbine (group CFT) or an electric handpiece (group CFE); coarse, then medium-grit rotary cutting instruments with a turbine (group CMT) or an electric handpiece (group CME); coarse-grit rotary cutting instruments with a turbine handpiece and oscillating instruments at a low-power (group CSL) or high-power setting (group CSH). A custom testing apparatus was used to test all instruments. The average roughness was measured for each specimen with a 3-dimensional optical surface profiler and compared with 1-way ANOVA and the Tukey honestly significant difference post hoc test for multiple comparisons (α=.05). Oscillating cutting instruments produced surface roughness values similar to those produced by similar grit rotary cutting instruments with a turbine handpiece. The electric handpiece produced smoother surfaces than the turbine regardless of rotary cutting instrument grit. Rotary cutting instruments with electric handpieces produced the smoothest surface, whereas the same instruments used with a turbine and oscillating instruments achieved similar surface roughness. Copyright © 2014 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  15. Surface changes of metal alloys and high-strength ceramics after ultrasonic scaling and intraoral polishing

    PubMed Central

    Noh, Hyo-Mi

    2017-01-01

    PURPOSE This study aimed to evaluate the effect of repeated ultrasonic scaling and surface polishing with intraoral polishing kits on the surface roughness of three different restorative materials. MATERIALS AND METHODS A total of 15 identical discs were fabricated with three different materials. The ultrasonic scaling was conducted for 20 seconds on the test surfaces. Subsequently, a multi-step polishing with the recommended intraoral polishing kit was performed for 30 seconds. A 3D profiler and scanning electron microscopy were used to investigate surface integrity before scaling (pristine), after scaling, and after surface polishing for each material. Non-parametric Friedman and Wilcoxon signed rank sum tests were employed to statistically evaluate surface roughness changes of the pristine, scaled, and polished specimens. The level of significance was set at 0.05. RESULTS Surface roughness values before scaling (pristine), after scaling, and after polishing of the metal alloys were 3.02±0.34 µm, 2.44±0.72 µm, and 3.49±0.72 µm, respectively. Surface roughness of lithium disilicate increased from 2.35±1.05 µm (pristine) to 28.54±9.64 µm (scaling), and further increased after polishing (56.66±9.12 µm, P<.05). The zirconia showed the greatest increase in roughness after scaling (from 1.65±0.42 µm to 101.37±18.75 µm), while its surface roughness decreased after polishing (29.57±18.86 µm, P<.05). CONCLUSION Ultrasonic scaling significantly changed the surface integrities of lithium disilicate and zirconia. Surface polishing with a multi-step intraoral kit after repeated scaling was effective only for zirconia, not for lithium disilicate. PMID:28680550
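
    A minimal sketch of the non-parametric analysis named above, assuming paired roughness measurements (pristine, scaled, polished) on the same specimens; the values are invented, and the pairwise follow-up shown is only one of several reasonable ways to structure the Wilcoxon comparisons.

    ```python
    # Placeholder repeated measurements (micrometres) on five specimens.
    import numpy as np
    from scipy.stats import friedmanchisquare, wilcoxon

    pristine = np.array([2.1, 2.5, 1.9, 2.8, 2.3])
    scaled   = np.array([27.0, 31.2, 22.4, 35.5, 26.1])
    polished = np.array([55.0, 60.3, 49.8, 58.7, 52.4])

    # Global test across the three repeated conditions
    stat, p = friedmanchisquare(pristine, scaled, polished)
    print(f"Friedman: chi2 = {stat:.2f}, p = {p:.4f}")

    # Paired, non-parametric follow-up comparisons (alpha = 0.05)
    pairs = {"pristine vs scaled": (pristine, scaled),
             "scaled vs polished": (scaled, polished)}
    for name, (a, b) in pairs.items():
        w, pw = wilcoxon(a, b)
        print(f"{name}: W = {w:.1f}, p = {pw:.4f}")
    ```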

  16. Potential impacts of robust surface roughness indexes on DTM-based segmentation

    NASA Astrophysics Data System (ADS)

    Trevisani, Sebastiano; Rocca, Michele

    2017-04-01

    In this study, we explore the impact of robust surface texture indexes based on MAD (median absolute differences), implemented by Trevisani and Rocca (2015), in the unsupervised morphological segmentation of an alpine basin. The area was previously the object of a geomorphometric analysis consisting of a roughness-based segmentation of the landscape (Trevisani et al. 2012); the roughness indexes were calculated on a high-resolution DTM derived from airborne Lidar, using the variogram as the estimator. The calculated roughness indexes were then used for the fuzzy clustering (Odeh et al., 1992; Burrough et al., 2000) of the basin, revealing the highly informative geomorphometric content of the roughness-based indexes. However, the fuzzy clustering revealed a high fuzziness and a high degree of mixing between textural classes; this was ascribed both to the morphological complexity of the basin and to the high sensitivity of the variogram to non-stationarity and noise. Accordingly, we explore how the newly implemented roughness indexes based on MAD affect the morphological segmentation of the studied basin. References Burrough, P.A., Van Gaans, P.F.M., MacMillan, R.A., 2000. High-resolution landform classification using fuzzy k-means. Fuzzy Sets and Systems 113, 37-52. Odeh, I.O.A., McBratney, A.B., Chittleborough, D.J., 1992. Soil pattern recognition with fuzzy-c-means: application to classification and soil-landform interrelationships. Soil Sciences Society of America Journal 56, 505-516. Trevisani, S., Cavalli, M. & Marchi, L. 2012, "Surface texture analysis of a high-resolution DTM: Interpreting an alpine basin", Geomorphology, vol. 161-162, pp. 26-39. Trevisani, S. & Rocca, M. 2015, "MAD: Robust image texture analysis for applications in high resolution geomorphometry", Computers and Geosciences, vol. 81, pp. 78-92.
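
    The sketch below is a simplified reading of a MAD-type roughness index (median absolute elevation differences at a short lag, summarised over a moving window); it is not the Trevisani and Rocca (2015) implementation, and the detrending step, lag, and window size are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import median_filter, uniform_filter

    def mad_roughness(dtm, lag=1, window=9):
        """Robust roughness of a 2-D elevation grid (same units as the DTM)."""
        # Remove the broad trend so that only fine-scale relief contributes
        residual = dtm - uniform_filter(dtm, size=window)
        # Absolute elevation differences at the chosen lag, row- and column-wise
        dx = np.abs(np.roll(residual, -lag, axis=1) - residual)
        dy = np.abs(np.roll(residual, -lag, axis=0) - residual)
        diff = 0.5 * (dx + dy)
        # Robust local summary: moving-window median of the differences
        return median_filter(diff, size=window)

    rng = np.random.default_rng(1)
    dtm = np.cumsum(rng.normal(size=(128, 128)), axis=0)  # synthetic relief
    rough = mad_roughness(dtm, lag=2, window=9)
    print(rough.shape, float(rough.mean()))
    ```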

  17. Swept Mechanism of Micro-Milling Tool Geometry Effect on Machined Oxygen Free High Conductivity Copper (OFHC) Surface Roughness

    PubMed Central

    Shi, Zhenyu; Liu, Zhanqiang; Li, Yuchao; Qiao, Yang

    2017-01-01

    Cutting tool geometry must be carefully considered in micro-cutting because it has a significant effect on the topography and accuracy of the machined surface, particularly because the uncut chip thickness is comparable to the cutting edge radius. The objective of this paper was to clarify the mechanism by which cutting tool geometry influences the surface topography in the micro-milling process. Four different cutting tools, two two-fluted end milling tools with helix angles of 15° and 30° and two three-fluted end milling tools with helix angles of 15° and 30°, were investigated by combining theoretical modeling with experimental research. The tool geometry was mathematically modeled through coordinate translation and transformation to bring all three cutting edges at the cutting tool tip into the same coordinate system. Swept mechanisms, minimum uncut chip thickness, and cutting tool run-out were considered in modeling the surface roughness parameters (the height of surface roughness Rz and average surface roughness Ra) based on the established mathematical model. A set of cutting experiments was carried out using the four differently shaped cutting tools. It was found that the sweeping volume of the cutting tool increases with the decrease of both the cutting tool helix angle and the flute number. Coarser machined surfaces and more non-uniform surface topography are generated as the sweeping volume increases. The outcome of this research should bring about new methodologies for micro-end milling tool design and manufacturing. The machined surface roughness can be improved by appropriately selecting the tool geometrical parameters. PMID:28772479
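
    For readers unfamiliar with the two roughness parameters named above, the short sketch below computes Ra (arithmetic mean deviation from the mean line) and one common convention for Rz (mean peak-to-valley height over five sampling lengths) from a synthetic profile; the profile and the Rz convention are illustrative assumptions, not the paper's model.

    ```python
    import numpy as np

    def ra(profile):
        """Arithmetic mean deviation from the mean line."""
        return float(np.mean(np.abs(profile - profile.mean())))

    def rz(profile, segments=5):
        """Mean peak-to-valley height over equal sampling lengths."""
        parts = np.array_split(profile - profile.mean(), segments)
        return float(np.mean([p.max() - p.min() for p in parts]))

    # Synthetic profile: a periodic texture plus measurement noise
    x = np.linspace(0.0, 1.0, 2000)
    z = 0.8 * np.sin(40 * np.pi * x) + 0.1 * np.random.default_rng(0).normal(size=x.size)
    print(f"Ra = {ra(z):.3f}, Rz = {rz(z):.3f} (same units as z)")
    ```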

  18. Genetic programming approach to evaluate complexity of texture images

    NASA Astrophysics Data System (ADS)

    Ciocca, Gianluigi; Corchs, Silvia; Gasparini, Francesca

    2016-11-01

    We adopt genetic programming (GP) to define a measure that can predict complexity perception of texture images. We perform psychophysical experiments on three different datasets to collect data on the perceived complexity. The subjective data are used for training, validation, and testing of the proposed measure. These data are also used to evaluate several possible candidate measures of texture complexity related to both low-level and high-level image features. We select four of them (namely roughness, number of regions, chroma variance, and memorability) to be combined in a GP framework. This approach allows a nonlinear combination of the measures and could give hints on how the related image features interact in complexity perception. The proposed complexity measure M exhibits Pearson correlation coefficients of 0.890 on the training set, 0.728 on the validation set, and 0.724 on the test set. M outperforms each of the single measures considered. From the statistical analysis of different GP candidate solutions, we found that the roughness measure evaluated on the gray-level image is the most dominant one, followed by the memorability, the number of regions, and finally the chroma variance.

  19. Effect of Finishing and Polishing on Roughness and Gloss of Lithium Disilicate and Lithium Silicate Zirconia Reinforced Glass Ceramic for CAD/CAM Systems.

    PubMed

    Vichi, A; Fonzar, R Fabian; Goracci, C; Carrabba, M; Ferrari, M

    To assess the efficacy of dedicated finishing/polishing systems on roughness and gloss of VITA Suprinity and IPS e.max CAD. A total of 24 blocks of Suprinity and 24 of e.max were cut into a wedge shape using an InLab MC-XL milling unit. After crystallization, the 24 Suprinity wedges were divided into four subgroups: group A.1: Suprinity Polishing Set Clinical used for 30 seconds and group A.2: for 60 seconds; group A.3: VITA Akzent Plus Paste; and group A.4: spray. The 24 e.max wedges (group B) were divided into four subgroups according to the finishing procedure: group B.1: Optrafine Ceramic Polishing System for 30 seconds and group B.2: for 60 seconds; group B.3: IPS e.max CAD Crystall/Glaze paste; and group B.4: spray. After finishing/polishing, gloss was assessed with a glossmeter and roughness evaluated with a profilometer. Results were analyzed by applying a two-way analysis of variance for gloss and another for roughness (α=0.05). One specimen per subgroup was observed with a scanning electron microscope. For roughness, material and surface treatment were significant factors (p<0.001). Suprinity exhibited significantly lower roughness than e.max. The Material-Surface Treatment interaction was also statistically significant (p=0.026). For gloss, both material and surface treatment were significant factors (p<0.001). VITA Suprinity showed significantly higher gloss than e.max. The Material-Surface Treatment interaction was also statistically significant (p<0.001). Manual finishing/polishing for 60 seconds and glazing paste are the most effective procedures in lowering the roughness of CAD/CAM silica-based glass ceramics. Manual finishing/polishing for 60 seconds allows milled silica-based glass ceramics to yield a higher gloss. VITA Suprinity displayed higher polishability than IPS e.max CAD.
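
    A hedged sketch of the two-way analysis described above (material × surface treatment, with their interaction), using the statsmodels formula interface on a small fabricated data frame; the numbers are placeholders chosen only so the model has residual degrees of freedom.

    ```python
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    # Two replicates per material/treatment cell, values invented
    treatments = ["polish30", "polish60", "glaze", "spray"]
    df = pd.DataFrame({
        "material":  ["Suprinity"] * 8 + ["emax"] * 8,
        "treatment": treatments * 4,
        "roughness": [0.42, 0.31, 0.28, 0.55, 0.40, 0.33, 0.30, 0.52,
                      0.61, 0.44, 0.39, 0.70, 0.58, 0.47, 0.41, 0.73],
    })

    # Two-way ANOVA with the material x treatment interaction term
    model = ols("roughness ~ C(material) * C(treatment)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))
    ```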

  20. Systemic solutions for multi-benefit water and environmental management.

    PubMed

    Everard, Mark; McInnes, Robert

    2013-09-01

    The environmental and financial costs of inputs to, and unintended consequences arising from narrow consideration of outputs from, water and environmental management technologies highlight the need for low-input solutions that optimise outcomes across multiple ecosystem services. Case studies examining the inputs and outputs associated with several ecosystem-based water and environmental management technologies reveal a range from those that differ little from conventional electro-mechanical engineering techniques through methods, such as integrated constructed wetlands (ICWs), designed explicitly as low-input systems optimising ecosystem service outcomes. All techniques present opportunities for further optimisation of outputs, and hence for greater cumulative public value. We define 'systemic solutions' as "…low-input technologies using natural processes to optimise benefits across the spectrum of ecosystem services and their beneficiaries". They contribute to sustainable development by averting unintended negative impacts and optimising benefits to all ecosystem service beneficiaries, increasing net economic value. Legacy legislation addressing issues in a fragmented way, associated 'ring-fenced' budgets and established management assumptions represent obstacles to implementing 'systemic solutions'. However, flexible implementation of legacy regulations recognising their primary purpose, rather than slavish adherence to detailed sub-clauses, may achieve greater overall public benefit through optimisation of outcomes across ecosystem services. Systemic solutions are not a panacea if applied merely as 'downstream' fixes, but are part of, and a means to accelerate, broader culture change towards more sustainable practice. This necessarily entails connecting a wider network of interests in the formulation and design of mutually-beneficial systemic solutions, including for example spatial planners, engineers, regulators, managers, farming and other businesses, and researchers working on ways to quantify and optimise delivery of ecosystem services. Copyright © 2013 Elsevier B.V. All rights reserved.

  1. Topology optimisation of micro fluidic mixers considering fluid-structure interactions with a coupled Lattice Boltzmann algorithm

    NASA Astrophysics Data System (ADS)

    Munk, David J.; Kipouros, Timoleon; Vio, Gareth A.; Steven, Grant P.; Parks, Geoffrey T.

    2017-11-01

    Recently, the study of micro fluidic devices has gained much interest in various fields from biology to engineering. In the constant development cycle, the need to optimise the topology of the interior of these devices, where there are two or more optimality criteria, is always present. In this work, twin physical situations, whereby optimal fluid mixing in the form of vorticity maximisation is accompanied by the requirement that the casing in which the mixing takes place has the best structural performance in terms of the greatest specific stiffness, are considered. In the steady state of mixing this also means that the stresses in the casing are as uniform as possible, thus giving a desired operating life with minimum weight. The ultimate aim of this research is to couple two key disciplines, fluids and structures, into a topology optimisation framework, which shows fast convergence for multidisciplinary optimisation problems. This is achieved by developing a bi-directional evolutionary structural optimisation algorithm that is directly coupled to the Lattice Boltzmann method, used for simulating the flow in the micro fluidic device, for the objectives of minimum compliance and maximum vorticity. The needs for the exploration of larger design spaces and to produce innovative designs make meta-heuristic algorithms, such as genetic algorithms, particle swarms and Tabu Searches, less efficient for this task. The multidisciplinary topology optimisation framework presented in this article is shown to increase the stiffness of the structure from the datum case and produce physically acceptable designs. Furthermore, the topology optimisation method outperforms a Tabu Search algorithm in designing the baffle to maximise the mixing of the two fluids.

  2. Person-centred medicines optimisation policy in England: an agenda for research on polypharmacy.

    PubMed

    Heaton, Janet; Britten, Nicky; Krska, Janet; Reeve, Joanne

    2017-01-01

    Aim To examine how patient perspectives and person-centred care values have been represented in documents on medicines optimisation policy in England. There has been growing support in England for a policy of medicines optimisation as a response to the rise of problematic polypharmacy. Conceptually, medicines optimisation differs from the medicines management model of prescribing in being based around the patient rather than processes and systems. This critical examination of current official and independent policy documents questions how central the patient is in them and whether relevant evidence has been utilised in their development. A documentary analysis of reports on medicines optimisation published by the Royal Pharmaceutical Society (RPS), The King's Fund and National Institute for Health and Social Care Excellence since 2013. The analysis draws on a non-systematic review of research on patient experiences of using medicines. Findings The reports varied in their inclusion of patient perspectives and person-centred care values, and in the extent to which they drew on evidence from research on patients' experiences of polypharmacy and medicines use. In the RPS report, medicines optimisation is represented as being a 'step change' from medicines management, in contrast to the other documents which suggest that it is facilitated by the systems and processes that comprise the latter model. Only The King's Fund report considered evidence from qualitative studies of people's use of medicines. However, these studies are not without their limitations. We suggest five ways in which researchers could improve this evidence base and so inform the development of future policy: by facilitating reviews of existing research; conducting studies of patient experiences of polypharmacy and multimorbidity; evaluating medicines optimisation interventions; making better use of relevant theories, concepts and tools; and improving patient and public involvement in research and in guideline development.

  3. Feature Selection and Effective Classifiers.

    ERIC Educational Resources Information Center

    Deogun, Jitender S.; Choubey, Suresh K.; Raghavan, Vijay V.; Sever, Hayri

    1998-01-01

    Develops and analyzes four algorithms for feature selection in the context of rough set methodology. Experimental results confirm the expected relationship between the time complexity of these algorithms and the classification accuracy of the resulting upper classifiers. When compared, results of upper classifiers perform better than lower…

  4. Design of cognitive engine for cognitive radio based on the rough sets and radial basis function neural network

    NASA Astrophysics Data System (ADS)

    Yang, Yanchao; Jiang, Hong; Liu, Congbin; Lan, Zhongli

    2013-03-01

    Cognitive radio (CR) is an intelligent wireless communication system which can dynamically adjust the parameters to improve system performance depending on the environmental change and quality of service. The core technology for CR is the design of cognitive engine, which introduces reasoning and learning methods in the field of artificial intelligence, to achieve the perception, adaptation and learning capability. Considering the dynamical wireless environment and demands, this paper proposes a design of cognitive engine based on the rough sets (RS) and radial basis function neural network (RBF_NN). The method uses experienced knowledge and environment information processed by RS module to train the RBF_NN, and then the learning model is used to reconfigure communication parameters to allocate resources rationally and improve system performance. After training learning model, the performance is evaluated according to two benchmark functions. The simulation results demonstrate the effectiveness of the model and the proposed cognitive engine can effectively achieve the goal of learning and reconfiguration in cognitive radio.

  5. High-frequency Born synthetic seismograms based on coupled normal modes

    USGS Publications Warehouse

    Pollitz, F.

    2011-01-01

    High-frequency and full waveform synthetic seismograms on a 3-D laterally heterogeneous earth model are simulated using the theory of coupled normal modes. The set of coupled integral equations that describe the 3-D response is simplified into a set of uncoupled integral equations by using the Born approximation to calculate scattered wavefields and the pure-path approximation to modulate the phase of incident and scattered wavefields. This depends upon a decomposition of the aspherical structure into smooth and rough components. The uncoupled integral equations are discretized and solved in the frequency domain, and time domain results are obtained by inverse Fourier transform. Examples show the utility of the normal mode approach to synthesize the seismic wavefields resulting from interaction with a combination of rough and smooth structural heterogeneities. This approach is applied to ~4 Hz shallow crustal wave propagation around the site of the San Andreas Fault Observatory at Depth (SAFOD). © The Author. Geophysical Journal International © 2011 RAS.

  6. A Hybrid Classification System for Heart Disease Diagnosis Based on the RFRS Method.

    PubMed

    Liu, Xiao; Wang, Xiaoli; Su, Qiang; Zhang, Mo; Zhu, Yanhong; Wang, Qiugen; Wang, Qian

    2017-01-01

    Heart disease is one of the most common diseases in the world. The objective of this study is to aid the diagnosis of heart disease using a hybrid classification system based on the ReliefF and Rough Set (RFRS) method. The proposed system contains two subsystems: the RFRS feature selection system and a classification system with an ensemble classifier. The first system includes three stages: (i) data discretization, (ii) feature extraction using the ReliefF algorithm, and (iii) feature reduction using the heuristic Rough Set reduction algorithm that we developed. In the second system, an ensemble classifier is proposed based on the C4.5 classifier. The Statlog (Heart) dataset, obtained from the UCI database, was used for experiments. A maximum classification accuracy of 92.59% was achieved according to a jackknife cross-validation scheme. The results demonstrate that the performance of the proposed system is superior to the performances of previously reported classification techniques.
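
    As a rough illustration of the feature-weighting idea in the first subsystem, the sketch below implements a basic Relief-style estimator (single nearest hit and miss, two classes) on a public dataset; it is a simplified relative of ReliefF, not the RFRS code, and the sample count and distance metric are assumptions.

    ```python
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.preprocessing import MinMaxScaler

    X, y = load_breast_cancer(return_X_y=True)
    X = MinMaxScaler().fit_transform(X)          # put all features on one scale

    rng = np.random.default_rng(0)
    n_samples, n_features = X.shape
    weights = np.zeros(n_features)
    m = 200                                      # number of sampled instances

    for i in rng.integers(0, n_samples, m):
        dists = np.abs(X - X[i]).sum(axis=1)     # L1 distance to every instance
        dists[i] = np.inf                        # never pick the instance itself
        same = y == y[i]
        hit = np.argmin(np.where(same, dists, np.inf))    # nearest same-class
        miss = np.argmin(np.where(~same, dists, np.inf))  # nearest other-class
        weights += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / m

    top = np.argsort(weights)[::-1][:5]
    print("highest-weight features:", top, weights[top].round(3))
    ```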

  7. A new web-based modelling tool (Websim-MILQ) aimed at optimisation of thermal treatments in the dairy industry.

    PubMed

    Schutyser, M A I; Straatsma, J; Keijzer, P M; Verschueren, M; De Jong, P

    2008-11-30

    In the framework of a cooperative EU research project (MILQ-QC-TOOL), a web-based modelling tool (Websim-MILQ) was developed for optimisation of thermal treatments in the dairy industry. The web-based tool enables optimisation of thermal treatments with respect to product safety, quality and costs. It can be applied to existing products and processes but also to reduce time to market for new products. Important aspects of the tool are its user-friendliness and its specifications customised to the needs of small dairy companies. To challenge the web-based tool, it was applied for optimisation of thermal treatments in 16 dairy companies producing yoghurt, fresh cream, chocolate milk and cheese. Optimisation with WebSim-MILQ resulted in concrete improvements with respect to risk of microbial contamination, cheese yield, fouling and production costs. In this paper we illustrate the use of WebSim-MILQ for optimisation of a cheese milk pasteurisation process in which we increased the cheese yield (1 extra cheese for every 100 cheeses produced from the same amount of milk) and reduced the risk of contamination of pasteurised cheese milk with thermoresistant streptococci from critical to negligible. In another case we demonstrate the advantage of changing from an indirect to a direct heating method for a UHT process, resulting in 80% less fouling while improving product quality and maintaining product safety.

  8. A brief understanding of process optimisation in microwave-assisted extraction of botanical materials: options and opportunities with chemometric tools.

    PubMed

    Das, Anup Kumar; Mandal, Vivekananda; Mandal, Subhash C

    2014-01-01

    Extraction forms the most basic step in research on natural products for drug discovery. A poorly optimised and planned extraction methodology can jeopardise the entire mission. The aim is to provide a vivid picture of different chemometric tools and of planning for process optimisation and method development in the extraction of botanical material, with emphasis on microwave-assisted extraction (MAE). A review of studies involving the application of chemometric tools in combination with MAE of botanical materials was undertaken in order to discover what the significant extraction factors were. To optimise a response by fine-tuning those factors, experimental design or statistical design of experiments (DoE), a core area of study in chemometrics, was then used for statistical analysis and interpretation. In this review, a brief explanation of the different aspects and methodologies related to MAE of botanical materials that were subjected to experimental design, along with some general chemometric tools and the steps involved in the practice of MAE, is presented. A detailed study on various factors and responses involved in the optimisation is also presented. This article will assist in obtaining a better insight into the chemometric strategies of process optimisation and method development, which will in turn improve the decision-making process in selecting influential extraction parameters. Copyright © 2013 John Wiley & Sons, Ltd.

  9. Mechanisms of wave‐driven water level variability on reef‐fringed coastlines

    USGS Publications Warehouse

    Buckley, Mark L.; Lowe, Ryan J.; Hansen, Jeff E; van Dongeren, Ap R.; Storlazzi, Curt

    2018-01-01

    Wave‐driven water level variability (and runup at the shoreline) is a significant cause of coastal flooding induced by storms. Wave runup is challenging to predict, particularly along tropical coral reef‐fringed coastlines due to the steep bathymetric profiles and large bottom roughness generated by reef organisms, which can violate assumptions in conventional models applied to open sandy coastlines. To investigate the mechanisms of wave‐driven water level variability on a reef‐fringed coastline, we performed a set of laboratory flume experiments on an along‐shore uniform bathymetric profile with and without bottom roughness. Wave setup and waves at frequencies lower than the incident sea‐swell forcing (infragravity waves) were found to be the dominant components of runup. These infragravity waves were positively correlated with offshore wave groups, signifying they were generated in the surf zone by the oscillation of the breakpoint. On the reef flat and at the shoreline, the low‐frequency waves formed a standing wave pattern with energy concentrated at the natural frequencies of the reef flat, indicating resonant amplification. Roughness elements used in the flume to mimic large reef bottom roughness reduced low frequency motions on the reef flat and reduced wave run up by 30% on average, compared to the runs over a smooth bed. These results provide insight into sea‐swell and infragravity wave transformation and wave setup dynamics on steep‐sloped coastlines, and the effect that future losses of reef bottom roughness may have on coastal flooding along reef‐fringed coasts.

  10. Implementation and comparative analysis of the optimisations produced by evolutionary algorithms for the parameter extraction of PSP MOSFET model

    NASA Astrophysics Data System (ADS)

    Hadia, Sarman K.; Thakker, R. A.; Bhatt, Kirit R.

    2016-05-01

    The study proposes an application of evolutionary algorithms, specifically an artificial bee colony (ABC), variant ABC and particle swarm optimisation (PSO), to extract the parameters of a metal oxide semiconductor field effect transistor (MOSFET) model. These algorithms are applied to the MOSFET parameter extraction problem using a Pennsylvania surface potential model. MOSFET parameter extraction procedures involve reducing the error between measured and modelled data. This study shows that the ABC algorithm optimises the parameter values based on intelligent activities of honey bee swarms. Some modifications have also been applied to the basic ABC algorithm. Particle swarm optimisation is a population-based stochastic optimisation method that is based on bird flocking activities. The performances of these algorithms are compared with respect to the quality of the solutions. The simulation results of this study show that the PSO algorithm performs better than the variant ABC and basic ABC algorithms for the parameter extraction of the MOSFET model; also, the implementation of the ABC algorithm is shown to be simpler than that of the PSO algorithm.
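
    A minimal global-best PSO sketch of the parameter-extraction idea: minimise the squared error between measured and modelled data. A toy square-law drain-current model with parameters k and Vth stands in for the PSP surface-potential model, which is far more involved; all names and values below are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def drain_current(vgs, k, vth):              # toy stand-in model, not PSP
        return k * np.maximum(vgs - vth, 0.0) ** 2

    vgs = np.linspace(0.0, 2.0, 40)
    measured = drain_current(vgs, k=0.55, vth=0.62) + rng.normal(0.0, 1e-3, vgs.size)

    def cost(params):                            # sum of squared residuals
        k, vth = params
        return float(np.sum((drain_current(vgs, k, vth) - measured) ** 2))

    n_particles, n_iters = 30, 200
    lo, hi = np.array([0.0, 0.0]), np.array([2.0, 2.0])
    pos = rng.uniform(lo, hi, size=(n_particles, 2))
    vel = np.zeros_like(pos)
    pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
    gbest = pbest[pbest_cost.argmin()].copy()

    w, c1, c2 = 0.72, 1.5, 1.5                   # inertia and acceleration weights
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, 2))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        costs = np.array([cost(p) for p in pos])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
        gbest = pbest[pbest_cost.argmin()].copy()

    print("extracted k, Vth:", gbest)            # should approach 0.55, 0.62
    ```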

  11. Reference voltage calculation method based on zero-sequence component optimisation for a regional compensation DVR

    NASA Astrophysics Data System (ADS)

    Jian, Le; Cao, Wang; Jintao, Yang; Yinge, Wang

    2018-04-01

    This paper describes the design of a dynamic voltage restorer (DVR) that can simultaneously protect several sensitive loads from voltage sags in a region of an MV distribution network. A novel reference voltage calculation method based on zero-sequence voltage optimisation is proposed for this DVR to optimise cost-effectiveness in compensation of voltage sags with different characteristics in an ungrounded neutral system. Based on a detailed analysis of the characteristics of voltage sags caused by different types of faults and the effect of the wiring mode of the transformer on these characteristics, the optimisation target of the reference voltage calculation is presented with several constraints. The reference voltages under all types of voltage sags are calculated by optimising the zero-sequence component, which can reduce the degree of swell in the phase-to-ground voltage after compensation to the maximum extent and can improve the symmetry degree of the output voltages of the DVR, thereby effectively increasing the compensation ability. The validity and effectiveness of the proposed method are verified by simulation and experimental results.

  12. L Band Brightness Temperature Observations over a Corn Canopy during the Entire Growth Cycle

    PubMed Central

    Joseph, Alicia T.; van der Velde, Rogier; O’Neill, Peggy E.; Choudhury, Bhaskar J.; Lang, Roger H.; Kim, Edward J.; Gish, Timothy

    2010-01-01

    During a field campaign covering the 2002 corn growing season, a dual polarized tower mounted L-band (1.4 GHz) radiometer (LRAD) provided brightness temperature (TB) measurements at preset intervals, incidence and azimuth angles. These radiometer measurements were supported by an extensive characterization of land surface variables including soil moisture, soil temperature, vegetation biomass, and surface roughness. In the period May 22 to August 30, ten days of radiometer and ground measurements are available for a corn canopy with a vegetation water content (W) range of 0.0 to 4.3 kg m−2. Using this data set, the effects of corn vegetation on surface emissions are investigated by means of a semi-empirical radiative transfer model. Additionally, the impact of roughness on the surface emission is quantified using TB measurements over bare soil conditions. Subsequently, the estimated roughness parameters, ground measurements and horizontally (H)-polarized TB are employed to invert the H-polarized transmissivity (γh) for the monitored corn growing season. PMID:22163585

  13. Crack surface roughness in three-dimensional random fuse networks

    NASA Astrophysics Data System (ADS)

    Nukala, Phani Kumar V. V.; Zapperi, Stefano; Šimunović, Srđan

    2006-08-01

    Using large system sizes with extensive statistical sampling, we analyze the scaling properties of crack roughness and damage profiles in the three-dimensional random fuse model. The analysis of damage profiles indicates that damage accumulates in a diffusive manner up to the peak load, and localization sets in abruptly at the peak load, starting from a uniform damage landscape. The global crack width scales as W ~ L^0.5 and is consistent with the scaling of localization length ξ ~ L^0.5 used in the data collapse of damage profiles in the postpeak regime. This consistency between the global crack roughness exponent and the postpeak damage profile localization length supports the idea that the postpeak damage profile is predominantly due to the localization produced by the catastrophic failure, which at the same time results in the formation of the final crack. Finally, the crack width distributions can be collapsed for different system sizes and follow a log-normal distribution.
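
    A tiny numerical illustration of how a roughness exponent such as W ~ L^0.5 is typically read off: fit a straight line to log W versus log L over the system sizes. The widths below are synthetic values generated to follow the reported scaling, not the simulation data.

    ```python
    import numpy as np

    # Synthetic crack widths generated to follow W ~ L^0.5
    L = np.array([16, 32, 64, 128, 256])
    noise = 1.0 + 0.02 * np.random.default_rng(2).normal(size=L.size)
    W = 0.9 * L ** 0.5 * noise

    # Slope of log W against log L estimates the roughness exponent
    slope, intercept = np.polyfit(np.log(L), np.log(W), 1)
    print(f"estimated roughness exponent: {slope:.3f}")   # close to 0.5
    ```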

  14. L Band Brightness Temperature Observations Over a Corn Canopy During the Entire Growth Cycle

    NASA Technical Reports Server (NTRS)

    Joseph, Alicia T.; O'Neill, Peggy E.; Choudhury, Bhaskar J.; vanderVelde, Rogier; Lang, Roger H.; Gish, Timothy

    2011-01-01

    During a field campaign covering the 2002 corn growing season, a dual polarized tower mounted L-band (1.4 GHz) radiometer (LRAD) provided brightness temperature (TB) measurements at preset intervals, incidence and azimuth angles. These radiometer measurements were supported by an extensive characterization of land surface variables including soil moisture, soil temperature, vegetation biomass, and surface roughness. During the period from May 22, 2002 to August 30, 2002, covering a vegetation water content (W) range of 0.0 to 4.3 kg/square m, ten days of radiometer and ground measurements were available. Using this data set, the effects of corn vegetation on surface emissions are investigated by means of a semi-empirical radiative transfer model. Additionally, the impact of roughness on the surface emission is quantified using TB measurements over bare soil conditions. Subsequently, the estimated roughness parameters, ground measurements and horizontally (H)-polarized TB are employed to invert the H-polarized transmissivity (gamma-h) for the monitored corn growing season.

  15. The solution of target assignment problem in command and control decision-making behaviour simulation

    NASA Astrophysics Data System (ADS)

    Li, Ni; Huai, Wenqing; Wang, Shaodan

    2017-08-01

    C2 (command and control) has been understood to be a critical military component to meet an increasing demand for rapid information gathering and real-time decision-making in a dynamically changing battlefield environment. In this article, to improve a C2 behaviour model's reusability and interoperability, a behaviour modelling framework was proposed to specify a C2 model's internal modules and a set of interoperability interfaces based on the C-BML (coalition battle management language). WTA (weapon target assignment) is a typical C2 autonomous decision-making behaviour modelling problem. Different from most WTA problem descriptions, here sensors were considered to be available resources of detection and the relationship constraints between weapons and sensors were also taken into account, which brought it much closer to actual applications. A modified differential evolution (MDE) algorithm was developed to solve this high-dimensional optimisation problem and obtained an optimal assignment plan with high efficiency. In a case study, we built a simulation system to validate the proposed C2 modelling framework and interoperability interface specification. Also, a new optimisation solution was used to solve the WTA problem efficiently and successfully.

  16. [The OPTIMISE study (Optimal Type 2 Diabetes Management Including Benchmarking and Standard Treatment). Results for Luxembourg].

    PubMed

    Michel, G

    2012-01-01

    The OPTIMISE study (NCT00681850) has been run in six European countries, including Luxembourg, to prospectively assess the effect of benchmarking on the quality of primary care in patients with type 2 diabetes, using major modifiable vascular risk factors as critical quality indicators. Primary care centers treating type 2 diabetic patients were randomized to give standard care (control group) or standard care with feedback benchmarked against other centers in each country (benchmarking group). The primary endpoint was the percentage of patients in the benchmarking group achieving pre-set targets of the critical quality indicators: glycated hemoglobin (HbA1c), systolic blood pressure (SBP) and low-density lipoprotein (LDL) cholesterol after 12 months of follow-up. In Luxembourg, in the benchmarking group, more patients achieved the target for SBP (40.2% vs. 20%) and for LDL-cholesterol (50.4% vs. 44.2%). 12.9% of patients in the benchmarking group met all three targets, compared with 8.3% of patients in the control group. In this randomized, controlled study, benchmarking was shown to be an effective tool for improving critical quality indicator targets, which are the principal modifiable vascular risk factors in type 2 diabetes.

  17. Electromagnetic optimisation of a 2.45 GHz microwave plasma source operated at atmospheric pressure and designed for hydrogen production

    NASA Astrophysics Data System (ADS)

    Miotk, R.; Jasiński, M.; Mizeraczyk, J.

    2018-03-01

    This paper presents the partial electromagnetic optimisation of a 2.45 GHz cylindrical-type microwave plasma source (MPS) operated at atmospheric pressure. The presented device is designed for hydrogen production from liquid fuels, e.g. hydrocarbons and alcohols. Due to industrial requirements regarding low costs for hydrogen produced in this way, previous testing indicated that improvements were required to the electromagnetic performance of the MPS. The MPS has a duct discontinuity region, which is a result of the cylindrical structure located within the device. The microwave plasma is generated in this discontinuity region. Rigorous analysis of the region requires solving a set of Maxwell equations, which is burdensome for complicated structures. Furthermore, the presence of the microwave plasma increases the complexity of this task. To avoid calculating the complex Maxwell equations, we suggest the use of the equivalent circuit method. This work is based upon the idea of using a Weissfloch circuit to characterize the area of the duct discontinuity and the plasma. The resulting MPS equivalent circuit allowed the calculation of a capacitive metallic diaphragm, through which an improvement in the electromagnetic performance of the plasma source was obtained.

  18. Development of a decision support system for small reservoir irrigation systems in rainfed and drought prone areas.

    PubMed

    Balderama, Orlando F

    2010-01-01

    An integrated computer program called Cropping System and Water Management Model (CSWM) with a three-step feature (expert system-simulation-optimization) was developed to address a range of decision support for rainfed farming, i.e. crop selection, scheduling and optimisation. The system was used for agricultural planning with emphasis on sustainable agriculture in the rainfed areas through the use of small farm reservoirs for increased production and resource conservation and management. The application of the model was carried out using crop, soil, and climate and water resource data from the Philippines. Primarily, four sets of data representing the different rainfall classification of the country were collected, analysed, and used as input in the model. Simulations were also done on date of planting, probabilities of wet and dry period and with various capacities of the water reservoir used for supplemental irrigation. Through the analysis, useful information was obtained to determine suitable crops in the region, cropping schedule and pattern appropriate to the specific climate conditions. In addition, optimisation of the use of the land and water resources can be achieved in areas partly irrigated by small reservoirs.

  19. Computational aero-acoustics for fan duct propagation and radiation. Current status and application to turbofan liner optimisation

    NASA Astrophysics Data System (ADS)

    Astley, R. J.; Sugimoto, R.; Mustafi, P.

    2011-08-01

    Novel techniques are presented to reduce noise from turbofan aircraft engines by optimising the acoustic treatment in engine ducts. The application of Computational Aero-Acoustics (CAA) to predict acoustic propagation and absorption in turbofan ducts is reviewed and a critical assessment of performance indicates that validated and accurate techniques are now available for realistic engine predictions. A procedure for integrating CAA methods with state of the art optimisation techniques is proposed in the remainder of the article. This is achieved by embedding advanced computational methods for noise prediction within automated and semi-automated optimisation schemes. Two different strategies are described and applied to realistic nacelle geometries and fan sources to demonstrate the feasibility of this approach for industry scale problems.

  20. Simulations On Pair Creation In Collision Of γ-Beams Produced With High Intensity Lasers

    NASA Astrophysics Data System (ADS)

    Jansen, Oliver; Ribeyre, Xavier; D'Humieres, Emmanuel; Lobet, Mathieu; Jequier, Sophie; Tikhonchuk, Vladimir

    2016-10-01

    Direct production of electron-positron pairs in two photon collisions, the Breit-Wheeler process, is one of the most basic processes in the universe. However, this process has never been directly observed in the laboratory due to the lack of high intensity γ sources. For a feasibility study and for the optimisation of experimental set-ups, we developed a high-performance tree-code. Different possible set-ups with MeV photon sources were discussed and compared using collision detection for a huge number of particles in a quantum-electrodynamic regime. The authors acknowledge the financial support from the French National Research Agency (ANR) in the framework of "The Investments for the Future" programme IdEx Bordeaux - LAPHIA (ANR-10IDEX-03-02)-Project TULIMA.

  1. Computational studies on nonlinear optical property of novel Wittig-based Schiff-base ligands and copper(II) complex

    NASA Astrophysics Data System (ADS)

    Rajasekhar, Bathula; Patowary, Nidarshana; K. Z., Danish; Swu, Toka

    2018-07-01

    One hundred and forty-five novel Wittig-based Schiff-base (WSB) molecules, including a copper(II) complex and precursors, were computationally screened for nonlinear optical (NLO) properties. WSB ligands were derived from various categories of amines and aldehydes. Wittig-based precursor aldehydes, (E)-2-hydroxy-5-(4-nitrostyryl)benzaldehyde (f) and 2-hydroxy-5-((1Z,3E)-4-phenylbuta-1,3-dien-1-yl) benzaldehyde (g), were synthesised and spectroscopically confirmed. Schiff-base ligands and the copper(II) complex were designed, optimised, and their NLO property was studied using the GAUSSIAN09 computer program. For both optimisation and hyperpolarisability (finite-field approach) calculations, the Density Functional Theory (DFT)-based B3LYP method was applied with the LANL2DZ basis set for the metal ion and the 6-31G* basis set for C, H, N, O and Cl atoms. This is the first report to present the structure-activity relationship between hyperpolarisability (β) and WSB ligands containing a mono imine group. The study reveals that Schiff-base ligands of the category N-2, which are the ones derived from the precursor aldehyde 2-hydroxy-5-(4-nitrostyryl)benzaldehyde, and pre-polarised WSB coordinated with Cu(II), encoded as Complex-1 (β = 14.671 × 10^-30 e.s.u.), showed higher β values than the other categories, N-1 and N-3, i.e. WSB derived from the precursor aldehydes 2-hydroxy-5-styrylbenzaldehyde and 2-hydroxy-5-((1Z,3E)-4-phenylbuta-1,3-dien-1-yl)benzaldehyde, respectively. For the first time, we report here the geometrical isomeric effect on the β value.

  2. Optimisation of a two-liquid component pre-filled acrylic bone cement system: a design of experiments approach to optimise cement final properties.

    PubMed

    Clements, James; Walker, Gavin; Pentlavalli, Sreekanth; Dunne, Nicholas

    2014-10-01

    The initial composition of acrylic bone cement, along with the mixing and delivery technique used, can influence its final properties and therefore its clinical success in vivo. The polymerisation of acrylic bone cement is complex, with a number of processes happening simultaneously. Acrylic bone cement mixing and delivery systems have undergone several design changes in their advancement, although the cement constituents themselves have remained unchanged since they were first used. This study was conducted to determine the factors that had the greatest effect on the final properties of acrylic bone cement using a pre-filled bone cement mixing and delivery system. A design of experiments (DoE) approach was used to determine the impact of the factors associated with this mixing and delivery method on the final properties of the cement produced. The DoE illustrated that all factors present within this study had a significant impact on the final properties of the cement. An optimum cement composition was hypothesised and tested. This optimum recipe produced cement with final mechanical and thermal properties within the clinical guidelines stated in ISO 5833 (International Standard Organisation (ISO), International standard 5833: implants for surgery-acrylic resin cements, 2002); however, the low setting times observed would not be clinically viable and could result in complications during the surgical technique. As a result, further development would be required to improve the setting time of the cement in order for it to be deemed suitable for use in total joint replacement surgery.

  3. Assessing the potential impacts of a revised set of on-farm nutrient and sediment 'basic' control measures for reducing agricultural diffuse pollution across England.

    PubMed

    Collins, A L; Newell Price, J P; Zhang, Y; Gooday, R; Naden, P S; Skirvin, D

    2018-04-15

    The need for improved abatement of agricultural diffuse water pollution represents cause for concern throughout the world. A critical aspect in the design of on-farm intervention programmes concerns the potential technical cost-effectiveness of packages of control measures. The European Union (EU) Water Framework Directive (WFD) calls for Programmes of Measures (PoMs) to protect freshwater environments and these comprise 'basic' (mandatory) and 'supplementary' (incentivised) options. Recent work has used measure review, elicitation of stakeholder attitudes and a process-based modelling framework to identify a new alternative set of 'basic' agricultural sector control measures for nutrient and sediment abatement across England. Following an initial scientific review of 708 measures, 90 were identified for further consideration at an industry workshop and 63 had industry support. Optimisation modelling was undertaken to identify a shortlist of measures using the Demonstration Test Catchments as sentinel agricultural landscapes. Optimisation selected 12 measures relevant to livestock or arable systems. Model simulations of 95% implementation of these 12 candidate 'basic' measures, in addition to business-as-usual, suggested reductions in the national agricultural nitrate load of 2.5%, whilst corresponding reductions in phosphorus and sediment were 11.9% and 5.6%, respectively. The total cost of applying the candidate 'basic' measures across the whole of England was estimated to be £450 million per annum, which is equivalent to £52 per hectare of agricultural land. This work contributed to a public consultation in 2016. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.

  4. Evolutionary and Neural Computing Based Decision Support System for Disease Diagnosis from Clinical Data Sets in Medical Practice.

    PubMed

    Sudha, M

    2017-09-27

    As a recent trend, various computational intelligence and machine learning approaches have been used for mining inferences hidden in large clinical databases to assist the clinician in strategic decision making. In any target data, irrelevant information may be detrimental, causing confusion for the mining algorithm and degrading the prediction outcome. To address this issue, this study attempts to identify an intelligent approach to assist the disease diagnostic procedure using an optimal set of attributes instead of all attributes present in the clinical data set. In this proposed Application Specific Intelligent Computing (ASIC) decision support system, a rough set based genetic algorithm is employed in the pre-processing phase and a back propagation neural network is applied in the training and testing phase. ASIC has two phases: the first phase handles outliers, noisy data, and missing values to obtain qualitative target data and to generate appropriate attribute reduct sets from the input data using a rough computing based genetic algorithm centred on a relative fitness function measure. The succeeding phase of this system involves both training and testing of a back propagation neural network classifier on the selected reducts. The model performance is evaluated against widely adopted existing classifiers. The proposed ASIC system for clinical decision support has been tested with breast cancer, fertility diagnosis and heart disease data sets from the University of California at Irvine (UCI) machine learning repository. The proposed system outperformed the existing approaches, attaining accuracy rates of 95.33%, 97.61%, and 93.04% for breast cancer, fertility and heart disease diagnosis, respectively.
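
    The sketch below is a simplified stand-in for the two-phase idea described above: a small genetic algorithm searches for a compact feature subset, scored by the cross-validated accuracy of a neural network classifier. The rough-set reduct computation and relative fitness function of the original ASIC system are replaced here by a generic wrapper-style fitness, and the dataset, population size and rates are assumptions.

    ```python
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X, y = load_breast_cancer(return_X_y=True)
    n_features = X.shape[1]

    def fitness(mask):
        """Cross-validated accuracy of an MLP on the selected features,
        lightly penalised for subset size."""
        if not mask.any():
            return 0.0
        clf = make_pipeline(StandardScaler(),
                            MLPClassifier(hidden_layer_sizes=(20,),
                                          max_iter=300, random_state=0))
        return cross_val_score(clf, X[:, mask], y, cv=3).mean() - 0.002 * mask.sum()

    pop = rng.random((10, n_features)) < 0.3     # initial population of bit masks
    for generation in range(6):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[scores.argsort()[::-1][:5]]          # truncation selection
        children = []
        for _ in range(len(pop) - len(parents)):
            a, b = parents[rng.integers(0, len(parents), 2)]
            cut = rng.integers(1, n_features)
            child = np.concatenate([a[:cut], b[cut:]])      # one-point crossover
            child = np.logical_xor(child, rng.random(n_features) < 0.02)  # mutation
            children.append(child)
        pop = np.vstack([parents] + children)

    best = pop[np.argmax([fitness(ind) for ind in pop])]
    print("selected feature indices:", np.flatnonzero(best))
    ```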

  5. AFM surface imaging of AISI D2 tool steel machined by the EDM process

    NASA Astrophysics Data System (ADS)

    Guu, Y. H.

    2005-04-01

    The surface morphology, surface roughness and micro-cracks of AISI D2 tool steel machined by the electrical discharge machining (EDM) process were analyzed by means of the atomic force microscopy (AFM) technique. Experimental results indicate that the surface texture after EDM is determined by the discharge energy during processing. An excellent machined finish can be obtained by setting the machine parameters at a low pulse energy. The surface roughness and the depth of the micro-cracks were proportional to the power input. Furthermore, the information that the AFM application yielded about the depth of the micro-cracks is particularly important for the post-treatment of AISI D2 tool steel machined by EDM.

  6. A spatial picture of the synthetic large-scale motion from dynamic roughness

    NASA Astrophysics Data System (ADS)

    Huynh, David; McKeon, Beverley

    2017-11-01

    Jacobi and McKeon (2011) set up a dynamic roughness apparatus to excite a synthetic, travelling wave-like disturbance in a wind tunnel boundary layer study. In the present work, this dynamic roughness has been adapted for a flat-plate, turbulent boundary layer experiment in a water tunnel. A key advantage of operating in water as opposed to air is the longer flow timescales. This makes accessible higher non-dimensional actuation frequencies and correspondingly shorter synthetic length scales, and is thus more amenable to particle image velocimetry. As a result, this experiment provides a novel spatial picture of the synthetic mode, the coupled small scales, and their streamwise development. It is demonstrated that varying the roughness actuation frequency allows for significant tuning of the streamwise wavelength of the synthetic mode, with a range of 3δ to 13δ being achieved. Employing a phase-locked decomposition, spatial snapshots are constructed of the synthetic large scale and used to analyze its streamwise behavior. Direct spatial filtering is used to separate the synthetic large scale and the related small scales, and the results are compared to those obtained by temporal filtering that invokes Taylor's hypothesis. The support of AFOSR (Grant # FA9550-16-1-0361) is gratefully acknowledged.

  7. Fractal analysis as a potential tool for surface morphology of thin films

    NASA Astrophysics Data System (ADS)

    Soumya, S.; Swapna, M. S.; Raj, Vimal; Mahadevan Pillai, V. P.; Sankararaman, S.

    2017-12-01

    The fractal geometry developed by Mandelbrot has emerged as a potential tool for analyzing complex systems in the diversified fields of science, social science, and technology. Self-similar objects having the same details at different scales are referred to as fractals and are analyzed using the mathematics of non-Euclidean geometry. The present work is an attempt to correlate the fractal dimension with surface characteristics obtained by Atomic Force Microscopy (AFM). Taking AFM images of zinc sulphide (ZnS) thin films prepared by the pulsed laser deposition (PLD) technique at different annealing temperatures, the effect of annealing temperature and surface roughness on the fractal dimension is studied. The annealing temperature and surface roughness show a strong correlation with fractal dimension. From the regression equation set, the surface roughness at a given annealing temperature can be calculated from the fractal dimension. The AFM images are processed using Photoshop and the fractal dimension is calculated by the box-counting method. The fractal dimension decreases from 1.986 to 1.633 while the surface roughness increases from 1.110 to 3.427, for a change of annealing temperature from 30 °C to 600 °C. The images are also analyzed by the power spectrum method to find the fractal dimension. The study reveals that the box-counting method gives better results compared to the power spectrum method.
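
    A hedged sketch of the box-counting estimate mentioned above: binarise the image, count occupied boxes N(s) at several box sizes s, and fit the dimension D from N(s) ~ s^(-D). The thresholding step and box sizes are simplifying assumptions, and the random array merely stands in for an AFM height map.

    ```python
    import numpy as np

    def box_count(binary, size):
        """Number of size x size boxes containing at least one 'on' pixel."""
        h, w = binary.shape
        h, w = h - h % size, w - w % size              # trim to a multiple of size
        blocks = binary[:h, :w].reshape(h // size, size, w // size, size)
        return np.count_nonzero(blocks.any(axis=(1, 3)))

    def fractal_dimension(image, threshold=None):
        if threshold is None:
            threshold = image.mean()                   # simple binarisation
        binary = image > threshold
        sizes = np.array([2, 4, 8, 16, 32])
        counts = np.array([box_count(binary, s) for s in sizes])
        # Slope of log N(s) against log(1/s) gives the dimension D
        slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
        return slope

    demo = np.random.default_rng(3).random((256, 256))  # stand-in height map
    print(f"box-counting dimension: {fractal_dimension(demo):.3f}")
    ```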

  8. Effect of Physicochemical Anomalies of Soda-Lime Silicate Slides on Biomolecule Immobilization

    DTIC Science & Technology

    2009-01-01

    roughness. EXPERIMENTAL SECTION Materials. Standard soda-lime glass microscope slides were obtained from several sources (Table 1). Rabbit anti-lipid A ... had changed, confirmation was obtained from the manufacturers that slides in set A1 were the same soda-lime glass slides as those in set A2 and ... manufacture of soda-lime glass slides. X-ray Photoelectron Spectroscopy (XPS). To identify elemental disparities in the glass surface, relative atomic

  9. Light Scattering from Rough Surfaces. Appendix. Angular Correlation of Speckle Patterns. Draft

    DTIC Science & Technology

    1994-06-01

    For his demonstrations of the various experimental techniques, I owe thanks to Andrew Sant. Also, on behalf of all students writing (and written) up ... less controllable, radar set up. 1.1.1 Theoretical Models This section will present some of the theoretical models which exist for determining the ... centre of a turntable set up to spin at 300 revolutions per minute. While the turntable is stationary, photoresist is applied to the centre of the

  10. An automated technique to stage lower third molar development on panoramic radiographs for age estimation: a pilot study.

    PubMed

    De Tobel, J; Radesh, P; Vandermeulen, D; Thevissen, P W

    2017-12-01

    Automated methods to evaluate growth of hand and wrist bones on radiographs and magnetic resonance imaging have been developed. They can be applied to estimate age in children and subadults. Automated methods require the software to (1) recognise the region of interest in the image(s), (2) evaluate the degree of development and (3) correlate this to the age of the subject based on a reference population. For age estimation based on third molars an automated method for step (1) has been presented for 3D magnetic resonance imaging and is currently being optimised (Unterpirker et al. 2015). To develop an automated method for step (2) based on lower third molars on panoramic radiographs. A modified Demirjian staging technique including ten developmental stages was developed. Twenty panoramic radiographs per stage per gender were retrospectively selected for FDI element 38. Two observers decided in consensus about the stages. When necessary, a third observer acted as a referee to establish the reference stage for the considered third molar. This set of radiographs was used as training data for machine learning algorithms for automated staging. First, image contrast settings were optimised to evaluate the third molar of interest and a rectangular bounding box was placed around it in a standardised way using Adobe Photoshop CC 2017 software. This bounding box indicated the region of interest for the next step. Second, several machine learning algorithms available in MATLAB R2017a software were applied for automated stage recognition. Third, the classification performance was evaluated in a 5-fold cross-validation scenario, using different validation metrics (accuracy, Rank-N recognition rate, mean absolute difference, linear kappa coefficient). Transfer Learning as a type of Deep Learning Convolutional Neural Network approach outperformed all other tested approaches. Mean accuracy equalled 0.51, mean absolute difference was 0.6 stages and mean linearly weighted kappa was 0.82. The overall performance of the presented automated pilot technique to stage lower third molar development on panoramic radiographs was similar to staging by human observers. It will be further optimised in future research, since it represents a necessary step to achieve a fully automated dental age estimation method, which to date is not available.
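
    For concreteness, the snippet below computes the three validation metrics quoted above (accuracy, mean absolute stage difference, and linearly weighted kappa) with scikit-learn on made-up predicted versus reference stages; the stage vectors are illustrative only.

    ```python
    import numpy as np
    from sklearn.metrics import accuracy_score, cohen_kappa_score

    # Made-up reference and predicted developmental stages (0-9)
    reference = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 4, 5, 6, 7])
    predicted = np.array([0, 1, 3, 3, 4, 4, 6, 7, 8, 8, 5, 5, 6, 6])

    print("accuracy:", accuracy_score(reference, predicted))
    print("mean absolute difference (stages):",
          float(np.abs(reference - predicted).mean()))
    print("linearly weighted kappa:",
          cohen_kappa_score(reference, predicted, weights="linear"))
    ```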

  11. Development of the hard and soft constraints based optimisation model for unit sizing of the hybrid renewable energy system designed for microgrid applications

    NASA Astrophysics Data System (ADS)

    Sundaramoorthy, Kumaravel

    2017-02-01

    Electricity generation based on hybrid energy systems (HESs) has become an increasingly attractive solution for rural electrification. Economically feasible and technically reliable HESs are solidly based on an optimisation stage. This article discusses the optimal unit sizing model with the objective function to minimise the total cost of the HES. Three typical rural sites from the southern part of India have been selected for the application of the developed optimisation methodology. Feasibility studies and a sensitivity analysis of the optimal HES are discussed in detail in this article. A comparison has been carried out with the Hybrid Optimization Model for Electric Renewables for the three sites. The optimal HES is found to have a lower total net present cost and rate of energy compared with the existing method.

  12. Optimisation of a Generic Ionic Model of Cardiac Myocyte Electrical Activity

    PubMed Central

    Guo, Tianruo; Al Abed, Amr; Lovell, Nigel H.; Dokos, Socrates

    2013-01-01

    A generic cardiomyocyte ionic model, whose complexity lies between a simple phenomenological formulation and a biophysically detailed ionic membrane current description, is presented. The model provides a user-defined number of ionic currents, employing two-gate Hodgkin-Huxley type kinetics. Its generic nature allows accurate reconstruction of action potential waveforms recorded experimentally from a range of cardiac myocytes. Using a multiobjective optimisation approach, the generic ionic model was optimised to accurately reproduce multiple action potential waveforms recorded from central and peripheral sinoatrial nodes and right atrial and left atrial myocytes from rabbit cardiac tissue preparations, under different electrical stimulus protocols and pharmacological conditions. When fitted simultaneously to multiple datasets, the time course of several physiologically realistic ionic currents could be reconstructed. Model behaviours tend to be well identified when extra experimental information is incorporated into the optimisation. PMID:23710254
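
    To illustrate the kind of fit described here, the sketch below simulates a single two-gate Hodgkin-Huxley-type current in a toy membrane and recovers its parameters by least squares against a synthetic "recorded" waveform. It is a deliberately simplified, single-current, single-objective stand-in for the article's multi-current, multiobjective optimisation; every constant in it is hypothetical.

```python
# Toy two-gate Hodgkin-Huxley-type current fitted to a synthetic voltage trace.
# One current, one objective: a simplified stand-in for the generic ionic model.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

C_M = 1.0                      # membrane capacitance (uF/cm^2), assumed
E_REV = 50.0                   # reversal potential of the fitted current (mV), assumed
E_LEAK, G_LEAK = -70.0, 0.1    # leak parameters, assumed

def gate_inf(v, v_half, slope):
    return 1.0 / (1.0 + np.exp((v_half - v) / slope))

def simulate(params, t_eval):
    g_max, vm_half, vh_half, tau_m, tau_h = params
    def rhs(t, y):
        v, m, h = y
        i_stim = 10.0 if 5.0 <= t <= 6.0 else 0.0        # brief stimulus pulse
        i_ion = g_max * m * h * (v - E_REV)               # two-gate HH-type current
        i_leak = G_LEAK * (v - E_LEAK)
        dv = (-(i_ion + i_leak) + i_stim) / C_M
        dm = (gate_inf(v, vm_half, 5.0) - m) / tau_m      # activation gate
        dh = (gate_inf(v, vh_half, -5.0) - h) / tau_h     # inactivation gate
        return [dv, dm, dh]
    sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), [-70.0, 0.0, 1.0],
                    t_eval=t_eval, max_step=0.1)
    return sol.y[0]

t = np.linspace(0.0, 50.0, 500)
true_params = [1.5, -30.0, -50.0, 1.0, 10.0]              # hypothetical "true" values
target_v = simulate(true_params, t)                       # synthetic "recorded" waveform

fit = least_squares(lambda p: simulate(p, t) - target_v,
                    x0=[1.0, -25.0, -45.0, 2.0, 15.0],
                    bounds=([0.1, -60, -80, 0.1, 1], [5, 0, -20, 10, 50]))
print("recovered parameters:", np.round(fit.x, 2))
```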

  13. Load-sensitive dynamic workflow re-orchestration and optimisation for faster patient healthcare.

    PubMed

    Meli, Christopher L; Khalil, Ibrahim; Tari, Zahir

    2014-01-01

    Hospital waiting times are considerably long, with no signs of reducing any time soon. A number of factors, including population growth, the ageing population and a lack of new infrastructure, are expected to further exacerbate waiting times in the near future. In this work, we show how healthcare services can be modelled as queueing nodes, together with healthcare service workflows, such that these workflows can be optimised during execution in order to reduce patient waiting times. Services such as X-ray, computed tomography and magnetic resonance imaging often form queues; thus, by taking into account the waiting times of each service, the workflow can be re-orchestrated and optimised. Experimental results indicate that average waiting time reductions are achievable by optimising workflows using dynamic re-orchestration. Crown Copyright © 2013. Published by Elsevier Ireland Ltd. All rights reserved.
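
    The core idea, services as queueing nodes whose current expected waits drive re-orchestration, can be sketched with the textbook M/M/1 wait-time formula. The arrival and service rates below are invented for the example, and the scheduling rule is a simple stand-in for the paper's re-orchestration algorithm.

```python
# Sketch of load-sensitive re-orchestration: imaging services modelled as M/M/1
# queues, with order-independent workflow steps scheduled at the service with
# the shortest expected wait. Rates are hypothetical, not taken from the paper.
def mm1_wait(arrival_rate, service_rate):
    """Expected time in system (waiting + service) for an M/M/1 queue."""
    if arrival_rate >= service_rate:
        return float("inf")                      # unstable queue
    return 1.0 / (service_rate - arrival_rate)

# current load (patients/hour) and capacity (patients/hour) per service, assumed
services = {
    "x-ray": (5.0, 8.0),
    "ct":    (3.0, 4.0),
    "mri":   (1.5, 2.0),
}

def reorchestrate(steps):
    """Order interchangeable workflow steps by their current expected wait."""
    return sorted(steps, key=lambda s: mm1_wait(*services[s]))

workflow = ["mri", "x-ray", "ct"]                # order-independent steps
print("re-orchestrated order:", reorchestrate(workflow))
print({name: round(mm1_wait(*rates), 2) for name, rates in services.items()})
```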

  14. Optimal integrated management of groundwater resources and irrigated agriculture in arid coastal regions

    NASA Astrophysics Data System (ADS)

    Grundmann, J.; Schütze, N.; Heck, V.

    2014-09-01

    Groundwater systems in arid coastal regions are particularly at risk due to limited potential for groundwater replenishment and increasing water demand, caused by a continuously growing population. To ensure sustainable management of such regions, we developed a new simulation-based integrated water management system. The management system unites process modelling with artificial intelligence tools and evolutionary optimisation techniques for managing both water quality and water quantity of a strongly coupled groundwater-agriculture system. Due to the large number of decision variables, a decomposition approach is applied to separate the original large optimisation problem into smaller, independent optimisation problems which allow for faster and more reliable solutions. It consists of an analytical inner optimisation loop to achieve the most profitable agricultural production for a given amount of water and an outer simulation-based optimisation loop to find the optimal groundwater abstraction pattern. The behaviour of farms is described by crop-water-production functions, and the aquifer response, including the seawater interface, is simulated by an artificial neural network. The methodology is applied exemplarily to the south Batinah region in Oman, which is affected by saltwater intrusion into a coastal aquifer system due to excessive groundwater withdrawal for irrigated agriculture. Because of contradicting objectives, such as profit-oriented agriculture versus aquifer sustainability, a multi-objective optimisation is performed which can provide sustainable solutions for water and agricultural management over long-term periods at farm and regional scales with respect to water resources, environment, and socio-economic development.
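
    A rough sketch of the decomposition idea follows: an inner, closed-form farm-profit calculation for a given water allocation nested inside an outer evolutionary search over monthly abstraction, with a quadratic penalty standing in for the ANN aquifer/seawater-intrusion response. All coefficients, bounds and the penalty weight are assumptions made for the example, not values from the study.

```python
# Nested optimisation sketch: inner analytic profit from a concave crop-water
# production function, outer evolutionary search over monthly abstraction, and a
# penalty term standing in for the aquifer response. All numbers are hypothetical.
import numpy as np
from scipy.optimize import differential_evolution

CROP_PRICE, WATER_COST = 0.8, 0.05        # assumed profit and pumping-cost coefficients
SUSTAINABLE_YIELD = 60.0                  # assumed total abstraction limit per year

def farm_profit(water):
    """Inner loop: analytic profit for a given monthly water allocation."""
    crop_yield = 10.0 * water - 0.08 * water ** 2      # diminishing returns
    return CROP_PRICE * crop_yield - WATER_COST * water

def negative_objective(abstraction):
    """Outer loop: -(profit) plus a penalty standing in for salinisation risk."""
    profit = sum(farm_profit(w) for w in abstraction)
    overdraft = max(0.0, abstraction.sum() - SUSTAINABLE_YIELD)
    return -(profit - 50.0 * overdraft ** 2)

result = differential_evolution(negative_objective,
                                bounds=[(0.0, 20.0)] * 12,   # monthly abstraction limits
                                seed=1, tol=1e-6)
print("optimal monthly abstraction:", np.round(result.x, 1))
print("annual total:", round(result.x.sum(), 1), " profit:", round(-result.fun, 1))
```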

  15. Robustness analysis of bogie suspension components Pareto optimised values

    NASA Astrophysics Data System (ADS)

    Mousavi Bideleh, Seyed Milad

    2017-08-01

    The bogie suspension system of high speed trains can significantly affect vehicle performance. Multiobjective optimisation problems are often formulated and solved to find the Pareto optimised values of the suspension components and improve cost efficiency in railway operations from different perspectives. Uncertainties in the design parameters of the suspension system can negatively influence the dynamic behaviour of railway vehicles. In this regard, a robustness analysis of the bogie dynamic response with respect to uncertainties in the suspension design parameters is considered. A one-car railway vehicle model with 50 degrees of freedom and wear/comfort Pareto optimised values of the bogie suspension components is chosen for the analysis. Longitudinal and lateral primary stiffnesses, longitudinal and vertical secondary stiffnesses, as well as yaw damping are considered as the five design parameters. The effects of parameter uncertainties on wear, ride comfort, track shift force, stability, and risk of derailment are studied by varying the design parameters around their respective Pareto optimised values according to a lognormal distribution with different coefficients of variation (COVs). The robustness analysis is carried out based on the maximum entropy concept. The multiplicative dimensional reduction method is utilised to simplify the calculation of fractional moments and improve the computational efficiency. The results show that the dynamic response of the vehicle with wear/comfort Pareto optimised values of the bogie suspension is robust against uncertainties in the design parameters, and the probability of failure is small for parameter uncertainties with COV up to 0.1.
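
    A plain Monte Carlo version of such a robustness check, perturbing the optimised parameters with lognormal noise at a given COV and estimating a failure probability, is sketched below. It substitutes brute-force sampling and a made-up response surface for the paper's maximum-entropy and multiplicative-dimensional-reduction machinery; every number in it is hypothetical.

```python
# Monte Carlo robustness sketch: perturb normalised suspension parameters with
# lognormal noise at a given COV and estimate the probability that a response
# metric exceeds its limit. The response surface and limit are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
nominal = {"k_long_prim": 1.0, "k_lat_prim": 1.0,      # normalised Pareto values
           "k_long_sec": 1.0, "k_vert_sec": 1.0, "c_yaw": 1.0}

def lognormal_samples(mean, cov, size):
    """Lognormal samples with a given mean and coefficient of variation."""
    sigma2 = np.log(1.0 + cov ** 2)
    mu = np.log(mean) - 0.5 * sigma2
    return rng.lognormal(mu, np.sqrt(sigma2), size)

def wear_index(params):
    """Hypothetical smooth response surface around the optimised design."""
    return 1.0 + 0.3 * (params["k_lat_prim"] - 1.0) ** 2 \
               + 0.2 * (params["c_yaw"] - 1.0) ** 2

for cov in (0.02, 0.05, 0.1):
    n = 20000
    samples = {k: lognormal_samples(v, cov, n) for k, v in nominal.items()}
    wear = wear_index(samples)
    p_fail = np.mean(wear > 1.05)                       # assumed acceptance limit
    print(f"COV={cov:.2f}  mean wear={wear.mean():.3f}  P(failure)={p_fail:.4f}")
```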

  16. Coupling systematic planning and expert judgement enhances the efficiency of river restoration.

    PubMed

    Langhans, Simone D; Gessner, Jörn; Hermoso, Virgilio; Wolter, Christian

    2016-08-01

    The ineffectiveness of current river restoration practices hinders the achievement of ecological quality targets set by country-specific regulations. Recent advances in river restoration help plan efforts more systematically to reach ecological targets at the least cost. However, such approaches are often desktop-based and overlook real-world constraints. We argue that combining two techniques commonly used in the conservation arena - expert judgement and systematic planning - will deliver cost-effective restoration plans with a high potential for implementation. We tested this idea targeting the restoration of spawning habitat, i.e. gravel bars, for 11 rheophilic fish species along a river system in Germany (Havel-Spree rivers). With a group of local fish experts, we identified the location and extent of potential gravel bars along the rivers and the improvements to migration barriers necessary to ensure fish passage. The restoration cost of each gravel bar included the cost of the action itself plus a fraction of the cost necessary to ensure longitudinal connectivity by upgrading or building fish passages located downstream. We set restoration targets according to the EU Water Framework Directive, i.e. the relative abundance of the 11 fish species in the reference community, and optimised a restoration plan by prioritising a subset of restoration sites from the full set of identified sites, using the conservation planning software Marxan. Out of the 66 potential gravel bars, 36 sites, mainly located in the downstream section of the system, were selected, reflecting their cost-effectiveness given that fewer barriers needed intervention. Due to the limited overall number of sites that experts identified as suitable for restoring spawning habitat, reaching the abundance targets was challenging. We conclude that coupling systematic river restoration planning with expert judgement produces optimised restoration plans that account for on-the-ground implementation constraints. If applied, this approach has a high potential to enhance the overall efficiency of future restoration efforts. Copyright © 2016 Elsevier B.V. All rights reserved.
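
    The prioritisation step can be imitated, very roughly, with a greedy cost-effectiveness selection: keep adding the site that reduces the remaining per-species habitat shortfall most per unit cost. This is not Marxan and not the Havel-Spree data; the site costs and species contributions below are randomly generated placeholders.

```python
# Greedy cost-effectiveness stand-in for the site prioritisation described above:
# select gravel-bar sites until per-species spawning-habitat targets are met.
# Costs and contributions are hypothetical placeholders.
import random

random.seed(3)
SPECIES = [f"sp{i}" for i in range(11)]
TARGET = {s: 5.0 for s in SPECIES}                     # required habitat per species

# 66 candidate sites, each with a cost (including shared fish-pass costs) and a
# contribution per species
sites = [{"id": i,
          "cost": random.uniform(1.0, 10.0),
          "contrib": {s: random.uniform(0.0, 1.0) for s in SPECIES}}
         for i in range(66)]

def shortfall(achieved):
    return sum(max(0.0, TARGET[s] - achieved[s]) for s in SPECIES)

achieved = {s: 0.0 for s in SPECIES}
selected = []
remaining = list(sites)
while shortfall(achieved) > 0 and remaining:
    def gain(site):                                    # shortfall reduction per unit cost
        trial = {s: achieved[s] + site["contrib"][s] for s in SPECIES}
        return (shortfall(achieved) - shortfall(trial)) / site["cost"]
    best = max(remaining, key=gain)
    remaining.remove(best)
    selected.append(best["id"])
    for s in SPECIES:
        achieved[s] += best["contrib"][s]

total_cost = sum(s["cost"] for s in sites if s["id"] in selected)
print(f"{len(selected)} sites selected, total cost {total_cost:.1f}")
```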

  17. Identifying a Comparison for Matching Rough Voice Quality

    ERIC Educational Resources Information Center

    Patel, Sona; Shrivastav, Rahul; Eddins, David A.

    2012-01-01

    Purpose: Perceptual estimates of voice quality obtained using rating scales are subject to contextual biases that influence how individuals assign numbers to estimate the magnitude of vocal quality. Because rating scales are commonly used in clinical settings, assessments of voice quality are also subject to the limitations of these scales.…

  18. Survey on Classifying Human Actions Through Visual Sensors

    DTIC Science & Technology

    2011-05-04

    47] Herrera, A., Beck, A., Bell, D., Miller, P., Wu, Q., Yan, W., "Behaviour Analysis and Prediction in Image Sequences Using Rough Sets" ... report TR-97-021, University of Berkeley, 1998 [83] DARPA Mind's Eye Broad Agency Announcement, DARPA-BAA-10-53, 2010, www.darpa.mil/tcto/docs

  19. Title Sequences, Dress, Settings, and Such.

    ERIC Educational Resources Information Center

    Bell, John

    Comparisons of television shows along genre lines suggest significant elements of aural/visual richness as well as valuable categories of comparison for future use in other comparisons. An examination of two sitcoms and two police shows produced roughly 25 years apart--"Make Room for Daddy" with "The Cosby Show" and "Naked…

  20. Sampling with poling-based flux balance analysis: optimal versus sub-optimal flux space analysis of Actinobacillus succinogenes.

    PubMed

    Binns, Michael; de Atauri, Pedro; Vlysidis, Anestis; Cascante, Marta; Theodoropoulos, Constantinos

    2015-02-18

    Flux balance analysis is traditionally implemented to identify the maximum theoretical flux for some specified reaction and a single distribution of flux values for all the reactions present which achieves this maximum value. However, it is well known that uncertainty in reaction networks due to branches, cycles and experimental errors results in a large number of combinations of internal reaction fluxes which can achieve the same optimal flux value. In this work, we have modified the linear objective of flux balance analysis to include a poling penalty function, which pushes each new set of reaction fluxes away from previously generated solutions. Repeated poling-based flux balance analysis generates a sample of different solutions (a characteristic set), which represents all the possible functionality of the reaction network. For the purpose of generating a relatively "small" characteristic set, our new method is shown to obtain higher coverage than competing sampling methods under most conditions. The influence of the linear objective function on the sampling (the linear bias) constrains optimisation results to a subspace of optimal solutions all producing the same maximal fluxes. Visualisation of reaction fluxes plotted against each other in two dimensions, with and without the linear bias, indicates the existence of correlations between fluxes. This method of sampling is applied to the organism Actinobacillus succinogenes for the production of succinic acid from glycerol. A new method of sampling for the generation of different flux distributions (sets of individual fluxes satisfying constraints on the steady-state mass balances of intermediates) has been developed using a relatively simple modification of flux balance analysis to include a poling penalty function inside the resulting optimisation objective function. This new methodology can achieve a high coverage of the possible flux space and can be used with and without linear bias to show optimal versus sub-optimal solution spaces. Basic analysis of the Actinobacillus succinogenes system using sampling shows that in order to achieve maximal succinic acid production CO₂ must be taken into the system. Solutions involving release of CO₂ all give sub-optimal succinic acid production.
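
    The poling idea can be sketched on a toy stoichiometric network: solve the plain FBA linear programme once, then re-solve with a Gaussian penalty that pushes each new flux vector away from all previous solutions while keeping the mass-balance and bound constraints. The network, bounds and penalty parameters below are invented for the illustration, and the SLSQP re-solve is only a stand-in for the authors' implementation.

```python
# Poling-style sampling sketch on a toy network: a plain FBA linear programme,
# then re-solves with a Gaussian penalty around all previous solutions.
# Stoichiometry, bounds and penalty weights are hypothetical.
import numpy as np
from scipy.optimize import linprog, minimize

# toy network: v0 uptake -> A, v1: A -> B, v2: A -> B (parallel branch), v3: B -> product
S = np.array([[ 1, -1, -1,  0],      # metabolite A balance
              [ 0,  1,  1, -1]])     # metabolite B balance
bounds = [(0, 10)] * 4
c = np.array([0.0, 0.0, 0.0, -1.0])  # maximise v3 (product export)

lp = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
solutions = [lp.x]
v_max = lp.x[-1]

def poled_objective(v, lam=5.0, sigma=1.0):
    """Keep the product flux high while penalising closeness to earlier solutions."""
    penalty = sum(np.exp(-np.sum((v - prev) ** 2) / sigma ** 2) for prev in solutions)
    return -v[-1] + lam * penalty

for _ in range(4):                   # collect a small characteristic set
    x0 = np.random.default_rng(len(solutions)).uniform(0, 10, 4)
    res = minimize(poled_objective, x0=x0, method="SLSQP",
                   constraints=[{"type": "eq", "fun": lambda v: S @ v}],
                   bounds=bounds)
    solutions.append(res.x)

for i, v in enumerate(solutions):
    print(f"solution {i}: fluxes={np.round(v, 2)}  product flux={v[-1]:.2f} (max {v_max:.2f})")
```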
