NASA Astrophysics Data System (ADS)
Harré, Michael S.
2013-02-01
Two aspects of modern economic theory have dominated recent discussion of the state of the global economy: crashes in financial markets, and whether traditional notions of economic equilibrium have any validity. We have all seen the consequences of market crashes: plummeting share prices, collapsing businesses and considerable uncertainty throughout the global economy. This seems contrary to what might be expected of a system in equilibrium, where growth dominates the relatively minor fluctuations in prices. Recent work from within economics, as well as by physicists, psychologists and computational scientists, has significantly improved our understanding of the more complex aspects of these systems. With this interdisciplinary approach in mind, a behavioural economics model of local optimisation is introduced and three general properties are proven. The first is that, under very specific conditions, local optimisation leads to a conventional macro-economic notion of a global equilibrium. The second is that if both global optimisation and economic growth are required, then, under very mild assumptions, market catastrophes are an unavoidable consequence. The third is that if only local optimisation and economic growth are required, then there is sufficient parametric freedom for macro-economic policy makers to steer an economy around catastrophes without overtly disrupting local optimisation.
Optimisation of confinement in a fusion reactor using a nonlinear turbulence model
NASA Astrophysics Data System (ADS)
Highcock, E. G.; Mandell, N. R.; Barnes, M.
2018-04-01
The confinement of heat in the core of a magnetic fusion reactor is optimised using a multidimensional optimisation algorithm. For the first time in such a study, the loss of heat due to turbulence is modelled at every stage using first-principles nonlinear simulations which accurately capture the turbulent cascade and large-scale zonal flows. The simulations utilise a novel approach, with gyrofluid treatment of the small-scale drift waves and gyrokinetic treatment of the large-scale zonal flows. A simple near-circular equilibrium with standard parameters is chosen as the initial condition. The figure of merit, fusion power per unit volume, is calculated, and then two control parameters, the elongation and triangularity of the outer flux surface, are varied, with the algorithm seeking to optimise the chosen figure of merit. A twofold increase in the fusion power per unit volume is achieved by moving to higher elongation and strongly negative triangularity.
Crystal structure optimisation using an auxiliary equation of state
NASA Astrophysics Data System (ADS)
Jackson, Adam J.; Skelton, Jonathan M.; Hendon, Christopher H.; Butler, Keith T.; Walsh, Aron
2015-11-01
Standard procedures for local crystal-structure optimisation involve numerous energy and force calculations. It is common to calculate an energy-volume curve, fitting an equation of state around the equilibrium cell volume. This is a computationally intensive process, particularly for low-symmetry crystal structures where each isochoric optimisation involves energy minimisation over many degrees of freedom. Such procedures can be prohibitive for non-local exchange-correlation functionals or other "beyond" density functional theory electronic structure techniques, particularly where analytical gradients are not available. We present a simple approach for efficient optimisation of crystal structures based on a known equation of state. The equilibrium volume can be predicted from one single-point calculation and refined with successive calculations if required. The approach is validated for PbS, PbTe, ZnS, and ZnTe using nine density functionals, and applied to the quaternary semiconductor Cu2ZnSnS4 and the magnetic metal-organic framework HKUST-1.
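The idea can be sketched numerically. The third-order Birch-Murnaghan form is one common equation-of-state choice (the specific EOS and all parameter values below are illustrative assumptions, not taken from the abstract): given an assumed energy-volume curve, the equilibrium volume is located by a cheap one-dimensional search rather than a full relaxation at every volume.

```python
import math

def birch_murnaghan_energy(V, E0, V0, B0, Bp):
    """Third-order Birch-Murnaghan energy-volume curve (minimum at V0)."""
    eta = (V0 / V) ** (2.0 / 3.0)
    return E0 + (9.0 * V0 * B0 / 16.0) * (
        (eta - 1.0) ** 3 * Bp + (eta - 1.0) ** 2 * (6.0 - 4.0 * eta)
    )

def predict_equilibrium_volume(energy, v_lo, v_hi, tol=1e-8):
    """Locate the volume minimising energy(V) by golden-section search."""
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = v_lo, v_hi
    while abs(b - a) > tol:
        c = b - phi * (b - a)
        d = a + phi * (b - a)
        if energy(c) < energy(d):
            b = d
        else:
            a = c
    return 0.5 * (a + b)

# Hypothetical parameters, loosely typical of a rock-salt semiconductor.
E0, V0, B0, Bp = -10.0, 52.0, 0.33, 4.5
curve = lambda V: birch_murnaghan_energy(V, E0, V0, B0, Bp)
v_pred = predict_equilibrium_volume(curve, 40.0, 65.0)
```

In the paper's setting each `energy(V)` call would be an expensive single-point electronic-structure calculation, which is why reducing their number matters.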
Emergence of scale-free characteristics in socio-ecological systems with bounded rationality
Kasthurirathna, Dharshana; Piraveenan, Mahendra
2015-01-01
Socio-ecological systems are increasingly modelled by games played on complex networks. While the concept of Nash equilibrium assumes perfect rationality, in reality players display heterogeneous bounded rationality. Here we present a topological model of bounded rationality in socio-ecological systems, using the rationality parameter of the Quantal Response Equilibrium. We argue that system rationality could be measured by the average Kullback-Leibler divergence between Nash and Quantal Response Equilibria, and that the convergence towards Nash equilibria on average corresponds to increased system rationality. Using this model, we show that when a randomly connected socio-ecological system is topologically optimised to converge towards Nash equilibria, scale-free and small-world features emerge. Therefore, optimising system rationality is an evolutionary reason for the emergence of scale-free and small-world features in socio-ecological systems. Further, we show that in games where multiple equilibria are possible, the correlation between the scale-freeness of the system and the fraction of links with multiple equilibria goes through a rapid transition when the average system rationality increases. Our results explain the influence of the topological structure of socio-ecological systems in shaping their collective cognitive behaviour, and provide an explanation for the prevalence of scale-free and small-world characteristics in such systems. PMID:26065713
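The Quantal Response Equilibrium idea can be illustrated with a minimal sketch (the specific game, payoffs and damped fixed-point update below are illustrative assumptions, not the paper's network model): logit responses with rationality parameter lambda, iterated to a fixed point, approach the Nash equilibrium as lambda grows, so the Kullback-Leibler divergence to Nash shrinks.

```python
import math

def logit_response(lmbda, u):
    """Softmax ('logit') response over a payoff vector u."""
    m = max(lmbda * x for x in u)
    w = [math.exp(lmbda * x - m) for x in u]
    s = sum(w)
    return [x / s for x in w]

def symmetric_qre(payoff, lmbda, iters=2000):
    """Damped fixed-point iteration for the symmetric logit QRE of a
    2x2 game; payoff[i][j] = payoff for playing i against j."""
    p = [0.5, 0.5]
    for _ in range(iters):
        u = [sum(payoff[i][j] * p[j] for j in range(2)) for i in range(2)]
        q = logit_response(lmbda, u)
        p = [0.5 * (pi + qi) for pi, qi in zip(p, q)]
    return p

def kl(p, q, eps=1e-12):
    """Kullback-Leibler divergence between two strategy distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Prisoner's dilemma (rows: Cooperate, Defect); Nash = always Defect.
pd = [[3.0, 0.0],
      [5.0, 1.0]]
nash = [0.0, 1.0]
low = symmetric_qre(pd, lmbda=0.1)    # weakly rational players
high = symmetric_qre(pd, lmbda=10.0)  # nearly rational players
```

At lambda = 0 play is uniformly random; as lambda increases, the KL divergence from the Nash strategy falls, which is the paper's proposed measure of system rationality.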
Bringing metabolic networks to life: convenience rate law and thermodynamic constraints
Liebermeister, Wolfram; Klipp, Edda
2006-01-01
Background: Translating a known metabolic network into a dynamic model requires rate laws for all chemical reactions. The mathematical expressions depend on the underlying enzymatic mechanism; they can become quite involved and may contain a large number of parameters. Rate laws and enzyme parameters are still unknown for most enzymes. Results: We introduce a simple and general rate law called "convenience kinetics". It can be derived from a simple random-order enzyme mechanism. Thermodynamic laws can impose dependencies on the kinetic parameters. Hence, to facilitate model fitting and parameter optimisation for large networks, we introduce thermodynamically independent system parameters: their values can be varied independently, without violating thermodynamic constraints. We achieve this by expressing the equilibrium constants either by Gibbs free energies of formation or by a set of independent equilibrium constants. The remaining system parameters are mean turnover rates, generalised Michaelis-Menten constants, and constants for inhibition and activation. All parameters correspond to molecular energies, for instance, binding energies between reactants and enzyme. Conclusion: Convenience kinetics can be used to translate a biochemical network (manually or automatically) into a dynamical model with plausible biological properties. It implements enzyme saturation and regulation by activators and inhibitors, covers all possible reaction stoichiometries, and can be specified by a small number of parameters. Its mathematical form makes it especially suitable for parameter estimation and optimisation. Parameter estimates can be easily computed from a least-squares fit to Michaelis-Menten values, turnover rates, equilibrium constants, and other quantities that are routinely measured in enzyme assays and stored in kinetic databases. PMID:17173669
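For a single-substrate, single-product reaction the convenience rate law reduces to a particularly simple form. The sketch below (all parameter values are hypothetical) also checks the Haldane relation implied by the law, under which the net rate vanishes when the mass-action ratio equals the equilibrium constant.

```python
def convenience_rate(a, b, e_tot, kcat_f, kcat_r, ka, kb):
    """Convenience kinetics for a uni-uni reaction A <=> B.
    The general random-order form reduces to this for one
    substrate and one product."""
    num = kcat_f * (a / ka) - kcat_r * (b / kb)
    den = 1.0 + a / ka + b / kb
    return e_tot * num / den

# Haldane relation implied by the rate law: Keq = (kcat_f * KB) / (kcat_r * KA).
kcat_f, kcat_r, ka, kb = 10.0, 2.0, 0.5, 1.5
keq = (kcat_f * kb) / (kcat_r * ka)  # equilibrium constant, here 15.0

a = 0.2
v_eq = convenience_rate(a, a * keq, 1.0, kcat_f, kcat_r, ka, kb)   # net rate ~ 0
v_fwd = convenience_rate(a, 0.0, 1.0, kcat_f, kcat_r, ka, kb)      # forward flux
```

The denominator implements enzyme saturation, and the Haldane relation is what lets the thermodynamically independent parameterisation described in the abstract work.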
NASA Astrophysics Data System (ADS)
Tsujimura, T., Ii; Kubo, S.; Takahashi, H.; Makino, R.; Seki, R.; Yoshimura, Y.; Igami, H.; Shimozuma, T.; Ida, K.; Suzuki, C.; Emoto, M.; Yokoyama, M.; Kobayashi, T.; Moon, C.; Nagaoka, K.; Osakabe, M.; Kobayashi, S.; Ito, S.; Mizuno, Y.; Okada, K.; Ejiri, A.; Mutoh, T.
2015-11-01
The central electron temperature has successfully reached up to 7.5 keV in Large Helical Device (LHD) plasmas with a central ion temperature of 5 keV and a central electron density of 1.3 × 10^19 m^-3. This result was obtained by heating with a newly installed 154 GHz gyrotron and by optimising the injection geometry in electron cyclotron heating (ECH). The optimisation was carried out using the ray-tracing code 'LHDGauss', which was upgraded to include rapid post-processing of the three-dimensional (3D) equilibrium mapping obtained from experiments. For ray-tracing calculations, LHDGauss can automatically read the relevant data registered in the LHD database after a discharge, such as ECH injection settings (e.g. Gaussian beam parameters, target positions, polarisation and ECH power) and Thomson scattering diagnostic data, along with the 3D equilibrium mapping data. The equilibrium maps of the electron density and temperature profiles are then extrapolated into the region outside the last closed flux surface. The mode purity, i.e. the ratio between the ordinary mode and the extraordinary mode, is obtained by solving the 1D full-wave equation along the direction of the rays from the antenna to the absorption target point. Using the virtual magnetic flux surfaces, the effects of the modelled density profiles and the magnetic shear in the peripheral region for a given polarisation are taken into account. Power deposition profiles calculated for each Thomson scattering measurement time are registered in the LHD database. Adjusting the injection settings towards the desired deposition profile, using feedback provided on a shot-by-shot basis, resulted in an effective experimental procedure.
Larsson, Niklas; Utterback, Karl; Toräng, Lars; Risberg, Johan; Gustafsson, Per; Mayer, Philipp; Jönsson, Jan Åke
2009-08-01
Hollow-fibre (HF) membrane modules were applied in continuous mode for equilibrium sampling through membranes (ESTM) of polar organic pollutants. Phenolic compounds (chlorophenols, cresols and phenol) served as model substances, and ESTM was tuned towards the measurement of freely dissolved concentrations (C(free)). HF membrane modules were constructed using thin-walled membrane, 1 m module length and low packing density in order to optimise the uptake kinetics of the analytes into the acceptor solution. These custom-made devices were tested and compared with commercially available modules, and performed best for continuous ESTM. The custom-made modules provided steady-state equilibrium within 20-40 min and enrichment in general agreement with calculated distribution ratios between acceptor and sample. In experiments during which the sample concentration was changed, the acceptor response time to a decreased sample concentration was around 30 min for the custom-built modules. In the presence of commercial humic acids, analytes showed lower steady-state enrichment, which is due to a decrease in C(free). Continuous ESTM may be automated and is suggested for use in online determination of C(free) of pollutants and in studies on the sorption of pollutants. Future studies should include optimisation of the membrane liquid and factors regarding the residence time of the acceptor solution in the fibre lumen. Qualitative aspects of DOM should also be included, as natural DOM can be fractionated. C(free) could be correlated to DOM properties that have previously been shown to influence sorption, such as aromaticity, carboxylic acid content and molecular size.
NASA Astrophysics Data System (ADS)
Li, Haifeng; Zhu, Qing; Yang, Xiaoxia; Xu, Linrong
2012-10-01
Typical characteristics of remote sensing applications are concurrent tasks, such as those found in disaster rapid response. The existing approach to composing a geographical information processing service chain searches for an optimal solution for each task in isolation, in what can be deemed a "selfish" way. This leads to conflicts amongst concurrent tasks and decreases the performance of all service chains. In this study, a non-cooperative game-based mathematical model to analyse the competitive relationships between tasks is proposed. A best response function is used to ensure that each task maintains utility optimisation by considering the composition strategies of other tasks and quantifying the conflicts between them. Based on this, an iterative algorithm that converges to a Nash equilibrium is presented, the aim being to provide good convergence and to maximise the utility of all tasks under concurrent-task conditions. Theoretical analyses and experiments showed that the newly proposed method, when compared with existing service composition methods, has better practical utility across all tasks.
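The flavour of an iterative algorithm converging to a Nash equilibrium can be sketched with a toy congestion game (the game, its costs and the tie-breaking rule are illustrative assumptions, not the paper's service-composition model): each task repeatedly best-responds to the others' current choices until no task can improve alone.

```python
def best_response_dynamics(n_tasks=4, n_services=2, iters=20):
    """Iterated best response for a congestion game: each task picks
    the service whose load (counting itself) would be smallest."""
    choice = [0] * n_tasks                # all tasks start on service 0
    for _ in range(iters):
        changed = False
        for t in range(n_tasks):
            load = [sum(1 for c in choice if c == s) for s in range(n_services)]
            load[choice[t]] -= 1          # exclude task t's own contribution
            best = min(range(n_services), key=lambda s: load[s])
            if load[best] < load[choice[t]]:
                choice[t] = best          # strictly improving deviation
                changed = True
        if not changed:                   # Nash equilibrium reached:
            break                         # no task can improve unilaterally
    return choice

eq = best_response_dynamics()
loads = [eq.count(s) for s in range(2)]
```

At the fixed point the tasks spread evenly over the services, which is the "no conflict" outcome the paper's equilibrium-seeking composition aims for.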
Pricing, manufacturing and inventory policies for raw material in a three-level supply chain
NASA Astrophysics Data System (ADS)
Allah Taleizadeh, Ata; Noori-daryan, Mahsa
2016-03-01
We studied a decentralised three-layer supply chain including a supplier, a producer and a number of retailers. The retailers place their orders with the producer, and the producer in turn places his orders with the supplier. We assumed that demand is price-sensitive and that shortages are not permitted. The goal of the paper is to optimise the total cost of the supply chain network by coordinating decision-making policy using a Stackelberg-Nash equilibrium. The decision variables of our model are the supplier's price, the producer's price and the numbers of shipments received by the supplier and the producer, respectively. To illustrate the applicability of the proposed model, numerical examples are presented.
Biosorption of Ag(I) from aqueous solutions by Klebsiella sp. 3S1.
Muñoz, Antonio Jesús; Espínola, Francisco; Ruiz, Encarnación
2017-05-05
This study investigated the potential of Klebsiella sp. 3S1 to remove silver cations from aqueous solutions. The selected strain is a ubiquitous bacterium chosen from among several microorganisms that had been isolated from wastewaters. To optimise the operating conditions of the biosorption process, a rotatable central composite experimental design was developed, establishing pH, temperature and biomass concentration as independent variables. The interaction mechanisms involved were analysed through kinetic and equilibrium studies. The experimental results fit pseudo-second-order kinetics with two biosorption stages, the first occurring almost instantaneously. The Langmuir equilibrium model predicted a maximum biosorption capacity (q) of 114.1 mg Ag/g biomass. The study of the mechanisms involved in the biosorption was completed by employing advanced techniques, which revealed that both bacterium-surface interactions and intracellular bioaccumulation participate in silver removal from aqueous solutions. The ability of Klebsiella sp. 3S1 to form silver chloride nanoparticles with interesting potential applications is also discussed.
NASA Astrophysics Data System (ADS)
Yang, J.; Medlyn, B.; De Kauwe, M. G.; Duursma, R.
2017-12-01
Leaf Area Index (LAI) is a key variable in modelling terrestrial vegetation, because it has a major impact on carbon, water and energy fluxes. However, LAI is difficult to predict: several recent intercomparisons have shown that modelled LAI differs significantly among models, and between models and satellite-derived estimates. Empirical studies show that long-term mean LAI is strongly related to mean annual precipitation. This observation is predicted by the theory of ecohydrological equilibrium, which provides a promising alternative means to predict steady-state LAI. We implemented this theory in a simple optimisation model. We hypothesized that, when water availability is limited, plants should adjust long-term LAI and stomatal behaviour (g1) to maximize net canopy carbon export, under the constraint that canopy transpiration is a fixed fraction of total precipitation. We evaluated the predicted LAI (Lopt) for Australia against ground-based observations of LAI at 135 sites, and against continental-scale satellite-derived estimates. For the site-level data, the RMSE of predicted Lopt was 0.14 m^2 m^-2, similar to the RMSE obtained by comparing the data against nine-year mean satellite-derived LAI at those sites. Continentally, Lopt had an R^2 of over 70% when compared with satellite-derived LAI, which is comparable to the R^2 obtained when different satellite products are compared against each other. The predicted response of Lopt to the increase in atmospheric CO2 over the last 30 years also agreed with satellite-derived estimates. Our results indicate that long-term equilibrium LAI can be successfully predicted from a simple application of ecohydrological theory. We suggest that this theory could be usefully incorporated into terrestrial vegetation models to improve their predictions of LAI.
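A minimal sketch of the optimisation idea (all functional forms and parameter values below are toy assumptions, not the paper's canopy model): choose the LAI that maximises net carbon gain subject to a ceiling on canopy water use, so a tighter water budget pushes the optimal LAI down.

```python
import math

def optimal_lai(gpp_max, k, leaf_cost, trans_max, water_cap, dl=0.01):
    """Scan LAI values and keep the one maximising net carbon export
    while respecting a water-use ceiling (a toy stand-in for
    ecohydrological equilibrium)."""
    best_l, best_net = 0.0, float("-inf")
    l = 0.0
    while l <= 10.0:
        fabs = 1.0 - math.exp(-k * l)          # fraction of light absorbed
        net = gpp_max * fabs - leaf_cost * l   # carbon gain minus leaf cost
        if trans_max * fabs <= water_cap and net > best_net:
            best_l, best_net = l, net
        l += dl
    return best_l

# Hypothetical parameter values, for illustration only.
l_wet = optimal_lai(gpp_max=2.0, k=0.5, leaf_cost=0.1, trans_max=1.0, water_cap=1.0)
l_dry = optimal_lai(gpp_max=2.0, k=0.5, leaf_cost=0.1, trans_max=1.0, water_cap=0.6)
```

With ample water the carbon trade-off alone sets the optimum; with the tighter water cap the constraint binds and the predicted LAI is lower, mirroring the precipitation-LAI relationship the abstract describes.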
A cooperation model based on CVaR measure for a two-stage supply chain
NASA Astrophysics Data System (ADS)
Xu, Xinsheng; Meng, Zhiqing; Shen, Rui
2015-07-01
In this paper, we introduce a cooperation model (CM) for a two-stage supply chain consisting of a manufacturer and a retailer. In this model, it is supposed that the objective of the manufacturer is to maximise his/her profit, while the objective of the retailer is to minimise his/her CVaR, thereby controlling the risk originating from fluctuations in market demand. In reality, the manufacturer and the retailer each choose their own decisions as to wholesale price and order quantity to optimise their own objectives, with the result that the expected decision of the manufacturer and that of the retailer may conflict with each other. To achieve cooperation, therefore, the manufacturer and the retailer both need to make some concessions. The proposed model aims to coordinate the decisions of the manufacturer and the retailer, and to balance the concessions the two make in their cooperation. We introduce an s*-optimal equilibrium solution in this model, which can determine the minimum concession that the manufacturer and the retailer need to make for their cooperation, and prove that the s*-optimal equilibrium solution can be obtained by solving a goal programming problem. Further, the case of different concessions made by the manufacturer and the retailer is also discussed. Numerical results show that the CM is efficient in dealing with cooperation between the manufacturer and the retailer.
Vives I Batlle, J; Beresford, N A; Beaugelin-Seiller, K; Bezhenar, R; Brown, J; Cheng, J-J; Ćujić, M; Dragović, S; Duffa, C; Fiévet, B; Hosseini, A; Jung, K T; Kamboj, S; Keum, D-K; Kryshev, A; LePoire, D; Maderich, V; Min, B-I; Periáñez, R; Sazykina, T; Suh, K-S; Yu, C; Wang, C; Heling, R
2016-03-01
We report an inter-comparison of eight models designed to predict the radiological exposure of radionuclides in marine biota. The models were required to simulate dynamically the uptake and turnover of radionuclides by marine organisms. Model predictions of radionuclide uptake and turnover using kinetic calculations based on biological half-life (TB1/2) and/or more complex metabolic modelling approaches were used to predict activity concentrations and, consequently, dose rates of 90Sr, 131I and 137Cs to fish, crustaceans, macroalgae and molluscs under circumstances where the water concentrations are changing with time. For comparison, the ERICA Tool, a model commonly used in environmental assessment, and which uses equilibrium concentration ratios, was also used. As input to the models we used hydrodynamic forecasts of water and sediment activity concentrations for a simulated scenario reflecting the Fukushima accident releases. Although model variability is important, the inter-comparison gives logical results, in that the dynamic models consistently predict a pattern of delayed rise of activity concentration in biota and slow decline, instead of the instantaneous equilibrium with the activity concentration in seawater predicted by the ERICA Tool. The differences between ERICA and the dynamic models increase the shorter TB1/2 becomes; however, there is significant variability between models, underpinned by parameter and methodological differences between them. The need to validate the dynamic models used in this inter-comparison has been highlighted, particularly with regard to optimisation of the model biokinetic parameters.
Integrated Experimental and Modelling Research for Non-Ferrous Smelting and Recycling Systems
NASA Astrophysics Data System (ADS)
Jak, Evgueni; Hidayat, Taufiq; Shishin, Denis; Mehrjardi, Ata Fallah; Chen, Jiang; Decterov, Sergei; Hayes, Peter
The chemistries of industrial pyrometallurgical non-ferrous smelting and recycling processes are becoming increasingly complex. Optimisation of process conditions, charge composition, temperature, oxygen partial pressure, and the partitioning of minor elements between phases and different process streams requires an accurate description of phase equilibria and thermodynamics, which are the focus of the present research. The experiments involve high-temperature equilibration in controlled gas atmospheres, rapid quenching, and direct measurement of equilibrium phase compositions with quantitative microanalytical techniques, including electron probe X-ray microanalysis and laser ablation ICP-MS. The thermodynamic modelling is undertaken using the computer package FactSage, with the quasi-chemical model for the liquid slag phase and other advanced models. Experimental and modelling studies are combined into an integrated research program focused on the major-element Cu-Pb-Fe-O-Si-S system, the slagging elements Al, Ca and Mg, and other minor elements. The ongoing development of the research methodologies has resulted in significant advances in research capabilities. Examples of applications are given.
Automated model optimisation using the Cylc workflow engine (Cyclops v1.0)
NASA Astrophysics Data System (ADS)
Gorman, Richard M.; Oliver, Hilary J.
2018-06-01
Most geophysical models include many parameters that are not fully determined by theory, and can be tuned to improve the model's agreement with available data. We might attempt to automate this tuning process in an objective way by employing an optimisation algorithm to find the set of parameters that minimises a cost function derived from comparing model outputs with measurements. A number of algorithms are available for solving optimisation problems, in various programming languages, but interfacing such software to a complex geophysical model simulation presents certain challenges. To tackle this problem, we have developed an optimisation suite (Cyclops) based on the Cylc workflow engine that implements a wide selection of optimisation algorithms from the NLopt Python toolbox (Johnson, 2014). The Cyclops optimisation suite can be used to calibrate any modelling system that has itself been implemented as a (separate) Cylc model suite, provided it includes computation and output of the desired scalar cost function. A growing number of institutions are using Cylc to orchestrate complex distributed suites of interdependent cycling tasks within their operational forecast systems, and in such cases application of the optimisation suite is particularly straightforward. As a test case, we applied Cyclops to calibrate a global implementation of the WAVEWATCH III (v4.18) third-generation spectral wave model, forced by ERA-Interim input fields. This was calibrated over a 1-year period (1997), before the calibrated model was applied to a full (1979-2016) wave hindcast. The chosen error metric was the spatial average of the root mean square error of hindcast significant wave height compared with collocated altimeter records. We describe the results of a calibration in which up to 19 parameters were optimised.
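The calibration pattern described above can be sketched as a black-box loop in which each cost evaluation stands in for one full model-suite run. The derivative-free "compass" pattern search below is a hand-rolled stand-in for the NLopt algorithms Cyclops actually drives, and the error surface is a hypothetical quadratic, not a wave-model hindcast.

```python
def compass_search(cost, x0, step=0.5, tol=1e-3, max_evals=2000):
    """Minimal derivative-free compass (pattern) search. Each call to
    cost() plays the role of one complete model run plus error metric."""
    x = list(x0)
    fx = cost(x)
    evals = 1
    while step > tol and evals < max_evals:
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                y = list(x)
                y[i] += delta
                fy = cost(y)
                evals += 1
                if fy < fx:          # accept any improving poll point
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5              # refine the stencil when stuck
    return x, fx

# Stand-in "model": a quadratic error surface in two tuning parameters,
# with hypothetical optimal values (1.2, -0.4).
def hindcast_error(params):
    a, b = params
    return (a - 1.2) ** 2 + (b + 0.4) ** 2

best, err = compass_search(hindcast_error, [0.0, 0.0])
```

Because each evaluation is expensive in the real setting (a year-long hindcast per candidate parameter set), gradient-free methods that tolerate noisy, costly evaluations are the natural choice.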
Van Geit, Werner; Gevaert, Michael; Chindemi, Giuseppe; Rössert, Christian; Courcol, Jean-Denis; Muller, Eilif B; Schürmann, Felix; Segev, Idan; Markram, Henry
2016-01-01
At many scales in neuroscience, appropriate mathematical models take the form of complex dynamical systems. Parameterizing such models to conform to the multitude of available experimental constraints is a global non-linear optimisation problem with a complex fitness landscape, requiring numerical techniques to find suitable approximate solutions. Stochastic optimisation approaches, such as evolutionary algorithms, have been shown to be effective, but the setting up of such optimisations and the choice of a specific search algorithm and its parameters are often non-trivial, requiring domain-specific expertise. Here we describe BluePyOpt, a Python package targeted at the broad neuroscience community to simplify this task. BluePyOpt is an extensible framework for data-driven model parameter optimisation that wraps and standardizes several existing open-source tools. It simplifies the task of creating and sharing these optimisations, and the associated techniques and knowledge. This is achieved by abstracting the optimisation and evaluation tasks into various reusable and flexible discrete elements according to established best practices. Further, BluePyOpt provides methods for setting up both small- and large-scale optimisations on a variety of platforms, ranging from laptops to Linux clusters and cloud-based compute infrastructures. The versatility of the BluePyOpt framework is demonstrated by working through three representative neuroscience-specific use cases.
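The evolutionary approach can be illustrated with a minimal (1+lambda) evolution strategy. This is a toy stand-in for the optimisers BluePyOpt wraps, and the fitness function and parameter values are hypothetical.

```python
import random

def one_plus_lambda_es(fitness, x0, sigma=0.5, lam=8, gens=60, seed=1):
    """Minimal (1+lambda) evolution strategy: mutate the parent with
    Gaussian noise and keep any child that improves the fitness."""
    rng = random.Random(seed)
    parent = list(x0)
    f_parent = fitness(parent)
    for _ in range(gens):
        for _ in range(lam):
            child = [x + rng.gauss(0.0, sigma) for x in parent]
            f_child = fitness(child)
            if f_child < f_parent:       # elitist selection
                parent, f_parent = child, f_child
        sigma *= 0.95                    # slowly anneal the mutation width
    return parent, f_parent

# Toy "electrophysiology error": squared distance of two model parameters
# (hypothetical channel conductances) from target values.
target = [3.0, -1.0]
err = lambda p: sum((a - b) ** 2 for a, b in zip(p, target))
best, f_best = one_plus_lambda_es(err, [0.0, 0.0])
```

In BluePyOpt the fitness would instead score simulated traces against recorded electrophysiology features, but the select-and-mutate loop has the same shape.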
Mission Command: Elasticity, Equilibrium, Culture, and Intent
2006-11-01
...highlights the organisation's "elasticity". Compared with the decentralised organisation, a centralised force possesses a much larger reserve... ...may have a lower potential implicit intent than another. The extent to which the organisation's command culture... ...is that the organisation is optimised, by design, to operate at this point on the continuum. [Figure residue: Centralised, Decentralised, Shared Intent, Implicit...]
Echtermeyer, Alexander; Amar, Yehia; Zakrzewski, Jacek; Lapkin, Alexei
2017-01-01
A recently described C(sp3)-H activation reaction to synthesise aziridines was used as a model reaction to demonstrate the methodology of developing a process model using model-based design of experiments (MBDoE) and self-optimisation approaches in flow. The two approaches are compared in terms of experimental efficiency. The self-optimisation approach required the fewest experiments to reach the specified objectives of cost and product yield, whereas the MBDoE approach enabled rapid generation of a process model.
Optimisation of a Generic Ionic Model of Cardiac Myocyte Electrical Activity
Guo, Tianruo; Al Abed, Amr; Lovell, Nigel H.; Dokos, Socrates
2013-01-01
A generic cardiomyocyte ionic model, whose complexity lies between a simple phenomenological formulation and a biophysically detailed ionic membrane current description, is presented. The model provides a user-defined number of ionic currents, employing two-gate Hodgkin-Huxley type kinetics. Its generic nature allows accurate reconstruction of action potential waveforms recorded experimentally from a range of cardiac myocytes. Using a multiobjective optimisation approach, the generic ionic model was optimised to accurately reproduce multiple action potential waveforms recorded from central and peripheral sinoatrial nodes and right atrial and left atrial myocytes from rabbit cardiac tissue preparations, under different electrical stimulus protocols and pharmacological conditions. When fitted simultaneously to multiple datasets, the time course of several physiologically realistic ionic currents could be reconstructed. Model behaviours tend to be well identified when extra experimental information is incorporated into the optimisation. PMID:23710254
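The building block of such a generic model is a current with two-gate Hodgkin-Huxley type kinetics, I = g_max * m * h * (V - E_rev). A minimal sketch of one current's steady-state evaluation follows; all parameter values are hypothetical, chosen only to illustrate the form.

```python
import math

def gate_steady_state(v, v_half, slope):
    """Boltzmann steady-state curve for an activation or inactivation
    gate; a negative slope gives an inactivation-type curve."""
    return 1.0 / (1.0 + math.exp(-(v - v_half) / slope))

def ionic_current(v, g_max, m, h, e_rev):
    """Generic two-gate Hodgkin-Huxley type current:
    I = g_max * m * h * (V - E_rev)."""
    return g_max * m * h * (v - e_rev)

# Hypothetical parameters for one illustrative inward current.
v = -20.0
m = gate_steady_state(v, v_half=-30.0, slope=5.0)    # activation gate
h = gate_steady_state(v, v_half=-60.0, slope=-6.0)   # inactivation gate
i = ionic_current(v, g_max=1.0, m=m, h=h, e_rev=60.0)
```

In the optimisation described above, quantities such as g_max, the half-activation voltages and the slopes are the free parameters fitted to the recorded action potential waveforms.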
NASA Astrophysics Data System (ADS)
Hadia, Sarman K.; Thakker, R. A.; Bhatt, Kirit R.
2016-05-01
The study proposes an application of evolutionary algorithms, specifically the artificial bee colony (ABC), a variant ABC and particle swarm optimisation (PSO), to extract the parameters of a metal-oxide-semiconductor field-effect transistor (MOSFET) model. These algorithms are applied to the MOSFET parameter extraction problem using a Pennsylvania surface potential model. MOSFET parameter extraction procedures involve reducing the error between measured and modelled data. This study shows that the ABC algorithm optimises the parameter values based on the intelligent activities of honey bee swarms. Some modifications have also been applied to the basic ABC algorithm. Particle swarm optimisation is a population-based stochastic optimisation method inspired by bird flocking behaviour. The performances of these algorithms are compared with respect to the quality of the solutions. The simulation results of this study show that the PSO algorithm performs better than the variant ABC and basic ABC algorithms for the parameter extraction of the MOSFET model; the implementation of the ABC algorithm is also shown to be simpler than that of the PSO algorithm.
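The error-minimisation loop at the heart of such parameter extraction can be sketched with a toy example. The snippet below is a minimal particle swarm optimisation that recovers the parameters of a hypothetical square-law MOSFET model (a stand-in, not the Pennsylvania surface potential model) from synthetic "measured" data; all parameter values and bounds are illustrative.

```python
import random

random.seed(42)

def drain_current(v_gs, k, v_t):
    """Toy square-law MOSFET model (saturation region)."""
    return k * max(v_gs - v_t, 0.0) ** 2

# Synthetic "measured" data generated from known true parameters.
TRUE_K, TRUE_VT = 0.5, 0.7
v_points = [1.0, 1.5, 2.0, 3.0, 4.0, 5.0]
measured = [drain_current(v, TRUE_K, TRUE_VT) for v in v_points]

def sse(params):
    """Sum of squared errors between modelled and measured currents."""
    k, v_t = params
    return sum((drain_current(v, k, v_t) - m) ** 2
               for v, m in zip(v_points, measured))

def pso(cost, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Standard global-best PSO with position clamping to the bounds."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

best, best_cost = pso(sse, [(0.01, 2.0), (0.0, 2.0)])
```

On this smooth two-parameter error surface the swarm converges tightly to the true values; real surface potential models have many more parameters and a far rougher landscape.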
NASA Astrophysics Data System (ADS)
Grosso, Juan M.; Ocampo-Martinez, Carlos; Puig, Vicenç
2017-10-01
This paper proposes a distributed model predictive control approach designed to work in a cooperative manner for controlling flow-based networks showing periodic behaviours. Under this distributed approach, local controllers cooperate in order to enhance the performance of the whole flow network, avoiding the use of a coordination layer. Instead, controllers use both the monolithic model of the network and the given global cost function to optimise their local control inputs, while taking into account the effect of their decisions on the remaining subsystems that make up the entire network. In this sense, a global (all-to-all) communication strategy is considered. Although Pareto optimality cannot be reached due to the existence of non-sparse coupling constraints, asymptotic convergence to a Nash equilibrium is guaranteed. The resultant strategy is tested and its effectiveness is shown when applied to a large-scale complex flow-based network: the Barcelona drinking water supply system.
Muñoz, Antonio Jesús; Espínola, Francisco; Moya, Manuel; Ruiz, Encarnación
2015-01-01
Lead biosorption by Klebsiella sp. 3S1 isolated from a wastewater treatment plant was investigated through a Rotatable Central Composite Experimental Design. The optimisation study indicated the following optimal values of the operating variables: 0.4 g/L of biosorbent dosage, pH 5, and 34°C. According to the results of the kinetic studies, the biosorption process can be described by a two-step process, one rapid, almost instantaneous, and one slower, both contributing significantly to the overall biosorption; the model that best fitted the experimental results was the pseudo-second-order model. The equilibrium studies showed a maximum lead uptake value of 140.19 mg/g according to the Langmuir model. The mechanism study revealed that lead ions were bioaccumulated into the cytoplasm and adsorbed on the cell surface. The bacterium Klebsiella sp. 3S1 shows good potential for the bioremoval of lead in an inexpensive and effective process. PMID:26504824
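The Langmuir fit mentioned above is easy to reproduce in outline. The sketch below generates synthetic equilibrium data from the Langmuir isotherm (using the reported q_max of 140.19 mg/g; the affinity constant b and the concentration grid are invented for illustration) and recovers both parameters from the standard linearised form C/q = C/q_max + 1/(q_max*b) by ordinary least squares.

```python
# Synthetic equilibrium data from known Langmuir parameters.
# q = q_max * b * C / (1 + b * C)
Q_MAX, B = 140.19, 0.05        # q_max from the abstract; b is hypothetical
conc = [10, 25, 50, 100, 200, 400]
q = [Q_MAX * B * c / (1 + B * c) for c in conc]

# Linearised Langmuir: C/q is a straight line in C,
# slope = 1/q_max, intercept = 1/(q_max * b).
x = conc
y = [c / qi for c, qi in zip(conc, q)]
n = len(x)
sx, sy = sum(x), sum(y)
sxx = sum(xi * xi for xi in x)
sxy = sum(xi * yi for xi, yi in zip(x, y))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

q_max_fit = 1 / slope           # recovered maximum uptake (mg/g)
b_fit = slope / intercept       # recovered affinity constant (L/mg)
```

With noise-free data the recovery is exact up to floating-point error; with real measurements a nonlinear fit is usually preferred over the linearised form.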
NASA Astrophysics Data System (ADS)
Sundaramoorthy, Kumaravel
2017-02-01
Hybrid energy system (HES) based electricity generation has become an attractive solution for rural electrification nowadays. Economically feasible and technically reliable HESs are solidly based on an optimisation stage. This article discusses an optimal unit-sizing model with the objective of minimising the total cost of the HES. Three typical rural sites from the southern part of India have been selected for the application of the developed optimisation methodology. Feasibility studies and sensitivity analysis on the optimal HES are discussed elaborately in this article. A comparison has been carried out with the Hybrid Optimization Model for Electric Renewable optimisation model for the three sites. The optimal HES is found to have a lower total net present cost and cost of energy than that of the existing method.
Giri, Anupam; Zelinkova, Zuzana; Wenzl, Thomas
2017-12-01
For the implementation of Regulation (EC) No 2065/2003, related to smoke flavourings used or intended for use in or on foods, a method based on solid-phase microextraction (SPME) GC/MS was developed for the characterisation of liquid smoke products. A statistically based experimental design (DoE) was used for method optimisation. The best general conditions to quantitatively analyse the liquid smoke compounds were obtained with a polydimethylsiloxane/divinylbenzene (PDMS/DVB) fibre, 60°C extraction temperature, 30 min extraction time, 250°C desorption temperature, 180 s desorption time, 15 s agitation time, and 250 rpm agitation speed. Under the optimised conditions, 119 wood pyrolysis products, including furan/pyran derivatives, phenols, guaiacol, syringol, benzenediol and their derivatives, cyclic ketones, and several other heterocyclic compounds, were identified. The proposed method was repeatable (RSD% <5) and the calibration functions were linear for all compounds under study. Nine isotopically labelled internal standards were used to improve the quantification of analytes by compensating for matrix effects that might affect headspace equilibrium and the extractability of compounds. The optimised isotope dilution SPME-GC/MS analytical method proved to be fit for purpose, allowing the rapid identification and quantification of volatile compounds in liquid smoke flavourings.
NASA Astrophysics Data System (ADS)
Wang, Congsi; Wang, Yan; Wang, Zhihai; Wang, Meng; Yuan, Shuai; Wang, Weifeng
2018-04-01
It is well known that calculating and reducing the radar cross section (RCS) of an active phased array antenna (APAA) is both difficult and complicated, and balancing radiating and scattering performance while reducing the RCS remains an unresolved problem. Therefore, this paper develops a structure and scattering array factor coupling model of the APAA based on the phase errors of the radiating elements generated by structural distortion and installation errors of the array. To obtain optimal radiating and scattering performance, an integrated optimisation model is built to optimise the installation height of all radiating elements in the normal direction of the array, in which the particle swarm optimisation method is adopted and the gain loss and scattering array factor are selected as the fitness function. The simulation indicates that the proposed coupling model and integrated optimisation method can effectively decrease the RCS while simultaneously guaranteeing the necessary radiating performance, which demonstrates important application value in the engineering design and structural evaluation of APAAs.
NASA Astrophysics Data System (ADS)
Liu, Ming; Zhao, Lindu
2012-08-01
Demand for emergency resources is usually uncertain and varies quickly in an anti-bioterrorism system. Moreover, emergency resources allocated to the epidemic areas in an early rescue cycle will affect demand in later cycles. In this article, an integrated and dynamic optimisation model with time-varying demand, based on the epidemic diffusion rule, is constructed. A heuristic algorithm coupled with the MATLAB mathematical programming solver is adopted to solve the optimisation model. In what follows, the application of the optimisation model, as well as a short sensitivity analysis of the key parameters in the time-varying demand forecast model, is presented. The results show that both the model and the solution algorithm are useful in practice, and that both objectives (inventory level and emergency rescue cost) can be controlled effectively. Thus, the model can provide guidelines for decision makers coping with emergency rescue problems under uncertain demand, and offers a useful reference for bioterrorism-related issues.
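The idea of a demand forecast driven by epidemic diffusion can be illustrated with a simple discrete-time SIR model. This sketch is not the article's forecast model; the transmission and recovery rates, population size and kits-per-case factor are all invented for illustration.

```python
def sir_demand(days, beta=0.3, gamma=0.1, n=1e6, i0=100.0, kits_per_case=2.0):
    """Discrete-time SIR epidemic; daily resource demand is taken as
    proportional to the number of new infections that day."""
    s, i = n - i0, i0
    demand = []
    for _ in range(days):
        new_inf = beta * s * i / n   # new infections this day
        rec = gamma * i              # recoveries this day
        s -= new_inf
        i += new_inf - rec
        demand.append(kits_per_case * new_inf)
    return demand

d = sir_demand(120)
```

The resulting demand curve rises, peaks and decays, which is exactly the time-varying profile that a static allocation plan cannot track and a dynamic optimisation model can.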
Reservoir optimisation using El Niño information. Case study of Daule Peripa (Ecuador)
NASA Astrophysics Data System (ADS)
Gelati, Emiliano; Madsen, Henrik; Rosbjerg, Dan
2010-05-01
The optimisation of water resources systems requires the ability to produce runoff scenarios that are consistent with available climatic information. We approach stochastic runoff modelling with a Markov-modulated autoregressive model with exogenous input, which belongs to the class of Markov-switching models. The model assumes runoff parameterisation to be conditioned on a hidden climatic state following a Markov chain, whose state transition probabilities depend on climatic information. This approach allows stochastic modelling of non-stationary runoff, as runoff anomalies are described by a mixture of autoregressive models with exogenous input, each one corresponding to a climate state. We calibrate the model on the inflows of the Daule Peripa reservoir located in western Ecuador, where the occurrence of El Niño leads to anomalously heavy rainfall caused by positive sea surface temperature anomalies along the coast. El Niño - Southern Oscillation (ENSO) information is used to condition the runoff parameterisation. Inflow predictions are realistic, especially at the occurrence of El Niño events. The Daule Peripa reservoir serves a hydropower plant and a downstream water supply facility. Using historical ENSO records, synthetic monthly inflow scenarios are generated for the period 1950-2007. These scenarios are used as input to perform stochastic optimisation of the reservoir rule curves with a multi-objective Genetic Algorithm (MOGA). The optimised rule curves are assumed to be the reservoir base policy. ENSO standard indices are currently forecasted at a monthly time scale with nine-month lead time.
These forecasts are used to perform stochastic optimisation of reservoir releases at each monthly time step according to the following procedure: (i) nine-month inflow forecast scenarios are generated using ENSO forecasts; (ii) a MOGA is set up to optimise the upcoming nine monthly releases; (iii) the optimisation is carried out by simulating the releases on the inflow forecasts, and by applying the base policy on a subsequent synthetic inflow scenario in order to account for long-term costs; (iv) the optimised release for the first month is implemented; (v) the state of the system is updated and (i), (ii), (iii), and (iv) are iterated for the following time step. The results highlight the advantages of using a climate-driven stochastic model to produce inflow scenarios and forecasts for reservoir optimisation, showing potential improvements with respect to the current management. Dynamic programming was used to find the best possible release time series given the inflow observations, in order to benchmark any possible operational improvement.
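A Markov-modulated autoregressive generator of the kind used here can be sketched in a few lines. The snippet below draws a hidden climate state from a two-state Markov chain (normal / El Niño) and generates inflows from a state-dependent AR(1) anomaly process; the transition probabilities, means and variances are invented for illustration, and the real model additionally conditions the transition probabilities on ENSO information and exogenous input.

```python
import random

random.seed(1)

# Hypothetical two-state chain: 0 = normal, 1 = El Nino.
P = [[0.9, 0.1],      # transition probabilities from state 0
     [0.5, 0.5]]      # transition probabilities from state 1
ar_coef = [0.6, 0.4]  # AR(1) coefficient per state
mean    = [100.0, 180.0]  # mean monthly inflow per state (arbitrary units)
sigma   = [10.0, 30.0]    # anomaly innovation std dev per state

def simulate(months):
    """Generate one synthetic monthly inflow scenario."""
    state, anomaly, series = 0, 0.0, []
    for _ in range(months):
        # sample the next hidden climate state
        r, cum = random.random(), 0.0
        for s, p in enumerate(P[state]):
            cum += p
            if r < cum:
                state = s
                break
        # state-dependent AR(1) anomaly around the state mean
        anomaly = ar_coef[state] * anomaly + random.gauss(0, sigma[state])
        series.append(mean[state] + anomaly)
    return series

scenario = simulate(120)  # a ten-year synthetic inflow scenario
```

Many such scenarios, generated under historical or forecast ENSO states, form the input ensemble for the stochastic rule-curve optimisation described above.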
A management and optimisation model for water supply planning in water deficit areas
NASA Astrophysics Data System (ADS)
Molinos-Senante, María; Hernández-Sancho, Francesc; Mocholí-Arce, Manuel; Sala-Garrido, Ramón
2014-07-01
The integrated water resources management approach has proven to be a suitable option for efficient, equitable and sustainable water management. In water-poor regions experiencing acute and/or chronic shortages, optimisation techniques are a useful tool for supporting the decision process of water allocation. In order to maximise the value of water use, an optimisation model was developed which involves multiple supply sources (conventional and non-conventional) and multiple users. Penalties, representing monetary losses in the event of an unfulfilled water demand, have been incorporated into the objective function. This model represents a novel approach which considers water distribution efficiency and the physical connections between water supply and demand points. Subsequent empirical testing using data from a Spanish Mediterranean river basin demonstrated the usefulness of the global optimisation model to solve existing water imbalances at the river basin level.
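The penalty-for-unmet-demand objective can be illustrated with a tiny allocation sketch. The network, capacities, demands and penalty prices below are all invented, and a simple greedy rule (serve the highest-penalty user first over the available physical connections) stands in for the paper's model, which would solve the allocation exactly, e.g. as a linear programme.

```python
# Hypothetical supply sources, users and physical connections.
capacity = {"reservoir": 50, "desalination": 20, "reuse": 15}
demand = {"urban": 40, "agriculture": 35, "industry": 15}
penalty = {"urban": 10.0, "agriculture": 4.0, "industry": 6.0}  # cost per unit unmet
links = {("reservoir", "urban"), ("reservoir", "agriculture"),
         ("desalination", "urban"), ("desalination", "industry"),
         ("reuse", "agriculture"), ("reuse", "industry")}

def allocate(capacity, demand, penalty, links):
    """Greedy allocation: serve users in decreasing order of penalty,
    respecting source capacities and the connection topology."""
    cap = dict(capacity)
    unmet = dict(demand)
    for user in sorted(demand, key=penalty.get, reverse=True):
        for src in cap:
            if (src, user) in links and unmet[user] > 0:
                sent = min(cap[src], unmet[user])
                cap[src] -= sent
                unmet[user] -= sent
    total_penalty = sum(penalty[u] * unmet[u] for u in unmet)
    return unmet, total_penalty

unmet, total_penalty = allocate(capacity, demand, penalty, links)
```

In this toy instance total supply (85) falls short of total demand (90), so the objective quantifies which shortfall is cheapest to accept; a greedy rule is not optimal in general, which is precisely why an optimisation model is used.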
Optimisation of lateral car dynamics taking into account parameter uncertainties
NASA Astrophysics Data System (ADS)
Busch, Jochen; Bestle, Dieter
2014-02-01
Simulation studies on an active all-wheel-steering car show that disturbances of vehicle parameters have a high influence on lateral car dynamics. This motivates the need for a design that is robust against such parameter uncertainties. A specific parametrisation is established combining deterministic, velocity-dependent steering control parameters with partly uncertain, velocity-independent vehicle parameters for simultaneous use in a numerical optimisation process. Model-based objectives are formulated and summarised in a multi-objective optimisation problem, where especially the lateral steady-state behaviour is improved by an adaptation strategy based on measurable uncertainties. The normally distributed uncertainties are generated by optimal Latin hypercube sampling, and a response-surface-based strategy helps to cut down time-consuming model evaluations, which makes it possible to use a genetic optimisation algorithm. Optimisation results are discussed in different criterion spaces and the achieved improvements confirm the validity of the proposed procedure.
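Latin hypercube sampling, used above to generate the uncertainty samples, can be sketched as follows. This is a basic implementation in the unit hypercube, without the "optimal" spreading criterion (which additionally maximises point-to-point distances) and without the mapping to normally distributed parameters.

```python
import random

random.seed(7)

def latin_hypercube(n_samples, n_dims):
    """Basic Latin hypercube: exactly one sample falls in each of the
    n_samples equal-probability strata of every dimension."""
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        # one point per stratum [i/n, (i+1)/n), then shuffle the pairing
        strata = [(i + random.random()) / n_samples for i in range(n_samples)]
        random.shuffle(strata)
        for i in range(n_samples):
            samples[i][d] = strata[i]
    return samples

pts = latin_hypercube(10, 3)
```

Compared with plain Monte Carlo, this stratification guarantees coverage of each parameter's full range with far fewer model evaluations, which is why it pairs well with expensive vehicle dynamics simulations.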
Distributed optimisation problem with communication delay and external disturbance
NASA Astrophysics Data System (ADS)
Tran, Ngoc-Tu; Xiao, Jiang-Wen; Wang, Yan-Wu; Yang, Wu
2017-12-01
This paper investigates the distributed optimisation problem for multi-agent systems (MASs) in the simultaneous presence of external disturbance and communication delay. To solve this problem, a two-step design scheme is introduced. In the first step, based on the internal model principle, an internal model term is constructed to compensate the disturbance asymptotically. In the second step, a distributed optimisation algorithm is designed for the MASs under the simultaneous presence of disturbance and communication delay. In the proposed algorithm, each agent interacts with its neighbours through the connected topology, and delay occurs during the information exchange. By utilising a Lyapunov-Krasovskii functional, delay-dependent conditions are derived for both slowly and fast time-varying delays to ensure convergence of the algorithm to the optimal solution of the optimisation problem. Several numerical simulation examples are provided to illustrate the effectiveness of the theoretical results.
Ashrafi, Parivash; Sun, Yi; Davey, Neil; Adams, Roderick G; Wilkinson, Simon C; Moss, Gary Patrick
2018-03-01
The aim of this study was to investigate how to improve predictions from Gaussian Process models by optimising the model hyperparameters. Optimisation methods, including Grid Search, Conjugate Gradient, Random Search, Evolutionary Algorithm and Hyper-prior, were evaluated and applied to previously published data. Data sets were also altered in a structured manner to reduce their size, which retained the range, or 'chemical space' of the key descriptors to assess the effect of the data range on model quality. The Hyper-prior Smoothbox kernel results in the best models for the majority of data sets, and they exhibited significantly better performance than benchmark quantitative structure-permeability relationship (QSPR) models. When the data sets were systematically reduced in size, the different optimisation methods generally retained their statistical quality, whereas benchmark QSPR models performed poorly. The design of the data set, and possibly also the approach to validation of the model, is critical in the development of improved models. The size of the data set, if carefully controlled, was not generally a significant factor for these models and that models of excellent statistical quality could be produced from substantially smaller data sets. © 2018 Royal Pharmaceutical Society.
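Two of the hyperparameter optimisation methods compared here, Grid Search and Random Search, can be contrasted on a stand-in objective. The "model quality" surface below is entirely invented (a real study would score a Gaussian Process fit, e.g. by marginal likelihood or cross-validation); the point is only the search pattern: a log-spaced grid versus log-uniform random draws with the same evaluation budget.

```python
import math
import random

random.seed(0)

def neg_val_error(lengthscale, noise):
    """Stand-in for a GP model-quality score (hypothetical surface,
    peaked at lengthscale = 10**0.3, noise = 10**-2)."""
    return -((math.log10(lengthscale) - 0.3) ** 2
             + (math.log10(noise) + 2.0) ** 2)

# Grid search over log-spaced hyperparameter values, 13 x 13 = 169 points.
grid = [10 ** (-4 + 0.5 * i) for i in range(13)]   # 1e-4 .. 1e2
best_grid = max(((l, n) for l in grid for n in grid),
                key=lambda p: neg_val_error(*p))

# Random search with the same budget of 169 draws, log-uniform.
draws = [(10 ** random.uniform(-4, 2), 10 ** random.uniform(-4, 2))
         for _ in range(169)]
best_rand = max(draws, key=lambda p: neg_val_error(*p))
```

Both searches land near the optimum here; the practical differences (grid resolution limits, random search's better scaling with dimension, gradient and evolutionary methods exploiting smoothness) are what the study quantifies on real permeability data.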
Critical review of membrane bioreactor models--part 2: hydrodynamic and integrated models.
Naessens, W; Maere, T; Ratkovich, N; Vedantam, S; Nopens, I
2012-10-01
Membrane bioreactor technology has existed for a couple of decades but has not yet conquered the market, owing to some serious drawbacks, of which the operational cost due to fouling is the major contributor. Knowledge build-up and optimisation for such complex systems can benefit heavily from mathematical modelling. In this paper, the vast literature on hydrodynamic and integrated MBR modelling is critically reviewed. Hydrodynamic models are used at different scales and focus mainly on fouling, with only little attention to system design/optimisation. Integrated models also focus on fouling, although the ones including costs lean towards optimisation. Trends are discussed, knowledge gaps are identified and interesting routes for further research are suggested. Copyright © 2012 Elsevier Ltd. All rights reserved.
Rani, K; Jahnen, A; Noel, A; Wolf, D
2015-07-01
In the last decade, several studies have emphasised the need to understand and optimise computed tomography (CT) procedures in order to reduce the radiation dose applied to paediatric patients. To evaluate the influence of the technical parameters on radiation dose and image quality, a statistical model has been developed using the design of experiments (DOE) method, which has been successfully applied in various fields (industry, biology and finance), here applied to CT procedures for the abdomen of paediatric patients. A Box-Behnken DOE was used in this study. Three mathematical models (contrast-to-noise ratio, noise and CTDIvol) depending on three factors (tube current, tube voltage and level of iterative reconstruction) were developed and validated. They will serve as a basis for the development of a CT protocol optimisation model. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Andrighetto, Luke M; Stevenson, Paul G; Pearson, James R; Henderson, Luke C; Conlan, Xavier A
2014-11-01
In-silico optimised two-dimensional high performance liquid chromatographic (2D-HPLC) separations of a model methamphetamine seizure sample are described, where an excellent match between simulated and real separations was observed. Targeted separation of model compounds was completed with significantly reduced method development time. This separation was completed in the heart-cutting mode of 2D-HPLC, where C18 columns were used in both dimensions, taking advantage of the selectivity difference between methanol and acetonitrile as the mobile phases. This method development protocol is most significant when optimising the separation of chemically similar compounds, as it eliminates potentially hours of trial-and-error injections to identify the optimised experimental conditions. After only four screening injections, the gradient profile for both 2D-HPLC dimensions could be optimised via simulations, ensuring the baseline resolution of diastereomers (ephedrine and pseudoephedrine) in 9.7 min. Depending on which diastereomer is present, the potential synthetic pathway can be categorised.
Multiobjective optimisation of bogie suspension to boost speed on curves
NASA Astrophysics Data System (ADS)
Milad Mousavi-Bideleh, Seyed; Berbyuk, Viktor
2016-01-01
To improve safety and maximum admissible speed in different operational scenarios, multiobjective optimisation of the bogie suspension components of a one-car railway vehicle model is considered. The vehicle model has 50 degrees of freedom and is developed in the multibody dynamics software SIMPACK. Track shift force, running stability, and risk of derailment are selected as safety objective functions. The improved maximum admissible speeds of the vehicle on curves are determined based on track plane accelerations up to 1.5 m/s2. To reduce the number of design parameters and improve computational efficiency, a global sensitivity analysis is performed using the multiplicative dimensional reduction method (M-DRM). A multistep optimisation routine based on a genetic algorithm (GA) and MATLAB/SIMPACK co-simulation is executed at three levels. The bogie conventional secondary and primary suspension components are chosen as the design parameters in the first two steps, respectively. The last step focuses on semi-active suspension: the input electrical current to magnetorheological yaw dampers is optimised to guarantee an appropriate safety level. Semi-active controllers are also applied and their effects on bogie dynamics are explored. The safety Pareto optimised results are compared with those associated with in-service values. The global sensitivity analysis and multistep approach significantly reduced the number of design parameters and improved the computational efficiency of the optimisation. Furthermore, using the optimised design parameter values makes it possible to run the vehicle up to 13% faster on curves while guaranteeing a satisfactory safety level. The results obtained can be used in Pareto optimisation and active bogie suspension design problems.
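Pareto optimisation over conflicting safety objectives ultimately reduces to extracting the non-dominated set of candidate designs. A generic sketch follows; the objective values are invented stand-ins, not SIMPACK results.

```python
def pareto_front(points):
    """Return the non-dominated subset, minimising all objectives:
    a point stays if no other point is at least as good everywhere."""
    front = []
    for p in points:
        dominated = any(all(q[i] <= p[i] for i in range(len(p))) and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# Hypothetical (track shift force, derailment risk) pairs for candidate
# suspension settings; both objectives are to be minimised.
designs = [(40.0, 0.30), (35.0, 0.40), (50.0, 0.20), (45.0, 0.35), (38.0, 0.32)]
front = pareto_front(designs)
```

Here (45.0, 0.35) is dominated by (40.0, 0.30) and drops out; the remaining designs are the trade-off curve from which an operator picks according to priorities.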
CAMELOT: Computational-Analytical Multi-fidElity Low-thrust Optimisation Toolbox
NASA Astrophysics Data System (ADS)
Di Carlo, Marilena; Romero Martin, Juan Manuel; Vasile, Massimiliano
2018-03-01
Computational-Analytical Multi-fidElity Low-thrust Optimisation Toolbox (CAMELOT) is a toolbox for the fast preliminary design and optimisation of low-thrust trajectories. It solves highly complex combinatorial problems to plan multi-target missions characterised by long spirals including different perturbations. To do so, CAMELOT implements a novel multi-fidelity approach combining analytical surrogate modelling and accurate computational estimations of the mission cost. Decisions are then made using two optimisation engines included in the toolbox: a single-objective global optimiser and a combinatorial optimisation algorithm. CAMELOT has been applied to a variety of case studies: from the design of interplanetary trajectories to the optimal de-orbiting of space debris and from the deployment of constellations to on-orbit servicing. In this paper, the main elements of CAMELOT are described and two examples, solved using the toolbox, are presented.
Tail mean and related robust solution concepts
NASA Astrophysics Data System (ADS)
Ogryczak, Włodzimierz
2014-01-01
Robust optimisation might be viewed as a multicriteria optimisation problem where the objectives correspond to the scenarios, although their probabilities are unknown or imprecise. The simplest robust solution concept represents a conservative approach focused on optimisation of the worst-case scenario results. A softer concept allows one to optimise the tail mean, thus combining performances under multiple worst scenarios. We show that when considering robust models that allow the probabilities to vary only within given intervals, the tail mean represents the robust solution only for upper-bounded probabilities. For arbitrary intervals of probabilities the corresponding robust solution may be expressed by optimisation of appropriately combined mean and tail mean criteria, thus remaining easily implementable with auxiliary linear inequalities. Moreover, we use the tail mean concept to develop linear-programming-implementable robust solution concepts related to risk-averse optimisation criteria.
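For discrete scenario sets the tail mean (the conditional mean over the worst fraction of scenarios) is straightforward to compute directly. The toy costs below are invented to show how the tail-mean criterion can rank decisions differently from the pure worst-case criterion.

```python
# Scenario costs for three candidate decisions (scenarios equiprobable).
outcomes = {
    "A": [10, 10, 10, 10, 40],   # usually cheap, one bad scenario
    "B": [34, 34, 34, 34, 34],   # uniformly mediocre
    "C": [5, 5, 5, 50, 50],      # cheap or expensive
}

def worst_case(costs):
    """Conservative criterion: only the single worst scenario matters."""
    return max(costs)

def tail_mean(costs, beta=0.4):
    """Mean of the worst beta fraction of scenarios (here the worst 2 of 5)."""
    k = max(1, round(beta * len(costs)))
    return sum(sorted(costs)[-k:]) / k

robust_wc = min(outcomes, key=lambda d: worst_case(outcomes[d]))
robust_tm = min(outcomes, key=lambda d: tail_mean(outcomes[d]))
```

The worst-case criterion prefers the flat decision B (worst cost 34), while the tail mean prefers A (worst-2 average 25), illustrating how averaging over several bad scenarios softens pure worst-case conservatism; in the paper this criterion is then embedded in linear programmes via auxiliary inequalities.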
Ribera, Esteban; Martínez-Sesmero, José Manuel; Sánchez-Rubio, Javier; Rubio, Rafael; Pasquau, Juan; Poveda, José Luis; Pérez-Mitru, Alejandro; Roldán, Celia; Hernández-Novoa, Beatriz
2018-03-01
The objective of this study is to estimate the economic impact associated with the optimisation of triple antiretroviral treatment (ART) in patients with undetectable viral load according to the recommendations from the GeSIDA/PNS (2015) Consensus, and the applicability of those recommendations in Spanish clinical practice. A pharmacoeconomic model was developed based on data from a National Hospital Prescription Survey on ART (2014) and the A-I evidence recommendations for the optimisation of ART from the GeSIDA/PNS (2015) consensus. The optimisation model took into account the willingness to optimise a particular regimen and other assumptions, and the results were validated by an expert panel in HIV infection (Infectious Disease Specialists and Hospital Pharmacists). The analysis was conducted from the NHS perspective, considering the annual wholesale price and accounting for the deductions stated in RD-Law 8/2010 and the VAT. The expert panel selected six optimisation strategies, and estimated that 10,863 (13.4%) of the 80,859 patients in Spain currently on triple ART would be candidates to optimise their ART, leading to savings of €15.9M/year (2.4% of total triple ART drug cost). The most feasible strategies (>40% of the candidate patients for optimisation, n=4,556) would be optimisations to ATV/r+3TC therapy. These would produce savings between €653 and €4,797 per patient per year depending on the baseline triple ART. Implementation into Spanish clinical practice of the main optimisation strategies recommended in the GeSIDA/PNS (2015) Consensus, especially those based on dual therapy with ATV/r+3TC, would lead to considerable savings, thus contributing to the control of pharmaceutical expenditure and NHS sustainability. Copyright © 2016 Elsevier España, S.L.U. and Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica. All rights reserved.
Sampling design optimisation for rainfall prediction using a non-stationary geostatistical model
NASA Astrophysics Data System (ADS)
Wadoux, Alexandre M. J.-C.; Brus, Dick J.; Rico-Ramirez, Miguel A.; Heuvelink, Gerard B. M.
2017-09-01
The accuracy of spatial predictions of rainfall by merging rain-gauge and radar data is partly determined by the sampling design of the rain-gauge network. Optimising the locations of the rain-gauges may increase the accuracy of the predictions. Existing spatial sampling design optimisation methods are based on minimisation of the spatially averaged prediction error variance under the assumption of intrinsic stationarity. Over the past years, substantial progress has been made to deal with non-stationary spatial processes in kriging. Various well-documented geostatistical models relax the assumption of stationarity in the mean, while recent studies show the importance of considering non-stationarity in the variance for environmental processes occurring in complex landscapes. We optimised the sampling locations of rain-gauges using an extension of the Kriging with External Drift (KED) model for prediction of rainfall fields. The model incorporates both non-stationarity in the mean and in the variance, which are modelled as functions of external covariates such as radar imagery, distance to radar station and radar beam blockage. Spatial predictions are made repeatedly over time, each time recalibrating the model. The space-time averaged KED variance was minimised by Spatial Simulated Annealing (SSA). The methodology was tested using a case study predicting daily rainfall in the north of England for a one-year period. Results show that (i) the proposed non-stationary variance model outperforms the stationary variance model, and (ii) a small but significant decrease of the rainfall prediction error variance is obtained with the optimised rain-gauge network. In particular, it pays off to place rain-gauges at locations where the radar imagery is inaccurate, while keeping the distribution over the study area sufficiently uniform.
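Spatial Simulated Annealing itself is compact enough to sketch. Below, the space-time averaged KED variance is replaced by a crude stand-in objective (mean distance from prediction points to the nearest gauge), and the cooling schedule, perturbation size and gauge count are arbitrary illustrative choices.

```python
import math
import random

random.seed(3)

# Study area: unit square discretised into a 10 x 10 prediction grid.
pred = [(i / 9, j / 9) for i in range(10) for j in range(10)]

def mean_nearest_dist(gauges):
    """Proxy objective: average distance from prediction points to the
    nearest gauge (a stand-in for the mean kriging variance)."""
    return sum(min(math.dist(p, g) for g in gauges) for p in pred) / len(pred)

def ssa(n_gauges=5, iters=2000, t0=0.1):
    """Spatial Simulated Annealing: perturb one gauge location at a time,
    accept worse designs with a probability that shrinks as T cools."""
    gauges = [(random.random(), random.random()) for _ in range(n_gauges)]
    cost = mean_nearest_dist(gauges)
    for it in range(iters):
        t = t0 * (1 - it / iters)                # linear cooling schedule
        cand = list(gauges)
        k = random.randrange(n_gauges)           # move one gauge slightly
        cand[k] = (min(max(cand[k][0] + random.gauss(0, 0.1), 0), 1),
                   min(max(cand[k][1] + random.gauss(0, 0.1), 0), 1))
        c = mean_nearest_dist(cand)
        if c < cost or random.random() < math.exp(-(c - cost) / max(t, 1e-9)):
            gauges, cost = cand, c
    return gauges, cost

gauges, cost = ssa()
```

With the real KED variance as the objective, the same loop naturally places gauges where covariates (e.g. radar imagery) explain the rainfall field poorly, which matches the paper's finding.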
NASA Astrophysics Data System (ADS)
Ighravwe, D. E.; Oke, S. A.; Adebiyi, K. A.
2016-06-01
The growing interest in research on technicians' workloads is probably associated with the recent surge in competition, prompted by unprecedented technological development that triggers changes in customer tastes and preferences for industrial goods. In a quest for business improvement, this intense worldwide competition has stimulated theories and practical frameworks that seek to optimise workplace performance. In line with this drive, the present paper proposes an optimisation model that considers technicians' reliability as a complement to factory information on technicians' productivity and earned values, within a multi-objective modelling approach. Since technicians are expected to carry out both routine and stochastic maintenance work, these workloads are treated as constraints. The influence of training, fatigue and experiential knowledge of technicians on workload management was considered. These workloads were combined with maintenance policy in optimising reliability, productivity and earned values using the goal programming approach. Practical datasets were used to study the applicability of the proposed model in practice. The model was able to generate information that practising maintenance engineers can apply in making more informed decisions on the management of technicians.
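The goal programming structure of such a model can be illustrated with a one-variable weighted goal programme. Everything below (the reliability and productive-hours response functions, the targets and the weights) is invented; the point is the mechanism: penalise weighted under-attainment of each goal and minimise the total deviation.

```python
# One decision variable: weekly training hours t in [0, 10].
# Goal 1: reliability   r(t) = 0.8 + 0.015 * t   -> target 0.9
# Goal 2: productive hours p(t) = 40 - t         -> target 38
# (training raises reliability but consumes productive time)

def deviations(t):
    """Weighted sum of under-attainment deviations from the two goals."""
    r = 0.8 + 0.015 * t
    p = 40.0 - t
    under_r = max(0.0, 0.9 - r)      # reliability shortfall
    under_p = max(0.0, 38.0 - p)     # productive-hours shortfall
    return 50.0 * under_r + 1.0 * under_p   # weights favour reliability

# Coarse grid search stands in for the LP/GP solver a real model would use.
best_t = min((i * 0.1 for i in range(101)), key=deviations)
```

The optimum balances the two shortfalls at the point where trading more productive hours for reliability stops paying off under the chosen weights; changing the weights shifts that balance, which is exactly the policy lever goal programming exposes.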
Midbond basis functions for weakly bound complexes
NASA Astrophysics Data System (ADS)
Shaw, Robert A.; Hill, J. Grant
2018-06-01
Weakly bound systems present a difficult problem for conventional atom-centred basis sets due to large separations, necessitating the use of large, computationally expensive bases. This can be remedied by placing a small number of functions in the region between molecules in the complex. We present compact sets of optimised midbond functions for a range of complexes involving noble gases, alkali metals and small molecules for use in high accuracy coupled-cluster calculations, along with a more robust procedure for their optimisation. It is shown that excellent results are possible with double-zeta quality orbital basis sets when a few midbond functions are added, improving both the interaction energy and the equilibrium bond lengths of a series of noble gas dimers by 47% and 8%, respectively. When used in conjunction with explicitly correlated methods, near complete basis set limit accuracy is readily achievable at a fraction of the cost that using a large basis would entail. General purpose auxiliary sets are developed to allow explicitly correlated midbond function studies to be carried out, making it feasible to perform very high accuracy calculations on weakly bound complexes.
Optimised analytical models of the dielectric properties of biological tissue.
Salahuddin, Saqib; Porter, Emily; Krewer, Finn; O' Halloran, Martin
2017-05-01
The interaction of electromagnetic fields with the human body is quantified by the dielectric properties of biological tissues. These properties are incorporated into complex numerical simulations using parametric models such as Debye and Cole-Cole, for the computational investigation of electromagnetic wave propagation within the body. These parameters can be acquired through a variety of optimisation algorithms to achieve an accurate fit to measured data sets. A number of different optimisation techniques have been proposed, but these are often limited by the requirement for initial value estimations or by the large overall error (often up to several percentage points). In this work, a novel two-stage genetic algorithm proposed by the authors is applied to optimise the multi-pole Debye parameters for 54 types of human tissues. The performance of the two-stage genetic algorithm has been examined through a comparison with five other existing algorithms. The experimental results demonstrate that the two-stage genetic algorithm produces an accurate fit to a range of experimental data and efficiently outperforms all other optimisation algorithms under consideration. Accurate values of the three-pole Debye models for 54 types of human tissues, over 500 MHz to 20 GHz, are also presented for reference. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
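The multi-pole Debye model being fitted has a simple closed form: complex permittivity as a sum of relaxation poles plus a static-conductivity term. The sketch below evaluates it at one frequency; the pole amplitudes, relaxation times and conductivity are invented for illustration and are not the paper's tissue parameters.

```python
import math

EPS0 = 8.854187817e-12  # vacuum permittivity, F/m

def debye(freq_hz, eps_inf, poles, sigma_s):
    """Multi-pole Debye model:
    eps(w) = eps_inf + sum_k d_k / (1 + j*w*tau_k) + sigma / (j*w*eps0)."""
    w = 2 * math.pi * freq_hz
    eps = complex(eps_inf, 0.0)
    for delta, tau in poles:        # relaxation poles
        eps += delta / complex(1.0, w * tau)
    eps += sigma_s / complex(0.0, w * EPS0)   # static conductivity term
    return eps

# Hypothetical three-pole parameter set (illustrative, not tissue data):
# (amplitude, relaxation time in seconds) per pole, sigma in S/m.
poles = [(40.0, 8.0e-12), (5.0, 50.0e-12), (2.0, 1.0e-9)]
eps = debye(1e9, 4.0, poles, 0.5)   # complex permittivity at 1 GHz
```

An optimiser such as the paper's two-stage genetic algorithm would adjust eps_inf, the pole amplitudes and relaxation times, and sigma to minimise the misfit between this function and measured permittivity over the 500 MHz to 20 GHz band.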
NASA Astrophysics Data System (ADS)
Böing, F.; Murmann, A.; Pellinger, C.; Bruckmeier, A.; Kern, T.; Mongin, T.
2018-02-01
The expansion of capacities in the German transmission grid is a necessity for further integration of renewable energy sources into the electricity sector. In this paper, the grid optimisation measures ‘Overhead Line Monitoring’, ‘Power-to-Heat’ and ‘Demand Response in the Industry’ are evaluated and compared against conventional grid expansion for the year 2030. Initially, the methodical approach of the simulation model is presented and detailed descriptions of the grid model and the used grid data, which partly originates from open-source platforms, are provided. Further, this paper explains how ‘Curtailment’ and ‘Redispatch’ can be reduced by implementing grid optimisation measures and how the depreciation of economic costs can be determined considering construction costs. The developed simulations show that the conventional grid expansion is more efficient and implies more grid relieving effects than the evaluated grid optimisation measures.
Optimisation of dispersion parameters of Gaussian plume model for CO₂ dispersion.
Liu, Xiong; Godbole, Ajit; Lu, Cheng; Michal, Guillaume; Venton, Philip
2015-11-01
Carbon capture and storage (CCS) and enhanced oil recovery (EOR) projects entail the possibility of accidental release of carbon dioxide (CO2) into the atmosphere. To quantify the spread of CO2 following such release, the 'Gaussian' dispersion model is often used to estimate the resulting CO2 concentration levels in the surroundings. The Gaussian model enables quick estimates of the concentration levels. However, the traditionally recommended values of the 'dispersion parameters' in the Gaussian model may not be directly applicable to CO2 dispersion. This paper presents an optimisation technique to obtain the dispersion parameters in order to achieve a quick estimation of CO2 concentration levels in the atmosphere following CO2 blowouts. The optimised dispersion parameters enable the Gaussian model to produce quick estimates of CO2 concentration levels, precluding the necessity to set up and run much more complicated models. Computational fluid dynamics (CFD) models were employed to produce reference CO2 dispersion profiles in various atmospheric stability classes (ASCs), with different 'source strengths' and degrees of ground roughness. The performance of the CFD models was validated against the 'Kit Fox' field measurements, involving dispersion over a flat horizontal terrain with both low and high roughness regions. An optimisation model employing a genetic algorithm (GA) to determine the best dispersion parameters in the Gaussian plume model was set up. Optimum values of the dispersion parameters for different ASCs that can be used in the Gaussian plume model for predicting CO2 dispersion were obtained.
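For reference, the Gaussian plume concentration with ground reflection can be written down in a few lines. In this sketch the power-law coefficients for the dispersion parameters are illustrative placeholders, standing in for the values the genetic algorithm would optimise; they are not recommended values.

```python
import math

def sigma(x, a, b):
    """Power-law dispersion parameter sigma = a * x**b; the pairs (a, b)
    are exactly what the paper's GA re-optimises for CO2 releases."""
    return a * x ** b

def plume_conc(Q, u, x, y, z, H, sy=(0.08, 0.90), sz=(0.06, 0.92)):
    """Gaussian plume concentration with ground reflection.
    Q: source strength (kg/s), u: wind speed (m/s), x: downwind distance (m),
    y: crosswind offset (m), z: height (m), H: effective release height (m).
    The default (a, b) pairs are arbitrary placeholders for this sketch."""
    s_y, s_z = sigma(x, *sy), sigma(x, *sz)
    lateral = math.exp(-y * y / (2 * s_y * s_y))
    # reflection at the ground: image source at height -H
    vertical = (math.exp(-(z - H) ** 2 / (2 * s_z * s_z))
                + math.exp(-(z + H) ** 2 / (2 * s_z * s_z)))
    return Q / (2 * math.pi * u * s_y * s_z) * lateral * vertical
```

The model is symmetric about the plume centreline and decays monotonically with crosswind offset, which makes it cheap to scan over a receptor grid.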
NASA Astrophysics Data System (ADS)
Wang, Hui; Chen, Huansheng; Wu, Qizhong; Lin, Junmin; Chen, Xueshun; Xie, Xinwei; Wang, Rongrong; Tang, Xiao; Wang, Zifa
2017-08-01
The Global Nested Air Quality Prediction Modeling System (GNAQPMS) is the global version of the Nested Air Quality Prediction Modeling System (NAQPMS), which is a multi-scale chemical transport model used for air quality forecast and atmospheric environmental research. In this study, we present the porting and optimisation of GNAQPMS on a second-generation Intel Xeon Phi processor, codenamed Knights Landing
(KNL). Compared with the first-generation Xeon Phi coprocessor (codenamed Knights Corner, KNC), KNL has many new hardware features such as a bootable processor, high-performance in-package memory and ISA compatibility with Intel Xeon processors. In particular, we describe the five optimisations we applied to the key modules of GNAQPMS, including the CBM-Z gas-phase chemistry, advection, convection and wet deposition modules. These optimisations work well on both the KNL 7250 processor and the Intel Xeon E5-2697 V4 processor. They include (1) updating the pure Message Passing Interface (MPI) parallel mode to the hybrid parallel mode with MPI and OpenMP in the emission, advection, convection and gas-phase chemistry modules; (2) fully employing the 512 bit wide vector processing units (VPUs) on the KNL platform; (3) reducing unnecessary memory access to improve cache efficiency; (4) reducing the thread local storage (TLS) in the CBM-Z gas-phase chemistry module to improve its OpenMP performance; and (5) changing the global communication from writing/reading interface files to MPI functions to improve the performance and the parallel scalability. These optimisations greatly improved the GNAQPMS performance, and they also work well for the Intel Xeon Broadwell processor, specifically the E5-2697 v4. Compared with the baseline version of GNAQPMS, the optimised version was 3.51× faster on KNL and 2.77× faster on the CPU. Moreover, the optimised version ran at 26% lower average power on KNL than on the CPU. With the combined performance and energy improvement, the KNL platform was 37.5% more power-efficient than the CPU platform. The optimisations also improved parallel scalability: the code scaled to 40 CPU nodes and 30 KNL nodes, with parallel efficiencies of 70.4% and 42.2%, respectively.
Mutual information-based LPI optimisation for radar network
NASA Astrophysics Data System (ADS)
Shi, Chenguang; Zhou, Jianjiang; Wang, Fei; Chen, Jun
2015-07-01
A radar network can offer significant performance improvement for target detection and information extraction by employing spatial diversity. For a fixed number of radars, the achievable mutual information (MI) for estimating the target parameters may extend beyond a predefined threshold with full power transmission. In this paper, an effective low probability of intercept (LPI) optimisation algorithm is presented to improve the LPI performance of a radar network. Based on the radar network system model, we first adopt the Schleher intercept factor for the network as the optimisation metric for LPI performance. Then, a novel LPI optimisation algorithm is presented: for a predefined MI threshold, the Schleher intercept factor is minimised by optimising the transmission power allocation among the radars in the network, so that enhanced LPI performance is achieved. A genetic algorithm based on nonlinear programming (GA-NP) is employed to solve the resulting nonconvex and nonlinear optimisation problem. Simulations demonstrate that the proposed algorithm is valuable and effective in improving the LPI performance of the radar network.
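A heavily simplified sketch of the power-allocation idea: model each radar's MI contribution as an independent Gaussian-channel term and grow the allocation greedily, one power quantum at a time, on the radar with the best marginal MI per watt, until the threshold is met. This is an assumption-laden surrogate for the paper's GA-NP solver, not its actual formulation; minimising total radiated power is used here as a proxy for lowering the Schleher intercept factor, and the gains are invented.

```python
import math

def network_mi(p, g):
    """Total mutual information (bits) of the network under a simplified
    Gaussian-channel surrogate: sum of 0.5*log2(1 + p_i*g_i)."""
    return sum(0.5 * math.log2(1.0 + pi * gi) for pi, gi in zip(p, g))

def min_power_allocation(g, mi_threshold, quantum=1e-3):
    """Greedy surrogate allocation: add one power quantum at a time to the
    radar with the largest marginal MI gain per watt until the MI threshold
    is met. Less total radiated power means a lower intercept factor."""
    p = [0.0] * len(g)
    while network_mi(p, g) < mi_threshold:
        # marginal MI gain of radar i is proportional to g_i/(1 + p_i*g_i)
        i = max(range(len(g)), key=lambda k: g[k] / (1.0 + p[k] * g[k]))
        p[i] += quantum
    return p
```

With a low threshold the allocation concentrates on the highest-gain radar; as the threshold grows, the greedy rule spreads power so that marginal gains equalise, mimicking a waterfilling solution.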
NASA Astrophysics Data System (ADS)
Rüther, Heinz; Martine, Hagai M.; Mtalo, E. G.
This paper presents a novel approach to semiautomatic building extraction in informal settlement areas from aerial photographs. The proposed approach uses a strategy of delineating buildings by optimising their approximate building contour position. Approximate building contours are derived automatically by locating elevation blobs in digital surface models. Building extraction is then effected by means of the snakes algorithm and the dynamic programming optimisation technique. With dynamic programming, the building contour optimisation problem is realized through a discrete multistage process and solved by the "time-delayed" algorithm, as developed in this work. The proposed building extraction approach is a semiautomatic process, with user-controlled operations linking fully automated subprocesses. Inputs into the proposed building extraction system are ortho-images and digital surface models, the latter being generated through image matching techniques. Buildings are modeled as "lumps" or elevation blobs in digital surface models, which are derived by altimetric thresholding of digital surface models. Initial windows for building extraction are provided by projecting the elevation blobs' centre points onto an ortho-image. In the next step, approximate building contours are extracted from the ortho-image by region growing constrained by edges. Approximate building contours thus derived are inputs into the dynamic programming optimisation process in which final building contours are established. The proposed system is tested on two study areas: Marconi Beam in Cape Town, South Africa, and Manzese in Dar es Salaam, Tanzania. Sixty percent of buildings in the study areas have been extracted and verified, and it is concluded that the proposed approach contributes meaningfully to the extraction of buildings in moderately complex and crowded informal settlement areas.
Syed, Zeeshan; Moscucci, Mauro; Share, David; Gurm, Hitinder S
2015-01-01
Background Clinical tools to stratify patients for emergency coronary artery bypass graft (ECABG) after percutaneous coronary intervention (PCI) create the opportunity to selectively assign patients undergoing procedures to hospitals with and without onsite surgical facilities for dealing with potential complications while balancing load across providers. The goal of our study was to investigate the feasibility of a computational model directly optimised for cohort-level performance to predict ECABG in PCI patients for this application. Methods Blue Cross Blue Shield of Michigan Cardiovascular Consortium registry data with 69 pre-procedural and angiographic risk variables from 68 022 PCI procedures in 2004–2007 were used to develop a support vector machine (SVM) model for ECABG. The SVM model was optimised for the area under the receiver operating characteristic curve (AUROC) at the level of the training cohort and validated on 42 310 PCI procedures performed in 2008–2009. Results There were 87 cases of ECABG (0.21%) in the validation cohort. The SVM model achieved an AUROC of 0.81 (95% CI 0.76 to 0.86). Patients in the predicted top decile were at a significantly increased risk relative to the remaining patients (OR 9.74, 95% CI 6.39 to 14.85, p<0.001) for ECABG. The SVM model optimised for the AUROC on the training cohort significantly improved discrimination, net reclassification and calibration over logistic regression and traditional SVM classification optimised for univariate performance. Conclusions Computational risk stratification directly optimising cohort-level performance holds the potential of high levels of discrimination for ECABG following PCI. This approach has value in selectively referring PCI patients to hospitals with and without onsite surgery. PMID:26688738
Topology optimisation for natural convection problems
NASA Astrophysics Data System (ADS)
Alexandersen, Joe; Aage, Niels; Andreasen, Casper Schousboe; Sigmund, Ole
2014-12-01
This paper demonstrates the application of the density-based topology optimisation approach for the design of heat sinks and micropumps based on natural convection effects. The problems are modelled under the assumptions of steady-state laminar flow using the incompressible Navier-Stokes equations coupled to the convection-diffusion equation through the Boussinesq approximation. In order to facilitate topology optimisation, the Brinkman approach is taken to penalise velocities inside the solid domain and the effective thermal conductivity is interpolated in order to accommodate differences in thermal conductivity of the solid and fluid phases. The governing equations are discretised using stabilised finite elements and topology optimisation is performed for two different problems using discrete adjoint sensitivity analysis. The study shows that topology optimisation is a viable approach for designing heat sink geometries cooled by natural convection and micropumps powered by natural convection.
Modelling of auctioning mechanism for solar photovoltaic capacity
NASA Astrophysics Data System (ADS)
Poullikkas, Andreas
2016-10-01
In this work, a modified optimisation model for the integration of renewable energy sources for power-generation (RES-E) technologies in power-generation systems on a unit commitment basis is developed. The purpose of the modified optimisation procedure is to account for RES-E capacity auctions at different solar photovoltaic (PV) capacity electricity prices. The optimisation model developed uses a genetic algorithm (GA) technique for the calculation of the required RES-E levy (or green tax) in the electricity bills. Also, the procedure enables the estimation of the level of the adequate (or eligible) feed-in tariff to be offered to future RES-E systems, which do not participate in the capacity auctioning procedure. In order to demonstrate the applicability of the optimisation procedure developed, the case of PV capacity auctioning for commercial systems is examined. The results indicated that the required green tax, which is charged to electricity customers through their electricity bills in order to promote the use of RES-E technologies, is reduced with the reduction in the final auctioning price. This has a significant effect in reducing electricity bills.
Schutyser, M A I; Straatsma, J; Keijzer, P M; Verschueren, M; De Jong, P
2008-11-30
In the framework of a cooperative EU research project (MILQ-QC-TOOL), a web-based modelling tool (WebSim-MILQ) was developed for optimisation of thermal treatments in the dairy industry. The web-based tool enables optimisation of thermal treatments with respect to product safety, quality and costs. It can be applied to existing products and processes but also to reduce time to market for new products. Important aspects of the tool are its user-friendliness and its specifications customised to the needs of small dairy companies. To challenge the web-based tool, it was applied for optimisation of thermal treatments in 16 dairy companies producing yoghurt, fresh cream, chocolate milk and cheese. Optimisation with WebSim-MILQ resulted in concrete improvements with respect to the risk of microbial contamination, cheese yield, fouling and production costs. In this paper we illustrate the use of WebSim-MILQ for optimisation of a cheese milk pasteurisation process, where we could increase the cheese yield (1 extra cheese for each 100 cheeses produced from the same amount of milk) and reduce the risk of contamination of pasteurised cheese milk with thermoresistant streptococci from critical to negligible. In another case we demonstrate the advantage of changing from an indirect to a direct heating method for a UHT process, resulting in 80% less fouling while improving product quality and maintaining product safety.
Dynamic least-cost optimisation of wastewater system remedial works requirements.
Vojinovic, Z; Solomatine, D; Price, R K
2006-01-01
In recent years, there has been increasing concern about wastewater system failure and the identification of an optimal set of remedial works requirements. So far, several methodologies have been developed and applied in asset management activities by various water companies worldwide, but often with limited success. In order to fill the gap, several research projects have been undertaken exploring various algorithms to optimise remedial works requirements, but mostly for drinking water supply systems; very limited work has been carried out for wastewater assets. Some of the major deficiencies of commonly used methods can be found in one or more of the following aspects: inadequate representation of system complexity, incorporation of a dynamic model into the decision-making loop, the choice of an appropriate optimisation technique and experience in applying that technique. This paper is oriented towards resolving these issues and discusses a new approach for the optimisation of wastewater system remedial works requirements. It is proposed that the optimal problem search is performed by a global optimisation tool (with various random search algorithms) and the system performance is simulated by a hydrodynamic pipe network model. The work on assembling all required elements and developing appropriate interface protocols between the two tools, aimed at decoding the potential remedial solutions into the pipe network model and calculating the corresponding scenario costs, is currently underway.
Optimisation of the supercritical extraction of toxic elements in fish oil.
Hajeb, P; Jinap, S; Shakibazadeh, Sh; Afsah-Hejri, L; Mohebbi, G H; Zaidul, I S M
2014-01-01
This study aims to optimise the operating conditions for the supercritical fluid extraction (SFE) of toxic elements from fish oil. The SFE operating parameters of pressure, temperature, CO₂ flow rate and extraction time were optimised using a central composite design (CCD) of response surface methodology (RSM). High coefficients of determination (R²) (0.897-0.988) for the predicted response surface models confirmed a satisfactory adjustment of the polynomial regression models with the operation conditions. The results showed that the linear and quadratic terms of pressure and temperature were the most significant (p < 0.05) variables affecting the overall responses. The optimum conditions for the simultaneous elimination of toxic elements comprised a pressure of 61 MPa, a temperature of 39.8°C, a CO₂ flow rate of 3.7 ml min⁻¹ and an extraction time of 4 h. These optimised SFE conditions were able to produce fish oil with the contents of lead, cadmium, arsenic and mercury reduced by up to 98.3%, 96.1%, 94.9% and 93.7%, respectively. The fish oil extracted under the optimised SFE operating conditions was of good quality in terms of its fatty acid constituents.
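The CCD/RSM fitting step amounts to least-squares estimation of a second-order polynomial in the coded variables. The sketch below fits such a surface for two factors by solving the normal equations; the design points and coefficients are synthetic, and only two of the four operating parameters are shown for brevity.

```python
def design_row(p, t):
    """Second-order RSM model terms for coded factors p and t:
    intercept, linear, quadratic and interaction terms."""
    return [1.0, p, t, p * p, t * t, p * t]

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c and M[c][c]:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * v for a, v in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_rsm(points):
    """Least-squares fit of the quadratic response surface: build the
    normal equations X^T X beta = X^T y and solve for beta."""
    X = [design_row(p, t) for p, t, _ in points]
    y = [yv for _, _, yv in points]
    k = len(X[0])
    XtX = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(k)]
           for i in range(k)]
    Xty = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(k)]
    return solve(XtX, Xty)
```

A two-factor CCD (four factorial points at ±1, four axial points at ±1.414, one centre point) is the minimal design that makes all six coefficients estimable.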
Optimisation of the hybrid renewable energy system by HOMER, PSO and CPSO for the study area
NASA Astrophysics Data System (ADS)
Khare, Vikas; Nema, Savita; Baredar, Prashant
2017-04-01
This study is based on simulation and optimisation of the renewable energy system of the police control room at Sagar in central India. To analyse this hybrid system, the meteorological data of solar insolation and hourly wind speeds of Sagar in central India (longitude 78°45′ and latitude 23°50′) have been considered. The pattern of load consumption is studied and suitably modelled for optimisation of the hybrid energy system using HOMER software. The results are compared with those of the particle swarm optimisation and the chaotic particle swarm optimisation algorithms. The use of these two algorithms to optimise the hybrid system leads to a higher quality result with faster convergence. Based on the optimisation result, it has been found that replacing conventional energy sources by the solar-wind hybrid renewable energy system will be a feasible solution for the distribution of electric power as a stand-alone application at the police control room. This system is more environmentally friendly than a conventional diesel generator and reduces fuel costs by approximately 70-80%.
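A minimal particle swarm optimisation kernel of the kind compared in the study might look as follows; the cost function passed in would be a surrogate for the hybrid system's sizing cost, which is hypothetical here.

```python
import random

def pso(cost, bounds, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimiser. Each particle keeps a personal
    best; the swarm shares a global best that steers every velocity update."""
    rng = random.Random(seed)
    dim = len(bounds)
    xs = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]
    pcost = [cost(x) for x in xs]
    gi = min(range(n), key=lambda i: pcost[i])
    gbest, gcost = pbest[gi][:], pcost[gi]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                # inertia + cognitive pull (personal best) + social pull (global best)
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            c = cost(xs[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = xs[i][:], c
                if c < gcost:
                    gbest, gcost = xs[i][:], c
    return gbest, gcost
```

The chaotic variant in the study replaces the uniform random draws with a chaotic map (e.g. logistic) to improve exploration; the loop structure stays the same.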
Optimisation of an idealised primitive equation ocean model using stochastic parameterization
NASA Astrophysics Data System (ADS)
Cooper, Fenwick C.
2017-05-01
Using a simple parameterization, an idealised low resolution (biharmonic viscosity coefficient of 5 × 10¹² m⁴ s⁻¹, 128 × 128 grid) primitive equation baroclinic ocean gyre model is optimised to have a much more accurate climatological mean, variance and response to forcing, in all model variables, with respect to a high resolution (biharmonic viscosity coefficient of 8 × 10¹⁰ m⁴ s⁻¹, 512 × 512 grid) equivalent. For example, the change in the climatological mean due to a small change in the boundary conditions is more accurate in the model with parameterization. Both the low resolution and high resolution models are strongly chaotic. We also find that long timescales in the model temperature auto-correlation at depth are controlled by the vertical temperature diffusion parameter and time mean vertical advection and are caused by short timescale random forcing near the surface. This paper extends earlier work that considered a shallow water barotropic gyre. Here the analysis is extended to a more turbulent multi-layer primitive equation model that includes temperature as a prognostic variable. The parameterization consists of a constant forcing, applied to the velocity and temperature equations at each grid point, which is optimised to obtain a model with an accurate climatological mean, and a linear stochastic forcing, which is optimised to also obtain an accurate climatological variance and 5 day lag auto-covariance. A linear relaxation (nudging) is not used. Conservation of energy and momentum is discussed in an appendix.
Person-centred medicines optimisation policy in England: an agenda for research on polypharmacy.
Heaton, Janet; Britten, Nicky; Krska, Janet; Reeve, Joanne
2017-01-01
Aim To examine how patient perspectives and person-centred care values have been represented in documents on medicines optimisation policy in England. There has been growing support in England for a policy of medicines optimisation as a response to the rise of problematic polypharmacy. Conceptually, medicines optimisation differs from the medicines management model of prescribing in being based around the patient rather than processes and systems. This critical examination of current official and independent policy documents questions how central the patient is in them and whether relevant evidence has been utilised in their development. A documentary analysis of reports on medicines optimisation published by the Royal Pharmaceutical Society (RPS), The King's Fund and the National Institute for Health and Care Excellence since 2013. The analysis draws on a non-systematic review of research on patient experiences of using medicines. Findings The reports varied in their inclusion of patient perspectives and person-centred care values, and in the extent to which they drew on evidence from research on patients' experiences of polypharmacy and medicines use. In the RPS report, medicines optimisation is represented as being a 'step change' from medicines management, in contrast to the other documents which suggest that it is facilitated by the systems and processes that comprise the latter model. Only The King's Fund report considered evidence from qualitative studies of people's use of medicines. However, these studies are not without their limitations.
We suggest five ways in which researchers could improve this evidence base and so inform the development of future policy: by facilitating reviews of existing research; conducting studies of patient experiences of polypharmacy and multimorbidity; evaluating medicines optimisation interventions; making better use of relevant theories, concepts and tools; and improving patient and public involvement in research and in guideline development.
Power law-based local search in spider monkey optimisation for lower order system modelling
NASA Astrophysics Data System (ADS)
Sharma, Ajay; Sharma, Harish; Bhargava, Annapurna; Sharma, Nirmala
2017-01-01
The nature-inspired algorithms (NIAs) have shown efficiency in solving many complex real-world optimisation problems. The efficiency of NIAs is measured by their ability to find adequate results within a reasonable amount of time, rather than an ability to guarantee the optimal solution. This paper presents a solution for lower order system modelling using the spider monkey optimisation (SMO) algorithm to obtain a better lower order approximation that reflects almost all of the original higher order system's characteristics. Further, a local search strategy, namely power law-based local search, is incorporated with SMO. The proposed strategy is named power law-based local search in SMO (PLSMO). The efficiency, accuracy and reliability of the proposed algorithm are tested over 20 well-known benchmark functions. Then, the PLSMO algorithm is applied to solve the lower order system modelling problem.
Ławryńczuk, Maciej
2017-03-01
This paper details the development of a Model Predictive Control (MPC) algorithm for a boiler-turbine unit, which is a nonlinear multiple-input multiple-output process. The control objective is to follow set-point changes imposed on two state (output) variables and to satisfy constraints imposed on three inputs and one output. In order to obtain a computationally efficient control scheme, the state-space model is successively linearised on-line for the current operating point and used for prediction. In consequence, the future control policy is easily calculated from a quadratic optimisation problem. For state estimation the extended Kalman filter is used. It is demonstrated that the MPC strategy based on constant linear models does not work satisfactorily for the boiler-turbine unit, whereas the discussed algorithm with on-line successive model linearisation gives practically the same trajectories as the truly nonlinear MPC controller with nonlinear optimisation repeated at each sampling instant. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ghasemy Yaghin, R.; Fatemi Ghomi, S. M. T.; Torabi, S. A.
2015-10-01
In most markets, price differentiation mechanisms enable manufacturers to offer different prices for their products or services in different customer segments; however, perfect price discrimination is usually impossible for manufacturers. The importance of accounting for uncertainty in such environments spurs an interest in developing appropriate decision-making tools to deal with uncertain and ill-defined parameters in joint pricing and lot-sizing problems. This paper proposes a hybrid bi-objective credibility-based fuzzy optimisation model, including both quantitative and qualitative objectives, to cope with these issues. Taking marketing and lot-sizing decisions into account simultaneously, the model aims to maximise the total profit of the manufacturer and to improve the service aspects of retailing by setting different prices with arbitrage considerations. After applying appropriate strategies to defuzzify the original model, the resulting non-linear multi-objective crisp model is solved by a fuzzy goal programming method. An efficient stochastic search procedure using particle swarm optimisation is also proposed to solve the non-linear crisp model.
Optimisation of Fabric Reinforced Polymer Composites Using a Variant of Genetic Algorithm
NASA Astrophysics Data System (ADS)
Axinte, Andrei; Taranu, Nicolae; Bejan, Liliana; Hudisteanu, Iuliana
2017-12-01
Fabric reinforced polymeric composites are high performance materials with a rather complex fabric geometry. Therefore, modelling this type of material is a cumbersome task, especially when efficient use is targeted. One of the most important issues in its design process is the optimisation of the individual laminae and of the laminated structure as a whole. In order to do that, a parametric model of the material has been defined, emphasising the many geometric variables that need to be correlated in the complex process of optimisation. The input parameters involved in this work include widths or heights of the tows and the laminate stacking sequence, which are discrete variables, while the gaps between adjacent tows and the height of the neat matrix are continuous variables. This work is one of the first attempts at using a Genetic Algorithm (GA) to optimise the geometrical parameters of satin reinforced multi-layer composites. Given the mixed type of the input parameters involved, an original software tool called SOMGA (Satin Optimisation with a Modified Genetic Algorithm) has been conceived and utilised in this work. The main goal is to find the best possible solution to the problem of designing a composite material which is able to withstand a given set of external in-plane loads. The optimisation process has been performed using a fitness function which can analyse and compare the mechanical behaviour of different fabric reinforced composites, the results being correlated with the ultimate strains, which demonstrates the efficiency of the composite structure.
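The mixed discrete/continuous encoding that motivates SOMGA can be sketched with a plain GA: one discrete gene (a ply orientation drawn from a candidate set) and two continuous genes (gap and height), each with a type-aware mutation operator. The fitness function below is a toy stand-in, not the paper's laminate model, and all parameter choices are illustrative.

```python
import random

ANGLES = [0, 30, 45, 60, 90]   # candidate ply orientations (discrete gene)

def fitness(ind):
    """Toy stand-in for a laminate fitness: prefer a 45-degree ply and a
    gap/height pair near (0.4, 0.25). Lower is better."""
    angle, gap, height = ind
    return abs(angle - 45) / 45 + (gap - 0.4) ** 2 + (height - 0.25) ** 2

def mutate(ind, rng):
    angle, gap, height = ind
    if rng.random() < 0.3:                 # discrete gene: resample from the set
        angle = rng.choice(ANGLES)
    gap = min(1.0, max(0.0, gap + rng.gauss(0, 0.05)))       # continuous genes:
    height = min(1.0, max(0.0, height + rng.gauss(0, 0.05)))  # clamped Gaussian step
    return (angle, gap, height)

def ga(pop_size=40, gens=60, seed=2):
    """Truncation-selection GA over the mixed encoding."""
    rng = random.Random(seed)
    pop = [(rng.choice(ANGLES), rng.random(), rng.random())
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]       # keep the better half
        pop = elite + [mutate(rng.choice(elite), rng) for _ in elite]
    return min(pop, key=fitness)
```

The key design point is that each gene type gets its own mutation: resampling for the discrete stacking choice, a clamped Gaussian step for the continuous geometry.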
Ye, Haoyu; Ignatova, Svetlana; Peng, Aihua; Chen, Lijuan; Sutherland, Ian
2009-06-26
This paper builds on previous modelling research with short single layer columns to develop rapid methods for optimising high-performance counter-current chromatography at constant stationary phase retention. Benzyl alcohol and p-cresol are used as model compounds to rapidly optimise first flow and then rotational speed operating conditions at a preparative scale with long columns for a given phase system using a Dynamic Extractions Midi-DE centrifuge. The transfer to a high value extract such as the crude ethanol extract of the Chinese herbal medicine Millettia pachycarpa Benth. is then demonstrated and validated using the same phase system. The results show that constant stationary phase modelling of flow and speed with long multilayer columns works well as a cheap, quick and effective method of optimising operating conditions for the chosen phase system, hexane-ethyl acetate-methanol-water (1:0.8:1:0.6, v/v). Optimum conditions for resolution were a flow of 20 ml/min and a speed of 1200 rpm, but for throughput were 80 ml/min at the same speed. The results show that 80 ml/min gave the best throughputs for tephrosin (518 mg/h), pyranoisoflavone (47.2 mg/h) and dehydrodeguelin (10.4 mg/h), whereas for deguelin (100.5 mg/h) the best flow rate was 40 ml/min.
Optimisation of wire-cut EDM process parameter by Grey-based response surface methodology
NASA Astrophysics Data System (ADS)
Kumar, Amit; Soota, Tarun; Kumar, Jitendra
2018-03-01
Wire electric discharge machining (WEDM) is one of the advanced machining processes. Response surface methodology coupled with Grey relational analysis has been proposed and used to optimise the machining parameters of WEDM. A face centred cubic design is used for conducting experiments on high speed steel (HSS) M2 grade workpiece material. The regression model of significant factors such as pulse-on time, pulse-off time, peak current and wire feed is considered for optimising the response variables: material removal rate (MRR), surface roughness and kerf width. The optimal machining conditions were obtained using the Grey relational grade. ANOVA is applied to determine the significance of the input parameters in optimising the Grey relational grade.
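The Grey relational grade computation itself is short: normalise each response toward its ideal (larger-the-better for MRR, smaller-the-better for roughness and kerf width), then average the Grey relational coefficients using the customary distinguishing coefficient ζ = 0.5. The data in the check below are invented, not the paper's measurements.

```python
def normalise(col, larger_better):
    """Map a response column to [0, 1] with 1 at the ideal value."""
    lo, hi = min(col), max(col)
    return [(v - lo) / (hi - lo) if larger_better else (hi - v) / (hi - lo)
            for v in col]

def grey_grades(rows, larger_better, zeta=0.5):
    """Grey relational grades for alternatives (rows) over responses
    (columns). larger_better[j] says whether response j is maximised
    (e.g. MRR) or minimised (e.g. surface roughness, kerf width)."""
    cols = list(zip(*rows))
    norm = [normalise(list(c), lb) for c, lb in zip(cols, larger_better)]
    grades = []
    for i in range(len(rows)):
        # deviation from the ideal is (1 - norm); with delta_min = 0 and
        # delta_max = 1 the coefficient reduces to zeta / (deviation + zeta)
        coeffs = [zeta / (1.0 - norm[j][i] + zeta) for j in range(len(cols))]
        grades.append(sum(coeffs) / len(coeffs))
    return grades
```

An alternative that is best in every response gets a grade of exactly 1, so ranking experiments by grade picks the best compromise setting.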
Design of distributed PID-type dynamic matrix controller for fractional-order systems
NASA Astrophysics Data System (ADS)
Wang, Dawei; Zhang, Ridong
2018-01-01
With continuously increasing requirements for product quality and safe operation in industrial production, it is difficult to describe complex large-scale processes with integer-order differential equations. However, fractional differential equations may precisely represent the intrinsic characteristics of such systems. In this paper, a distributed PID-type dynamic matrix control method based on fractional-order systems is proposed. First, a high-order integer-order approximate model is obtained by utilising the Oustaloup method. Then, the step response model vectors of the plant are obtained on the basis of the high-order model, and the online optimisation for multivariable processes is transformed into the optimisation of each small-scale subsystem, regarded as a sub-plant controlled in the distributed framework. Furthermore, the PID operator is introduced into the performance index of each subsystem and the fractional-order PID-type dynamic matrix controller is designed based on a Nash optimisation strategy. The information exchange among the subsystems is realised through the distributed control structure so as to complete the optimisation task of the whole large-scale system. Finally, the control performance of the designed controller is verified by an example.
Optimisation of SIW bandpass filter with wide and sharp stopband using space mapping
NASA Astrophysics Data System (ADS)
Xu, Juan; Bi, Jun Jian; Li, Zhao Long; Chen, Ru shan
2016-12-01
This work presents a substrate integrated waveguide (SIW) bandpass filter with a wide and sharp stopband, which differs from filters with a direct input/output coupling structure. Higher-order modes in the SIW cavities are used to generate finite transmission zeros for improved stopband performance. The design of SIW filters requires full-wave electromagnetic simulation and extensive optimisation; if a full-wave solver is used throughout, the design process is very time consuming. The space mapping (SM) approach is employed to alleviate this problem: the coarse model, an equivalent-circuit representation of the structure, is optimised for fast computation, while the design is verified with an accurate fine-model full-wave simulation. A fourth-order filter with a passband of 12.0-12.5 GHz was fabricated on a single-layer Rogers RT/Duroid 5880 substrate. The return loss is better than 17.4 dB in the passband and the rejection is more than 40 dB in the stopband, which extends from 2 to 11 GHz and from 13.5 to 17.3 GHz, demonstrating wide stopband performance.
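The coarse/fine interplay of space mapping can be shown with a deliberately tiny stand-in: a one-parameter "fine" model playing the role of the full-wave simulation and a shifted "coarse" model playing the role of the equivalent circuit. The response functions and target are invented for illustration only:

```python
TARGET = 8.0          # desired response level (e.g. a frequency in GHz) - assumed

def fine(x):          # stand-in for the expensive full-wave simulation
    return 10.0 - x

def coarse(x):        # stand-in for the fast equivalent-circuit model
    return 10.4 - x   # same trend, shifted: the classic space-mapping setting

def argmin_scan(f, lo=0.0, hi=5.0, n=50001):
    """Brute-force 1-D minimiser; adequate for this toy problem."""
    return min((lo + (hi - lo) * i / (n - 1) for i in range(n)), key=f)

def aggressive_space_mapping(max_iter=10, tol=1e-6):
    # 1. Optimise the cheap coarse model once.
    x_star_c = argmin_scan(lambda u: abs(coarse(u) - TARGET))
    x = x_star_c
    for _ in range(max_iter):
        y = fine(x)                                    # one expensive evaluation
        if abs(y - TARGET) < tol:
            break
        # 2. Parameter extraction: coarse input reproducing the fine response.
        p = argmin_scan(lambda u: abs(coarse(u) - y))
        # 3. Aggressive space-mapping update (unit mapping assumed).
        x = x - (p - x_star_c)
    return x
```

Because the two models differ only by a shift, the aggressive update lands on the fine-model optimum after a single expensive evaluation, which is the efficiency argument behind SM-based filter design.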
Saikia, Sangeeta; Mahnot, Nikhil Kumar; Mahanta, Charu Lata
2015-03-15
Optimisation of the extraction of polyphenols from star fruit (Averrhoa carambola) pomace using response surface methodology was carried out. Two variables, temperature (°C) and ethanol concentration (%), each at five levels (-1.414, -1, 0, +1 and +1.414), were used to build the optimisation model with a central composite rotatable design, where ±1.414 are the axial points, ±1 the factorial points and 0 the centre point of the design. A temperature of 40 °C and an ethanol concentration of 65% were the optimised conditions for the response variables total phenolic content, ferric reducing antioxidant capacity and 2,2-diphenyl-1-picrylhydrazyl scavenging activity. The reverse phase-high pressure liquid chromatography chromatogram of the polyphenol extract showed eight phenolic acids and ascorbic acid. The extract was then encapsulated with maltodextrin (DE ⩽ 20) by spray and freeze drying at three different concentrations. The highest encapsulation efficiency was obtained in freeze-dried encapsulates (78-97%). The optimised model can be used for polyphenol extraction from star fruit pomace, and the microencapsulates can be incorporated into different food systems to enhance their antioxidant properties. Copyright © 2014 Elsevier Ltd. All rights reserved.
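Generating the run list for a two-factor central composite rotatable design (axial distance √2 ≈ 1.414) is mechanical and can be sketched directly; the centre values and step sizes used to decode the coded levels below are hypothetical, not the coding used in the study:

```python
import math

def ccd_two_factor(centers, deltas):
    """Two-factor central composite rotatable design (axial distance sqrt(2)),
    decoded from coded levels to actual factor settings."""
    a = math.sqrt(2.0)
    coded = ([(x, y) for x in (-1, 1) for y in (-1, 1)]   # 4 factorial points
             + [(-a, 0.0), (a, 0.0), (0.0, -a), (0.0, a)] # 4 axial points
             + [(0.0, 0.0)])                              # centre point
    (c1, c2), (d1, d2) = centers, deltas
    return [(c1 + u * d1, c2 + v * d2) for u, v in coded]

# Hypothetical coding: centre 40 degC / 65 % ethanol, steps 5 degC / 10 %.
runs = ccd_two_factor(centers=(40.0, 65.0), deltas=(5.0, 10.0))
```

A real design would also replicate the centre point several times to estimate pure error; a single centre run is kept here for brevity.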
Tsipa, Argyro; Koutinas, Michalis; Usaku, Chonlatep; Mantalaris, Athanasios
2018-05-02
Currently, design and optimisation of biotechnological bioprocesses is performed either through exhaustive experimentation and/or with empirical, unstructured growth kinetics models. Although elaborate systems biology approaches have recently been explored, mixed-substrate utilisation is predominantly ignored despite its significance in enhancing bioprocess performance. Herein, bioprocess optimisation for an industrially relevant bioremediation process involving a mixture of highly toxic substrates, m-xylene and toluene, was achieved through application of a novel experimental-modelling gene regulatory network - growth kinetic (GRN-GK) hybrid framework. The GRN model described the TOL and ortho-cleavage pathways in Pseudomonas putida mt-2 and captured the transcriptional kinetics of the promoters. The GRN model informed the formulation of the growth kinetics model, replacing empirical and unstructured Monod kinetics. The framework's predictive capability, and its potential as a systematic tool for optimal bioprocess design, was demonstrated by predicting bioprocess performance in agreement with experimental values, whereas four commonly used models deviated significantly from them. Notably, a fed-batch biodegradation process was designed and optimised through model-based control of TOL Pr promoter expression, resulting in 61% and 60% enhancements in pollutant removal and biomass formation, respectively, compared to the batch process. This provides strong evidence of model-based bioprocess optimisation at the gene level, rendering the GRN-GK framework a novel and applicable approach to optimal bioprocess design. Finally, model analysis using global sensitivity analysis (GSA) suggests an alternative, systematic approach to model-driven strain modification for synthetic biology and metabolic engineering applications. Copyright © 2018. Published by Elsevier Inc.
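For contrast, the empirical baseline that the GRN-GK framework replaces, unstructured Monod kinetics on a two-substrate mixture, is easy to sketch. The additive dual-substrate form, parameter values and initial conditions below are illustrative assumptions, not the identified model of the study:

```python
# Minimal dual-substrate Monod growth model, forward-Euler integration.
# All parameter values are illustrative (units: 1/h, g/L, g/g).

MU_MAX = (0.4, 0.3)   # max specific growth rate on substrate 1 / 2 (assumed)
KS     = (0.5, 0.8)   # half-saturation constants (assumed)
YIELD  = (0.6, 0.5)   # biomass yield per unit substrate (assumed)

def simulate(x0=0.05, s0=(1.0, 1.0), dt=0.01, t_end=24.0):
    """Return final biomass and substrate concentrations after t_end hours."""
    x, s = x0, list(s0)
    for _ in range(int(t_end / dt)):
        # specific growth rate contribution of each substrate (Monod term)
        rates = [mu * si / (k + si) for mu, k, si in zip(MU_MAX, KS, s)]
        growth = x * sum(rates)            # additive mixed-substrate utilisation
        for i, r in enumerate(rates):      # substrate consumption via yields
            s[i] = max(0.0, s[i] - dt * x * r / YIELD[i])
        x += dt * growth
    return x, s
```

The point of the paper is precisely that such unstructured kinetics miss the regulatory (promoter-level) dynamics that govern mixed-substrate utilisation.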
Biomass supply chain optimisation for Organosolv-based biorefineries.
Giarola, Sara; Patel, Mayank; Shah, Nilay
2014-05-01
This work aims to provide a Mixed Integer Linear Programming modelling framework to help define planning strategies for the development of sustainable biorefineries. The up-scaling of an Organosolv biorefinery was addressed via optimisation of the whole-system economics. Three real-world case studies were addressed to show the high-level flexibility and wide applicability of the tool in modelling different biomass types (i.e. forest fellings, cereal residues and energy crops) and supply strategies. Model outcomes reveal how supply chain optimisation techniques can help shed light on the development of sustainable biorefineries. Feedstock quality, quantity, and temporal and geographical availability are crucial in determining biorefinery location and the most cost-efficient way to supply feedstock to the plant. Storage costs are significant for biorefineries based on cereal stubble, while wood supply chains are dominated by pretreatment operation costs. Copyright © 2014 Elsevier Ltd. All rights reserved.
A Bayesian Approach for Sensor Optimisation in Impact Identification
Mallardo, Vincenzo; Sharif Khodaei, Zahra; Aliabadi, Ferri M. H.
2016-01-01
This paper presents a Bayesian approach for optimising the position of sensors aimed at impact identification in composite structures under operational conditions. The uncertainty in the sensor data is represented by statistical distributions of the recorded signals. An optimisation strategy based on a genetic algorithm is proposed to find the sensor combination best able to locate impacts on composite structures. A Bayesian objective function is adopted in the optimisation procedure as an indicator of the performance of meta-models developed for different sensor combinations to locate various impact events. To represent a real structure under operational load and to increase the reliability of the Structural Health Monitoring (SHM) system, the probability of malfunctioning sensors is included in the optimisation. The reliability and robustness of the procedure are tested with experimental and numerical examples. Finally, the proposed optimisation algorithm is applied to a composite stiffened panel for both uniform and non-uniform probabilities of impact occurrence. PMID:28774064
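A genetic algorithm over sensor combinations has the shape sketched below. The candidate grid, impact sites and distance-based surrogate score are stand-ins for the paper's Bayesian meta-model objective, which is not reproduced here:

```python
import random

# Toy GA selecting k sensor positions that maximise a surrogate
# "impact localisation" score (closeness to every impact site).

POSITIONS = [(x, y) for x in range(5) for y in range(5)]   # candidate grid
IMPACTS = [(1.0, 1.0), (3.5, 0.5), (2.0, 4.0), (4.0, 3.0)] # assumed impact sites

def fitness(combo):
    score = 0.0
    for ix, iy in IMPACTS:
        d = min((sx - ix) ** 2 + (sy - iy) ** 2 for sx, sy in combo)
        score += 1.0 / (1.0 + d)        # reward a nearby sensor per impact
    return score

def ga(k=3, pop_size=30, gens=40, seed=0):
    rng = random.Random(seed)
    pop = [tuple(rng.sample(POSITIONS, k)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]                   # elitist selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            genes = list(dict.fromkeys(a + b))         # union of parent genes
            combo = rng.sample(genes, k)               # crossover
            if rng.random() < 0.2:                     # mutation: swap a sensor
                new = rng.choice([p for p in POSITIONS if p not in combo])
                combo[rng.randrange(k)] = new
            children.append(tuple(combo))
        pop = elite + children
    return max(pop, key=fitness)
```

In the paper's setting, `fitness` would instead score each combination through the Bayesian meta-model, optionally weighted by sensor-failure probabilities.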
NASA Astrophysics Data System (ADS)
Kies, Alexander
2018-02-01
To meet European decarbonisation targets by 2050, the electrification of the transport sector is mandatory. Most electric vehicles rely on lithium-ion batteries, which have a higher energy/power density and longer life span than other practical batteries such as zinc-carbon batteries. Electric vehicles can thus provide energy storage to support the system integration of generation from highly variable renewable sources, such as wind and photovoltaics (PV). However, charging and discharging cause batteries to degrade progressively, reducing their capacity. In this study, we investigate the joint optimisation of arbitrage revenue and battery degradation for electric vehicle batteries in a simplified setting, where historical prices allow for market participation by battery electric vehicle owners. It is shown that the joint optimisation of both leads to greater gains than the sum of the two separate optimisation strategies, and that including battery degradation in the model avoids states of charge close to the maximum at times. It can be concluded that degradation is an important aspect to consider in power system models that incorporate any kind of lithium-ion battery storage.
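The joint objective can be made concrete with a brute-force toy: maximise arbitrage revenue minus a linear degradation penalty on energy throughput over a short price horizon. The prices, battery size and degradation cost below are invented for illustration and are far simpler than the study's model:

```python
from itertools import product

PRICES = [20.0, 15.0, 40.0, 45.0, 25.0, 50.0]  # euro/MWh, hourly, illustrative
CAP, POWER = 4.0, 1.0       # MWh capacity, MW charge/discharge limit (assumed)

def best_schedule(deg_cost):
    """Enumerate hourly actions (-1 discharge, 0 idle, +1 charge) and return
    (best profit, best action sequence). deg_cost penalises throughput,
    a crude linear stand-in for cycling-induced degradation."""
    best = (float("-inf"), None)
    for actions in product((-1.0, 0.0, 1.0), repeat=len(PRICES)):
        soc, profit, feasible = 0.0, 0.0, True
        for a, price in zip(actions, PRICES):
            soc += a * POWER                 # 1-hour steps
            if not 0.0 <= soc <= CAP:
                feasible = False
                break
            # revenue when discharging, cost when charging, plus wear penalty
            profit += (-a) * POWER * price - deg_cost * abs(a) * POWER
        if feasible and profit > best[0]:
            best = (profit, actions)
    return best
```

Raising the degradation cost can only shrink the optimal throughput, which is the qualitative effect (avoiding aggressive cycling) reported in the study.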
UAV path planning using artificial potential field method updated by optimal control theory
NASA Astrophysics Data System (ADS)
Chen, Yong-bo; Luo, Guan-chen; Mei, Yue-song; Yu, Jian-qiao; Su, Xiao-long
2016-04-01
The unmanned aerial vehicle (UAV) path planning problem is an important part of UAV mission planning. Starting from the artificial potential field (APF) path planning method, the problem is recast as a constrained optimisation problem by introducing an additional control force, and then translated into an unconstrained optimisation problem with the help of slack variables. The functional optimisation method is applied to transform this problem into an optimal control problem. The whole transformation process is deduced in detail, based on a discrete UAV dynamic model, and the path planning problem is then solved using optimal control methods. A path following process based on a six-degrees-of-freedom simulation model of a quadrotor helicopter is introduced to verify the practicability of the method. Finally, the simulation results show that the improved method is more effective for path planning: in the planning space, the calculated path is shorter and smoother than that of the traditional APF method. In addition, the improved method effectively resolves the dead-point problem.
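The baseline APF planner that the paper improves upon can be sketched as gradient descent on an attractive-plus-repulsive potential. The gains, influence radius and step size below are arbitrary illustrative choices, and this sketch exhibits exactly the dead-point weakness the paper addresses:

```python
def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, rho0=2.0, step=0.05):
    """One normalised descent step on the classic APF:
    attraction proportional to distance-to-goal, repulsion active within rho0."""
    x, y = pos
    gx, gy = goal
    fx, fy = k_att * (gx - x), k_att * (gy - y)          # attractive force
    for ox, oy in obstacles:
        dx, dy = x - ox, y - oy
        d = (dx * dx + dy * dy) ** 0.5
        if 1e-9 < d < rho0:                              # repulsion inside rho0
            mag = k_rep * (1.0 / d - 1.0 / rho0) / (d * d)
            fx += mag * dx / d
            fy += mag * dy / d
    norm = (fx * fx + fy * fy) ** 0.5 or 1.0
    return x + step * fx / norm, y + step * fy / norm

def plan(start, goal, obstacles, max_iter=2000, tol=0.1):
    path = [start]
    while len(path) < max_iter:
        nxt = apf_step(path[-1], goal, obstacles)
        path.append(nxt)
        if (nxt[0] - goal[0]) ** 2 + (nxt[1] - goal[1]) ** 2 < tol * tol:
            break
    return path
```

When an obstacle sits exactly between start and goal, attraction and repulsion can cancel (the dead point); the paper's optimal-control reformulation is one way to escape such equilibria.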
MacBean, Natasha; Maignan, Fabienne; Bacour, Cédric; Lewis, Philip; Peylin, Philippe; Guanter, Luis; Köhler, Philipp; Gómez-Dans, Jose; Disney, Mathias
2018-01-31
Accurate terrestrial biosphere model (TBM) simulations of gross carbon uptake (gross primary productivity - GPP) are essential for reliable future terrestrial carbon sink projections. However, uncertainties in TBM GPP estimates remain. Newly-available satellite-derived sun-induced chlorophyll fluorescence (SIF) data offer a promising direction for addressing this issue by constraining regional-to-global scale modelled GPP. Here, we use monthly 0.5° GOME-2 SIF data from 2007 to 2011 to optimise GPP parameters of the ORCHIDEE TBM. The optimisation reduces GPP magnitude across all vegetation types except C4 plants. Global mean annual GPP therefore decreases from 194 ± 57 PgC yr⁻¹ to 166 ± 10 PgC yr⁻¹, bringing the model more in line with an up-scaled flux tower estimate of 133 PgC yr⁻¹. Strongest reductions in GPP are seen in boreal forests: the result is a shift in global GPP distribution, with a ~50% increase in the tropical to boreal productivity ratio. The optimisation resulted in a greater reduction in GPP than similar ORCHIDEE parameter optimisation studies using satellite-derived NDVI from MODIS and eddy covariance measurements of net CO2 fluxes from the FLUXNET network. Our study shows that SIF data will be instrumental in constraining TBM GPP estimates, with a consequent improvement in global carbon cycle projections.
NASA Astrophysics Data System (ADS)
Nickless, A.; Rayner, P. J.; Erni, B.; Scholes, R. J.
2018-05-01
The design of an optimal network of atmospheric monitoring stations for observing carbon dioxide (CO2) concentrations can be obtained by applying an optimisation algorithm to a cost function based on minimising the posterior uncertainty in the CO2 fluxes obtained from a Bayesian inverse modelling solution. Two candidate optimisation methods were assessed: an evolutionary algorithm, the genetic algorithm (GA), and a deterministic algorithm, the incremental optimisation (IO) routine. This paper assessed the ability of the IO routine, in comparison with the more computationally demanding GA routine, to optimise the placement of a five-member network of CO2 monitoring sites in South Africa. The comparison considered the reduction in uncertainty of the overall flux estimate, the spatial similarity of solutions, and computational requirements. Although the IO routine failed to find the solution with the global maximum uncertainty reduction, its solution had only fractionally lower uncertainty reduction than the GA's, at only a quarter of the computational cost of the smallest GA configuration specified. The GA solution set showed more inconsistency when the number of iterations or the population size was small, and more so for a complex prior flux covariance matrix. When the GA completed with a sub-optimal solution, these solutions were similar in fitness to the best available solution. Two additional scenarios were considered, with the objective of creating circumstances under which the GA might outperform the IO. The first scenario considered an established network, where the optimisation was required to add five stations to an existing five-member network. In the second scenario the optimisation was based only on the uncertainty reduction within a subregion of the domain. The GA was able to find a better solution than the IO under both scenarios, but with only a marginal improvement in the uncertainty reduction.
These results suggest that, for the network design problem, resources would be better spent on improving the prior estimates of the flux uncertainties than on running a complex evolutionary optimisation algorithm. The authors recommend that, if time and computational resources allow, multiple optimisation techniques be used as part of a comprehensive suite of sensitivity tests when performing such an optimisation exercise. This provides a selection of best solutions which can be ranked on their utility and practicality.
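The incremental (greedy) routine adds one station at a time, each maximising the marginal uncertainty reduction. The sketch below uses an invented per-station, per-region reduction table rather than the Bayesian posterior covariance of the study, but the selection logic is the same; because this coverage-style score is monotone submodular, the greedy solution is provably within a factor (1 - 1/e) of the exhaustive optimum:

```python
import itertools

# Fraction of remaining flux uncertainty each candidate station removes in
# each of four regions (illustrative values, not inversion output).
R = {
    "A": [0.8, 0.1, 0.0, 0.0],
    "B": [0.1, 0.7, 0.1, 0.0],
    "C": [0.0, 0.1, 0.6, 0.2],
    "D": [0.0, 0.0, 0.2, 0.7],
    "E": [0.3, 0.3, 0.3, 0.3],
}

def reduction(stations):
    """Total uncertainty reduction of a station set (diminishing returns)."""
    total = 0.0
    for j in range(4):
        remaining = 1.0
        for s in stations:
            remaining *= 1.0 - R[s][j]
        total += 1.0 - remaining
    return total

def incremental(k):
    """IO routine: greedily add the station with the best marginal gain."""
    chosen = []
    for _ in range(k):
        rest = [s for s in R if s not in chosen]
        chosen.append(max(rest, key=lambda s: reduction(chosen + [s])))
    return chosen

def exhaustive(k):
    """Reference optimum by enumerating all k-subsets."""
    return max(itertools.combinations(R, k), key=reduction)
```

This mirrors the paper's finding: the greedy IO solution is cheap to compute and close to, though not guaranteed equal to, the global optimum.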
The use of surrogates for an optimal management of coupled groundwater-agriculture hydrosystems
NASA Astrophysics Data System (ADS)
Grundmann, J.; Schütze, N.; Brettschneider, M.; Schmitz, G. H.; Lennartz, F.
2012-04-01
To ensure optimal sustainable water resources management in arid coastal environments, we develop a new simulation-based integrated water management system. It aims at achieving the best possible solutions for groundwater withdrawals for agricultural and municipal water use, including saline water management, together with a substantial increase in water use efficiency in irrigated agriculture. To achieve robust and fast operation of the management system with respect to water quality and quantity, we develop appropriate surrogate models by combining physically based process modelling with methods of artificial intelligence. We use an artificial neural network, trained on a scenario database generated by a numerical density-dependent groundwater flow model, to model the aquifer response, including the seawater interface. To simulate the behaviour of highly productive agricultural farms, crop-water production functions are generated by means of soil-vegetation-atmosphere-transport (SVAT) models adapted to the regional climate conditions, and a novel evolutionary optimisation algorithm is used for optimal irrigation scheduling and control. We apply both surrogates within a simulation-based optimisation environment using the characteristics of the south Batinah region in the Sultanate of Oman, which is affected by saltwater intrusion into the coastal aquifer due to excessive groundwater withdrawal for irrigated agriculture. We demonstrate the effectiveness of our methodology for the evaluation and optimisation of different irrigation practices, cropping patterns and resulting abstraction scenarios. Because of contradicting objectives, such as profit-oriented agriculture vs. aquifer sustainability, a multi-criteria optimisation is performed.
Non-equilibrium diffusion combustion of a fuel droplet
NASA Astrophysics Data System (ADS)
Tyurenkova, Veronika V.
2012-06-01
A mathematical model for the non-equilibrium combustion of droplets in rocket engines is developed. The model allows the divergence between the equilibrium and non-equilibrium combustion rates to be determined, and a criterion for the deviation of droplet combustion from equilibrium is introduced: the deviation grows with decreasing droplet radius, accommodation coefficient and temperature, and decreases with decreasing diffusion coefficient. The droplet burning time increases substantially under non-equilibrium conditions. Comparison of theoretical and experimental data shows that, to obtain an adequate solution for small droplets, it is necessary to use the non-equilibrium model.
Koo, B K; O'Connell, P E
2006-04-01
The site-specific land use optimisation methodology suggested by the authors in the first part of this two-part paper has been applied in a case study of the River Kennet catchment at Marlborough, Wiltshire, UK. The Marlborough catchment (143 km²) is an agriculture-dominated rural area over a deep chalk aquifer that is vulnerable to nitrate pollution from agricultural diffuse sources. For evaluation purposes, the catchment was discretised into a network of 1 km × 1 km grid cells. For each of the arable-land grid cells, seven land use alternatives (four arable-land alternatives and three grassland alternatives) were evaluated for their environmental and economic potential. For environmental evaluation, nitrate leaching rates of the land use alternatives were estimated using SHETRAN simulations, and groundwater pollution potential was evaluated using the DRASTIC index. For economic evaluation, economic gross margins were estimated using a simple agronomic model based on nitrogen response functions and agricultural land classification grades. To test whether site-specific optimisation is efficient at the catchment scale, land use optimisation was carried out for four optimisation schemes (i.e. four sets of criterion weights). Four land use scenarios were thus generated, and the site-specifically optimised land use scenario was evaluated as the best compromise between long-term nitrate pollution and agronomy at the catchment scale.
Modelling the protocol stack in NCS with deterministic and stochastic petri net
NASA Astrophysics Data System (ADS)
Hui, Chen; Chunjie, Zhou; Weifeng, Zhu
2011-06-01
The protocol stack is the basis of networked control systems (NCS). Full or partial reconfiguration of the protocol stack offers both optimised communication services and improved system performance. At present, field testing is unrealistic for determining the performance of a reconfigurable protocol stack, and the Petri net formal description technique offers the best combination of intuitive representation, tool support and analytical capability. Traditionally, separation between the different layers of the OSI model has been common practice; however, such a layered modelling and analysis framework lacks global optimisation for protocol reconfiguration. In this article, we propose a general modelling and analysis framework for NCS based on the cross-layer concept, which establishes an efficient system scheduling model by abstracting the time constraints, task interrelations, and processor and bus sub-models from the upper and lower layers (application, data link and physical). Cross-layer design helps overcome the lack of global optimisation by sharing information between protocol layers. To illustrate the framework, we take the controller area network (CAN) as a case study. The simulation results of the deterministic and stochastic Petri net (DSPN) model can help us adjust the message scheduling scheme and obtain better system performance.
Maillot, Matthieu; Vieux, Florent; Delaere, Fabien; Lluch, Anne; Darmon, Nicole
2017-01-01
To explore the dietary changes needed to achieve nutritional adequacy across income levels at constant energy and diet cost. Individual diet modelling was used to design iso-caloric, nutritionally adequate optimised diets for each observed diet in a sample of adult normo-reporters aged ≥20 years (n = 1,719) from the Individual and National Dietary Survey (INCA2), 2006-2007. Diet cost was estimated from mean national food prices (2006-2007). A first set of free-cost models explored the impact of optimisation on the variation of diet cost. A second set of iso-cost models explored the dietary changes induced by the optimisation with cost set equal to the observed one. Analyses of dietary changes were conducted by income quintiles, adjusting for energy intake, sociodemographic and socioeconomic variables, and smoking status. The cost of observed diets increased with increasing income quintiles. In free-cost models, the optimisation increased diet cost on average (+0.22 ± 1.03 euros/d) and within each income quintile, with no significant difference between quintiles, but with systematic increases for observed costs lower than 3.85 euros/d. In iso-cost models, it was possible to design nutritionally adequate diets whatever the initial observed cost. On average, the optimisation at iso-cost increased fruits and vegetables (+171 g/day), starchy foods (+121 g/d), water and beverages (+91 g/d), and dairy products (+20 g/d), and decreased the other food groups (e.g. mixed dishes and salted snacks), leading to increased total diet weight (+300 g/d). Those changes were mostly similar across income quintiles, but lower-income individuals needed to introduce significantly more fruit and vegetables than higher-income ones. In France, the dietary changes needed to reach nutritional adequacy without increasing cost are similar regardless of income, but may be more difficult to implement when the budget for food is lower than 3.85 euros/d.
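Cost-constrained nutritional adequacy is, at heart, a constrained optimisation over food quantities. The sketch below is a crude greedy heuristic, not the linear-programming individual diet models used in the study, and its food groups, prices, nutrient contents and targets are all invented for illustration:

```python
# Greedy sketch: repeatedly buy a portion of the food giving the most
# still-needed nutrients per euro, until all targets are met.
# All numbers below are assumed, not INCA2 or French price data.

FOODS = {  # name: (cost euro per 100 g, nutrients per 100 g)
    "fruit_veg":  (0.30, {"fibre": 2.5, "vitC": 30.0, "protein": 1.0}),
    "starches":   (0.15, {"fibre": 3.0, "vitC": 0.0,  "protein": 3.0}),
    "dairy":      (0.40, {"fibre": 0.0, "vitC": 1.0,  "protein": 3.5}),
    "mixed_dish": (0.80, {"fibre": 1.0, "vitC": 2.0,  "protein": 6.0}),
}
TARGETS = {"fibre": 30.0, "vitC": 110.0, "protein": 50.0}  # daily, assumed

def optimise(max_iter=1000):
    diet = {name: 0.0 for name in FOODS}          # portions of 100 g
    for _ in range(max_iter):
        gaps = {n: TARGETS[n] - sum(diet[f] * FOODS[f][1][n] for f in FOODS)
                for n in TARGETS}
        if all(g <= 1e-9 for g in gaps.values()):
            break                                 # nutritionally adequate
        def value(f):                             # useful nutrients per euro
            cost, nutr = FOODS[f]
            return sum(min(nutr[n], max(gaps[n], 0.0)) for n in TARGETS) / cost
        diet[max(FOODS, key=value)] += 1.0
    cost = sum(diet[f] * FOODS[f][0] for f in FOODS)
    return diet, cost
```

A proper treatment, as in the study, would solve this as a linear programme with an iso-cost constraint and a distance-to-observed-diet objective; the greedy version only illustrates why cheap nutrient-dense groups (fruit, vegetables, starches) dominate low-budget adequate diets.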
Børretzen, P; Salbu, B
2000-10-30
To assess the impact of radionuclides entering the marine environment from dumped nuclear waste, information on the physico-chemical forms of radionuclides and their mobility in seawater-sediment systems is essential. Due to interactions with sediment components, sediments may act as a sink, reducing the mobility of radionuclides in seawater. Due to remobilisation, however, contaminated sediments may also act as a potential source of radionuclides to the water phase. In the present work, time-dependent interactions of low molecular mass (LMM, i.e. species < 10 kDa) radionuclides with sediments from the Stepovogo Fjord, Novaya Zemlya and their influence on the distribution coefficients (Kd values) have been studied in tracer experiments using 109Cd2+ and 60Co2+ as gamma tracers. Sorption of the LMM tracers occurred rapidly and the estimated equilibrium Kd(eq)-values for 109Cd and 60Co were 500 and 20000 ml/g, respectively. Remobilisation of 109Cd and 60Co from contaminated sediment fractions as a function of contact time was studied using sequential extraction procedures. Due to redistribution, the reversibly bound fraction of the gamma tracers decreased with time, while the irreversibly (or slowly reversibly) associated fraction of the gamma tracers increased. Two different three-compartment models, one consecutive and one parallel, were applied to describe the time-dependent interaction of the LMM tracers with operationally defined reversible and irreversible (or slowly reversible) sediment fractions. The interactions between these fractions were described using first order differential equations. By fitting the models to the experimental data, apparent rate constants were obtained using numerical optimisation software. 
The model optimisations showed that the interactions of LMM 60Co were well described by the consecutive model, while the parallel model was more suitable for describing the interactions of LMM 109Cd with the sediments, when the sums of squared residuals were compared. The rate of sorption into the irreversibly (or slowly reversibly) associated fraction was greater than the rate of desorption from the reversibly bound fraction (i.e. k3 > k2) for both radionuclides. Thus, the Novaya Zemlya sediments are expected to act as a sink for the radionuclides under oxic conditions, and transport to the water phase should mainly be attributed to resuspended particles.
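The consecutive three-compartment model reduces to a small linear ODE system, dissolved tracer to reversibly bound to irreversibly (or slowly reversibly) bound, integrated here with forward Euler. The rate constants are illustrative placeholders, not the fitted values of the study:

```python
def consecutive_model(k1=0.5, k2=0.05, k3=0.1, dt=0.01, t_end=100.0):
    """Consecutive model: water (w) <-> reversibly bound (rev) -> irreversibly
    bound (irr), with first-order rate constants (units 1/day, assumed).
    Returns the trajectory of (w, rev, irr) fractions."""
    w, rev, irr = 1.0, 0.0, 0.0
    hist = []
    for _ in range(int(t_end / dt)):
        dw   = -k1 * w + k2 * rev          # sorption minus desorption
        drev =  k1 * w - (k2 + k3) * rev   # gains from water, loses both ways
        dirr =  k3 * rev                   # irreversible uptake
        w   += dt * dw
        rev += dt * drev
        irr += dt * dirr
        hist.append((w, rev, irr))
    return hist
```

With k3 > k2, as found for both tracers, the irreversible pool eventually absorbs nearly all activity, which is the "sediments as a sink" conclusion in miniature. In the study the analogous rate constants were obtained by fitting such models to the sequential-extraction data.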
The development and optimisation of 3D black-blood R2* mapping of the carotid artery wall.
Yuan, Jianmin; Graves, Martin J; Patterson, Andrew J; Priest, Andrew N; Ruetten, Pascal P R; Usman, Ammara; Gillard, Jonathan H
2017-12-01
To develop and optimise a 3D black-blood R2* mapping sequence for imaging the carotid artery wall, using optimal blood suppression and k-space view ordering. Two different blood suppression preparation methods were used: Delay Alternating with Nutation for Tailored Excitation (DANTE) and improved Motion Sensitive Driven Equilibrium (iMSDE), each combined with a three-dimensional (3D) multi-echo Fast Spoiled GRadient echo (ME-FSPGR) readout. Three different k-space view-order designs were investigated: Radial Fan-beam Encoding Ordering (RFEO), Distance-Determined Encoding Ordering (DDEO) and Centric Phase Encoding Order (CPEO). The sequences were evaluated through Bloch simulation and in a cohort of twenty volunteers. The vessel wall Signal-to-Noise Ratio (SNR), Contrast-to-Noise Ratio (CNR) and R2*, and the sternocleidomastoid muscle R2*, were measured and compared. Different numbers of acquisitions-per-shot (APS) were evaluated to further optimise the effectiveness of blood suppression. All sequences gave R2* measurements in the sternocleidomastoid muscle comparable to a conventional, i.e. non-blood-suppressed, sequence. Both Bloch simulations and volunteer data showed that DANTE has a higher signal intensity and results in a higher image SNR than iMSDE. Blood suppression efficiency was not significantly different between k-space view orders. Smaller APS achieved better blood suppression. The use of blood-suppression preparation methods does not affect the measurement of R2*: a DANTE-prepared ME-FSPGR sequence with a small number of acquisitions-per-shot can provide high-quality black-blood R2* measurements of the carotid vessel wall. Copyright © 2017 Elsevier Inc. All rights reserved.
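R2* mapping itself is a mono-exponential fit of the multi-echo signal, S(TE) = S0·exp(-TE·R2*), often done log-linearly per voxel. The echo times and synthetic noiseless signal below are illustrative, not the protocol of the study:

```python
import math

def fit_r2star(te_ms, signals):
    """Log-linear least-squares fit of S(TE) = S0 * exp(-TE * R2*).
    Returns (S0, R2*) with R2* in 1/ms. Assumes positive signals; with
    noisy data a weighted or nonlinear fit would be preferable."""
    ys = [math.log(s) for s in signals]
    n = len(te_ms)
    mx, my = sum(te_ms) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(te_ms, ys))
             / sum((x - mx) ** 2 for x in te_ms))
    return math.exp(my - slope * mx), -slope

# Synthetic multi-echo voxel: S0 = 100, true R2* = 0.04 /ms (T2* = 25 ms).
tes = [2.0, 6.0, 10.0, 14.0, 18.0, 22.0]
sig = [100.0 * math.exp(-0.04 * te) for te in tes]
s0, r2s = fit_r2star(tes, sig)
```

The point of the paper is that the DANTE/iMSDE preparations change SNR but leave this fitted decay rate unbiased.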
Load-sensitive dynamic workflow re-orchestration and optimisation for faster patient healthcare.
Meli, Christopher L; Khalil, Ibrahim; Tari, Zahir
2014-01-01
Hospital waiting times are considerably long, with no sign of reducing any time soon. A number of factors, including population growth, the ageing population and a lack of new infrastructure, are expected to further exacerbate waiting times in the near future. In this work, we show how healthcare services can be modelled as queueing nodes within healthcare service workflows, such that these workflows can be optimised during execution to reduce patient waiting times. Services such as X-ray, computed tomography and magnetic resonance imaging often form queues; thus, by taking into account the waiting times of each service, the workflow can be re-orchestrated and optimised. Experimental results indicate that average waiting time reductions are achievable by optimising workflows using dynamic re-orchestration. Crown Copyright © 2013. Published by Elsevier Ireland Ltd. All rights reserved.
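A minimal version of "services as queueing nodes" uses the M/M/1 mean sojourn time W = 1/(μ - λ) to steer a patient to the least-loaded service. The arrival and service rates below are invented, and real workflows would need richer queueing models, but the re-orchestration idea is the same:

```python
def mm1_wait(arrival_rate, service_rate):
    """Mean time in system for an M/M/1 queue: W = 1 / (mu - lambda)."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: utilisation >= 1")
    return 1.0 / (service_rate - arrival_rate)

def route_choice(services):
    """Pick the service with the smallest expected sojourn time."""
    return min(services, key=lambda name: mm1_wait(*services[name]))

SERVICES = {  # (patients/hour arriving, patients/hour served) - illustrative
    "xray": (5.0, 6.0),
    "ct":   (2.0, 4.0),
    "mri":  (1.0, 1.5),
}
```

Here the workflow engine would re-orchestrate an imaging step towards whichever modality currently minimises W, clinical constraints permitting.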
NASA Astrophysics Data System (ADS)
du Feu, R. J.; Funke, S. W.; Kramer, S. C.; Hill, J.; Piggott, M. D.
2016-12-01
The installation of tidal turbines in the ocean will inevitably affect the environment around them; however, due to the relative infancy of this sector, the extent and severity of such effects are unknown. The layout of an array of turbines is an important factor in determining not only the array's final yield but also its influence on regional hydrodynamics, which in turn could affect, for example, sediment transport or habitat suitability. The two potentially competing objectives of extracting energy from the tidal current, and of limiting any environmental impact consequent on influencing that current, are investigated here. This relationship is posed as a multi-objective optimisation problem. OpenTidalFarm, an array layout optimisation tool, and MaxEnt, habitat suitability modelling software, are used to evaluate scenarios off the coast of the UK. MaxEnt estimates the likelihood of finding a species in a given location based upon environmental input data and presence data for the species. Environmental features known to impact habitat, specifically those affected by the presence of an array, such as bed shear stress, are chosen as inputs. MaxEnt then uses a maximum-entropy modelling approach to estimate population distribution across the modelled area. OpenTidalFarm is used to maximise the power generated by an array, or multiple arrays, by adjusting the position and number of turbines within them. It uses a 2D shallow-water model with turbine arrays represented as adjustable friction fields, and can also optimise user-defined functionals that can be expressed mathematically. This work uses two functionals: the power extracted by the array, and the suitability of habitat as predicted by MaxEnt. A gradient-based local optimisation is used to adjust the array layout at each iteration. This work presents arrays that are optimised for both yield and the viability of habitat for chosen species.
In each scenario studied, a range of array formations is found, expressing varying preferences for either functional. Further analysis then allows identification of the trade-offs between the two key societal objectives of energy production and conservation, producing information valuable to stakeholders and policymakers when making decisions on array design.
Dynamic non-equilibrium wall-modeling for large eddy simulation at high Reynolds numbers
NASA Astrophysics Data System (ADS)
Kawai, Soshi; Larsson, Johan
2013-01-01
A dynamic non-equilibrium wall-model for large-eddy simulation at arbitrarily high Reynolds numbers is proposed and validated on equilibrium boundary layers and a non-equilibrium shock/boundary-layer interaction problem. The proposed method builds on the prior non-equilibrium wall-models of Balaras et al. [AIAA J. 34, 1111-1119 (1996)], 10.2514/3.13200 and Wang and Moin [Phys. Fluids 14, 2043-2051 (2002)], 10.1063/1.1476668: the failure of these wall-models to accurately predict the skin friction in equilibrium boundary layers is shown and analyzed, and an improved wall-model that solves this issue is proposed. The improvement stems directly from reasoning about how the turbulence length scale changes with wall distance in the inertial sublayer, the grid resolution, and the resolution-characteristics of numerical methods. The proposed model yields accurate resolved turbulence, both in terms of structure and statistics for both the equilibrium and non-equilibrium flows without the use of ad hoc corrections. Crucially, the model accurately predicts the skin friction, something that existing non-equilibrium wall-models fail to do robustly.
Vehicle trajectory linearisation to enable efficient optimisation of the constant speed racing line
NASA Astrophysics Data System (ADS)
Timings, Julian P.; Cole, David J.
2012-06-01
A driver model is presented that is capable of optimising the trajectory of a simple nonlinear dynamic vehicle, at constant forward speed, so that progression along a predefined track is maximised as a function of time. In doing so, the model is able to continually operate the vehicle at its lateral-handling limit, maximising vehicle performance. The technique forms part of the solution to the motor racing objective of minimising lap time. The new approach to formulating the minimum-lap-time problem is motivated by the need for a more computationally efficient and robust tool-set for understanding on-the-limit driving behaviour. This has been achieved through set-point-dependent linearisation of the vehicle model and coupling of the vehicle-track system using an intrinsic coordinate description. Through this, the geometric vehicle trajectory is linearised relative to the track reference, leading to a new path optimisation algorithm that can be posed as a computationally efficient convex quadratic programming problem.
Automation of route identification and optimisation based on data-mining and chemical intuition.
Lapkin, A A; Heer, P K; Jacob, P-M; Hutchby, M; Cunningham, W; Bull, S D; Davidson, M G
2017-09-21
Data-mining of Reaxys and network analysis of the combined literature and in-house reaction sets were used to generate multiple possible reaction routes to convert a bio-waste feedstock, limonene, into a pharmaceutical API, paracetamol. The network analysis of the data provides a rich knowledge base for generation of the initial reaction screening and development programme. Based on the literature and the in-house data, an overall flowsheet for the conversion of limonene to paracetamol was proposed. Each individual reaction-separation step in the sequence was simulated as a combination of continuous flow and batch steps. The linear model generation methodology allowed us to identify the reaction steps requiring further chemical optimisation. The generated model can be used for global optimisation and for generation of environmental and other performance indicators, such as cost indicators. A further challenge identified is to automate model generation so as to evolve optimal multi-step chemical routes and optimal process configurations.
Impact of the calibration period on the conceptual rainfall-runoff model parameter estimates
NASA Astrophysics Data System (ADS)
Todorovic, Andrijana; Plavsic, Jasna
2015-04-01
A conceptual rainfall-runoff model is defined by its structure and parameters, which are commonly inferred through model calibration. Parameter estimates depend on the objective function(s), the optimisation method, and the calibration period. Model calibration over different periods may result in dissimilar parameter estimates, while model efficiency decreases outside the calibration period. The problem of model (parameter) transferability, which determines the reliability of hydrologic simulations, has been investigated for decades. In this paper, the dependence of parameter estimates and model performance on the calibration period is analysed. The main question addressed is: are there any changes in optimised parameters and model efficiency that can be linked to changes in hydrologic or meteorological variables (flow, precipitation and temperature)? The conceptual, semi-distributed HBV-light model is calibrated over five-year periods shifted by one year (sliding time windows). The length of the calibration periods is selected to enable identification of all parameters. One water year of model warm-up precedes every simulation, which starts at the beginning of a water year. The model is calibrated using the built-in GAP optimisation algorithm. The objective function used for calibration is composed of the Nash-Sutcliffe coefficient for flows and for logarithms of flows, and the volumetric error, all of which participate in the composite objective function with approximately equal weights. The same prior parameter ranges are used in all simulations. The model is calibrated against flows observed at the Slovac stream gauge on the Kolubara River in Serbia (records from 1954 to 2013). There are no trends in precipitation or flows; however, there is a statistically significant increasing trend in temperature in this catchment. Parameter variability across the calibration periods is quantified in terms of standard deviations of normalised parameters, enabling detection of the most variable parameters.
Correlation coefficients between the optimised model parameters and total precipitation P, mean temperature T and mean flow Q are calculated to give insight into parameter dependence on the hydrometeorological drivers. The results reveal high sensitivity of almost all model parameters to the calibration period. The highest variability is displayed by the refreezing coefficient, the water holding capacity, and the temperature gradient. The only statistically significant (decreasing) trend is detected in the evapotranspiration reduction threshold. Statistically significant correlation is detected between the precipitation gradient and precipitation depth, and between the time-area histogram base and flows. No other correlations are statistically significant, implying that changes in optimised parameters cannot generally be linked to changes in P, T or Q. As for model performance, the model reproduces the observed runoff satisfactorily, though runoff is slightly overestimated in wet periods. The Nash-Sutcliffe efficiency coefficient (NSE) ranges from 0.44 to 0.79. Higher NSE values are obtained over wetter periods, which is supported by a statistically significant correlation between NSE and flows. Overall, no systematic variations in parameters or in model performance are detected. Parameter variability may therefore rather be attributed to errors in the data or to inadequacies in the model structure. Further research is required to examine the impact of the calibration strategy and model structure on the variability of optimised parameters in time.
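The composite calibration objective described above can be sketched as follows; the equal weighting and the small log-offset `eps` are illustrative assumptions, not the exact HBV-light implementation:

```python
import math

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / var

def composite_objective(obs, sim, eps=0.01):
    """NSE on flows, NSE on log flows, and relative volume error,
    combined with approximately equal weights (to be maximised)."""
    log_obs = [math.log(q + eps) for q in obs]
    log_sim = [math.log(q + eps) for q in sim]
    vol_err = abs(sum(sim) - sum(obs)) / sum(obs)
    return (nse(obs, sim) + nse(log_obs, log_sim) - vol_err) / 3.0
```

A perfect simulation scores NSE = 1 on both flow transforms and zero volume error, so the composite value peaks at 2/3 under this particular weighting.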
Modelling soil water retention using support vector machines with genetic algorithm optimisation.
Lamorski, Krzysztof; Sławiński, Cezary; Moreno, Felix; Barna, Gyöngyi; Skierucha, Wojciech; Arrue, José L
2014-01-01
This work presents point pedotransfer function (PTF) models of the soil water retention curve. The developed models allow estimation of the soil water content at specified soil water potentials (-0.98, -3.10, -9.81, -31.02, -491.66, and -1554.78 kPa) based on the following soil characteristics: soil granulometric composition, total porosity, and bulk density. Support vector machine (SVM) methodology was used for model development. A new methodology for elaboration of retention function models is proposed: in contrast to previous approaches in the literature, the ν-SVM method was used for model development, and the results were compared with the previously used C-SVM method. Genetic algorithms were used as an optimisation framework for the search of model parameters. A new form of the aim function used in the parameter search is proposed, which allowed development of models with better prediction capabilities. This new aim function avoids the overestimation of models that is typically encountered when root mean squared error is used as the aim function. The elaborated models showed good agreement with measured soil water retention data; the achieved coefficients of determination were in the range 0.67-0.92. The study demonstrates the usability of the ν-SVM methodology together with genetic algorithm optimisation for retention modelling, which gave better-performing models than the other tested approaches.
Santamaría, Eva; Estévez, Javier Alejandro; Riba, Jordi; Izquierdo, Iñaki; Valle, Marta
2017-01-01
To optimise a pharmacokinetic (PK) study design of rupatadine for 2-5 year olds by using a population PK model developed with data from a study in 6-11 year olds. The design optimisation was driven by the need to avoid children's discomfort in the study. PK data from 6-11 year olds with allergic rhinitis available from a previous study were used to construct a population PK model which we used in simulations to assess the dose to administer in a study in 2-5 year olds. In addition, an optimal design approach was used to determine the most appropriate number of sampling groups, sampling days, total samples and sampling times. A two-compartmental model with first-order absorption and elimination, with clearance dependent on weight adequately described the PK of rupatadine for 6-11 year olds. The dose selected for a trial in 2-5 year olds was 2.5 mg, as it provided a Cmax below the 3 ng/ml threshold. The optimal study design consisted of four groups of children (10 children each), a maximum sampling window of 2 hours in two clinic visits for drawing three samples on day 14 and one on day 28 coinciding with the final examination of the study. A PK study design was optimised in order to prioritise avoidance of discomfort for enrolled 2-5 year olds by taking only four blood samples from each child and minimising the length of hospital stays.
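A two-compartment model with first-order absorption and weight-dependent clearance, as described, can be simulated with simple Euler integration; every parameter value below is an illustrative placeholder, not a rupatadine estimate:

```python
def simulate_2cpt(dose_mg, weight_kg, t_end=24.0, dt=0.001):
    """Two-compartment PK model with first-order absorption; all parameter
    values are hypothetical placeholders for illustration only."""
    ka = 1.2                                # 1/h, absorption rate constant
    cl = 3.0 * (weight_kg / 70.0) ** 0.75   # L/h, allometric clearance
    q, vc, vp = 2.0, 30.0, 50.0             # L/h inter-compartment flow; L volumes
    gut, central, periph = dose_mg, 0.0, 0.0
    cmax, t = 0.0, 0.0
    while t < t_end:
        absorbed = ka * gut
        c2p = q * (central / vc - periph / vp)  # net flow central -> peripheral
        gut += dt * (-absorbed)
        central += dt * (absorbed - cl * central / vc - c2p)
        periph += dt * c2p
        cmax = max(cmax, central / vc)
        t += dt
    return cmax  # mg/L

cmax = simulate_2cpt(dose_mg=2.5, weight_kg=18.0)
```

Repeating such a simulation over simulated patients is the kind of check used to confirm that a candidate dose keeps Cmax below a safety threshold before sampling times are optimised.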
Optimisation of MSW collection routes for minimum fuel consumption using 3D GIS modelling.
Tavares, G; Zsigraiova, Z; Semiao, V; Carvalho, M G
2009-03-01
Collection of municipal solid waste (MSW) may account for more than 70% of the total waste management budget, most of which is for fuel costs. It is therefore crucial to optimise the routing network used for waste collection and transportation. This paper proposes the use of geographical information systems (GIS) 3D route modelling software for waste collection and transportation, which adds one more degree of freedom to the system and allows driving routes to be optimised for minimum fuel consumption. The model takes into account the effects of road inclination and vehicle weight. It is applied to two different cases: routing waste collection vehicles in the city of Praia, the capital of Cape Verde, and routing the transport of waste from different municipalities of Santiago Island to an incineration plant. For the Praia city region, the 3D model that minimised fuel consumption yielded cost savings of 8% as compared with an approach that simply calculated the shortest 3D route. Remarkably, this was true despite the fact that the GIS-recommended fuel reduction route was actually 1.8% longer than the shortest possible travel distance. For the Santiago Island case, the difference was even more significant: a 12% fuel reduction for a similar total travel distance. These figures indicate the importance of considering both the relief of the terrain and fuel consumption in selecting a suitable cost function to optimise vehicle routing.
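The idea of routing on fuel cost rather than distance can be sketched with a grade- and weight-aware edge cost fed into Dijkstra's algorithm; the fuel coefficients and the tiny graph are hypothetical, not the paper's GIS model:

```python
import heapq, math

def fuel_cost(dist_m, rise_m, vehicle_kg, c_roll=0.0001, c_climb=0.003):
    """Illustrative fuel model: a rolling term proportional to distance and mass,
    plus a climbing term proportional to rise and mass; descents count as flat."""
    mass_t = vehicle_kg / 1000.0
    return c_roll * dist_m * mass_t + c_climb * max(rise_m, 0.0) * mass_t

def cheapest_route(edges, start, goal, vehicle_kg):
    """Dijkstra over an undirected graph; edges are (u, v, dist_m, rise_m)."""
    graph = {}
    for u, v, dist_m, rise_m in edges:
        graph.setdefault(u, []).append((v, fuel_cost(dist_m, rise_m, vehicle_kg)))
        graph.setdefault(v, []).append((u, fuel_cost(dist_m, -rise_m, vehicle_kg)))
    pq, seen = [(0.0, start, [start])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(pq, (cost + c, nxt, path + [nxt]))
    return math.inf, []

# short-but-steep route A-B-D versus longer-but-flat route A-C-D
edges = [("A", "B", 1000, 80), ("B", "D", 1000, 80),
         ("A", "C", 1500, 0), ("C", "D", 1500, 0)]
cost, path = cheapest_route(edges, "A", "D", vehicle_kg=10000)
```

With these illustrative coefficients the flat detour wins despite being 50% longer, which is exactly the effect the 3D GIS model captures.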
Bourne, Richard S; Shulman, Rob; Tomlin, Mark; Borthwick, Mark; Berry, Will; Mills, Gary H
2017-04-01
To identify between and within profession-rater reliability of clinical impact grading for common critical care prescribing error and optimisation cases. To identify representative clinical impact grades for each individual case. Electronic questionnaire. 5 UK NHS Trusts. 30 Critical care healthcare professionals (doctors, pharmacists and nurses). Participants graded severity of clinical impact (5-point categorical scale) of 50 error and 55 optimisation cases. Case between and within profession-rater reliability and modal clinical impact grading. Between and within profession rater reliability analysis used linear mixed model and intraclass correlation, respectively. The majority of error and optimisation cases (both 76%) had a modal clinical severity grade of moderate or higher. Error cases: doctors graded clinical impact significantly lower than pharmacists (-0.25; P < 0.001) and nurses (-0.53; P < 0.001), with nurses significantly higher than pharmacists (0.28; P < 0.001). Optimisation cases: doctors graded clinical impact significantly lower than nurses and pharmacists (-0.39 and -0.5; P < 0.001, respectively). Within profession reliability grading was excellent for pharmacists (0.88 and 0.89; P < 0.001) and doctors (0.79 and 0.83; P < 0.001) but only fair to good for nurses (0.43 and 0.74; P < 0.001), for optimisation and error cases, respectively. Representative clinical impact grades for over 100 common prescribing error and optimisation cases are reported for potential clinical practice and research application. The between professional variability highlights the importance of multidisciplinary perspectives in assessment of medication error and optimisation cases in clinical practice and research. © The Author 2017. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
Low, Ying Wei Ivan; Blasco, Francesca; Vachaspati, Prakash
2016-09-20
Lipophilicity is one of the molecular properties assessed in early drug discovery. Direct measurement of the octanol-water distribution coefficient (logD) requires an analytical method with a large dynamic range or multistep dilutions, as the analyte's concentrations span several orders of magnitude. In addition, the water/buffer and octanol phases, which have very different polarities, can lead to matrix effects that affect the LC-MS response, producing erroneous logD values. Most compound libraries use DMSO stocks as this greatly reduces the sample requirement, but the presence of DMSO has been shown to underestimate the lipophilicity of the analyte. The present work describes the development of an optimised shake-flask logD method using a deep-well 96-well plate that addresses the issues related to matrix effects, DMSO concentration and incubation conditions, and is also amenable to high throughput. Our results indicate that equilibrium can be achieved within 30 min by flipping the plate on its side, while even 0.5% of DMSO is not tolerated in the assay. This study uses the matched-matrix concept to minimise the errors in analysing the two phases, namely buffer and octanol, in LC-MS. Copyright © 2016 Elsevier B.V. All rights reserved.
Aungkulanon, Pasura; Luangpaiboon, Pongchanun
2016-01-01
Response surface methods via first- or second-order models are important in manufacturing processes. This study, however, proposes differently structured mechanisms of vertical transportation systems (VTS) embedded in a shuffled frog leaping-based approach. There are three VTS scenarios: a motion reaching a normal operating velocity, and motions that do and do not reach the transitional motion. These variants were performed to simultaneously inspect multiple responses affected by machining parameters in multi-pass turning processes. The numerical results of two machining optimisation problems demonstrated the high performance of the proposed methods when compared to other optimisation algorithms for an actual deep cut design.
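A minimal shuffled frog leaping sketch (not the paper's VTS-embedded variant) illustrates the memeplex structure: frogs are ranked, partitioned round-robin, the worst frog in each memeplex leaps toward the local best, then the global best, then restarts at random, and the memeplexes are shuffled back together:

```python
import random

def sfla(obj, dim, bounds, n_frogs=30, n_plex=3, gens=60, plex_iters=10, seed=1):
    """Minimal shuffled frog-leaping optimisation sketch (minimisation)."""
    rng = random.Random(seed)
    lo, hi = bounds
    frogs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_frogs)]
    for _ in range(gens):
        frogs.sort(key=obj)
        best_global = frogs[0]
        plexes = [frogs[i::n_plex] for i in range(n_plex)]  # round-robin split
        for plex in plexes:
            for _ in range(plex_iters):
                plex.sort(key=obj)
                best, worst = plex[0], plex[-1]
                step = [rng.random() * (b - w) for b, w in zip(best, worst)]
                cand = [max(lo, min(hi, w + s)) for w, s in zip(worst, step)]
                if obj(cand) >= obj(worst):  # no gain: leap toward global best
                    step = [rng.random() * (b - w) for b, w in zip(best_global, worst)]
                    cand = [max(lo, min(hi, w + s)) for w, s in zip(worst, step)]
                if obj(cand) >= obj(worst):  # still no gain: random restart
                    cand = [rng.uniform(lo, hi) for _ in range(dim)]
                plex[-1] = cand
        frogs = [f for plex in plexes for f in plex]  # shuffle back together
    return min(frogs, key=obj)

best = sfla(lambda x: sum(v * v for v in x), dim=2, bounds=(-5.0, 5.0))
```

The sphere function here is a stand-in objective; the study's objectives combine multiple machining responses.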
Optimising fuel treatments over time and space
Woodam Chung; Greg Jones; Kurt Krueger; Jody Bramel; Marco Contreras
2013-01-01
Fuel treatments have been widely used as a tool to reduce catastrophic wildland fire risks in many forests around the world. However, it is a challenging task for forest managers to prioritise where, when and how to implement fuel treatments across a large forest landscape. In this study, an optimisation model was developed for long-term fuel management decisions at a...
Garner, Alan A; van den Berg, Pieter L
2017-10-16
New South Wales (NSW), Australia has a network of multirole retrieval physician-staffed helicopter emergency medical services (HEMS), with seven bases servicing a jurisdiction whose population is concentrated along the eastern seaboard. The aim of this study was to estimate optimal HEMS base locations within NSW using advanced mathematical modelling techniques. We used high-resolution census population data for NSW from 2011, which divides the state into areas containing 200-800 people. Optimal HEMS base locations were estimated using the maximal covering location problem facility location optimisation model and the average response time model, exploring the number of bases needed to cover various fractions of the population for a 45 min response time threshold, or minimising the overall average response time to all persons, both in greenfield scenarios and conditioning on the current base structure. We also developed a hybrid mathematical model in which average response time was optimised subject to minimum population coverage thresholds. Seven bases could cover 98% of the population within 45 min when optimised for coverage, or reach the entire population of the state within an average of 21 min if optimised for response time. Given the existing bases, adding two bases could either increase the 45 min coverage from 91% to 97% or decrease the average response time from 21 min to 19 min. Adding a single specialist prehospital rapid response HEMS to the area of greatest population concentration decreased the average state-wide response time by 4 min. The optimum seven-base hybrid model, which included the rapid response HEMS, covered 97.75% of the population within 45 min and reached all of the population within an average response time of 18 min. HEMS base locations can be optimised based either on the percentage of the population covered or on the average response time to the entire population.
We have also demonstrated a hybrid technique that optimises response time for a given number of bases and a minimum defined threshold of population coverage. Addition of specialised rapid response HEMS services to a system of multirole retrieval HEMS may reduce overall average response times by improving access in large urban areas.
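The maximal covering location problem named above is usually solved exactly with integer programming; a simple greedy heuristic conveys the structure (demand areas, candidate bases, and coverage sets of areas reachable within the response-time threshold). All names and numbers below are illustrative:

```python
def greedy_max_cover(demand, candidates, coverage, n_bases):
    """Greedy heuristic for the maximal covering location problem.
    demand: {area: population}; coverage: {base: set of areas it can reach
    within the response-time threshold}. Not the exact optimiser of the study."""
    chosen, covered = [], set()
    for _ in range(n_bases):
        best = max(candidates,
                   key=lambda b: sum(demand[a] for a in coverage[b] - covered))
        gain = sum(demand[a] for a in coverage[best] - covered)
        if gain == 0:
            break
        chosen.append(best)
        covered |= coverage[best]
    return chosen, sum(demand[a] for a in covered)

demand = {"a1": 500, "a2": 300, "a3": 200, "a4": 400}
coverage = {"B1": {"a1", "a2"}, "B2": {"a2", "a3"}, "B3": {"a3", "a4"}}
bases, pop = greedy_max_cover(demand, ["B1", "B2", "B3"], coverage, n_bases=2)
```

Here the greedy pass picks B1 (800 people) and then B3 (600 new people), ignoring B2 whose coverage is mostly redundant; the exact formulation adds the same logic as constraints over binary siting variables.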
NASA Astrophysics Data System (ADS)
Huang, Guoqin; Zhang, Meiqin; Huang, Hui; Guo, Hua; Xu, Xipeng
2018-04-01
Circular sawing is an important method for the processing of natural stone. The ability to predict sawing power is important in the optimisation, monitoring and control of the sawing process. In this paper, a predictive model (PFD) of sawing power, based on the tangential force distribution at the sawing contact zone, was proposed, experimentally validated and modified. With regard to the influence of sawing speed on tangential force distribution, the modified PFD (MPFD) performed with high predictive accuracy across a wide range of sawing parameters, including sawing speed. The mean maximum absolute error rate was within 6.78%, and the maximum absolute error rate was within 11.7%. The practicability of predicting sawing power with the MPFD from few initial experimental samples was proved in case studies. Provided the sample measurement accuracy is high, only two samples are required for a fixed sawing speed. The feasibility of applying the MPFD to optimise sawing parameters while lowering the energy consumption of the sawing system was also validated. The case study shows that energy use was reduced by 28% by optimising the sawing parameters. The MPFD model can be used to predict sawing power, optimise sawing parameters and control energy consumption.
Marengo, Emilio; Robotti, Elisa; Gennaro, Maria Carla; Bertetto, Mariella
2003-03-01
The optimisation of the formulation of a commercial bubble bath was performed by chemometric analysis of panel test results. A first panel test was performed to choose the best essence among four proposed to the consumers; the chosen essence was used in the revised commercial bubble bath. Afterwards, the effect of changing the amounts of four components of the bubble bath (the primary surfactant, the essence, the hydratant and the colouring agent) was studied by a fractional factorial design. Segmentation of the bubble bath market was performed by a second panel test, in which the consumers were asked to evaluate the samples coming from the experimental design. The results were then treated by principal component analysis. The market had two segments: people preferring a product with a rich formulation and people preferring a plainer product. The final target, i.e. the optimisation of the formulation for each segment, was achieved by calculating regression models relating the subjective evaluations given by the panel to the compositions of the samples. The regression models allowed identification of the best formulations for the two segments of the market.
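A fractional factorial design of the kind described, four factors studied in half the runs of a full factorial, can be generated with the standard generator D = ABC; the factor names follow the abstract, and the coded -1/+1 levels are illustrative:

```python
from itertools import product

def half_fraction_design():
    """2^(4-1) fractional factorial: a full 2^3 design in the first three
    factors, with the fourth set by the generator D = ABC (8 runs, not 16)."""
    runs = []
    for a, b, c in product((-1, 1), repeat=3):
        runs.append({"surfactant": a, "essence": b, "hydratant": c,
                     "colourant": a * b * c})
    return runs

design = half_fraction_design()
```

The generator keeps every column balanced and orthogonal to the others, which is what lets main effects be estimated from only eight formulations.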
Optimal design and operation of a photovoltaic-electrolyser system using particle swarm optimisation
NASA Astrophysics Data System (ADS)
Sayedin, Farid; Maroufmashat, Azadeh; Roshandel, Ramin; Khavas, Sourena Sattari
2016-07-01
In this study, hydrogen generation is maximised by optimising the size and the operating conditions of an electrolyser (EL) directly connected to a photovoltaic (PV) module at different irradiances. Due to the variations of the maximum power points of the PV module during a year and the complexity of the system, a nonlinear approach is considered. A mathematical model has been developed to determine the performance of the PV/EL system. The optimisation methodology presented here is based on the particle swarm optimisation algorithm. By this method, for a given number of PV modules, the optimal size and operating conditions of a PV/EL system are achieved. The approach can be applied to different sizes of PV systems, various ambient temperatures and different locations with various climatic conditions. The results show that for the given location and PV system, the energy transfer efficiency of the PV/EL system can reach up to 97.83%.
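A minimal particle swarm optimisation sketch shows the mechanics used here (inertia plus personal-best and global-best attraction); the two-variable quadratic stands in for the actual PV/EL objective, and all settings are illustrative:

```python
import random

def pso(obj, dim, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=7):
    """Minimal particle swarm optimisation sketch (minimisation)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    gbest = min(pbest, key=obj)[:]              # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = max(lo, min(hi, pos[i][d] + vel[i][d]))
            if obj(pos[i]) < obj(pbest[i]):
                pbest[i] = pos[i][:]
                if obj(pos[i]) < obj(gbest):
                    gbest = pos[i][:]
    return gbest

# stand-in objective: mismatch between an EL operating point and the PV MPP
best = pso(lambda x: (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2,
           dim=2, bounds=(-10.0, 10.0))
```

In the study the decision variables would instead be the electrolyser sizing and operating parameters, with the model-predicted hydrogen output as the objective.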
VLSI Technology for Cognitive Radio
NASA Astrophysics Data System (ADS)
VIJAYALAKSHMI, B.; SIDDAIAH, P.
2017-08-01
One of the most challenging tasks in cognitive radio is achieving an efficient spectrum sensing scheme to overcome the spectrum scarcity problem. The most popular and widely used spectrum sensing technique is the energy detection scheme, as it is very simple and does not require any prior information about the signal. We propose one such approach: an optimised spectrum sensing scheme with a reduced filter structure. The optimisation is done in terms of the area and power performance of the design. The VLSI structure of the optimised flexible spectrum sensing scheme is simulated in Verilog using the Xilinx ISE software. Our method achieves a 13% reduction in area and a 66% reduction in power consumption in comparison to the flexible spectrum sensing scheme. All the results are tabulated and comparisons are made. A new scheme for optimised and effective spectrum sensing opens up with our model.
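Energy detection itself is simple enough to sketch: average the squared samples and compare against a threshold tied to the noise variance. The threshold rule below is illustrative; practical detectors derive it from a target false-alarm probability:

```python
import random

def energy_detector(samples, noise_var, threshold_factor=1.5):
    """Declare the channel occupied if the average sample energy exceeds a
    multiple of the noise variance (illustrative threshold rule)."""
    energy = sum(s * s for s in samples) / len(samples)
    return energy > threshold_factor * noise_var

rng = random.Random(0)
noise = [rng.gauss(0.0, 1.0) for _ in range(4000)]                  # idle channel
signal = [n + 2.0 * ((-1) ** rng.randrange(2)) for n in noise]     # BPSK-like user
```

Noise-only samples average near unit energy and fall below the threshold, while the signal-plus-noise samples average near five and trip the detector; the VLSI work optimises the filter hardware in front of exactly this decision.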
Devos, Olivier; Downey, Gerard; Duponchel, Ludovic
2014-04-01
Classification is an important task in chemometrics. For several years now, support vector machines (SVMs) have proven to be powerful for infrared spectral data classification. However, such methods require optimisation of parameters in order to control the risk of overfitting and the complexity of the boundary. Furthermore, it is established that the prediction ability of classification models can be improved by using pre-processing to remove unwanted variance in the spectra. In this paper we propose a new methodology based on a genetic algorithm (GA) for the simultaneous optimisation of SVM parameters and pre-processing (GENOPT-SVM). The method has been tested for the discrimination of the geographical origin of Italian olive oil (Ligurian and non-Ligurian) on the basis of near infrared (NIR) or mid infrared (FTIR) spectra. Different classification models (PLS-DA, SVM with mean-centred data, GENOPT-SVM) were tested and statistically compared using McNemar's test. For both datasets, SVM with optimised pre-processing gave models with higher accuracy than those obtained with PLS-DA on pre-processed data. In the case of the NIR dataset, most of this accuracy improvement (86.3% compared with 82.8% for PLS-DA) occurred using only a single pre-processing step. For the FTIR dataset, three optimised pre-processing steps were required to obtain an SVM model with a significant accuracy improvement (82.2%) compared with the one obtained with PLS-DA (78.6%). Furthermore, this study demonstrates that even SVM models have to be developed on the basis of well-corrected spectral data in order to obtain higher classification rates. Copyright © 2013 Elsevier Ltd. All rights reserved.
Optimisation of phase ratio in the triple jump using computer simulation.
Allen, Sam J; King, Mark A; Yeadon, M R Fred
2016-04-01
The triple jump is an athletic event comprising three phases in which the optimal proportion of each phase to the total distance jumped, termed the phase ratio, is unknown. This study used a whole-body torque-driven computer simulation model of all three phases of the triple jump to investigate optimal technique. The technique of the simulation model was optimised by varying torque generator activation parameters using a genetic algorithm in order to maximise total jump distance, resulting in a hop-dominated technique (35.7%:30.8%:33.6%) and a distance of 14.05 m. Optimisations were then run with penalties forcing the model to adopt hop and jump phases of 33%, 34%, 35%, 36%, and 37% of the optimised distance, resulting in total distances of 13.79 m, 13.87 m, 13.95 m, 14.05 m, and 14.02 m; and 14.01 m, 14.02 m, 13.97 m, 13.84 m, and 13.67 m respectively. These results indicate that in this subject-specific case there is a plateau in optimum technique encompassing balanced and hop-dominated techniques, but that a jump-dominated technique is associated with a decrease in performance. Hop-dominated techniques are associated with higher forces than jump-dominated techniques; therefore the optimal phase ratio may be related to a combination of strength and approach velocity. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Dal Bianco, N.; Lot, R.; Matthys, K.
2018-01-01
This work concerns the design of an electric motorcycle for the annual Isle of Man TT Zero Challenge. Optimal control theory was used to perform lap time simulation and design optimisation. A bespoke model was developed, featuring 3D road topology, vehicle dynamics and an electric power train composed of a lithium battery pack, brushed DC motors and a motor controller. The model runs simulations over the entire Snaefell Mountain Course. The work is validated using experimental data from the BX chassis of the Brunel Racing team, which ran during the 2009 to 2015 TT Zero races. Optimal control is used to improve drive train and power train configurations. Findings demonstrate computational efficiency, good lap time prediction and design optimisation potential, achieving a two-minute reduction of the reference lap time through changes in final drive gear ratio, battery pack size and motor configuration.
Prediction of road traffic death rate using neural networks optimised by genetic algorithm.
Jafari, Seyed Ali; Jahandideh, Sepideh; Jahandideh, Mina; Asadabadi, Ebrahim Barzegari
2015-01-01
Road traffic injuries (RTIs) are recognised as a major public health problem at global, regional and national levels. Prediction of the road traffic death rate will therefore be helpful in its management. On this basis, we used an artificial neural network model optimised through a genetic algorithm to predict mortality. In this study, a five-fold cross-validation procedure on a data set containing a total of 178 countries was used to verify the performance of the models. The best-fit model was selected according to the root mean square error (RMSE). The genetic algorithm, a powerful method which has not previously been applied to the prediction of mortality to this extent, showed high performance. The lowest RMSE obtained was 0.0808. Such satisfactory results can be attributed to the use of the genetic algorithm as a powerful optimiser that selects the best input feature set to be fed into the neural networks. Seven factors were identified with high accuracy as having the strongest effect on the road traffic mortality rate. The results show that our model is promising and may play a useful role in developing a better method for assessing the influence of road traffic mortality risk factors.
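The role of the genetic algorithm as a feature-set selector can be sketched with binary masks scored by validation RMSE; here a 1-nearest-neighbour regressor stands in for the paper's neural network, and all settings and data are illustrative:

```python
import random, math

def rmse(y_true, y_pred):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true))

def knn_predict(train_x, train_y, x, mask):
    """1-nearest-neighbour prediction using only the features selected by mask."""
    def dist(p, q):
        return sum((p[i] - q[i]) ** 2 for i in range(len(p)) if mask[i])
    best = min(range(len(train_x)), key=lambda j: dist(train_x[j], x))
    return train_y[best]

def ga_select_features(train, valid, n_feat, pop=20, gens=30, seed=3):
    """GA over binary feature masks; fitness is validation RMSE of the simple
    regressor above (a stand-in for the study's neural network)."""
    rng = random.Random(seed)
    (tx, ty), (vx, vy) = train, valid
    def fitness(mask):
        if not any(mask):
            return float("inf")
        return rmse(vy, [knn_predict(tx, ty, x, mask) for x in vx])
    masks = [[rng.randrange(2) for _ in range(n_feat)] for _ in range(pop)]
    for _ in range(gens):
        masks.sort(key=fitness)
        keep = masks[: pop // 2]                     # elitist selection
        children = []
        while len(keep) + len(children) < pop:
            a, b = rng.sample(keep, 2)
            cut = rng.randrange(1, n_feat)
            child = a[:cut] + b[cut:]                # one-point crossover
            i = rng.randrange(n_feat)
            child[i] = 1 - child[i]                  # point mutation
            children.append(child)
        masks = keep + children
    return min(masks, key=fitness)

# toy data: feature 0 carries the signal, feature 1 is large-scale noise
rng = random.Random(0)
train_x = [[float(i), rng.uniform(-100.0, 100.0)] for i in range(10)]
train_y = [float(i) for i in range(10)]
valid_x = [[i + 0.5, rng.uniform(-100.0, 100.0)] for i in range(10)]
valid_y = [i + 0.5 for i in range(10)]
mask = ga_select_features((train_x, train_y), (valid_x, valid_y), n_feat=2)
```

On this toy data the GA learns to keep the informative feature and drop the noisy one, which mirrors how the paper's optimiser selects the inputs fed to the network.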
A simple model clarifies the complicated relationships of complex networks
Zheng, Bojin; Wu, Hongrun; Kuang, Li; Qin, Jun; Du, Wenhua; Wang, Jianmin; Li, Deyi
2014-01-01
Real-world networks such as the Internet and WWW have many common traits. Until now, hundreds of models have been proposed to characterise these traits in order to understand the networks. Because different models used very different mechanisms, it is widely believed that these traits originate from different causes. However, we find that a simple model based on optimisation can produce many traits, including scale-free, small-world, ultra-small-world, delta-distribution, compact, fractal, regular and random networks. Moreover, by revising the proposed model, community-structure networks are generated. With this model and its revised versions, the complicated relationships of complex networks are illustrated. The model brings a new universal perspective to the understanding of complex networks and provides a universal method for modelling complex networks from the viewpoint of optimisation. PMID:25160506
The seasonal behaviour of carbon fluxes in the Amazon: fusion of FLUXNET data and the ORCHIDEE model
NASA Astrophysics Data System (ADS)
Verbeeck, H.; Peylin, P.; Bacour, C.; Ciais, P.
2009-04-01
Eddy covariance measurements at the Santarém (km 67) site revealed an unexpected seasonal pattern in carbon fluxes which could not be simulated by existing state-of-the-art global ecosystem models (Saleska et al., Science 2003). An unexpectedly high carbon uptake was measured during the dry season; in contrast, carbon release was observed in the wet season. There are several possible (combined) underlying mechanisms for this phenomenon: (1) increased soil respiration due to higher soil moisture in the wet season; (2) increased photosynthesis during the dry season due to deep rooting, hydraulic lift, increased radiation and/or a leaf flush. The objective of this study is to optimise the ORCHIDEE model using eddy covariance data in order to mimic the seasonal response of carbon fluxes to dry/wet conditions in tropical forest ecosystems. In doing so, we try to identify the underlying mechanisms of this seasonal response. The ORCHIDEE model is a state-of-the-art mechanistic global vegetation model that can be run at local or global scale. It calculates the carbon and water cycles in the different soil and vegetation pools and resolves the diurnal cycle of fluxes. ORCHIDEE is built on the concept of plant functional types (PFTs) to describe vegetation. To bring the different carbon pool sizes to realistic values, spin-up runs are used. ORCHIDEE uses climate variables as drivers, together with a number of ecosystem parameters that have been assessed from laboratory and in situ experiments. These parameters are still associated with large uncertainty and may vary between and within PFTs in a way that is currently not captured by the model. Recently, the development of assimilation techniques has allowed the objective use of eddy covariance data to improve our knowledge of these parameters in a statistically coherent approach. We use a Bayesian optimisation approach.
This approach is based on the minimisation of a cost function containing the mismatch between simulated model output and observations, as well as the mismatch between a priori and optimised parameters. The parameters can be optimised on different time scales (annual, monthly, daily). For this study the model is optimised at local scale for five eddy flux sites: four in Brazil and one in French Guiana. The seasonal behaviour of carbon fluxes in response to wet and dry conditions differs among these sites. Key processes that are optimised include the effect of soil water on heterotrophic soil respiration, the effect of soil water availability on stomatal conductance and photosynthesis, and phenology. By optimising several key parameters we could significantly improve the simulation of the seasonal pattern of NEE. Nevertheless, posterior parameters should be interpreted with care, because the resulting parameter values might compensate for uncertainties in the model structure or in other parameters. Moreover, several critical issues appeared during this study, e.g. how to assimilate latent and sensible heat data when the energy balance is not closed in the data. Optimisation of the Q10 parameter showed that at some sites respiration was not sensitive at all to temperature, which shows only small variations in this region. Considering this, one could question the reliability of the partitioned fluxes (GPP/Reco) at these sites. This study also tests whether there is coherence between optimised parameter values of different sites within the tropical forest PFT, and whether the forward model response to climate variations is similar between sites.
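The cost function of such a Bayesian optimisation, data mismatch plus prior mismatch, can be written down directly for diagonal error covariances; the Q10 respiration toy model and all numbers below are illustrative, not ORCHIDEE:

```python
def bayesian_cost(params, prior, prior_var, model, obs, obs_var):
    """Quadratic Bayesian cost with diagonal error covariances:
    0.5 * [sum (sim-obs)^2/obs_var + sum (p-prior)^2/prior_var]."""
    sim = model(params)
    data_term = sum((s - o) ** 2 / v for s, o, v in zip(sim, obs, obs_var))
    prior_term = sum((p, pb, v)[0] * 0 + (p - pb) ** 2 / v
                     for p, pb, v in zip(params, prior, prior_var))
    return 0.5 * (data_term + prior_term)

# toy "flux model": Q10 respiration, resp = base * q10 ** ((T - 20) / 10)
temps = [15.0, 20.0, 25.0, 30.0]
def resp_model(params):
    base, q10 = params
    return [base * q10 ** ((t - 20.0) / 10.0) for t in temps]

obs = resp_model([2.0, 2.0])  # synthetic observations from "true" parameters
cost_true = bayesian_cost([2.0, 2.0], [1.5, 2.5], [1.0, 1.0],
                          resp_model, obs, [0.1] * 4)
cost_off = bayesian_cost([3.0, 1.2], [1.5, 2.5], [1.0, 1.0],
                         resp_model, obs, [0.1] * 4)
```

At the true parameters only the prior term contributes; any optimiser (gradient-based or otherwise) then trades the prior pull against the data fit, which is exactly why posterior parameter values can absorb structural model error.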
Design of a prototype flow microreactor for synthetic biology in vitro.
Boehm, Christian R; Freemont, Paul S; Ces, Oscar
2013-09-07
As a reference platform for in vitro synthetic biology, we have developed a prototype flow microreactor for enzymatic biosynthesis. We report the design, implementation, and computer-aided optimisation of a three-step model pathway within a microfluidic reactor. After experimental evaluation of several approaches, a packed-bed format was shown to be optimal for enzyme compartmentalisation. The specific substrate conversion efficiency could be significantly improved with an optimised parameter set obtained by computational modelling. Our microreactor design provides a platform to explore new in vitro synthetic biology solutions for industrial biosynthesis.
Yetilmezsoy, Kaan; Demirel, Sevgi
2008-05-30
A three-layer artificial neural network (ANN) model was developed to predict the efficiency of Pb(II) ion removal from aqueous solution by Antep pistachio (Pistacia vera L.) shells based on 66 experimental sets obtained in a laboratory batch study. The effects of operational parameters such as adsorbent dosage, initial concentration of Pb(II) ions, initial pH, operating temperature, and contact time were studied to optimise the conditions for maximum removal of Pb(II) ions. On the basis of batch test results, optimal operating conditions were determined to be an initial pH of 5.5, an adsorbent dosage of 1.0 g, an initial Pb(II) concentration of 30 ppm, and a temperature of 30 degrees C. Experimental results showed that a contact time of 45 min was generally sufficient to achieve equilibrium. After backpropagation (BP) training combined with principal component analysis (PCA), the ANN model was able to predict adsorption efficiency with a tangent sigmoid transfer function (tansig) at the hidden layer with 11 neurons and a linear transfer function (purelin) at the output layer. The Levenberg-Marquardt algorithm (LMA) was found to be the best of the 11 BP algorithms tested, with a minimum mean squared error (MSE) of 0.000227875. The linear regression between the network outputs and the corresponding targets was shown to be satisfactory, with a correlation coefficient of about 0.936 for the five model variables used in this study.
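The forward pass of the architecture described (tansig hidden layer, purelin output layer) can be sketched in a few lines; the tiny weight matrices below are illustrative placeholders, not the trained model from the study:

```python
import math

def tansig(x):
    # MATLAB's tansig transfer function is mathematically equivalent to tanh
    return math.tanh(x)

def forward(inputs, w_hidden, b_hidden, w_out, b_out):
    """One hidden layer with tansig activation, one linear (purelin) output."""
    hidden = [tansig(sum(w * x for w, x in zip(row, inputs)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out

# Tiny illustrative network: 2 inputs -> 2 hidden neurons -> 1 output
y = forward([0.5, 0.3],
            w_hidden=[[0.2, -0.4], [0.7, 0.1]],
            b_hidden=[0.0, -0.1],
            w_out=[1.5, -0.8],
            b_out=0.05)
```

The study's actual network had 11 hidden neurons and PCA-transformed inputs; this sketch only shows how the two transfer functions combine.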
Yu Wei; Erin J. Belval; Matthew P. Thompson; Dave E. Calkin; Crystal S. Stonesifer
2016-01-01
Sharing fire engines and crews between fire suppression dispatch zones may help improve the utilisation of fire suppression resources. Using the Resource Ordering and Status System, the Predictive Services' Fire Potential Outlooks and the Rocky Mountain Region Preparedness Levels from 2010 to 2013, we tested a simulation and optimisation procedure to transfer crews and...
Optimisation of a parallel ocean general circulation model
NASA Astrophysics Data System (ADS)
Beare, M. I.; Stevens, D. P.
1997-10-01
This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.
O'Boyle, Noel M; Palmer, David S; Nigsch, Florian; Mitchell, John Bo
2008-10-29
We present a novel feature selection algorithm, Winnowing Artificial Ant Colony (WAAC), that performs simultaneous feature selection and model parameter optimisation for the development of predictive quantitative structure-property relationship (QSPR) models. The WAAC algorithm is an extension of the modified ant colony algorithm of Shen et al. (J Chem Inf Model 2005, 45: 1024-1029). We test the ability of the algorithm to develop a predictive partial least squares model for the Karthikeyan dataset (J Chem Inf Model 2005, 45: 581-590) of melting point values. We also test its ability to perform feature selection on a support vector machine model for the same dataset. Starting from an initial set of 203 descriptors, the WAAC algorithm selected a PLS model with 68 descriptors, which has an RMSE on an external test set of 46.6 degrees C and an R2 of 0.51. The number of components chosen for the model was 49, which was close to optimal for this feature selection. The selected SVM model has 28 descriptors (cost of 5, epsilon of 0.21) and an RMSE of 45.1 degrees C and an R2 of 0.54. This model outperforms a kNN model (RMSE of 48.3 degrees C, R2 of 0.47) for the same data and has similar performance to a Random Forest model (RMSE of 44.5 degrees C, R2 of 0.55). However, it is much less prone to bias at the extremes of the range of melting points, as shown by the slope of the line through the residuals: -0.43 for WAAC/SVM, -0.53 for Random Forest. With a careful choice of objective function, the WAAC algorithm can be used to optimise machine learning and regression models that suffer from overfitting. Where model parameters also need to be tuned, as is the case with support vector machine and partial least squares models, it can optimise these simultaneously. The moving probabilities used by the algorithm are easily interpreted in terms of the best and current models of the ants, and the winnowing procedure promotes the removal of irrelevant descriptors.
Airfoil Shape Optimization based on Surrogate Model
NASA Astrophysics Data System (ADS)
Mukesh, R.; Lingadurai, K.; Selvakumar, U.
2018-02-01
Engineering design problems always require an enormous number of real-time experiments and computational simulations in order to assess and ensure that the design objectives are met subject to various constraints. In most cases, the computational resources and time required per simulation are large. In cases such as sensitivity analysis and design optimisation, where thousands or millions of simulations have to be carried out, this creates enormous difficulty for designers. Nowadays, approximation models, also known as surrogate models (SM), are widely employed in order to reduce the computational resources and time needed to analyse various engineering systems. Various approaches such as Kriging, neural networks, polynomials, and Gaussian processes are used to construct the approximation models. The primary intention of this work is to employ the k-fold cross-validation approach to study and evaluate the influence of various theoretical variogram models on the accuracy of the surrogate model construction. Ordinary Kriging and design of experiments (DOE) approaches are used to construct the SMs by approximating panel and viscous solution algorithms, which are primarily used to solve the flow around airfoils and aircraft wings. The method of coupling the SMs with a suitable optimisation scheme to carry out an aerodynamic design optimisation process for airfoil shapes is also discussed.
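The k-fold cross-validation loop used to score competing surrogate models can be sketched as follows; `fit` and `predict` are placeholders standing in for the Kriging construction and prediction steps, which depend on the variogram choice under evaluation:

```python
def k_fold_splits(n_samples, k):
    """Yield (train_idx, test_idx) pairs for k roughly equal folds."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = [i for i in range(n_samples) if i not in test]
        yield train, test
        start += size

def cv_rmse(x, y, fit, predict, k=5):
    """Mean held-out RMSE over k folds; lower means a better surrogate."""
    total = 0.0
    for train, test in k_fold_splits(len(x), k):
        model = fit([x[i] for i in train], [y[i] for i in train])
        errs = [(predict(model, x[i]) - y[i]) ** 2 for i in test]
        total += (sum(errs) / len(errs)) ** 0.5
    return total / k

# Sanity check with a trivial "mean predictor" surrogate on constant data
xs = list(range(10))
ys = [2.0] * 10
score = cv_rmse(xs, ys, fit=lambda X, Y: sum(Y) / len(Y),
                predict=lambda m, xi: m, k=5)
```

Repeating `cv_rmse` with a different `fit` (one per candidate variogram model) and comparing the scores is the comparison the abstract describes.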
Maillot, Matthieu; Vieux, Florent; Delaere, Fabien; Lluch, Anne; Darmon, Nicole
2017-01-01
Objective To explore the dietary changes needed to achieve nutritional adequacy across income levels at constant energy and diet cost. Materials and methods Individual diet modelling was used to design iso-caloric, nutritionally adequate optimised diets for each observed diet in a sample of adult normo-reporters aged ≥20 years (n = 1,719) from the Individual and National Dietary Survey (INCA2), 2006–2007. Diet cost was estimated from mean national food prices (2006–2007). A first set of free-cost models explored the impact of optimisation on the variation of diet cost. A second set of iso-cost models explored the dietary changes induced by the optimisation with cost set equal to the observed one. Analyses of dietary changes were conducted by income quintiles, adjusting for energy intake, sociodemographic and socioeconomic variables, and smoking status. Results The cost of observed diets increased with increasing income quintiles. In free-cost models, the optimisation increased diet cost on average (+0.22 ± 1.03 euros/d) and within each income quintile, with no significant difference between quintiles, but with systematic increases for observed costs lower than 3.85 euros/d. In iso-cost models, it was possible to design nutritionally adequate diets whatever the initial observed cost. On average, the optimisation at iso-cost increased fruits and vegetables (+171 g/day), starchy foods (+121 g/d), water and beverages (+91 g/d), and dairy products (+20 g/d), and decreased the other food groups (e.g. mixed dishes and salted snacks), leading to increased total diet weight (+300 g/d). Those changes were mostly similar across income quintiles, but lower-income individuals needed to introduce significantly more fruit and vegetables than higher-income ones. 
Conclusions In France, the dietary changes needed to reach nutritional adequacy without increasing cost are similar regardless of income, but may be more difficult to implement when the budget for food is lower than 3.85 euros/d. PMID:28358837
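The iso-cost optimisation idea can be illustrated with a toy brute-force search over servings of three hypothetical foods; the study itself used individual diet modelling (linear programming) with real French food-price and nutrient data, so every number below is an invented placeholder:

```python
from itertools import product

# Hypothetical foods: (cost euros/serving, kcal/serving, "nutrient" units/serving)
foods = {"fruit_veg": (0.40, 60, 3.0),
         "starch":    (0.30, 180, 1.0),
         "dairy":     (0.50, 120, 2.5)}

def best_diet(energy_target, nutrient_min, cost_budget, max_servings=8):
    """Cheapest serving mix meeting the energy target (within 10%) and a
    nutrient floor without exceeding the cost budget."""
    best = None
    names = list(foods)
    for servings in product(range(max_servings + 1), repeat=len(names)):
        cost = sum(s * foods[n][0] for s, n in zip(servings, names))
        kcal = sum(s * foods[n][1] for s, n in zip(servings, names))
        nutr = sum(s * foods[n][2] for s, n in zip(servings, names))
        if (abs(kcal - energy_target) <= 0.1 * energy_target
                and nutr >= nutrient_min and cost <= cost_budget):
            if best is None or cost < best[0]:
                best = (cost, dict(zip(names, servings)))
    return best

solution = best_diet(energy_target=1200, nutrient_min=15, cost_budget=4.0)
```

With dozens of foods and nutrients this grid search becomes infeasible, which is why the study relies on linear programming rather than enumeration.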
Del Prado, A; Misselbrook, T; Chadwick, D; Hopkins, A; Dewhurst, R J; Davison, P; Butler, A; Schröder, J; Scholefield, D
2011-09-01
Multiple demands are placed on farming systems today. Society, national legislation and market forces seek what could be seen as conflicting outcomes from our agricultural systems, e.g. food quality, affordable prices, a healthy environment, consideration of animal welfare, biodiversity, etc. Many of these demands, or desirable outcomes, are interrelated, so reaching one goal may often compromise another and, importantly, pose a risk to the economic viability of the farm. SIMS(DAIRY), a farm-scale model, was used to explore this complexity for dairy farm systems. SIMS(DAIRY) integrates existing approaches to simulate the effect of interactions between farm management, climate and soil characteristics on losses of nitrogen, phosphorus and carbon. The effects on farm profitability and attributes of biodiversity, milk quality, soil quality and animal welfare are also included. SIMS(DAIRY) can also be used to optimise fertiliser N. In this paper we discuss some limitations and strengths of using SIMS(DAIRY) compared to other modelling approaches and propose some potential improvements. Using the model we evaluated the sustainability of organic dairy systems compared with conventional dairy farms under non-optimised and optimised fertiliser N use. Model outputs showed, for example, that organic dairy systems based on grass-clover swards and maize silage resulted in much smaller total GHG emissions per l of milk and slightly smaller losses from NO(3) leaching and NO(x) emissions per l of milk compared with the grassland/maize-based conventional systems. These differences arise essentially because the conventional systems rely on indirect energy use for 'fixing' N, compared with biological N fixation in the organic systems. SIMS(DAIRY) runs also showed some other potential benefits of the organic systems compared with conventional systems in terms of financial performance and soil quality and biodiversity scores.
Optimisation of fertiliser N timings and rates showed considerable scope to reduce the GHG emissions per l of milk too. Copyright © 2011 Elsevier B.V. All rights reserved.
Discovery and optimisation studies of antimalarial phenotypic hits
Mital, Alka; Murugesan, Dinakaran; Kaiser, Marcel; Yeates, Clive; Gilbert, Ian H.
2015-01-01
There is an urgent need for the development of new antimalarial compounds. As a result of a phenotypic screen, several compounds with potent activity against the parasite Plasmodium falciparum were identified. Characterization of these compounds is discussed, along with approaches to optimise the physicochemical properties. The in vitro antimalarial activity of these compounds against P. falciparum K1 had EC50 values in the range of 0.09–29 μM, and generally good selectivity (typically >100-fold) compared to a mammalian cell line (L6). One example showed no significant activity against a rodent model of malaria, and more work is needed to optimise these compounds. PMID:26408453
NASA Astrophysics Data System (ADS)
Yadav, Naresh Kumar; Kumar, Mukesh; Gupta, S. K.
2017-03-01
The general strategic bidding procedure has been formulated in the literature as a bi-level search problem, in which the offer curve tends to minimise the market clearing function and to maximise the profit. Computationally, this is complex and hence researchers have adopted Karush-Kuhn-Tucker (KKT) optimality conditions to transform the model into a single-level maximisation problem. However, the profit maximisation problem with KKT optimality conditions poses a great challenge to classical optimisation algorithms. The problem has become more complex after the inclusion of transmission constraints. This paper simplifies the profit maximisation problem as a minimisation function, in which the transmission constraints, the operating limits and the ISO market clearing functions are considered with no KKT optimality conditions. The derived function is solved using the group search optimiser (GSO), a robust population-based optimisation algorithm. Experimental investigation is carried out on IEEE 14-bus as well as IEEE 30-bus systems, and the performance is compared against differential evolution-based, genetic algorithm-based and particle swarm optimisation-based strategic bidding methods. The simulation results demonstrate that the profit obtained through GSO-based bidding strategies is higher than that of the other three methods.
Ghosh, Ranadhir; Yearwood, John; Ghosh, Moumita; Bagirov, Adil
2006-06-01
In this paper we investigate a hybrid model based on the Discrete Gradient method and an evolutionary strategy for determining the weights in a feed-forward artificial neural network, and we discuss different variants of such hybrid models. The Discrete Gradient method has the advantage of being able to jump over many local minima and find very deep local minima. However, earlier research has shown that a good starting point for the Discrete Gradient method can improve the quality of the solution point. Evolutionary algorithms are best suited for global optimisation problems; nevertheless, they suffer from long training times and are often unsuitable for real-world applications. For optimisation problems such as weight optimisation for ANNs in real-world applications, the dimensionality is large and time complexity is critical. Hence a hybrid model can be a suitable option. In this paper we propose different fusion strategies for hybrid models combining the evolutionary strategy with the Discrete Gradient method to obtain an optimal solution much more quickly. Three different fusion strategies are discussed: a linear hybrid model, an iterative hybrid model and a restricted local search hybrid model. Comparative results on a range of standard datasets are provided for the different fusion hybrid models.
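A minimal sketch of the linear hybrid idea: a global evolutionary stage seeds a local derivative-free descent. Here a simple mutate-and-select evolution and a pattern search stand in for the paper's evolutionary strategy and Discrete Gradient method, and the sphere function is a placeholder for an ANN training loss:

```python
import random

def linear_hybrid_minimise(f, dim, bounds, pop=20, gens=30, local_steps=200, seed=0):
    """Stage 1: crude evolutionary global search.
    Stage 2: coordinate-wise pattern search started from the best
    evolutionary point (stand-in for the Discrete Gradient method)."""
    rng = random.Random(seed)
    lo, hi = bounds
    population = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=f)
        parents = population[: pop // 2]
        children = [[x + rng.gauss(0, 0.3) for x in rng.choice(parents)]
                    for _ in range(pop - len(parents))]
        population = parents + children
    best = min(population, key=f)
    step = 0.1
    for _ in range(local_steps):
        improved = False
        for i in range(dim):
            for delta in (step, -step):
                trial = best[:]
                trial[i] += delta
                if f(trial) < f(best):
                    best, improved = trial, True
        if not improved:
            step *= 0.5  # shrink the local search radius
    return best

# Sphere function as a stand-in for a neural-network weight-training loss
sphere = lambda w: sum(x * x for x in w)
w = linear_hybrid_minimise(sphere, dim=3, bounds=(-5, 5))
```

The point of the hybrid is that the cheap global stage supplies a good starting point, so the local stage converges quickly instead of stalling in a poor basin.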
Optimal control of predator-prey mathematical model with infection and harvesting on prey
NASA Astrophysics Data System (ADS)
Diva Amalia, R. U.; Fatmawati; Windarto; Khusnul Arif, Didik
2018-03-01
This paper presents a predator-prey mathematical model with infection and harvesting on prey. The infection and harvesting occur only in the prey population, and it is assumed that the infection does not spread to the predator population. We analysed the mathematical model of predator-prey dynamics with infection and harvesting in prey. An optimal control, representing prevention of prey infection, is also applied in the model and denoted by U. The purpose of the control is to increase the susceptible prey. The analytical results showed that the model has five equilibria, namely the extinction equilibrium (E0), the infection-free and predator-extinction equilibrium (E1), the infection-free equilibrium (E2), the predator-extinction equilibrium (E3), and the coexistence equilibrium (E4). The extinction equilibrium (E0) is not stable. The infection-free and predator-extinction equilibrium (E1), the infection-free equilibrium (E2), and the predator-extinction equilibrium (E3) are locally asymptotically stable under certain conditions. The coexistence equilibrium (E4) tends to be locally asymptotically stable. Afterwards, by using Pontryagin's Maximum Principle, we established the existence of the optimal control U. From numerical simulation, we conclude that the control can increase the population of susceptible prey and decrease the infected prey.
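An illustrative forward-Euler simulation of a susceptible/infected prey and predator system, with a constant prevention control u scaling down the infection rate, shows the qualitative effect described. The functional forms and parameter values are assumptions chosen for illustration, not those of the paper:

```python
def simulate(u, t_end=5.0, dt=0.001):
    """Forward-Euler run of a toy susceptible (S) / infected (I) prey and
    predator (P) model; the control u in [0, 1] scales down the infection rate."""
    r, K, beta, d_i, a, m = 0.5, 100.0, 0.02, 0.3, 0.005, 0.05
    S, I, P = 40.0, 10.0, 5.0
    for _ in range(int(t_end / dt)):
        dS = r * S * (1 - (S + I) / K) - (1 - u) * beta * S * I - a * S * P
        dI = (1 - u) * beta * S * I - d_i * I - a * I * P
        dP = a * (S + I) * P - m * P
        S, I, P = S + dt * dS, I + dt * dI, P + dt * dP
    return S, I, P

S0, I0, P0 = simulate(u=0.0)   # no prevention
S1, I1, P1 = simulate(u=0.8)   # strong prevention of infection
```

With strong prevention the infected prey decline and the susceptible prey grow, mirroring the paper's numerical conclusion that the control increases susceptible prey and decreases infected prey.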
Calculation of individual isotope equilibrium constants for implementation in geochemical models
Thorstenson, Donald C.; Parkhurst, David L.
2002-01-01
Theory is derived from the work of Urey to calculate equilibrium constants commonly used in geochemical equilibrium and reaction-transport models for reactions of individual isotopic species. Urey showed that the equilibrium constants of isotope exchange reactions for molecules that contain two or more atoms of the same element in equivalent positions are related to isotope fractionation factors by K = α^n, where n is the number of atoms exchanged. This relation is extended to include species containing multiple isotopes and to include the effects of nonideality. The equilibrium constants of the isotope exchange reactions provide a basis for calculating the individual isotope equilibrium constants for the geochemical modeling reactions. The temperature dependence of the individual isotope equilibrium constants can be calculated from the temperature dependence of the fractionation factors. Equilibrium constants are calculated for the full set of gaseous, aqueous, liquid, and solid species and ion pairs considered, where the subscripts g, aq, l, and s refer to gas, aqueous, liquid, and solid, respectively. These equilibrium constants are used in the geochemical model PHREEQC to produce an equilibrium and reaction-transport model that includes these isotopic species. Methods are presented for calculating the individual isotope equilibrium constants for the asymmetric bicarbonate ion. An example calculates the equilibrium of multiple isotopes among multiple species and phases.
NASA Astrophysics Data System (ADS)
Li, Dewei; Li, Jiwei; Xi, Yugeng; Gao, Furong
2017-12-01
In practical applications, systems are always influenced by parameter uncertainties and external disturbances. Both the H2 performance and the H∞ performance are important for real applications. For a constrained system, previous designs of mixed H2/H∞ robust model predictive control (RMPC) optimise one performance with the other performance requirement as a constraint, so the two performances cannot be optimised at the same time. In this paper, an improved design of mixed H2/H∞ RMPC for polytopic uncertain systems with external disturbances is proposed to optimise them simultaneously. In the proposed design, the original uncertain system is decomposed into two subsystems by the additive property of linear systems. Two different Lyapunov functions are used to separately formulate the two performance indices for the two subsystems. Then, the proposed RMPC is designed to optimise both performances by the weighting method while satisfying the H∞ performance requirement. Meanwhile, to make the design more practical, a simplified design is also developed. The recursive feasibility conditions of the proposed RMPC are discussed and closed-loop input-to-state practical stability is proven. The numerical examples reflect the enlarged feasible region and the improved performance of the proposed design.
De Gussem, K; Wambecq, T; Roels, J; Fenu, A; De Gueldre, G; Van De Steene, B
2011-01-01
An ASM2da model of the full-scale wastewater treatment plant of Bree (Belgium) has been made. It showed very good correlation with reference operational data. This basic model has been extended to include an accurate calculation of environmental footprint and operational costs (energy consumption, dosing of chemicals and sludge treatment). Two optimisation strategies were compared: lowest cost meeting the effluent consent versus lowest environmental footprint. Six optimisation scenarios have been studied, namely (i) implementation of an online control system based on ammonium and nitrate sensors, (ii) implementation of a control on MLSS concentration, (iii) evaluation of the internal recirculation flow, (iv) the oxygen set point, (v) installation of mixing in the aeration tank, and (vi) evaluation of the nitrate set point for post-denitrification. Both the cost-based and the environmental-impact or Life Cycle Assessment (LCA) based optimisation approaches are able to significantly lower the cost and environmental footprint. However, the LCA approach has some advantages over cost minimisation for an existing full-scale plant. The LCA approach tends to choose control settings that are more logical: it results in safer operation of the plant with fewer risks regarding the consents. It results in a better effluent at a slightly increased cost.
Turbulence Modeling Effects on the Prediction of Equilibrium States of Buoyant Shear Flows
NASA Technical Reports Server (NTRS)
Zhao, C. Y.; So, R. M. C.; Gatski, T. B.
2001-01-01
The effects of turbulence modeling on the prediction of equilibrium states of turbulent buoyant shear flows were investigated. The velocity field models used include a two-equation closure and a Reynolds-stress closure assuming two different pressure-strain models and three different dissipation rate tensor models. As for the thermal field closure models, two different pressure-scrambling models and nine different temperature variance dissipation rate (Epsilon(0)) equations were considered. The emphasis of this paper is on the effects of the Epsilon(0)-equation, the dissipation rate models, the pressure-strain models and the pressure-scrambling models on the prediction of the approach to equilibrium turbulence. Equilibrium turbulence is defined by the time rate of change of the scaled Reynolds stress anisotropy tensor and heat flux vector becoming zero. These conditions lead to the equilibrium state parameters. Calculations show that the Epsilon(0)-equation has a significant effect on the prediction of the approach to equilibrium turbulence. For a particular Epsilon(0)-equation, all velocity closure models considered give an equilibrium state if anisotropic dissipation is accounted for in one form or another in the dissipation rate tensor or in the Epsilon(0)-equation. It is further found that the models considered for the pressure-strain tensor and the pressure-scrambling vector have little or no effect on the prediction of the approach to equilibrium turbulence.
NASA Astrophysics Data System (ADS)
Biermann, D.; Gausemeier, J.; Heim, H.-P.; Hess, S.; Petersen, M.; Ries, A.; Wagner, T.
2014-05-01
In this contribution, a framework for the computer-aided planning and optimisation of functionally graded components is presented. The framework is divided into three modules: the "Component Description", the "Expert System" for the synthesis of several process chains, and the "Modelling and Process Chain Optimisation". The Component Description module enhances a standard computer-aided design (CAD) model with a voxel-based representation of the graded properties. The Expert System synthesises process steps stored in the knowledge base to generate several alternative process chains. Each process chain is capable of producing components according to the enhanced CAD model and usually consists of a sequence of heating, cooling and forming processes. The dependencies between the component and the applied manufacturing processes, as well as between the processes themselves, need to be considered. The Expert System utilises an ontology for that purpose. The ontology represents all dependencies in a structured way and connects the information of the knowledge base via relations. The third module performs the evaluation of the generated process chains. To accomplish this, the parameters of each process are optimised with respect to the component specification, whereby the result of the best parameterisation is used as the representative value. Finally, the process chain which is capable of manufacturing a functionally graded component in an optimal way with regard to the property distributions of the component description is presented by means of a dedicated specification technique.
Wiener-Hammerstein system identification - an evolutionary approach
NASA Astrophysics Data System (ADS)
Naitali, Abdessamad; Giri, Fouad
2016-01-01
The problem of identifying parametric Wiener-Hammerstein (WH) systems is addressed within the evolutionary optimisation context. Specifically, a hybrid culture identification method is developed that involves model structure adaptation using genetic recombination and model parameter learning using particle swarm optimisation. The method enjoys three interesting features: (1) the risk of premature convergence of model parameter estimates to local optima is significantly reduced, due to the constantly maintained diversity of model candidates; (2) no prior knowledge is needed except for upper bounds on the system structure indices; (3) the method is fully autonomous as no interaction is needed with the user during the optimum search process. The performances of the proposed method will be illustrated and compared to alternative methods using a well-established WH benchmark.
Unsteady Computational Tests of a Non-Equilibrium
NASA Astrophysics Data System (ADS)
Jirasek, Adam; Hamlington, Peter; Lofthouse, Andrew; Usafa Collaboration; Cu Boulder Collaboration
2017-11-01
A non-equilibrium turbulence model is assessed in simulations of three practically relevant unsteady test cases: oscillating channel flow, transonic flow around an oscillating airfoil, and transonic flow around the Benchmark Super-Critical Wing. The first case is related to piston-driven flows while the remaining cases are relevant to unsteady aerodynamics at high angles of attack and transonic speeds. Non-equilibrium turbulence effects arise in each of these cases in the form of a lag between the mean strain rate and Reynolds stresses, resulting in reduced kinetic energy production compared to classical equilibrium turbulence models that are based on the gradient transport (or Boussinesq) hypothesis. As a result of the improved representation of unsteady flow effects, the non-equilibrium model provides substantially better agreement with available experimental data than do classical equilibrium turbulence models. This suggests that the non-equilibrium model may be ideally suited for simulations of modern high-speed, high angle of attack aerodynamics problems.
Optimisation of process parameters on thin shell part using response surface methodology (RSM)
NASA Astrophysics Data System (ADS)
Faiz, J. M.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Rashidi, M. M.
2017-09-01
This study focuses on the optimisation of process parameters by simulation using Autodesk Moldflow Insight (AMI) software. The process parameters are taken as the input in order to analyse the warpage value, which is the output of this study. Some significant parameters have been used, namely melt temperature, mould temperature, packing pressure, and cooling time. A plastic part made of polypropylene (PP) has been selected as the study part. Optimisation of process parameters is applied in Design Expert software with the aim of minimising the obtained warpage value. Response Surface Methodology (RSM) has been applied in this study together with Analysis of Variance (ANOVA) in order to investigate the interactions between parameters that are significant to the warpage value. Thus, the optimised warpage value can be obtained from the model designed using RSM, owing to its minimal error value. This study shows that the warpage value is improved by using RSM.
Optimisation of warpage on thin shell part by using particle swarm optimisation (PSO)
NASA Astrophysics Data System (ADS)
Norshahira, R.; Shayfull, Z.; Nasir, S. M.; Saad, S. M. Sazli; Fathullah, M.
2017-09-01
As products nowadays move towards thinner designs, the production of plastic products faces many difficulties, owing to the higher possibility of defects occurring as the wall thickness gets thinner. Demand for techniques to reduce these defects is increasing as a result. These defects have been seen to occur due to several factors in the injection moulding process. In this study, Moldflow software was used to simulate the injection moulding process, while RSM was used to produce the mathematical model serving as the input fitness function for the Matlab software. The particle swarm optimisation (PSO) technique is used to optimise the processing conditions in order to reduce the amount of shrinkage and warpage of the plastic part. The results show warpage reductions of 17.60% in the x direction, 18.15% in the y direction and 10.25% in the z direction, respectively, demonstrating the reliability of this artificial intelligence method in minimising product warpage.
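A minimal global-best PSO of the kind used to search a fitted response surface can be sketched as follows; the quadratic "warpage" function below is a hypothetical stand-in for the RSM model, with an arbitrary optimum at (60, 220):

```python
import random

def pso_minimise(f, dim, bounds, n_particles=30, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=1):
    """Plain global-best PSO; f is the fitness function (e.g. an RSM warpage model)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Inertia plus cognitive (personal best) and social (global best) pulls
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

# Hypothetical quadratic stand-in for a warpage response surface
warpage = lambda p: 0.5 + 0.01 * (p[0] - 60) ** 2 + 0.02 * (p[1] - 220) ** 2
best = pso_minimise(warpage, dim=2, bounds=(0, 300))
```

In the actual workflow, `warpage` would be the polynomial fitted by RSM over melt temperature, mould temperature, packing pressure and cooling time.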
Optimisation of composite bone plates for ulnar transverse fractures.
Chakladar, N D; Harper, L T; Parsons, A J
2016-04-01
Metallic bone plates are commonly used for arm bone fractures where conservative treatment (casts) cannot provide adequate support and compression at the fracture site. These plates, made of stainless steel or titanium alloys, tend to shield stress transfer at the fracture site and delay the bone healing rate. This study investigates the feasibility of adopting advanced composite materials to overcome stress shielding effects by optimising the geometry and mechanical properties of the plate to match more closely to the bone. An ulnar transverse fracture is characterised and finite element techniques are employed to investigate the feasibility of a composite-plated fractured bone construct over a stainless steel equivalent. Numerical models of intact and fractured bones are analysed and the mechanical behaviour is found to agree with experimental data. The mechanical properties are tailored to produce an optimised composite plate, offering a 25% reduction in length and a 70% reduction in mass. The optimised design may help to reduce stress shielding and increase bone healing rates. Copyright © 2016 Elsevier Ltd. All rights reserved.
Evolving optimised decision rules for intrusion detection using particle swarm paradigm
NASA Astrophysics Data System (ADS)
Sivatha Sindhu, Siva S.; Geetha, S.; Kannan, A.
2012-12-01
The aim of this article is to construct a practical intrusion detection system (IDS) that properly analyses the statistics of network traffic patterns and classifies them as normal or anomalous. The objective of this article is to prove that the choice of effective network traffic features and a proficient machine-learning paradigm enhances the detection accuracy of an IDS. In this article, a rule-based approach with a family of six decision tree classifiers, namely Decision Stump, C4.5, Naive Bayes Tree, Random Forest, Random Tree and Representative Tree models, is introduced to perform the detection of anomalous network patterns. In particular, the proposed swarm optimisation-based approach selects the instances that compose the training set, and the optimised decision tree operates over this training set, producing classification rules with improved coverage, classification capability and generalisation ability. Experiments with the Knowledge Discovery and Data Mining (KDD) data set, which contains information on traffic patterns during normal and intrusive behaviour, show that the proposed algorithm produces optimised decision rules and outperforms other machine-learning algorithms.
Natural Erosion of Sandstone as Shape Optimisation.
Ostanin, Igor; Safonov, Alexander; Oseledets, Ivan
2017-12-11
Natural arches, pillars and other exotic sandstone formations have long attracted attention for their unusual shapes and remarkable mechanical balance, which leave a strong impression of intelligent design rather than of a stochastic process. It has recently been demonstrated that these shapes could be the result of negative feedback between stress and erosion, originating in the fundamental laws of friction between the rock's constituent particles. Here we present a deeper analysis of this idea and bridge it with the approaches used in shape and topology optimisation. It appears that the processes of natural erosion, driven by stochastic surface forces and the Mohr-Coulomb law of dry friction, can be viewed within the framework of local optimisation for minimum elastic strain energy. Our hypothesis is confirmed by numerical simulations of erosion using a topological-shape optimisation model. Our work contributes to a better understanding of stochastic erosion and of the feasible landscape formations that could be found on Earth and beyond.
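The negative feedback described in this abstract (material under low stress erodes, concentrating load on what remains until every surviving element is load-bearing) can be sketched with a toy one-dimensional pillar. This is an illustrative reduction, not the authors' simulation; the threshold, erosion rate and unit density are arbitrary assumptions:

```python
def erode_pillar(widths, threshold=2.0, rate=0.1, min_width=0.1, steps=500):
    """widths[i] is the cross-section of layer i (0 = top; unit density and
    gravity). Layers whose vertical stress (overburden / width) is below the
    friction threshold lose material; erosion stops once every remaining
    layer is load-bearing."""
    w = list(widths)
    for _ in range(steps):
        changed = False
        for i, wi in enumerate(w):
            stress = sum(w[:i + 1]) / wi        # weight above (incl. self) / area
            if stress < threshold and wi - rate >= min_width - 1e-9:
                w[i] = wi - rate                # low-stress material erodes away
                changed = True
        if not changed:
            break                               # mechanically balanced shape
    return w

profile = erode_pillar([1.0] * 5)
```

Starting from a uniform column, the profile converges to a pillar-like shape that tapers towards the top, with the base kept thick enough to carry the overburden.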
Optimisation of sensing time and transmission time in cognitive radio-based smart grid networks
NASA Astrophysics Data System (ADS)
Yang, Chao; Fu, Yuli; Yang, Junjie
2016-07-01
Cognitive radio (CR)-based smart grid (SG) networks have been widely recognised as an emerging communication paradigm in power grids. However, securing sufficient spectrum resources and ensuring reliability are two major challenges for real-time applications in CR-based SG networks. In this article, we study the traffic data collection problem. Based on a two-stage power pricing model, the power price is associated with the traffic data successfully received by the meter data management system (MDMS). In order to minimise the system power price, a wideband hybrid access strategy is proposed and analysed to share the spectrum between the SG nodes and CR networks. The sensing time and transmission time are jointly optimised, while both the interference to primary users and the spectrum-opportunity loss of secondary users are taken into account. Two algorithms are proposed to solve the joint optimisation problem. Simulation results show that the proposed joint optimisation algorithms outperform algorithms with fixed sensing and transmission times, and that the power cost is reduced efficiently.
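The sensing/transmission trade-off at the heart of such problems can be illustrated with a standard sensing-throughput objective: longer sensing lowers the false-alarm probability but leaves less of each frame for transmission, so an interior optimum exists. The energy-detector curve and every constant below are illustrative assumptions, not values from the paper:

```python
import math

def throughput(tau, frame=0.1, snr=20.0, p_idle=0.8, fs=1000.0):
    """Expected secondary-user throughput for sensing time tau (s) within a
    frame. All constants are generic placeholders for illustration."""
    if not 0 < tau < frame:
        return 0.0
    # false-alarm probability of an energy detector falls as sensing grows
    pf = 0.5 * math.erfc(math.sqrt(tau * fs) / 10.0)
    # remaining fraction of the frame is used for transmission
    return p_idle * (1 - pf) * (frame - tau) / frame * math.log2(1 + snr)

# grid search over the sensing time; the rest of the frame is transmission time
best_tau = max((k * 0.001 for k in range(1, 100)), key=throughput)
```

The optimum sensing time is neither the smallest nor the largest value on the grid, which is exactly why a joint optimisation of sensing and transmission times pays off.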
Suwannarangsee, Surisa; Bunterngsook, Benjarat; Arnthong, Jantima; Paemanee, Atchara; Thamchaipenet, Arinthip; Eurwilaichitr, Lily; Laosiripojana, Navadol; Champreda, Verawat
2012-09-01
A synergistic enzyme system for the hydrolysis of alkali-pretreated rice straw was optimised based on the synergy of crude fungal enzyme extracts with a commercial cellulase (Celluclast™). Among 13 enzyme extracts, the preparation from Aspergillus aculeatus BCC 199 exhibited the highest level of synergy with Celluclast™, based on the complementary cellulolytic and hemicellulolytic activities of the BCC 199 extract. A mixture design was then used to optimise a ternary enzyme complex combining this synergistic mixture with Bacillus subtilis expansin. Using the full cubic model, the optimal formulation was predicted as Celluclast™:BCC 199:expansin = 41.4:37.0:21.6, which produced 769 mg reducing sugar/g biomass using 2.82 FPU/g of enzymes. This work demonstrates a systematic approach to the design and optimisation of a synergistic mixture of fungal enzymes and expansin for lignocellulose degradation. Copyright © 2012 Elsevier Ltd. All rights reserved.
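A mixture design of this kind fits a Scheffé polynomial over the ternary simplex and then picks the blend that maximises the predicted response. The sketch below searches a special-cubic response surface on a simplex grid; the coefficients are invented for illustration and are not the fitted model from the study:

```python
# Hypothetical Scheffé special-cubic response surface: the cross terms encode
# synergy between components. Coefficients are made up for illustration.
def yield_model(x1, x2, x3):
    """Predicted reducing-sugar yield for blend (x1, x2, x3), x1+x2+x3 = 1."""
    return (600 * x1 + 500 * x2 + 200 * x3
            + 500 * x1 * x2 + 300 * x1 * x3 + 400 * x2 * x3
            + 2000 * x1 * x2 * x3)

# exhaustive search over a 5%-step simplex grid
best = max(((i / 20, j / 20, 1 - i / 20 - j / 20)
            for i in range(21) for j in range(21 - i)),
           key=lambda x: yield_model(*x))
```

Because of the positive interaction terms, the best blend beats any single component on its own, mirroring the synergy the study reports.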
Optimising resource management in neurorehabilitation.
Wood, Richard M; Griffiths, Jeff D; Williams, Janet E; Brouwers, Jakko
2014-01-01
To date, little research has been published on the effective and efficient management of resources (beds and staff) in neurorehabilitation, despite it being an expensive service in limited supply. We demonstrate how mathematical modelling can be used to optimise service delivery, by way of a case study at a major 21-bed neurorehabilitation unit in the UK. An automated computer program for assigning weekly treatment sessions is developed. Queueing theory is used to construct a mathematical model of the hospital in terms of referral submissions to a waiting list, admission and treatment, and ultimately discharge. This model is used to analyse the impact of hypothetical strategic decisions on a variety of performance measures and costs. The project culminates in a hybrid of the two approaches, since a relationship is found between the number of therapy hours received each week (scheduling output) and length of stay (queueing model input). The introduction of the treatment scheduling program has substantially improved timetable quality (meaning a better and fairer service to patients) and has reduced the employee time spent creating it by approximately six hours each week (freeing up time for clinical work). The queueing model has been used to assess the effect of potential strategies, such as increasing the number of beds or employing more therapists. The use of mathematical modelling has not only optimised resources in the short term, but has also allowed the optimality of longer-term strategic decisions to be assessed.
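Queueing analyses of bed capacity typically rest on standard results such as the M/M/c (Erlang-C) waiting probability, which links arrival rate, length of stay and bed count. The sketch below is a generic illustration of that formula, not the authors' hospital model, and the arrival and stay figures are invented:

```python
import math

def erlang_c(arrival_rate, service_rate, servers):
    """P(an arriving customer must wait) in an M/M/c queue (servers = beds)."""
    a = arrival_rate / service_rate            # offered load in Erlangs
    rho = a / servers
    if rho >= 1:
        return 1.0                             # unstable: everyone waits
    summ = sum(a ** k / math.factorial(k) for k in range(servers))
    top = a ** servers / math.factorial(servers) / (1 - rho)
    return top / (summ + top)

# e.g. one referral every two days, 30-day mean stay, 21 beds -> 15 Erlangs
p_wait = erlang_c(0.5, 1.0 / 30, 21)
```

Increasing the number of beds drives the waiting probability down, which is exactly the kind of strategic question the paper's queueing model is used to answer.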
Improving the Fit of a Land-Surface Model to Data Using its Adjoint
NASA Astrophysics Data System (ADS)
Raoult, Nina; Jupp, Tim; Cox, Peter; Luke, Catherine
2016-04-01
Land-surface models (LSMs) are crucial components of the Earth system models (ESMs) that are used to make coupled climate-carbon cycle projections for the 21st century. The Joint UK Land Environment Simulator (JULES) is the land-surface model used in the climate and weather forecast models of the UK Met Office. In this study, JULES is automatically differentiated using commercial software from FastOpt, resulting in an analytical gradient, or adjoint, of the model. Using this adjoint, the adJULES parameter estimation system has been developed to search for locally optimum parameter sets by calibrating against observations. We present an introduction to the adJULES system and demonstrate its ability to improve the model-data fit using eddy covariance measurements of gross primary production (GPP) and latent heat (LE) fluxes. adJULES can also calibrate over multiple sites simultaneously; this feature is used to define new optimised parameter values for the five plant functional types (PFTs) in JULES. The optimised PFT-specific parameters improve the performance of JULES at over 90% of the FLUXNET sites used in the study. These reductions in error are shown and compared to the reductions obtained from site-specific optimisations. Finally, we show that calculation of the second derivative of JULES allows us to produce posterior probability density functions of the parameters and to quantify how knowledge of the parameter values is constrained by observations.
The phenotypic equilibrium of cancer cells: From average-level stability to path-wise convergence.
Niu, Yuanling; Wang, Yue; Zhou, Da
2015-12-07
The phenotypic equilibrium, i.e. a heterogeneous population of cancer cells tending to a fixed equilibrium of phenotypic proportions, has recently received much attention in cancer biology. In the previous literature, theoretical models were used to predict experimental observations of the phenotypic equilibrium, which were often explained through different concepts of stability. Here we present a stochastic multi-phenotype branching model that integrates the conventional cellular hierarchy with phenotypic plasticity mechanisms of cancer cells. Based on our model, it is shown that: (i) our model can serve as a framework to unify the previous models for the phenotypic equilibrium, harmonising the different kinds of average-level stability proposed in those models; and (ii) the path-wise convergence of our model provides a deeper understanding of the phenotypic equilibrium from a stochastic point of view. That is, the emergence of the phenotypic equilibrium is rooted in the stochastic nature of (almost) every sample path; average-level stability simply follows by averaging over stochastic samples. Copyright © 2015 Elsevier Ltd. All rights reserved.
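The average-level part of the phenotypic equilibrium is easy to reproduce: deterministic proportion dynamics under a phenotype-switching matrix converge to the same fixed proportions from any initial composition. The transition rates below are invented for illustration, and the sketch captures only the mean-level behaviour, not the path-wise convergence the paper analyses:

```python
def step(props, T):
    """One generation of deterministic (average-level) phenotype dynamics:
    redistribute proportions by the switching matrix T and renormalise."""
    n = len(props)
    new = [sum(props[i] * T[i][j] for i in range(n)) for j in range(n)]
    total = sum(new)
    return [x / total for x in new]

T = [[0.80, 0.15, 0.05],   # hypothetical switching rates: stem
     [0.10, 0.80, 0.10],   # progenitor
     [0.05, 0.15, 0.80]]   # differentiated

a = [1.0, 0.0, 0.0]        # start from pure stem cells
b = [0.0, 0.0, 1.0]        # start from pure differentiated cells
for _ in range(200):
    a, b = step(a, T), step(b, T)
```

After 200 generations, both initial compositions have converged to the same equilibrium proportions, the deterministic shadow of the stochastic result the paper proves.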
Critical review of membrane bioreactor models--part 1: biokinetic and filtration models.
Naessens, W; Maere, T; Nopens, I
2012-10-01
Membrane bioreactor technology has existed for a couple of decades but has not yet taken over the market, owing to some serious drawbacks, of which the operational cost due to fouling is the major contributor. Knowledge build-up and optimisation for such complex systems can benefit significantly from mathematical modelling. In this paper, the vast literature on modelling MBR biokinetics and filtration is critically reviewed. Models were found to cover the whole range from empirical to detailed mechanistic descriptions and have mainly been used for knowledge development, and to a lesser extent for system optimisation and control. Moreover, studies are still predominantly performed at lab or pilot scale. Trends are discussed, knowledge gaps identified and interesting routes for further research suggested. Copyright © 2012 Elsevier Ltd. All rights reserved.
Non-equilibrium synergistic effects in atmospheric pressure plasmas.
Guo, Heng; Zhang, Xiao-Ning; Chen, Jian; Li, He-Ping; Ostrikov, Kostya Ken
2018-03-19
Non-equilibrium is one of the important features of atmospheric gas discharge plasmas. It involves complicated physical-chemical processes and plays a key role in many practical plasma processes. In this report, a novel complete non-equilibrium model is developed to reveal the non-equilibrium synergistic effects in atmospheric-pressure low-temperature plasmas (AP-LTPs). It combines a thermal-chemical non-equilibrium fluid model for the quasi-neutral plasma region with a simplified sheath model for the electrode sheath region. The free-burning argon arc is selected as a model system because both electrical-thermal-chemical equilibrium and non-equilibrium regions are present simultaneously in this arc plasma system. The modeling results indicate for the first time that it is the strong and synergistic interactions among the mass, momentum and energy transfer processes that determine the self-consistent non-equilibrium characteristics of AP-LTPs. An energy transfer process related to the non-uniform spatial distribution of the electron-to-heavy-particle temperature ratio has also been discovered for the first time; it has a significant influence on self-consistently predicting the transition region between the "hot" and "cold" equilibrium regions of an AP-LTP system. These results provide instructive guidance for predicting and possibly controlling the non-equilibrium particle and energy transport processes in a variety of AP-LTPs in the future.
NASA Astrophysics Data System (ADS)
Grundmann, J.; Schütze, N.; Heck, V.
2014-09-01
Groundwater systems in arid coastal regions are particularly at risk due to the limited potential for groundwater replenishment and the increasing water demand caused by a continuously growing population. To ensure sustainable management of such regions, we developed a new simulation-based integrated water management system. The management system unites process modelling with artificial intelligence tools and evolutionary optimisation techniques for managing both water quality and water quantity of a strongly coupled groundwater-agriculture system. Due to the large number of decision variables, a decomposition approach is applied to separate the original large optimisation problem into smaller, independent optimisation problems which allow for faster and more reliable solutions. It consists of an analytical inner optimisation loop to achieve the most profitable agricultural production for a given amount of water, and an outer simulation-based optimisation loop to find the optimal groundwater abstraction pattern. The behaviour of farms is described by crop-water production functions, and the aquifer response, including the seawater interface, is simulated by an artificial neural network. The methodology is applied exemplarily to the south Batinah region of Oman, which is affected by saltwater intrusion into a coastal aquifer system due to excessive groundwater withdrawal for irrigated agriculture. Because of contradicting objectives, such as profit-oriented agriculture versus aquifer sustainability, a multi-objective optimisation is performed, which can provide sustainable solutions for water and agricultural management over long-term periods at farm and regional scales with respect to water resources, the environment, and socio-economic development.
Robustness analysis of bogie suspension components Pareto optimised values
NASA Astrophysics Data System (ADS)
Mousavi Bideleh, Seyed Milad
2017-08-01
The bogie suspension system of high speed trains can significantly affect vehicle performance. Multiobjective optimisation problems are often formulated and solved to find the Pareto optimised values of the suspension components and improve cost efficiency in railway operations from different perspectives. Uncertainties in the design parameters of the suspension system can negatively influence the dynamic behaviour of railway vehicles. In this regard, the robustness of the bogie dynamic response with respect to uncertainties in the suspension design parameters is analysed. A one-car railway vehicle model with 50 degrees of freedom and wear/comfort Pareto optimised values of the bogie suspension components is chosen for the analysis. The longitudinal and lateral primary stiffnesses, the longitudinal and vertical secondary stiffnesses, and the yaw damping are considered as the five design parameters. The effects of parameter uncertainties on wear, ride comfort, track shift force, stability, and risk of derailment are studied by varying the design parameters around their respective Pareto optimised values according to a lognormal distribution with different coefficients of variation (COVs). The robustness analysis is carried out based on the maximum entropy concept, and the multiplicative dimensional reduction method is utilised to simplify the calculation of fractional moments and improve computational efficiency. The results show that the dynamic response of the vehicle with wear/comfort Pareto optimised bogie suspension values is robust against uncertainties in the design parameters, and the probability of failure is small for parameter uncertainties with a COV of up to 0.1.
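Perturbing each design parameter around its Pareto optimised value with a lognormal distribution of prescribed COV amounts to converting the (mean, COV) pair into the parameters of the underlying normal distribution. A minimal sketch, with an arbitrary stiffness value standing in for a real suspension parameter:

```python
import math
import random

def lognormal_about(mean, cov, n, seed=1):
    """Draw n lognormal samples with the given mean and coefficient of
    variation (COV), e.g. to perturb a Pareto optimised stiffness value."""
    sigma = math.sqrt(math.log(1 + cov ** 2))   # COV^2 = exp(sigma^2) - 1
    mu = math.log(mean) - sigma ** 2 / 2        # matches the requested mean
    rng = random.Random(seed)
    return [rng.lognormvariate(mu, sigma) for _ in range(n)]

samples = lognormal_about(1.0e6, 0.1, 20000)    # e.g. a primary stiffness in N/m
```

The samples are strictly positive (a stiffness cannot go negative), with the requested mean and spread, which is why the lognormal is a natural choice for this kind of robustness study.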
Distributed support vector machine in master-slave mode.
Chen, Qingguo; Cao, Feilong
2018-05-01
It is well known that the support vector machine (SVM) is an effective learning algorithm, and the alternating direction method of multipliers (ADMM) has emerged as a powerful technique for solving distributed optimisation models. This paper proposes a distributed SVM algorithm in a master-slave mode (MS-DSVM), which integrates a distributed SVM and ADMM in a master-slave configuration in which the master node and slave nodes are connected, so that results can be broadcast. The distributed SVM is cast as a regularised optimisation problem and modelled as a series of convex optimisation sub-problems that are solved by ADMM. Additionally, an over-relaxation technique is used to accelerate the convergence of the proposed MS-DSVM. Our theoretical analysis demonstrates that the proposed MS-DSVM converges linearly, the fastest convergence rate among existing standard distributed ADMM algorithms. Numerical examples demonstrate that the convergence and accuracy of the proposed MS-DSVM are superior to those of existing methods under the ADMM framework. Copyright © 2018 Elsevier Ltd. All rights reserved.
A new empirical potential energy function for Ar2
NASA Astrophysics Data System (ADS)
Myatt, Philip T.; Dham, Ashok K.; Chandrasekhar, Pragna; McCourt, Frederick R. W.; Le Roy, Robert J.
2018-06-01
A critical re-analysis of all available spectroscopic and virial coefficient data for Ar2 has been used to determine an improved empirical analytic potential energy function that has been 'tuned' to optimise its agreement with viscosity, diffusion and thermal diffusion data, and whose short-range behaviour is in reasonably good agreement with the most recent ab initio calculations for this system. The recommended Morse/long-range potential function is smooth and differentiable at all distances, and incorporates both the correct theoretically predicted long-range behaviour and the correct limiting short-range functional behaviour. The resulting value of the well depth is ? cm-1 and the associated equilibrium distance is re = 3.766 (±0.002) Å, while the 40Ar s-wave scattering length is -714 Å.
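As a simplified stand-in for the recommended Morse/long-range (MLR) function, a plain Morse potential already shows the qualitative shape: smooth and differentiable everywhere, a single minimum at re, and dissociation to zero at long range. The well depth and range parameter below are illustrative placeholders, not the paper's fitted values; only re = 3.766 Å is taken from the abstract:

```python
import math

def morse(r, De=99.5, re=3.766, beta=1.7):
    """Morse potential V(r) in cm^-1 with r in Angstroms. De and beta are
    illustrative placeholders; only re comes from the abstract above."""
    y = 1.0 - math.exp(-beta * (r - re))
    return De * (y * y - 1.0)   # V(re) = -De, V(r -> infinity) = 0
```

The true MLR form replaces the simple exponent with a long-range-constrained exponent so that the correct theoretical dispersion behaviour is built in; this sketch shares only the well-and-dissociation topology.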
Some comments on thermodynamic consistency for equilibrium mixture equations of state
Grove, John W.
2018-03-28
We investigate sufficient conditions for thermodynamic consistency for equilibrium mixtures. Such models assume that the mass fraction average of the material component equations of state, when closed by a suitable equilibrium condition, provide a composite equation of state for the mixture. Here, we show that the two common equilibrium models of component pressure/temperature equilibrium and volume/temperature equilibrium (Dalton, 1808) define thermodynamically consistent mixture equations of state and that other equilibrium conditions can be thermodynamically consistent provided appropriate values are used for the mixture specific entropy and pressure.
NASA Astrophysics Data System (ADS)
Rezrazi, Ahmed; Hanini, Salah; Laidi, Maamar
2016-02-01
The right design and high efficiency of solar energy systems require accurate information on the availability of solar radiation. Owing to the cost of purchasing and maintaining radiometers, these data are not readily available, so there is a need to develop alternative ways of generating them. Artificial neural networks (ANNs) are excellent and effective tools for learning, pinpointing and generalising data regularities, as they have the ability to model nonlinear functions and can cope with complex 'noisy' data. The main objective of this paper is to show how to reach an optimal ANN model for predicting solar radiation. Data measured during 2007 in the city of Ghardaïa (Algeria) are used to demonstrate the optimisation methodology. The performance evaluation and the comparison of the ANN models against measured data are made on the basis of the mean absolute percentage error (MAPE). The MAPE of the optimal ANN model reaches 1.17 %; this model also yields a root mean square error (RMSE) of 14.06 % and a mean bias error (MBE) of 0.12. The accuracy of the outputs exceeded 97 % and reached up to 99.29 %. The results indicate that the optimisation strategy satisfies practical requirements and can be generalised to any location in the world, as well as to fields other than solar radiation estimation.
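The error measures quoted above are standard and easy to state precisely. A minimal implementation of MAPE and RMSE, as commonly defined (the paper's exact normalisation may differ):

```python
import math

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100.0 / len(actual) * sum(abs((a - p) / a)
                                     for a, p in zip(actual, predicted))

def rmse(actual, predicted):
    """Root mean square error, in the units of the data."""
    return math.sqrt(sum((a - p) ** 2
                         for a, p in zip(actual, predicted)) / len(actual))
```

MAPE weights errors relative to each observation, so it penalises misses on low-radiation days more heavily than RMSE does, which is why papers usually report both.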
Land-surface parameter optimisation using data assimilation techniques: the adJULES system V1.0
NASA Astrophysics Data System (ADS)
Raoult, Nina M.; Jupp, Tim E.; Cox, Peter M.; Luke, Catherine M.
2016-08-01
Land-surface models (LSMs) are crucial components of the Earth system models (ESMs) that are used to make coupled climate-carbon cycle projections for the 21st century. The Joint UK Land Environment Simulator (JULES) is the land-surface model used in the climate and weather forecast models of the UK Met Office. JULES is also extensively used offline as a land-surface impacts tool, forced with climatologies into the future. In this study, JULES is automatically differentiated with respect to JULES parameters using commercial software from FastOpt, resulting in an analytical gradient, or adjoint, of the model. Using this adjoint, the adJULES parameter estimation system has been developed to search for locally optimum parameters by calibrating against observations. This paper describes adJULES in a data assimilation framework and demonstrates its ability to improve the model-data fit using eddy-covariance measurements of gross primary production (GPP) and latent heat (LE) fluxes. adJULES also has the ability to calibrate over multiple sites simultaneously. This feature is used to define new optimised parameter values for the five plant functional types (PFTs) in JULES. The optimised PFT-specific parameters improve the performance of JULES at over 85 % of the sites used in the study, at both the calibration and evaluation stages. The new improved parameters for JULES are presented along with the associated uncertainties for each parameter.
Gunasundari, Elumalai; Senthil Kumar, Ponnusamy
2017-04-01
This study discusses the biosorption of Cr(VI) ions from aqueous solution using ultrasonic-assisted Spirulina platensis (UASP). The prepared UASP biosorbent was characterised by Fourier transform infrared spectroscopy, X-ray diffraction, Brunauer-Emmett-Teller analysis, scanning electron microscopy, energy-dispersive X-ray spectroscopy and thermogravimetric analysis. The optimum conditions for the maximum removal of Cr(VI) ions by UASP at an initial concentration of 50 mg/l were measured as: adsorbent dose of 1 g/l, pH of 3.0, contact time of 30 min and temperature of 303 K. Adsorption isotherm, kinetic and thermodynamic parameters were calculated. The Freundlich model provided the best fit for the removal of Cr(VI) ions by UASP, and the adsorption kinetics showed that the pseudo-first-order model was well in line with the experimental data. In the thermodynamic study, parameters such as the Gibbs free energy, enthalpy and entropy changes were evaluated; these results show that the adsorption of Cr(VI) ions onto UASP was exothermic and spontaneous in nature. Desorption of the biosorbent was performed using different desorbing agents, of which NaOH gave the best result. The prepared material showed a high affinity for the removal of Cr(VI) ions and may be an alternative to existing commercial adsorbents.
Stochastic optimisation of water allocation on a global scale
NASA Astrophysics Data System (ADS)
Schmitz, Oliver; Straatsma, Menno; Karssenberg, Derek; Bierkens, Marc F. P.
2014-05-01
Climate change, increasing population and further economic development are expected to increase water scarcity in many regions of the world. Optimal water management strategies are required to minimise the gap between water supply and domestic, industrial and agricultural water demand. A crucial aspect of water allocation is the spatial scale of optimisation. Blue water supply peaks in the upstream parts of large catchments, whereas demands are often largest in the industrialised downstream parts. Two extremes exist in water allocation: (i) 'first come, first served', which allows upstream water demands to be fulfilled without consideration of downstream demands, and (ii) 'all for one, one for all', which balances water allocation over the whole catchment. In practice, water treaties govern intermediate solutions. The objective of this study is to determine the effect of these two end members on water allocation optimisation with respect to water scarcity. We conduct this study on a global scale with the year 2100 as the temporal horizon. Water supply is calculated using the hydrological model PCR-GLOBWB, operating at 5 arc-minute resolution and a daily time step. PCR-GLOBWB is forced with temperature and precipitation fields from the HadGEM2-ES global circulation model, which participated in the latest Coupled Model Intercomparison Project (CMIP5). Water demands are calculated for representative concentration pathway 6.0 (RCP 6.0) and shared socio-economic pathway scenario 2 (SSP2). To enable fast computation of the optimisation, we developed a hydrologically correct network of 1800 basin segments with an average size of 100 000 square kilometres; the maximum number of nodes in a network was 140, for the Amazon Basin. Water demands and supplies are aggregated to cubic kilometres per month per segment. A new open-source implementation is developed for the stochastic optimisation of the water allocation.
We apply a genetic algorithm to each segment to estimate the set of parameters that distribute the water supply to each node. We use the Python programming language and a flexible software architecture that allows us to straightforwardly (1) exchange the process description for the nodes so that different water allocation schemes can be tested, (2) exchange the objective function, (3) apply the optimisation either to the whole catchment or to different sub-levels, and (4) use multi-core CPUs concurrently, thereby reducing computation time. We demonstrate the application of the scientific workflow to the model outputs of PCR-GLOBWB and present first results on how water scarcity depends on the choice between the two extremes in water allocation.
Subsampling for dataset optimisation
NASA Astrophysics Data System (ADS)
Ließ, Mareike
2017-04-01
Soil-landscapes have formed through the interaction of soil-forming factors and pedogenic processes. Modelling these landscapes in their pedodiversity and underlying processes requires a representative, unbiased dataset. This concerns model input as well as output data. Very often, however, the available datasets are big, highly heterogeneous, and were gathered for various purposes rather than to model a particular process or data space. As a first step, the overall data space and/or landscape section to be modelled needs to be identified, including considerations of scale and resolution. The available dataset then needs to be optimised via subsampling to represent this n-dimensional data space well. A number of well-known sampling designs may be adapted to this purpose. The overall approach follows three main strategies: (1) the data space may be condensed and de-correlated by factor analysis to facilitate the subsampling process; (2) different methods of pattern recognition serve to structure the n-dimensional data space into units which then form the basis for optimising an existing dataset through a sensible selection of samples, identifying along the way any data units for which there is currently insufficient soil data; and (3) random samples from the n-dimensional data space may be replaced by similar samples from the available dataset. While being a prerequisite for developing data-driven statistical models, this approach may also help to develop universal process models and identify limitations in existing models.
Non-equilibrium dog-flea model
NASA Astrophysics Data System (ADS)
Ackerson, Bruce J.
2017-11-01
We develop the open dog-flea model to serve as a check on proposed non-equilibrium theories of statistical mechanics. The model is developed in detail and then applied to four recent models of non-equilibrium statistical mechanics. Comparing the dog-flea solution with these different models allows their claims to be checked and provides a concrete example of the theoretical models.
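The dog-flea (Ehrenfest urn) model is simple to simulate: at each step one of the n fleas, chosen uniformly at random, jumps to the other dog, and the occupancy relaxes from any initial condition towards the binomial equilibrium around n/2. The sketch below is the classic closed version of the model, not the open variant this paper develops:

```python
import random

def dog_flea(n_fleas=50, steps=20000, seed=0):
    """Closed Ehrenfest urn: each step one flea, picked at random, jumps to
    the other dog. Returns the occupancy history of dog A."""
    rng = random.Random(seed)
    on_a = n_fleas               # all fleas start on dog A (far from equilibrium)
    history = []
    for _ in range(steps):
        if rng.randrange(n_fleas) < on_a:
            on_a -= 1            # the chosen flea was on A and jumps to B
        else:
            on_a += 1
        history.append(on_a)
    return history

hist = dog_flea()
late_mean = sum(hist[5000:]) / len(hist[5000:])   # time average after relaxation
```

Starting with all 50 fleas on one dog, the long-run time average settles near 25, illustrating relaxation to equilibrium in a model simple enough to solve exactly, which is what makes it a useful benchmark for non-equilibrium theories.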
O'Boyle, Noel M; Palmer, David S; Nigsch, Florian; Mitchell, John BO
2008-01-01
Background: We present a novel feature selection algorithm, Winnowing Artificial Ant Colony (WAAC), that performs simultaneous feature selection and model parameter optimisation for the development of predictive quantitative structure-property relationship (QSPR) models. The WAAC algorithm is an extension of the modified ant colony algorithm of Shen et al. (J Chem Inf Model 2005, 45: 1024–1029). We test the ability of the algorithm to develop a predictive partial least squares (PLS) model for the Karthikeyan dataset (J Chem Inf Model 2005, 45: 581–590) of melting point values, and also to perform feature selection on a support vector machine (SVM) model for the same dataset. Results: Starting from an initial set of 203 descriptors, the WAAC algorithm selected a PLS model with 68 descriptors which has an RMSE on an external test set of 46.6°C and an R2 of 0.51. The number of components chosen for the model was 49, which was close to optimal for this feature selection. The selected SVM model has 28 descriptors (cost of 5, ε of 0.21), an RMSE of 45.1°C and an R2 of 0.54. This model outperforms a kNN model (RMSE of 48.3°C, R2 of 0.47) on the same data and has similar performance to a Random Forest model (RMSE of 44.5°C, R2 of 0.55). However, it is much less prone to bias at the extremes of the range of melting points, as shown by the slope of the line through the residuals: -0.43 for WAAC/SVM versus -0.53 for Random Forest. Conclusion: With a careful choice of objective function, the WAAC algorithm can be used to optimise machine learning and regression models that suffer from overfitting. Where model parameters also need to be tuned, as is the case with support vector machine and partial least squares models, it can optimise these simultaneously. The moving probabilities used by the algorithm are easily interpreted in terms of the best and current models of the ants, and the winnowing procedure promotes the removal of irrelevant descriptors. PMID:18959785
An improved PSO-SVM model for online recognition of defects in eddy current testing
NASA Astrophysics Data System (ADS)
Liu, Baoling; Hou, Dibo; Huang, Pingjie; Liu, Banteng; Tang, Huayi; Zhang, Wubo; Chen, Peihua; Zhang, Guangxin
2013-12-01
Accurate and rapid recognition of defects is essential for the structural integrity and health monitoring of in-service devices using eddy current (EC) non-destructive testing. This paper introduces a novel model-free method comprising three main modules: a signal pre-processing module, a classifier module and an optimisation module. In the signal pre-processing module, a two-stage differential structure is proposed to suppress the lift-off fluctuation that can contaminate the EC signal. In the classifier module, a multi-class support vector machine (SVM) based on the one-against-one strategy is utilised for its good accuracy. In the optimisation module, the optimal parameters of the classifier are obtained by an improved particle swarm optimisation (IPSO) algorithm. The proposed IPSO technique improves the convergence of standard PSO through the following strategies: nonlinear processing of the inertia weight, and the introduction of a black-hole model and a simulated annealing model with extremum disturbance. The good generalisation ability of the IPSO-SVM model has been validated by adding additional specimens to the testing set. Experiments show that the proposed algorithm achieves higher recognition accuracy and efficiency than other well-known classifiers, and its superiority is more pronounced with smaller training sets, which is an advantage for online application.
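One ingredient of the IPSO described above, the nonlinear processing of the inertia weight, can be shown in a bare-bones PSO. The black-hole and simulated-annealing extensions and the SVM coupling are omitted, and all coefficients are generic textbook choices rather than the paper's:

```python
import random

def pso(f, dim=2, n=20, iters=300, lo=-5.0, hi=5.0, seed=3):
    """Minimise f with basic PSO; the inertia weight w decays nonlinearly
    (quadratically) from 0.9 towards 0.4 over the run."""
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pb = [x[:] for x in xs]                      # personal bests
    gb = min(pb, key=f)[:]                       # global best
    for t in range(iters):
        w = 0.9 - 0.5 * (t / iters) ** 2         # nonlinear inertia decay
        for i in range(n):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + 1.5 * rng.random() * (pb[i][d] - xs[i][d])
                            + 1.5 * rng.random() * (gb[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            if f(xs[i]) < f(pb[i]):
                pb[i] = xs[i][:]
                if f(pb[i]) < f(gb):
                    gb = pb[i][:]
    return gb

best = pso(lambda x: sum(v * v for v in x))      # minimise the sphere function
```

Early on, the large inertia favours exploration; as w decays, the swarm contracts onto the best region, the same exploration-to-exploitation schedule that, in the paper, tunes the SVM hyperparameters.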
Mathematical analysis of tuberculosis transmission model with delay
NASA Astrophysics Data System (ADS)
Lapaan, R. D.; Collera, J. A.; Addawe, J. M.
2016-11-01
In this paper, a delayed tuberculosis infection model is formulated and investigated. We show the existence of disease-free and endemic equilibrium points. Using the LaSalle-Lyapunov invariance principle, we show that if the reproduction number R0 < 1, the disease-free equilibrium of the model is globally asymptotically stable. Numerical simulations are then performed to illustrate the disease-free equilibrium and the endemic equilibrium point for given values of R0. Thus, when R0 < 1, the disease dies out in the population.
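The threshold behaviour at R0 = 1 can be illustrated with a minimal discrete-time SIR system (without the delay term of the paper's TB model): the infection dies out when beta/gamma < 1 and causes a substantial outbreak when beta/gamma > 1. All rates below are illustrative:

```python
def sir(beta, gamma, days=400):
    """Discrete-time SIR sketch with R0 = beta/gamma. Returns the final
    infectious and recovered fractions of the population."""
    s, i, r = 0.99, 0.01, 0.0
    for _ in range(days):
        new_inf = beta * s * i   # new infections this day
        new_rec = gamma * i      # recoveries this day
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return i, r

i_sub, r_sub = sir(beta=0.1, gamma=0.2)   # R0 = 0.5 < 1: dies out
i_sup, r_sup = sir(beta=0.4, gamma=0.2)   # R0 = 2.0 > 1: epidemic
```

Below threshold, almost nobody is ever infected; above threshold, a large fraction of the population passes through the infection before it burns out, mirroring the stability dichotomy the paper proves.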
Optimisation of strain selection in evolutionary continuous culture
NASA Astrophysics Data System (ADS)
Bayen, T.; Mairet, F.
2017-12-01
In this work, we study a minimal time control problem for a perfectly mixed continuous culture with n ≥ 2 species and one limiting resource. The model that we consider includes a mutation factor for the microorganisms. Our aim is to provide optimal feedback control laws to optimise the selection of the species of interest. Thanks to Pontryagin's maximum principle, we derive optimality conditions on optimal controls and introduce a sub-optimal control law based on a most rapid approach to a singular arc that depends on the initial condition. Using adaptive dynamics theory, we also study a simplified version of this model which allows us to introduce a near-optimal strategy.
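The selection mechanism underlying such control problems can be sketched with a constant-dilution chemostat (no mutation and no control, unlike the paper's model; parameters are illustrative assumptions): the species with the lower break-even substrate concentration lambda_i = D*K_i/(mu_max_i - D) outcompetes the other.

```python
def chemostat(D=0.3, s_in=2.0, T=200.0, dt=0.01):
    """Two Monod species competing for one substrate at constant
    dilution rate D -- a simplified illustration of selection in
    continuous culture (unit yields assumed).
    Break-even concentration: lambda_i = D*K_i/(mu_max_i - D)."""
    mu_max = (1.0, 0.8)
    K = (1.0, 0.2)        # species 2 has the lower break-even concentration
    s, x = 2.0, [0.1, 0.1]
    for _ in range(int(T / dt)):
        mu = [mu_max[i] * s / (K[i] + s) for i in range(2)]
        ds = D * (s_in - s) - mu[0] * x[0] - mu[1] * x[1]
        dx = [(mu[i] - D) * x[i] for i in range(2)]
        s += dt * ds
        x = [x[i] + dt * dx[i] for i in range(2)]
    return s, x

s_end, x_end = chemostat()   # substrate settles near lambda_2 = 0.12
```

Controlling D over time, as in the paper, amounts to steering which species' break-even concentration is favoured at each instant.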
Homayoonfal, Mina; Khodaiyan, Faramarz; Mousavi, Mohammad
2015-05-01
The major purpose of this study is to apply response surface methodology to model and optimise processing conditions for the preparation of beverage emulsions with maximum emulsion stability and viscosity, minimum particle size, turbidity loss rate, size index and peroxide value changes. A three-factor, five-level central composite design was conducted to estimate the effects of three independent variables: ultrasonic time (UT, 5-15 min), walnut-oil content (WO, 4-10% (w/w)) and Span 80 content (S80, 0.55-0.8% (w/w)). The results demonstrated that the empirical models were satisfactorily fitted to the experimental data (p < 0.0001). Evaluation of the responses by analysis of variance indicated high coefficients of determination. The overall optimum preparation conditions were a UT of 14.630 min, a WO content of 8.238% (w/w) and an S80 content of 0.782% (w/w). In this optimum region, the responses were found to be 219.198, 99.184, 0.008, 0.008, 2.43 and 16.65 for particle size, emulsion stability, turbidity loss rate, size index, viscosity and peroxide value changes, respectively. Copyright © 2014 Elsevier Ltd. All rights reserved.
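The central composite design and quadratic response-surface fit used above can be sketched in miniature. This toy uses two coded factors rather than the study's three, and a synthetic noise-free response with made-up coefficients; it only demonstrates that a CCD supports estimating a full quadratic model by least squares.

```python
import numpy as np

# Central composite design for two coded factors;
# alpha = sqrt(2) gives a rotatable design.
a = np.sqrt(2.0)
design = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],   # factorial points
                   [-a, 0], [a, 0], [0, -a], [0, a],     # axial points
                   [0, 0]])                              # centre point

def model_matrix(X):
    """Full quadratic response-surface model: 1, x1, x2, x1*x2, x1^2, x2^2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

# Synthetic, noise-free response with known (invented) coefficients
true_b = np.array([5.0, 1.0, -2.0, 0.5, -1.5, -1.0])
y = model_matrix(design) @ true_b

# Least-squares fit recovers the coefficients exactly in the noise-free case
b_hat, *_ = np.linalg.lstsq(model_matrix(design), y, rcond=None)
```

With real data, the fitted surface is then optimised over the design region, as the study does for its six responses.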
Stability analysis of the Euler discretization for SIR epidemic model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suryanto, Agus
2014-06-19
In this paper we consider a discrete SIR epidemic model obtained by the Euler method. For that discrete model, the existence of the disease-free equilibrium and the endemic equilibrium is established. Sufficient conditions for the local asymptotic stability of both the disease-free and endemic equilibria are also derived. It is found that local asymptotic stability of the existing equilibrium is achieved only for a small time step size h. If h is increased beyond a critical value, then both equilibria lose their stability. Our numerical simulations show that complex dynamical behaviour, such as bifurcation or chaos, appears for relatively large h. Both analytical and numerical results show that the discrete SIR model has richer dynamical behaviour than its continuous counterpart.
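The loss of stability with increasing step size can be demonstrated directly. The sketch below uses a forward-Euler SIR model with vital dynamics and illustrative parameters (not necessarily those of the paper): a perturbation from the endemic equilibrium decays for small h but grows once h exceeds the critical value.

```python
def euler_sir(h, steps, s0, i0):
    """Forward-Euler SIR with births/deaths (mu), chosen so that an
    endemic equilibrium exists (R0 = beta/(gamma+mu) > 1).
    Returns the final and maximum distances from that equilibrium.
    Illustrative parameters only -- not those of the cited paper."""
    mu, beta, gamma = 0.02, 3.0, 1.0
    s_star = (gamma + mu) / beta                       # endemic equilibrium
    i_star = mu * (beta / (gamma + mu) - 1.0) / beta
    s, i = s0, i0
    d = max_dist = 0.0
    for _ in range(steps):
        ds = mu - beta * s * i - mu * s
        di = beta * s * i - (gamma + mu) * i
        s, i = s + h * ds, i + h * di
        d = ((s - s_star) ** 2 + (i - i_star) ** 2) ** 0.5
        max_dist = max(max_dist, d)
    return d, max_dist

s_eq = 1.02 / 3.0
i_eq = 0.02 * (3.0 / 1.02 - 1.0) / 3.0
# small h: the perturbation decays back to the endemic equilibrium
end_small, _ = euler_sir(h=0.25, steps=2000, s0=s_eq + 0.01, i0=i_eq)
# large h: the same perturbation grows -- the equilibrium loses stability
_, max_large = euler_sir(h=2.0, steps=150, s0=s_eq + 0.01, i0=i_eq)
```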
Acoustic Resonator Optimisation for Airborne Particle Manipulation
NASA Astrophysics Data System (ADS)
Devendran, Citsabehsan; Billson, Duncan R.; Hutchins, David A.; Alan, Tuncay; Neild, Adrian
Advances in micro-electromechanical systems (MEMS) technology and biomedical research necessitate micro-machined manipulators to capture, handle and position delicate micron-sized particles. To this end, a parallel-plate acoustic resonator system has been investigated for the manipulation and entrapment of micron-sized particles in air. Numerical and finite element modelling was performed to optimise the design of the layered acoustic resonator. An optimised resonator design requires careful consideration of the effects of layer thickness and material properties. Furthermore, the frequency-dependent acoustic attenuation is also considered within this study, leading to an optimum operational frequency range. Finally, experimental results demonstrated good levitation and capture of particles of various properties and sizes, down to 14.8 μm.
Oliveri, Paolo
2017-08-22
Qualitative data modelling is a fundamental branch of pattern recognition, with many applications in analytical chemistry, and embraces two main families: discriminant and class-modelling methods. The first strategy is appropriate when at least two classes are meaningfully defined in the problem under study, while the second is the right choice when the focus is on a single class. For this reason, class-modelling methods are also referred to as one-class classifiers. Although most issues in the food analytical field would be properly addressed by class-modelling strategies, the use of such techniques is rather limited and, in many cases, discriminant methods are forced onto one-class problems, introducing a bias in the outcomes. Key aspects related to the development, optimisation and validation of suitable class models for the characterisation of food products are critically analysed and discussed. Copyright © 2017 Elsevier B.V. All rights reserved.
Sahoo, B K; Sudeep Kumara, K; Karunakara, N; Gaware, J J; Sapra, B K; Mayya, Y S
2017-06-01
Regulating the environmental discharge of 220Rn (historically known as thoron) and its decay products from thorium processing facilities is important for the protection of the environment and of the general public living in the vicinity. Activated charcoal provides an effective solution to this problem because of its high adsorption capacity for gaseous elements such as radon. In order to design and develop a charcoal-based thoron mitigation system, a mathematical model has been developed in the present work to study 220Rn transport and adsorption in a flow-through charcoal bed and to estimate the 220Rn mitigation factor (MF) as a function of system and operating parameters. The model accounts for inter- and intra-grain diffusion, advection, radioactive decay and adsorption processes. The effects of large void fluctuations and wall channelling on the mitigation factor are also included through a statistical model. A closed-form solution is provided for the MF in terms of the adsorption coefficient, system dimensions, grain size, flow rate and void fluctuation exponent. It is shown that delay effects due to intra-grain diffusion play a significant role, rendering external equilibrium assumptions unsuitable. The statistical model also clearly demonstrates the transition from an exponential MF to a power-law form and shows how the occurrence of channels with low probability can lower the mitigation factor by several orders of magnitude. As an aid to design, the model is further extended to optimise the bed dimensions with respect to pressure drop and MF. The application of the results to the design and development of a practically useful charcoal bed is discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.
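The simplest (textbook) estimate behind such a delay bed can be sketched as follows: the mean residence time is t = k_ads * m / Q and the mitigation factor is MF = exp(lambda * t). This ignores the intra-grain diffusion, void-fluctuation and channelling effects treated in the paper's full model, all of which reduce MF, and the parameter values below are illustrative assumptions.

```python
import math

def mitigation_factor(k_ads, mass_kg, flow_m3_s, half_life_s=55.6):
    """Plug-flow estimate of a charcoal delay bed's mitigation factor.
    k_ads   : adsorption coefficient [m^3/kg] (illustrative value)
    mass_kg : charcoal mass [kg]
    flow_m3_s : volumetric flow rate [m^3/s]
    half_life_s : 220Rn half-life, 55.6 s."""
    lam = math.log(2.0) / half_life_s        # 220Rn decay constant [1/s]
    t_delay = k_ads * mass_kg / flow_m3_s    # mean residence time [s]
    return math.exp(lam * t_delay)

# e.g. 1 kg of charcoal, k_ads = 4 m^3/kg, 10 L/s flow
mf = mitigation_factor(k_ads=4.0, mass_kg=1.0, flow_m3_s=0.01)
```

The paper's point is precisely that this idealised exponential law breaks down: intra-grain delay and low-probability channels push the real MF towards a power-law form.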
Description of the General Equilibrium Model of Ecosystem Services (GEMES)
Travis Warziniack; David Finnoff; Jenny Apriesnig
2017-01-01
This paper serves as documentation for the General Equilibrium Model of Ecosystem Services (GEMES). GEMES is a regional computable general equilibrium model that is composed of values derived from natural capital and ecosystem services. It models households, producing sectors, and governments, linked to one another through commodity and factor markets. GEMES was...
Holroyd, Kenneth A; Cottrell, Constance K; O'Donnell, Francis J; Cordingley, Gary E; Drew, Jana B; Carlson, Bruce W; Himawan, Lina
2010-09-29
To determine if the addition of preventive drug treatment (β blocker), brief behavioural migraine management, or their combination improves the outcome of optimised acute treatment in the management of frequent migraine. Randomised placebo controlled trial over 16 months from July 2001 to November 2005. Two outpatient sites in Ohio, USA. 232 adults (mean age 38 years; 79% female) with diagnosis of migraine with or without aura according to International Headache Society classification of headache disorders criteria, who recorded at least three migraines with disability per 30 days (mean 5.5 migraines/30 days) during an optimised run-in of acute treatment. Addition of one of four preventive treatments to optimised acute treatment: β blocker (n=53), matched placebo (n=55), behavioural migraine management plus placebo (n=55), or behavioural migraine management plus β blocker (n=69). The primary outcome was change in migraines/30 days; secondary outcomes included change in migraine days/30 days and change in migraine specific quality of life scores. Mixed model analysis showed statistically significant (P≤0.05) differences in outcomes among the four added treatments for both the primary outcome (migraines/30 days) and the two secondary outcomes (change in migraine days/30 days and change in migraine specific quality of life scores). The addition of combined β blocker and behavioural migraine management (-3.3 migraines/30 days, 95% confidence interval -3.5 to -3.2), but not the addition of β blocker alone (-2.1 migraines/30 days, -2.2 to -1.9) or behavioural migraine management alone (-2.2 migraines/30 days, -2.4 to -2.0), improved outcomes compared with optimised acute treatment alone (-2.1 migraines/30 days, -2.2 to -1.9).
For a clinically significant (≥50%) reduction in migraines/30 days, the number needed to treat for optimised acute treatment plus combined β blocker and behavioural migraine management was 3.1 compared with optimised acute treatment alone, 2.6 compared with optimised acute treatment plus β blocker, and 3.1 compared with optimised acute treatment plus behavioural migraine management. Results were consistent for the two secondary outcomes, and at both month 10 (the primary endpoint) and month 16. The addition of combined β blocker plus behavioural migraine management, but not the addition of β blocker alone or behavioural migraine management alone, improved outcomes of optimised acute treatment. Combined β blocker treatment and behavioural migraine management may improve outcomes in the treatment of frequent migraine. Clinical trials NCT00910689.
NASA Astrophysics Data System (ADS)
Li, Guiqiang; Zhao, Xudong; Jin, Yi; Chen, Xiao; Ji, Jie; Shittu, Samson
2018-06-01
Geometrical optimisation is a valuable way to improve the efficiency of a thermoelectric element (TE). In a hybrid photovoltaic-thermoelectric (PV-TE) system, the photovoltaic (PV) and thermoelectric (TE) components have a relatively complex relationship; their individual effects mean that geometrical optimisation of the TE element alone may not be sufficient to optimise the entire PV-TE hybrid system. In this paper, we introduce a parametric optimisation of the geometry of the thermoelectric element footprint for a PV-TE system. A uni-couple TE model was built for the PV-TE using the finite element method and temperature-dependent thermoelectric material properties. Two types of PV cells were investigated and the performance of the PV-TE with different lengths of TE elements and different footprint areas was analysed. The outcome showed that, regardless of the TE element's length and footprint area, the maximum power output occurs when the ratio of the n-leg to p-leg footprint areas An/Ap = 1. This finding is useful, as it provides a reference whenever PV-TE optimisation is investigated.
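The An/Ap = 1 optimum can be illustrated with a simple lumped-resistance argument rather than the paper's finite element model. Assuming equal leg resistivities and matched-load operation (all material values below are illustrative, not from the paper), the couple's internal resistance R = rho*L/An + rho*L/Ap is minimised, and the power maximised, at an equal area split.

```python
import numpy as np

def te_power_vs_split(total_area=1.0, length=1.0, rho_n=1e-5, rho_p=1e-5,
                      seebeck=4e-4, dT=100.0):
    """Matched-load power of a single TE couple as the fixed total
    footprint is split between the n- and p-legs.  Equal leg
    resistivities are assumed (illustrative values);
    P = (S*dT)^2 / (4*R) with R = rho*L/A_n + rho*L/A_p."""
    x = np.linspace(0.05, 0.95, 181)          # A_n as a fraction of total
    A_n, A_p = x * total_area, (1.0 - x) * total_area
    R = rho_n * length / A_n + rho_p * length / A_p
    P = (seebeck * dT) ** 2 / (4.0 * R)
    best = x[np.argmax(P)]
    return best / (1.0 - best)                # optimal A_n / A_p ratio

ratio = te_power_vs_split()
```

With unequal resistivities, the same scan shifts the optimum away from 1, which is why the paper's equal-ratio result reflects its near-symmetric leg properties.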
3D printed fluidics with embedded analytic functionality for automated reaction optimisation
Capel, Andrew J; Wright, Andrew; Harding, Matthew J; Weaver, George W; Li, Yuqi; Harris, Russell A; Edmondson, Steve; Goodridge, Ruth D
2017-01-01
Additive manufacturing or ‘3D printing’ is being developed as a novel manufacturing process for the production of bespoke micro- and milliscale fluidic devices. When coupled with online monitoring and optimisation software, this offers an advanced, customised method for performing automated chemical synthesis. This paper reports the use of two additive manufacturing processes, stereolithography and selective laser melting, to create multifunctional fluidic devices with embedded reaction monitoring capability. The selectively laser melted parts are the first published examples of multifunctional 3D printed metal fluidic devices. These devices allow high temperature and pressure chemistry to be performed in solvent systems destructive to the majority of devices manufactured via stereolithography, polymer jetting and fused deposition modelling processes previously utilised for this application. These devices were integrated with commercially available flow chemistry, chromatographic and spectroscopic analysis equipment, allowing automated online and inline optimisation of the reaction medium. This set-up allowed the optimisation of two reactions, a ketone functional group interconversion and a fused polycyclic heterocycle formation, via spectroscopic and chromatographic analysis. PMID:28228852
Netz, Roland R
2018-05-14
An exactly solvable, Hamiltonian-based model of many massive particles that are coupled by harmonic potentials and driven by stochastic non-equilibrium forces is introduced. The stationary distribution and the fluctuation-dissipation relation are derived in closed form for the general non-equilibrium case. Deviations from equilibrium are on one hand characterized by the difference of the obtained stationary distribution from the Boltzmann distribution; this is possible because the model derives from a particle Hamiltonian. On the other hand, the difference between the obtained non-equilibrium fluctuation-dissipation relation and the standard equilibrium fluctuation-dissipation theorem allows us to quantify non-equilibrium in an alternative fashion. Both indicators of non-equilibrium behavior, i.e., deviations from the Boltzmann distribution and deviations from the equilibrium fluctuation-dissipation theorem, can be expressed in terms of a single non-equilibrium parameter α that involves the ratio of friction coefficients and random force strengths. The concept of a non-equilibrium effective temperature, which can be defined by the relation between fluctuations and the dissipation, is by comparison with the exactly derived stationary distribution shown not to hold, even if the effective temperature is made frequency dependent. The analysis is not confined to close-to-equilibrium situations but rather is exact and thus holds for arbitrarily large deviations from equilibrium. Also, the suggested harmonic model can be obtained from non-linear mechanical network systems by an expansion in terms of suitably chosen deviatory coordinates; the obtained results should thus be quite general. This is demonstrated by comparison of the derived non-equilibrium fluctuation dissipation relation with experimental data on actin networks that are driven out of equilibrium by energy-consuming protein motors. 
The comparison is excellent and allows us to extract the non-equilibrium parameter α from experimental spectral response and fluctuation data.
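As a baseline for the non-equilibrium results above, the equilibrium fluctuation-dissipation relation can be checked numerically for a single overdamped particle in a harmonic trap (a generic textbook check, not the paper's driven many-particle network, where such relations fail). In equilibrium the noise strength 2*kBT/gamma fixes the stationary variance at kBT/k.

```python
import numpy as np

def ou_equipartition(kT=1.0, k_spring=1.0, gamma=1.0,
                     dt=0.01, steps=400_000, seed=0):
    """Euler-Maruyama simulation of an overdamped particle in a
    harmonic trap with equilibrium thermal noise.  The stationary
    variance should equal kT/k_spring (equipartition), a consequence
    of the equilibrium fluctuation-dissipation theorem."""
    rng = np.random.default_rng(seed)
    noise = np.sqrt(2.0 * kT / gamma * dt) * rng.standard_normal(steps)
    samples = np.empty(steps)
    x = 0.0
    for n in range(steps):
        x += -(k_spring / gamma) * x * dt + noise[n]
        samples[n] = x
    return samples[10_000:].var()             # discard the transient

var = ou_equipartition()   # expect approximately kT/k_spring = 1
```

In the paper's driven model, an analogous measurement would yield a variance inconsistent with any single effective temperature, quantified by the non-equilibrium parameter α.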
Multi-objective optimisation of aircraft flight trajectories in the ATM and avionics context
NASA Astrophysics Data System (ADS)
Gardi, Alessandro; Sabatini, Roberto; Ramasamy, Subramanian
2016-05-01
The continuous increase of air transport demand worldwide and the push for a more economically viable and environmentally sustainable aviation are driving significant evolutions of aircraft, airspace and airport systems design and operations. Although extensive research has been performed on the optimisation of aircraft trajectories and very efficient algorithms were widely adopted for the optimisation of vertical flight profiles, it is only in the last few years that higher levels of automation were proposed for integrated flight planning and re-routing functionalities of innovative Communication Navigation and Surveillance/Air Traffic Management (CNS/ATM) and Avionics (CNS+A) systems. In this context, the implementation of additional environmental targets and of multiple operational constraints introduces the need to efficiently deal with multiple objectives as part of the trajectory optimisation algorithm. This article provides a comprehensive review of Multi-Objective Trajectory Optimisation (MOTO) techniques for transport aircraft flight operations, with a special focus on the recent advances introduced in the CNS+A research context. In the first section, a brief introduction is given, together with an overview of the main international research initiatives where this topic has been studied, and the problem statement is provided. The second section introduces the mathematical formulation and the third section reviews the numerical solution techniques, including discretisation and optimisation methods for the specific problem formulated. The fourth section summarises the strategies to articulate the preferences and to select optimal trajectories when multiple conflicting objectives are introduced. The fifth section introduces a number of models defining the optimality criteria and constraints typically adopted in MOTO studies, including fuel consumption, air pollutant and noise emissions, operational costs, condensation trails, airspace and airport operations. 
A brief overview of atmospheric and weather modelling is also included. Key equations describing the optimality criteria are presented, with a focus on the latest advancements in the respective application areas. In the sixth section, a number of MOTO implementations in the CNS+A systems context are mentioned with relevant simulation case studies addressing different operational tasks. The final section draws some conclusions and outlines guidelines for future research on MOTO and associated CNS+A system implementations.
Equilibrium and kinetic models for colloid release under transient solution chemistry conditions
USDA-ARS?s Scientific Manuscript database
We present continuum models to describe colloid release in the subsurface during transient physicochemical conditions. Our modeling approach relates the amount of colloid release to changes in the fraction of the solid surface area that contributes to retention. Equilibrium, kinetic, equilibrium and...
NON-EQUILIBRIUM HELIUM IONIZATION IN AN MHD SIMULATION OF THE SOLAR ATMOSPHERE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Golding, Thomas Peter; Carlsson, Mats; Leenaarts, Jorrit, E-mail: thomas.golding@astro.uio.no, E-mail: mats.carlsson@astro.uio.no, E-mail: jorrit.leenaarts@astro.su.se
The ionization state of the gas in the dynamic solar chromosphere can depart strongly from the instantaneous statistical equilibrium commonly assumed in numerical modeling. We improve on earlier simulations of the solar atmosphere that only included non-equilibrium hydrogen ionization by performing a 2D radiation-magnetohydrodynamics simulation featuring non-equilibrium ionization of both hydrogen and helium. The simulation includes the effect of hydrogen Lyα and the EUV radiation from the corona on the ionization and heating of the atmosphere. Details on code implementation are given. We obtain helium ion fractions that are far from their equilibrium values. Comparison with models with local thermodynamic equilibrium (LTE) ionization shows that non-equilibrium helium ionization leads to higher temperatures in wavefronts and lower temperatures in the gas between shocks. Assuming LTE ionization results in a thermostat-like behavior with matter accumulating around the temperatures where the LTE ionization fractions change rapidly. Comparison of DEM curves computed from our models shows that non-equilibrium ionization leads to more radiating material in the temperature range 11-18 kK, compared to models with LTE helium ionization. We conclude that non-equilibrium helium ionization is important for the dynamics and thermal structure of the upper chromosphere and transition region. It might also help resolve the problem that intensities of chromospheric lines computed from current models are smaller than those observed.
The assumption of equilibrium in models of migration.
Schachter, J; Althaus, P G
1993-02-01
In recent articles Evans (1990) and Harrigan and McGregor (1993) (hereafter HM) scrutinized the equilibrium model of migration presented in a 1989 paper by Schachter and Althaus. This model used standard microeconomics to analyze gross interregional migration flows based on the assumption that gross flows are in approximate equilibrium. HM criticized the model as theoretically untenable, while Evans summoned empirical as well as theoretical objections. HM claimed that equilibrium of gross migration flows could be ruled out on theoretical grounds. They argued that the absence of net migration requires that either all regions have equal populations or that unsustainable regional migration propensities must obtain. In fact some moves are inter- and others are intraregional. It does not follow, however, that the number of interregional migrants will be larger for the more populous region. Alternatively, a country could be divided into a large number of small regions that have equal populations. With uniform propensities to move, each of these analytical regions would experience in equilibrium zero net migration. Hence, the condition that net migration equal zero is entirely consistent with unequal distributions of population across regions. The criticisms of Evans were based both on flawed reasoning and on misinterpretation of the results of a number of econometric studies. His reasoning assumed that the existence of demand shifts as found by Goldfarb and Yezer (1987) and Topel (1986) invalidated the equilibrium model. The equilibrium never obtains exactly, but economic modeling of migration properly begins with a simple equilibrium model of the system. A careful reading of the papers Evans cited in support of his position showed that in fact they affirmed rather than denied the appropriateness of equilibrium modeling.
Zero net migration together with nonzero gross migration are not theoretically incompatible with regional heterogeneity of population, wages, or amenities.
Optimisation of novel method for the extraction of steviosides from Stevia rebaudiana leaves.
Puri, Munish; Sharma, Deepika; Barrow, Colin J; Tiwary, A K
2012-06-01
Stevioside, a diterpene glycoside, is well known for its intense sweetness and is used as a non-caloric sweetener. Its potential widespread use requires an easy and effective extraction method. Enzymatic extraction of stevioside from Stevia rebaudiana leaves with cellulase, pectinase and hemicellulase was optimised over various parameters, such as enzyme concentration, incubation time and temperature. Hemicellulase was observed to give the highest stevioside yield (369.23±0.11 μg) in 1 h, in comparison to cellulase (359±0.30 μg) and pectinase (333±0.55 μg). Extraction from leaves under optimised conditions showed a remarkable increase in yield (35 times) compared with a control experiment. The extraction conditions were further optimised using response surface methodology (RSM). A central composite design (CCD) was used for experimental design and analysis of the results to obtain optimal extraction conditions. Based on RSM analysis, a temperature of 51-54°C, a time of 36-45 min and a cocktail of pectinase, cellulase and hemicellulase, set at 2% each, gave the best results. Under the optimised conditions, the experimental values were in close agreement with the prediction model and resulted in a threefold yield enhancement of stevioside. The isolated stevioside was characterised by ¹H NMR spectroscopy, by comparison with a stevioside standard. Copyright © 2011 Elsevier Ltd. All rights reserved.
Haering, Diane; Huchez, Aurore; Barbier, Franck; Holvoët, Patrice; Begon, Mickaël
2017-01-01
Introduction Teaching acrobatic skills with a minimal amount of repetition is a major challenge for coaches. Biomechanical, statistical or computer simulation tools can help them identify the most determinant factors of performance. Release parameters, change in moment of inertia and segmental momentum transfers were identified in the prediction of acrobatics success. The purpose of the present study was to evaluate the relative contribution of these parameters in performance throughout expertise or optimisation based improvements. The counter movement forward in flight (CMFIF) was chosen for its intrinsic dichotomy between the accessibility of its attempt and complexity of its mastery. Methods Three repetitions of the CMFIF performed by eight novice and eight advanced female gymnasts were recorded using a motion capture system. Optimal aerial techniques that maximise rotation potential at regrasp were also computed. A 14-segment-multibody-model defined through the Rigid Body Dynamics Library was used to compute recorded and optimal kinematics, and biomechanical parameters. A stepwise multiple linear regression was used to determine the relative contribution of these parameters in novice recorded, novice optimised, advanced recorded and advanced optimised trials. Finally, fixed effects of expertise and optimisation were tested through a mixed-effects analysis. Results and discussion Variation in release state only contributed to performances in novice recorded trials. Moment of inertia contribution to performance increased from novice recorded, to novice optimised, advanced recorded, and advanced optimised trials. Contribution to performance of momentum transfer to the trunk during the flight prevailed in all recorded trials. Although optimisation decreased transfer contribution, momentum transfer to the arms appeared. Conclusion Findings suggest that novices should be coached on both contact and aerial technique. 
Conversely, improved aerial technique mainly helped advanced gymnasts increase their performance. For both groups, reduction of the moment of inertia should be a point of focus. The method proposed in this article could be generalised to any aerial skill learning investigation. PMID:28422954
Experimental testing of olivine-melt equilibrium models at high temperatures
NASA Astrophysics Data System (ADS)
Krasheninnikov, S. P.; Sobolev, A. V.; Batanova, V. G.; Kargaltsev, A. A.; Borisov, A. A.
2017-08-01
Data are presented on the equilibrium compositions of olivine and melts in the products of 101 experiments performed at 1300-1600°C, atmospheric pressure and controlled oxygen fugacity, using new equipment at the Vernadsky Institute. It was shown that the available olivine-melt equilibrium models describe natural systems with insufficient adequacy at temperatures above 1400°C. The most adequate is the model of Ford et al. (1983); however, even this model shows a systematic bias, underestimating the equilibrium temperature by 20-40°C at 1450-1600°C. These data point to the need to develop a new, improved quantitative model of olivine-melt equilibrium for high-temperature magnesian melts, and show that such studies are feasible with the equipment presented.
Selecting a climate model subset to optimise key ensemble properties
NASA Astrophysics Data System (ADS)
Herger, Nadja; Abramowitz, Gab; Knutti, Reto; Angélil, Oliver; Lehmann, Karsten; Sanderson, Benjamin M.
2018-02-01
End users studying impacts and risks caused by human-induced climate change are often presented with large multi-model ensembles of climate projections whose composition and size are arbitrarily determined. An efficient and versatile method that finds a subset which maintains certain key properties from the full ensemble is needed, but very little work has been done in this area. Therefore, users typically make their own somewhat subjective subset choices and commonly use the equally weighted model mean as a best estimate. However, different climate model simulations cannot necessarily be regarded as independent estimates due to the presence of duplicated code and shared development history. Here, we present an efficient and flexible tool that makes better use of the ensemble as a whole by finding a subset with improved mean performance compared to the multi-model mean while at the same time maintaining the spread and addressing the problem of model interdependence. Out-of-sample skill and reliability are demonstrated using model-as-truth experiments. This approach is illustrated with one set of optimisation criteria but we also highlight the flexibility of cost functions, depending on the focus of different users. The technique is useful for a range of applications that, for example, minimise present-day bias to obtain an accurate ensemble mean, reduce dependence in ensemble spread, maximise future spread, ensure good performance of individual models in an ensemble, reduce the ensemble size while maintaining important ensemble characteristics, or optimise several of these at the same time. As in any calibration exercise, the final ensemble is sensitive to the metric, observational product, and pre-processing steps used.
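The core idea of subset selection under competing criteria can be sketched with a brute-force toy (synthetic numbers, a single observation, and a simple range-based spread constraint; the paper's tool uses far richer cost functions and model-independence terms).

```python
from itertools import combinations

def select_subset(models, obs, k, min_spread_frac=0.5):
    """Among all size-k subsets of an ensemble, minimise the bias of
    the subset mean against an observation, while requiring the subset
    spread (range) to retain at least a fraction of the full-ensemble
    spread.  A toy version of the subset-optimisation idea."""
    full_spread = max(models) - min(models)
    best, best_bias = None, float("inf")
    for sub in combinations(models, k):
        if max(sub) - min(sub) < min_spread_frac * full_spread:
            continue                      # spread constraint violated
        bias = abs(sum(sub) / k - obs)
        if bias < best_bias:
            best, best_bias = sub, bias
    return best, best_bias

models = [1.0, 2.0, 3.0, 4.0, 8.0, 9.0, 10.0]   # synthetic "ensemble"
obs = 5.0
subset, bias = select_subset(models, obs, k=3)
full_bias = abs(sum(models) / len(models) - obs)
```

Brute force is only viable for small ensembles; the paper's tool solves the analogous combinatorial problem with proper optimisation machinery and out-of-sample (model-as-truth) validation.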
Improving Vector Evaluated Particle Swarm Optimisation by Incorporating Nondominated Solutions
Lim, Kian Sheng; Ibrahim, Zuwairie; Buyamin, Salinda; Ahmad, Anita; Naim, Faradila; Ghazali, Kamarul Hawari; Mokhtar, Norrima
2013-01-01
The Vector Evaluated Particle Swarm Optimisation algorithm is widely used to solve multiobjective optimisation problems. This algorithm optimises one objective using a swarm of particles where their movements are guided by the best solution found by another swarm. However, the best solution of a swarm is only updated when a newly generated solution has better fitness than the best solution at the objective function optimised by that swarm, yielding poor solutions for the multiobjective optimisation problems. Thus, an improved Vector Evaluated Particle Swarm Optimisation algorithm is introduced by incorporating the nondominated solutions as the guidance for a swarm rather than using the best solution from another swarm. In this paper, the performance of improved Vector Evaluated Particle Swarm Optimisation algorithm is investigated using performance measures such as the number of nondominated solutions found, the generational distance, the spread, and the hypervolume. The results suggest that the improved Vector Evaluated Particle Swarm Optimisation algorithm has impressive performance compared with the conventional Vector Evaluated Particle Swarm Optimisation algorithm. PMID:23737718
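The archive-building step implied above, guiding each swarm with nondominated solutions rather than another swarm's single best, reduces to a Pareto filter. A minimal sketch for minimisation problems (the full VEPSO update and leader-selection rules are not reproduced here):

```python
def nondominated(points):
    """Return the nondominated subset of a list of objective vectors
    (minimisation).  A point is dominated if another point is no worse
    in every objective and strictly better in at least one."""
    front = []
    for p in points:
        dominated = any(
            all(qi <= pi for qi, pi in zip(q, p)) and
            any(qi < pi for qi, pi in zip(q, p))
            for q in points if q != p
        )
        if not dominated:
            front.append(p)
    return front

pts = [(1, 5), (2, 4), (3, 3), (4, 2), (5, 1), (3, 4), (2, 6)]
front = nondominated(pts)   # (3, 4) and (2, 6) are dominated
```

Performance measures mentioned in the abstract, such as generational distance and hypervolume, are then computed against exactly this kind of nondominated set.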
hydroPSO: A Versatile Particle Swarm Optimisation R Package for Calibration of Environmental Models
NASA Astrophysics Data System (ADS)
Zambrano-Bigiarini, M.; Rojas, R.
2012-04-01
Particle Swarm Optimisation (PSO) is a recent and powerful population-based stochastic optimisation technique inspired by the social behaviour of bird flocking, which shares similarities with other evolutionary techniques such as Genetic Algorithms (GA). In PSO, however, each individual of the population, known as a particle in PSO terminology, adjusts its flying trajectory over the multi-dimensional search space according to its own experience (best-known personal position) and that of its neighbours in the swarm (best-known local position). PSO has recently received a surge of attention given its flexibility, ease of programming, low memory and CPU requirements, and efficiency. Despite these advantages, PSO may still get trapped in sub-optimal solutions, or suffer from swarm explosion or premature convergence. Thus, the development of enhancements to the "canonical" PSO is an active area of research. To date, several modifications to the canonical PSO have been proposed in the literature, resulting in a large and dispersed collection of codes and algorithms which might well be used for similar if not identical purposes. In this work we present hydroPSO, a platform-independent R package implementing several enhancements to the canonical PSO that we consider of utmost importance to bring this technique to the attention of a broader community of scientists and practitioners. hydroPSO is model-independent, allowing the user to interface any model code with the calibration engine without having to invest considerable effort in customising PSO for a new calibration problem. Some of the controlling options to fine-tune hydroPSO are: four alternative topologies, several types of inertia weight, time-variant acceleration coefficients, time-variant maximum velocity, regrouping of particles when premature convergence is detected, different types of boundary conditions, and many others.
Additionally, hydroPSO implements recent PSO variants such as Improved Particle Swarm Optimisation (IPSO), Fully Informed Particle Swarm (FIPS), and weighted FIPS (wFIPS). Finally, an advanced sensitivity analysis using the Latin Hypercube One-At-a-Time (LH-OAT) method and user-friendly plotting summaries facilitate the interpretation and assessment of the calibration/optimisation results. We validate hydroPSO against the standard PSO algorithm (SPSO-2007) employing five test functions commonly used to assess the performance of optimisation algorithms. Additionally, we illustrate how the performance of the optimisation/calibration engine is boosted by using several of the fine-tuning options included in hydroPSO. Finally, we show how to interface SWAT-2005 with hydroPSO to calibrate a semi-distributed hydrological model for the Ega River basin in Spain, and how to interface MODFLOW-2000 with hydroPSO to calibrate a groundwater flow model for the regional aquifer of the Pampa del Tamarugal in Chile. We limit the applications of hydroPSO to case studies dealing with surface water and groundwater models, as these two are the authors' areas of expertise. However, given the flexibility of hydroPSO, we believe this package can be applied to any model code requiring some form of parameter estimation.
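For readers unfamiliar with the underlying engine, the canonical global-best PSO that hydroPSO enhances can be written in a few lines. The sketch below is a generic Python illustration with common default coefficients (inertia weight w and accelerations c1/c2), not hydroPSO's R code or the SPSO-2007 reference; it minimises a simple sphere function.

```python
import random

def pso(f, dim, n_particles=20, iters=200, w=0.72, c1=1.49, c2=1.49, seed=1):
    """Minimise f over [-5, 5]^dim with a canonical global-best PSO.

    w is the inertia weight; c1/c2 are the cognitive and social
    acceleration coefficients (common defaults, assumed for illustration).
    """
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # best-known personal positions
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]     # best-known global position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

best, val = pso(lambda x: sum(v * v for v in x), dim=3)
print(val)  # close to 0
```

The enhancements listed in the abstract (alternative topologies, time-variant coefficients, regrouping) all modify either the neighbourhood used for `gbest` or the update coefficients in this inner loop.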
MINTEQA2 is an equilibrium speciation model that can be used to calculate the equilibrium composition of dilute aqueous solutions in the laboratory or in natural aqueous systems. The model is useful for calculating the equilibrium mass distribution among dissolved species, adsorb...
An Initial Non-Equilibrium Porous-Media Model for CFD Simulation of Stirling Regenerators
NASA Technical Reports Server (NTRS)
Tew, Roy C.; Simon, Terry; Gedeon, David; Ibrahim, Mounir; Rong, Wei
2006-01-01
The objective of this paper is to define empirical parameters for an initial thermal non-equilibrium porous-media model for use in Computational Fluid Dynamics (CFD) codes for simulation of Stirling regenerators. The two codes currently used at Glenn Research Center for Stirling modeling are Fluent and CFD-ACE. The codes' porous-media models are equilibrium models, which assume the solid matrix and fluid are in thermal equilibrium. This is believed to be a poor assumption for Stirling regenerators; Stirling 1-D regenerator models, used in Stirling design, use non-equilibrium regenerator models and suggest that regenerator matrix and gas average temperatures can differ by several degrees at a given axial location and time during the cycle. Experimentally based information was used to define: hydrodynamic dispersion, permeability, inertial coefficient, fluid effective thermal conductivity, and fluid-solid heat transfer coefficient. Solid effective thermal conductivity was also estimated. Determination of model parameters was based on planned use in a CFD model of Infinia's Stirling Technology Demonstration Converter (TDC), which uses a random-fiber regenerator matrix. Emphasis is on the use of available data to define empirical parameters needed in a thermal non-equilibrium porous-media model for Stirling regenerator simulation. Such a model has not yet been implemented by the authors or their associates.
NASA Astrophysics Data System (ADS)
Fu, Shihua; Li, Haitao; Zhao, Guodong
2018-05-01
This paper investigates the evolutionary dynamics and strategy optimisation of a class of networked evolutionary games whose strategy updating rules incorporate a 'bankruptcy' mechanism, where a player goes bankrupt after gaining continuously low profits from the game in previous rounds. First, using the semi-tensor product of matrices, the evolutionary dynamics of this class of games is expressed as a higher-order logical dynamic system and then converted into its algebraic form, based on which the evolutionary dynamics of the given games can be discussed. Second, the strategy optimisation problem is investigated, and free-type control sequences are designed to maximise the total payoff of the whole game. Finally, an illustrative example is given to show that our new results are effective.
NASA Astrophysics Data System (ADS)
Zhang, Langwen; Xie, Wei; Wang, Jingcheng
2017-11-01
In this work, synthesis of robust distributed model predictive control (MPC) is presented for a class of linear systems subject to structured time-varying uncertainties. By decomposing a global system into smaller dimensional subsystems, a set of distributed MPC controllers, instead of a centralised controller, are designed. To ensure the robust stability of the closed-loop system with respect to model uncertainties, distributed state feedback laws are obtained by solving a min-max optimisation problem. The design of robust distributed MPC is then transformed into solving a minimisation optimisation problem with linear matrix inequality constraints. An iterative online algorithm with adjustable maximum iteration is proposed to coordinate the distributed controllers to achieve a global performance. The simulation results show the effectiveness of the proposed robust distributed MPC algorithm.
NASA Astrophysics Data System (ADS)
Chaczykowski, Maciej
2016-06-01
A basic organic Rankine cycle (ORC) and two variants of regenerative ORC have been considered for the recovery of exhaust heat from a natural gas compressor station. A modelling framework for ORC systems is presented, and the optimisation of the systems was carried out with turbine power output as the variable to be maximised. The ORC system design parameters were determined by means of a genetic algorithm. The study was aimed at estimating the thermodynamic potential of different ORC configurations with several working fluids. The first part of this paper describes the ORC equipment models, which are employed to build an NLP formulation to tackle design problems representative of waste energy recovery on gas turbines driving natural gas pipeline compressors.
NASA Astrophysics Data System (ADS)
Poikselkä, Katja; Leinonen, Mikko; Palosaari, Jaakko; Vallivaara, Ilari; Röning, Juha; Juuti, Jari
2017-09-01
This paper introduces a new type of piezoelectric actuator, the Mikbal. The Mikbal was developed from a Cymbal by adding steel structures around the steel cap to increase displacement and reduce the amount of piezoelectric material used. Here the parameters of the steel caps of Mikbal and Cymbal actuators were optimised using genetic algorithms in combination with the Comsol Multiphysics FEM modelling software. The blocking force of the actuator was maximised for different values of displacement by optimising the height and top diameter of the end-cap profile, so that their effect on displacement, blocking force and stresses could be analysed. The optimisation was performed for five Mikbal- and two Cymbal-type actuators with diameters varying between 15 and 40 mm. A Mikbal with a Ø 25 mm piezoceramic disc and a Ø 40 mm steel end cap was produced, and the measured and modelled performances of the unclamped actuator were found to agree within 2.8%. With a piezoelectric disc of Ø 25 mm, the Mikbal created 72% greater displacement while blocking force decreased by 57% compared with a Cymbal with the same size disc. Even with a Ø 20 mm piezoelectric disc, the Mikbal was able to generate ∼10% higher displacement than a Ø 25 mm Cymbal. Thus, the introduced Mikbal structure presents a way to extend the displacement capabilities of a conventional Cymbal actuator for low-to-moderate force applications.
NASA Astrophysics Data System (ADS)
Eriksen, Janus J.
2017-09-01
It is demonstrated how the non-proprietary OpenACC standard of compiler directives may be used to compactly and efficiently accelerate the rate-determining steps of two of the most routinely applied many-body methods of electronic structure theory, namely the second-order Møller-Plesset (MP2) model in its resolution-of-the-identity approximated form and the (T) triples correction to the coupled cluster singles and doubles model (CCSD(T)). By means of compute directives as well as the use of optimised device math libraries, the operations involved in the energy kernels have been ported to graphics processing unit (GPU) accelerators, and the associated data transfers correspondingly optimised to such a degree that the final implementations (using double and/or single precision arithmetic) are capable of scaling to systems as large as the capacity of the host central processing unit (CPU) main memory allows. The performance of the hybrid CPU/GPU implementations is assessed through calculations on test systems of alanine amino acid chains using one-electron basis sets of increasing size (ranging from double- to pentuple-ζ quality). For all but the smallest problem sizes of the present study, the optimised accelerated codes (using a single multi-core CPU host node in conjunction with six GPUs) are found to be capable of reducing the total time-to-solution by at least an order of magnitude over optimised, OpenMP-threaded CPU-only reference implementations.
NASA Astrophysics Data System (ADS)
McCarthy, Darragh; Trappe, Neil; Murphy, J. Anthony; O'Sullivan, Créidhe; Gradziel, Marcin; Doherty, Stephen; Huggard, Peter G.; Polegro, Arturo; van der Vorst, Maarten
2016-05-01
In order to investigate the origins of the Universe, it is necessary to carry out full-sky surveys of the temperature and polarisation of the Cosmic Microwave Background (CMB) radiation, the remnant of the Big Bang. Missions such as COBE and Planck have previously mapped the CMB temperature; however, to further constrain evolutionary and inflationary models, it is necessary to measure the polarisation of the CMB with greater accuracy and sensitivity than before. Missions undertaking such observations require large arrays of feed horn antennas to feed the detector arrays. Corrugated horns provide the best performance; however, owing to the large number required (circa 5000 in the case of the proposed COrE+ mission), such horns are prohibitive in terms of thermal, mechanical and cost limitations. In this paper we consider the optimisation of an alternative smooth-walled piecewise-conical profiled horn, using the mode-matching technique alongside a genetic algorithm. The technique is optimised to return a suitable design using efficient modelling software and standard desktop computing power. A design is presented showing a directional beam pattern and low levels of return loss, cross-polar power and sidelobes, as required by future CMB missions. This design was manufactured and the measured results compared with simulation, showing excellent agreement and meeting the required performance criteria. The optimisation process described here is robust and can be applied to many other applications where specific performance characteristics are required, with the user simply defining the beam requirements.
ERIC Educational Resources Information Center
Chiu, Mei-Hung; Chou, Chin-Cheng; Liu, Chia-Ju
2002-01-01
Investigates students' mental models of chemical equilibrium using dynamic science assessments. Reports that students at various levels have misconceptions about chemical equilibrium. Involves 10th grade students (n=30) in the study doing a series of hands-on chemical experiments. Focuses on the process of constructing mental models, dynamic…
Galli, Leandro; Knight, Rosemary; Robertson, Steven; Hoile, Elizabeth; Oladapo, Olubukola; Francis, David; Free, Caroline
2014-05-22
Recruitment is a major challenge for many trials; just over half reach their targets and almost a third resort to grant extensions. The economic and societal implications of this shortcoming are significant. Yet, we have a limited understanding of the processes that increase the probability that recruitment targets will be achieved. Accordingly, there is an urgent need to bring analytical rigour to the task of improving recruitment, thereby increasing the likelihood that trials reach their recruitment targets. This paper presents a conceptual framework that can be used to improve recruitment to clinical trials. Using a case-study approach, we reviewed the range of initiatives that had been undertaken to improve recruitment in the txt2stop trial using qualitative (semi-structured interviews with the principal investigator) and quantitative (recruitment) data analysis. Later, the txt2stop recruitment practices were compared to a previous model of marketing a trial and to key constructs in social marketing theory. Post hoc, we developed a recruitment optimisation model to serve as a conceptual framework to improve recruitment to clinical trials. A core premise of the model is that improving recruitment needs to be an iterative, learning process. The model describes three essential activities: i) recruitment phase monitoring, ii) marketing research, and iii) the evaluation of current performance. We describe the initiatives undertaken by the txt2stop trial and the results achieved, as an example of the use of the model. Further research should explore the impact of adopting the recruitment optimisation model when applied to other trials.
Fogliata, Antonella; Nicolini, Giorgia; Clivio, Alessandro; Vanetti, Eugenio; Laksar, Sarbani; Tozzi, Angelo; Scorsetti, Marta; Cozzi, Luca
2015-10-31
To evaluate the performance of a broad-scope model-based optimisation process for volumetric modulated arc therapy applied to esophageal cancer. A set of 70 previously treated patients from two different institutions was selected to train a model for the prediction of dose-volume constraints. The model was built with a broad-scope purpose, aiming to be effective for different dose prescriptions and tumour localisations. It was validated on three groups of patients, from the same institution and from another clinic not providing patients for the training phase. Comparison of the automated plans was made against reference cases given by the clinically accepted plans. Quantitative improvements (statistically significant for the majority of the analysed dose-volume parameters) were observed between the benchmark and the test plans. Of 624 dose-volume objectives assessed for plan evaluation, in 21 cases (3.3 %) the reference plans failed to respect the constraints while the model-based plans succeeded. In only 3 cases (<0.5 %) did the reference plans pass the criteria while the model-based plans failed. In 5.3 % of the cases both groups of plans failed, and in the remaining cases both passed the tests. Plans were optimised using a broad-scope knowledge-based model to determine the dose-volume constraints. The results showed dosimetric improvements when compared to the benchmark data. In particular, the plans optimised for patients from the third centre, which did not participate in the training, were of superior quality. The data suggest that the new engine is reliable and could encourage its application in clinical practice.
An Initial Non-Equilibrium Porous-Media Model for CFD Simulation of Stirling Regenerators
NASA Technical Reports Server (NTRS)
Tew, Roy; Simon, Terry; Gedeon, David; Ibrahim, Mounir; Rong, Wei
2006-01-01
The objective of this paper is to define empirical parameters (or closure models) for an initial thermal non-equilibrium porous-media model for use in Computational Fluid Dynamics (CFD) codes for simulation of Stirling regenerators. The two CFD codes currently being used at Glenn Research Center (GRC) for Stirling engine modeling are Fluent and CFD-ACE. The porous-media models available in each of these codes are equilibrium models, which assume that the solid matrix and the fluid are in thermal equilibrium at each spatial location within the porous medium. This is believed to be a poor assumption for the oscillating-flow environment within Stirling regenerators; Stirling 1-D regenerator models, used in Stirling design, use non-equilibrium regenerator models and suggest regenerator matrix and gas average temperatures can differ by several degrees at a given axial location and time during the cycle. A NASA regenerator research grant has been providing experimental and computational results to support definition of various empirical coefficients needed in defining a non-equilibrium, macroscopic, porous-media model (i.e., to define "closure" relations). The grant effort is being led by Cleveland State University, with subcontractor assistance from the University of Minnesota, Gedeon Associates, and Sunpower, Inc. Friction-factor and heat-transfer correlations based on data taken with the NASA/Sunpower oscillating-flow test rig also provide experimentally based correlations that are useful in defining parameters for the porous-media model; these correlations are documented in Gedeon Associates' Sage Stirling-Code Manuals.
These sources of experimentally based information were used to define the following terms and parameters needed in the non-equilibrium porous-media model: hydrodynamic dispersion, permeability, inertial coefficient, fluid effective thermal conductivity (including thermal dispersion and an estimate of tortuosity effects), and fluid-solid heat transfer coefficient. Solid effective thermal conductivity (including the effect of tortuosity) was also estimated. Determination of the porous-media model parameters was based on planned use in a CFD model of Infinia's Stirling Technology Demonstration Convertor (TDC), which uses a random-fiber regenerator matrix. The non-equilibrium porous-media model presented is considered to be an initial, or "draft," model for possible incorporation in commercial CFD codes, with the expectation that the empirical parameters will likely need to be updated once resulting Stirling CFD model regenerator and engine results have been analyzed. The emphasis of the paper is on use of available data to define empirical parameters (and closure models) needed in a thermal non-equilibrium porous-media model for Stirling regenerator simulation. Such a model has not yet been implemented by the authors or their associates. However, it is anticipated that a thermal non-equilibrium model such as that presented here, when incorporated in the CFD codes, will improve our ability to accurately model Stirling regenerators with CFD relative to current thermal-equilibrium porous-media models.
Pal, Saikat; Lindsey, Derek P.; Besier, Thor F.; Beaupre, Gary S.
2013-01-01
Cartilage material properties provide important insights into joint health, and cartilage material models are used in whole-joint finite element models. Although the biphasic model representing experimental creep indentation tests is commonly used to characterize cartilage, cartilage short-term response to loading is generally not characterized using the biphasic model. The purpose of this study was to determine the short-term and equilibrium material properties of human patella cartilage using a viscoelastic model representation of creep indentation tests. We performed 24 experimental creep indentation tests from 14 human patellar specimens ranging in age from 20 to 90 years (median age 61 years). We used a finite element model to reproduce the experimental tests and determined cartilage material properties from viscoelastic and biphasic representations of cartilage. The viscoelastic model consistently provided excellent representation of the short-term and equilibrium creep displacements. We determined initial elastic modulus, equilibrium elastic modulus, and equilibrium Poisson’s ratio using the viscoelastic model. The viscoelastic model can represent the short-term and equilibrium response of cartilage and may easily be implemented in whole-joint finite element models. PMID:23027200
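The viscoelastic representation described above can be illustrated with the creep response of a standard linear solid. The function and parameter values below are illustrative assumptions, not the study's fitted cartilage properties; they show how an initial modulus governs the short-term response and an equilibrium modulus governs the long-time plateau of a creep curve.

```python
import math

def creep_strain(t, stress, E0, Einf, tau):
    """Creep of a standard linear solid under constant stress: the
    instantaneous response is set by the initial modulus E0, relaxing
    towards the equilibrium modulus Einf with time constant tau."""
    return stress * (1 / E0 + (1 / Einf - 1 / E0) * (1 - math.exp(-t / tau)))

# Illustrative values only (MPa, seconds) -- not measured cartilage data.
stress, E0, Einf, tau = 0.5, 8.0, 2.0, 30.0
eps0 = creep_strain(0.0, stress, E0, Einf, tau)     # short-term response
eps_eq = creep_strain(1e6, stress, E0, Einf, tau)   # equilibrium response
print(stress / eps0, stress / eps_eq)  # recovers E0 and Einf
```

Fitting E0, Einf and tau to an experimental creep-indentation curve is, in essence, what the viscoelastic representation in the study does, with the finite element model supplying the indentation geometry.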
Using modified fruit fly optimisation algorithm to perform the function test and case studies
NASA Astrophysics Data System (ADS)
Pan, Wen-Tsao
2013-06-01
Evolutionary computation is a computing paradigm established by simulating natural evolutionary processes based on Darwinian theory, and it is a common research method. The main contribution of this paper was to reinforce the ability of the fruit fly optimisation algorithm (FOA) to search for the optimal solution, in order to avoid becoming trapped in local extrema. Evolutionary computation has grown to include concepts of animal foraging behaviour and group behaviour. This study discussed three common evolutionary computation methods and compared them with the modified fruit fly optimisation algorithm (MFOA). It further investigated the algorithms' ability to compute the extreme values of three mathematical functions, their execution speed, and the forecast ability of forecasting models built using the optimised general regression neural network (GRNN) parameters. The findings indicated that there was no obvious difference between particle swarm optimisation and the MFOA in the ability to compute extreme values; however, both were better than the artificial fish swarm algorithm and the FOA. In addition, the MFOA performed better than particle swarm optimisation in execution speed, and the forecast ability of the forecasting model built using the MFOA's GRNN parameters was better than that of the other three forecasting models.
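The basic FOA loop that the paper modifies can be sketched compactly. This is a minimal generic version, not the authors' MFOA: flies scatter around the swarm location, the smell concentration is the reciprocal of each fly's distance to the origin, and the swarm moves to any fly that improves the best fitness found so far. The fitness function, population size, and step size are arbitrary choices for illustration.

```python
import math, random

def foa(fitness, iters=100, pop=30, step=1.0, seed=2):
    """Minimise fitness(s) with a basic fruit fly optimisation loop,
    where s is the smell concentration 1/distance-to-origin."""
    rng = random.Random(seed)
    x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)   # initial swarm location
    best_s, best_f = None, float("inf")
    for _ in range(iters):
        for _ in range(pop):
            xi = x + rng.uniform(-step, step)        # random search direction
            yi = y + rng.uniform(-step, step)
            s = 1.0 / math.hypot(xi, yi)             # smell concentration
            f = fitness(s)
            if f < best_f:                           # swarm flies to best smell
                best_s, best_f, x, y = s, f, xi, yi
    return best_s, best_f

s, f = foa(lambda s: (s - 10.0) ** 2)
print(s, f)  # s approaches 10
```

The MFOA described in the abstract alters this search behaviour to escape local extrema; the sketch keeps only the canonical mechanism.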
Lu, Chunxia; Luo, Xiaoling; Lu, Liliang; Li, Hongmin; Chen, Xia; Ji, Yong
2013-03-01
In recent years, ionic liquids have become increasingly attractive as 'green solvents' for the extraction of bioactive compounds from natural plants. However, the separation of ionic liquid from the target compounds is difficult, owing to their low vapour pressure and high stability. In our study, ionic liquid-based ultrasonic- and microwave-assisted extraction was used to obtain the crude tannins, and macroporous resin adsorption was then employed to purify the tannins and remove the ionic liquid from the crude extract. The results showed that XDA-6 had higher separation efficiency than the other tested resins, and the equilibrium experimental data were well fitted by Langmuir isotherms. Dynamic adsorption and desorption were performed on XDA-6 packed in glass columns to optimise the separation process. The optimum conditions were as follows: the ratio of column height to bed diameter was 1:8, the flow rate was 1 BV/h (bed volume per hour), and 85% ethanol was used as eluant with an elution volume of 2 BV. Under the optimised conditions, the adsorption and desorption rates of tannins on XDA-6 were 94.81 and 91.63%, respectively. The content of tannins was increased from 70.24% in Galla chinensis extract to 85.12%, with a recovery of 99.06%. Ultra-performance liquid chromatography (UPLC)-MS/MS analysis showed that [bmim]Br could be removed from the extract. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Cunniffe, Nik J; Gilligan, Christopher A
2011-06-07
We develop and analyse a flexible compartmental model of the interaction between a plant host, a soil-borne pathogen and a microbial antagonist, for use in optimising biological control. By extracting invasion and persistence thresholds of host, pathogen and biological control agent, performing an equilibrium analysis, and numerical investigation of sensitivity to parameters and initial conditions, we determine criteria for successful biological control. We identify conditions for biological control (i) to prevent a pathogen entering a system, (ii) to eradicate a pathogen that is already present and, if that is not possible, (iii) to reduce the density of the pathogen. Control depends upon the epidemiology of the pathogen and how efficiently the antagonist can colonise particular habitats (i.e. healthy tissue, infected tissue and/or soil-borne inoculum). A sharp transition between totally effective control (i.e. eradication of the pathogen) and totally ineffective control can follow slight changes in biologically interpretable parameters or to the initial amounts of pathogen and biological control agent present. Effective biological control requires careful matching of antagonists to pathosystems. For preventative/eradicative control, antagonists must colonise susceptible hosts. However, for reduction in disease prevalence, the range of habitat is less important than the antagonist's bulking-up efficiency. Copyright © 2011 Elsevier Ltd. All rights reserved.
Anoop Krishnan, K; Sreejalekshmi, K G; Vimexen, V; Dev, Vinu V
2016-02-01
The prospective application of sulphurised activated carbon (SAC) as an ecofriendly and cost-effective adsorbent for Zinc(II) removal from the aqueous phase is evaluated, with an emphasis on kinetic and isotherm aspects. SAC was prepared from sugarcane bagasse pith obtained from local juice shops at the Sree Bhadrakali Devi Temple located at Ooruttukala, Neyyattinkara, Trivandrum, India during annual festive seasons. Activated carbon modified with sulphur-containing ligands was chosen as the adsorbent to leverage the affinity of Zn(II) for sulphur. We report batch-adsorption experiments for parameter optimisation aimed at maximum removal of Zn(II) from the liquid phase using SAC. Adsorption of Zn(II) onto SAC was maximal at pH 6.5. For initial concentrations of 25 and 100 mg L(-1), maxima of 12.3 mg g(-1) (98.2%) and 23.7 mg g(-1) (94.8%) of Zn(II) were adsorbed onto SAC at pH 6.5. Kinetic and equilibrium data were best described by pseudo-second-order and Langmuir models, respectively. A maximum adsorption capacity of 147 mg g(-1) was obtained for the adsorption of Zn(II) onto SAC from aqueous solutions. The reusability of the spent adsorbent was also determined. Copyright © 2015 Elsevier Inc. All rights reserved.
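Langmuir fits like the one reported here are commonly done through the linearised form Ce/qe = Ce/qmax + 1/(K·qmax), where Ce is the equilibrium concentration and qe the equilibrium uptake. The sketch below applies ordinary least squares to synthetic data; the isotherm constants are invented for illustration, not the paper's measurements.

```python
def langmuir_fit(Ce, qe):
    """Fit the Langmuir isotherm q = qmax*K*C/(1 + K*C) via its linearised
    form Ce/qe = Ce/qmax + 1/(K*qmax), using ordinary least squares.
    Returns (qmax, K)."""
    x, y = Ce, [c / q for c, q in zip(Ce, qe)]
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    qmax = 1.0 / slope
    K = slope / intercept
    return qmax, K

# Synthetic equilibrium data generated from assumed qmax=150 mg/g, K=0.05 L/mg.
qmax_true, K_true = 150.0, 0.05
Ce = [5.0, 10.0, 25.0, 50.0, 100.0, 200.0]
qe = [qmax_true * K_true * c / (1 + K_true * c) for c in Ce]
print(langmuir_fit(Ce, qe))  # recovers (150.0, 0.05)
```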
Land-surface parameter optimisation using data assimilation techniques: the adJULES system V1.0
Raoult, Nina M.; Jupp, Tim E.; Cox, Peter M.; ...
2016-08-25
Land-surface models (LSMs) are crucial components of the Earth system models (ESMs) that are used to make coupled climate–carbon cycle projections for the 21st century. The Joint UK Land Environment Simulator (JULES) is the land-surface model used in the climate and weather forecast models of the UK Met Office. JULES is also extensively used offline as a land-surface impacts tool, forced with climatologies into the future. In this study, JULES is automatically differentiated with respect to JULES parameters using commercial software from FastOpt, resulting in an analytical gradient, or adjoint, of the model. Using this adjoint, the adJULES parameter estimation system has been developed to search for locally optimum parameters by calibrating against observations. This paper describes adJULES in a data assimilation framework and demonstrates its ability to improve the model–data fit using eddy-covariance measurements of gross primary production (GPP) and latent heat (LE) fluxes. adJULES also has the ability to calibrate over multiple sites simultaneously. This feature is used to define new optimised parameter values for the five plant functional types (PFTs) in JULES. The optimised PFT-specific parameters improve the performance of JULES at over 85 % of the sites used in the study, at both the calibration and evaluation stages. Furthermore, the new improved parameters for JULES are presented along with the associated uncertainties for each parameter.
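The gradient-based calibration loop adJULES performs can be caricatured with a toy problem. The sketch below substitutes a two-parameter saturating curve for JULES and a finite-difference gradient for the FastOpt-generated analytic adjoint; the model form, parameter values, and learning rate are all illustrative assumptions.

```python
def model(params, x):
    """Toy stand-in for JULES: a two-parameter saturating response curve."""
    a, b = params
    return a * x / (b + x)

def cost(params, xs, obs):
    """Sum-of-squares misfit between modelled and observed fluxes."""
    return sum((model(params, x) - o) ** 2 for x, o in zip(xs, obs))

def fd_gradient(params, xs, obs, h=1e-6):
    """Central finite differences, standing in for the analytic adjoint."""
    g = []
    for i in range(len(params)):
        up = list(params); up[i] += h
        dn = list(params); dn[i] -= h
        g.append((cost(up, xs, obs) - cost(dn, xs, obs)) / (2 * h))
    return g

def calibrate(xs, obs, params, lr=0.05, steps=2000):
    """Plain gradient descent towards a locally optimum parameter set."""
    for _ in range(steps):
        g = fd_gradient(params, xs, obs)
        params = [p - lr * gi for p, gi in zip(params, g)]
    return params

xs = [0.5, 1.0, 2.0, 4.0, 8.0]
obs = [model((3.0, 1.5), x) for x in xs]        # synthetic "observations"
a, b = calibrate(xs, obs, [1.0, 1.0])
print(a, b)  # approaches the true (3.0, 1.5)
```

The analytic adjoint makes the gradient step cheap and exact for the real model; the finite-difference version here merely shows the shape of the calibration loop.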
Equilibrium of Global Amphibian Species Distributions with Climate
Munguía, Mariana; Rahbek, Carsten; Rangel, Thiago F.; Diniz-Filho, Jose Alexandre F.; Araújo, Miguel B.
2012-01-01
A common assumption in bioclimatic envelope modeling is that species distributions are in equilibrium with contemporary climate. A number of studies have measured departures from equilibrium in species distributions in particular regions, but such investigations have never been carried out for a complete lineage across its entire distribution. We measure departures from equilibrium with contemporary climate for the distributions of the world's amphibian species. Specifically, we fitted bioclimatic envelopes for 5544 species using three presence-only models. We then measured the proportion of the modeled envelope that is currently occupied by each species, as a metric of the equilibrium of species distributions with climate. The assumption was that the greater the difference between the modeled bioclimatic envelope and the occupied distribution, the greater the likelihood that the species distribution is not at equilibrium with contemporary climate. On average, amphibians occupied 30% to 57% of their potential distributions. Although patterns differed across regions, there were no significant differences among lineages. Species in the Neotropics, Afrotropics, Indo-Malay, and Palaearctic occupied a smaller proportion of their potential distributions than species in the Nearctic, Madagascar, and Australasia. We acknowledge that our models underestimate non-equilibrium, and discuss potential reasons for the observed patterns. From a modeling perspective, our results support the view that at the global scale bioclimatic envelope models might perform similarly across lineages but differently across regions. PMID:22511938
Material model of pelvic bone based on modal analysis: a study on the composite bone.
Henyš, Petr; Čapek, Lukáš
2017-02-01
Digital models based on finite element (FE) analysis are widely used in orthopaedics to predict the stress or strain in bone due to bone-implant interaction. The usability of such a model depends strongly on the bone material description. The material model most commonly used is based on a constant Young's modulus or on the apparent density of bone obtained from computed tomography (CT) data. The Young's modulus of bone is described in many experimental works, with large variations in the results. This pilot study introduces the concept of measuring and validating a material model of the pelvic bone based on modal analysis. The modal frequencies, damping, and shapes of the composite bone were measured precisely by an impact hammer at 239 points. An FE model was built using the geometry and apparent density obtained from the CT of the composite bone. The isotropic homogeneous Young's modulus and Poisson's ratio of the cortical and trabecular bone were estimated from an optimisation procedure including Gaussian statistical properties. The performance of the updated model was investigated through a sensitivity analysis of the natural frequencies with respect to the material parameters. The maximal error between the numerical and experimental natural frequencies of the bone reached 1.74 % in the first modal shape. Finally, the optimised parameters were matched against the data sheets of the composite bone. The maximal difference between the calibrated material properties and those obtained from the data sheet was 34 %. The optimisation scheme of the FE model based on modal analysis data provides extremely useful calibration of FE models with uncertainty bounds and without the influence of boundary conditions.
Spread of Ebola disease with susceptible exposed infected isolated recovered (SEIIhR) model
NASA Astrophysics Data System (ADS)
Azizah, Afina; Widyaningsih, Purnami; Retno Sari Saputro, Dewi
2017-06-01
Ebola is a deadly infectious disease that has caused epidemics in several countries in West Africa. Mathematical models of the spread of Ebola have been developed, including the susceptible infected removed (SIR) and susceptible exposed infected removed (SEIR) models. Furthermore, the susceptible exposed infected isolated recovered (SEIIhR) model has been derived. The aims of this research are to derive the SEIIhR model for Ebola, to determine the patterns of its spread, to determine the equilibrium point and its stability using phase-plane analysis, and to apply the SEIIhR model to the 2014 Ebola epidemic in Sierra Leone. The SEIIhR model is a system of differential equations, and the pattern of Ebola spread under the model is the solution of this system. The equilibrium point of the SEIIhR model is unique: it is a disease-free equilibrium point that is stable. Application of the model is based on data from the Ebola epidemic in Sierra Leone. The disease-free equilibrium point (Se, Ee, Ie, Ihe, Re) = (5743865, 0, 0, 0, 0) is stable.
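A compartmental system of this shape can be integrated numerically. The sketch below uses the Sierra Leone population figure from the abstract as the initial susceptible pool; all rate constants are illustrative placeholders, not the paper's fitted values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# SEIIhR compartments: Susceptible, Exposed, Infected, Isolated (Ih), Recovered.
# Rates below are assumptions for illustration only.
beta, sigma, delta, gamma = 0.3, 0.2, 0.25, 0.1  # infection, incubation, isolation, recovery

def seiihr(t, y):
    S, E, I, Ih, R = y
    N = S + E + I + Ih + R
    return [-beta * S * I / N,
            beta * S * I / N - sigma * E,
            sigma * E - delta * I,
            delta * I - gamma * Ih,
            gamma * Ih]

# Start near the disease-free equilibrium reported for Sierra Leone (N = 5743865)
y0 = [5743865 - 10, 0, 10, 0, 0]
sol = solve_ivp(seiihr, (0, 365), y0)
print(f"state after one year: {np.round(sol.y[:, -1])}")
```

Because every outflow from one compartment is an inflow to another, the total population is conserved along the trajectory, which is a useful sanity check on any implementation.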
Wall ablation of heated compound-materials into non-equilibrium discharge plasmas
NASA Astrophysics Data System (ADS)
Wang, Weizong; Kong, Linghan; Geng, Jinyue; Wei, Fuzhi; Xia, Guangqing
2017-02-01
The discharge properties of the plasma bulk flow near the surface of heated compound materials strongly affect the kinetic-layer parameters modeled and manifested in the Knudsen layer. This paper extends the widely used two-layer kinetic ablation model to ablation-controlled non-equilibrium discharges, since the local thermodynamic equilibrium (LTE) approximation is often violated as a result of the interaction between the plasma and solid walls. Modifications to the governing set of equations that account for this effect are derived and presented by assuming that the temperature of the electrons deviates from that of the heavy particles. The ablation characteristics of one typical material, polytetrafluoroethylene (PTFE), are calculated with this improved model. The internal degrees of freedom, as well as the average particle mass and specific heat ratio of the polyatomic vapor, which depend strongly on the temperature, pressure and degree of plasma non-equilibrium and play a crucial role in the accurate determination of the ablation behavior, are also taken into account. Our assessment showed the significance of including such non-equilibrium modifications in the study of vaporization of heated compound materials in ablation-controlled arcs. Additionally, a two-temperature magneto-hydrodynamic (MHD) model accounting for the thermal non-equilibrium occurring near the wall surface is developed and applied to an ablation-dominated discharge for an electro-thermal chemical launch device. Special attention is paid to the interaction between the non-equilibrium plasma and the solid propellant surface. The mass exchange caused by wall ablation and plasma species deposition, together with the associated momentum and energy exchange processes, is taken into account. A detailed comparison of the results of the non-equilibrium model with those of an equilibrium model is presented. The non-equilibrium results reveal a non-equilibrium region near the plasma-wall interface, indicating that possible departures from LTE in the plasma bulk must be considered when determining the ablation rate.
NASA Astrophysics Data System (ADS)
Fouladi, Ehsan; Mojallali, Hamed
2018-01-01
In this paper, an adaptive backstepping controller is tuned to synchronise two chaotic Colpitts oscillators in a master-slave configuration. The parameters of the controller are determined using the shark smell optimisation (SSO) algorithm. Numerical results are presented and compared with those of the particle swarm optimisation (PSO) algorithm. Simulation results show that the SSO-tuned controller outperforms both the PSO-tuned controller and the non-optimised backstepping controller in terms of accuracy and convergence.
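A minimal PSO loop of the kind used here as the comparison baseline can be sketched as follows. The quadratic cost is a toy stand-in for the synchronisation-error measure actually minimised when tuning the backstepping gains; the swarm parameters are common textbook choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def cost(gains):
    # Toy surrogate for a synchronisation-error integral; minimum at (2.0, 0.5)
    return (gains[0] - 2.0) ** 2 + 10.0 * (gains[1] - 0.5) ** 2

n, dims, iters = 20, 2, 100
w, c1, c2 = 0.7, 1.5, 1.5            # inertia and acceleration coefficients
pos = rng.uniform(-5, 5, (n, dims))  # candidate gain vectors
vel = np.zeros((n, dims))
pbest = pos.copy()
pbest_val = np.apply_along_axis(cost, 1, pos)
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, dims)), rng.random((n, dims))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.apply_along_axis(cost, 1, pos)
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print(f"tuned gains ~ {np.round(gbest, 2)}")
```

An SSO-based tuner would replace the velocity-update rule with the shark's gradient-guided rotational search while keeping the same outer evaluate-and-select loop.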
Albadr, Musatafa Abbas Abbood; Tiun, Sabrina; Al-Dhief, Fahad Taha; Sammour, Mahmoud A M
2018-01-01
Spoken Language Identification (LID) is the process of determining and classifying natural language from given content and a dataset. Typically, data must be processed to extract useful features to perform LID. Feature extraction for LID is, according to the literature, a mature process: standard features have already been developed using Mel-Frequency Cepstral Coefficients (MFCC), Shifted Delta Cepstral (SDC) coefficients, the Gaussian Mixture Model (GMM) and, most recently, the i-vector based framework. However, the process of learning from the extracted features remains to be improved (i.e. optimised) to capture all of the knowledge embedded in them. The Extreme Learning Machine (ELM) is an effective learning model used to perform classification and regression analysis and is extremely useful for training a single-hidden-layer neural network. Nevertheless, the learning process of this model is not entirely effective (i.e. optimised) due to the random selection of the input-to-hidden-layer weights. In this study, the ELM is selected as the learning model for LID based on standard feature extraction. One of the optimisation approaches for the ELM, the Self-Adjusting Extreme Learning Machine (SA-ELM), is selected as the benchmark and improved by altering the selection phase of the optimisation process. The selection process is performed by incorporating both the Split-Ratio and K-Tournament methods; the improved SA-ELM is named the Enhanced Self-Adjusting Extreme Learning Machine (ESA-ELM). The results are generated on LID datasets created from eight different languages and show the clear superiority of ESA-ELM LID over SA-ELM LID, with ESA-ELM LID achieving an accuracy of 96.25% compared to 95.00% for SA-ELM LID.
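The baseline ELM described here (random input-to-hidden weights, output weights solved in closed form) can be sketched in a few lines. The toy data below stands in for real LID feature vectors; SA-ELM and ESA-ELM differ in how the otherwise-random weights are subsequently optimised.

```python
import numpy as np

rng = np.random.default_rng(42)

def elm_train(X, Y, n_hidden=50):
    # Input weights and biases are drawn randomly and never trained
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)              # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y        # output weights via least squares
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy two-class problem standing in for language-ID feature vectors
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)
W, b, beta = elm_train(X, y)
acc = np.mean((elm_predict(X, W, b, beta) > 0.5) == y)
print(f"training accuracy ~ {acc:.2f}")
```

The closed-form solve is what makes ELM training fast; the random-weight weakness the abstract refers to is visible here in the fact that `W` and `b` are never adapted to the data.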
NASA Astrophysics Data System (ADS)
Zularisam, A. W.; Wahida, Norul
2017-07-01
Nickel(II) is one of the most toxic contaminants, recognised as a carcinogenic and mutagenic agent, and requires complete removal from wastewater before disposal. In the present study, a novel adsorbent called mesoparticle graphene sand composite (MGSCaps) was synthesised from arenga palm sugar and sand using a green, simple, low-cost and efficient methodology. The composite was characterised using field emission scanning electron microscopy (FESEM), X-ray diffraction (XRD) and elemental mapping (EM). The adsorption process was investigated and optimised over experimental parameters such as pH, contact time and bed depth. The results showed that the interaction between nickel(II) and MGSCaps is not an ion-to-ion interaction; hence, removal of Ni(II) can be applied at any pH. The results also showed that higher contact time and bed depth gave a higher removal percentage of nickel(II). Adsorption kinetic data were modelled using the pseudo-first-order and pseudo-second-order equations. The experimental results indicated that the pseudo-second-order kinetic equation was the most suitable description of the adsorption kinetics data, with a maximum of 40% nickel(II) removal in the first hour. The equilibrium adsorption data were fitted with the Langmuir and Freundlich isotherm equations; the best-fitting model was the Freundlich isotherm, with correlation R² = 0.9974. Based on these results, adsorption using MGSCaps is an efficient, facile and reliable method for the removal of nickel(II) from wastewater.
A New Computational Technique for the Generation of Optimised Aircraft Trajectories
NASA Astrophysics Data System (ADS)
Chircop, Kenneth; Gardi, Alessandro; Zammit-Mangion, David; Sabatini, Roberto
2017-12-01
A new computational technique based on Pseudospectral Discretisation (PSD) and adaptive bisection ɛ-constraint methods is proposed to solve multi-objective aircraft trajectory optimisation problems formulated as nonlinear optimal control problems. This technique is applicable to a variety of next-generation avionics and Air Traffic Management (ATM) Decision Support Systems (DSS) for strategic and tactical replanning operations. These include the future Flight Management Systems (FMS) and the 4-Dimensional Trajectory (4DT) planning and intent negotiation/validation tools envisaged by SESAR and NextGen for a global implementation. In particular, after describing the PSD method, the adaptive bisection ɛ-constraint method is presented to allow an efficient solution of problems in which two or multiple performance indices are to be minimized simultaneously. Initial simulation case studies were performed adopting suitable aircraft dynamics models and addressing a classical vertical trajectory optimisation problem with two objectives simultaneously. Subsequently, a more advanced 4DT simulation case study is presented with a focus on representative ATM optimisation objectives in the Terminal Manoeuvring Area (TMA). The simulation results are analysed in-depth and corroborated by flight performance analysis, supporting the validity of the proposed computational techniques.
Chatzistergos, Panagiotis E; Naemi, Roozbeh; Healy, Aoife; Gerth, Peter; Chockalingam, Nachiappan
2017-08-01
Current selection of cushioning materials for therapeutic footwear and orthoses is based on empirical and anecdotal evidence. The aim of this investigation is to assess the biomechanical properties of carefully selected cushioning materials and to establish the basis for patient-specific material optimisation. For this purpose, bespoke cushioning materials with qualitatively similar mechanical behaviour but different stiffness were produced. Healthy volunteers were asked to stand and walk on materials with varying stiffness and their capacity for pressure reduction was assessed. Mechanical testing using a surrogate heel model was employed to investigate the effect of loading on optimum stiffness. Results indicated that optimising the stiffness of cushioning materials improved pressure reduction during standing and walking by at least 16% and 19%, respectively. Moreover, the optimum stiffness was strongly correlated with body mass (BM) and body mass index (BMI), with stiffer materials needed for people with higher BM or BMI. Mechanical testing confirmed that optimum stiffness increases with the magnitude of compressive loading. For the first time, this study provides quantitative data to support the importance of stiffness optimisation in cushioning materials and sets the basis for methods to inform optimum material selection in the clinic.
Breuer, Christian; Lucas, Martin; Schütze, Frank-Walter; Claus, Peter
2007-01-01
A multi-criteria optimisation procedure based on genetic algorithms is carried out in search of advanced heterogeneous catalysts for total oxidation. Simple but flexible software routines have been created for a search space of more than 150,000 individuals. The general catalyst design includes mono-, bi- and trimetallic compositions assembled from 49 different metals and deposited on an Al2O3 support at up to nine loading levels. As an efficient tool for high-throughput screening, perfectly matched to the requirements of heterogeneous gas-phase catalysis - especially for applications technically run in honeycomb structures - the multi-channel monolith reactor is implemented to evaluate catalyst performance. From a multi-component feed gas, the conversion rates of carbon monoxide (CO) and a model hydrocarbon (HC) are monitored in parallel. In combination with further restrictions on preparation and pre-treatment, a primary screening can be conducted that promises results close to technically applied catalysts. Presented are the resulting performances of the optimisation process for the first catalyst generations and the prospect of its auto-adaptation to specified optimisation goals.
NASA Astrophysics Data System (ADS)
Lehtola, Susi; Parkhill, John; Head-Gordon, Martin
2018-03-01
We describe the implementation of orbital optimisation for the models in the perfect pairing hierarchy. Orbital optimisation, which is generally necessary to obtain reliable results, is pursued at perfect pairing (PP) and perfect quadruples (PQ) levels of theory for applications on linear polyacenes, which are believed to exhibit strong correlation in the π space. While local minima and σ-π symmetry breaking solutions were found for PP orbitals, no such problems were encountered for PQ orbitals. The PQ orbitals are used for single-point calculations at PP, PQ and perfect hextuples (PH) levels of theory, both in the π subspace only and in the full σπ valence space. It is numerically demonstrated that the inclusion of single excitations is also necessary when optimised orbitals are used. PH is found to yield good agreement with previously published density matrix renormalisation group data in the π space, capturing over 95% of the correlation energy. Full-valence calculations made possible by our novel, efficient code reveal that strong correlations are weaker when larger basis sets or active spaces are employed than in previous calculations. The largest full-valence PH calculations presented correspond to a (192e,192o) problem.
Modulation aware cluster size optimisation in wireless sensor networks
NASA Astrophysics Data System (ADS)
Sriram Naik, M.; Kumar, Vinay
2017-07-01
Wireless sensor networks (WSNs) play a great role because of their numerous advantages to mankind. The main challenge with WSNs is energy efficiency. In this paper, we focus on energy minimisation through cluster size optimisation, taking modulation effects into account when the nodes cannot communicate using a baseband technique. Cluster size optimisation is an important technique for improving the performance of WSNs: it improves energy efficiency, network scalability, network lifetime and latency. We propose an analytical expression for cluster size optimisation using the traditional sensing model of nodes in a square sensing field, with modulation effects considered. Energy minimisation can be achieved by changing the modulation scheme (BPSK, QPSK, 16-QAM, 64-QAM, etc.), so we consider the effect of different modulation techniques on cluster formation. The nodes in the sensing field are deployed randomly and uniformly. We also observe that placing the base station at the centre of the field allows only a small number of modulation schemes to operate energy-efficiently, whereas placing it at a corner of the sensing field allows a much larger number of schemes to do so.
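The cluster-count trade-off behind such an optimisation can be sketched with the standard first-order radio energy model: more clusters shorten member-to-head hops but add costly head-to-base-station transmissions. All constants and the distance approximation below are textbook-style assumptions, not the paper's formulation.

```python
import numpy as np

# First-order radio model constants (illustrative, common in the WSN literature)
E_elec, eps_fs, eps_mp = 50e-9, 10e-12, 0.0013e-12  # J/bit, J/bit/m^2, J/bit/m^4
N, M, d_bs, bits = 100, 100.0, 75.0, 4000           # nodes, field side (m), head-to-BS distance, packet size

def round_energy(k):
    """Approximate total energy per round for k clusters in an M x M field."""
    d_ch2 = M**2 / (2 * np.pi * k)                  # E[d^2] from member to cluster head
    e_member = E_elec + eps_fs * d_ch2              # member transmits to its head
    e_head = E_elec * (N / k) + E_elec + eps_mp * d_bs**4  # head receives, then sends to BS
    return bits * (k * e_head + (N - k) * e_member)

ks = np.arange(1, 30)
k_opt = ks[np.argmin([round_energy(k) for k in ks])]
print(f"energy-optimal cluster count ~ {k_opt}")
```

Changing the modulation scheme effectively rescales the per-bit energy terms, which shifts the minimum of this curve; that is the coupling the abstract exploits.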
Warpage analysis on thin shell part using response surface methodology (RSM)
NASA Astrophysics Data System (ADS)
Zulhasif, Z.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.
2017-09-01
The optimisation of moulding parameters to reduce warpage defects is performed using Autodesk Moldflow Insight (AMI) 2012 software. The product is injected using Acrylonitrile-Butadiene-Styrene (ABS) material. The analysis varies the processing parameters: melt temperature, mould temperature, packing pressure and packing time. Design of Experiments (DOE) is integrated to obtain a polynomial model using Response Surface Methodology (RSM). The Glowworm Swarm Optimisation (GSO) method is then used to predict the best combination of parameters to minimise warpage and produce high-quality parts.
Models of supply function equilibrium with applications to the electricity industry
NASA Astrophysics Data System (ADS)
Aromi, J. Daniel
Electricity market design requires tools that result in a better understanding of the incentives of generators and consumers. Chapters 1 and 2 provide such tools and applications of them to analyze incentive problems in electricity markets. In chapter 1, models of supply function equilibrium (SFE) with asymmetric bidders are studied. I prove the existence and uniqueness of equilibrium in an asymmetric SFE model. In addition, I propose a simple algorithm to calculate the unique equilibrium numerically. As an application, a model of investment decisions is considered that uses the asymmetric SFE as an input. In this model, firms can invest in different technologies, each characterized by distinct variable and fixed costs. In chapter 2, option contracts are introduced to a supply function equilibrium (SFE) model. The uniqueness of the equilibrium in the spot market is established. Comparative statics results on the effect of option contracts on the equilibrium price are presented. A multi-stage game where option contracts are traded before the spot market stage is considered. When contracts are optimally procured by a central authority, the selected profile of option contracts is such that the spot market price equals marginal cost for any load level, resulting in a significant reduction in cost. If load serving entities (LSEs) are price takers, in equilibrium there is no trade of option contracts. Even when LSEs have market power, the central authority's solution cannot be implemented in equilibrium. In chapter 3, we consider a game in which a buyer must repeatedly procure an input from a set of firms. In our model, the buyer is able to sign long term contracts that establish the likelihood with which the next period contract is awarded to an entrant or the incumbent. We find that the buyer finds it optimal to favor the incumbent, as this generates more intense competition between suppliers. In a two period model we are able to completely characterize the optimal mechanism.
NASA Astrophysics Data System (ADS)
Ahmed, E.; El-Sayed, A. M. A.; El-Saka, H. A. A.
2007-01-01
In this paper we are concerned with the fractional-order predator-prey model and the fractional-order rabies model. Existence and uniqueness of solutions are proved. The stability of the equilibrium points is studied. Numerical solutions of these models are given. An example is given in which the equilibrium point is a centre for the integer-order system but locally asymptotically stable for its fractional-order counterpart.
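The closing example (a centre of the integer-order system that becomes locally asymptotically stable at fractional order) can be illustrated numerically with an explicit Grünwald-Letnikov scheme applied to a Lotka-Volterra predator-prey system. The order, step size and coefficients below are illustrative choices, not the paper's.

```python
import numpy as np

# D^alpha x = x(a - b*y), D^alpha y = y(-c + d*x); equilibrium at (c/d, a/b) = (1, 1)
alpha, h, steps = 0.9, 0.01, 5000
a, b, c, d = 1.0, 1.0, 1.0, 1.0

# Grünwald-Letnikov weights c_j = (-1)^j * binom(alpha, j), via the standard recurrence
coef = np.ones(steps + 1)
for j in range(1, steps + 1):
    coef[j] = coef[j - 1] * (1.0 - (alpha + 1.0) / j)

x = np.zeros(steps + 1); y = np.zeros(steps + 1)
x[0], y[0] = 1.2, 0.8                    # start near the equilibrium
for n in range(1, steps + 1):
    # Memory sums over the whole history: sum_{j=1}^{n} c_j * x_{n-j}
    mem_x = coef[1:n + 1] @ x[n - 1::-1]
    mem_y = coef[1:n + 1] @ y[n - 1::-1]
    x[n] = h**alpha * x[n - 1] * (a - b * y[n - 1]) - mem_x
    y[n] = h**alpha * y[n - 1] * (-c + d * x[n - 1]) - mem_y

print(f"state at t = {steps * h}: ({x[-1]:.2f}, {y[-1]:.2f})")
```

At alpha = 1 the same initial condition would orbit the equilibrium forever; at alpha < 1 the memory terms act as an effective damping and the trajectory spirals inward.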
Magnetospheric equilibrium configurations and slow adiabatic convection
NASA Technical Reports Server (NTRS)
Voigt, Gerd-Hannes
1986-01-01
This review paper demonstrates how the magnetohydrostatic equilibrium (MHE) theory can be used to describe the large-scale magnetic field configuration of the magnetosphere and its time evolution under the influence of magnetospheric convection. The equilibrium problem is reviewed, and levels of B-field modelling are examined for vacuum models, quasi-static equilibrium models, and MHD models. Results from two-dimensional MHE theory as they apply to the Grad-Shafranov equation, linear equilibria, the asymptotic theory, magnetospheric convection and the substorm mechanism, and plasma anisotropies are addressed. Results from three-dimensional MHE theory are considered as they apply to an intermediate analytical magnetospheric model, magnetotail configurations, and magnetopause boundary conditions and the influence of the IMF.
Roadmap to the multidisciplinary design analysis and optimisation of wind energy systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perez-Moreno, S. Sanchez; Zaaijer, M. B.; Bottasso, C. L.
Here, a research agenda is described to further encourage the application of Multidisciplinary Design Analysis and Optimisation (MDAO) methodologies to wind energy systems. As a group of researchers closely collaborating within the International Energy Agency (IEA) Wind Task 37 for Wind Energy Systems Engineering: Integrated Research, Design and Development, we have identified challenges that will be encountered by users building an MDAO framework. This roadmap comprises 17 research questions and activities recognised to belong to three research directions: model fidelity, system scope and workflow architecture. It is foreseen that sensible answers to all these questions will make it easier to apply MDAO in the wind energy domain. Beyond the agenda, this work also promotes the use of systems engineering to design, analyse and optimise wind turbines and wind farms, to complement existing compartmentalised research and design paradigms.
NASA Astrophysics Data System (ADS)
Asyirah, B. N.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.
2017-09-01
Plastic injection moulding is widely used in manufacturing a variety of parts. The injection moulding process parameters play an important role in the product's quality and productivity. Many approaches to minimising warpage and shrinkage have been addressed, such as artificial neural networks, genetic algorithms, glowworm swarm optimisation and hybrid approaches. In this paper, a systematic methodology for determining warpage and shrinkage in the injection moulding process, especially for thin-shell plastic parts, is presented. Response surface methodology is applied to identify the effects of the process parameters on the warpage and shrinkage values. In this study, a part of an electronic night lamp is chosen as the model. Firstly, experimental design was used to determine the effect of the injection parameters on warpage for different thickness values. The software used to analyse the warpage is Autodesk Moldflow Insight (AMI) 2012.
SLA-based optimisation of virtualised resource for multi-tier web applications in cloud data centres
NASA Astrophysics Data System (ADS)
Bi, Jing; Yuan, Haitao; Tie, Ming; Tan, Wei
2015-10-01
Dynamic virtualised resource allocation is the key to quality-of-service assurance for multi-tier web application services in cloud data centres. In this paper, we develop a self-management architecture of cloud data centres with a virtualisation mechanism for multi-tier web application services. Based on this architecture, we establish a flexible hybrid queueing model to determine the number of virtual machines for each tier of the virtualised application service environment. We then formulate a non-linear constrained optimisation problem with restrictions defined in the service level agreement. Furthermore, we develop a heuristic mixed optimisation algorithm to maximise the profit of cloud infrastructure providers while meeting the performance requirements of different clients. Finally, we compare the effectiveness of our dynamic allocation strategy with two other allocation strategies. The simulation results show that the proposed resource allocation method is efficient in improving overall performance and reducing resource energy cost.
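The core shape of such an SLA-constrained allocation problem can be sketched as a continuous relaxation: choose VM counts per tier to minimise cost subject to an end-to-end response-time bound from a queueing approximation. The per-tier response-time formula, rates and prices below are illustrative assumptions, not the paper's hybrid model.

```python
import numpy as np
from scipy.optimize import minimize

lam = 80.0                            # request arrival rate (req/s)
mu = np.array([30.0, 20.0, 50.0])     # per-VM service rate for web/app/db tiers
price = np.array([1.0, 1.5, 2.0])     # relative cost per VM per tier
sla = 0.10                            # end-to-end response-time bound (s)

def total_cost(m):
    return price @ m

def resp_time(m):
    # Simple queueing approximation: T_i = 1 / (mu_i * m_i - lam)
    return np.sum(1.0 / (mu * m - lam))

# Lower bounds keep every tier's queue stable (mu_i * m_i > lam)
bounds = [(lo, 100.0) for lo in lam / mu + 0.1]
cons = [{"type": "ineq", "fun": lambda m: sla - resp_time(m)}]
m0 = lam / mu + 2.0                   # feasible starting allocation
res = minimize(total_cost, m0, bounds=bounds, constraints=cons)
print(f"VMs per tier ~ {np.round(res.x, 1)}, relative cost ~ {res.fun:.1f}")
```

In practice the continuous solution would be rounded up to integer VM counts, and the heuristic algorithm in the paper handles the discrete, multi-client version of this trade-off.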
Examples of equilibrium and non-equilibrium behavior in evolutionary systems
NASA Astrophysics Data System (ADS)
Soulier, Arne
With this thesis, we want to shed some light into the darkness of our understanding of simply defined statistical mechanics systems and the surprisingly complex dynamical behavior they exhibit. We will do so by presenting in turn one equilibrium and then one non-equilibrium system with evolutionary dynamics. In part 1, we will present the seceder model, a newly developed system that cannot equilibrate. We will then study several properties of the system and obtain an idea of the richness of the dynamics of the seceder model, which is particularly impressive given the minimal amount of modeling necessary in its setup. In part 2, we will present extensions to the directed polymer in random media problem on a hypercube and its connection to the Eigen model of evolution. Our main interest will be the influence of time-dependent and time-independent changes in the fitness landscape viewed by an evolving population. This part contains the equilibrium dynamics. The stochastic models and the topic of evolution and non-equilibrium in general will allow us to point out similarities to the various lines of thought in game theory.
NASA Astrophysics Data System (ADS)
Sur, Chiranjib; Shukla, Anupam
2018-03-01
The Bacteria Foraging Optimisation Algorithm is a collective-behaviour-based meta-heuristic search that depends on the social influence of the bacteria co-agents in the search space of the problem. The algorithm faces tremendous hindrance when applied to discrete and graph-based problems because of its biased mathematical modelling and dynamic structure. This has been the key motivation for introducing its discrete form, the Discrete Bacteria Foraging Optimisation (DBFO) Algorithm, for discrete problems, which in real life outnumber the continuous-domain problems represented by mathematical and numerical equations. In this work, we simulate a graph-based multi-objective road optimisation problem and discuss the prospect of applying DBFO to other similar optimisation and graph-based problems. The various solution representations that DBFO can handle are also discussed. The implications and dynamics of the parameters used in DBFO are illustrated from the point of view of the problems, combining both exploration and exploitation. The results of DBFO are compared with the Ant Colony Optimisation and Intelligent Water Drops algorithms. An important feature of DBFO is that the bacteria agents do not depend on local heuristic information but estimate new exploration schemes from previous experience and covered-path analysis. This makes the algorithm better at generating combinations for graph-based problems and NP-hard problems.
Global reaction mechanism for the auto-ignition of full boiling range gasoline and kerosene fuels
NASA Astrophysics Data System (ADS)
Vandersickel, A.; Wright, Y. M.; Boulouchos, K.
2013-12-01
Compact reaction schemes capable of predicting auto-ignition are a prerequisite for the development of strategies to control and optimise homogeneous charge compression ignition (HCCI) engines. In particular, a tremendous demand exists in the engine development community for schemes covering full boiling range fuels exhibiting two-stage ignition. The present paper therefore meticulously assesses a previous 7-step reaction scheme developed to predict auto-ignition for four hydrocarbon blends and proposes an important extension of the model-constant optimisation procedure, allowing the model to capture not only ignition delays but also the evolutions of representative intermediates and heat release rates for a variety of full boiling range fuels. Additionally, an extensive validation of the latter evolutions against various detailed n-heptane reaction mechanisms from the literature is presented, both for perfectly homogeneous and for non-premixed/stratified HCCI conditions. Finally, the model's potential to simulate the auto-ignition of various full boiling range fuels is demonstrated by means of experimental shock tube data for six strongly differing fuels, containing e.g. up to 46.7% cyclo-alkanes, 20% naphthalenes, or complex branched aromatics such as methyl- or ethyl-naphthalene. The good predictive capability observed for each of the validation cases, as well as the successful parameterisation for each of the six fuels, indicates that the model could, in principle, be applied to any hydrocarbon fuel, provided that suitable adjustments to the model parameters are carried out. Combined with the optimisation strategy presented, the model therefore constitutes a major step towards the inclusion of real-fuel kinetics into full-scale HCCI engine simulations.
Dynamical analysis of a fractional SIR model with birth and death on heterogeneous complex networks
NASA Astrophysics Data System (ADS)
Huo, Jingjing; Zhao, Hongyong
2016-04-01
In this paper, a fractional SIR model with birth and death rates on heterogeneous complex networks is proposed. Firstly, we obtain a threshold value R0 based on the existence of the endemic equilibrium point E∗, which completely determines the dynamics of the model. Secondly, by using a Lyapunov function and Kirchhoff's matrix tree theorem, the global asymptotic stability of the disease-free equilibrium point E0 and the endemic equilibrium point E∗ of the model is investigated. That is, when R0 < 1, the disease-free equilibrium point E0 is globally asymptotically stable and the disease always dies out; when R0 > 1, the disease-free equilibrium point E0 becomes unstable, and there exists a unique endemic equilibrium point E∗, which is globally asymptotically stable, so the disease is uniformly persistent. Finally, the effects of various immunization schemes are studied and compared. Numerical simulations are given to demonstrate the main results.
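The R0 threshold described in this abstract can be illustrated with a minimal sketch. The code below integrates a classic integer-order, mean-field SIR model with births and deaths, not the fractional, network-structured model of the paper, and all parameter values (beta, gamma, mu) are illustrative assumptions:

```python
# Hedged sketch: the R0 threshold in a mean-field SIR model with birth/death.
# S' = mu - beta*S*I - mu*S,  I' = beta*S*I - (gamma + mu)*I,  R0 = beta/(gamma+mu).
# Below threshold (R0 < 1) the infection dies out; above it, it persists.

def simulate_sir(beta, gamma, mu, s0=0.99, i0=0.01, dt=0.01, steps=200000):
    """Forward-Euler integration; returns final (S, I) fractions."""
    s, i = s0, i0
    for _ in range(steps):
        ds = mu - beta * s * i - mu * s
        di = beta * s * i - (gamma + mu) * i
        s += dt * ds
        i += dt * di
    return s, i

def r0(beta, gamma, mu):
    return beta / (gamma + mu)

_, i_low = simulate_sir(beta=0.2, gamma=0.3, mu=0.05)   # R0 ~ 0.57: dies out
_, i_high = simulate_sir(beta=1.0, gamma=0.3, mu=0.05)  # R0 ~ 2.86: endemic
```

For the endemic case the infected fraction settles near mu*(1 - 1/R0)/(beta/R0), the standard endemic equilibrium of this mean-field model.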
Garnier, Cédric; Mounier, Stéphane; Benaïm, Jean Yves
2004-10-01
The behaviour of natural organic matter (NOM) towards protons is an important parameter for understanding the fate of NOM in the environment. Moreover, it is necessary to determine the acid-base properties of NOM before investigating trace-metal complexation by natural organic matter. This work focuses on the possibility of determining these acid-base properties by accurate yet simple titrations, even at low organic matter concentrations. The experiments were conducted on concentrated and diluted solutions of humic and fulvic acids extracted from the Laurentian River, on concentrated and diluted model solutions of well-known simple molecules (acetic and phenolic acids), and on natural samples from the Seine river (France) that were not pre-concentrated. Titration experiments were modelled with a discrete six-acidic-site model, except for the model solutions. The modelling software used, PROSECE (Programme d'Optimisation et de SpEciation Chimique dans l'Environnement), was developed in our laboratory and is based on mass-balance equilibrium resolution. The results obtained on extracted organic matter and model solutions point to a threshold value for a confident determination of the acid-base properties of the studied organic matter. They also show an aberrant decrease of the carboxylic/phenolic ratio with increasing sample dilution. This shift is neither due to any conformational effect, since it is also observed in the model solutions, nor to ionic-strength variations, which were controlled during all experiments. Instead, it could result from an electrode malfunction at basic pH values, whose effect is amplified at low total concentrations of acidic sites. Under our conditions, the limit for a correct modelling of NOM acid-base properties is therefore defined as 0.04 meq of total analysed acidic-site concentration. As for the natural samples analysed, their high acidic-site content makes it possible to model their behaviour despite the low organic carbon concentration.
Particle orbits in two-dimensional equilibrium models for the magnetotail
NASA Technical Reports Server (NTRS)
Karimabadi, H.; Pritchett, P. L.; Coroniti, F. V.
1990-01-01
Assuming that an equilibrium state exists for the magnetotail, particle orbits are investigated in two-dimensional kinetic equilibrium models of the magnetotail. Particle orbits in the equilibrium field are compared with those calculated earlier with one-dimensional models, where the main component of the magnetic field (Bx) was approximated as either a hyperbolic tangent or a linear function of z, with the normal field (Bz) assumed constant. The particle orbits calculated with the two types of models are found to differ significantly, mainly because the one-dimensional fields neglect the variation of Bx with x.
Optimisation of Critical Infrastructure Protection: The SiVe Project on Airport Security
NASA Astrophysics Data System (ADS)
Breiing, Marcus; Cole, Mara; D'Avanzo, John; Geiger, Gebhard; Goldner, Sascha; Kuhlmann, Andreas; Lorenz, Claudia; Papproth, Alf; Petzel, Erhard; Schwetje, Oliver
This paper outlines the scientific goals, ongoing work and first results of the SiVe research project on critical infrastructure security. The methodology is generic, while pilot studies are chosen from airport security. The outline proceeds in three major steps: (1) building a threat scenario, (2) development of simulation models as scenario refinements, and (3) assessment of alternatives. Advanced techniques of systems analysis and simulation are employed to model relevant airport structures and processes as well as offences. Computer experiments are carried out to compare and optimise alternative solutions. The optimality analyses draw on approaches to quantitative risk assessment recently developed in the operational sciences. To exploit the advantages of the various techniques, an integrated simulation workbench is built up in the project.
Vogt, Winnie
2014-01-01
Milrinone is the drug of choice for the treatment and prevention of low cardiac output syndrome (LCOS) in paediatric patients after open heart surgery across Europe. Discrepancies among prescribing guidance, clinical studies and practice patterns, however, require clarification to ensure safe and effective prescribing, and the clearance prediction equations derived from classical pharmacokinetic modelling provide limited support, as they have recently failed a clinical practice evaluation. The objective of this study was therefore to evaluate current milrinone dosing using physiologically based pharmacokinetic (PBPK) modelling and simulation, to complement the existing pharmacokinetic knowledge and to propose optimised dosing regimens as a basis for improving the standard of care for paediatric patients. A PBPK drug-disease model using a population approach was developed in three steps, from healthy young adults to adult patients and then to paediatric patients with and without LCOS after open heart surgery. Pre- and postoperative organ function values from adult and paediatric patients were collected from the literature and integrated into a disease model as factorial changes from the reference values in healthy adults aged 20-40 years. The disease model was combined with the PBPK drug model and evaluated against existing pharmacokinetic data. Model robustness was assessed by parametric sensitivity analysis. In the next step, virtual patient populations were created, each with 1,000 subjects, reflecting the average adult and paediatric patient characteristics with regard to age, sex, bodyweight and height. They were integrated into the PBPK drug-disease model to evaluate the effectiveness of current milrinone dosing in achieving the therapeutic target range of 100-300 ng/mL milrinone in plasma. Optimised dosing regimens were subsequently developed.
The pharmacokinetics of milrinone in healthy young adults as well as adult and paediatric patients were accurately described, with an average fold error of 1.1 ± 0.1 (mean ± standard deviation) and a mean relative deviation of 1.5 ± 0.3 as measures of bias and precision, respectively. Normalised maximum sensitivity coefficients for model input parameters ranged from -0.84 to 0.71, indicating model robustness. The evaluation of milrinone dosing across different paediatric age groups showed a non-linear age dependence of total plasma clearance, and exposure differences of a factor of 1.4 between patients with and without LCOS for a fixed dosing regimen. None of the currently used dosing regimens for milrinone achieved the therapeutic target range across all paediatric age groups and adult patients, so optimised dosing regimens that account for the age-dependent and pathophysiological differences were developed. The PBPK drug-disease model for milrinone in paediatric patients with and without LCOS after open heart surgery highlights that age, disease and surgery each impact the pharmacokinetics of milrinone differently, and that current milrinone dosing for LCOS is suboptimal for maintaining the therapeutic target range across the entire paediatric age range. Thus, optimised dosing strategies are proposed to ensure safe and effective prescribing.
IS THE SIZE DISTRIBUTION OF URBAN AEROSOLS DETERMINED BY THERMODYNAMIC EQUILIBRIUM? (R826371C005)
A size-resolved equilibrium model, SELIQUID, is presented and used to simulate the size–composition distribution of semi-volatile inorganic aerosol in an urban environment. The model uses the efflorescence branch of aerosol behavior to predict the equilibrium partitioni...
Prediction of U-Mo dispersion nuclear fuels with Al-Si alloy using artificial neural network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Susmikanti, Mike, E-mail: mike@batan.go.id; Sulistyo, Jos, E-mail: soj@batan.go.id
2014-09-30
Dispersion nuclear fuels, consisting of U-Mo particles dispersed in an Al-Si matrix, are being developed as fuel for research reactors. The equilibrium relationships of a mixture's components can be expressed in a phase diagram, and it is important to analyse whether a mixture is in the equilibrium phase or in another phase. The purpose of this research is to build a model of the phase diagram that identifies whether the mixture is in the stable or the melting condition. An artificial neural network (ANN) is a modelling tool for processes involving multivariable non-linear relationships. The objective of the present work is to develop a code, based on artificial neural network models, for the equilibrium relationship of U-Mo in an Al-Si matrix. This model can be used to predict the type of the resulting mixture and whether a given point lies on the equilibrium phase or in another phase region. The equilibrium model data for prediction and modelling were generated from experimental data. An artificial neural network with the resilient backpropagation method was chosen to predict the dispersion of the U-Mo nuclear fuel in the Al-Si matrix. The code was built with several functions in MATLAB, and for the ANN simulations the Levenberg-Marquardt method was also used for optimisation. The resulting artificial neural network is able to predict whether a point is in the equilibrium phase or in another phase region.
Profiles of equilibrium constants for self-association of aromatic molecules
NASA Astrophysics Data System (ADS)
Beshnova, Daria A.; Lantushenko, Anastasia O.; Davies, David B.; Evstigneev, Maxim P.
2009-04-01
Analysis of the noncovalent, noncooperative self-association of identical aromatic molecules assumes that the equilibrium self-association constants are either independent of the number of molecules (the EK-model) or change progressively with increasing aggregation (the AK-model). The dependence of the self-association constant on the number of molecules in the aggregate (i.e., the profile of the equilibrium constant) was derived empirically in the AK-model but, to give the profile some physical grounding, it is proposed that the sources of attenuation of the equilibrium constant are the loss of translational and rotational degrees of freedom, the ordering of molecules in the aggregates and, for charged units, the electrostatic contribution. Expressions are derived for the profiles of the equilibrium constants for both neutral and charged molecules. Although the EK-model has been widely used in the analysis of experimental data, it is shown in this work that the derived equilibrium constant, KEK, depends on the concentration range used and hence on the experimental method employed. A relationship is also demonstrated between the equilibrium constant KEK and the real dimerization constant, KD, which shows that the value of KEK is always lower than KD.
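The EK (isodesmic) model referred to above can be sketched concretely: with every stepwise constant equal to K, the aggregate concentrations are [A_n] = K^(n-1) m^n for monomer concentration m, and the total concentration satisfies c_tot = m/(1 - Km)^2. The values of K and c_tot below are illustrative, not taken from the paper:

```python
# Hedged sketch of the EK (equal-K, isodesmic) self-association model.

def monomer_conc(c_tot, K, tol=1e-12):
    """Solve c_tot = m/(1 - K*m)**2 for the monomer concentration m
    by bisection on [0, 1/K); the left side is monotone increasing."""
    lo, hi = 0.0, (1.0 - 1e-9) / K
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid / (1.0 - K * mid) ** 2 < c_tot:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def size_distribution(c_tot, K, n_max=10):
    """Concentrations [A_1], ..., [A_n_max] under the EK model."""
    m = monomer_conc(c_tot, K)
    return [K ** (n - 1) * m ** n for n in range(1, n_max + 1)]

dist = size_distribution(c_tot=1e-3, K=1000.0)  # illustrative values
```

The mass balance sum of n·[A_n] recovers c_tot up to the truncation at n_max, and the distribution decays geometrically with ratio Km < 1, which is the signature of the equal-constant assumption.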
Near-equilibrium dumb-bell-shaped figures for cohesionless small bodies
NASA Astrophysics Data System (ADS)
Descamps, Pascal
2016-02-01
In a previous paper (Descamps, P. [2015]. Icarus 245, 64-79), we developed a specific method aimed at retrieving the main physical characteristics (shape, density, surface scattering properties) of highly elongated bodies from their rotational lightcurves through the use of dumb-bell-shaped equilibrium figures. The present work is a test of this method. For that purpose we introduce near-equilibrium dumb-bell-shaped figures, which are base dumb-bell equilibrium shapes modulated by lognormal statistics. Such synthetic irregular models are used to generate lightcurves to which our method is successfully applied. Shape statistical parameters of such near-equilibrium dumb-bell-shaped objects are in good agreement with those calculated, for example, for the asteroid (216) Kleopatra from its dog-bone radar model. This suggests that such bilobed, elongated asteroids can be approximated by equilibrium figures perturbed by the interplay with a substantial internal friction, modelled by a Gaussian random sphere.
Dynamical behaviors of inter-out-of-equilibrium state intervals in Korean futures exchange markets
NASA Astrophysics Data System (ADS)
Lim, Gyuchang; Kim, SooYong; Kim, Kyungsik; Lee, Dong-In; Scalas, Enrico
2008-05-01
A recently discovered feature of financial markets, the two-phase phenomenon, is utilized to categorize a financial time series into two phases, namely equilibrium and out-of-equilibrium states. For out-of-equilibrium states, we analyze the time intervals at which the state is revisited. The power-law distribution of inter-out-of-equilibrium state intervals is shown, and we present an analogy with discrete-time heat bath dynamics, similar to random Ising systems. In the mean-field approximation, this model reduces to a one-dimensional multiplicative process. By varying global and local model parameters, the relationship between volatilities in financial markets and the interaction strengths between agents in the Ising model is investigated and discussed.
Haworth, Annette; Mears, Christopher; Betts, John M; Reynolds, Hayley M; Tack, Guido; Leo, Kevin; Williams, Scott; Ebert, Martin A
2016-01-07
Treatment plans for ten patients, initially treated with a conventional approach to low dose-rate brachytherapy (LDR, 145 Gy to the entire prostate), were compared with plans for the same patients created with an inverse-optimisation planning process utilising a biologically based objective. The 'biological optimisation' considered a non-uniform distribution of tumour cell density through the prostate based on known and expected locations of the tumour. Using dose-planning objectives derived from our previous biological-model validation study, the volume of the urethra receiving 125% of the conventional prescription (145 Gy) was reduced from a median value of 64% to less than 8%, whilst maintaining high values of tumour control probability (TCP). On average, the number of planned seeds was reduced from 85 to less than 75. The robustness of plans to random seed displacements needs to be carefully considered when using contemporary seed placement techniques. We conclude that an inverse planning approach to LDR treatments, based on a biological objective, has the potential to maintain high rates of tumour control whilst minimising dose to healthy tissue. In future, the radiobiological model will be informed using multi-parametric MRI to provide a personalised-medicine approach.
NASA Astrophysics Data System (ADS)
Liou, Cheng-Dar
2015-09-01
This study investigates an infinite-capacity Markovian queue with a single unreliable service station, in which customers may balk (not enter) and renege (leave the queue after entering). The unreliable service station is subject to working breakdowns even when no customers are in the system. The matrix-analytic method is used to compute the steady-state probabilities for the number of customers, the rate matrix and the stability condition of the system. A single-objective model for cost and a bi-objective model for cost and expected waiting time are derived to fit practical applications. The particle swarm optimisation algorithm is implemented to find the optimal combinations of parameters that minimise cost. Two different approaches to identifying the Pareto optimal set are used and compared: the epsilon-constraint method and the non-dominated sorting genetic algorithm. The comparison favours the traditional epsilon-constraint method, which is computationally faster and permits a direct sensitivity analysis of the solution under constraint or parameter perturbation. The Pareto front and the set of non-dominated solutions are obtained and illustrated; decision makers can use these to improve their decision-making quality.
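The epsilon-constraint method mentioned in this abstract can be sketched in miniature: minimise one objective subject to the other being bounded by eps, then sweep eps to trace the Pareto front. The two objectives below are illustrative stand-ins for "cost" and "expected waiting time" as functions of a service-rate-like decision variable, not the paper's queueing model:

```python
# Hedged sketch of the epsilon-constraint method for a bi-objective problem.

def f1(x):
    """Illustrative 'operating cost': grows with the decision variable."""
    return x * x

def f2(x):
    """Illustrative 'expected waiting time': shrinks with the decision variable."""
    return 1.0 / x

def epsilon_constraint(eps, grid=None):
    """Return the f1-minimising grid point x subject to f2(x) <= eps."""
    if grid is None:
        grid = [0.01 * k for k in range(1, 1001)]  # x in (0, 10]
    feasible = [x for x in grid if f2(x) <= eps]
    return min(feasible, key=f1) if feasible else None

# Sweeping eps yields one Pareto-optimal point per constraint level.
pareto = [epsilon_constraint(eps) for eps in (0.5, 1.0, 2.0)]
```

Tightening the waiting-time bound (smaller eps) forces a larger, costlier x, which is exactly the cost/waiting-time trade-off the Pareto front describes.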
Subspace Compressive GLRT Detector for MIMO Radar in the Presence of Clutter.
Bolisetti, Siva Karteek; Patwary, Mohammad; Ahmed, Khawza; Soliman, Abdel-Hamid; Abdel-Maguid, Mohamed
2015-01-01
The problem of optimising the target detection performance of MIMO radar in the presence of clutter is considered. The increased false alarm rate caused by clutter returns is known to seriously degrade the target detection performance of a radar detector, especially under low-SNR conditions. In this paper, a mathematical model is proposed to optimise the target detection performance of a MIMO radar detector in the presence of clutter. The number of samples that a radar target detector must process regulates the processing burden incurred in achieving a given detection reliability. While the Subspace Compressive GLRT (SSC-GLRT) detector is known to give optimised radar target detection performance with reduced computational complexity, it suffers a significant deterioration in detection performance in the presence of clutter. We provide evidence that the proposed mathematical model for the SSC-GLRT detector outperforms existing detectors in the presence of clutter, and we present a performance analysis of the existing detectors and the proposed SSC-GLRT detector for MIMO radar in the presence of clutter.
Optimisation of active suspension control inputs for improved vehicle ride performance
NASA Astrophysics Data System (ADS)
Čorić, Mirko; Deur, Joško; Xu, Li; Tseng, H. Eric; Hrovat, Davor
2016-07-01
A collocation-type control variable optimisation method is used in the paper to analyse the extent to which fully active suspension (FAS) can improve vehicle ride comfort while preserving wheel holding ability. The method is first applied to a cosine-shaped bump road disturbance of different heights, for both quarter-car and full 10 degree-of-freedom vehicle models. A nonlinear anti-wheel-hop constraint is considered, and the influence of the bump preview time period is analysed. The analysis is then extended to square- or cosine-shaped potholes of different lengths, using the quarter-car model. In this case, the cost function is extended with FAS energy consumption and wheel damage resilience costs. The FAS action is found to be such as to make the wheel hop over the pothole, in order to avoid or minimise the damage at the pothole trailing edge. For a long pothole, when the FAS cannot provide the wheel hop, the wheel travels over the pothole bottom and then hops over the pothole trailing edge. The numerical optimisation results are accompanied by a simplified algebraic analysis.
Application of the adjoint optimisation of shock control bump for ONERA-M6 wing
NASA Astrophysics Data System (ADS)
Nejati, A.; Mazaheri, K.
2017-11-01
This article is devoted to the numerical investigation of the shock wave/boundary layer interaction (SWBLI) as the main factor influencing the aerodynamic performance of transonic bumped airfoils and wings. The numerical analysis is conducted for the ONERA-M6 wing through a shock control bump (SCB) shape optimisation process using the adjoint optimisation method. SWBLI is analyzed for both clean and bumped airfoils and wings, and it is shown how the modified wave structure originating from upstream of the SCB reduces the wave drag, by improving the boundary layer velocity profile downstream of the shock wave. The numerical simulation of the turbulent viscous flow and a gradient-based adjoint algorithm are used to find the optimum location and shape of the SCB for the ONERA-M6 airfoil and wing. Two different geometrical models are introduced for the 3D SCB, one with linear variations, and another with periodic variations. Both configurations result in drag reduction and improvement in the aerodynamic efficiency, but the periodic model is more effective. Although the three-dimensional flow structure involves much more complexities, the overall results are shown to be similar to the two-dimensional case.
Computer-aided diagnosis of melanoma using border and wavelet-based texture analysis.
Garnavi, Rahil; Aldeen, Mohammad; Bailey, James
2012-11-01
This paper presents a novel computer-aided diagnosis system for melanoma. The novelty lies in the optimised selection and integration of features derived from textural, border-based and geometrical properties of the melanoma lesion. The texture features are derived from wavelet decomposition, the border features from constructing a boundary-series model of the lesion border and analysing it in the spatial and frequency domains, and the geometry features from shape indexes. The optimised selection of features is achieved with the gain-ratio method, which is shown to be computationally efficient for the melanoma diagnosis application. Classification uses four classifiers, namely Support Vector Machine, Random Forest, Logistic Model Tree and Hidden Naive Bayes. The proposed diagnostic system is applied to a set of 289 dermoscopy images (114 malignant, 175 benign) partitioned into train, validation and test image sets, and achieves an accuracy of 91.26% and an AUC value of 0.937 when 23 features are used. Other important findings include (i) the clear advantage gained by complementing texture with border and geometry features, compared to using texture information only, and (ii) the higher contribution of texture features than border-based features in the optimised feature set.
Scarborough, Peter; Kaur, Asha; Cobiac, Linda; Owens, Paul; Parlesak, Alexandr; Sweeney, Kate; Rayner, Mike
2016-12-21
To model food group consumption and price of diet associated with achieving UK dietary recommendations while deviating as little as possible from the current UK diet, in order to support the redevelopment of the UK food-based dietary guidelines (now called the Eatwell Guide). Optimisation modelling, minimising an objective function of the difference between population mean modelled and current consumption of 125 food groups, and constraints of nutrient and food-based recommendations. The UK. Adults aged 19 years and above from the National Diet and Nutrition Survey 2008-2011. Proportion of diet consisting of major foods groups and price of the optimised diet. The optimised diet has an increase in consumption of 'potatoes, bread, rice, pasta and other starchy carbohydrates' (+69%) and 'fruit and vegetables' (+54%) and reductions in consumption of 'beans, pulses, fish, eggs, meat and other proteins' (-24%), 'dairy and alternatives' (-21%) and 'foods high in fat and sugar' (-53%). Results within food groups show considerable variety (eg, +90% for beans and pulses, -78% for red meat). The modelled diet would cost £5.99 (£5.93 to £6.05) per adult per day, very similar to the cost of the current diet: £6.02 (£5.96 to £6.08). The optimised diet would result in increased consumption of n-3 fatty acids and most micronutrients (including iron and folate), but decreased consumption of zinc and small decreases in consumption of calcium and riboflavin. To achieve the UK dietary recommendations would require large changes in the average diet of UK adults, including in food groups where current average consumption is well within the recommended range (eg, processed meat) or where there are no current recommendations (eg, dairy). These large changes in the diet will not lead to significant changes in the price of the diet.
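The optimisation idea in the diet-modelling abstract above, minimising deviation from the current diet subject to dietary constraints, can be sketched in miniature. Two hypothetical food groups and a single fibre constraint stand in for the 125 food groups and the full nutrient recommendations; every number below is an illustrative assumption, not from the study:

```python
# Hedged toy version of constrained diet optimisation: minimise squared
# deviation from current consumption subject to a nutrient lower bound.

current = {"starchy": 200.0, "sugary": 150.0}    # g/day, illustrative
fibre_per_g = {"starchy": 0.05, "sugary": 0.0}   # g fibre per g food, illustrative
FIBRE_MIN = 15.0                                  # g/day, illustrative constraint

def objective(diet):
    """Squared deviation of the modelled diet from the current diet."""
    return sum((diet[k] - current[k]) ** 2 for k in current)

def optimise(step=1.0):
    """Exhaustive grid search over feasible diets (fine for two variables)."""
    best, best_val = None, float("inf")
    for s in range(0, 601):       # starchy grams, 0..600
        for g in range(0, 301):   # sugary grams, 0..300
            diet = {"starchy": s * step, "sugary": g * step}
            fibre = sum(fibre_per_g[k] * diet[k] for k in diet)
            if fibre >= FIBRE_MIN and objective(diet) < best_val:
                best, best_val = diet, objective(diet)
    return best

best = optimise()
```

The solution raises only the starchy group, just enough to satisfy the fibre constraint, and leaves the unconstrained group untouched, which mirrors the study's principle of deviating as little as possible from the current diet. A real instance with 125 food groups would use a quadratic-programming or linear-programming solver rather than a grid search.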
Xiao, Fuyuan; Aritsugi, Masayoshi; Wang, Qing; Zhang, Rong
2016-09-01
For efficient and sophisticated analysis of complex event patterns that appear in streams of big data from health care information systems, and to support decision-making, a triaxial hierarchical model is proposed in this paper. Our triaxial hierarchical model is developed by focusing on hierarchies among nested event pattern queries with an event concept hierarchy, allowing us to identify the relationships among the expressions and sub-expressions of the queries extensively. We devise a cost-based heuristic by means of the triaxial hierarchical model to find an optimised query execution plan in terms of the costs of both the operators and the communications between them. Using the triaxial hierarchical model, we can also calculate how to reuse the results of common sub-expressions across multiple queries. By integrating the optimised query execution plan with the reuse schemes, a multi-query optimisation strategy is developed to process multiple nested event pattern queries efficiently. We present empirical studies in which the performance of the multi-query optimisation strategy was examined under various stream input rates and workloads. Specifically, the pattern query workloads can represent monitoring of patients' conditions, while varying stream input rates correspond to changes in the number of patients a system must manage, with burst input rates corresponding to rushes of patients to be taken care of. The experimental results show that our proposal improves throughput by factors of about 4 and 2 over the two related works in Workload 1, by factors of about 3 and 2 in Workload 2, and by a factor of about 6 over the related work in Workload 3.
The experimental results demonstrate that our proposal can process complex queries efficiently, supporting health information systems and further decision-making.
Biomechanical stability analysis of the lambda-model controlling one joint.
Lan, L; Zhu, K Y
2007-06-01
Computer modeling and control of the human motor system might be helpful for understanding its mechanisms and for the diagnosis and treatment of neuromuscular disorders. In this paper, a brief view of the equilibrium point hypothesis for human motor system modeling is given, and the lambda-model derived from this hypothesis is studied. The stability of the lambda-model is investigated based on its equilibrium and Jacobian matrix. The results obtained in this paper suggest that the lambda-model is stable and has a unique equilibrium point under certain conditions.
Modelling the effect of immigration on drinking behaviour.
Xiang, Hong; Zhu, Cheng-Cheng; Huo, Hai-Feng
2017-12-01
A drinking model with immigration is constructed. With problem-drinking immigration, the model admits only one equilibrium, a problem-drinking equilibrium. Without problem-drinking immigration, the model has two equilibria: a problem-drinking-free equilibrium and a problem-drinking equilibrium. By employing the Lyapunov function method, the stability of all the equilibria is obtained. Numerical simulations are also provided to illustrate the analytical results. Our results show that immigrants who are problem drinkers increase the difficulty of temperance work in the region.
Insights: Simple Models for Teaching Equilibrium and Le Chatelier's Principle.
ERIC Educational Resources Information Center
Russell, Joan M.
1988-01-01
Presents three models that have been effective for teaching chemical equilibrium and Le Chatelier's principle: (1) the liquid transfer model, (2) the fish model, and (3) the teeter-totter model. Explains each model and its relation to Le Chatelier's principle. (MVL)
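The liquid transfer model described in this teaching abstract has a simple computational analogue: each round, one beaker transfers a fixed fraction of its water forward and the other returns a fixed fraction back, so the volumes settle at a steady state set by the ratio of the two fractions, mimicking the forward and reverse rates of a chemical equilibrium. The fractions and volumes below are illustrative:

```python
# Hedged sketch of the "liquid transfer" classroom model of equilibrium.

def liquid_transfer(a=100.0, b=0.0, kf=0.2, kr=0.1, rounds=200):
    """Each round, A sends fraction kf of its volume to B and B sends
    fraction kr back; at steady state kf*a == kr*b, so b/a -> kf/kr."""
    for _ in range(rounds):
        forward, reverse = kf * a, kr * b
        a += reverse - forward
        b += forward - reverse
    return a, b

a_eq, b_eq = liquid_transfer()
```

Le Chatelier's principle can then be demonstrated by perturbing the steady state (e.g. pouring extra water into one beaker) and iterating again: the system relaxes back to the same b/a ratio, just as an equilibrium shifts to counteract a disturbance.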
NASA Astrophysics Data System (ADS)
Akıner, Tolga; Mason, Jeremy; Ertürk, Hakan
2017-11-01
The thermal properties of the TIP3P and TIP5P water models are investigated using equilibrium and non-equilibrium molecular dynamics techniques in the presence of solid surfaces. The performance of the non-equilibrium technique for rigid molecules is found to depend significantly on the distribution of atomic degrees of freedom. An improved approach to distribute atomic degrees of freedom is proposed for which the thermal conductivity of the TIP5P model agrees more closely with equilibrium molecular dynamics and experimental results than the existing state of the art.
Vapor-liquid equilibrium thermodynamics of N2 + CH4 - Model and Titan applications
NASA Technical Reports Server (NTRS)
Thompson, W. R.; Zollweg, John A.; Gabis, David H.
1992-01-01
A thermodynamic model is presented for vapor-liquid equilibrium in the N2 + CH4 system, which enters calculations of vapor-liquid equilibrium in Titan's tropospheric clouds. The model imposes consistency constraints on experimental equilibrium data and captures temperature effects by encompassing enthalpy data; it readily yields the saturation criteria, condensate composition, and latent heat for a given pressure-temperature profile of the Titan atmosphere. The N2 content of the condensate is about half of that computed from Raoult's law, and about 30 percent greater than that computed from Henry's law.
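The Raoult and Henry limits quoted above bracket the modelled N2 content, and the two estimates themselves are one-line formulas. The sketch below shows the arithmetic only; the partial pressure and the saturation/Henry constants are illustrative placeholders, not Titan values from the paper:

```python
# Hedged back-of-envelope comparison of the two ideal-solution limits for
# the liquid N2 mole fraction in an N2 + CH4 condensate.

def x_raoult(p_n2, p_sat_n2):
    """Raoult's law: p_N2 = x_N2 * P_sat,N2, so x_N2 = p_N2 / P_sat,N2."""
    return p_n2 / p_sat_n2

def x_henry(p_n2, k_henry):
    """Henry's law: p_N2 = x_N2 * K_H, so x_N2 = p_N2 / K_H."""
    return p_n2 / k_henry

P_N2 = 1.2       # bar, assumed N2 partial pressure (placeholder)
P_SAT_N2 = 8.0   # bar, assumed pure-N2 saturation pressure (placeholder)
K_HENRY = 10.0   # bar, assumed Henry constant (placeholder)

x_r = x_raoult(P_N2, P_SAT_N2)
x_h = x_henry(P_N2, K_HENRY)
```

With K_H > P_sat, the Henry estimate is the smaller of the two, and a non-ideal model such as the paper's lands between them: about half the Raoult value and roughly 30 percent above the Henry value.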
NASA Astrophysics Data System (ADS)
Bagheri, H.; Sadjadi, S. Y.; Sadeghian, S.
2013-09-01
Three-dimensional modelling of the Earth's surface is one of the most significant tools in many engineering projects and has numerous applications in Geospatial Information Systems (GIS), e.g. the creation of Digital Terrain Models (DTM). DTMs are widely used in science, engineering, design and project administration. A key step in the DTM technique is the interpolation of elevations to create a continuous surface. Several interpolation methods exist, and their results depend on the environmental conditions and the input data. In this study, the usual interpolation methods, polynomial fitting and Inverse Distance Weighting (IDW), are optimised using Artificial Intelligence (AI) techniques, namely Genetic Algorithms (GA) and Artificial Neural Networks (ANN), to produce a Digital Elevation Model (DEM). The aim is to evaluate the accuracy of the interpolation methods. Universal interpolation over neighbouring regions can be suggested for larger regions, which can be divided into smaller ones. The results obtained from applying GA and ANN individually are compared with the typical interpolation methods for elevation creation. The results show that AI methods have high potential for the interpolation of elevations, and that ANN-based interpolation combined with GA optimisation of the IDW method can estimate elevations with high precision.
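The IDW step at the heart of the optimised method can be sketched as follows. This is a generic textbook IDW, not the authors' implementation; the power exponent is exactly the kind of parameter a GA could be asked to tune:

```python
import math

def idw_interpolate(samples, x, y, power=2.0):
    """Inverse Distance Weighting: z(x, y) = sum(w_i * z_i) / sum(w_i),
    with weights w_i = d_i ** -power for distance d_i to sample i."""
    num, den = 0.0, 0.0
    for sx, sy, sz in samples:
        d = math.hypot(x - sx, y - sy)
        if d == 0.0:
            return sz  # query point coincides with a sample: return it exactly
        w = d ** -power
        num += w * sz
        den += w
    return num / den
```

For example, a point midway between two samples of elevation 10 and 20 receives equal weights and interpolates to 15.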
Out-of-equilibrium relaxation of the thermal Casimir effect in a model polarizable material
NASA Astrophysics Data System (ADS)
Dean, David S.; Démery, Vincent; Parsegian, V. Adrian; Podgornik, Rudolf
2012-03-01
Relaxation of the thermal Casimir or van der Waals force (the high temperature limit of the Casimir force) for a model dielectric medium is investigated. We start with a model of interacting polarization fields with a dynamics that leads to a frequency dependent dielectric constant of the Debye form. In the static limit, the usual zero frequency Matsubara mode component of the Casimir force is recovered. We then consider the out-of-equilibrium relaxation of the van der Waals force to its equilibrium value when two initially uncorrelated dielectric bodies are brought into sudden proximity. For the interaction between dielectric slabs, it is found that the spatial dependence of the out-of-equilibrium force is the same as the equilibrium one, but it has a time dependent amplitude, or Hamaker coefficient, which increases in time to its equilibrium value. The final relaxation of the force to its equilibrium value is exponential in systems with a single or finite number of polarization field relaxation times. However, in systems, such as those described by the Havriliak-Negami dielectric constant with a broad distribution of relaxation times, we observe a much slower power law decay to the equilibrium value.
NASA Astrophysics Data System (ADS)
van Daal-Rombouts, Petra; Sun, Siao; Langeveld, Jeroen; Bertrand-Krajewski, Jean-Luc; Clemens, François
2016-07-01
Optimisation and real time control (RTC) studies in wastewater systems increasingly require rapid simulations of sewer systems in extensive catchments. To reduce the simulation time, calibrated simplified models are applied, with performance generally judged by the goodness of fit of the calibration. In this research, the performance of three simplified models and a full hydrodynamic (FH) model for two catchments is compared based on the correct determination of CSO event occurrences and of the total volumes discharged to the surface water. Simplified model M1 consists of a rainfall runoff outflow (RRO) model only. M2 combines the RRO model with a static reservoir model for the sewer behaviour. M3 comprises the RRO model and a dynamic reservoir model, whose characteristics were derived from FH model simulations. It was found that M2 and M3 are able to describe the sewer behaviour of the catchments, contrary to M1. The preferred model structure depends on the quality of the information (geometrical database and monitoring data) available for the design and calibration of the model. Finally, calibrated simplified models are shown to be preferable to uncalibrated FH models when performing optimisation or RTC studies.
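The static reservoir idea behind M2 can be sketched as a storage bucket that spills once its capacity is exceeded. All parameters below are hypothetical placeholders, not the calibrated values from the study:

```python
def static_reservoir(inflow, capacity, pump_rate):
    """Static-reservoir sketch of sewer behaviour: storage fills with runoff
    inflow, empties at a fixed pumping rate, and any volume above the storage
    capacity spills as a CSO discharge to surface water.
    Returns the per-step spill volumes for a given inflow series."""
    storage = 0.0
    spills = []
    for q in inflow:
        storage = max(0.0, storage + q - pump_rate)  # inflow minus pumped volume
        spill = max(0.0, storage - capacity)         # overflow above capacity
        storage -= spill
        spills.append(spill)
    return spills
```

The two performance measures compared in the study map directly onto this output: a CSO event occurs whenever a spill is positive, and the total discharged volume is the sum of the spills.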
NASA Astrophysics Data System (ADS)
Tang, Tie-Qiao; Wang, Tao; Chen, Liang; Shang, Hua-Yan
2017-08-01
In this paper, we apply a car-following model, a fuel consumption model, an emission model and an electricity consumption model to explore how energy consumption and emissions influence each commuter's trip cost without late arrival at the equilibrium state. The numerical results show that energy consumption and emissions have significant impacts on this trip cost. The fuel cost and emission cost prominently raise each commuter's trip cost, and the trip cost increases with the number of vehicles, which shows that including the fuel and emission costs in the trip cost destroys the equilibrium state. In contrast, the electricity cost only slightly raises each commuter's trip cost, which remains approximately constant, indicating that including the electricity cost in the trip cost does not destroy the equilibrium state.
NASA Astrophysics Data System (ADS)
Parasyris, Antonios E.; Spanoudaki, Katerina; Kampanis, Nikolaos A.
2016-04-01
Groundwater level monitoring networks provide essential information for water resources management, especially in areas with significant groundwater exploitation for agricultural and domestic use. Given the high maintenance costs of these networks, development of tools, which can be used by regulators for efficient network design is essential. In this work, a monitoring network optimisation tool is presented. The network optimisation tool couples geostatistical modelling based on the Spartan family variogram with a genetic algorithm method and is applied to Mires basin in Crete, Greece, an area of high socioeconomic and agricultural interest, which suffers from groundwater overexploitation leading to a dramatic decrease of groundwater levels. The purpose of the optimisation tool is to determine which wells to exclude from the monitoring network because they add little or no beneficial information to groundwater level mapping of the area. Unlike previous relevant investigations, the network optimisation tool presented here uses Ordinary Kriging with the recently-established non-differentiable Spartan variogram for groundwater level mapping, which, based on a previous geostatistical study in the area leads to optimal groundwater level mapping. Seventy boreholes operate in the area for groundwater abstraction and water level monitoring. The Spartan variogram gives overall the most accurate groundwater level estimates followed closely by the power-law model. The geostatistical model is coupled to an integer genetic algorithm method programmed in MATLAB 2015a. The algorithm is used to find the set of wells whose removal leads to the minimum error between the original water level mapping using all the available wells in the network and the groundwater level mapping using the reduced well network (error is defined as the 2-norm of the difference between the original mapping matrix with 70 wells and the mapping matrix of the reduced well network). 
The solution to the optimisation problem (the best wells to retain in the monitoring network) depends on the total number of wells removed; this number is a management decision. The water level monitoring network of Mires basin has been optimised six times, by removing 5, 8, 12, 15, 20 and 25 wells from the original network. To reach the optimum solution in the minimum computational time, a stall-generations criterion was set for each optimisation scenario. An improvement over the classic genetic algorithm was to vary the mutation and crossover fractions according to the change of the mean fitness value. This introduces randomness into reproduction when the solution converges, to avoid local minima, or more educated reproduction (a higher crossover ratio) when the mean fitness value is changing more strongly. The integer genetic algorithm in MATLAB 2015a restricts the addition of custom selection and crossover-mutation functions; therefore, custom population and crossover-mutation-selection functions were created to set the initial population type to custom and to vary the mutation and crossover probabilities with the convergence of the genetic algorithm, achieving higher accuracy. The application of the network optimisation tool to Mires basin indicates that 25 wells can be removed with relatively small deterioration of the groundwater level map. The results indicate the robustness of the network optimisation tool: wells were removed from high well-density areas while preserving the spatial pattern of the original groundwater level map. Varouchakis, E. A. and D. T. Hristopulos (2013). "Improvement of groundwater level prediction in sparsely gauged basins using physical laws and local geographic features as auxiliary variables." Advances in Water Resources 52: 34-49.
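The error measure defined above (the 2-norm of the difference between the full-network and reduced-network mapping matrices) can be sketched directly. The exhaustive selection below is a stand-in for the integer GA, and the kriging step that would produce the maps is assumed, not reproduced:

```python
import math

def map_error(full_map, reduced_map):
    """Error = 2-norm (Frobenius norm) of the difference between the mapping
    matrix from the full well network and that from a reduced network."""
    return math.sqrt(sum((a - b) ** 2
                         for row_f, row_r in zip(full_map, reduced_map)
                         for a, b in zip(row_f, row_r)))

def pick_removal(full_map, candidate_maps):
    """Stand-in for the GA's selection: among candidate reduced-network maps
    (name -> matrix), return the name with the smallest mapping error."""
    return min(candidate_maps, key=lambda k: map_error(full_map, candidate_maps[k]))
```

In the real tool the GA searches this fitness landscape over subsets of wells rather than enumerating every candidate.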
NASA Astrophysics Data System (ADS)
Hazwan, M. H. M.; Shayfull, Z.; Sharif, S.; Nasir, S. M.; Zainal, N.
2017-09-01
In the injection moulding process, quality and productivity are notably important and must be controlled for each product type. Quality is measured by the extent of warpage of the moulded parts, while productivity is measured by the duration of the moulding cycle. To control quality, many researchers have introduced various optimisation approaches which have been proven to enhance the quality of the moulded part. To improve the productivity of the injection moulding process, some researchers have proposed the application of conformal cooling channels, which have been proven to reduce the moulding cycle time. This paper therefore presents an alternative optimisation approach, Response Surface Methodology (RSM) with Glowworm Swarm Optimisation (GSO), applied to a moulded part with straight-drilled and conformal cooling channel moulds. The study examined the warpage of the moulded parts before and after the optimisation for both cooling channel types. A front panel housing was selected as the specimen, and the performance of the proposed optimisation approach was analysed on conventional straight-drilled cooling channels and on Milled Groove Square Shape (MGSS) conformal cooling channels by simulation analysis using Autodesk Moldflow Insight (AMI) 2013. Based on the results, melt temperature is the most significant factor contributing to warpage for the straight-drilled cooling channels, where warpage was improved by 39.1% after optimisation, while cooling time is the most significant factor for the MGSS conformal cooling channels, where warpage was improved by 38.7%. In addition, the findings show that applying the optimisation to the conformal cooling channels offers better quality and productivity of the moulded part.
Numerical Experiments Based on the Catastrophe Model of Solar Eruptions
NASA Astrophysics Data System (ADS)
Xie, X. Y.; Ziegler, U.; Mei, Z. X.; Wu, N.; Lin, J.
2017-11-01
On the basis of the catastrophe model developed by Isenberg et al., we use the NIRVANA code to perform magnetohydrodynamic (MHD) numerical experiments investigating various behaviours of a coronal magnetic configuration that includes a current-carrying flux rope, used to model a prominence levitating in the corona. These behaviours include the evolution of the equilibrium heights of the flux rope versus the change in the background magnetic field, the corresponding internal equilibrium of the flux rope, the dynamic properties of the flux rope after the system loses equilibrium, and the impact of the referential radius on the equilibrium heights of the flux rope. In our calculations, an empirical model of the coronal density distribution given by Sittler & Guhathakurta is used, and physical diffusion is included. Our experiments show that the simulated equilibrium heights deviate from the theoretical results, but not markedly, and the evolutionary features of the two are similar. If the flux rope is initially located on the stable branch of the theoretical equilibrium curve, it quickly reaches the equilibrium position in the simulation after several rounds of oscillations resulting from the self-adjustment of the system; the flux rope loses equilibrium if its initial location is set at the critical point on the theoretical equilibrium curve. Correspondingly, the internal equilibrium of the flux rope can be reached as well; the deviation from the theoretical results is somewhat apparent, since the small-radius approximation for the flux rope is lifted in our experiments, but this deviation does not affect the global equilibrium of the system. The impact of the referential radius on the equilibrium heights of the flux rope is consistent with the prediction of the theory.
Our calculations indicate that the motion of the flux rope after the loss of equilibrium is consistent with that predicted by the Lin-Forbes model and with observations. Formation of a fast-mode shock ahead of the flux rope is observed in our experiments. The outward motion of the flux rope is smooth, and magnetic energy is continuously converted into other forms of energy because both diffusions are considered in the calculations and magnetic reconnection is allowed to occur successively in the current sheet behind the flux rope.
Yoshida, Wako; Dolan, Ray J.; Friston, Karl J.
2008-01-01
This paper introduces a model of ‘theory of mind’, namely, how we represent the intentions and goals of others to optimise our mutual interactions. We draw on ideas from optimal control and game theory to provide a ‘game theory of mind’. First, we consider the representations of goals in terms of value functions that are prescribed by utility or rewards. Critically, the joint value functions and ensuing behaviour are optimised recursively, under the assumption that I represent your value function, your representation of mine, your representation of my representation of yours, and so on ad infinitum. However, if we assume that the degree of recursion is bounded, then players need to estimate the opponent's degree of recursion (i.e., sophistication) to respond optimally. This induces a problem of inferring the opponent's sophistication, given behavioural exchanges. We show it is possible to deduce whether players make inferences about each other and quantify their sophistication on the basis of choices in sequential games. This rests on comparing generative models of choices with, and without, inference. Model comparison is demonstrated using simulated and real data from a ‘stag-hunt’. Finally, we note that exactly the same sophisticated behaviour can be achieved by optimising the utility function itself (through prosocial utility), producing unsophisticated but apparently altruistic agents. This may be relevant ethologically in hierarchical game theory and coevolution. PMID:19112488
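The bounded-recursion idea ("I represent your value function, your representation of mine, …" up to a finite depth) can be illustrated with a simple level-k sketch on a stag-hunt payoff matrix. This is a heavily simplified illustration of bounded sophistication, not the paper's generative model, and the payoff numbers are assumptions:

```python
# Actions: 0 = stag (cooperate), 1 = hare (defect). payoffs[a][b] is the
# row player's payoff for playing a against b (hypothetical stag-hunt values).
STAG_HUNT = [[3, 0],
             [2, 1]]

def level_k_action(payoffs_self, payoffs_other, k):
    """Level-k recursion: level 0 plays a fixed default (stag); level k
    best-responds to an opponent assumed to reason at level k - 1."""
    if k == 0:
        return 0
    opponent = level_k_action(payoffs_other, payoffs_self, k - 1)
    return max((0, 1), key=lambda a: payoffs_self[a][opponent])
```

In this symmetric game every sophistication level keeps playing stag, so choices alone cannot reveal k; inferring sophistication requires games where the levels diverge, which motivates the sequential games used in the paper.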
End-to-end System Performance Simulation: A Data-Centric Approach
NASA Astrophysics Data System (ADS)
Guillaume, Arnaud; Laffitte de Petit, Jean-Luc; Auberger, Xavier
2013-08-01
In the early times of the space industry, the feasibility of Earth observation missions was directly driven by what could be achieved by the satellite. It was clear to everyone that the ground segment would be able to deal with the small amount of data sent by the payload. Over the years, the amounts of data processed by spacecraft have increased drastically, placing more and more constraints on ground segment performance, in particular on timeliness. Nowadays, many space systems require high data throughputs and short response times, with information coming from multiple sources and involving complex algorithms. It has become necessary to perform thorough end-to-end analyses of the full system in order to optimise its cost and efficiency, and sometimes even to assess the feasibility of the mission. This paper presents a novel framework developed by Astrium Satellites to meet these needs of timeliness evaluation and optimisation. This framework, named ETOS (for "End-to-end Timeliness Optimisation of Space systems"), provides a modelling process with associated tools, models and GUIs. These are integrated thanks to a common data model and suitable adapters, with the aim of building simulators of the full end-to-end chain of a space system. A major challenge of such an environment is to integrate heterogeneous tools (each well-adapted to one part of the chain) into a relevant timeliness simulation.
Using Optimisation Techniques to Granulise Rough Set Partitions
NASA Astrophysics Data System (ADS)
Crossingham, Bodie; Marwala, Tshilidzi
2007-11-01
This paper presents an approach to optimising rough set partition sizes using various optimisation techniques. Three optimisation techniques are implemented to perform the granularisation process: genetic algorithm (GA), hill climbing (HC) and simulated annealing (SA). These optimisation methods maximise the classification accuracy of the rough sets. The proposed rough set partition method is tested on a set of demographic properties of individuals obtained from the South African antenatal survey. The three techniques are compared in terms of their computational time, accuracy and number of rules produced when applied to the Human Immunodeficiency Virus (HIV) data set. The results of the optimised methods are compared to a well-known non-optimised discretisation method, equal-width-bin partitioning (EWB). The accuracies achieved after optimising the partitions using GA, HC and SA are 66.89%, 65.84% and 65.48% respectively, compared to 59.86% for EWB. In addition to providing the plausibilities of the estimated HIV status, the rough sets also provide linguistic rules describing how the demographic parameters drive the risk of HIV.
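The EWB baseline mentioned above is straightforward to sketch; the optimised GA/HC/SA partitions would instead search over bin boundaries to maximise classification accuracy. This is the generic equal-width scheme, not the authors' code:

```python
def equal_width_bins(values, n_bins):
    """Equal-width-bin (EWB) discretisation: split [min, max] into n_bins
    equal intervals and return each value's bin index (0 .. n_bins - 1)."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins
    indices = []
    for v in values:
        k = int((v - lo) / width) if width else 0
        indices.append(min(k, n_bins - 1))  # clamp the maximum into the last bin
    return indices
```

The key limitation EWB shares in this comparison is that its boundaries ignore the class labels, which is exactly what the optimised partitioning methods exploit.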
A Dealer Model of Foreign Exchange Market with Finite Assets
NASA Astrophysics Data System (ADS)
Hamano, Tomoya; Kanazawa, Kiyoshi; Takayasu, Hideki; Takayasu, Misako
An agent-based model is introduced to study the finite-asset effect in foreign exchange markets. We find that the transacted price asymptotically approaches an equilibrium price, which is determined by the monetary balance between the pair of currencies. We phenomenologically derive a formula to estimate the equilibrium price, and we model its relaxation dynamics around the equilibrium price on the basis of a Langevin-like equation.
Analysis of non-equilibrium phenomena in inductively coupled plasma generators
NASA Astrophysics Data System (ADS)
Zhang, W.; Lani, A.; Panesi, M.
2016-07-01
This work addresses the modeling of non-equilibrium phenomena in inductively coupled plasma discharges. In the proposed computational model, the electromagnetic induction equation is solved together with the set of Navier-Stokes equations in order to compute the electromagnetic and flow fields, accounting for their mutual interaction. Semi-classical statistical thermodynamics is used to determine the plasma thermodynamic properties, while transport properties are obtained from kinetic principles, with the method of Chapman and Enskog. Particle ambipolar diffusive fluxes are found by solving the Stefan-Maxwell equations with a simple iterative method. Two physico-mathematical formulations are used to model the chemical reaction processes: (1) A Local Thermodynamics Equilibrium (LTE) formulation and (2) a thermo-chemical non-equilibrium (TCNEQ) formulation. In the TCNEQ model, thermal non-equilibrium between the translational energy mode of the gas and the vibrational energy mode of individual molecules is accounted for. The electronic states of the chemical species are assumed in equilibrium with the vibrational temperature, whereas the rotational energy mode is assumed to be equilibrated with translation. Three different physical models are used to account for the coupling of chemistry and energy transfer processes. Numerical simulations obtained with the LTE and TCNEQ formulations are used to characterize the extent of non-equilibrium of the flow inside the Plasmatron facility at the von Karman Institute. Each model was tested using different kinetic mechanisms to assess the sensitivity of the results to variations in the reaction parameters. A comparison of temperatures and composition profiles at the outlet of the torch demonstrates that the flow is in non-equilibrium for operating conditions characterized by pressures below 30 000 Pa, frequency 0.37 MHz, input power 80 kW, and mass flow 8 g/s.
Petri-net-based 2D design of DNA walker circuits.
Gilbert, David; Heiner, Monika; Rohr, Christian
2018-01-01
We consider localised DNA computation, where a DNA strand walks along a binary decision graph to compute a binary function. One of the challenges for the design of reliable walker circuits consists in leakage transitions, which occur when a walker jumps into another branch of the decision graph. We automatically identify leakage transitions, which allows for a detailed qualitative and quantitative assessment of circuit designs, design comparison, and design optimisation. The ability to identify leakage transitions is an important step in the process of optimising DNA circuit layouts where the aim is to minimise the computational error inherent in a circuit while minimising the area of the circuit. Our 2D modelling approach of DNA walker circuits relies on coloured stochastic Petri nets which enable functionality, topology and dimensionality all to be integrated in one two-dimensional model. Our modelling and analysis approach can be easily extended to 3-dimensional walker systems.
Deman, P R; Kaiser, T M; Dirckx, J J; Offeciers, F E; Peeters, S A
2003-09-30
A 48 contact cochlear implant electrode has been constructed for electrical stimulation of the auditory nerve. The stimulating contacts of this electrode are organised in two layers: 31 contacts on the upper surface directed towards the habenula perforata and 17 contacts connected together as one longitudinal contact on the underside. The design of the electrode carrier aims to make radial current flow possible in the cochlea. The mechanical structure of the newly designed electrode was optimised to obtain maximal insertion depth. Electrode insertion tests were performed in a transparent acrylic model of the human cochlea.
Palmer, Tim N.; O’Shea, Michael
2015-01-01
How is the brain configured for creativity? What is the computational substrate for ‘eureka’ moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal (ultimately quantum decoherent) noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete. PMID:26528173
NASA Astrophysics Data System (ADS)
Meyer, Jennifer; Wisdom, J.
2007-07-01
The heating in Enceladus in an equilibrium resonant configuration with other saturnian satellites can be estimated independently of the physical properties of Enceladus. Our results update the values obtained for the equilibrium tidal heating found by Lissauer et al. (1984) and Peale (2003). We find that equilibrium tidal heating cannot account for the heat that is observed to be coming from Enceladus, and current heating rates are even less for conventional estimates of the Love number for Enceladus. Even allowing for a much larger dynamic Love number, as can occur in viscoelastic models (Ross and Schubert, 1989), the equilibrium tidal heating is less than the heat observed to be coming from Enceladus. One resolution is that the tidal equilibrium is unstable and that the system oscillates about equilibrium. Yoder (1981) suggested that Enceladus might oscillate about equilibrium if the Q of Enceladus is stress dependent. An alternate suggestion was made by Ojakangas and Stevenson (1986), who emphasized the possible temperature dependence of Q. In these models Enceladus would now be releasing heat stored during a recent high eccentricity phase. However, we have shown that the Ojakangas and Stevenson model does not produce oscillations for parameters appropriate for Enceladus. Other low-order resonance configurations are possible for the saturnian satellites in the past. These include the 3:2 Mimas-Enceladus and the 3:4 Enceladus-Tethys resonances. The latter resonance has no equilibrium because the orbits are diverging, and the former has an equilibrium heating rate of only 0.48 GW. So equilibrium heating at past resonances is no more successful at explaining past resurfacing events than equilibrium heating is at explaining the present activity.
Analysis of the car body stability performance after coupler jack-knifing during braking
NASA Astrophysics Data System (ADS)
Guo, Lirong; Wang, Kaiyun; Chen, Zaigang; Shi, Zhiyong; Lv, Kaikai; Ji, Tiancheng
2018-06-01
This paper aims to improve car body stability by optimising locomotive parameters when coupler jack-knifing occurs during braking. To prevent car body instability caused by coupler jack-knifing, a multi-locomotive simulation model and a series of field braking tests are developed to analyse the influence of the secondary suspension and the secondary lateral stopper on car body stability during braking. According to the simulation and test results, increasing the secondary lateral stiffness helps to limit the car body yaw angle during braking, but it seriously affects the dynamic performance of the locomotive. For the secondary lateral stopper, the lateral stiffness and free clearance have a significant influence on car body stability and less effect on the dynamic performance of the locomotive. An optimised configuration was proposed and adopted on the test locomotive: the lateral stiffness of the secondary lateral stopper was increased to 7875 kN/m, while its free clearance was decreased to 10 mm. The optimised locomotive has excellent dynamic and safety performance. Compared with the original locomotive, the maximum car body yaw angle and coupler rotation angle of the optimised locomotive were reduced by 59.25% and 53.19%, respectively, in practical application. The maximum derailment coefficient was 0.32, and the maximum wheelset lateral force was 39.5 kN. Hence, reasonable parameters of the secondary lateral stopper can improve the car body stability and the running safety of heavy haul locomotives.
Cahyaningrum, Fitrianna; Permadhi, Inge; Ansari, Muhammad Ridwan; Prafiantini, Erfi; Rachman, Purnawati Hustina; Agustina, Rina
2016-12-01
Diets with a specific omega-6/omega-3 fatty acid ratio have been reported to have favourable effects in controlling obesity in adults. However, the development of a local food-based diet that considers the ratio of these fatty acids to improve the nutritional status of overweight and obese children is lacking. Therefore, using linear programming, we developed an affordable optimised diet focusing on the ratio of omega-6/omega-3 fatty acid intake for obese children aged 12-23 months. A cross-sectional study was conducted in two subdistricts of East Jakarta involving 42 normal-weight and 29 overweight and obese children, grouped on the basis of their body mass index-for-age Z scores and selected through multistage random sampling. A 24-h recall was performed for 3 non-consecutive days to assess the children's dietary intake levels and food patterns. We conducted group and structured interviews as well as market surveys to identify food availability, accessibility and affordability. Three types of affordable optimised 7-day diet meal plans were developed on the basis of breastfeeding status. The optimised diet plan fulfilled energy and macronutrient intake requirements within the acceptable macronutrient distribution range. The omega-6/omega-3 fatty acid ratio in the children was between 4 and 10. Moreover, the micronutrient intake level was within the range of the recommended daily allowance or estimated average requirement and below the tolerable upper intake level. The optimisation model used in this study provides a mathematical solution for economical diet meal plans that approximate the nutrient requirements of overweight and obese children.
Mechanical approach to chemical transport
Kocherginsky, Nikolai; Gruebele, Martin
2016-01-01
Nonequilibrium thermodynamics describes the rates of transport phenomena with the aid of various thermodynamic forces, but often the phenomenological transport coefficients are not known, and the description is not easily connected with equilibrium relations. We present a simple and intuitive model to address these issues. Our model is based on Lagrangian dynamics for chemical systems with dissipation, so one may think of the model as physicochemical mechanics. Using one main equation, the model allows a systematic derivation of all transport and equilibrium equations, subject to the limitation that heat generated or absorbed in the system must be small for the model to be valid. A table with all major examples of transport and equilibrium processes described using physicochemical mechanics is given. In equilibrium, physicochemical mechanics reduces to standard thermodynamics and the Gibbs–Duhem relation, and we show that the First and Second Laws of thermodynamics are satisfied for our system plus bath model. Out of equilibrium, our model provides relationships between transport coefficients and describes system evolution in the presence of several simultaneous external fields. The model also leads to an extension of the Onsager–Casimir reciprocal relations for properties simultaneously transported by many components. PMID:27647899
Equilibrium pricing in an order book environment: Case study for a spin model
NASA Astrophysics Data System (ADS)
Meudt, Frederik; Schmitt, Thilo A.; Schäfer, Rudi; Guhr, Thomas
2016-07-01
When modeling stock market dynamics, the price formation is often based on an equilibrium mechanism. In real stock exchanges, however, the price formation is governed by the order book. It is thus interesting to check if the resulting stylized facts of a model with equilibrium pricing change, remain the same or, more generally, are compatible with the order book environment. We tackle this issue in the framework of a case study by embedding the Bornholdt-Kaizoji-Fujiwara spin model into the order book dynamics. To this end, we use a recently developed agent-based model that realistically incorporates the order book. We find realistic stylized facts. We conclude for the studied case that equilibrium pricing is not needed and that the corresponding assumption of a 'fundamental' price may be abandoned.
NASA Astrophysics Data System (ADS)
Kaliszewski, M.; Mazuro, P.
2016-09-01
The Simulated Annealing method is tested for the optimisation of the sealing piston ring geometry. The aim of the optimisation is to develop a ring geometry that exerts the demanded pressure on the cylinder simply by being bent to fit the cylinder. A method for FEM analysis of an arbitrary piston ring geometry is implemented in the ANSYS software. The demanded pressure function (based on formulae presented by A. Iskra) as well as the objective function are introduced. A geometry definition constructed from polynomials in a radial coordinate system is presented and discussed. A possible application of the Simulated Annealing method to the piston ring optimisation task is proposed and visualised. Difficulties that may lead to a lack of convergence of the optimisation are presented. An example of an unsuccessful optimisation performed in APDL is discussed, and a possible line of further improvement of the optimisation is proposed.
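A hedged sketch of the core loop: generic simulated annealing on a toy stand-in objective. In the actual task each candidate geometry would be scored by the FEM model; the pressure function, step sizes and schedule below are all invented:

```python
import math
import random

random.seed(1)

# Toy stand-in for the ring problem: choose polynomial coefficients c so
# that the resulting "contact pressure" p(theta) matches a demanded uniform
# pressure. The real objective would come from an FEM solve.
target = 1.0
thetas = [i * math.pi / 18 for i in range(19)]

def pressure(c, t):
    return c[0] + c[1] * math.cos(t) + c[2] * math.cos(2 * t)

def objective(c):
    return sum((pressure(c, t) - target) ** 2 for t in thetas)

def anneal(c, T=1.0, cooling=0.995, steps=4000):
    best, best_f = list(c), objective(c)
    f = best_f
    for _ in range(steps):
        cand = [ci + random.gauss(0, 0.1) for ci in c]
        fc = objective(cand)
        # Metropolis acceptance: always accept downhill moves,
        # accept uphill moves with probability exp(-dE/T).
        if fc < f or random.random() < math.exp((f - fc) / T):
            c, f = cand, fc
            if f < best_f:
                best, best_f = list(c), f
        T *= cooling
    return best, best_f

sol, err = anneal([0.0, 0.0, 0.0])
print(round(err, 4))  # far below the initial error of 19.0
```

The occasional uphill acceptance is what lets the method escape the local minima that make this kind of geometry optimisation hard to converge.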
Optimised in vitro applicable loads for the simulation of lateral bending in the lumbar spine.
Dreischarf, Marcel; Rohlmann, Antonius; Bergmann, Georg; Zander, Thomas
2012-07-01
In in vitro studies of the lumbar spine, simplified loading modes (compressive follower force, pure moment) are usually employed to simulate the standard load cases of flexion-extension, axial rotation and lateral bending of the upper body. However, the magnitudes of these loads vary widely in the literature. Thus the results of current studies may be unrealistic and are hard to compare. It is still unknown which load magnitudes lead to a realistic simulation of maximum lateral bending. A validated finite element model of the lumbar spine was used in an optimisation study to determine which magnitudes of the compressive follower force and bending moment deliver results that best fit averaged in vivo data. The best agreement with averaged in vivo measured data was found for a compressive follower force of 700 N and a lateral bending moment of 7.8 Nm. These results show that loading modes that differ strongly from the optimised one may not realistically simulate maximum lateral bending. The simplified but in vitro applicable loading cannot perfectly mimic the in vivo situation. However, the optimised magnitudes are those which agree best with averaged in vivo measured data, and their consistent application would improve the comparability of different investigations. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.
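The search described above is a two-parameter fit: each candidate (follower force, bending moment) pair is scored against averaged in vivo data. A sketch with the finite element solve replaced by an invented analytic surrogate (the in vivo values are also invented):

```python
import numpy as np
from scipy.optimize import minimize

# Each candidate (follower force F in N, bending moment M in Nm) is scored
# against averaged in vivo data. The FE solve is replaced here by a
# hypothetical analytic surrogate; all numbers are illustrative.
invivo = np.array([4.2, 3.1, 2.5])  # e.g. segmental rotations (deg)

def surrogate_rotations(F, M):
    # Toy response: rotations grow with moment, shrink with compression.
    return (np.array([1.0, 0.75, 0.6]) * 0.7 * M
            - np.array([0.3, 0.2, 0.15]) * F / 1000.0)

def misfit(params):
    F, M = params
    return float(np.sum((surrogate_rotations(F, M) - invivo) ** 2))

res = minimize(misfit, x0=[500.0, 5.0],
               bounds=[(0.0, 1200.0), (0.0, 12.0)])
F_opt, M_opt = res.x
print(round(F_opt, 1), round(M_opt, 2), round(res.fun, 4))
```

In the study itself the misfit evaluation requires a full FE simulation per candidate, so the optimiser's economy in function evaluations matters far more than in this toy.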
Single tube genotyping of sickle cell anaemia using PCR-based SNP analysis
Waterfall, Christy M.; Cobb, Benjamin D.
2001-01-01
Allele-specific amplification (ASA) is a generally applicable technique for the detection of known single nucleotide polymorphisms (SNPs), deletions, insertions and other sequence variations. Conventionally, two reactions are required to determine the zygosity of DNA in a two-allele system, along with significant upstream optimisation to define the specific test conditions. Here, we combine single tube bi-directional ASA with a ‘matrix-based’ optimisation strategy, speeding up the whole process in a reduced reaction set. We use sickle cell anaemia as our model SNP system, a genetic disease that is currently screened using ASA methods. Discriminatory conditions were rapidly optimised enabling the unambiguous identification of DNA from homozygous sickle cell patients (HbS/S), heterozygous carriers (HbA/S) or normal DNA in a single tube. Simple downstream mathematical analyses based on product yield across the optimisation set allow an insight into the important aspects of priming competition and component interactions in this competitive PCR. This strategy can be applied to any polymorphism, defining specific conditions using a multifactorial approach. The inherent simplicity and low cost of this PCR-based method validates bi-directional ASA as an effective tool in future clinical screening and pharmacogenomic research where more expensive fluorescence-based approaches may not be desirable. PMID:11726702
Kernel learning at the first level of inference.
Cawley, Gavin C; Talbot, Nicola L C
2014-05-01
Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense. Copyright © 2014 Elsevier Ltd. All rights reserved.
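A compact illustration of the idea on toy data: the LS-SVM dual is a linear system, and the kernel length-scale is tuned at the first level of inference by minimising a training criterion with an extra penalty on the kernel parameter. The regulariser weights `nu` and `gamma` below are illustrative, not the paper's values:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

# Toy two-class data.
X = np.vstack([rng.normal(-1, 1, (20, 2)), rng.normal(1, 1, (20, 2))])
y = np.r_[-np.ones(20), np.ones(20)]

def rbf(A, B, ell):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ell ** 2))

def lssvm_fit(ell, gamma=10.0):
    # LS-SVM dual: solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
    n = len(y)
    K = rbf(X, X, ell)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.r_[0.0, y])
    return sol[1:], sol[0], K  # alpha, b, K

def criterion(log_ell, nu=0.1):
    # First-level criterion: training fit error plus a regulariser on the
    # kernel parameter, so only nu and gamma remain for model selection.
    alpha, b, K = lssvm_fit(np.exp(log_ell))
    resid = y - (K @ alpha + b)
    return float(resid @ resid + nu * log_ell ** 2)

opt = minimize_scalar(criterion, bounds=(-3, 3), method="bounded")
ell = float(np.exp(opt.x))
alpha, b, K = lssvm_fit(ell)
acc = float(np.mean(np.sign(K @ alpha + b) == y))
print(round(ell, 3), acc)
```

With an ARD kernel there would be one length-scale per feature, optimised jointly in the same way; the point of the paper is that the penalty on the kernel parameters keeps this joint optimisation from over-fitting.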
Simplified Thermo-Chemical Modelling For Hypersonic Flow
NASA Astrophysics Data System (ADS)
Sancho, Jorge; Alvarez, Paula; Gonzalez, Ezequiel; Rodriguez, Manuel
2011-05-01
Hypersonic flows are associated with high temperatures, generally caused by the strong shock waves that appear in such flows. At high temperatures, vibrational degrees of freedom of the molecules may become excited, the molecules may dissociate into atoms, the molecules or free atoms may ionize, and molecular or ionic species that are unimportant at lower temperatures may be formed. To account for these effects a chemical model is needed; this model must be simple enough to be handled by a CFD code, yet sufficiently precise to capture the most important physics. This work concerns the validation of a chemical non-equilibrium model, implemented in a commercial CFD code, for computing the flow field around bodies in hypersonic flow. The selected non-equilibrium model is composed of seven species and six direct reactions together with their inverses. The commercial CFD code in which the non-equilibrium model has been implemented is FLUENT. For the validation, the X38/Sphynx Mach 20 case is rebuilt on a reduced geometry including the 1/3 Lref forebody. This case has been run in the laminar regime, with a non-catalytic wall and radiative-equilibrium wall temperature. The validated non-equilibrium model is then applied to the EXPERT (European Experimental Re-entry Test-bed) vehicle at a specified trajectory point (Mach number 14). This case has also been run in the laminar regime, with a non-catalytic wall and radiative-equilibrium wall temperature.
Local thermodynamic equilibrium for globally disequilibrium open systems under stress
NASA Astrophysics Data System (ADS)
Podladchikov, Yury
2016-04-01
Predictive modeling of far- and near-equilibrium processes is essential for understanding pattern formation and for quantifying natural processes that are never in global equilibrium. Methods of both equilibrium and non-equilibrium thermodynamics are needed and have to be combined. For example, predicting temperature evolution due to heat conduction requires the simultaneous use of the equilibrium relationship between internal energy and temperature via heat capacity (the caloric equation of state) and the disequilibrium relationship between heat flux and temperature gradient. Similarly, modeling rocks deforming under stress, reactions in a system open to porous fluid flow, or kinetic overstepping of an equilibrium reaction boundary necessarily requires both equilibrium and disequilibrium material properties, measured under fundamentally different laboratory conditions. Classical irreversible thermodynamics (CIT) is a well-developed discipline providing working recipes for the combined application of mutually exclusive experimental data, such as density and chemical potential at rest under constant pressure and temperature, and viscosity of the flow under stress. Several examples will be presented.
Wu, Zujian; Pang, Wei; Coghill, George M
2015-01-01
Both qualitative and quantitative model learning frameworks for biochemical systems have been studied in computational systems biology. In this research, after introducing two forms of pre-defined component patterns to represent biochemical models, we propose an integrative qualitative and quantitative modelling framework for inferring biochemical systems. In the proposed framework, interactions between reactants in the candidate models for a target biochemical system are evolved and eventually identified by the application of a qualitative model learning approach with an evolution strategy. Kinetic rates of the models generated from qualitative model learning are then further optimised by employing a quantitative approach with simulated annealing. Experimental results indicate that our proposed integrative framework makes it feasible to learn the relationships between biochemical reactants qualitatively and to make the model replicate the behaviours of the target system by optimising the kinetic rates quantitatively. Moreover, potential reactants of a target biochemical system can be discovered by hypothesising complex reactants in the synthetic models. Based on the biochemical models learned with the proposed framework, biologists can then perform experimental studies in the wet laboratory. In this way, natural biochemical systems can be better understood.
The Existence and Stability Analysis of the Equilibria in Dengue Disease Infection Model
NASA Astrophysics Data System (ADS)
Anggriani, N.; Supriatna, A. K.; Soewono, E.
2015-06-01
In this paper we formulate an SIR (Susceptible - Infective - Recovered) model of Dengue fever transmission with constant recruitment. We found a threshold parameter R0, known as the Basic Reproduction Number (BRN). This model has two equilibria: a disease-free equilibrium and an endemic equilibrium. By constructing a suitable Lyapunov function, we show that the disease-free equilibrium is globally asymptotically stable whenever the BRN is less than one, and that when it is greater than one, the endemic equilibrium is globally asymptotically stable. Numerical results show the dynamics of each compartment together with the effect of multiple bio-agent interventions as a control on dengue transmission.
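The threshold behaviour can be demonstrated numerically with a minimal SIR model with constant recruitment; the parameter values below are illustrative, not the paper's:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal host SIR with constant recruitment; vector dynamics are folded
# into the transmission rate beta. All parameter values are illustrative.
mu, gam = 0.05, 10.0  # per-year mortality and recovery rates
A = mu                # recruitment keeps total population ~ 1

def sir(t, u, beta):
    S, I, R = u
    return [A - beta * S * I - mu * S,
            beta * S * I - (gam + mu) * I,
            gam * I - mu * R]

def final_I(R0):
    beta = R0 * (gam + mu)  # since R0 = beta / (gam + mu)
    sol = solve_ivp(sir, (0.0, 200.0), [0.99, 0.01, 0.0],
                    args=(beta,), rtol=1e-8, atol=1e-12)
    return sol.y[1, -1]

I_sub = final_I(0.5)  # below threshold: approaches disease-free equilibrium
I_sup = final_I(3.0)  # above threshold: approaches endemic equilibrium
print(abs(I_sub) < 1e-6, I_sup > 1e-3)
```

For R0 > 1 the infective fraction settles near the endemic value mu(R0 - 1)/beta, consistent with the global stability result the abstract proves via Lyapunov functions.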
Ionization and excitation in cool giant stars. I - Hydrogen and helium
NASA Technical Reports Server (NTRS)
Luttermoser, Donald G.; Johnson, Hollis R.
1992-01-01
The influence that non-LTE radiative transfer has on the electron density, ionization equilibrium, and excitation equilibrium in model atmospheres representative of both oxygen-rich and carbon-rich red giant stars is demonstrated. The radiative transfer and statistical equilibrium equations are solved self-consistently for H, H(-), H2, He I, C I, C II, Na I, Mg I, Mg II, Ca I, and Ca II in a plane-parallel static medium. Calculations are made for both radiative-equilibrium model photospheres alone and model photospheres with attached chromospheric models as determined semiempirically with IUE spectra of g Her (M6 III) and TX Psc (C6, 2). The excitation and ionization results for hydrogen and helium are reported.
NASA Astrophysics Data System (ADS)
Fan, Zhengfeng; Liu, Jie
2016-10-01
We present an ion-electron non-equilibrium model, in which the hot-spot ion temperature is higher than its electron temperature, so that the hot-spot nuclear reactions are enhanced while energy leaks are considerably reduced. Theoretical analysis shows that the ignition region would be significantly enlarged in the hot-spot ρR-T space as compared with the commonly used equilibrium model. Simulations show that shocks can be utilized to create and maintain non-equilibrium conditions within the hot spot, and that the hot-spot ρR requirement for achieving self-heating is remarkably reduced. In NIF high-foot implosions, the observed x-ray enhancement factors are less than unity, which is not self-consistent and results from assuming Te = Ti. From this inconsistency, we infer that ion-electron non-equilibrium exists in the high-foot implosions and that the ion temperature could be 9% larger than the equilibrium temperature.
Non-equilibrium phase transitions in a driven-dissipative system of interacting bosons
NASA Astrophysics Data System (ADS)
Young, Jeremy T.; Foss-Feig, Michael; Gorshkov, Alexey V.; Maghrebi, Mohammad F.
2017-04-01
Atomic, molecular, and optical systems provide unique opportunities to study simple models of driven-dissipative many-body quantum systems. Typically, one is interested in the resultant steady state, but the non-equilibrium nature of the physics involved presents several problems in understanding its behavior theoretically. Recently, it has been shown that in many of these models, it is possible to map the steady-state phase transitions onto classical equilibrium phase transitions. In the language of Keldysh field theory, this relation typically only becomes apparent after integrating out massive fields near the critical point, leaving behind a single massless field undergoing near-equilibrium dynamics. In this talk, we study a driven-dissipative XXZ bosonic model and discover critical points at which two fields become gapless. Each critical point separates three different possible phases: a uniform phase, an anti-ferromagnetic phase, and a limit cycle phase. Furthermore, a description in terms of an equilibrium phase transition does not seem possible, so the associated phase transitions appear to be inherently non-equilibrium.
Liang, Hua; Deng, Liufu; Chmura, Steven; Burnette, Byron; Liadis, Nicole; Darga, Thomas; Beckett, Michael A.; Lingen, Mark W.; Witt, MaryEllyn; Weichselbaum, Ralph R.; Fu, Yang-Xin
2013-01-01
Local failures following radiation therapy are multifactorial and the contributions of the tumor and the host are complex. Current models of tumor equilibrium suggest that a balance exists between cell birth and cell death due to insufficient angiogenesis, immune effects, or intrinsic cellular factors. We investigated whether host immune responses contribute to radiation induced tumor equilibrium in animal models. We report an essential role for immune cells and their cytokines in suppressing tumor cell regrowth in two experimental animal model systems. Depletion of T cells or neutralization of interferon-gamma reversed radiation-induced equilibrium leading to tumor regrowth. We also demonstrate that PD-L1 blockade augments T cell responses leading to rejection of tumors in radiation induced equilibrium. We identify an active interplay between tumor cells and immune cells that occurs in radiation-induced tumor equilibrium and suggest a potential role for disruption of the PD-L1/PD-1 axis in increasing local tumor control. PMID:23630355
Xu, Xiangtao; Medvigy, David; Wright, Stuart Joseph; ...
2017-07-04
Leaf longevity (LL) varies more than 20-fold in tropical evergreen forests, but it remains unclear how to capture these variations using predictive models. Current theories of LL based on carbon optimisation principles are challenging to assess quantitatively because of cross-species uncertainty in the 'ageing rate': the rate at which leaf photosynthetic capacity declines with age. Here we present a meta-analysis of 49 species across temperate and tropical biomes, demonstrating that the ageing rate of photosynthetic capacity is positively correlated with the mass-based carboxylation rate of mature leaves. We assess an improved trait-driven carbon optimality model with in situ LL data for 105 species in two Panamanian forests. Additionally, we show that our model explains over 40% of the cross-species variation in LL under contrasting light environments. Collectively, our results reveal how variation in LL emerges from carbon optimisation constrained by both leaf structural traits and the abiotic environment.
NASA Astrophysics Data System (ADS)
Chen, Yu-Ren; Dye, Chung-Yuan
2013-06-01
In most of the inventory models in the literature, the deterioration rate of goods is viewed as an exogenous variable, which is not subject to control. In the real market, however, the retailer can reduce the deterioration rate of a product by making effective capital investment in storehouse equipment. In this study, we formulate a deteriorating inventory model with time-varying demand, allowing preservation technology cost as a decision variable in conjunction with the replenishment policy. The objective is to find the optimal replenishment and preservation technology investment strategies while minimising the total cost over the planning horizon. For any given feasible replenishment scheme, we first prove that the optimal preservation technology investment strategy not only exists but is also unique. Then, a particle swarm optimisation is coded and used to solve the nonlinear programming problem by employing the properties derived in this article. Some numerical examples are used to illustrate the features of the proposed model.
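A sketch of the global-best particle swarm step on an invented stand-in for the total-cost function (in the paper the objective comes from the inventory model itself, and all numbers below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for the inventory problem: choose a preservation investment u
# and a cycle length T to minimise a "total cost". Illustrative only.
def total_cost(x):
    u, T = x[..., 0], x[..., 1]
    holding = 50.0 * T * np.exp(-0.5 * u)  # preservation slows deterioration
    ordering = 200.0 / T
    return holding + ordering + 10.0 * u   # plus the investment itself

# Basic global-best particle swarm optimisation.
n, iters = 30, 200
lo, hi = np.array([0.0, 0.5]), np.array([5.0, 10.0])
pos = rng.uniform(lo, hi, size=(n, 2))
vel = np.zeros((n, 2))
pbest, pbest_f = pos.copy(), total_cost(pos)
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, 1)), rng.random((n, 1))
    # Inertia plus cognitive (pbest) and social (gbest) attraction.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = total_cost(pos)
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()

print(np.round(gbest, 2), round(float(total_cost(gbest)), 1))
```

The attraction toward each particle's personal best and the swarm's global best is what lets PSO handle the nonlinear, non-convex cost surfaces that arise once preservation investment enters the model.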
Wagh, Rajesh V; Chatli, Manish K
2017-05-01
In the present study, processing parameters for the extraction of a phenolic-rich sea buckthorn seed extract (SBTE) were optimised using the response surface method, and the extract was assessed for in vitro efficacy, viz. total phenolic content and ABTS, DPPH and SASA activities. The optimised model specified MeOH as the solvent at a 60% concentration level, with a reaction time of 20 min and an extraction temperature of 55 °C, for the highest yield and total phenolic content. The efficacy of different concentrations of the obtained SBTE was evaluated in raw ground pork as a model meat system on the basis of various physico-chemical, microbiological and sensory quality characteristics. Addition of 0.3% SBTE significantly reduced lipid peroxidation (PV, TBARS and FFA) and improved instrumental colour (L*, a*, b*) attributes of raw ground pork during refrigerated storage of 9 days. The results show that SBTE at the 0.3% level can successfully improve the oxidative stability and the microbial and sensory quality attributes of the meat model system.
Variational Bayesian identification and prediction of stochastic nonlinear dynamic causal models.
Daunizeau, J; Friston, K J; Kiebel, S J
2009-11-01
In this paper, we describe a general variational Bayesian approach for approximate inference on nonlinear stochastic dynamic models. This scheme extends established approximate inference on hidden-states to cover: (i) nonlinear evolution and observation functions, (ii) unknown parameters and (precision) hyperparameters and (iii) model comparison and prediction under uncertainty. Model identification or inversion entails the estimation of the marginal likelihood or evidence of a model. This difficult integration problem can be finessed by optimising a free-energy bound on the evidence using results from variational calculus. This yields a deterministic update scheme that optimises an approximation to the posterior density on the unknown model variables. We derive such a variational Bayesian scheme in the context of nonlinear stochastic dynamic hierarchical models, for both model identification and time-series prediction. The computational complexity of the scheme is comparable to that of an extended Kalman filter, which is critical when inverting high dimensional models or long time-series. Using Monte-Carlo simulations, we assess the estimation efficiency of this variational Bayesian approach using three stochastic variants of chaotic dynamic systems. We also demonstrate the model comparison capabilities of the method, its self-consistency and its predictive power.
LONG-TERM STABLE EQUILIBRIA FOR SYNCHRONOUS BINARY ASTEROIDS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacobson, Seth A.; Scheeres, Daniel J.
Synchronous binary asteroids may exist in a long-term stable equilibrium, where the opposing torques from mutual body tides and the binary YORP (BYORP) effect cancel. Interior of this equilibrium, mutual body tides are stronger than the BYORP effect and the mutual orbit semimajor axis expands to the equilibrium; outside of the equilibrium, the BYORP effect dominates the evolution and the system semimajor axis will contract to the equilibrium. If the observed population of small (0.1-10 km diameter) synchronous binaries are in static configurations that are no longer evolving, then this would be confirmed by a null result in the observational tests for the BYORP effect. The confirmed existence of this equilibrium, combined with a shape model of the secondary of the system, enables the direct study of asteroid geophysics through tidal theory. The observed synchronous asteroid population cannot exist in this equilibrium if described by the canonical 'monolithic' geophysical model. The 'rubble pile' geophysical model proposed by Goldreich and Sari is sufficient; however, it predicts a tidal Love number directly proportional to the radius of the asteroid, while the best fit to the data predicts a tidal Love number inversely proportional to the radius. This deviation from the canonical and Goldreich and Sari models motivates future study of asteroid geophysics. Ongoing BYORP detection campaigns will determine whether these systems are in an equilibrium, and future determination of secondary shapes will allow direct determination of asteroid geophysical parameters.
NASA Astrophysics Data System (ADS)
Fritzsche, Matthias; Kittel, Konstantin; Blankenburg, Alexander; Vajna, Sándor
2012-08-01
This paper presents a method of multidisciplinary design optimisation based on the autogenetic design theory (ADT), parts of which are implemented in the optimisation software described here. The main thesis of the ADT is that biological evolution and the process of developing products are essentially similar, i.e. procedures from biological evolution can be transferred into product development. In order to fulfil requirements and boundary conditions of any kind (that may change at any time), both biological evolution and product development look for appropriate solution possibilities in a certain area, and try to optimise those that are actually promising by varying parameters and combinations of these solutions. As the time necessary for multidisciplinary design optimisation is a critical aspect of product development, distributing the optimisation process to make effective use of idle computing capacity can reduce the optimisation time drastically. Finally, a practical example shows how ADT methods and distributed optimisation are applied to improve a product.
Phylogenies support out-of-equilibrium models of biodiversity.
Manceau, Marc; Lambert, Amaury; Morlon, Hélène
2015-04-01
There is a long tradition in ecology of studying models of biodiversity at equilibrium. These models, including the influential Neutral Theory of Biodiversity, have been successful at predicting major macroecological patterns, such as species abundance distributions. But they have failed to predict macroevolutionary patterns, such as those captured in phylogenetic trees. Here, we develop a model of biodiversity in which all individuals have identical demographic rates, metacommunity size is allowed to vary stochastically according to population dynamics, and speciation arises naturally from the accumulation of point mutations. We show that this model generates phylogenies matching those observed in nature if the metacommunity is out of equilibrium. We develop a likelihood inference framework that allows fitting our model to empirical phylogenies, and apply this framework to various mammalian families. Our results corroborate the hypothesis that biodiversity dynamics are out of equilibrium. © 2015 John Wiley & Sons Ltd/CNRS.
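The speciation mechanism the model builds on — point mutation in a neutral community — can be sketched in a few lines. Community size is held fixed here for brevity, whereas the paper lets it vary stochastically; all parameter values are invented:

```python
import random

random.seed(5)

# Minimal neutral metacommunity with point-mutation speciation: every death
# is replaced either by a copy of a random individual or, with probability
# nu, by a brand-new species. Parameters are illustrative.
N, nu, steps = 500, 0.01, 50_000
community = [0] * N  # species label per individual
next_label = 1

for _ in range(steps):
    i = random.randrange(N)      # one individual dies
    if random.random() < nu:     # speciation by point mutation
        community[i] = next_label
        next_label += 1
    else:                        # birth: copy a random individual
        community[i] = community[random.randrange(N)]

richness = len(set(community))
print(richness)
```

Recording parent-child relationships in such a simulation yields the genealogies whose shape the authors compare, via likelihood, against empirical phylogenies.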
NASA Astrophysics Data System (ADS)
Ferreira, Ana C. M.; Teixeira, Senhorinha F. C. F.; Silva, Rui G.; Silva, Ângela M.
2018-04-01
Cogeneration allows the optimal use of primary energy sources and significant reductions in carbon emissions. Its use has great potential for applications in the residential sector. This study aims to develop a methodology for the thermo-economic optimisation of a small-scale micro-gas turbine for cogeneration purposes, able to fulfil domestic energy needs with a thermal power output of 125 kW. A constrained non-linear optimisation model was built. The objective function is the maximisation of the annual worth of the combined heat and power system, representing the balance between annual incomes and expenditures, subject to physical and economic constraints. A genetic algorithm coded in the Java programming language was developed. An optimal micro-gas turbine able to produce 103.5 kW of electrical power with a positive annual profit (i.e. 11,925 €/year) was identified. The investment can be recovered in 4 years and 9 months, which is less than half of the system's expected lifetime.
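A sketch of the approach: a real-coded genetic algorithm maximising a penalised "annual worth" on an invented two-variable thermo-economic surrogate. Every coefficient below is hypothetical, and the paper's implementation is in Java rather than Python:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy stand-in: pick (pressure ratio, turbine inlet temperature) to maximise
# annual worth = incomes - costs, under a simple feasibility penalty.
lo, hi = np.array([2.0, 900.0]), np.array([10.0, 1300.0])

def annual_worth(x):
    pr, tit = x[:, 0], x[:, 1]
    eff = 0.15 + 0.02 * np.log(pr) + 0.0002 * (tit - 900.0)  # crude efficiency
    income = 5.0e5 * eff
    cost = 4.0e3 * pr + 30.0 * (tit - 900.0)
    penalty = np.where(eff > 0.30, 1.0e6, 0.0)  # infeasible designs punished
    return income - cost - penalty

def ga(pop=40, gens=100):
    X = rng.uniform(lo, hi, size=(pop, 2))
    for _ in range(gens):
        f = annual_worth(X)
        # Tournament selection, arithmetic crossover, Gaussian mutation.
        i, j = rng.integers(pop, size=(2, pop))
        parents = np.where((f[i] > f[j])[:, None], X[i], X[j])
        w = rng.random((pop, 1))
        X = w * parents + (1 - w) * parents[rng.permutation(pop)]
        X += rng.normal(0, 0.02, X.shape) * (hi - lo)
        X = np.clip(X, lo, hi)
    f = annual_worth(X)
    return X[np.argmax(f)], float(f.max())

best, worth = ga()
print(np.round(best, 1), round(worth))
```

Constraint handling by penalty, as sketched here, is a common way to fold the physical and economic restrictions of the thermo-economic model into a single fitness value.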
Fabrication and optimisation of a fused filament 3D-printed microfluidic platform
NASA Astrophysics Data System (ADS)
Tothill, A. M.; Partridge, M.; James, S. W.; Tatam, R. P.
2017-03-01
A 3D-printed microfluidic device was designed and manufactured using a low-cost (2000) consumer-grade fused deposition modelling (FDM) 3D printer. FDM printers are not typically used for, or capable of, producing the fine detailed structures required for microfluidic fabrication. However, in this work, the optical transparency of the device was improved through manufacturing optimisation to such a point that optical colorimetric assays can be performed in a 50 µl device. A colorimetric enzymatic cascade assay was optimised using glucose oxidase and horseradish peroxidase for the oxidative coupling of aminoantipyrine and chromotropic acid to produce a blue quinoneimine dye with a broad absorbance peaking at 590 nm for the quantification of glucose in solution. For comparison, the assay was run in standard 96-well plates with a commercial plate reader. The results show accurate and reproducible quantification of 0-10 mM glucose solutions using a 3D-printed microfluidic optical device, with performance comparable to that of a plate-reader assay.
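Quantification from such an assay typically reduces to a linear (Beer-Lambert) calibration against glucose standards; a sketch with invented absorbance readings (none taken from the paper):

```python
import numpy as np

# Illustrative calibration for a colorimetric glucose assay: absorbance at
# 590 nm vs glucose standards, linear fit, then inversion for an unknown.
glucose_mM = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
absorbance = np.array([0.02, 0.21, 0.40, 0.61, 0.79, 1.01])  # made-up values

slope, intercept = np.polyfit(glucose_mM, absorbance, 1)

def to_concentration(A):
    # Invert the calibration line A = slope * c + intercept.
    return (A - intercept) / slope

unknown_A = 0.50
print(round(float(to_concentration(unknown_A)), 2), "mM")
```

Comparing the slope and residuals of this line between the printed device and the plate reader is one simple way to express the "comparable performance" claim quantitatively.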
NASA Astrophysics Data System (ADS)
Sheikhan, Mansour; Abbasnezhad Arabi, Mahdi; Gharavian, Davood
2015-10-01
Artificial neural networks are efficient models in pattern recognition applications, but their performance is dependent on employing suitable structure and connection weights. This study used a hybrid method for obtaining the optimal weight set and architecture of a recurrent neural emotion classifier based on gravitational search algorithm (GSA) and its binary version (BGSA), respectively. By considering the features of speech signal that were related to prosody, voice quality, and spectrum, a rich feature set was constructed. To select more efficient features, a fast feature selection method was employed. The performance of the proposed hybrid GSA-BGSA method was compared with similar hybrid methods based on particle swarm optimisation (PSO) algorithm and its binary version, PSO and discrete firefly algorithm, and hybrid of error back-propagation and genetic algorithm that were used for optimisation. Experimental tests on Berlin emotional database demonstrated the superior performance of the proposed method using a lighter network structure.
Optimisation of substrate blends in anaerobic co-digestion using adaptive linear programming.
García-Gen, Santiago; Rodríguez, Jorge; Lema, Juan M
2014-12-01
Anaerobic co-digestion of multiple substrates has the potential to enhance biogas productivity by making use of the complementary characteristics of different substrates. A blending strategy based on a linear programming optimisation method is proposed, aiming at maximising COD conversion into methane while simultaneously maintaining digestate and biogas quality. The method incorporates experimental and heuristic information to define the objective function and the linear restrictions. The active constraints are continuously adapted (by relaxing the restriction boundaries) such that further optimisations in terms of methane productivity can be achieved. The feasibility of the blends calculated with this methodology was previously tested and accurately predicted with an ADM1-based co-digestion model. This was validated in a continuously operated pilot plant, treating different mixtures of glycerine, gelatine and pig manure for several months at organic loading rates from 1.50 to 4.93 gCOD/Ld and hydraulic retention times between 32 and 40 days under mesophilic conditions. Copyright © 2014 Elsevier Ltd. All rights reserved.
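The blending idea can be illustrated with a toy search over substrate fractions. The paper solves this as a linear programme; the sketch below uses a coarse grid search so it needs no solver, and all substrate properties and constraint bounds are hypothetical.

```python
# Toy blend optimisation: choose substrate fractions maximising methane
# potential subject to linear quality constraints. All numbers are
# hypothetical; the paper uses linear programming instead of grid search.
substrates = {            # (methane potential, nitrogen load) per unit feed
    "glycerine": (0.9, 0.0),
    "gelatine": (0.6, 0.8),
    "pig_manure": (0.3, 0.4),
}
N_MAX = 0.45   # hypothetical limit on blend nitrogen load (digestate quality)
GLY_MAX = 0.4  # hypothetical limit on glycerine fraction (e.g. inhibition)

def blend_search(step=0.05):
    names = list(substrates)
    best_ch4, best_blend = -1.0, None
    n = round(1 / step)
    for i in range(n + 1):
        for j in range(n + 1 - i):
            k = n - i - j
            frac = [i * step, j * step, k * step]   # fractions sum to 1
            ch4 = sum(f * substrates[s][0] for f, s in zip(frac, names))
            nit = sum(f * substrates[s][1] for f, s in zip(frac, names))
            if frac[0] <= GLY_MAX and nit <= N_MAX and ch4 > best_ch4:
                best_ch4, best_blend = ch4, dict(zip(names, frac))
    return best_ch4, best_blend

best_ch4, blend = blend_search()
```

Relaxing the bounds `N_MAX` or `GLY_MAX` mimics the paper's adaptive step of loosening active constraints to reach higher methane productivity.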
Effects of sorption kinetics on the fate and transport of pharmaceuticals in estuaries.
Liu, Dong; Lung, Wu-Seng; Colosi, Lisa M
2013-08-01
Many current fate and transport models based on the assumption of instantaneous sorption equilibrium of contaminants in the water column may not be valid for certain pharmaceuticals with long times to reach sorption equilibrium. In this study, a sorption kinetics model was developed and incorporated into a water quality model for the Patuxent River Estuary to evaluate the effect of sorption kinetics. Model results indicate that the assumption of instantaneous sorption equilibrium results in significant under-prediction of water column concentrations for some pharmaceuticals. The relative difference between predicted concentrations for the instantaneous versus kinetic approach is as large as 150% at upstream locations in the Patuxent Estuary. At downstream locations, where sorption processes have had sufficient time to reach equilibrium, the relative difference decreases to roughly 25%. This indicates that sorption kinetics affect a model's ability to capture accumulation of pharmaceuticals into riverbeds and the transport of pharmaceuticals in estuaries. These results offer strong evidence that chemicals are not removed from the water column as rapidly as has been assumed on the basis of equilibrium-based analyses. The findings are applicable not only for pharmaceutical compounds, but also for diverse contaminants that reach sorption equilibrium slowly. Copyright © 2013 Elsevier Ltd. All rights reserved.
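The difference between the two sorption treatments can be sketched with a first-order kinetic model relaxing towards linear equilibrium partitioning. Parameter values below are hypothetical, chosen only to show the qualitative effect the study reports.

```python
# Sketch contrasting instantaneous-equilibrium sorption with first-order
# sorption kinetics for a well-mixed water parcel. Parameters hypothetical.
def dissolved_fraction_equilibrium(Kd):
    """Instantaneous linear partitioning: dissolved fraction C_w / C_total."""
    return 1.0 / (1.0 + Kd)

def dissolved_fraction_kinetic(Kd, k, t, dt=0.01):
    """First-order approach to the same equilibrium: dCw/dt = -k*(Cw - Cw_eq)."""
    cw, cw_eq = 1.0, 1.0 / (1.0 + Kd)  # start fully dissolved
    for _ in range(int(t / dt)):
        cw += -k * (cw - cw_eq) * dt
    return cw

Kd, k = 3.0, 0.1   # partition coefficient, sorption rate (1/day)
t_short = 5.0      # days: far from equilibrium
print(dissolved_fraction_equilibrium(Kd))          # 0.25
print(dissolved_fraction_kinetic(Kd, k, t_short))  # noticeably higher
```

The kinetic value stays well above the equilibrium value at short travel times, which is the mechanism behind the under-prediction of water column concentrations reported upstream.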
Seol, Yeonee; Hardin, Ashley H.; Strub, Marie-Paule; Charvin, Gilles; Neuman, Keir C.
2013-01-01
Type II topoisomerases are essential enzymes that regulate DNA topology through a strand-passage mechanism. Some type II topoisomerases relax supercoils, unknot and decatenate DNA to below thermodynamic equilibrium. Several models of this non-equilibrium topology simplification phenomenon have been proposed. The kinetic proofreading (KPR) model postulates that strand passage requires a DNA-bound topoisomerase to collide twice in rapid succession with a second DNA segment, implying a quadratic relationship between DNA collision frequency and relaxation rate. To test this model, we used a single-molecule assay to measure the unlinking rate as a function of DNA collision frequency for Escherichia coli topoisomerase IV (topo IV) that displays efficient non-equilibrium topology simplification activity, and for E. coli topoisomerase III (topo III), a type IA topoisomerase that unlinks and unknots DNA to equilibrium levels. Contrary to the predictions of the KPR model, topo IV and topo III unlinking rates were linearly related to the DNA collision frequency. Furthermore, topo III exhibited decatenation activity comparable with that of topo IV, supporting proposed roles for topo III in DNA segregation. This study enables us to rule out the KPR model for non-equilibrium topology simplification. More generally, we establish an experimental approach to systematically control DNA collision frequency. PMID:23460205
Renehan, Emma; Goeman, Dianne; Koch, Susan
2017-07-20
In Australia, dementia is a national health priority. With the rising number of people living with dementia and a shortage of formal and informal carers predicted in the near future, developing approaches to coordinating services in quality-focused ways is considered an urgent priority. Key worker support models are one approach that has been used to assist people living with dementia and their caring unit in coordinating services and navigating service systems; however, there is limited literature outlining comprehensive frameworks for the implementation of community dementia key worker roles in practice. In this paper an optimised key worker framework for people with dementia, their family and caring unit living in the community is developed and presented. A number of processes were undertaken to inform the development of a co-designed optimised key worker framework: an expert working and reference group; a systematic review of the literature; and a qualitative evaluation of 14 dementia key worker models operating in Australia involving 14 interviews with organisation managers, 19 with key workers and 15 with people living with dementia and/or their caring unit. Data from the systematic review and the evaluation of dementia key worker models were analysed by the researchers and the expert working and reference group using a constant comparative approach to define the essential components of the optimised framework. The developed framework consisted of four main components: overarching philosophies; organisational context; role definition; and key worker competencies. A number of more clearly defined sub-themes sat under each component. Reflected in the framework is the complexity of the dementia journey and the difficulty of trying to develop a 'one size fits all' approach.
This co-designed study led to the development of an evidence based framework which outlines a comprehensive synthesis of components viewed as being essential to the implementation of a dementia key worker model of care in the community. The framework was informed and endorsed by people living with dementia and their caring unit, key workers, managers, Australian industry experts, policy makers and researchers. An evaluation of its effectiveness and relevance for practice within the dementia care space is required.
Learning of Chemical Equilibrium through Modelling-Based Teaching
ERIC Educational Resources Information Center
Maia, Poliana Flavia; Justi, Rosaria
2009-01-01
This paper presents and discusses students' learning process of chemical equilibrium from a modelling-based approach developed from the use of the "Model of Modelling" diagram. The investigation was conducted in a regular classroom (students 14-15 years old) and aimed at discussing how modelling-based teaching can contribute to students…
Role of delay and screening in controlling AIDS
NASA Astrophysics Data System (ADS)
Chauhan, Sudipa; Bhatia, Sumit Kaur; Gupta, Surbhi
2016-06-01
We propose a non-linear HIV/AIDS model to analyse the spread and control of HIV/AIDS. The population is divided into three classes: susceptible, infective and AIDS patients. The model is developed under the assumptions of vertical transmission and a time delay in the infective class. The time delay is included to represent the sexual maturity period of infected newborns. We study the dynamics of the model and obtain the reproduction number. To control the epidemic, we then study the model with an added aware infective class, i.e., people are made aware of their medical status by way of screening. To make the model more realistic, we consider the situation where the aware infective class also interacts with other people. The model is analysed qualitatively by the stability theory of ODEs. The stability of both the disease-free and endemic equilibria is studied in terms of the reproduction numbers. It is proved that if (R0)1, R1 ≤ 1 then the disease-free equilibrium point is locally asymptotically stable, and if (R0)1, R1 > 1 then the disease-free equilibrium is unstable. The stability analysis of the endemic equilibrium point has also been carried out, and it is shown that for (R0)1 > 1 the endemic equilibrium point is stable; its global stability has been established as well. Finally, it is shown numerically that the delay in sexual maturity of infected individuals results in fewer AIDS patients.
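The reproduction-number threshold behaviour described above can be illustrated with a minimal compartmental sketch (without the delay, vertical transmission, and awareness classes of the actual model); all parameter values are hypothetical.

```python
# Minimal susceptible-infective sketch with recruitment, showing the
# reproduction-number threshold. Parameters are hypothetical, not the
# paper's; the delay and awareness classes are omitted.
def R0(beta, mu=0.02, gamma=0.1):
    """Basic reproduction number for this toy model."""
    return beta / (mu + gamma)

def simulate(beta, mu=0.02, gamma=0.1, days=2000, dt=0.1):
    S, I = 0.99, 0.01
    recruitment = mu  # balances natural death of a unit population
    for _ in range(int(days / dt)):
        dS = recruitment - beta * S * I - mu * S
        dI = beta * S * I - (mu + gamma) * I
        S += dS * dt
        I += dI * dt
    return I

# Below threshold the infection dies out; above it, an endemic level persists.
print(R0(0.06), simulate(0.06))  # R0 = 0.5 -> I tends to 0
print(R0(0.24), simulate(0.24))  # R0 = 2.0 -> I settles at an endemic level
```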
NASA Astrophysics Data System (ADS)
Pusateri, Elise Noel
An Electromagnetic Pulse (EMP) can severely disrupt the use of electronic devices in its path, causing a significant amount of infrastructural damage. EMP can also cause breakdown of the surrounding atmosphere during lightning discharges. This makes modeling EMP phenomena an important research effort in many military and atmospheric physics applications. EMP events include high-energy Compton electrons or photoelectrons that ionize air and produce low-energy conduction electrons. A sufficient number of conduction electrons will damp or alter the EMP through conduction current. Therefore, it is important to understand how conduction electrons interact with air in order to accurately predict the EMP evolution and propagation in the air. It is common for EMP simulation codes to use an equilibrium ohmic model for computing the conduction current. Equilibrium ohmic models assume the conduction electrons are always in equilibrium with the local instantaneous electric field, i.e. for a specific EMP electric field, the conduction electrons instantaneously reach steady state without a transient process. An equilibrium model will work well if the electrons have time to reach their equilibrium distribution with respect to the rise time or duration of the EMP. If the time to reach equilibrium is comparable to or longer than the rise time or duration of the EMP, then the equilibrium model will not accurately predict the conduction current necessary for the EMP simulation. This is because transport coefficients used in the conduction current calculation will be based on equilibrium reaction rates, which may differ significantly from their non-equilibrium values.
We see this deficiency in Los Alamos National Laboratory's EMP code, CHAP-LA (Compton High Altitude Pulse-Los Alamos), when modeling certain EMP scenarios at high altitudes, such as upward EMP, where the ionization rate by secondary electrons is over predicted by the equilibrium model, causing the EMP to short abruptly. The objective of the PhD research is to mitigate this effect by integrating a conduction electron model into CHAP-LA which can calculate the conduction current based on a non-equilibrium electron distribution. We propose to use an electron swarm model to monitor the time evolution of conduction electrons in the EMP environment which is characterized by electric field and pressure. Swarm theory uses various collision frequencies and reaction rates to study how the electron distribution and the resultant transport coefficients change with time, ultimately reaching an equilibrium distribution. Validation of the swarm model we develop is a necessary step for completion of the thesis work. After validation, the swarm model is integrated in the air chemistry model CHAP-LA employs for conduction electron simulations. We test high altitude EMP simulations with the swarm model option in the air chemistry model to show improvements in the computational capability of CHAP-LA. A swarm model has been developed that is based on a previous swarm model developed by Higgins, Longmire and O'Dell 1973, hereinafter HLO. The code used for the swarm model calculation solves a system of coupled differential equations for electric field, electron temperature, electron number density, and drift velocity. Important swarm parameters, including the momentum transfer collision frequency, energy transfer collision frequency, and ionization rate, are recalculated and compared to the previously reported empirical results given by HLO. These swarm parameters are found using BOLSIG+, a two term Boltzmann solver developed by Hagelaar and Pitchford 2005. 
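The core of a swarm-model calculation, evolving the electron population under field-dependent rates until net growth or decay is established, can be caricatured as follows. The rate expressions here are hypothetical placeholders, not the HLO or BOLSIG+ coefficients used in this work.

```python
import math

# Caricature of swarm-model electron bookkeeping: the electron number
# density evolves under field-dependent ionization and attachment.
# Both rate expressions are hypothetical placeholders.
def swarm_density(E, steps=10000, dt=1e-9):
    nu_ion = 1e8 * math.exp(-5.0 / E)  # hypothetical ionization frequency (1/s)
    nu_att = 1e7                       # hypothetical attachment frequency (1/s)
    n = 1.0                            # normalised electron density
    for _ in range(steps):
        n += (nu_ion - nu_att) * n * dt
    return n

n_low = swarm_density(E=2.0)    # attachment dominates: density decays
n_high = swarm_density(E=10.0)  # ionization dominates: avalanche growth
```

A real swarm model couples this to equations for electron temperature and drift velocity and feeds the resulting conduction current back into the Maxwell solver.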
BOLSIG+ utilizes updated electron scattering cross sections that are defined over an expanded energy range found in the atomic and molecular cross section database published by Phelps in the Phelps Database 2014 on the LXcat website created by Pancheshnyi et al. 2012. The swarm model is also updated from the original HLO model by including additional physical parameters such as the O2 electron attachment rate, recombination rate, and mutual neutralization rate. This necessitates tracking the positive and negative ion densities in the swarm model. Adding these parameters, especially electron attachment, is important at lower EMP altitudes where atmospheric density is high. We compare swarm model equilibrium temperatures and times using the HLO and BOLSIG+ coefficients for a uniform electric field of 1 StatV/cm for a range of atmospheric heights. This is done in order to test sensitivity to the swarm parameters used in the swarm model. It is shown that the equilibrium temperature and time are sensitive to the modifications in the collision frequency and ionization rate based on the updated electron interaction cross sections. We validate the swarm model by comparing ionization coefficients and equilibrium drift velocities to experimental results over a wide range of reduced electric field values. The final part of the PhD thesis work includes integrating the swarm model into CHAP-LA. We discuss the physics included in the CHAP-LA EMP model and demonstrate EMP damping behavior caused by the ohmic model at high altitudes. We report on numerical techniques for incorporation of the swarm model into CHAP-LA's Maxwell solver. This includes a discussion of integration techniques for Maxwell's equations in CHAP-LA using the swarm model calculated conduction current. We show improvements on EMP parameter calculations when modeling a high altitude, upward EMP scenario. 
This provides a novel computational capability that will have an important impact on the atmospheric and EMP research community.
Optimising predictor domains for spatially coherent precipitation downscaling
NASA Astrophysics Data System (ADS)
Radanovics, S.; Vidal, J.-P.; Sauquet, E.; Ben Daoud, A.; Bontron, G.
2013-10-01
Statistical downscaling is widely used to overcome the scale gap between predictors from numerical weather prediction models or global circulation models and predictands like local precipitation, required for example for medium-term operational forecasts or climate change impact studies. The predictors are considered over a given spatial domain which is rarely optimised with respect to the target predictand location. In this study, an extended version of the growing rectangular domain algorithm is proposed to provide an ensemble of near-optimum predictor domains for a statistical downscaling method. This algorithm is applied to find five-member ensembles of near-optimum geopotential predictor domains for an analogue downscaling method for 608 individual target zones covering France. Results first show that very similar downscaling performances based on the continuous ranked probability score (CRPS) can be achieved by different predictor domains for any specific target zone, demonstrating the need for considering alternative domains in this context of high equifinality. A second result is the large diversity of optimised predictor domains over the country, which questions the commonly made hypothesis of a common predictor domain for large areas. The domain centres mainly follow the geographical location of the target zone, but there are apparent differences between the windward and lee sides of mountain ridges. Moreover, domains for target zones located in southeastern France are centred further east and south than those for target locations at the same longitude. The size of the optimised domains tends to be larger in the southeastern part of the country, while domains with a very small meridional extent can be found in an east-west band around 47° N.
Sensitivity experiments finally show that results are rather insensitive to the starting point of the optimisation algorithm except for zones located in the transition area north of this east-west band. Results also appear generally robust with respect to the archive length considered for the analogue method, except for zones with high interannual variability like in the Cévennes area. This study paves the way for defining regions with homogeneous geopotential predictor domains for precipitation downscaling over France, and therefore de facto ensuring the spatial coherence required for hydrological applications.
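The ensemble CRPS used above to rank predictor domains can be computed with the standard empirical estimator CRPS = mean|x_i - y| - 0.5 * mean|x_i - x_j|, sketched here on toy numbers.

```python
# Empirical CRPS for an ensemble forecast against a scalar observation,
# using the standard estimator: mean|x_i - y| - 0.5 * mean|x_i - x_j|.
def crps(ensemble, obs):
    n = len(ensemble)
    term1 = sum(abs(x - obs) for x in ensemble) / n
    term2 = sum(abs(a - b) for a in ensemble for b in ensemble) / (2 * n * n)
    return term1 - term2

# A sharper, well-centred ensemble scores lower (better).
print(crps([9.0, 10.0, 11.0], 10.0))
print(crps([5.0, 10.0, 15.0], 10.0))
```

Lower CRPS is better, so in the study a predictor domain is preferred when it lowers the CRPS of the analogue ensemble at the target zone.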
A model on CME/Flare initiation: Loss of Equilibrium caused by mass loss of quiescent prominences
NASA Astrophysics Data System (ADS)
Miley, George; Chon Nam, Sok; Kim, Mun Song; Kim, Jik Su
2015-08-01
A Coronal Mass Ejection (CME) model should explain both how enough energy is stored to eject giant bulk plasma into interplanetary space against the Sun's gravitation and how the eruption occurs explosively. Advocates of the 'mass loading' model (e.g. Low, B. 1996, SP, 167, 217) suggested a simple mechanism of CME initiation, the loss of mass from a prominence anchoring a magnetic flux rope, but they did not associate the mass loss with a loss of equilibrium. The catastrophic loss of equilibrium model is considered a prospective CME/flare model to explain the sudden eruption of magnetic flux systems. Isenberg, P. A., et al (1993, ApJ, 417, 368) developed an ideal magnetohydrodynamic theory of the magnetic flux rope showing the occurrence of a catastrophic loss of equilibrium as increasing magnetic flux is transported into the corona. We begin by extending their study to include gravity acting on the prominence material, obtaining equilibrium curves for given mass parameters, which measure the strength of the gravitational force compared with the characteristic magnetic force. Furthermore, we study the quasi-static evolution of the system, including the massive prominence flux rope and the current sheet below it, to obtain equilibrium curves of the prominence height as the mass parameter decreases in a fixed magnetic environment. The curves show loss-of-equilibrium behaviour, implying that mass loss results in loss of equilibrium. The released fractions of magnetic energy are greater than in the corresponding zero-mass case. This eruption mechanism is expected to apply to the eruptions of quiescent prominences, which are located in relatively weak magnetic environments with scale lengths of 10^5 km and photospheric magnetic fields of 10 G.
González-Méijome, José M; López-Alemany, Antonio; Lira, Madalena; Almeida, José B; Oliveira, M Elisabete C D Real; Parafita, Manuel A
2007-01-01
The purpose of the present study was to develop mathematical relationships that allow obtaining the equilibrium water content and refractive index of conventional and silicone hydrogel soft contact lenses from refractive index measures obtained with automated refractometry, or from equilibrium water content measures derived from manual refractometry, respectively. Twelve HEMA-based hydrogels of different hydration and four siloxane-based polymers were assayed. A manual refractometer and a digital refractometer were used. Polynomial models obtained from the sucrose curves of equilibrium water content against refractive index and vice versa were used, either considering the whole range of sucrose concentrations (16-100% equilibrium water content) or a range confined to the equilibrium water content of current soft contact lenses (approximately 20-80% equilibrium water content). Values of equilibrium water content measured with the Atago N-2E and those derived from refractive index measurements with the CLR 12-70 by applying the sucrose-based models displayed a strong linear correlation (r2 = 0.978). The same correlation was obtained when the models were applied to obtain refractive index values from the Atago N-2E and compared with those given by the CLR 12-70 (r2 = 0.978). No significantly different results were obtained between models derived from the whole range of sucrose solutions and the model limited to the normal range of soft contact lens hydration. The present results will have implications for future experimental and clinical research regarding normal hydration and dehydration experiments with hydrogel polymers, and particularly in the field of contact lenses. 2006 Wiley Periodicals, Inc.
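The conversion between the two quantities via a calibration model can be sketched as follows. The study fits polynomial models to sucrose calibration curves; this minimal version uses a least-squares line on hypothetical (refractive index, water content) pairs to show the two-way conversion.

```python
# Sketch of converting between refractive index and equilibrium water
# content via a fitted calibration model. The calibration pairs below are
# hypothetical; the study uses sucrose-based polynomial models.
def fit_line(x, y):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# Hypothetical pairs: refractive index vs equilibrium water content (%).
ri = [1.38, 1.40, 1.42, 1.44, 1.46]
ewc = [70.0, 60.0, 50.0, 40.0, 30.0]
a, b = fit_line(ri, ewc)

def ewc_from_ri(n):
    """Estimate equilibrium water content (%) from refractive index."""
    return a * n + b

def ri_from_ewc(w):
    """Invert the same line to estimate refractive index."""
    return (w - b) / a
```

A higher-order polynomial would be fitted the same way over the sucrose range when the relationship is visibly nonlinear.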
Andersen, Mathias Bækbo; Frey, Jared; Pennathur, Sumita; Bruus, Henrik
2011-01-01
We present a combined theoretical and experimental analysis of the solid-liquid interface of fused-silica nanofabricated channels with and without a hydrophilic 3-cyanopropyldimethylchlorosilane (cyanosilane) coating. We develop a model that relaxes the assumption that the surface parameters C1, C2, and pK+ are constant and independent of surface composition. Our theoretical model consists of three parts: (i) a chemical equilibrium model of the bare or coated wall, (ii) a chemical equilibrium model of the buffered bulk electrolyte, and (iii) a self-consistent Gouy-Chapman-Stern triple-layer model of the electrochemical double layer coupling these two equilibrium models. To validate our model, we used both pH-sensitive dye-based capillary filling experiments as well as electro-osmotic current-monitoring measurements. Using our model we predict the dependence of ζ potential, surface charge density, and capillary filling length ratio on ionic strength for different surface compositions, which can be difficult to achieve otherwise. Copyright © 2010 Elsevier Inc. All rights reserved.
Dynamics of a Delayed HIV-1 Infection Model with Saturation Incidence Rate and CTL Immune Response
NASA Astrophysics Data System (ADS)
Guo, Ting; Liu, Haihong; Xu, Chenglin; Yan, Fang
2016-12-01
In this paper, we investigate the dynamics of a five-dimensional virus model incorporating a saturation incidence rate, CTL immune response and three time delays, which represent the latent period, the virus production period and the immune response delay, respectively. We begin by proving the positivity and boundedness of the solutions. The model admits three possible equilibrium solutions, namely the infection-free equilibrium E0, the infectious equilibrium without immune response E1 and the infectious equilibrium with immune response E2. Moreover, by analysing the corresponding characteristic equations, the local stability of each of the feasible equilibria and the existence of a Hopf bifurcation at the equilibrium point E2 are established. Further, by using the fluctuation lemma and suitable Lyapunov functionals, it is shown that E0 is globally asymptotically stable when the basic reproductive number for viral infection R0 is less than unity. When the basic reproductive number for immune response R1 is less than unity and R0 is greater than unity, the equilibrium point E1 is globally asymptotically stable. Finally, some numerical simulations are carried out to illustrate the theoretical results.
A Harris-Todaro Agent-Based Model to Rural-Urban Migration
NASA Astrophysics Data System (ADS)
Espíndola, Aquino L.; Silveira, Jaylson J.; Penna, T. J. P.
2006-09-01
The Harris-Todaro model of the rural-urban migration process is revisited under an agent-based approach. The migration of the workers is interpreted as a process of social learning by imitation, formalized by a computational model. By simulating this model, we observe a transitional dynamics with continuous growth of the urban fraction of overall population toward an equilibrium. Such an equilibrium is characterized by stabilization of rural-urban expected wages differential (generalized Harris-Todaro equilibrium condition), urban concentration and urban unemployment. These classic results obtained originally by Harris and Todaro are emergent properties of our model.
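A minimal version of the imitation dynamics can be sketched as follows: each period, workers compare the urban expected wage (wage times employment probability) with the rural wage, and a fraction of the worse-off group switches sector. All parameter values are hypothetical, and the learning rule is far simpler than the paper's.

```python
import random

# Minimal agent-based sketch of Harris-Todaro migration by imitation.
# Parameters (wages, job slots, move probability) are hypothetical.
def simulate(workers=1000, periods=300, w_urban=2.0, jobs=400,
             w_rural=1.0, p_move=0.1, seed=0):
    rng = random.Random(seed)
    urban = 200  # initial urban workers
    for _ in range(periods):
        employment_rate = min(1.0, jobs / max(urban, 1))
        expected_urban = w_urban * employment_rate
        if expected_urban > w_rural:    # urban looks better: rural workers move in
            urban += sum(1 for _ in range(workers - urban) if rng.random() < p_move)
        elif expected_urban < w_rural:  # rural looks better: urban workers leave
            urban -= sum(1 for _ in range(urban) if rng.random() < p_move)
    return urban

urban_final = simulate()
# Population fluctuates around the point where expected wages equalise,
# i.e. w_urban * jobs / urban ~ w_rural, with persistent urban unemployment.
```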
Non-equilibrium Quasi-Chemical Nucleation Model
NASA Astrophysics Data System (ADS)
Gorbachev, Yuriy E.
2018-04-01
The quasi-chemical model, which is widely used to describe nucleation, is revised on the basis of recent results in the study of non-equilibrium effects in reacting gas mixtures (Kolesnichenko and Gorbachev in Appl Math Model 34:3778-3790, 2010; Shock Waves 23:635-648, 2013; Shock Waves 27:333-374, 2017). Non-equilibrium effects in chemical reactions are caused by the chemical reactions themselves, and therefore these contributions should be taken into account in the corresponding expressions for the reaction rates. Corrections to quasi-equilibrium reaction rates are of two types: (a) spatially homogeneous (caused by physical-chemical processes) and (b) spatially inhomogeneous (caused by gas expansion/compression processes and proportional to the divergence of the velocity). Both of these processes play an important role during nucleation and are included in the proposed model. The method developed for solving the generalized Boltzmann equation for chemically reactive gases is applied to solving the set of equations of the revised quasi-chemical model. It is shown that non-equilibrium processes lead to essential deviations of the quasi-stationary distribution, and therefore of the nucleation rate, from its traditional form.
A rumor transmission model with incubation in social networks
NASA Astrophysics Data System (ADS)
Jia, Jianwen; Wu, Wenjiang
2018-02-01
In this paper, we propose a rumor transmission model with an incubation period and constant recruitment in social networks. By analysing the model, we study the stability of the rumor-free equilibrium and derive the local stability condition of the rumor equilibrium. We use the geometric approach for ordinary differential equations to show the global stability of the rumor equilibrium. When ℜ0 = 1, the model undergoes a transcritical bifurcation. Furthermore, numerical simulations are used to support the analysis. Finally, some conclusions are presented.
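A generic rumor model with an incubation class can be integrated with simple Euler steps, as sketched below. The compartment structure (ignorant, incubating, spreader, stifler) and all parameters are illustrative, not the paper's exact system, and recruitment is omitted for brevity.

```python
# Sketch of rumor spread with an incubation class, Euler-integrated:
# ignorant (S) -> incubating (E) -> spreader (I) -> stifler (R).
# Structure and parameter values are illustrative only.
def simulate(beta=0.5, sigma=0.2, delta=0.1, days=400, dt=0.05):
    S, E, I, R = 0.99, 0.0, 0.01, 0.0
    for _ in range(int(days / dt)):
        new_exposed = beta * S * I   # ignorants hear the rumor
        dS = -new_exposed
        dE = new_exposed - sigma * E  # incubation before spreading
        dI = sigma * E - delta * I    # spreaders eventually stop
        dR = delta * I
        S += dS * dt
        E += dE * dt
        I += dI * dt
        R += dR * dt
    return S, E, I, R

S, E, I, R = simulate()
# The rumor eventually dies out; R records how many ever spread it.
```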
Fischer, Michael
2015-10-14
The chabazite-type silicoaluminophosphate SAPO-34 is a promising adsorbent for applications in thermal energy storage using water adsorption-desorption cycles. In order to develop a microscopic understanding of the impact of local heterogeneities and defects on the water adsorption properties, the interaction of different models of SAPO-34 with water was studied using dispersion-corrected density-functional theory (DFT-D) calculations. In addition to SAPO-34 with isolated silicon atoms, the calculations considered models incorporating two types of heterogeneities (silicon islands, aluminosilicate domains), and two defect-containing (partially and fully desilicated) systems. DFT-D optimisations were performed for systems with small amounts of adsorbed water, in which all H2O molecules can interact with framework protons, and systems with large amounts of adsorbed water (30 H2O molecules per unit cell). At low loadings, the host-guest interaction energy calculated for SAPO-34 with isolated Si atoms amounts to approximately -90 kJ mol⁻¹. While the presence of local heterogeneities leads to the creation of some adsorption sites that are energetically slightly more favourable, the interaction strength is drastically reduced in systems with defects. At high water loadings, energies in the range of -70 kJ mol⁻¹ are obtained for all models. The DFT-D interaction energies are in good agreement with experimentally measured heats of water adsorption. A detailed analysis of the equilibrium structures was used to gain insights into the binding modes at low coverages, and to assess the extent of framework deprotonation and changes in the coordination environment of aluminium atoms at high water loadings.
Optimisation in radiotherapy. III: Stochastic optimisation algorithms and conclusions.
Ebert, M
1997-12-01
This is the final article in a three-part examination of optimisation in radiotherapy. Previous articles have established the bases and form of the radiotherapy optimisation problem, and examined certain types of optimisation algorithm, namely those which perform some form of ordered search of the solution space (mathematical programming), and those which attempt to find the closest feasible solution to the inverse planning problem (deterministic inversion). The current paper examines algorithms which search the space of possible irradiation strategies by stochastic methods. The resulting iterative search methods move about the solution space by sampling random variates, which gradually become more constricted as the algorithm converges upon the optimal solution. This paper also discusses the implementation of optimisation in radiotherapy practice.
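The simplest instance of such a stochastic search is simulated annealing, in which random proposal steps shrink, and uphill moves become rarer, as a temperature parameter falls. This is a generic sketch; the quadratic toy objective stands in for a real treatment-plan score, and the schedule parameters are assumptions.

```python
import math
import random

# Generic simulated annealing sketch: random variates whose spread
# constricts as the temperature cools. The toy objective stands in for a
# radiotherapy plan score; all schedule parameters are assumptions.
def anneal(f, x0, t0=1.0, t_end=1e-3, cooling=0.995, seed=0):
    rng = random.Random(seed)
    x, fx, t = x0, f(x0), t0
    while t > t_end:
        cand = x + rng.gauss(0.0, t)  # proposal step shrinks with temperature
        fc = f(cand)
        # Accept downhill moves always; uphill moves with Boltzmann probability.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
        t *= cooling
    return x, fx

x_best, f_best = anneal(lambda x: (x - 3.0) ** 2, x0=-10.0)
```

The occasional acceptance of uphill moves is what lets the search escape local minima that would trap a purely greedy method.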
Optimisation of logistics processes of energy grass collection
NASA Astrophysics Data System (ADS)
Bányai, Tamás.
2010-05-01
The collection of energy grass is a logistics-intensive process [1]. The optimal design and control of transportation and collection subprocesses is a critical point of the supply chain. To avoid decisions made solely on experience and intuition, the optimisation and analysis of collection processes based on mathematical models and methods is the scientifically sound way forward. Within the frame of this work, the author focuses on the optimisation possibilities of the collection processes, especially from the point of view of transportation and related warehousing operations. The optimisation methods developed in the literature [2] take into account the harvesting processes, county-specific yields, transportation distances, erosion constraints, machinery specifications, and other key variables, but the possibility of multiple collection points and of multi-level collection was not taken into consideration. The possible areas of use of energy grass are very wide (energetic use, biogas and bio-alcohol production, the paper and textile industry, industrial fibre material, foddering purposes, biological soil protection [3], etc.), so not only a single-level but also a multi-level collection system with several collection and production facilities has to be taken into consideration. The input parameters of the optimisation problem are the following: total amount of energy grass to be harvested in each region; specific facility costs of collection, warehousing and production units; specific costs of transportation resources; pre-scheduling of the harvesting process; specific transportation and warehousing costs; pre-scheduling of the processing of energy grass at each facility (exclusive warehousing). The model takes into consideration the following assumptions: (1) cooperative relations among processing and production facilities; (2) capacity constraints are taken into account; (3) the cost function of transportation is non-linear; (4) the drivers' conditions are ignored.
The objective function of the optimisation is the maximisation of profit, i.e. the difference between revenue and cost. The objective function trades off the income of the assigned transportation demands against the logistics costs. The constraints are the following: (1) the free capacity of the assigned transportation resource is not less than the requested capacity of the transportation demand; (2) the calculated arrival time of the transportation resource at the harvesting place is not later than its requested arrival time; (3) the calculated arrival time of the transportation demand at the processing and production facility is not later than the requested arrival time; (4) each transportation demand is assigned to exactly one transportation resource and vice versa. The decision variables of the optimisation problem are the set of scheduling variables and the assignment of resources to transportation demands. The evaluation parameters of the optimised system are the following: total costs of the collection process; utilisation of transportation resources and warehouses; efficiency of production and/or processing facilities. The multidimensional heuristic optimisation method is based on a genetic algorithm, while the routing sequence is optimised with an ant colony algorithm: the optimal routes are calculated by the ant colony algorithm as a subroutine of the global optimisation method, and the optimal assignment is given by the genetic algorithm. An important part of the mathematical method is the sensitivity analysis of the objective function, which shows the influence of the different input parameters. Acknowledgements This research was implemented within the frame of the project entitled "Development and operation of the Technology and Knowledge Transfer Centre of the University of Miskolc".
The project was supported by the European Union and co-funded by the European Social Fund. References [1] P. R. Daniel: The Economics of Harvesting and Transporting Corn Stover for Conversion to Fuel Ethanol: A Case Study for Minnesota. University of Minnesota, Department of Applied Economics, 2006. http://ideas.repec.org/p/ags/umaesp/14213.html [2] T. G. Douglas, J. Brendan, D. Erin & V.-D. Becca: Energy and Chemicals from Native Grasses: Production, Transportation and Processing Technologies Considered in the Northern Great Plains. University of Minnesota, Department of Applied Economics, 2006. http://ideas.repec.org/p/ags/umaesp/13838.html [3] Energy grass homepage: www.energiafu.hu
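The nesting of the two heuristics described above (a genetic algorithm for the assignment, an ant-colony-style routing subroutine inside its fitness function) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the site coordinates, cost model, and the simplified randomised-nearest-neighbour "ant" routing are all invented for the example.

```python
# Sketch: GA assigns transportation demands to resources; a randomised
# routing subroutine (standing in for the ant colony algorithm) prices
# each assignment. All coordinates and parameters are hypothetical.
import math
import random

random.seed(1)

DEMANDS = [(2, 3), (5, 1), (6, 6), (1, 5)]   # harvesting sites (x, y)
DEPOT = (0, 0)
N_RESOURCES = 2

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def route_cost(sites, n_ants=20):
    """Ant-colony-flavoured subroutine: sample randomised nearest-neighbour
    tours from the depot and keep the cheapest tour found."""
    if not sites:
        return 0.0
    best = float("inf")
    for _ in range(n_ants):
        todo, pos, cost = list(sites), DEPOT, 0.0
        while todo:
            # bias towards nearby sites, with some exploration noise
            todo.sort(key=lambda s: dist(pos, s) * random.uniform(1.0, 1.3))
            nxt = todo.pop(0)
            cost += dist(pos, nxt)
            pos = nxt
        best = min(best, cost + dist(pos, DEPOT))
    return best

def fitness(assign):
    """Total routing cost of a demand-to-resource assignment vector."""
    return sum(route_cost([DEMANDS[i] for i, r in enumerate(assign) if r == k])
               for k in range(N_RESOURCES))

def genetic_assignment(pop_size=30, gens=40):
    pop = [[random.randrange(N_RESOURCES) for _ in DEMANDS]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)                 # cheapest assignments first
        survivors = pop[:pop_size // 2]
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(DEMANDS))
            child = a[:cut] + b[cut:]         # one-point crossover
            if random.random() < 0.2:         # mutation
                child[random.randrange(len(child))] = random.randrange(N_RESOURCES)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = genetic_assignment()
```

In the paper's terms, `fitness` would also include facility, warehousing and scheduling costs; only the nesting of the route optimiser inside the assignment optimiser is carried over here.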
NASA Astrophysics Data System (ADS)
Hurford, Anthony; Harou, Julien
2015-04-01
Climate change has challenged conventional methods of planning water resources infrastructure investment, which rely on the stationarity of time-series data, and it is not clear how best to use projections of future climatic conditions. Many-objective simulation-optimisation and trade-off analysis using evolutionary algorithms has been proposed as an approach to complex planning problems with multiple conflicting objectives. The search for promising assets and policies can be carried out across a range of climate projections to identify the configurations of infrastructure investment shown by model simulation to be robust under diverse future conditions. Climate projections can be used in different ways within a simulation model to represent the range of possible future conditions and to understand how optimal investments vary with hydrological conditions. We compare two approaches: optimising over an ensemble of different 20-year flow and PET time-series projections, and optimising separately for individual future scenarios built synthetically from the original ensemble. Comparing trade-off curves and surfaces generated by the two approaches helps understand the limits and benefits of optimising under different sets of conditions. The comparison is made for the Tana Basin in Kenya, where climate change combined with multiple conflicting objectives of water management and infrastructure investment makes decision-making particularly challenging.
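The two ways of using projections contrasted above, one robust decision optimised over the whole ensemble versus one decision per scenario, can be sketched with a toy reservoir model. Everything here is invented (synthetic inflows, a single release-target decision variable, a simple supplied-minus-penalised-shortfall objective); it only illustrates the structure of the comparison, not the Tana Basin study.

```python
# Sketch: optimise a release target over an inflow ensemble vs. per scenario.
import random

random.seed(0)

# ten synthetic 20-year monthly inflow series (invented numbers)
ENSEMBLE = [[random.uniform(5.0, 15.0) for _ in range(240)] for _ in range(10)]

def performance(release_target, inflows, capacity=50.0):
    """Supplied volume minus a double-weighted penalty on shortfalls."""
    storage, supplied, shortfall = capacity / 2.0, 0.0, 0.0
    for q in inflows:
        storage = min(capacity, storage + q)
        s = min(release_target, storage)
        supplied += s
        shortfall += release_target - s
        storage -= s
    return supplied - 2.0 * shortfall

def optimise(series_list):
    """Best release target, by total performance over the given series."""
    candidates = [6.0 + 0.5 * i for i in range(13)]      # targets 6.0 .. 12.0
    return max(candidates,
               key=lambda r: sum(performance(r, s) for s in series_list))

ensemble_opt = optimise(ENSEMBLE)                  # one robust decision
scenario_opts = [optimise([s]) for s in ENSEMBLE]  # one decision per scenario
```

The spread of `scenario_opts` around `ensemble_opt` is the toy analogue of comparing trade-off curves from the two approaches; a real study would use a many-objective evolutionary algorithm rather than this one-dimensional enumeration.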
Mghirbi, Oussama; LE Grusse, Philippe; Fabre, Jacques; Mandart, Elisabeth; Bord, Jean-Paul
2017-03-01
The health, environmental and socio-economic issues related to the massive use of plant protection products (PPPs) are a concern for all the stakeholders involved in the agricultural sector. These stakeholders, including farmers and territorial actors, have expressed a need for decision-support tools for the management of diffuse pollution related to plant protection practices and their impacts. To meet this need, we have developed a technical-economic model, "OptiPhy", for risk mitigation based on indicators of pesticide toxicity risk to applicator health (IRSA) and to the environment (IRTE), under the constraint of suitable economic outcomes. This technical-economic optimisation model is based on linear programming techniques and offers various scenarios to help the different actors choose plant protection products, depending on their levels of constraints and aspirations. The health and environmental risk indicators can be broken down into sub-indicators so that management can be tailored to the context. This model for technical-economic optimisation and management of plant protection practices can analyse scenarios for the reduction of pesticide-related risks by proposing combinations of substitution PPPs, according to criteria of efficiency, economic performance and vulnerability of the natural environment. The results of scenarios run on real crop management sequences (ITKs) in different cropping systems show that it is possible to reduce PPP pressure (treatment frequency index, TFI) and toxicity risks to applicator health (IRSA) and to the environment (IRTE) by up to approximately 50%.
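The core structure of such a linear programme, minimise a toxicity-risk indicator subject to an economic floor, can be sketched with invented coefficients. This is not OptiPhy: it uses two hypothetical strategies and one free variable, so the optimum can be found by a simple scan instead of a full LP solver.

```python
# Sketch of an OptiPhy-style trade-off (all numbers invented): allocate
# hectares between two plant-protection strategies, minimising a risk
# indicator while keeping the gross margin above a floor.
RISK = (8.0, 3.0)        # risk indicator per ha: strategy A vs. B
MARGIN = (900.0, 600.0)  # gross margin per ha: A is riskier but more profitable
TOTAL_HA = 10.0
MARGIN_FLOOR = 7500.0

def optimise_mix(step=0.01):
    """Scan the area given to strategy A (the single free variable once the
    area balance x0 + x1 = TOTAL_HA is imposed) for the least-risk feasible mix."""
    best = None
    x0 = 0.0
    while x0 <= TOTAL_HA + 1e-9:
        x1 = TOTAL_HA - x0
        margin = MARGIN[0] * x0 + MARGIN[1] * x1
        if margin >= MARGIN_FLOOR - 1e-9:            # economic constraint
            risk = RISK[0] * x0 + RISK[1] * x1
            if best is None or risk < best[0]:
                best = (risk, x0, x1)
        x0 += step
    return best

risk, ha_a, ha_b = optimise_mix()
```

With these numbers the margin floor binds: the least-risk feasible mix is 5 ha of each strategy, at a total risk of 55. A real model would carry many products, sub-indicators and constraints and use a proper LP solver.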
Medellín-Azuara, Josué; Harou, Julien J; Howitt, Richard E
2010-11-01
Given the high proportion of water used for agriculture in certain regions, the economic value of agricultural water can be an important tool for water management and policy development. This value is quantified using economic demand curves for irrigation water. Such demand functions show the incremental contribution of water to agricultural production. Water demand curves are estimated using econometric or optimisation techniques. Calibrated agricultural optimisation models allow the derivation of demand curves using smaller datasets than econometric models. This paper introduces these subject areas then explores the effect of spatial aggregation (upscaling) on the valuation of water for irrigated agriculture. A case study from the Rio Grande-Rio Bravo Basin in North Mexico investigates differences in valuation at farm and regional aggregated levels under four scenarios: technological change, warm-dry climate change, changes in agricultural commodity prices, and water costs for agriculture. The scenarios consider changes due to external shocks or new policies. Positive mathematical programming (PMP), a calibrated optimisation method, is the deductive valuation method used. An exponential cost function is compared to the quadratic cost functions typically used in PMP. Results indicate that the economic value of water at the farm level and the regionally aggregated level are similar, but that the variability and distributional effects of each scenario are affected by aggregation. Moderately aggregated agricultural production models are effective at capturing average-farm adaptation to policy changes and external shocks. Farm-level models best reveal the distribution of scenario impacts.
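The PMP logic mentioned above, calibrate a cost term so the model exactly reproduces observed behaviour, then trace a water demand curve, can be shown in a deliberately stripped-down single-crop sketch with invented numbers and a quadratic (not exponential) cost term.

```python
# One-crop PMP sketch (invented numbers): calibrate a quadratic cost term so
# the model reproduces the observed planted area, then sweep the water price
# to trace the irrigation-water demand curve.
OBSERVED_HA = 100.0   # observed planted area (ha)
NET_REV = 500.0       # net revenue per ha before the PMP cost term
WATER_PER_HA = 8.0    # thousand m3 of water per ha

# First-order condition of max_x NET_REV*x - 0.5*gamma*x**2 is
# x* = NET_REV / gamma, so calibration to the observed area is exact:
gamma = NET_REV / OBSERVED_HA

def optimal_area(water_price):
    """Profit-maximising area when water costs `water_price` per thousand m3."""
    return max(0.0, (NET_REV - water_price * WATER_PER_HA) / gamma)

# demand curve: (price, water demanded) pairs
demand_curve = [(p, WATER_PER_HA * optimal_area(p)) for p in range(0, 70, 10)]
```

At zero water price the model reproduces the observed 100 ha exactly (the point of PMP calibration), and water demand falls as the price rises; swapping the quadratic term for an exponential one changes the curvature of that demand curve, which is the comparison the paper makes.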
Aveling, Emma-Louise; Martin, Graham; Jiménez García, Senai; Martin, Lisa; Herbert, Georgia; Armstrong, Natalie; Dixon-Woods, Mary; Woolhouse, Ian
2012-12-01
Peer review offers a promising way of promoting improvement in health systems, but the optimal model is not yet clear. We aimed to describe a specific peer review model, reciprocal peer-to-peer review (RP2PR), to identify the features that appeared to support optimal functioning. We conducted an ethnographic study involving observations, interviews and documentary analysis of the Improving Lung Cancer Outcomes Project, which involved 30 paired multidisciplinary lung cancer teams participating in facilitated reciprocal site visits. Analysis was based on the constant comparative method. Fundamental features of the model include multidisciplinary participation; a focus on discussion and observation of teams in action rather than paperwork; facilitated reflection and discussion on data and observations; and support to develop focused improvement plans. Five key features were identified as important in optimising this model: peers and pairing methods; minimising logistic burden; structure of visits; independent facilitation; and credibility of the process. Facilitated RP2PR was generally a positive experience for participants, but implementing improvement plans was challenging and required substantial support. RP2PR appears to be optimised when it is well organised, a safe environment for learning is created, credibility is maximised, and implementation and impact are supported. RP2PR is seen as credible and legitimate by lung cancer teams and can act as a powerful stimulus to produce focused quality improvement plans and to support implementation. Our findings identify how RP2PR functioned and may be optimised to provide a constructive, open space for identifying opportunities for improvement and solutions.
A general equilibrium model of a production economy with asset markets
NASA Astrophysics Data System (ADS)
Raberto, Marco; Teglio, Andrea; Cincotti, Silvano
2006-10-01
In this paper, a general equilibrium model of a monetary production economy is presented. The model is characterised by three classes of agents: a representative firm, heterogeneous households, and the government. Two markets are considered (a labour market and a goods market), and two assets, government bonds and equities, are traded in exchange for money. Households provide the labour force and decide on consumption and savings, whereas the firm provides consumption goods and demands labour. The government receives taxes from households and pays interest on its debt. The Walrasian equilibrium is derived analytically. The dynamics through quantity-constrained equilibria away from the Walrasian equilibrium is also studied by means of computer simulations.
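The notion of a Walrasian equilibrium used above can be made concrete with a toy computation. The functional forms below are invented (they are not the paper's model): a household labour-supply curve, a firm with production y = 20·sqrt(L), and a bisection search for the market-clearing real wage.

```python
# Toy Walrasian equilibrium: find the real wage clearing the labour market.
def labour_supply(w):
    """Households work more at higher real wages (invented form)."""
    return 100.0 * w / (1.0 + w)

def labour_demand(w):
    """Firm with y = 20*sqrt(L): the FOC 10/sqrt(L) = w gives L = (10/w)**2."""
    return (10.0 / w) ** 2

def excess_demand(w):
    return labour_demand(w) - labour_supply(w)

def walras_wage(lo=0.1, hi=10.0, tol=1e-10):
    """Bisection on excess labour demand, which is monotone decreasing in w."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if excess_demand(mid) > 0:
            lo = mid          # demand exceeds supply: the wage must rise
        else:
            hi = mid
    return 0.5 * (lo + hi)

w_star = walras_wage()
employment = labour_supply(w_star)
output = 20.0 * employment ** 0.5
```

For these forms the clearing condition reduces to w³ = w + 1, so the computed wage is about 1.325; the paper's model adds bonds, equities and the government budget, but the fixed-point logic is the same.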
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Z.; Anthony, R.G.; Miller, J.E.
1997-06-01
An equilibrium multicomponent ion exchange model is presented for the ion exchange of group I metals by TAM-5, a hydrous crystalline silicotitanate. On the basis of the data from ion exchange and structure studies, the solid phase is represented as Na₃X instead of the usual form NaX. With this solid-phase representation, the solid can be considered an ideal phase. A set of model ion exchange reactions is proposed for ion exchange between H⁺, Na⁺, K⁺, Rb⁺, and Cs⁺. The equilibrium constants for these reactions were estimated from experiments with simple ion exchange systems. Bromley's model for activity coefficients of electrolytic solutions was used to account for liquid-phase nonideality. Bromley's model parameters for CsOH at high ionic strength and for NO₂⁻ and Al(OH)₄⁻ were estimated in order to apply the model to complex waste simulants. The equilibrium compositions and distribution coefficients of counterions were calculated for complex simulants typical of DOE wastes by solving the equilibrium equations for the model reactions together with the material balance equations. The predictions match the experimental results within 10% for all of these solutions.
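The coupled mass-action and material-balance solve described above can be illustrated on the simplest possible case: a binary Na⁺/Cs⁺ exchange with ideal phases and an invented selectivity constant (the real model handles five counterions and Bromley activity corrections).

```python
# Binary ion-exchange sketch: NaX + Cs+ <-> CsX + Na+ with mass-action
# constant K, closed by material balances and solved by bisection.
# All constants are hypothetical.
K = 50.0               # selectivity of the solid for Cs over Na (assumed)
CEC = 1.0              # exchange capacity, mmol per g solid
SOLID = 0.1            # g solid per mL solution
CS0, NA0 = 0.05, 1.0   # initial liquid concentrations, mmol/mL

def residual(q_cs):
    """Mass-action residual when q_cs mmol Cs per g solid has exchanged."""
    cs = CS0 - SOLID * q_cs       # liquid Cs remaining (material balance)
    na = NA0 + SOLID * q_cs       # liquid Na released
    x_cs = q_cs / CEC             # solid-phase mole fractions (ideal solid)
    x_na = 1.0 - x_cs
    return K * x_na * cs - x_cs * na

def solve(lo=0.0, hi=None, tol=1e-12):
    """Bisection; the residual decreases monotonically in q_cs."""
    hi = hi if hi is not None else min(CEC, CS0 / SOLID)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if residual(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

q_cs = solve()
kd = q_cs / (CS0 - SOLID * q_cs)   # distribution coefficient, mL/g
```

The multicomponent version solves one such mass-action equation per model reaction simultaneously, with activity coefficients multiplying the liquid-phase concentrations.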
Prosperi, Mattia C. F.; Rosen-Zvi, Michal; Altmann, André; Zazzi, Maurizio; Di Giambenedetto, Simona; Kaiser, Rolf; Schülter, Eugen; Struck, Daniel; Sloot, Peter; van de Vijver, David A.; Vandamme, Anne-Mieke; Sönnerborg, Anders
2010-01-01
Background Although genotypic resistance testing (GRT) is recommended to guide combination antiretroviral therapy (cART), funding and/or facilities to perform GRT may not be available in low to middle income countries. Since treatment history (TH) impacts response to subsequent therapy, we investigated a set of statistical learning models to optimise cART in the absence of GRT information. Methods and Findings The EuResist database was used to extract 8-week and 24-week treatment change episodes (TCE) with GRT and additional clinical, demographic and TH information. Random Forest (RF) classification was used to predict 8- and 24-week success, defined as undetectable HIV-1 RNA, comparing nested models including (i) GRT+TH and (ii) TH without GRT, using multiple cross-validation and area under the receiver operating characteristic curve (AUC). Virological success was achieved in 68.2% and 68.0% of TCE at 8- and 24-weeks (n = 2,831 and 2,579), respectively. RF (i) and (ii) showed comparable performances, with an average (st.dev.) AUC 0.77 (0.031) vs. 0.757 (0.035) at 8-weeks, 0.834 (0.027) vs. 0.821 (0.025) at 24-weeks. Sensitivity analyses, carried out on a data subset that included antiretroviral regimens commonly used in low to middle income countries, confirmed our findings. Training on subtype B and validation on non-B isolates resulted in a decline of performance for models (i) and (ii). Conclusions Treatment history-based RF prediction models are comparable to GRT-based for classification of virological outcome. These results may be relevant for therapy optimisation in areas where availability of GRT is limited. Further investigations are required in order to account for different demographics, subtypes and different therapy switching strategies. PMID:21060792
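The comparison metric used above, the area under the ROC curve for nested models on the same outcomes, has a compact rank-statistic form. The sketch below is only the metric, not the EuResist models: labels and scores are invented, with `scores_i` standing in for a richer (GRT+TH) model and `scores_ii` for a TH-only model.

```python
# AUC via the Mann-Whitney statistic, used to compare two nested classifiers.
def auc(labels, scores):
    """P(score of a random positive > score of a random negative),
    counting ties as half a win."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels    = [1, 1, 0, 1, 0, 0, 1, 0]                    # virological success
scores_i  = [0.9, 0.8, 0.75, 0.7, 0.2, 0.4, 0.65, 0.3]  # e.g. GRT + TH model
scores_ii = [0.8, 0.6, 0.55, 0.7, 0.3, 0.5, 0.4, 0.35]  # e.g. TH-only model

auc_i, auc_ii = auc(labels, scores_i), auc(labels, scores_ii)
```

On these toy scores the two models tie at AUC 0.875, mirroring the paper's finding that the history-only model is competitive; the study computes the same quantity over cross-validated Random Forest predictions.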
Light-activated control of protein channel assembly mediated by membrane mechanics
NASA Astrophysics Data System (ADS)
Miller, David M.; Findlay, Heather E.; Ces, Oscar; Templer, Richard H.; Booth, Paula J.
2016-12-01
Photochemical processes provide versatile triggers of chemical reactions. Here, we use a photoactivated lipid switch to modulate the folding and assembly of a protein channel within a model biological membrane. In contrast to the information-rich field of water-soluble protein folding, there is only a limited understanding of the assembly of proteins that are integral to biological membranes. It is, however, possible to exploit the forbidding hydrophobic lipid environment and control membrane protein folding via lipid bilayer mechanics. Mechanical properties such as lipid chain lateral pressure influence the insertion and folding of proteins in membranes, with different stages of folding having contrasting sensitivities to the bilayer properties. Studies to date have relied on altering bilayer properties through lipid compositional changes made at equilibrium, and thus can only act before or after folding. We show that light-activation of photoisomerisable di-(5-[[4-(4-butylphenyl)azo]phenoxy]pentyl)phosphate (4-Azo-5P) lipids influences the folding and assembly of the pentameric bacterial mechanosensitive channel MscL. The use of a photochemical reaction enables the bilayer properties to be altered during folding, which is unprecedented. This mechanical manipulation during folding allows the insertion, folding and assembly steps of the components to be optimised separately within the same lipid system. The photochemical approach offers the potential to control channel assembly when generating synthetic devices that exploit the mechanosensitive protein as a nanovalve.
NASA Astrophysics Data System (ADS)
Du, Xinzhong; Shrestha, Narayan Kumar; Ficklin, Darren L.; Wang, Junye
2018-04-01
Stream temperature is an important indicator for biodiversity and sustainability in aquatic ecosystems. The stream temperature model currently in the Soil and Water Assessment Tool (SWAT) only considers the impact of air temperature on stream temperature, while the hydroclimatological stream temperature model developed within the SWAT model considers hydrology and the impact of air temperature in simulating the water-air heat transfer process. In this study, we modified the hydroclimatological model by including the equilibrium temperature approach to model heat transfer processes at the water-air interface, which reflects the influences of air temperature, solar radiation, wind speed and streamflow conditions on the heat transfer process. The thermal capacity of the streamflow is modeled by the variation of the stream water depth. An advantage of this equilibrium temperature model is the simple parameterization, with only two parameters added to model the heat transfer processes. The equilibrium temperature model proposed in this study is applied and tested in the Athabasca River basin (ARB) in Alberta, Canada. The model is calibrated and validated at five stations throughout different parts of the ARB, where close to monthly samplings of stream temperatures are available. The results indicate that the equilibrium temperature model proposed in this study provided better and more consistent performances for the different regions of the ARB with the values of the Nash-Sutcliffe Efficiency coefficient (NSE) greater than those of the original SWAT model and the hydroclimatological model. To test the model performance for different hydrological and environmental conditions, the equilibrium temperature model was also applied to the North Fork Tolt River Watershed in Washington, United States. The results indicate a reasonable simulation of stream temperature using the model proposed in this study, with minimum relative error values compared to the other two models. 
However, the NSE values were lower than those of the hydroclimatological model, indicating that more model verification needs to be done. The equilibrium temperature model uses existing SWAT meteorological data as input, can be calibrated using fewer parameters and less effort and has an overall better performance in stream temperature simulation. Thus, it can be used as an effective tool for predicting the changes in stream temperature regimes under varying hydrological and meteorological conditions. In addition, the impact of the stream temperature simulations on chemical reaction rates and concentrations was tested. The results indicate that the improved performance of the stream temperature simulation could significantly affect chemical reaction rates and the simulated concentrations, and the equilibrium temperature model could be a potential tool to model stream temperature in water quality simulations.
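The appeal of the equilibrium temperature approach described above is its simple parameterisation: water temperature relaxes towards an equilibrium temperature (set by air temperature, radiation and wind) at a rate scaled by an exchange coefficient and the stream depth. A minimal sketch of that update, with invented coefficients and forcing rather than SWAT's actual formulation, is:

```python
# Equilibrium-temperature update sketch: dT/dt = k * (T_eq - T) / depth.
# k and the daily (T_eq, depth) forcing below are invented; note the explicit
# step is stable only while k * dt / depth < 1.
def step(t_water, t_equil, depth_m, k=0.6, dt=1.0):
    """One daily update; k (m/day) is a bulk surface heat-exchange rate."""
    return t_water + k * (t_equil - t_water) * dt / depth_m

t = 5.0                                       # initial water temperature, C
series = []
for t_eq, depth in [(12.0, 1.0), (14.0, 1.0), (15.0, 0.8), (15.0, 0.8), (10.0, 1.2)]:
    t = step(t, t_eq, depth)
    series.append(round(t, 2))
```

Shallower water (smaller `depth_m`) tracks the equilibrium temperature faster, which is how the model captures the thermal capacity of the streamflow; the two added parameters in the paper play the role of `k` here.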
Non-equilibrium STLS approach to transport properties of single impurity Anderson model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rezai, Raheleh, E-mail: R_Rezai@sbu.ac.ir; Ebrahimi, Farshad, E-mail: Ebrahimi@sbu.ac.ir
In this work, using the non-equilibrium Keldysh formalism, we study the effects of the electron–electron interaction and the electron-spin correlation on the non-equilibrium Kondo effect and the transport properties of the symmetric single impurity Anderson model (SIAM) at zero temperature by generalising the self-consistent method of Singwi, Tosi, Land, and Sjolander (STLS) for a single-band tight-binding model with Hubbard-type interaction to out-of-equilibrium steady states. We first determine, in a self-consistent manner, the non-equilibrium spin correlation function, the effective Hubbard interaction, and the double occupancy at the impurity site. Then, using the non-equilibrium STLS spin polarization function in the non-equilibrium formalism of the iterative perturbation theory (IPT) of Yosida and Yamada, and Horvatic and Zlatic, we compute the spectral density, the current–voltage characteristics and the differential conductance as functions of the applied bias and the strength of the on-site Hubbard interaction. We compare our spectral densities at zero bias with the results of the numerical renormalization group (NRG) and depict the effects of the electron–electron interaction and electron-spin correlation at the impurity site on the aforementioned properties by comparing our numerical results with the order-U² IPT. Finally, we show that the obtained numerical results on the differential conductance have a quadratic universal scaling behaviour and the resulting Kondo temperature shows an exponential behaviour. Highlights: •We introduce for the first time the non-equilibrium method of STLS for Hubbard-type models. •We determine the transport properties of SIAM using the non-equilibrium STLS method. •We compare our results with order-U² IPT and NRG. •We show that non-equilibrium STLS, contrary to the GW and self-consistent RPA, produces the two Hubbard peaks in the DOS. •We show that the method keeps the universal scaling behaviour and the correct exponential behaviour of the Kondo temperature.
Conception et optimisation d'une peau en composite pour une aile adaptative = Design and optimisation of a composite skin for a morphing wing
NASA Astrophysics Data System (ADS)
Michaud, Francois
Economic and environmental concerns are major drivers for the development of new aeronautical technologies. The MDO-505 project, entitled Morphing Architectures and Related Technologies for Wing Efficiency Improvement, was born from this perspective. The objective of this project is to design an active morphing wing that improves laminarity and thereby reduces the aircraft's fuel consumption and emissions. The research carried out led to the design and optimisation of an adaptive composite skin that improves laminarity while preserving structural integrity. First, a three-step optimisation method was developed with the objective of minimising the mass of the composite skin while ensuring that, through active control of the morphing surface, it conforms to the desired aerodynamic profiles. The optimisation process also included strength, stability and stiffness constraints on the composite skin. After the optimisation, the optimised skin was simplified to ease manufacturing and to comply with the design rules of Bombardier Aerospace. This optimisation process produced a composite skin whose shape deviations or errors were greatly reduced, so as to match the optimised aerodynamic profiles as closely as possible. Aerodynamic analyses based on these shapes predicted good improvements in laminarity. Subsequently, a series of analytical validations was performed to verify the structural integrity of the composite skin, following the methods generally used at Bombardier Aerospace. First, a comparative finite element analysis validated that the stiffness of the morphing wing was equivalent to that of the original wing section. 
The finite element model was then coupled with calculation spreadsheets to validate the stability and strength of the composite skin under real aerodynamic load cases. Finally, a bolted-joint analysis was performed using an in-house tool named LJ 85 BJSFM GO.v9 developed by Bombardier Aerospace. These analyses numerically validated the structural integrity of the composite skin for typical aeronautical loadings and material allowables.
NASA Astrophysics Data System (ADS)
Pang, Kar Mun; Jangi, Mehdi; Bai, Xue-Song; Schramm, Jesper
2015-05-01
In this work, a two-dimensional computational fluid dynamics study is reported of an n-heptane combustion event and the associated soot formation process in a constant volume combustion chamber. The key interest here is to evaluate the sensitivity of the chemical kinetics and submodels of a semi-empirical soot model in predicting the associated events. Numerical computation is performed using an open-source code and a chemistry coordinate mapping approach is used to expedite the calculation. A library consisting of various phenomenological multi-step soot models is constructed and integrated with the spray combustion solver. Prior to the soot modelling, combustion simulations are carried out. Numerical results show that the ignition delay times and lift-off lengths exhibit good agreement with the experimental measurements across a wide range of operating conditions, apart from the cases with ambient temperature lower than 850 K. The variation of the soot precursor production with respect to the change of ambient oxygen levels qualitatively agrees with that of the conceptual models when the skeletal n-heptane mechanism is integrated with a reduced pyrene chemistry. Subsequently, a comprehensive sensitivity analysis is carried out to appraise the existing soot formation and oxidation submodels. It is revealed that the soot formation is captured when the surface growth rate is calculated using a square root function of the soot specific surface area and when a pressure-dependent model constant is considered. An optimised soot model is then proposed based on the knowledge gained through this exercise. With the optimised model implemented, the simulated soot onset and transport phenomena before reaching the quasi-steady state agree reasonably well with the experimental observations. The variations of the spatial soot distribution and of the soot mass produced at oxygen molar fractions ranging from 10.0 to 21.0%, for both low- and high-density conditions, are also reproduced.
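The sensitivity highlighted above, whether the surface growth rate scales with the specific surface area or with its square root, can be illustrated with a zero-dimensional toy integration. All rate constants and the area-mass relation below are invented; the sketch only shows how the two growth laws drive the soot mass to very different quasi-steady levels.

```python
# Toy soot-mass ODE: dm/dt = k_form * surf(m) - k_ox * m, where surf is
# either the specific surface area (~ m**(2/3)) or its square root.
# All rates are hypothetical; explicit Euler with a small, stable step.
import math

def grow(sqrt_area_law, dt=1e-5, steps=10000, k_form=5.0, k_ox=100.0):
    m = 1e-6                                  # initial soot mass, arb. units
    for _ in range(steps):
        area = m ** (2.0 / 3.0)               # area ~ m^(2/3) at fixed number
        surf = math.sqrt(area) if sqrt_area_law else area
        m += dt * (k_form * surf - k_ox * m)  # surface growth minus oxidation
    return m

m_sqrt, m_lin = grow(True), grow(False)       # square-root vs. linear law
```

With identical rate constants the square-root law settles roughly two orders of magnitude higher here, which is the kind of leverage that makes this submodel choice (together with the pressure-dependent constant) dominate the calibration.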
NASA Astrophysics Data System (ADS)
van Schaik, Joris W. J.; Kleja, Dan B.; Gustafsson, Jon Petter
2010-02-01
Vast amounts of knowledge about the proton- and metal-binding properties of dissolved organic matter (DOM) in natural waters have been obtained in studies on isolated humic and fulvic (hydrophobic) acids. Although macromolecular hydrophilic acids normally make up about one-third of DOM, their proton- and metal-binding properties are poorly known. Here, we investigated the acid-base and Cu-binding properties of the hydrophobic (fulvic) acid fraction and two hydrophilic fractions isolated from a soil solution. Proton titrations revealed a higher total charge for the hydrophilic acid fractions than for the hydrophobic acid fraction. The most hydrophilic fraction appeared to be dominated by weak acid sites, as evidenced by increased slope of the curve of surface charge versus pH at pH values above 6. The titration curves were poorly predicted by both Stockholm Humic Model (SHM) and NICA-Donnan model calculations using generic parameter values, but could be modelled accurately after optimisation of the proton-binding parameters (pH ⩽ 9). Cu-binding isotherms for the three fractions were determined at pH values of 4, 6 and 9. With the optimised proton-binding parameters, the SHM model predictions for Cu binding improved, whereas the NICA-Donnan predictions deteriorated. After optimisation of Cu-binding parameters, both models described the experimental data satisfactorily. Iron(III) and aluminium competed strongly with Cu for binding sites at both pH 4 and pH 6. The SHM model predicted this competition reasonably well, but the NICA-Donnan model underestimated the effects significantly at pH 6. Overall, the Cu-binding behaviour of the two hydrophilic acid fractions was very similar to that of the hydrophobic acid fraction, despite the differences observed in proton-binding characteristics. These results show that for modelling purposes, it is essential to include the hydrophilic acid fraction in the pool of 'active' humic substances.
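The parameter-optimisation step described above (refitting proton-binding parameters until the titration curves are matched) can be sketched far more simply than SHM or NICA-Donnan allow: below, a one-site monoprotic model is fitted to synthetic "titration" points by least squares over a coarse grid. The data, site model and grid are all invented for illustration.

```python
# Fit a site density q_max and pKa of a monoprotic model to synthetic
# titration data by grid-search least squares (hypothetical throughout).
def charge(ph, q_max, pka):
    """Negative charge (mmol/g) of a monoprotic acid site at given pH."""
    frac = 1.0 / (1.0 + 10.0 ** (pka - ph))   # fraction deprotonated
    return q_max * frac

# synthetic "observations", generated from q_max = 1.0, pKa = 4.0 and rounded
OBS = [(4.0, 0.50), (5.0, 0.91), (6.0, 0.99), (7.0, 1.00)]

def fit():
    best = None
    for qi in range(5, 31):                   # q_max from 0.5 to 3.0
        for ki in range(20, 81):              # pKa from 2.0 to 8.0
            q_max, pka = qi / 10.0, ki / 10.0
            sse = sum((charge(ph, q_max, pka) - q) ** 2 for ph, q in OBS)
            if best is None or sse < best[0]:
                best = (sse, q_max, pka)
    return best

sse, q_max, pka = fit()
```

The real optimisation fits several site types and electrostatic parameters with a gradient-based least-squares routine, but the objective, squared misfit between modelled and measured charge across pH, is the same.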
NASA Astrophysics Data System (ADS)
Isken, Marius P.; Sudhaus, Henriette; Heimann, Sebastian; Steinberg, Andreas; Bathke, Hannes M.
2017-04-01
We present a modular open-source software framework (pyrocko, kite, grond; http://pyrocko.org) for rapid InSAR data post-processing and modelling of tectonic and volcanic displacement fields derived from satellite data. Our aim is to ease and streamline the joint optimisation of earthquake observations from InSAR and GPS data together with seismological waveforms for an improved estimation of the ruptures' parameters. Through this approach we can provide finite models of earthquake ruptures and therefore contribute to a timely and better understanding of earthquake kinematics. The new kite module enables fast processing of unwrapped InSAR scenes for source modelling: the spatial sub-sampling and data error/noise estimation for the interferogram is evaluated automatically and interactively. The rupture's near-field surface displacement data are then combined with seismic far-field waveforms and jointly modelled using the pyrocko.gf framework, which allows fast forward modelling based on pre-calculated elastodynamic and elastostatic Green's functions. Lastly, the grond module supplies a bootstrap-based probabilistic (Monte Carlo) joint optimisation to estimate the parameters and uncertainties of a finite-source earthquake rupture model. We describe the developed and applied methods as an effort to establish a semi-automatic processing and modelling chain. The framework is applied to Sentinel-1 data from the 2016 Central Italy earthquake sequence, where we present the earthquake mechanism and rupture model, from which we derive regions of increased Coulomb stress. The open-source software framework is developed at GFZ Potsdam and at the University of Kiel, Germany; it is written in the Python and C programming languages. The toolbox architecture is modular and independent and can be utilised flexibly for a variety of geophysical problems. 
This work is conducted within the BridGeS project (http://www.bridges.uni-kiel.de) funded by the German Research Foundation DFG through an Emmy-Noether grant.
NASA Astrophysics Data System (ADS)
Dubey, M.; Chandra, H.; Kumar, Anil
2016-02-01
A thermal model for the performance evaluation of a gas turbine cogeneration system with reheat is presented in this paper. The Joule-Brayton cogeneration reheat cycle has been optimised on the basis of the total useful energy rate (TUER), and the efficiency at maximum TUER is determined. The variations of the maximum dimensionless TUER and of the efficiency at maximum TUER with respect to the cycle temperature ratio have also been analysed. The results show that the dimensionless maximum TUER and the corresponding thermal efficiency decrease as the power-to-heat ratio increases, and that the inclusion of reheat significantly improves the overall performance of the cycle. From a thermodynamic performance point of view, this methodology may be quite useful in the selection and comparison of combined energy production systems.
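The idea of optimising a cogeneration cycle against a total-useful-energy-rate objective can be sketched with a back-of-envelope air-standard Brayton calculation. Everything below is an assumption for illustration: no reheat, ideal components, invented temperatures, and a weight of 0.5 on process heat relative to shaft work.

```python
# Sweep the compressor temperature ratio x = r**((gamma-1)/gamma) of an ideal
# simple Brayton cycle to maximise a TUER-style objective: net work plus
# (discounted) recoverable exhaust heat above a stack temperature.
CP, T1, T3, T_STACK = 1.005, 300.0, 1500.0, 400.0  # kJ/(kg K), K, K, K
HEAT_VALUE = 0.5      # assumed worth of process heat relative to shaft work
tau = T3 / T1         # cycle temperature ratio

def net_work(x):
    """Ideal-cycle specific work at compressor temperature ratio x."""
    return CP * T1 * (tau * (1.0 - 1.0 / x) - (x - 1.0))

def useful_heat(x):
    t4 = T3 / x                        # turbine exhaust temperature
    return CP * max(0.0, t4 - T_STACK)

def tuer(x):
    return net_work(x) + HEAT_VALUE * useful_heat(x)

xs = [1.0 + 0.01 * i for i in range(1, 300)]
x_best = max(xs, key=tuer)
# thermal efficiency at maximum TUER (heat added q_in = CP*(T3 - T1*x))
eta_best = net_work(x_best) / (CP * (T3 - T1 * x_best))
```

The optimum lands well below the maximum-work pressure ratio because exhaust heat is worth something, reproducing the qualitative TUER trade-off; the paper's analysis adds reheat and the power-to-heat ratio as an explicit parameter.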
Operational modes, health, and status monitoring
NASA Astrophysics Data System (ADS)
Taljaard, Corrie
2016-08-01
System Engineers must fully understand the system, its support system and operational environment to optimise the design. Operations and Support Managers must also identify the correct metrics to measure the performance and to manage the operations and support organisation. Reliability Engineering and Support Analysis provide methods to design a Support System and to optimise the Availability of a complex system. Availability modelling and Failure Analysis during the design is intended to influence the design and to develop an optimum maintenance plan for a system. The remote site locations of the SKA Telescopes place emphasis on availability, failure identification and fault isolation. This paper discusses the use of Failure Analysis and a Support Database to design a Support and Maintenance plan for the SKA Telescopes. It also describes the use of modelling to develop an availability dashboard and performance metrics.
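The availability arithmetic behind such a dashboard can be sketched in a few lines. The MTBF/MTTR figures and the series/redundant structure below are invented, not SKA values: inherent availability per unit is rolled up over units in series (all must work) and over a redundant pair (one of two suffices).

```python
# Availability roll-up sketch (hypothetical numbers and structure).
def availability(mtbf_h, mttr_h):
    """Inherent availability of a single unit."""
    return mtbf_h / (mtbf_h + mttr_h)

def series(*avails):
    """All units must work: availabilities multiply."""
    out = 1.0
    for a in avails:
        out *= a
    return out

def one_of_two(a):
    """Redundant pair: fails only if both units are down."""
    return 1.0 - (1.0 - a) ** 2

dish = availability(8760.0, 24.0)                 # hypothetical dish LRU
correlator = one_of_two(availability(4380.0, 48.0))
telescope = series(dish, correlator)
```

For remote sites, the repair time fed into `availability` would include the logistic delay of reaching the site, which is exactly why the SKA analysis emphasises failure identification and fault isolation.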
Understanding the role of monolayers in retarding evaporation from water storage bodies
NASA Astrophysics Data System (ADS)
Fellows, Christopher M.; Coop, Paul A.; Lamb, David W.; Bradbury, Ronald C.; Schiretz, Helmut F.; Woolley, Andrew J.
2015-03-01
A 'barrier model' of evaporation retardation by monomolecular films does not explain the observed effect of air velocity on relative evaporation rates in the presence and absence of such films. An alternative mechanism attributes the reduced evaporation to a reduction of surface roughness, which in turn increases the effective vapour pressure of water above the surface. On this view, evaporation suppression effectiveness under field conditions should be predictable from measurements of the surface dilational modulus of monolayers, and research directed at optimising this mechanism should be more fruitful than research aimed at optimising a monolayer as an impermeable barrier.
NASA Astrophysics Data System (ADS)
Rayhana, N.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.; Sazli, M.; Yahya, Z. R.
2017-09-01
This study presents the application of an optimisation method to reduce the warpage of a side arm part. Autodesk Moldflow Insight software was used to analyse the warpage. A design of experiments (DOE) for response surface methodology (RSM) was constructed, and particle swarm optimisation (PSO) was then applied to the regression equation obtained from the RSM. The optimisation yields processing parameters that minimise warpage. Mould temperature, melt temperature, packing pressure, packing time and cooling time were selected as the variable parameters, based on the factors reported by previous researchers as most significantly affecting warpage. The results show that warpage was improved by 28.16% with RSM and 28.17% with PSO; the additional improvement of PSO over RSM is only 0.01%. Thus, the RSM optimisation already gives an efficient combination of parameters and a near-optimal warpage value for the side arm part. The parameter most significantly affecting warpage is packing pressure.
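The RSM-then-PSO workflow above can be sketched end to end on a stand-in problem: a quadratic response surface (invented coefficients, playing the role of the fitted warpage regression over two normalised parameters such as packing pressure and melt temperature) searched by a bare-bones particle swarm.

```python
# PSO minimising a hypothetical RSM warpage surrogate on the unit square.
import random

random.seed(3)

def warpage_rsm(x, y):
    """Invented quadratic response surface; true minimum 0.5 at (0.7, 0.3)."""
    return (0.5 + 2.0 * (x - 0.7) ** 2 + 1.5 * (y - 0.3) ** 2
            + 0.5 * (x - 0.7) * (y - 0.3))

def pso(n=20, iters=60, w=0.6, c1=1.5, c2=1.5):
    pts = [[random.random(), random.random()] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pts]                       # personal bests
    gbest = min(pbest, key=lambda p: warpage_rsm(*p))[:]  # global best
    for _ in range(iters):
        for i, p in enumerate(pts):
            for d in range(2):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - p[d])
                             + c2 * random.random() * (gbest[d] - p[d]))
                p[d] = min(1.0, max(0.0, p[d] + vel[i][d]))  # clamp to box
            if warpage_rsm(*p) < warpage_rsm(*pbest[i]):
                pbest[i] = p[:]
                if warpage_rsm(*p) < warpage_rsm(*gbest):
                    gbest = p[:]
    return gbest

best = pso()
```

Because the fitted surface is a smooth quadratic, PSO can add little over the RSM stationary point, which matches the study's 0.01% margin between the two methods.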
Metaheuristic optimisation methods for approximate solving of singular boundary value problems
NASA Astrophysics Data System (ADS)
Sadollah, Ali; Yadav, Neha; Gao, Kaizhou; Su, Rong
2017-07-01
This paper presents a novel approximation technique based on metaheuristics and a weighted residual function (WRF) for tackling singular boundary value problems (BVPs) arising in engineering and science. With the aid of certain fundamental concepts of mathematics, Fourier series expansion, and metaheuristic optimisation algorithms, a singular BVP can be recast as an optimisation problem with the boundary conditions as constraints. The target is to minimise the WRF (i.e. the error function) constructed in the approximation of the BVP. The scheme uses the generational distance metric to evaluate the quality of the approximate solutions against exact solutions (i.e. as an error-evaluator metric). Four test problems, including two linear and two non-linear singular BVPs, are considered in this paper to check the efficiency and accuracy of the proposed algorithm. The optimisation task is performed using three different optimisers: particle swarm optimisation, the water cycle algorithm, and the harmony search algorithm. The optimisation results obtained show that the suggested technique can be successfully applied to the approximate solution of singular BVPs.
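A toy version of the scheme, under simplifying assumptions (a regular rather than singular test equation for brevity, and a plain annealed random search standing in for the PSO, water-cycle and harmony-search optimisers named above): a Fourier-sine trial solution satisfies the boundary conditions by construction, and the WRF is the mean squared residual at collocation points.

```python
import math, random

def trial(c, x):
    # Trial solution satisfying y(0) = y(1) = 0 by construction.
    return sum(ck * math.sin((k + 1) * math.pi * x) for k, ck in enumerate(c))

def d2_trial(c, x):
    return sum(-((k + 1) * math.pi) ** 2 * ck * math.sin((k + 1) * math.pi * x)
               for k, ck in enumerate(c))

def wrf(c, m=25):
    # Weighted residual of y'' = -pi^2 sin(pi x) at m collocation points;
    # the exact solution is y = sin(pi x), i.e. c = (1, 0, 0).
    xs = [(i + 0.5) / m for i in range(m)]
    return sum((d2_trial(c, x) + math.pi ** 2 * math.sin(math.pi * x)) ** 2
               for x in xs) / m

def random_search(f, dim=3, iters=8000, seed=0):
    # Simple annealed (1+1) random search; a stand-in for the metaheuristics.
    rng = random.Random(seed)
    best = [0.0] * dim
    fbest = f(best)
    step = 0.5
    for _ in range(iters):
        cand = best[:]
        j = rng.randrange(dim)
        cand[j] += rng.gauss(0.0, step)
        fc = f(cand)
        if fc < fbest:
            best, fbest = cand, fc
        step *= 0.9995
    return best, fbest

c, err = random_search(wrf)
```

The first coefficient should approach 1 and the others 0 as the residual is driven down.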
Optimisation study of a vehicle bumper subsystem with fuzzy parameters
NASA Astrophysics Data System (ADS)
Farkas, L.; Moens, D.; Donders, S.; Vandepitte, D.
2012-10-01
This paper deals with the design and optimisation for crashworthiness of a vehicle bumper subsystem, which is a key scenario for vehicle component design. The automotive manufacturers and suppliers have to find optimal design solutions for such subsystems that comply with the conflicting requirements of the regulatory bodies regarding functional performance (safety and repairability) and regarding the environmental impact (mass). For the bumper design challenge, an integrated methodology for multi-attribute design engineering of mechanical structures is set up. The integrated process captures the various tasks that are usually performed manually, this way facilitating the automated design iterations for optimisation. Subsequently, an optimisation process is applied that takes the effect of parametric uncertainties into account, such that the system level of failure possibility is acceptable. This optimisation process is referred to as possibility-based design optimisation and integrates the fuzzy FE analysis applied for the uncertainty treatment in crash simulations. This process is the counterpart of the reliability-based design optimisation used in a probabilistic context with statistically defined parameters (variabilities).
Rupesh, Shanmughom; Muraleedharan, Chandrasekharan; Arun, Palatel
2014-01-01
This work investigates the potential of coconut shell for air-steam gasification using a thermodynamic equilibrium model. A thermodynamic equilibrium model considering tar and realistic char conversion was developed in MATLAB to predict the product gas composition. After comparison with experimental results, the prediction capability of the model was enhanced by multiplying the equilibrium constants by suitable coefficients. The modified model is used to study the effect of key process parameters such as temperature, steam to biomass ratio, and equivalence ratio on product gas yield, composition, and heating value of the syngas, along with gasification efficiency. For a steam to biomass ratio of unity, the maximum mole fraction of hydrogen in the product gas is found to be 36.14% with a lower heating value of 7.49 MJ/Nm3 at a gasification temperature of 1500 K and an equivalence ratio of 0.15. PMID:27433487
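The core of such an equilibrium model is solving mass-action relations for the extent of each reaction. A minimal single-reaction sketch using the water-gas shift, with an assumed Moe-type correlation for K(T) (the coefficients are illustrative, not the paper's corrected constants):

```python
import math

def k_wgs(T):
    # Water-gas shift equilibrium constant, Moe-type correlation (assumed form).
    return math.exp(4577.8 / T - 4.33)

def wgs_equilibrium(n_co, n_h2o, n_co2, n_h2, T):
    """Extent of reaction x for CO + H2O <-> CO2 + H2 at temperature T,
    found by bisection on the mass-action relation K = (CO2*H2)/(CO*H2O)."""
    K = k_wgs(T)
    def g(x):
        # g is monotonically increasing in x on the feasible interval.
        return (n_co2 + x) * (n_h2 + x) - K * (n_co - x) * (n_h2o - x)
    lo, hi = -min(n_co2, n_h2), min(n_co, n_h2o)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Equimolar CO and H2O feed at 1100 K: roughly half shifts to CO2 + H2.
x = wgs_equilibrium(1.0, 1.0, 0.0, 0.0, 1100.0)
```

A full gasification model solves several such relations simultaneously with elemental balances, and the coefficient corrections mentioned in the abstract scale the K values.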
NASA Technical Reports Server (NTRS)
Glass, Christopher E.
1990-01-01
The computer program EASI, an acronym for Equilibrium Air Shock Interference, was developed to calculate the inviscid flowfield, the maximum surface pressure, and the maximum heat flux produced by six shock wave interference patterns on a 2-D, cylindrical configuration. Thermodynamic properties of the inviscid flowfield are determined using either an 11-species, 7-reaction equilibrium chemically reacting air model or a calorically perfect air model. The inviscid flowfield is solved using the integral form of the conservation equations. Surface heating calculations at the impingement point for the equilibrium chemically reacting air model use variable transport properties and specific heat. However, for the calorically perfect air model, heating rate calculations use a constant Prandtl number. Sample calculations of the six shock wave interference patterns, a listing of the computer program, and flowcharts of the programming logic are included.
NASA Astrophysics Data System (ADS)
Faghihi, Mustafa; Scheffel, Jan; Spies, Guenther O.
1988-05-01
Stability of the thermodynamic equilibrium is put forward as a simple test of the validity of dynamic equations, and is applied to perpendicular gyroviscous magnetohydrodynamics (i.e., perpendicular magnetohydrodynamics with gyroviscosity added). This model turns out to be invalid because it predicts exponentially growing Alfven waves in a spatially homogeneous static equilibrium with scalar pressure.
Nagarajan, Ramanathan
2015-07-01
Micelles generated in water from most amphiphilic block copolymers are widely recognized to be non-equilibrium structures. Typically, the micelles are prepared by a kinetic process, first allowing molecular scale dissolution of the block copolymer in a common solvent that likes both the blocks and then gradually replacing the common solvent by water to promote the hydrophobic blocks to aggregate and create the micelles. The non-equilibrium nature of the micelle originates from the fact that dynamic exchange between the block copolymer molecules in the micelle and the singly dispersed block copolymer molecules in water is suppressed, because of the glassy nature of the core forming polymer block and/or its very large hydrophobicity. Although most amphiphilic block copolymers generate such non-equilibrium micelles, no theoretical approach to a priori predict the micelle characteristics currently exists. In this work, we propose a predictive approach for non-equilibrium micelles with glassy cores by applying the equilibrium theory of micelles in two steps. In the first, we calculate the properties of micelles formed in the mixed solvent while true equilibrium prevails, until the micelle core becomes glassy. In the second step, we freeze the micelle aggregation number at this glassy state and calculate the corona dimension from the equilibrium theory of micelles. The condition when the micelle core becomes glassy is independently determined from a statistical thermodynamic treatment of diluent effect on polymer glass transition temperature. The predictions based on this "non-equilibrium" model compare reasonably well with experimental data for polystyrene-polyethylene oxide diblock copolymer, which is the most extensively studied system in the literature. In contrast, the application of the equilibrium model to describe such a system significantly overpredicts the micelle core and corona dimensions and the aggregation number. 
The non-equilibrium model suggests ways to obtain different micelle sizes for the same block copolymer through the choice of the common solvent and the mode of solvent substitution. Published by Elsevier Inc.
Numerical simulation of hypersonic inlet flows with equilibrium or finite rate chemistry
NASA Technical Reports Server (NTRS)
Yu, Sheng-Tao; Hsieh, Kwang-Chung; Shuen, Jian-Shun; Mcbride, Bonnie J.
1988-01-01
An efficient numerical program incorporating comprehensive high-temperature gas property models has been developed to simulate hypersonic inlet flows. The computer program employs an implicit lower-upper time-marching scheme to solve the two-dimensional Navier-Stokes equations with variable thermodynamic and transport properties. Both finite-rate and local-equilibrium approaches are adopted in the chemical reaction model for dissociation and ionization of the inlet air. In the finite-rate approach, eleven species equations coupled with the fluid dynamic equations are solved simultaneously. In the local-equilibrium approach, instead of solving species equations, an efficient chemical equilibrium package has been developed and incorporated into the flow code to obtain chemical compositions directly. Gas properties for the reaction product species are calculated by methods of statistical mechanics and fitted to a polynomial form for Cp. In the present study, since the chemical reaction time is comparable to the flow residence time, the local-equilibrium model underpredicts the temperature in the shock layer. Significant differences between the chemical compositions in the shock layer predicted by the finite-rate and local-equilibrium approaches have been observed.
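The "polynomial form for Cp" mentioned above is typically the NASA 7-coefficient form, Cp/R = a1 + a2·T + a3·T² + a4·T³ + a5·T⁴, with companion expressions for H/RT and S/R. A minimal sketch; the N2 coefficients below are commonly tabulated low-temperature (300-1000 K) values and should be treated as illustrative:

```python
def cp_over_R(T, a):
    # NASA 7-coefficient polynomial: the first five coefficients give Cp/R.
    return a[0] + a[1] * T + a[2] * T ** 2 + a[3] * T ** 3 + a[4] * T ** 4

def h_over_RT(T, a):
    # Companion enthalpy form: H/RT = a1 + a2 T/2 + ... + a5 T^4/5 + a6/T.
    return (a[0] + a[1] * T / 2 + a[2] * T ** 2 / 3
            + a[3] * T ** 3 / 4 + a[4] * T ** 4 / 5 + a[5] / T)

# Illustrative N2 coefficients for the 300-1000 K range (treat as assumptions).
A_N2 = [3.298677e0, 1.4082404e-3, -3.963222e-6,
        5.641515e-9, -2.444854e-12, -1.0208999e3, 3.950372e0]

cp300 = cp_over_R(300.0, A_N2)   # about 3.5, i.e. Cp ~ 29 J/(mol K) for N2
```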
Stability and Optimal Harvesting of Modified Leslie-Gower Predator-Prey Model
NASA Astrophysics Data System (ADS)
Toaha, S.; Azis, M. I.
2018-03-01
This paper studies a modified Leslie-Gower predator-prey population model. The model is stated as a system of first-order differential equations and consists of one predator and one prey. A Holling type II predation function is considered. The predator and prey populations are assumed to be beneficial, and the two populations are harvested with constant efforts. Existence and stability of the interior equilibrium point are analysed. A linearization method is used to obtain the linearized model, and the eigenvalues are used to justify the stability of the interior equilibrium point. From the analyses, we show that under a certain condition the interior equilibrium point exists and is locally asymptotically stable. For the model with constant harvesting efforts, cost, revenue, and profit functions are considered. The stable interior equilibrium point is then related to the maximum profit problem as well as to the net present value of revenues problem. We show that there exists a certain value of the efforts that maximizes the profit function and the net present value of revenues while the interior equilibrium point remains stable. This means that the populations can coexist for a long time and the benefit can be maximized even though the populations are harvested with constant efforts.
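The linearization step can be sketched numerically: for a generic modified Leslie-Gower system with Holling type II predation (all parameter values below are hypothetical, not the paper's), locate the interior equilibrium and test local asymptotic stability via the trace and determinant of the Jacobian.

```python
P = dict(r1=1.0, b1=0.1, a1=0.5, k1=1.0, r2=0.5, a2=1.0, k2=1.0)  # hypothetical

def f(x, y, p=P):
    # Prey: dx/dt = x (r1 - b1 x - a1 y / (x + k1))   (Holling type II)
    return x * (p['r1'] - p['b1'] * x - p['a1'] * y / (x + p['k1']))

def g(x, y, p=P):
    # Predator (modified Leslie-Gower): dy/dt = y (r2 - a2 y / (x + k2))
    return y * (p['r2'] - p['a2'] * y / (x + p['k2']))

def interior_equilibrium(p=P):
    # g = 0 gives y*(x) = r2 (x + k2) / a2; bisect the prey nullcline in x.
    ystar = lambda x: p['r2'] * (x + p['k2']) / p['a2']
    h = lambda x: p['r1'] - p['b1'] * x - p['a1'] * ystar(x) / (x + p['k1'])
    lo, hi = 1e-9, p['r1'] / p['b1']
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if h(mid) > 0:
            lo = mid
        else:
            hi = mid
    x = 0.5 * (lo + hi)
    return x, ystar(x)

def is_locally_stable(x, y, eps=1e-6):
    # Jacobian by central differences; a 2x2 system is locally asymptotically
    # stable iff trace < 0 and determinant > 0.
    j11 = (f(x + eps, y) - f(x - eps, y)) / (2 * eps)
    j12 = (f(x, y + eps) - f(x, y - eps)) / (2 * eps)
    j21 = (g(x + eps, y) - g(x - eps, y)) / (2 * eps)
    j22 = (g(x, y + eps) - g(x, y - eps)) / (2 * eps)
    return (j11 + j22) < 0 and (j11 * j22 - j12 * j21) > 0

xs, ys = interior_equilibrium()
```

For these parameters the interior equilibrium is (7.5, 4.25) and the trace/determinant test confirms local asymptotic stability.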
Schmidt, Ronny; Cook, Elizabeth A; Kastelic, Damjana; Taussig, Michael J; Stoevesandt, Oda
2013-08-02
We have previously described a protein arraying process based on cell free expression from DNA template arrays (DNA Array to Protein Array, DAPA). Here, we have investigated the influence of different array support coatings (Ni-NTA, Epoxy, 3D-Epoxy and Polyethylene glycol methacrylate (PEGMA)). Their optimal combination yields an increased amount of detected protein and an optimised spot morphology on the resulting protein array compared to the previously published protocol. The specificity of protein capture was improved using a tag-specific capture antibody on a protein repellent surface coating. The conditions for protein expression were optimised to yield the maximum amount of protein or the best detection results using specific monoclonal antibodies or a scaffold binder against the expressed targets. The optimised DAPA system was able to increase by threefold the expression of a representative model protein while conserving recognition by a specific antibody. The amount of expressed protein in DAPA was comparable to those of classically spotted protein arrays. Reaction conditions can be tailored to suit the application of interest. DAPA represents a cost effective, easy and convenient way of producing protein arrays on demand. The reported work is expected to facilitate the application of DAPA for personalized medicine and screening purposes. Copyright © 2013 Elsevier B.V. All rights reserved.
Model of head-neck joint fast movements in the frontal plane.
Pedrocchi, A; Ferrigno, G
2004-06-01
The objective of this work is to develop a model representing the physiological systems driving fast head movements in the frontal plane. All the contributions occurring mechanically in the head movement are considered: damping, stiffness, the physiological limit of the range of motion, the gravitational field, and muscular torques due to voluntary activation as well as to the stretch reflex depending on fusal afferences. Model parameters are partly derived from the literature, where possible, whereas undetermined parameters are identified by optimising the model output to fit real kinematic data acquired by a motion capture system in specific experimental set-ups. The optimisation for parameter identification is performed by genetic algorithms. Results show that the model represents fast head movements very well over the whole range of inclination in the frontal plane. Such a model could be proposed as a tool for transforming kinematic data on head movements into 'neural equivalent data', especially for assessing head control disease and properly planning the rehabilitation process. In addition, genetic algorithms seem well suited to the parameter identification problem, allowing for the use of a very simple experimental set-up and granting model robustness.
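Genetic-algorithm parameter identification of this kind can be sketched on a toy second-order (stiffness-damping) model: synthetic "measured" kinematics are generated with known parameters, and the GA recovers them by minimising the fit error. All numbers are illustrative, not the paper's head-neck model.

```python
import random

def simulate(k, c, theta0=1.0, dt=0.01, steps=500):
    # Linear second-order rotational model: theta'' = -k theta - c theta'
    # integrated with semi-implicit Euler.
    theta, omega, out = theta0, 0.0, []
    for _ in range(steps):
        omega += dt * (-k * theta - c * omega)
        theta += dt * omega
        out.append(theta)
    return out

TARGET = simulate(4.0, 0.8)        # synthetic "measured" kinematics

def fitness(params):
    sim = simulate(*params)
    return sum((a - b) ** 2 for a, b in zip(sim, TARGET)) / len(TARGET)

def genetic_algorithm(pop_size=30, gens=80, seed=3):
    rng = random.Random(seed)
    pop = [[rng.uniform(0.0, 10.0), rng.uniform(0.0, 5.0)]
           for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=fitness)
        pop = [row[:] for row in scored[:2]]        # elitism
        while len(pop) < pop_size:
            a, b = rng.sample(scored[:10], 2)       # selection from the best
            child = [(x + y) / 2 for x, y in zip(a, b)]              # crossover
            child = [max(0.0, v + rng.gauss(0.0, 0.2)) for v in child]  # mutation
            pop.append(child)
    return min(pop, key=fitness)

k_hat, c_hat = genetic_algorithm()   # should approach the true (4.0, 0.8)
```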
Finite-deformation phase-field chemomechanics for multiphase, multicomponent solids
NASA Astrophysics Data System (ADS)
Svendsen, Bob; Shanthraj, Pratheek; Raabe, Dierk
2018-03-01
The purpose of this work is the development of a framework for the formulation of geometrically non-linear inelastic chemomechanical models for a mixture of multiple chemical components diffusing among multiple transforming solid phases. The focus here is on general model formulation. No specific model or application is pursued in this work. To this end, basic balance and constitutive relations from non-equilibrium thermodynamics and continuum mixture theory are combined with a phase-field-based description of multicomponent solid phases and their interfaces. Solid phase modeling is based in particular on a chemomechanical free energy and stress relaxation via the evolution of phase-specific concentration fields, order-parameter fields (e.g., related to chemical ordering, structural ordering, or defects), and local internal variables. At the mixture level, differences or contrasts in phase composition and phase local deformation in phase interface regions are treated as mixture internal variables. In this context, various phase interface models are considered. In the equilibrium limit, phase contrasts in composition and local deformation in the phase interface region are determined via bulk energy minimization. On the chemical side, the equilibrium limit of the current model formulation reduces to a multicomponent, multiphase, generalization of existing two-phase binary alloy interface equilibrium conditions (e.g., KKS). On the mechanical side, the equilibrium limit of one interface model considered represents a multiphase generalization of Reuss-Sachs conditions from mechanical homogenization theory. Analogously, other interface models considered represent generalizations of interface equilibrium conditions consistent with laminate and sharp-interface theory. In the last part of the work, selected existing models are formulated within the current framework as special cases and discussed in detail.
Emami, Fereshteh; Maeder, Marcel; Abdollahi, Hamid
2015-05-07
Thermodynamic studies of equilibrium chemical reactions linked with kinetic procedures are mostly impossible by traditional approaches. In this work, the new concept of generalized kinetic study of thermodynamic parameters is introduced for dynamic data. The examples of equilibria intertwined with kinetic chemical mechanisms include molecular charge transfer complex formation reactions, pH-dependent degradation of chemical compounds and tautomerization kinetics in micellar solutions. Model-based global analysis with the possibility of calculating and embedding the equilibrium and kinetic parameters into the fitting algorithm has allowed the complete analysis of the complex reaction mechanisms. After the fitting process, the optimal equilibrium and kinetic parameters together with an estimate of their standard deviations have been obtained. This work opens up a promising new avenue for obtaining equilibrium constants through the kinetic data analysis for the kinetic reactions that involve equilibrium processes.
Ozone chemical equilibrium in the extended mesopause under the nighttime conditions
NASA Astrophysics Data System (ADS)
Belikovich, M. V.; Kulikov, M. Yu.; Grygalashvyly, M.; Sonnemann, G. R.; Ermakova, T. S.; Nechaev, A. A.; Feigin, A. M.
2018-01-01
For the retrieval of atomic oxygen and atomic hydrogen via ozone observations in the extended mesopause region (∼70-100 km) under nighttime conditions, an assumption of photochemical equilibrium of ozone is often used. In this work, this assumption of nighttime chemical equilibrium of ozone near the mesopause is tested. We examine annual calculations of a 3D chemistry-transport model (CTM) and determine the ratio between the correct (modeled) distributions of the O3 density and its equilibrium values as a function of altitude, latitude, and season. The results show that the retrieval of atomic oxygen and atomic hydrogen distributions using the assumption of ozone chemical equilibrium may lead to large errors below ∼81-87 km. We give a simple and clear semi-empirical criterion for the practical determination of the lower boundary of the region of ozone chemical equilibrium near the mesopause.
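The nighttime equilibrium assumption itself is a simple production-loss balance: O3 is produced by the three-body reaction O + O2 + M and destroyed mainly by H + O3 (and O + O3). A sketch with illustrative, order-of-magnitude rate constants and densities (these numbers are assumptions, not the paper's):

```python
def o3_equilibrium(n_o, n_h, n_o2, n_m,
                   k_3body=6.0e-34, k_h_o3=3.0e-11, k_o_o3=1.0e-15):
    """Nighttime photochemical-equilibrium ozone density [cm^-3]:
    production k_3body*[O][O2][M] balances loss (k_h_o3*[H] + k_o_o3*[O])*[O3].
    Rate constants are illustrative values in cm^6/s and cm^3/s."""
    production = k_3body * n_o * n_o2 * n_m
    loss_per_o3 = k_h_o3 * n_h + k_o_o3 * n_o
    return production / loss_per_o3

# Representative (illustrative) densities near 85 km.
o3_eq = o3_equilibrium(n_o=1.0e11, n_h=1.0e8, n_o2=4.0e13, n_m=2.0e14)
```

Comparing this equilibrium value with the transported O3 density is exactly the ratio test the abstract describes.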
NASA Technical Reports Server (NTRS)
Hand, David W.; Crittenden, John C.; Ali, Anisa N.; Bulloch, John L.; Hokanson, David R.; Parrem, David L.
1996-01-01
This thesis includes the development and verification of an adsorption model for analysis and optimization of the adsorption processes within the International Space Station multifiltration beds. The fixed bed adsorption model includes multicomponent equilibrium and both external and intraparticle mass transfer resistances. Single solute isotherm parameters were used in the multicomponent equilibrium description to predict the competitive adsorption interactions occurring during the adsorption process. The multicomponent equilibrium description used the Fictive Component Analysis to describe adsorption in unknown background matrices. Multicomponent isotherms were used to validate the multicomponent equilibrium description. Column studies were used to develop and validate external and intraparticle mass transfer parameter correlations for compounds of interest. The fixed bed model was verified using a shower and handwash ersatz water which served as a surrogate to the actual shower and handwash wastewater.
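A standard way to build a multicomponent equilibrium description from single-solute isotherm parameters is the competitive Langmuir form, q_i = Q_i K_i c_i / (1 + Σ_j K_j c_j). A sketch (the Fictive Component Analysis in the actual model is more elaborate, and the parameters below are hypothetical):

```python
def competitive_langmuir(c, Q, K):
    """Multicomponent Langmuir loadings q_i [mg/g] from aqueous
    concentrations c_i [mg/L], capacities Q_i and affinities K_i."""
    denom = 1.0 + sum(Kj * cj for Kj, cj in zip(K, c))
    return [Qi * Ki * ci / denom for Qi, Ki, ci in zip(Q, K, c)]

# Two hypothetical solutes: a strong adsorber competing with a weak one.
q = competitive_langmuir(c=[1.0, 1.0], Q=[100.0, 100.0], K=[2.0, 0.1])
```

The competition effect is visible directly: the strong adsorber's loading is lower in the mixture than it would be as a single solute at the same concentration.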
Liu, Hui; Chen, Fu; Sun, Huiyong; Li, Dan; Hou, Tingjun
2017-04-11
By means of estimators based on non-equilibrium work, equilibrium free energy differences or potentials of mean force (PMFs) of a system of interest can be computed from biased molecular dynamics (MD) simulations. The approach, however, is often plagued by slow conformational sampling and poor convergence, especially when the solvent effects are taken into account. Here, as a possible way to alleviate the problem, several widely used implicit-solvent models, which are derived from the analytic generalized Born (GB) equation and implemented in the AMBER suite of programs, were employed in free energy calculations based on non-equilibrium work and evaluated for their abilities to emulate explicit water. As a test case, pulling MD simulations were carried out on an alanine polypeptide with different solvent models and protocols, followed by comparisons of the reconstructed PMF profiles along the unfolding coordinate. The results show that when employing the non-equilibrium work method, sampling with an implicit-solvent model is several times faster and, more importantly, converges more rapidly than that with explicit water due to reduction of dissipation. Among the assessed GB models, the Neck variants outperform the OBC and HCT variants in terms of accuracy, whereas their computational costs are comparable. In addition, for the best-performing models, the impact of the solvent-accessible surface area (SASA) dependent nonpolar solvation term was also examined. The present study highlights the advantages of implicit-solvent models for non-equilibrium sampling.
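The non-equilibrium work estimator at the heart of this approach is the Jarzynski equality, ΔF = -kT ln⟨e^{-W/kT}⟩, averaged over repeated pulling simulations. A self-contained sketch with synthetic Gaussian work samples, for which the exact answer is ΔF = ⟨W⟩ - σ²/(2kT):

```python
import math, random

def jarzynski_free_energy(works, kT=1.0):
    # Delta F = -kT ln < exp(-W/kT) >, with a log-sum-exp shift for stability.
    m = max(-w / kT for w in works)
    s = sum(math.exp(-w / kT - m) for w in works)
    return -kT * (m + math.log(s / len(works)))

rng = random.Random(42)
# Synthetic Gaussian work distribution: mean 5, std 1 (units of kT).
# For Gaussian work the exact result is dF = <W> - sigma^2 / (2 kT) = 4.5.
works = [rng.gauss(5.0, 1.0) for _ in range(200000)]
dF = jarzynski_free_energy(works)
```

The gap between ⟨W⟩ and ΔF is the dissipated work; the abstract's point is that implicit solvent reduces this dissipation and hence the number of samples needed for convergence.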
2008-03-01
Report fragment: the recoverable contents cover the theory of equilibrium and non-equilibrium molecular dynamics simulations, and carbon nanotube simulations (approach and results from equilibrium and non-equilibrium molecular dynamics); ordered systems such as carbon nanotubes have been investigated from the perspective of molecular dynamics simulations.
Equilibrium and nonequilibrium models on Solomon networks
NASA Astrophysics Data System (ADS)
Lima, F. W. S.
2016-05-01
We investigate the critical properties of equilibrium and nonequilibrium systems on Solomon networks. The equilibrium and nonequilibrium systems studied here are the Ising and majority-vote models, respectively. These systems are simulated by applying the Monte Carlo method. We calculate the critical points, as well as the critical exponent ratios γ/ν, β/ν and 1/ν. We find that both systems present identical exponents on Solomon networks and belong to a different universality class from that of the regular two-dimensional ferromagnetic model. Our results are in agreement with the Grinstein criterion for models with up-down symmetry on regular lattices.
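The Monte Carlo procedure for the equilibrium (Ising) half of such a study can be sketched on a plain square lattice; the Solomon-network topology only changes the neighbour lists, and the 2D lattice is used here for brevity. Metropolis spin flips, then the mean magnetisation per spin below and above the 2D critical temperature (Tc ≈ 2.27 in units of J/kB):

```python
import math, random

def metropolis_ising(L=16, T=1.5, sweeps=800, measure=400, seed=7):
    rng = random.Random(seed)
    spins = [[1] * L for _ in range(L)]          # ordered start
    def site_energy(i, j):
        nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        return -spins[i][j] * nb
    mags = []
    for sweep in range(sweeps + measure):
        for _ in range(L * L):                   # one Monte Carlo sweep
            i, j = rng.randrange(L), rng.randrange(L)
            dE = -2 * site_energy(i, j)          # energy change of a flip
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spins[i][j] *= -1
        if sweep >= sweeps:                      # measure after equilibration
            m = sum(sum(row) for row in spins) / (L * L)
            mags.append(abs(m))
    return sum(mags) / len(mags)

m_ordered = metropolis_ising(T=1.5)      # below Tc: |m| close to 1
m_disordered = metropolis_ising(T=5.0)   # above Tc: |m| close to 0
```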
He I lines in B stars - Comparison of non-local thermodynamic equilibrium models with observations
NASA Technical Reports Server (NTRS)
Heasley, J. N.; Timothy, J. G.; Wolff, S. C.
1982-01-01
Profiles of He I λλ4026, 4387, 4471, 4713, 5876, and 6678 have been obtained in 17 stars of spectral type B0-B5. Parameters of the nonlocal thermodynamic equilibrium models appropriate to each star are determined from the Strömgren index and fits to H-alpha line profiles. These parameters yield generally good fits to the observed He I line profiles, with the best fits being found for the blue He I lines where departures from local thermodynamic equilibrium are relatively small. For the two red lines it is found that, in the early B stars and in stars with log g less than 3.5, both lines are systematically stronger than predicted by the nonlocal thermodynamic equilibrium models.
Improved Simulation of the Pre-equilibrium Triton Emission in Nuclear Reactions Induced by Nucleons
NASA Astrophysics Data System (ADS)
Konobeyev, A. Yu.; Fischer, U.; Pereslavtsev, P. E.; Blann, M.
2014-04-01
A new approach is proposed for the calculation of non-equilibrium triton energy distributions in nuclear reactions induced by nucleons of intermediate energies. It combines models describing the nucleon pick-up, the coalescence and the triton knock-out processes. Emission and absorption rates for excited particles are represented by the pre-equilibrium hybrid model. The model of Sato, Iwamoto, Harada is used to describe the nucleon pick-up and the coalescence of nucleons from exciton configurations starting from (2p,1h) states. The contribution of the direct nucleon pick-up is described phenomenologically. Multiple pre-equilibrium emission of tritons is accounted for. The calculated triton energy distributions are compared with available experimental data.
Exploring the Use of Multiple Analogical Models when Teaching and Learning Chemical Equilibrium
ERIC Educational Resources Information Center
Harrison, Allan G.; De Jong, Onno
2005-01-01
This study describes the multiple analogical models used to introduce and teach Grade 12 chemical equilibrium. We examine the teacher's reasons for using models, explain each model's development during the lessons, and analyze the understandings students derived from the models. A case study approach was used and the data were drawn from the…
Equilibrium and nonequilibrium attractors for a discrete, selection-migration model
James F. Selgrade; James H. Roberds
2003-01-01
This study presents a discrete-time model for the effects of selection and immigration on the demographic and genetic compositions of a population. Under biologically reasonable conditions, it is shown that the model always has an equilibrium. Although equilibria for similar models without migration must have real eigenvalues, for this selection-migration model we...
Equilibrium electrodeformation of a spheroidal vesicle in an ac electric field
NASA Astrophysics Data System (ADS)
Nganguia, H.; Young, Y.-N.
2013-11-01
In this work, we develop a theoretical model to explain the equilibrium spheroidal deformation of a giant unilamellar vesicle (GUV) under an alternating (ac) electric field. Suspended in a leaky dielectric fluid, the vesicle membrane is modeled as a thin capacitive spheroidal shell. The equilibrium vesicle shape results from the balance between mechanical forces from the viscous fluid, the restoring elastic membrane forces, and the externally imposed electric forces. Our spheroidal model predicts a deformation-dependent transmembrane potential, and is able to capture large deformation of a vesicle under an electric field. A detailed comparison against both experiments and small-deformation (quasispherical) theory showed that the spheroidal model gives better agreement with experiments in terms of the dependence on fluid conductivity ratio, permittivity ratio, vesicle size, electric field strength, and frequency. The spheroidal model also allows for an asymptotic analysis on the crossover frequency where the equilibrium vesicle shape crosses over between prolate and oblate shapes. Comparisons show that the spheroidal model gives better agreement with experimental observations.
Local Stability of AIDS Epidemic Model Through Treatment and Vertical Transmission with Time Delay
NASA Astrophysics Data System (ADS)
Novi W, Cascarilla; Lestari, Dwi
2016-02-01
This study aims to explain the stability of a model of the spread of AIDS through treatment and vertical transmission. A human with HIV needs time before developing AIDS; because progression to AIDS is delayed by this latency, the resulting model is a model with time delay. The model takes the form of a system of nonlinear differential equations with time delay, SIPTA (susceptible-infected-pre AIDS-treatment-AIDS). Analysis of the SIPTA model yields a disease-free equilibrium point and an endemic equilibrium point. The disease-free equilibrium point, with and without time delay, is locally asymptotically stable if the basic reproduction number is less than one. The endemic equilibrium point is locally asymptotically stable if the time delay is less than a critical value, unstable if the time delay is greater than the critical value, and a bifurcation occurs if the time delay equals the critical value.
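The delay-threshold behaviour described here (stable below a critical delay, unstable above it) can be illustrated on the classic scalar test equation x'(t) = -a x(t-τ), whose equilibrium x = 0 is asymptotically stable iff aτ < π/2. Euler integration with a history buffer; this illustrates the phenomenon, not the SIPTA model itself:

```python
def simulate_delay(tau, a=1.0, dt=0.001, t_end=60.0):
    # x'(t) = -a x(t - tau), with history x(t) = 1 for t <= 0.
    # Critical delay for a = 1 is tau = pi/2 ~ 1.571.
    n = int(t_end / dt)
    lag = int(tau / dt)
    x = [1.0] * (n + 1)
    for i in range(n):
        delayed = x[i - lag] if i >= lag else 1.0
        x[i + 1] = x[i] + dt * (-a * delayed)
    window = int(10.0 / dt)          # amplitude over the final 10 time units
    return max(abs(v) for v in x[-window:])

amp_stable = simulate_delay(1.0)     # tau < pi/2: oscillations decay
amp_unstable = simulate_delay(2.0)   # tau > pi/2: oscillations grow
```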
Reactive solute transport in streams: 1. Development of an equilibrium- based model
Runkel, Robert L.; Bencala, Kenneth E.; Broshears, Robert E.; Chapra, Steven C.
1996-01-01
An equilibrium-based solute transport model is developed for the simulation of trace metal fate and transport in streams. The model is formed by coupling a solute transport model with a chemical equilibrium submodel based on MINTEQ. The solute transport model considers the physical processes of advection, dispersion, lateral inflow, and transient storage, while the equilibrium submodel considers the speciation and complexation of aqueous species, precipitation/dissolution and sorption. Within the model, reactions in the water column may result in the formation of solid phases (precipitates and sorbed species) that are subject to downstream transport and settling processes. Solid phases on the streambed may also interact with the water column through dissolution and sorption/desorption reactions. Consideration of both mobile (water-borne) and immobile (streambed) solid phases requires a unique set of governing differential equations and solution techniques that are developed herein. The partial differential equations describing physical transport and the algebraic equations describing chemical equilibria are coupled using the sequential iteration approach.
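The coupling strategy (alternate a physical-transport step with a chemical-equilibrium step) can be sketched for the simplest case: advection of a solute with linear equilibrium sorption s = Kd·c, where theory says the total mass moves at the retarded velocity v/R with R = 1 + Kd. The grid, parameters, and one-reaction chemistry below are illustrative stand-ins for the MINTEQ-based submodel:

```python
def advect_upwind(c, courant):
    # Explicit first-order upwind step for v > 0 (courant = v dt / dx).
    return [c[i] - courant * (c[i] - c[i - 1]) if i > 0 else c[i] * (1 - courant)
            for i in range(len(c))]

def equilibrate(total, kd):
    # Linear sorption equilibrium: aqueous c = T/(1+Kd), sorbed s = Kd c.
    c = [t / (1.0 + kd) for t in total]
    return c, [kd * ci for ci in c]

def transport_with_sorption(n=200, kd=1.0, v=1.0, dx=1.0, dt=0.5, t_end=100.0):
    total = [0.0] * n
    total[20] = 1.0                              # unit pulse of total mass
    c, s = equilibrate(total, kd)
    for _ in range(int(t_end / dt)):
        c = advect_upwind(c, v * dt / dx)        # transport step (aqueous only)
        total = [ci + si for ci, si in zip(c, s)]
        c, s = equilibrate(total, kd)            # equilibrium step
    return sum(i * t for i, t in enumerate(total)) / sum(total)

com = transport_with_sorption()
# With R = 1 + Kd = 2, the centre of mass should end near 20 + v*t/R = 70.
```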
GENESIS: new self-consistent models of exoplanetary spectra
NASA Astrophysics Data System (ADS)
Gandhi, Siddharth; Madhusudhan, Nikku
2017-12-01
We are entering the era of high-precision and high-resolution spectroscopy of exoplanets. Such observations herald the need for robust self-consistent spectral models of exoplanetary atmospheres to investigate intricate atmospheric processes and to make observable predictions. Spectral models of plane-parallel exoplanetary atmospheres exist, mostly adapted from other astrophysical applications, with different levels of sophistication and accuracy. There is a growing need for a new generation of models custom-built for exoplanets and incorporating state-of-the-art numerical methods and opacities. The present work is a step in this direction. Here we introduce GENESIS, a plane-parallel, self-consistent, line-by-line exoplanetary atmospheric modelling code that includes (a) formal solution of radiative transfer using the Feautrier method, (b) radiative-convective equilibrium with temperature correction based on the Rybicki linearization scheme, (c) latest absorption cross-sections, and (d) internal flux and external irradiation, under the assumptions of hydrostatic equilibrium, local thermodynamic equilibrium and thermochemical equilibrium. We demonstrate the code here with cloud-free models of giant exoplanetary atmospheres over a range of equilibrium temperatures, metallicities, C/O ratios and spanning non-irradiated and irradiated planets, with and without thermal inversions. We provide the community with theoretical emergent spectra and pressure-temperature profiles over this range, along with those for several known hot Jupiters. The code can generate self-consistent spectra at high resolution and has the potential to be integrated into general circulation and non-equilibrium chemistry models as it is optimized for efficiency and convergence. GENESIS paves the way for high-fidelity remote sensing of exoplanetary atmospheres at high resolution with current and upcoming observations.
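The radiative-equilibrium starting point for such models is the classical gray (Eddington) temperature profile, T⁴(τ) = (3/4) T_eff⁴ (τ + 2/3). GENESIS iterates far beyond this with real opacities and a temperature-correction scheme, but the gray profile is a common sanity check and initial guess:

```python
def gray_temperature(tau, t_eff):
    # Eddington gray-atmosphere radiative equilibrium:
    # T^4 = (3/4) Teff^4 (tau + 2/3), tau the mean optical depth.
    return t_eff * (0.75 * (tau + 2.0 / 3.0)) ** 0.25

# At tau = 2/3 the local temperature equals the effective temperature.
t_phot = gray_temperature(2.0 / 3.0, 1500.0)
```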
A Tightly Coupled Non-Equilibrium Magneto-Hydrodynamic Model for Inductively Coupled RF Plasmas
2016-02-29
Development of a tightly coupled magneto-hydrodynamic model for Inductively Coupled Radio-Frequency (RF) Plasmas. Non-Local Thermodynamic Equilibrium (NLTE) effects are described based on a hybrid State-to-State ... thermodynamic variable. This choice allows one to hide the non-linearity of the gas (total) thermal conductivity κ and can partially alleviate numerical ...
Boundary element based multiresolution shape optimisation in electrostatics
NASA Astrophysics Data System (ADS)
Bandara, Kosala; Cirak, Fehmi; Of, Günther; Steinbach, Olaf; Zapletal, Jan
2015-09-01
We consider the shape optimisation of high-voltage devices subject to electrostatic field equations by combining fast boundary elements with multiresolution subdivision surfaces. The geometry of the domain is described with subdivision surfaces and different resolutions of the same geometry are used for optimisation and analysis. The primal and adjoint problems are discretised with the boundary element method using a sufficiently fine control mesh. For shape optimisation the geometry is updated starting from the coarsest control mesh with increasingly finer control meshes. The multiresolution approach effectively prevents the appearance of non-physical geometry oscillations in the optimised shapes. Moreover, there is no need for mesh regeneration or smoothing during the optimisation due to the absence of a volume mesh. We present several numerical experiments and one industrial application to demonstrate the robustness and versatility of the developed approach.
Zhang, Fan; Yeh, Gour-Tsyh; Parker, Jack C; Brooks, Scott C; Pace, Molly N; Kim, Young-Jin; Jardine, Philip M; Watson, David B
2007-06-16
This paper presents a reaction-based water quality transport model in subsurface flow systems. Transport of chemical species with a variety of chemical and physical processes is mathematically described by M partial differential equations (PDEs). Decomposition via Gauss-Jordan column reduction of the reaction network transforms M species reactive transport equations into two sets of equations: a set of thermodynamic equilibrium equations representing N(E) equilibrium reactions and a set of reactive transport equations of M-N(E) kinetic-variables involving no equilibrium reactions (a kinetic-variable is a linear combination of species). The elimination of equilibrium reactions from reactive transport equations allows robust and efficient numerical integration. The model solves the PDEs of kinetic-variables rather than individual chemical species, which reduces the number of reactive transport equations and simplifies the reaction terms in the equations. A variety of numerical methods are investigated for solving the coupled transport and reaction equations. Simulation comparisons with exact solutions were performed to verify numerical accuracy and assess the effectiveness of various numerical strategies to deal with different application circumstances. Two validation examples involving simulations of uranium transport in soil columns are presented to evaluate the ability of the model to simulate reactive transport with complex reaction networks involving both kinetic and equilibrium reactions.
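The decomposition step can be illustrated on a toy network (this example and its reactions are illustrative, not from the paper): species A, B, C with one equilibrium reaction A ⇌ B and one kinetic reaction B → C. Gauss-Jordan reduction of the equilibrium column of the stoichiometric matrix produces M − N(E) row combinations, the kinetic variables, whose transport equations contain no equilibrium rate.

```python
from fractions import Fraction

# Stoichiometric matrix nu (species x reactions):
#   column 0 (equilibrium): A <-> B, column 1 (kinetic): B -> C
nu = [[Fraction(-1), Fraction(0)],   # d[A]/dt = -r_eq
      [Fraction(1), Fraction(-1)],   # d[B]/dt =  r_eq - r_k
      [Fraction(0), Fraction(1)]]    # d[C]/dt =  r_k

# T starts as the identity; each row records the species combination
# that the corresponding transformed equation governs.
T = [[Fraction(int(i == j)) for j in range(3)] for i in range(3)]

pivot = 0  # pivot row for the single equilibrium column (column 0)
for row in range(3):
    if row != pivot and nu[row][0] != 0:
        factor = nu[row][0] / nu[pivot][0]
        for col in range(2):
            nu[row][col] -= factor * nu[pivot][col]
        for col in range(3):
            T[row][col] -= factor * T[pivot][col]

# Rows with a zero in the equilibrium column are the kinetic variables;
# here A + B (governed only by r_k) and C.
kinetic_vars = [T[row] for row in range(3) if nu[row][0] == 0]
```

The equilibrium reaction then enters only through the algebraic relation between A and B, while the two kinetic-variable ODEs can be integrated without it, which is the source of the robustness the abstract describes.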
Almén, Anja; Båth, Magnus
2016-06-01
The overall aim of the present work was to develop a conceptual framework for managing radiation dose in diagnostic radiology with the intention of supporting optimisation. An optimisation process was first derived. The framework for managing radiation dose, based on the derived optimisation process, was then outlined. The outset of the optimisation process is four stages: providing equipment, establishing methodology, performing examinations and ensuring quality. The optimisation process comprises a series of activities and actions at these stages. The current system of diagnostic reference levels is an activity in the last stage, ensuring quality. The system thus becomes a reactive activity, engaging only to a certain extent the core activity of the radiology department, performing examinations. Three reference dose levels (possible, expected and established) were assigned to the three stages in the optimisation process, excluding ensuring quality. A reasonably achievable dose range is also derived, indicating an acceptable deviation from the established dose level. A reasonable radiation dose for a single patient is within this range. The suggested framework for managing radiation dose should be regarded as one part of the optimisation process. The optimisation process constitutes a variety of complementary activities, of which managing radiation dose is only one. This emphasises the need for a holistic approach integrating the optimisation process into different clinical activities. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Li, Guanchen; von Spakovsky, Michael R
2016-09-01
This paper presents a nonequilibrium thermodynamic model for the relaxation of a local, isolated system in nonequilibrium using the principle of steepest entropy ascent (SEA), which can be expressed as a variational principle in thermodynamic state space. The model is able to arrive at the Onsager relations for such a system. Since no assumption of local equilibrium is made, the conjugate fluxes and forces are intrinsic to the subspaces of the system's state space and are defined using the concepts of hypoequilibrium state and nonequilibrium intensive properties, which describe the nonmutual equilibrium status between subspaces of the thermodynamic state space. The Onsager relations are shown to be a thermodynamic kinematic feature of the system independent of the specific details of the micromechanical dynamics. Two kinds of relaxation processes are studied with different constraints (i.e., conservation laws) corresponding to heat and mass diffusion. Linear behavior in the near-equilibrium region as well as nonlinear behavior in the far-from-equilibrium region are discussed. Thermodynamic relations in the equilibrium and near-equilibrium realm, including the Gibbs relation, the Clausius inequality, and the Onsager relations, are generalized to the far-from-equilibrium realm. The variational principle in the space spanned by the intrinsic conjugate fluxes and forces is expressed via the quadratic dissipation potential. As an application, the model is applied to the heat and mass diffusion of a system represented by a single-particle ensemble, which can also be applied to a simple system of many particles. Phenomenological transport coefficients are also derived in the near-equilibrium realm.
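The flavour of a steepest-entropy-ascent relaxation can be sketched numerically (an assumed minimal form, not the paper's state-space formulation): for a three-state probability distribution, ascend the entropy gradient after projecting out the components that would violate normalisation and a fixed mean energy. Energies, the starting distribution and the step size below are arbitrary illustrative choices.

```python
import math

e = [0.0, 1.0, 2.0]                  # state energies (illustrative)
p = [0.45, 0.10, 0.45]               # feasible start: sum p = 1, sum p*e = 1
s3, s2 = math.sqrt(3.0), math.sqrt(2.0)
v1 = [1 / s3] * 3                    # orthonormal basis spanning the two
v2 = [-1 / s2, 0.0, 1 / s2]          # constraint normals (all-ones and energy)

def entropy(q):
    return -sum(x * math.log(x) for x in q if x > 0)

step = 0.01
for _ in range(5000):
    g = [-math.log(x) - 1.0 for x in p]              # entropy gradient dS/dp_i
    c1 = sum(gi * vi for gi, vi in zip(g, v1))
    c2 = sum(gi * vi for gi, vi in zip(g, v2))
    # Project out the constraint-violating components, then step uphill
    g = [gi - c1 * a - c2 * b for gi, a, b in zip(g, v1, v2)]
    p = [x + step * gi for x, gi in zip(p, g)]

energy = sum(x * ei for x, ei in zip(p, e))          # conserved along the ascent
```

Because E = 1 is the mid-energy here, the constrained entropy maximum is the uniform distribution, and the ascent relaxes toward it while the energy constraint is preserved to round-off, mirroring the role conservation laws play in the paper's relaxation processes.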
NASA Astrophysics Data System (ADS)
Takiyama, Ken
2017-12-01
How neural adaptation affects neural information processing (i.e. the dynamics and equilibrium state of neural activities) is a central question in computational neuroscience. In my previous works, I analytically clarified the dynamics and equilibrium state of neural activities in a ring-type neural network model that is widely used to model the visual cortex, motor cortex, and several other brain regions. The neural dynamics and the equilibrium state in the neural network model corresponded to a Bayesian computation and statistically optimal multiple information integration, respectively, under a biologically inspired condition. These results were revealed in an analytically tractable manner; however, adaptation effects were not considered. Here, I analytically reveal how the dynamics and equilibrium state of neural activities in a ring neural network are influenced by spike-frequency adaptation (SFA). SFA is an adaptation that causes gradual inhibition of neural activity when a sustained stimulus is applied, and the strength of this inhibition depends on neural activities. I reveal that SFA plays three roles: (1) SFA amplifies the influence of external input in neural dynamics; (2) SFA allows the history of the external input to affect neural dynamics; and (3) the equilibrium state corresponds to the statistically optimal multiple information integration independent of the existence of SFA. In addition, the equilibrium state in a ring neural network model corresponds to the statistically optimal integration of multiple information sources under biologically inspired conditions, independent of the existence of SFA.
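Two of the stated roles of SFA can be seen in a drastically reduced model (a single linear rate unit with adaptation, assumed here for illustration; it is not the ring network analysed in the paper): the transient response overshoots the adapted equilibrium (amplification of the input's influence), and the equilibrium itself is r* = I/(1 + k).

```python
# Linear rate model with spike-frequency adaptation:
#   tau   * dr/dt = -r + I - a
#   tau_a * da/dt = -a + k * r
tau, tau_a, k, I = 1.0, 5.0, 1.0, 2.0
dt, r, a = 0.001, 0.0, 0.0
peak = 0.0
for _ in range(200000):                  # forward-Euler integration to t = 200
    dr = (-r + I - a) / tau
    da = (-a + k * r) / tau_a
    r += dt * dr
    a += dt * da
    peak = max(peak, r)                  # transient overshoot before adaptation

r_star = I / (1 + k)                     # adapted equilibrium, here 1.0
```

The slow adaptation variable lags the rate, so the early response reflects the raw input before settling to the adaptation-reduced steady state.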
Application of optimal control strategies to HIV-malaria co-infection dynamics
NASA Astrophysics Data System (ADS)
Fatmawati; Windarto; Hanif, Lathifah
2018-03-01
This paper presents a mathematical model of HIV and malaria co-infection transmission dynamics. Optimal control strategies, such as malaria prevention, anti-malaria and antiretroviral (ARV) treatments, are incorporated into the model to reduce the co-infection. First, we studied the existence and stability of equilibria of the presented model without control variables. The model has four equilibria, namely the disease-free equilibrium, the HIV endemic equilibrium, the malaria endemic equilibrium, and the co-infection equilibrium. We also obtain two basic reproduction ratios corresponding to the diseases. It was found that the disease-free equilibrium is locally asymptotically stable whenever the respective basic reproduction numbers are less than one. We also conducted a sensitivity analysis to determine the dominant factor controlling the transmission. Then, the optimal control theory for the model was derived analytically by using the Pontryagin Maximum Principle. Numerical simulations of the optimal control strategies are also performed to illustrate the results. From the numerical results, we conclude that the best strategy is to combine the malaria prevention and ARV treatments in order to reduce the malaria and HIV co-infection populations.
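The effect of a transmission-reducing control can be illustrated with a much simpler toy model than the paper's co-infection system (a single-disease SIR sketch, with made-up parameters): a constant control u scales the transmission rate down, and comparing the uncontrolled and controlled runs shows the reduction in the finally infected fraction.

```python
# Toy SIR model with a constant control u reducing transmission (1 - u) * beta.
def final_recovered(u, beta=0.5, gamma=0.2, days=300, dt=0.1):
    s, i, r = 0.99, 0.01, 0.0
    for _ in range(int(days / dt)):
        new_inf = (1 - u) * beta * s * i
        ds, di, dr = -new_inf, new_inf - gamma * i, gamma * i
        s, i, r = s + dt * ds, i + dt * di, r + dt * dr
    return r

no_control = final_recovered(0.0)    # R0 = 2.5: a large outbreak
with_control = final_recovered(0.7)  # effective R0 = 0.75: the outbreak dies out
```

Pushing the effective reproduction number below one, as the controls in the paper aim to do, changes the outcome qualitatively rather than just shrinking it.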
NASA Astrophysics Data System (ADS)
Li, He-Ping; Chen, Jian; Guo, Heng; Jiang, Dong-Jun; Zhou, Ming-Sheng; Department of Engineering Physics Team
2017-10-01
Ion extraction from a plasma under an externally applied electric field involves multi-particle and multi-field interactions, and has wide applications in the fields of materials processing, etching, chemical analysis, etc. In order to develop high-efficiency ion extraction methods, it is indispensable to establish a feasible model to understand the non-equilibrium transport processes of the charged particles and the evolution of the space charge sheath during the extraction process. Most previous studies of the ion extraction process are based on the electron-equilibrium fluid model, which assumes that the electrons are in a thermodynamic equilibrium state. However, this assumption may lead to confusion because it neglects electron motion during the sheath formation process. In this study, a non-electron-equilibrium model is established to describe the transport of the charged particles in a parallel-plate ion extraction process. The numerical results show that the formation of the Child-Langmuir sheath is mainly caused by charge separation. Thus, the sheath shielding effect will be significantly weakened if the charge separation is suppressed during the extraction of the charged particles.
[Aluminum mobilization models of forest yellow earth in South China].
Xin, Yan; Zhao, Yu; Duan, Lei
2009-07-15
For the application of acidification models in predicting the effects of acid deposition and formulating control strategies in China, it is important to select regionally applicable models of soil aluminum mobilization and to determine their parameters. Based on long-term monitoring of soil water chemistry from four forested watersheds in South China, the applicability of a range of equilibria describing aluminum mobilization was evaluated. The tested equilibria included those for gibbsite, jurbanite, kaolinite, imogolite, and SOM-Al. Results show that the gibbsite equilibrium commonly used in several acidification models is not suitable for the typical forest soil in South China, while the modified empirical gibbsite equation is applicable with pK = -2.40, a = 1.65 (for the upper layer) and pK = -2.82, a = 1.66 (for the lower layers), only at pH >= 4. Compared with the empirical gibbsite equation, the other equilibria do not perform better. It can also be seen that pAl varies only slightly as pH decreases below pH 4, which is unexplainable by any of the suggested equilibria.
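A numerical illustration of the fitted empirical relation, assuming the common functional form pAl = a · pH + pK (the sign convention is my assumption; only the fitted values pK = -2.40, a = 1.65 for the upper layer are from the abstract):

```python
# Modified empirical gibbsite equation, assumed form pAl = a * pH + pK,
# with the upper-layer fit reported in the abstract (valid only for pH >= 4).
def p_al(ph, a=1.65, pk=-2.40):
    if ph < 4:
        raise ValueError("empirical fit reported only for pH >= 4")
    return a * ph + pk

# At pH 4.5: pAl = 1.65 * 4.5 - 2.40 = 5.025,
# i.e. an Al3+ activity of 10**-5.025 mol/L.
pal = p_al(4.5)
activity = 10 ** (-pal)
```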
A "User-Friendly" Program for Vapor-Liquid Equilibrium.
ERIC Educational Resources Information Center
Da Silva, Francisco A.; And Others
1991-01-01
Described is a computer software package suitable for teaching and research in the area of multicomponent vapor-liquid equilibrium. This program, which has a complete database, can accomplish phase-equilibrium calculations using various models and graph the results. (KR)
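The core of such a teaching package can be sketched in a few lines (my own minimal example, not the described program): an ideal Raoult's-law bubble-point calculation for a benzene/toluene mixture, using approximate textbook Antoine constants (mmHg, °C) of the form log10 Psat = A - B/(C + T).

```python
# Ideal vapor-liquid equilibrium: bubble point of benzene(1)/toluene(2)
# at 760 mmHg via Raoult's law, sum(x_i * Psat_i(T)) = P.
ANTOINE = {"benzene": (6.90565, 1211.033, 220.790),   # approximate textbook values
           "toluene": (6.95464, 1344.800, 219.480)}

def psat(component, t_c):
    a, b, c = ANTOINE[component]
    return 10 ** (a - b / (c + t_c))

def bubble_point(x_benzene, p_total=760.0):
    """Bisection on temperature until the total vapor pressure equals p_total."""
    lo, hi = 20.0, 200.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        p = x_benzene * psat("benzene", mid) + (1 - x_benzene) * psat("toluene", mid)
        if p < p_total:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

t_bub = bubble_point(0.5)   # roughly 92 deg C for an equimolar mixture
```

Full packages replace the ideal-solution assumption with activity-coefficient or equation-of-state models, which is where the "various models" mentioned above come in.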
Optimisation of driver actions in RWD race car including tyre thermodynamics
NASA Astrophysics Data System (ADS)
Maniowski, Michal
2016-04-01
The paper presents an innovative method for lap time minimisation using genetic algorithms for multi-objective optimisation of a race driver-vehicle model. The decision variables consist of 16 parameters describing the actions of a professional driver (e.g. time traces for the brake, accelerator and steering wheel) on a race track section with a right-hand corner. A purpose-built, high-fidelity multibody vehicle model (called 'miMa') is described by 30 generalised coordinates and 440 parameters crucial in motorsport. Focus is put on modelling the tyre tread thermodynamics and its influence on race vehicle dynamics. A numerical example considers a rear-wheel-drive BMW E36 prepared for track day events. In order to improve the section lap time (by 5%) and corner exit velocity (by 4%), several different driving strategies are found depending on the thermal conditions of the semi-slick tyres. The process of race driver adaptation to initially cold or hot tyres is explained.
Gordon, G T; McCann, B P
2015-01-01
This paper describes the basis of a stakeholder-based sustainable optimisation indicator (SOI) system to be developed for small-to-medium sized activated sludge (AS) wastewater treatment plants (WwTPs) in the Republic of Ireland (ROI). Key technical publications relating to best practice plant operation, performance audits and optimisation, and indicator and benchmarking systems for wastewater services are identified. Optimisation studies were developed at a number of Irish AS WwTPs and key findings are presented. A national AS WwTP manager/operator survey was carried out to verify the applied operational findings and identify the key operator stakeholder requirements for this proposed SOI system. It was found that most plants require more consistent operational data-based decision-making, monitoring and communication structures to facilitate optimised, sustainable and continuous performance improvement. The applied optimisation and stakeholder consultation phases form the basis of the proposed stakeholder-based SOI system. This system will allow for continuous monitoring and rating of plant performance, facilitate optimised operation and encourage the prioritisation of performance improvement through tracking key operational metrics. Plant optimisation has become a major focus due to the transfer of all ROI water services to a national water utility from individual local authorities and the implementation of the EU Water Framework Directive.
3D equilibrium reconstruction with islands
NASA Astrophysics Data System (ADS)
Cianciosa, M.; Hirshman, S. P.; Seal, S. K.; Shafer, M. W.
2018-04-01
This paper presents the development of a 3D equilibrium reconstruction tool and the results of the first-ever reconstruction of an island equilibrium. The SIESTA non-nested equilibrium solver has been coupled to the V3FIT 3D equilibrium reconstruction code. Computed from a coupled VMEC and SIESTA model, synthetic signals are matched to measured signals by finding an optimal set of equilibrium parameters. By using the normalized pressure in place of normalized flux, non-equilibrium quantities needed by diagnostic signals can be efficiently mapped to the equilibrium. The effectiveness of this tool is demonstrated by reconstructing an island equilibrium of a DIII-D inner wall limited L-mode case with an n = 1 error field applied. Flat spots in Thomson and ECE temperature diagnostics show the reconstructed islands have the correct size and phase.
Analysis of the dynamics of multi-team Bertrand game with heterogeneous players
NASA Astrophysics Data System (ADS)
Ding, Zhanwen; Hang, Qinglan; Yang, Honglin
2011-06-01
In this article, we study the dynamics of a two-team Bertrand game with players having heterogeneous expectations. We study the equilibrium solutions and the conditions of their locally asymptotic stability. Numerical simulations are used to illustrate the complex behaviours of the proposed model of the Bertrand game. We demonstrate that some parameters of the model have great influence on the stability of Nash equilibrium and on the speed of convergence to Nash equilibrium. The chaotic behaviour of the model has been controlled by using feedback control method.
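Heterogeneous expectations can be sketched with a two-firm toy version (my illustrative reduction, not the paper's multi-team model): one naive player who jumps to its best response and one adaptive player who moves only a fraction of the way there, with linear demand q_i = A - b·p_i + d·p_j and unit cost c.

```python
# Two-firm Bertrand game with heterogeneous expectations.
# Best response: BR(p_j) = (A + b*c + d*p_j) / (2*b)
# Nash equilibrium: p* = (A + b*c) / (2*b - d)
A, b, d, c, alpha = 10.0, 1.0, 0.5, 1.0, 0.5

def best_response(p_other):
    return (A + b * c + d * p_other) / (2 * b)

p1, p2 = 2.0, 8.0
for _ in range(200):
    p1_new = best_response(p2)                              # naive player
    p2_new = (1 - alpha) * p2 + alpha * best_response(p1)   # adaptive player
    p1, p2 = p1_new, p2_new

p_nash = (A + b * c) / (2 * b - d)   # = 22/3
```

With these parameters the map is a contraction and both prices converge to the Nash equilibrium; the paper's point is that for other parameter values the same kind of dynamics loses stability and becomes chaotic.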
NASA Astrophysics Data System (ADS)
Yoshida, N.; Oki, T.
2016-12-01
Appropriate initial conditions of soil moisture and water table depth are important for reducing uncertainty in hydrological simulations. Approaches to determine the initial water table depth have been developed because of the difficulty of obtaining information on global water table depth and soil moisture distributions. However, how is equilibrium soil moisture determined by climate conditions? We discuss this issue using a land surface model with a representation of water table dynamics (MAT-GW). First, the global pattern of water table depth at equilibrium soil moisture in MAT-GW was verified. The water table depth in MAT-GW was deeper than the previous estimate in fundamentally arid regions, because the negative recharge and continuous baseflow deepened the water table. This indicates that the hydraulic conductivity used for estimating recharge and baseflow in MAT-GW needs to be reassessed. In soil physics, it is recognised that proper hydraulic property models for water retention and unsaturated hydraulic conductivity should be selected for each soil type. Therefore, the effect of the choice of hydraulic property model on terrestrial soil moisture and water table depth was examined. The Clapp and Hornberger equation (CH eq.) and the van Genuchten equation (VG eq.) were used as representative hydraulic property models. These models were integrated into MAT-GW, and the equilibrium soil moisture and water table depth obtained with each model were compared. The water table depth and soil moisture at grid cells that reached equilibrium in both simulations were analysed. The equilibrium water table depth was deeper with the VG eq. than with the CH eq. in most grid cells, owing to the shapes of the hydraulic property models. Accordingly, total soil moisture was smaller with the VG eq. than with the CH eq. at almost all grid cells where the water table depth reached equilibrium.
It is interesting that the spatial patterns of where the water table depth reached equilibrium were basically similar in both simulations, although reversed patterns appeared in the eastern and western parts of America. Selecting the hydraulic property model according to soil type may compensate for the characteristics of each model in the initialisation.
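The two retention curves being compared can be written in a few lines; the parameter values below are illustrative loam-like numbers of my own choosing, not the study's:

```python
# Water retention curves as functions of suction head h (m):
# van Genuchten:      theta(h) = theta_r + (theta_s - theta_r) * [1 + (a*|h|)^n]^(-m)
# Clapp-Hornberger (Brooks-Corey form): theta(h) = theta_s * (h_s / |h|)^(1/b), capped at theta_s
def theta_vg(h, theta_r=0.078, theta_s=0.43, a=3.6, n=1.56):
    m = 1 - 1 / n
    return theta_r + (theta_s - theta_r) * (1 + (a * abs(h)) ** n) ** (-m)

def theta_ch(h, theta_s=0.43, h_s=0.111, b=5.39):
    return min(theta_s, theta_s * (h_s / abs(h)) ** (1 / b))

h = 1.0   # suction head of 1 m
vg, ch = theta_vg(h), theta_ch(h)
```

The different curvature of the two forms at a given suction is what drives the systematic difference in equilibrium water table depth and total soil moisture described above.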
Kassem, Abdulsalam M; Ibrahim, Hany M; Samy, Ahmed M
2017-05-01
The objective of this study was to develop and optimise a self-nanoemulsifying drug delivery system (SNEDDS) of atorvastatin calcium (ATC) to improve its dissolution rate and, eventually, oral bioavailability. Ternary phase diagrams were constructed on the basis of solubility and emulsification studies. The composition of ATC-SNEDDS was optimised using a Box-Behnken optimisation design. Optimised ATC-SNEDDS was characterised for various physicochemical properties. Pharmacokinetic, pharmacodynamic and histological studies were performed in rats. Optimised ATC-SNEDDS resulted in a droplet size of 5.66 nm, a zeta potential of -19.52 mV and a t90 of 5.43 min, and completely released ATC within 30 min irrespective of the pH of the medium. The area under the curve of optimised ATC-SNEDDS in rats was 2.34-fold higher than that of ATC suspension. Pharmacodynamic studies revealed a significant reduction in the serum lipids of rats with fatty liver. Photomicrographs showed improvement in hepatocyte structure. In this study, we confirmed that ATC-SNEDDS would be a promising approach for improving the oral bioavailability of ATC.
Georgieva, Nedyalka; Yaneva, Zvezdelina; Dermendzhieva, Diyana
2017-09-01
The aim of the present study was to develop a cresyl violet (CV)/bentonite composite system, to investigate the equilibrium sorption of the fluorescent dye on bentonite, to determine the characteristic equilibrium and thermodynamic parameters of the system by appropriate empirical isotherm models, and to assess its pH-indicator properties. The absorption characteristics of CV solutions were investigated by UV/VIS spectrophotometry. Equilibrium experiments were conducted and the experimental data were modelled by six mathematical isotherm models. The analyses of the experimental data showed that bentonite exhibited a significantly high capacity towards CV (169.92 mg/g). The encapsulation efficiency was 85%. The Langmuir, Flory-Huggins and El-Awady models best represented the experimental results. The free Gibbs energy of adsorption (ΔG°) was calculated on the basis of the values of the equilibrium coefficients determined by the proposed models. The values of ΔG° determined by the Langmuir, Temkin and Flory-Huggins models are within the range -20 to -40 kJ/mol, which indicates that the adsorption process is spontaneous and that chemisorption takes place through charge sharing or transfer from the dye molecules to the sorbent surface as a coordinate-type bond. The investigations of the obtained CV/bentonite hybrid systems for application as pH-markers showed satisfactory results.
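The thermodynamic step reported above can be sketched as follows; the isotherm form and ΔG° = -RT ln K convention are standard, but the equilibrium constant and Langmuir affinity used below are illustrative placeholders, not the paper's fitted values (only the 169.92 mg/g capacity is from the abstract).

```python
import math

R = 8.314        # gas constant, J/(mol K)
T = 298.15       # temperature, K

def langmuir(c, q_max=169.92, k_l=0.05):
    """Langmuir isotherm q = qmax * KL * C / (1 + KL * C); q_max in mg/g."""
    return q_max * k_l * c / (1 + k_l * c)

def gibbs_energy(k_eq, t=T):
    """Standard Gibbs energy of adsorption from a dimensionless K, in J/mol."""
    return -R * t * math.log(k_eq)

dg = gibbs_energy(1.0e4)   # about -22.8 kJ/mol, inside the chemisorption range cited
```

A K of 10^4 at 298 K lands ΔG° near -23 kJ/mol, i.e. within the -20 to -40 kJ/mol window the abstract associates with chemisorption.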
Modelling non-equilibrium thermodynamic systems from the speed-gradient principle.
Khantuleva, Tatiana A; Shalymov, Dmitry S
2017-03-06
The application of the speed-gradient (SG) principle to the non-equilibrium distribution systems far away from thermodynamic equilibrium is investigated. The options for applying the SG principle to describe the non-equilibrium transport processes in real-world environments are discussed. Investigation of a non-equilibrium system's evolution at different scale levels via the SG principle allows for a fresh look at the thermodynamics problems associated with the behaviour of the system entropy. Generalized dynamic equations for finite and infinite number of constraints are proposed. It is shown that the stationary solution to the equations, resulting from the SG principle, entirely coincides with the locally equilibrium distribution function obtained by Zubarev. A new approach to describe time evolution of systems far from equilibrium is proposed based on application of the SG principle at the intermediate scale level of the system's internal structure. The problem of the high-rate shear flow of viscous fluid near the rigid plane plate is discussed. It is shown that the SG principle allows closed mathematical models of non-equilibrium processes to be constructed. This article is part of the themed issue 'Horizons of cybernetical physics'. © 2017 The Author(s).
NASA Astrophysics Data System (ADS)
Wu, C. Z.; Huang, G. H.; Yan, X. P.; Cai, Y. P.; Li, Y. P.
2010-05-01
Large crowds are increasingly common at political, social, economic, cultural and sports events in urban areas. This has drawn attention to the management of evacuations in such situations. In this study, we optimise an approximation method for vehicle allocation and route planning in case of an evacuation. This method, based on an interval-parameter multi-objective optimisation model, has potential for use in a flexible decision support system for evacuation management. The modelling solutions are obtained by sequentially solving two sub-models corresponding to the lower and upper bounds of the desired objective function value. The interval solutions are feasible and stable in the given decision space, and this may reduce the negative effects of uncertainty, thereby improving decision makers' estimates under different conditions. The resulting model can be used for a systematic analysis of the complex relationships among evacuation time, cost and environmental considerations. The results of a case study used to validate the proposed model show that the model does generate useful solutions for planning evacuation management and practices. Furthermore, these results are useful for evacuation planners, not only in making vehicle allocation decisions but also in providing insight into the tradeoffs among evacuation time, environmental considerations and economic objectives.
Analysis of A Virus Dynamics Model
NASA Astrophysics Data System (ADS)
Zhang, Baolin; Li, Jianquan; Li, Jia; Zhao, Xin
2018-03-01
In order to characterize virus infection in the host more accurately, a virus dynamics model with latency and virulence is established and analyzed in this paper. The positivity and boundedness of the solution are proved. After obtaining the basic reproduction number and the existence of the infected equilibrium, the Lyapunov method and the LaSalle invariance principle are used to determine the stability of the uninfected and infected equilibria by constructing appropriate Lyapunov functions. We prove that when the basic reproduction number does not exceed 1, the uninfected equilibrium is globally stable and the virus is eventually cleared; when the basic reproduction number is greater than 1, the infected equilibrium is globally stable and the virus persists in the host at a certain level. The effect of virulence and latency on infection is also discussed.
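The threshold behaviour can be illustrated numerically with the standard target-cell-limited model (assumed here for illustration; the paper's system additionally includes latency and virulence terms): when R0 < 1, the simulated viral load decays to zero and the uninfected equilibrium is recovered.

```python
# Standard within-host model: T' = lam - d*T - beta*T*V,
# I' = beta*T*V - delta*I, V' = p*I - c*V, with R0 = beta*lam*p / (d*delta*c).
lam, d, beta, delta, p, c = 100.0, 0.1, 1e-5, 0.5, 10.0, 5.0
R0 = beta * lam * p / (d * delta * c)     # = 0.04 < 1: clearance expected

T, I, V = lam / d, 0.0, 10.0              # start at the uninfected equilibrium T0 plus virus
dt = 0.01
for _ in range(int(200 / dt)):            # forward-Euler integration to t = 200
    dT = lam - d * T - beta * T * V
    dI = beta * T * V - delta * I
    dV = p * I - c * V
    T += dt * dT
    I += dt * dI
    V += dt * dV
```

With these parameters the infection cannot sustain itself, and the state returns to (T, I, V) = (λ/d, 0, 0), matching the global stability statement for R0 ≤ 1.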
On exchange rate misalignments in the Eurozone's peripheral countries
NASA Astrophysics Data System (ADS)
Grochová, Ladislava; Plecitá, Klára
2013-10-01
In this paper we model equilibrium exchange rates for the Eurozone's countries on the basis of the Behavioural Equilibrium Exchange Rate approach, which assumes that equilibrium exchange rates are affected in the long run by economic fundamentals. To assess the degree of exchange rate misalignment for the Eurozone's peripheral countries (Portugal, Ireland, Greece and Spain), the gap between the actual and the modelled equilibrium exchange rate is calculated. Our results show that Spain, Portugal and Ireland had their real exchange rates in equilibrium when they joined the Eurozone; however, their real exchange rates have been persistently overvalued since the beginning of the 2000s. Greece, on the other hand, experienced diminishing undervaluation at the beginning of its membership in the Eurozone and has exhibited an overvalued real exchange rate since 2009.
Stability and bifurcation analysis on a ratio-dependent predator-prey model with time delay
NASA Astrophysics Data System (ADS)
Xu, Rui; Gan, Qintao; Ma, Zhien
2009-08-01
A ratio-dependent predator-prey model with time delay due to the gestation of the predator is investigated. By analyzing the corresponding characteristic equations, the local stability of a positive equilibrium and a semi-trivial boundary equilibrium is discussed, respectively. Further, it is proved that the system undergoes a Hopf bifurcation at the positive equilibrium. Using the normal form theory and the center manifold reduction, explicit formulae are derived to determine the direction of bifurcations and the stability and other properties of bifurcating periodic solutions. By means of an iteration technique, sufficient conditions are obtained for the global attractiveness of the positive equilibrium. By comparison arguments, the global stability of the semi-trivial equilibrium is also addressed. Numerical simulations are carried out to illustrate the main results.
Zhu, Huayang; Ricote, Sandrine; Coors, W Grover; Kee, Robert J
2015-01-01
A model-based interpretation of measured equilibrium conductivity and conductivity relaxation is developed to establish thermodynamic, transport, and kinetics parameters for multiple charged defect conducting (MCDC) ceramic materials. The present study focuses on 10% yttrium-doped barium zirconate (BZY10). In principle, using the Nernst-Einstein relationship, equilibrium conductivity measurements are sufficient to establish thermodynamic and transport properties. However, in practice it is difficult to establish unique sets of properties using equilibrium conductivity alone. Combining equilibrium and conductivity-relaxation measurements serves to significantly improve the quantitative fidelity of the derived material properties. The models are developed using a Nernst-Planck-Poisson (NPP) formulation, which enables the quantitative representation of conductivity relaxations caused by very large changes in oxygen partial pressure.
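The Nernst-Einstein step mentioned above is a one-line relation, σ = n (ze)² D / (kB T); the carrier density and diffusivity below are illustrative placeholders of my own, not BZY10 measurements.

```python
# Nernst-Einstein relation linking a defect diffusion coefficient to its
# conductivity contribution: sigma = n * (z*e)**2 * D / (kB * T).
E_CHARGE = 1.602176634e-19   # elementary charge, C
K_B = 1.380649e-23           # Boltzmann constant, J/K

def nernst_einstein_sigma(n, d_coeff, t, z=1):
    """Conductivity (S/m) for carrier density n (1/m^3) and diffusivity d_coeff (m^2/s)."""
    return n * (z * E_CHARGE) ** 2 * d_coeff / (K_B * t)

# Illustrative proton-like carrier at 873 K.
sigma = nernst_einstein_sigma(n=1e27, d_coeff=1e-11, t=873.0)
```

The difficulty the abstract describes is the inverse problem: several (n, D) pairs per defect species can reproduce the same measured σ, which is why the relaxation data are needed to pin the parameters down.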
Application of Three Existing Stope Boundary Optimisation Methods in an Operating Underground Mine
NASA Astrophysics Data System (ADS)
Erdogan, Gamze; Yavuz, Mahmut
2017-12-01
Underground mine planning and design optimisation have received little attention because of the complexity and variability of problems in underground mines. Although a number of optimisation studies and software tools are available, and some of them in particular have been implemented effectively to determine the ultimate-pit limits of open pit mines, there is still a lack of studies on the optimisation of ultimate stope boundaries in underground mines. The approaches proposed for this purpose aim at maximizing the economic profit by selecting the best possible layout under operational, technical and physical constraints. In this paper, three existing heuristic techniques, the Floating Stope Algorithm, the Maximum Value Algorithm and the Mineable Shape Optimiser (MSO), are examined for optimisation of the stope layout in a case study. Each technique is assessed in terms of applicability, algorithm capabilities and limitations, considering the underground mine planning challenges. Finally, the results are evaluated and compared.
Design Optimisation of a Magnetic Field Based Soft Tactile Sensor
Raske, Nicholas; Kow, Junwai; Alazmani, Ali; Ghajari, Mazdak; Culmer, Peter; Hewson, Robert
2017-01-01
This paper investigates the design optimisation of a magnetic field based soft tactile sensor, comprised of a magnet and Hall effect module separated by an elastomer. The aim was to minimise sensitivity of the output force with respect to the input magnetic field; this was achieved by varying the geometry and material properties. Finite element simulations determined the magnetic field and structural behaviour under load. Genetic programming produced phenomenological expressions describing these responses. Optimisation studies constrained by a measurable force and stable loading conditions were conducted; these produced Pareto sets of designs from which the optimal sensor characteristics were selected. The optimisation demonstrated a compromise between sensitivity and the measurable force; a fabricated version of the optimised sensor validated the improvements made using this methodology. The approach presented can be applied in general for optimising soft tactile sensor designs over a range of applications and sensing modes. PMID:29099787
Using a Hydrological Model to Determine Environmentally Safer Windows for Herbicide Application
J.L. Michael; M.C. Smith; W.G. Knisel; D.G. Neary; W.P. Fowler; D.J. Turton
1996-01-01
A modification of the GLEAMS model was used to determine application windows which would optimise efficacy and environmental safety for herbicide application to a forest site. Herbicide/soil partition coefficients were determined using soil samples collected from the study site for two herbicides (imazapyr, Koc=46, triclopyr ester, K
NASA Astrophysics Data System (ADS)
Attari Moghaddam, Alireza; Prat, Marc; Tsotsas, Evangelos; Kharaghani, Abdolreza
2017-12-01
The classical continuum modeling of evaporation in capillary porous media is revisited through pore network simulations of the evaporation process. The computed moisture diffusivity is characterized by a minimum corresponding to the transition between liquid and vapor transport mechanisms, confirming previous interpretations. The study also suggests an explanation for the scattering generally observed in the moisture diffusivity obtained from experimental data. The pore network simulations indicate a noticeable nonlocal equilibrium effect, leading to a new interpretation of the vapor pressure-saturation relationship classically introduced to obtain the one-equation continuum model of evaporation. The latter should not be understood as a desorption isotherm, as classically considered, but rather as a signature of a nonlocal equilibrium effect. The main outcome of this study is therefore that a nonlocal equilibrium two-equation model must be considered for improving the continuum modeling of evaporation.
NASA Technical Reports Server (NTRS)
Grossman, B.; Garrett, J.; Cinnella, P.
1989-01-01
Several versions of flux-vector split and flux-difference split algorithms were compared with regard to general applicability and complexity. Test computations were performed using curve-fit equilibrium air chemistry for an M = 5 high-temperature inviscid flow over a wedge and an M = 24.5 inviscid flow over a blunt cylinder; for these cases, little difference in accuracy was found among the versions of the same flux-split algorithm. For flows with nonequilibrium chemistry, the effects of the thermodynamic model on the development of flux-vector split and flux-difference split algorithms were investigated using an equilibrium model, a general nonequilibrium model, and a simplified model based on vibrational relaxation. Several numerical examples are presented, including nonequilibrium air chemistry in a high-temperature shock tube and nonequilibrium hydrogen-air chemistry in a supersonic diffuser.
Population and prehistory II: Space-limited human populations in constant environments
Puleston, Cedric O.; Tuljapurkar, Shripad
2010-01-01
We present a population model to examine the forces that determined the quality and quantity of human life in early agricultural societies where cultivable area is limited. The model is driven by the non-linear and interdependent relationships between the age distribution of a population, its behavior and technology, and the nature of its environment. The common currency in the model is the production of food, on which age-specific rates of birth and death depend. There is a single nontrivial equilibrium population at which productivity balances caloric needs. One of the most powerful controls on equilibrium hunger level is fertility control. Gains against hunger are accompanied by decreases in population size. Increasing worker productivity does increase equilibrium population size but does not improve welfare at equilibrium. As a case study we apply the model to the population of a Polynesian valley before European contact. PMID:18598711
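A toy one-dimensional version of such a space-limited equilibrium (the functional form and all numbers below are purely illustrative, not the paper's age-structured model) finds the population at which per-capita production equals caloric need:

```python
import math

def equilibrium_population(f_max=1.0e6, a=2.5, need=1.0, tol=1e-9):
    """Bisection for N* with F(N*)/N* == need, where total production
    F(N) = f_max * (1 - exp(-a N / f_max)) saturates on limited land
    (a: per-worker productivity at low density; need: calories/person)."""
    surplus = lambda n: f_max * (1.0 - math.exp(-a * n / f_max)) / n - need
    lo, hi = 1.0, 10.0 * f_max        # surplus(lo) > 0 > surplus(hi)
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        if surplus(mid) > 0.0:        # per-capita food still exceeds need
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Since per-capita production F(N)/N is strictly decreasing, the nontrivial equilibrium is unique, mirroring the single nontrivial equilibrium stated in the abstract.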
NASA Astrophysics Data System (ADS)
Sutimin; Khabibah, Siti; Munawwaroh, Dita Anis
2018-02-01
A harvesting fishery model is proposed to analyze the effects of the presence of a red devil fish population as a predator in an ecosystem. In this paper, we consider an ecological model of three species by taking into account two competing species and the presence of a predator (red devil), the third species, and incorporate the harvesting efforts on each fish species. The stability of the dynamical system is discussed and the existence of biological and bionomic equilibria is examined. The optimal harvest policy is studied and the solution is derived in the equilibrium case by applying Pontryagin's maximum principle. Simulation results are presented to illustrate the dynamical behavior of the model and show that the optimal equilibrium solution is globally asymptotically stable. The results show that the optimal harvesting effort is obtained with regard to the bionomic and biological equilibria.
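A static, single-stock stand-in for the bionomic analysis (far simpler than the paper's three-species Pontryagin treatment; every parameter value is illustrative) can be sketched by scanning the sustained economic rent over harvesting effort for a logistic stock:

```python
def optimal_effort(r=1.0, K=100.0, q=0.01, price=5.0, cost=2.0, n=10000):
    """Scan harvesting effort E for one logistic stock:
    sustainable stock X(E) = K (1 - qE/r), sustained yield q E X(E),
    rent = price * yield - cost * E.  Returns the rent-maximising effort,
    a static stand-in for a full optimal-control analysis."""
    best_e, best_rent = 0.0, float("-inf")
    for k in range(n + 1):
        e = k * (r / q) / n               # efforts up to stock collapse
        x = K * (1 - q * e / r)           # biological equilibrium stock
        rent = price * q * e * x - cost * e
        if rent > best_rent:
            best_e, best_rent = e, rent
    return best_e, best_rent
```

For these numbers the rent is a downward parabola in E, with analytic maximum at E* = r(pqK - c)/(2pq²K) = 30 and rent 45, which the scan recovers.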
Answer Sets in a Fuzzy Equilibrium Logic
NASA Astrophysics Data System (ADS)
Schockaert, Steven; Janssen, Jeroen; Vermeir, Dirk; de Cock, Martine
Since its introduction, answer set programming has been generalized in many directions, to cater to the needs of real-world applications. As one of the most general “classical” approaches, answer sets of arbitrary propositional theories can be defined as models in the equilibrium logic of Pearce. Fuzzy answer set programming, on the other hand, extends answer set programming with the capability of modeling continuous systems. In this paper, we combine the expressiveness of both approaches, and define answer sets of arbitrary fuzzy propositional theories as models in a fuzzification of equilibrium logic. We show that the resulting notion of answer set is compatible with existing definitions, when the syntactic restrictions of the corresponding approaches are met. We furthermore locate the complexity of the main reasoning tasks at the second level of the polynomial hierarchy. Finally, as an illustration of its modeling power, we show how fuzzy equilibrium logic can be used to find strong Nash equilibria.
Mathematical model for HIV spreads control program with ART treatment
NASA Astrophysics Data System (ADS)
Maimunah; Aldila, Dipo
2018-03-01
In this article, using a deterministic approach with a seven-dimensional nonlinear system of ordinary differential equations, we establish a mathematical model for the spread of HIV with an ART treatment intervention. In a simplified model with no ART treatment implemented, the disease-free and endemic equilibrium points were established analytically along with the basic reproduction number. The local stability criteria of the disease-free equilibrium and the existence criteria of the endemic equilibrium were analyzed. We find that the endemic equilibrium exists when the basic reproduction number is larger than one. From the sensitivity analysis of the basic reproduction number of the complete model (with ART treatment), we find that increasing the number of infected humans who follow the ART treatment program will reduce the basic reproduction number. We also simulate this result in a numerical experiment on the autonomous system to show how the treatment intervention reduces the infected population during the intervention time period.
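The threshold behaviour described here can be illustrated on a toy two-compartment model (not the paper's seven-dimensional system; all rates below are hypothetical): treatment adds a removal rate to the infectious class, lowering R0, and the endemic equilibrium exists only when R0 > 1:

```python
def endemic_equilibrium(beta, gamma, treat_rate, mu=0.01):
    """Toy S-I model in proportions with births/deaths mu:
        S' = mu - mu S - beta S I
        I' = beta S I - (mu + gamma + treat_rate) I
    Basic reproduction number R0 = beta / (mu + gamma + treat_rate).
    Endemic point (S*, I*) = (1/R0, mu (R0 - 1) / beta); exists iff R0 > 1,
    so a larger treatment rate pushes R0 below one and removes it."""
    k = mu + gamma + treat_rate
    r0 = beta / k
    if r0 <= 1.0:
        return r0, None               # only the disease-free equilibrium
    return r0, (1.0 / r0, mu * (r0 - 1.0) / beta)
```

At the endemic point both derivatives vanish, which the assertions below check directly.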
NASA Astrophysics Data System (ADS)
Semenov, Mikhail A.; Stratonovitch, Pierre; Paul, Matthew J.
2017-04-01
Short periods of extreme weather, such as a spell of high temperature or drought during a sensitive stage of development, can result in substantial yield losses due to reductions in grain number and grain size. In a modelling study (Stratonovitch & Semenov 2015), heat tolerance around flowering in wheat was identified as a key trait for increased yield potential in Europe under climate change. Ji et al. (2010) demonstrated cultivar-specific responses of yield to drought stress around flowering in wheat. They hypothesised that carbohydrate supply to anthers may be the key to maintaining pollen fertility and grain number in wheat. Nuccio et al. (2015) showed that genetically modified varieties of maize with increased sucrose concentration in ear spikelets performed better under non-drought and drought conditions in field experiments. The objective of this modelling study was to assess the potential benefits of tolerance to drought during reproductive development for wheat yield potential and yield stability across Europe. We used the Sirius wheat model to optimise wheat ideotypes for 2050 (HadGEM2, RCP8.5) climate scenarios at selected European sites. Eight cultivar parameters were optimised to maximise mean yields, including parameters controlling phenology, canopy growth and water limitation. At those sites where water could be limited, ideotypes sensitive to drought produced substantially lower mean yields and higher yield variability compared with tolerant ideotypes. Therefore, tolerance to drought during reproductive development is likely to be required for wheat cultivars optimised for the future climate in Europe in order to achieve high yield potential and yield stability.
Intelligent inversion method for pre-stack seismic big data based on MapReduce
NASA Astrophysics Data System (ADS)
Yan, Xuesong; Zhu, Zhixin; Wu, Qinghua
2018-01-01
Seismic exploration is a method of oil exploration that uses seismic information: inversion of seismic data yields useful reservoir parameters, allowing exploration to be carried out effectively. Pre-stack data are characterised by a large volume and abundant information, and their inversion can recover detailed reservoir parameters. Owing to the sheer volume of pre-stack seismic data, existing single-machine environments can no longer meet the computational demand, so a method that solves the pre-stack inversion problem with high efficiency and speed is urgently needed. Optimisation of the elastic parameters with a genetic algorithm easily falls into a local optimum, which leads to poor inversion results, especially for the density. Therefore, an intelligent optimisation algorithm is proposed in this paper and used for the elastic parameter inversion of pre-stack seismic data. The algorithm improves the population initialisation strategy by using the Gardner formula and improves the genetic operators, and the improved algorithm obtains better inversion results in a model test with logging data. The elastic parameters obtained by inversion fit the logging curves of the theoretical model well, which effectively improves the inversion precision of the density. The algorithm was implemented with a MapReduce model to solve the seismic big-data inversion problem. The experimental results show that the parallel model can effectively reduce the running time of the algorithm.
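The Gardner-based initialisation can be illustrated with a toy real-coded genetic algorithm fitting a single (Vp, density) pair. The operators and all parameter values are hypothetical; only the Gardner relation rho ≈ 0.31 Vp^0.25 (Vp in m/s, rho in g/cm³) is standard:

```python
import random

def gardner(vp, a=0.31, b=0.25):
    # Gardner's empirical relation: density (g/cm^3) ~= a * Vp**b (Vp in m/s)
    return a * vp ** b

def ga_invert(vp_obs, rho_obs, pop_size=40, generations=300, seed=0):
    """Toy real-coded GA for one (Vp, rho) pair.  The population is
    initialised with Gardner-consistent densities, mimicking an
    initialisation strategy of that kind; crossover is arithmetic,
    mutation Gaussian, selection elitist truncation."""
    rng = random.Random(seed)
    def misfit(ind):
        vp, rho = ind
        return ((vp - vp_obs) / vp_obs) ** 2 + ((rho - rho_obs) / rho_obs) ** 2
    pop = []
    for _ in range(pop_size):
        vp = rng.uniform(1500.0, 6000.0)
        pop.append((vp, gardner(vp)))          # Gardner-based initialisation
    for _ in range(generations):
        pop.sort(key=misfit)
        parents = pop[: pop_size // 2]         # keep the better half
        children = []
        while len(parents) + len(children) < pop_size:
            (v1, r1), (v2, r2) = rng.sample(parents, 2)
            v = 0.5 * (v1 + v2) + rng.gauss(0.0, 20.0)   # crossover + mutation
            r = 0.5 * (r1 + r2) + rng.gauss(0.0, 0.01)
            children.append((v, r))
        pop = parents + children
    return min(pop, key=misfit)
```

Seeding densities from Gardner's relation starts the search near physically plausible (Vp, rho) pairs, which is the intuition behind initialising density from velocity.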
Marsac, L; Chauvet, D; La Greca, R; Boch, A-L; Chaumoitre, K; Tanter, M; Aubry, J-F
2017-09-01
Transcranial brain therapy has recently emerged as a non-invasive strategy for the treatment of various neurological diseases, such as essential tremor or neurogenic pain. However, treatments require millimetre-scale accuracy. The use of high frequencies (typically ≥1 MHz) decreases the ultrasonic wavelength to the millimetre scale, thereby increasing the clinical accuracy and lowering the probability of cavitation, which improves the safety of the technique compared with low-frequency devices operating at 220 kHz. Nevertheless, the skull produces greater distortions of high-frequency waves than of low-frequency waves, so high-frequency waves require high-performance adaptive focusing techniques based on modelling the wave propagation through the skull. This study sought to optimise the acoustical modelling of the skull based on computed tomography (CT) for a 1 MHz clinical brain therapy system. The best model tested in this article corresponded to a maximum speed of sound of 4000 m/s in the skull bone, and it restored 86% of the optimal pressure amplitude on average in a collection of six human skulls. Compared with uncorrected focusing, the optimised non-invasive correction led to an average increase of 99% in the maximum pressure amplitude around the target and an average decrease of 48% in the distance between the peak pressure and the selected target. The attenuation through the skulls was also assessed within the bandwidth of the transducers and found to vary in the range of 10 ± 3 dB at 800 kHz and 16 ± 3 dB at 1.3 MHz.
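A common simplification in CT-based transcranial modelling is to map CT values linearly onto the speed of sound, capped at a maximum bone value. The sketch below is a toy version of such a mapping; the linear form, the water reference and the 2000 HU bone scale are assumptions, with only the 4000 m/s maximum taken from the result above:

```python
def sound_speed_from_ct(hu, c_water=1500.0, c_max=4000.0, hu_max=2000.0):
    """Toy CT-to-acoustics mapping: interpolate linearly from the speed of
    sound in water at 0 HU to an assumed maximum bone speed (here the
    4000 m/s reported as optimal) at a reference bone HU, clipping the
    fraction outside [0, 1]."""
    frac = min(max(hu / hu_max, 0.0), 1.0)
    return c_water + frac * (c_max - c_water)
```

A per-voxel map of this kind is what a wave-propagation solver would consume when simulating the focus through the skull.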
Chisholm, Rebecca H; Lorenzi, Tommaso; Clairambault, Jean
2016-11-01
Drug-induced drug resistance in cancer has been attributed to diverse biological mechanisms at the individual cell or cell population scale, relying on stochastically or epigenetically varying expression of phenotypes at the single cell level, and on the adaptability of tumours at the cell population level. We focus on intra-tumour heterogeneity, namely between-cell variability within cancer cell populations, to account for drug resistance. To shed light on such heterogeneity, we review evolutionary mechanisms that encompass the great evolution that has designed multicellular organisms, as well as smaller windows of evolution on the time scale of human disease. We also present mathematical models used to predict drug resistance in cancer and optimal control methods that can circumvent it in combined therapeutic strategies. Plasticity in cancer cells, i.e., partial reversal to a stem-like status in individual cells and resulting adaptability of cancer cell populations, may be viewed as backward evolution making cancer cell populations resistant to drug insult. This reversible plasticity is captured by mathematical models that incorporate between-cell heterogeneity through continuous phenotypic variables. Such models have the benefit of being compatible with optimal control methods for the design of optimised therapeutic protocols involving combinations of cytotoxic and cytostatic treatments with epigenetic drugs and immunotherapies. Gathering knowledge from cancer and evolutionary biology with physiologically based mathematical models of cell population dynamics should provide oncologists with a rationale to design optimised therapeutic strategies to circumvent drug resistance, that still remains a major pitfall of cancer therapeutics. This article is part of a Special Issue entitled "System Genetics" Guest Editor: Dr. Yudong Cai and Dr. Tao Huang. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Ahmad, Norhidayah; Yong, Sing Hung; Ibrahim, Naimah; Ali, Umi Fazara Md; Ridwan, Fahmi Muhammad; Ahmad, Razi
2018-03-01
Oil palm empty fruit bunch (EFB) was successfully modified with phosphoric acid hydration followed by impregnation with copper oxide (CuO) to synthesize CuO-modified catalytic carbon (CuO/EFBC) for low-temperature removal of nitric oxide (NO) from gas streams. CuO impregnation was optimised through response surface methodology (RSM) using a Box-Behnken Design (BBD) in terms of metal loading (5-20%), sintering temperature (200-800°C) and sintering time (2-6 hours). The model response for these variables was the NO adsorption capacity, obtained from an up-flow column adsorption experiment with a 100 mL/min flow of 500 ppm NO/He at different operating conditions. The optimum operating variables suggested by the model were 20% metal loading, 200°C sintering temperature and 6 hours sintering time. A good agreement (R2 = 0.9625) was achieved between the experimental data and the model prediction. ANOVA indicated that the model terms metal loading and sintering temperature are significant (Prob > F less than 0.05).
NASA Astrophysics Data System (ADS)
Zhong, Shuya; Pantelous, Athanasios A.; Beer, Michael; Zhou, Jian
2018-05-01
Offshore wind farms are an emerging source of renewable energy which has been shown to have tremendous potential in recent years. In this blooming area, a key challenge is that the preventive maintenance of offshore turbines should be scheduled reasonably to satisfy the power supply without failure. In this direction, two significant goals should be considered simultaneously as a trade-off: one is to maximise the system reliability and the other is to minimise the maintenance-related cost. Thus, a non-linear multi-objective programming model is proposed, including two newly defined objectives and thirteen families of constraints suitable for the preventive maintenance of offshore wind farms. In order to solve the model effectively, the non-dominated sorting genetic algorithm II (NSGA-II), designed especially for multi-objective optimisation, is utilised, and Pareto-optimal schedules can be obtained to offer adequate support to decision-makers. Finally, an example is given to illustrate the performance of the devised model and algorithm, and to explore the relationship between the two targets with the help of a contrast model.
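The first phase of NSGA-II, sorting candidate schedules into successive Pareto fronts, can be sketched as follows for minimisation objectives (a maximised objective such as reliability would be negated first). This is a generic illustration of the sorting step only, not the paper's constrained scheduling model:

```python
def non_dominated_sort(points):
    """Split objective vectors (all minimised) into successive Pareto
    fronts, as in the first phase of NSGA-II.  A point dominates another
    if it is no worse in every objective and strictly better in at least
    one (here: not equal)."""
    dominates = lambda a, b: all(x <= y for x, y in zip(a, b)) and a != b
    fronts, remaining = [], list(range(len(points)))
    while remaining:
        # the current front: points not dominated by any remaining point
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts
```

The decision-maker then picks from the first front, e.g. trading maintenance cost against (negated) reliability.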
Dousset, S; Thevenot, M; Pot, V; Simunek, J; Andreux, F
2007-12-07
In this study, displacement experiments of isoproturon were conducted in disturbed and undisturbed columns of a silty clay loam soil under similar rainfall intensities. Solute transport occurred under saturated conditions in the undisturbed soil and under unsaturated conditions in the sieved soil because of a greater bulk density of the compacted undisturbed soil compared to the sieved soil. The objective of this work was to determine transport characteristics of isoproturon relative to bromide tracer. Triplicate column experiments were performed with sieved (structure partially destroyed to simulate conventional tillage) and undisturbed (structure preserved) soils. Bromide experimental breakthrough curves were analyzed using convective-dispersive and dual-permeability (DP) models (HYDRUS-1D). Isoproturon breakthrough curves (BTCs) were analyzed using the DP model that considered either chemical equilibrium or non-equilibrium transport. The DP model described the bromide elution curves of the sieved soil columns well, whereas it overestimated the tailing of the bromide BTCs of the undisturbed soil columns. A higher degree of physical non-equilibrium was found in the undisturbed soil, where 56% of total water was contained in the slow-flow matrix, compared to 26% in the sieved soil. Isoproturon BTCs were best described in both sieved and undisturbed soil columns using the DP model combined with the chemical non-equilibrium. Higher degradation rates were obtained in the transport experiments than in batch studies, for both soils. This was likely caused by hysteresis in sorption of isoproturon. However, it cannot be ruled out that higher degradation rates were due, at least in part, to the adopted first-order model. Results showed that for similar rainfall intensity, physical and chemical non-equilibrium were greater in the saturated undisturbed soil than in the unsaturated sieved soil. 
Results also suggested faster transport of isoproturon in the undisturbed soil due to higher preferential flow and lower fraction of equilibrium sorption sites.
Achillas, Ch; Vlachokostas, Ch; Aidonis, D; Moussiopoulos, N; Iakovou, E; Banias, G
2010-12-01
Due to the rapid growth of Waste Electrical and Electronic Equipment (WEEE) volumes, as well as the hazardousness of obsolete electr(on)ic goods, this type of waste is now recognised as a priority stream in the developed countries. Policy-making related to the development of the necessary infrastructure and the coordination of all relevant stakeholders is crucial for the efficient management and viability of individually collected waste. This paper presents a decision support tool for policy-makers and regulators to optimise electr(on)ic products' reverse logistics network. To that effect, a Mixed Integer Linear Programming mathematical model is formulated taking into account existing infrastructure of collection points and recycling facilities. The applicability of the developed model is demonstrated employing a real-world case study for the Region of Central Macedonia, Greece. The paper concludes with presenting relevant obtained managerial insights. Copyright © 2010 Elsevier Ltd. All rights reserved.
Optimisation algorithms for ECG data compression.
Haugland, D; Heber, J G; Husøy, J H
1997-07-01
The use of exact optimisation algorithms for compressing digital electrocardiograms (ECGs) is demonstrated. As opposed to traditional time-domain methods, which use heuristics to select a small subset of representative signal samples, the problem of selecting the subset is formulated in rigorous mathematical terms. This approach makes it possible to derive algorithms guaranteeing the smallest possible reconstruction error when a bounded selection of signal samples is interpolated. The proposed model resembles well-known network models and is solved by a cubic dynamic programming algorithm. When applied to standard test problems, the algorithm produces a compressed representation for which the distortion is about one-half of that obtained by traditional time-domain compression techniques at reasonable compression ratios. This illustrates that, in terms of the accuracy of decoded signals, existing time-domain heuristics for ECG compression may be far from what is theoretically achievable. The paper is an attempt to bridge this gap.
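The exact sample-selection idea can be sketched as a dynamic programme: retain m of n samples (endpoints forced) so that the total squared error of the piecewise-linear reconstruction is minimal. This is a simplified illustrative sketch in the spirit of the approach, not the paper's network formulation:

```python
def best_subset(signal, m):
    """Exact selection of m sample indices (first and last forced) that
    minimise the total squared error of piecewise-linear interpolation.
    Segment errors are recomputed on the fly, so this simple version is
    cubic in n; precomputing them would reduce the cost."""
    n = len(signal)
    def seg_err(i, j):
        # squared error of linearly interpolating strictly between i and j
        err = 0.0
        for k in range(i + 1, j):
            interp = signal[i] + (signal[j] - signal[i]) * (k - i) / (j - i)
            err += (signal[k] - interp) ** 2
        return err
    INF = float("inf")
    # cost[p][j]: best error covering 0..j keeping p samples, j kept last
    cost = [[INF] * n for _ in range(m + 1)]
    back = [[0] * n for _ in range(m + 1)]
    cost[1][0] = 0.0
    for p in range(2, m + 1):
        for j in range(p - 1, n):
            for i in range(p - 2, j):
                c = cost[p - 1][i] + seg_err(i, j)
                if c < cost[p][j]:
                    cost[p][j], back[p][j] = c, i
    idx, j = [n - 1], n - 1          # backtrack from the final sample
    for p in range(m, 1, -1):
        j = back[p][j]
        idx.append(j)
    return idx[::-1], cost[m][n - 1]
```

On a signal that is exactly piecewise linear, the programme recovers the breakpoints with zero reconstruction error, the guarantee heuristics lack.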
NASA Astrophysics Data System (ADS)
Bhattacharyay, A.
2018-03-01
An alternative equilibrium stochastic dynamics for a Brownian particle in inhomogeneous space is derived. Such a dynamics can model the motion of a complex molecule in its conformation space when in equilibrium with a uniform heat bath. The derivation is done by a simple generalization of the formulation due to Zwanzig for a Brownian particle in a homogeneous heat bath. We show that, if the system couples to a different number of bath degrees of freedom at different conformations, then the alternative model is obtained. We discuss the results of an experiment by Faucheux and Libchaber, which may indicate a limitation of the Boltzmann distribution as the equilibrium distribution of a Brownian particle in inhomogeneous space, and propose experimental verification of the present theory using similar methods.
NASA Astrophysics Data System (ADS)
Pot, V.; Šimůnek, J.; Benoit, P.; Coquet, Y.; Yra, A.; Martínez-Cordón, M.-J.
2005-12-01
Two series of displacement experiments with isoproturon and metribuzin herbicides were performed on two undisturbed grassed filter strip soil cores under unsaturated steady-state flow conditions. Several rainfall intensities (0.070, 0.147, 0.161, 0.308 and 0.326 cm h-1) were used. A water tracer (bromide) was simultaneously injected in each displacement experiment. A descriptive analysis of the experimental breakthrough curves of bromide and herbicides, combined with a modeling analysis, showed an impact of rainfall intensity on solute transport. Two contrasting physical non-equilibrium transport processes occurred. Multiple (three) porosity domains contributed to flow at the highest rainfall intensities, including preferential flow through macropore pathways. Macropores were no longer active at the intermediate and lowest velocities, and the observed preferential transport was described using dual-porosity-type models with zero or low flow in the matrix domain. Chemical non-equilibrium transport of herbicides was found at all rainfall intensities. Significantly higher estimated values of the degradation rate parameters compared to batch data were correlated with the degree of non-equilibrium sorption. Experimental breakthrough curves were analyzed using different physical and chemical equilibrium and non-equilibrium transport models: the convective-dispersive model (CDE), the dual-porosity model (MIM), the dual-permeability model (DP), and the triple-porosity, dual-permeability model (DP-MIM), each combined with both chemical instantaneous and kinetic sorption.
Pre-equilibrium Longitudinal Flow in the IP-Glasma Framework for Pb+Pb Collisions at the LHC
NASA Astrophysics Data System (ADS)
McDonald, Scott; Shen, Chun; Fillion-Gourdeau, François; Jeon, Sangyong; Gale, Charles
2017-08-01
In this work, we debut a new implementation of IP-Glasma and quantify the pre-equilibrium longitudinal flow in the IP-Glasma framework. The saturation-physics-based IP-Glasma model naturally provides a non-zero initial longitudinal flow through its pre-equilibrium Yang-Mills evolution. A hybrid IP-Glasma+MUSIC+UrQMD framework is employed to test this new implementation against experimental data and to make further predictions about hadronic flow observables in Pb+Pb collisions at 5.02 TeV. Finally, the non-zero pre-equilibrium longitudinal flow of the IP-Glasma model is quantified, and its origin is briefly discussed.
Equilibrium Shapes of Large Trans-Neptunian Objects
NASA Astrophysics Data System (ADS)
Rambaux, Nicolas; Baguet, Daniel; Chambat, Frederic; Castillo-Rogez, Julie C.
2017-11-01
The large trans-Neptunian objects (TNO) with radii larger than 400 km are thought to be in hydrostatic equilibrium. Their shapes can provide clues regarding their internal structures that would reveal information on their formation and evolution. In this paper, we explore the equilibrium figures of five TNOs, and we show that the difference between the equilibrium figures of homogeneous and heterogeneous interior models can reach several kilometers for fast rotating and low density bodies. Such a difference could be measurable by ground-based techniques. This demonstrates the importance of developing the shape up to second and third order when modeling the shapes of large and rapid rotators.
An experiment on radioactive equilibrium and its modelling using the ‘radioactive dice’ approach
NASA Astrophysics Data System (ADS)
Santostasi, Davide; Malgieri, Massimiliano; Montagna, Paolo; Vitulo, Paolo
2017-07-01
In this article we describe an educational activity on radioactive equilibrium we performed with secondary school students (17-18 years old) in the context of a vocational guidance stage for talented students at the Department of Physics of the University of Pavia. Radioactive equilibrium is investigated experimentally by having students measure the activity of 214Bi from two different samples, obtained using different preparation procedures from an uraniferous rock. Students are guided in understanding the mathematical structure of radioactive equilibrium through a modelling activity in two parts. Before the lab measurements, a dice game, which extends the traditional ‘radioactive dice’ activity to the case of a chain of two decaying nuclides, is performed by students divided into small groups. At the end of the laboratory work, students design and run a simple spreadsheet simulation modelling the same basic radioactive chain with user defined decay constants. By setting the constants to realistic values corresponding to nuclides of the uranium decay chain, students can deepen their understanding of the meaning of the experimental data, and also explore the difference between cases of non-equilibrium, transient and secular equilibrium.
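The two-nuclide dice game described above has a simple closed-form equilibrium that students can check against their spreadsheet simulation: with per-round decay probabilities p_a and p_b (p_b > p_a), the expected B/A ratio settles at p_a / (p_b - p_a), the discrete analogue of transient equilibrium. A minimal expected-value sketch (the probabilities are illustrative choices, not the experiment's values):

```python
def dice_chain(a0=6000.0, p_a=1/6, p_b=1/2, rounds=60):
    """Expected-value version of the two-nuclide 'radioactive dice' chain:
    each round an A die decays to B with probability p_a, and a B die
    decays to stable C with probability p_b.  For p_b > p_a the ratio B/A
    settles at the transient-equilibrium value p_a / (p_b - p_a)."""
    a, b, ratios = a0, 0.0, []
    for _ in range(rounds):
        # tuple assignment: b's gain uses the pre-decay A population
        a, b = a * (1 - p_a), b * (1 - p_b) + a * p_a
        ratios.append(b / a)
    return ratios
```

Setting p_b much smaller than p_a instead reproduces the secular-equilibrium regime, where B's activity approaches A's.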
Mahboobi-Ardakan, Payman; Kazemian, Mahmood; Mehraban, Sattar
2017-01-01
Over successive planning periods, the human resources factor has increased considerably in the health-care sector. The main goals are to determine the economic planning conditions and equilibrium growth for service levels and specialized workforce resources in the health-care sector, and to determine the gap between the levels of health-care services and specialized workforce resources under equilibrium growth conditions and their actual levels during the first to fourth development plans in Iran. After data collection, econometric methods and EViews version 8.0 were used for data processing. The model was based on the neoclassical economic growth model. The results indicate that although the specialized workforce increased significantly in the health-care sector during the former planning periods, lack of attention to equilibrium growth conditions led to imbalances between output levels and the specialized workforce in the sector. In the past development plans for health services, equilibrium conditions based on full employment of the capital stock and specialized labor were not considered. The government could act, by choosing policies determined by the growth model, to achieve equilibrium levels of human resources and services during the next planning periods.
Stability of differential susceptibility and infectivity epidemic models
Bonzi, B.; Fall, A. A.; Iggidr, Abderrahman; Sallet, Gauthier
2011-01-01
We introduce classes of differential susceptibility and infectivity epidemic models. These models address the problem of flows between the different susceptible, infectious and infected compartments, as well as differential death rates. We prove the global stability of the disease-free equilibrium when the basic reproduction ratio R0 ≤ 1, and the existence and uniqueness of an endemic equilibrium when R0 > 1. We also prove the global asymptotic stability of the endemic equilibrium for a differential susceptibility and staged progression infectivity model when R0 > 1. Our results encompass and generalize those of [18, 22]. AMS Subject Classification: 34A34, 34D23, 34D40, 92D30. PMID: 20148330
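The R0 threshold proved above can be illustrated numerically for the simplest two-class differential-susceptibility case. This is a sketch with hypothetical parameter values, far less general than the models the paper actually covers:

```python
def r0(betas, p=(0.5, 0.5), lam=1.0, mu=0.1, gamma=0.9):
    """Basic reproduction ratio at the disease-free equilibrium S_i0 = lam*p_i/mu."""
    return sum(b * lam * pi / mu for b, pi in zip(betas, p)) / (mu + gamma)

def final_infectious(betas, p=(0.5, 0.5), lam=1.0, mu=0.1, gamma=0.9,
                     i0=0.1, dt=0.01, t_end=500.0):
    """Euler integration of a two-class differential-susceptibility model:
    dS_i/dt = lam*p_i - mu*S_i - beta_i*S_i*I
    dI/dt   = sum_i beta_i*S_i*I - (mu + gamma)*I
    Returns the infectious level I at t_end."""
    s = [lam * pi / mu for pi in p]     # start at the disease-free equilibrium
    i = i0
    for _ in range(int(t_end / dt)):
        force = sum(b * si for b, si in zip(betas, s)) * i
        s = [si + dt * (lam * pi - mu * si - b * si * i)
             for si, pi, b in zip(s, p, betas)]
        i += dt * (force - (mu + gamma) * i)
    return i

i_sub = final_infectious((0.03, 0.09))    # R0 = 0.6: infection dies out
i_super = final_infectious((0.10, 0.30))  # R0 = 2.0: settles at an endemic level
```

The simulation only hints at the behaviour; the paper's results are Lyapunov-based proofs of global stability.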
Effects of crowders on the equilibrium and kinetic properties of protein aggregation
NASA Astrophysics Data System (ADS)
Bridstrup, John; Yuan, Jian-Min
2016-08-01
The equilibrium and kinetic properties of protein aggregation systems in the presence of crowders are investigated using simple, illuminating models based on mass-action laws. Our model yields analytic results for equilibrium properties of protein aggregates, which fit experimental data of actin and ApoC-II with crowders reasonably well. When the effects of crowders on rate constants are considered, our kinetic model is in good agreement with experimental results for actin with dextran as the crowder. Furthermore, the model shows that as crowder volume fraction increases, the length distribution of fibrils becomes narrower and shifts to shorter values due to volume exclusion.
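The equilibrium side of such mass-action models can be made concrete with the simplest (isodesmic) case, in which every monomer-addition step shares one equilibrium constant K. This is a generic sketch, not the paper's specific crowder-dependent model:

```python
def isodesmic_equilibrium(K, c_tot, tol=1e-12):
    """Solve the isodesmic (equal-K) aggregation equilibrium for the free
    monomer concentration c1 from the mass balance
        c_tot = sum_n n * K**(n-1) * c1**n = x / (K * (1 - x)**2),  x = K*c1 < 1,
    by bisection on x. Returns (c1, number-average aggregate length)."""
    lo, hi = 0.0, 1.0 - 1e-12
    while hi - lo > tol:
        x = 0.5 * (lo + hi)
        if x / (K * (1.0 - x) ** 2) < c_tot:
            lo = x
        else:
            hi = x
    x = 0.5 * (lo + hi)
    return x / K, 1.0 / (1.0 - x)

# Illustrative values: K = 100 (in units of 1/concentration), total concentration 1.
c1, mean_len = isodesmic_equilibrium(K=100.0, c_tot=1.0)
```

The aggregate-length distribution here is geometric in x = K*c1, so the mean length follows directly from the solved monomer concentration; crowder effects would enter through an activity-modified K.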
Jentzer, Jean-Baptiste; Alignan, Marion; Vaca-Garcia, Carlos; Rigal, Luc; Vilarem, Gérard
2015-01-01
Following the approval of steviol glycosides as a food additive in Europe in December 2011, large-scale stevia cultivation will have to be developed within the EU. Thus there is a need to increase the efficiency of stevia evaluation through germplasm enhancement and agronomic improvement programs. To address the need for faster and reproducible sample throughput, conditions for automated extraction of dried stevia leaves using Accelerated Solvent Extraction were optimised. A response surface methodology was used to investigate the influence of three factors: extraction temperature, static time and cycle number on the stevioside and rebaudioside A extraction yields. The model showed that all the factors had an individual influence on the yield. Optimum extraction conditions were set at 100 °C, 4 min and 1 cycle, which yielded 91.8% ± 3.4% of total extractable steviol glycosides analysed. An additional optimisation was achieved by reducing the grind size of the leaves giving a final yield of 100.8% ± 3.3%. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Leyland, Jane Anne
2001-01-01
Given the predicted growth in air transportation, the potential exists for significant market niches for rotary wing subsonic vehicles. Technological advances which optimise rotorcraft aeromechanical behaviour can contribute significantly to their commercial and military development, acceptance, and sales. Examples of the optimisation of rotorcraft aeromechanical behaviour which are of interest include the minimisation of vibration and/or loads. The reduction of rotorcraft vibration and loads is an important means to extend the useful life of the vehicle and to improve its ride quality. Although vibration reduction can be accomplished by using passive dampers and/or tuned masses, active closed-loop control has the potential to reduce vibration and loads throughout a wider flight regime whilst adding less weight to the aircraft than that required by passive methods. It is emphasised that the analysis described herein is applicable to all rotorcraft aeromechanical behaviour optimisation problems for which the relationship between the harmonic control vector and the measurement vector can be adequately described by a neural-network model.
Variability estimation of urban wastewater biodegradable fractions by respirometry.
Lagarde, Fabienne; Tusseau-Vuillemin, Marie-Hélène; Lessard, Paul; Héduit, Alain; Dutrop, François; Mouchel, Jean-Marie
2005-11-01
This paper presents a methodology for assessing the variability of biodegradable chemical oxygen demand (COD) fractions in urban wastewaters. Thirteen raw wastewater samples from combined and separate sewers feeding the same plant were characterised, and two optimisation procedures were applied in order to evaluate the variability in biodegradable fractions and related kinetic parameters. Through an overall optimisation on all the samples, a unique kinetic parameter set was obtained with a three-substrate model including an adsorption stage. This method required powerful numerical treatment, but mitigated the identifiability problem encountered in the usual sample-to-sample optimisation. The results showed that the fractionation of samples collected in the combined sewer was much more variable (standard deviation of 70% of the mean values) than that of the separate sewer samples, and the slowly biodegradable COD fraction was the most significant fraction (45% of the total COD on average). Because these samples were collected under various rain conditions, the standard deviations obtained here for the combined sewer biodegradable fractions could be used as a first estimate of the variability of this type of sewer system.
Guillaume, Y C; Peyrin, E
2000-03-06
A chemometric methodology is proposed to study the separation of seven p-hydroxybenzoic esters in reversed phase liquid chromatography (RPLC). Fifteen experiments were found to be necessary to find a mathematical model which linked a novel chromatographic response function (CRF) with the column temperature, the water fraction in the mobile phase and its flow rate. The CRF optimum was determined using a new algorithm based on Glover's taboo search (TS). A flow rate of 0.9 ml min⁻¹ with a water fraction of 0.64 in the ACN-water mixture and a column temperature of 10 °C gave the most efficient separation conditions. The usefulness of TS was compared with the pure random search (PRS) and simplex search (SS). As demonstrated by calculations, the algorithm avoids entrapment in local minima and continues the search to give a near-optimal final solution. Unlike other methods of global optimisation, this procedure is generally applicable, easy to implement, derivative free, conceptually simple and could be used in the future for much more complex optimisation problems.
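The core idea of taboo (tabu) search is easy to sketch: always take the best non-tabu move, even uphill, so the search can climb out of local minima. The following is a deliberately simple 1-D illustration on a hypothetical multimodal objective, not the authors' CRF implementation:

```python
import math

def tabu_search(f, x0, step=0.05, n_iter=120, tabu_len=20):
    """Minimise f on a 1-D grid using a short-term tabu memory.

    Both neighbours (x +/- step) are scored each iteration; the best
    non-tabu move is accepted even if it worsens f, which lets the walk
    escape local minima, while the best point ever seen is kept."""
    x, best_x, best_f = x0, x0, f(x0)
    tabu = [x0]
    for _ in range(n_iter):
        cands = [round(x + step, 10), round(x - step, 10)]
        free = [c for c in cands if c not in tabu] or cands
        x = min(free, key=f)
        tabu.append(x)
        if len(tabu) > tabu_len:
            tabu.pop(0)           # oldest tabu entry expires
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    return best_x, best_f

# Illustrative multimodal objective; its grid minimum near x = -0.2 lies
# across a barrier from the starting basin at x0 = 0.6.
objective = lambda x: x * x + math.sin(8 * x)
best_x, best_f = tabu_search(objective, x0=0.6)
```

A two-point neighbourhood keeps the sketch readable but makes the walk drift; practical implementations use richer neighbourhoods and aspiration criteria.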
Comparison of two gas chromatograph models and analysis of binary data
NASA Technical Reports Server (NTRS)
Keba, P. S.; Woodrow, P. T.
1972-01-01
The overall objective of the gas chromatograph system studies is to generate fundamental design criteria and techniques to be used in the optimum design of the system. The particular tasks currently being undertaken are the comparison of two mathematical models of the chromatograph and the analysis of binary system data. The predictions of the two mathematical models, an equilibrium absorption model and a non-equilibrium absorption model, exhibit the same weakness: an inability to predict chromatogram spreading for certain systems. The analysis of binary data using the equilibrium absorption model confirms that, for the systems considered, superposition of predicted single-component behaviors is a first-order representation of actual binary data. Composition effects produce non-idealities which limit the rigorous validity of superposition.
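The superposition of single-component behaviours can be sketched with idealised Gaussian elution peaks; the retention times, widths and areas below are illustrative assumptions, not values from the study:

```python
import math

def gaussian_peak(t, t_r, sigma, area):
    """Idealised single-component elution peak (equilibrium-absorption limit)."""
    return area / (sigma * math.sqrt(2 * math.pi)) * math.exp(-0.5 * ((t - t_r) / sigma) ** 2)

def binary_chromatogram(t, components):
    """First-order prediction for a binary mixture: superpose the
    independently characterised single-component peaks."""
    return sum(gaussian_peak(t, *c) for c in components)

# (retention time, peak spread, area) for each component - illustrative values:
components = [(5.0, 0.4, 1.0), (8.0, 0.6, 1.0)]
signal = [binary_chromatogram(0.1 * k, components) for k in range(121)]
```

Composition effects would appear as deviations between such a superposed signal and the measured binary chromatogram.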
Simulation and Optimization of an Astrophotonic Reformatter
NASA Astrophysics Data System (ADS)
Anagnos, Th; Harris, R. J.; Corrigan, M. K.; Reeves, A. P.; Townson, M. J.; MacLachlan, D. G.; Thomson, R. R.; Morris, T. J.; Schwab, C.; Quirrenbach, A.
2018-05-01
Image slicing is a powerful technique in astronomy. It allows the instrument designer to reduce the slit width of the spectrograph, increasing spectral resolving power whilst retaining throughput. Conventionally this is done using bulk optics, such as mirrors and prisms; more recently, however, astrophotonic components known as photonic lanterns (PLs) and photonic reformatters have also been used. These devices reformat the multimode (MM) input light from a telescope into single-mode (SM) outputs, which can then be re-arranged to suit the spectrograph. The photonic dicer (PD) is one such device, designed to reduce the dependence of spectrograph size on telescope aperture and eliminate modal noise. We simulate the PD, optimising the throughput and geometrical design using Soapy and BeamProp. The simulated device shows a transmission between 8% and 20%, depending upon the type of AO correction applied, matching the experimental results well. We also investigate our idealised model of the PD and show that the barycentre of the slit varies only slightly with time, meaning that the modal noise contribution is very low when compared to conventional fibre systems. We further optimise our model device for both higher throughput and reduced modal noise. This device improves throughput by 6.4% and reduces the movement of the slit output by 50%, further improving stability. This shows the importance of properly simulating such devices, including atmospheric effects. Our work complements recent work in the field and is essential for optimising future photonic reformatters.
3D equilibrium reconstruction with islands
Cianciosa, M.; Hirshman, S. P.; Seal, S. K.; ...
2018-02-15
This study presents the development of a 3D equilibrium reconstruction tool and the results of the first-ever reconstruction of an island equilibrium. The SIESTA non-nested equilibrium solver has been coupled to the V3FIT 3D equilibrium reconstruction code. Computed from a coupled VMEC and SIESTA model, synthetic signals are matched to measured signals by finding an optimal set of equilibrium parameters. By using the normalized pressure in place of normalized flux, non-equilibrium quantities needed by diagnostic signals can be efficiently mapped to the equilibrium. The effectiveness of this tool is demonstrated by reconstructing an island equilibrium of a DIII-D inner wall limited L-mode case with an n = 1 error field applied. Finally, flat spots in Thomson and ECE temperature diagnostics show the reconstructed islands have the correct size and phase.
Sorption of biodegradation end products of nonylphenol polyethoxylates onto activated sludge.
Hung, Nguyen Viet; Tateda, Masafumi; Ike, Michihiko; Fujita, Masanori; Tsunoi, Shinji; Tanaka, Minoru
2004-01-01
Nonylphenol (NP), nonylphenoxy acetic acid (NP1EC), nonylphenol monoethoxy acetic acid (NP2EC), nonylphenol monoethoxylate (NP1EO) and nonylphenol diethoxylate (NP2EO) are biodegradation end products (BEPs) of the nonionic surfactant nonylphenol polyethoxylates (NPnEO). In this research, the sorption of these compounds onto model activated sludge was characterized. Sorption equilibrium experiments showed that NP, NP1EO and NP2EO reached equilibrium in about 12 h, while NP1EC and NP2EC reached equilibrium earlier, in about 4 h. In sorption isotherm experiments, the equilibrium data obtained at 28 °C fitted the Freundlich sorption model well for all investigated compounds. For NP1EC, in addition to Freundlich, the equilibrium data also fitted the Langmuir model well. A linear sorption model was also tried; the equilibrium data of NP, NP1EO, NP2EO and NP2EC, but not NP1EC, fitted this model well. The calculated Freundlich coefficients (K(F)) and linear sorption coefficients (K(D)) showed that the sorption capacities of the investigated compounds were in the order NP > NP2EO > NP1EO > NP1EC ≈ NP2EC. For NP, NP1EO and NP2EO, the high values of K(F) and K(D) indicated an easy uptake of these compounds from the aqueous phase onto activated sludge. In contrast, NP1EC and NP2EC, with low values of K(F) and K(D), sorbed weakly to activated sludge and tended to remain in the aqueous phase.
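Fitting a Freundlich isotherm q = K_F * C^(1/n) is commonly done through its log-linearised form. A minimal sketch on synthetic, noiseless data; the K_F and 1/n values are illustrative, not the paper's fitted coefficients:

```python
import math

def fit_freundlich(C, q):
    """Least-squares fit of the Freundlich isotherm q = K_F * C**(1/n)
    via the log-linear form log q = log K_F + (1/n) log C.
    Returns (K_F, 1/n)."""
    xs = [math.log(c) for c in C]
    ys = [math.log(v) for v in q]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return math.exp(intercept), slope

# Synthetic equilibrium data generated from K_F = 2.0, 1/n = 0.7 (illustrative):
C = [0.1, 0.5, 1.0, 2.0, 5.0]
q = [2.0 * c ** 0.7 for c in C]
KF, inv_n = fit_freundlich(C, q)
```

The same regression machinery, applied to log-free axes, gives the linear coefficient K_D; the Langmuir fit requires a nonlinear (or reciprocal-linearised) variant.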
Mume, Eskender; Lynch, Daniel E; Uedono, Akira; Smith, Suzanne V
2011-06-21
Understanding how the size, charge and number of available pores in a porous material influence its uptake and release properties is important for optimising the design of such materials and, ultimately, their application. Unfortunately, there are no standard methods for screening porous materials in solution, and therefore formulations must be developed for each encapsulated agent. This study investigates the potential of a library of radiotracers (nuclear sensors) for assessing the binding properties of hollow silica shell materials. Uptake and release of Cu(2+) and Co(2+), and of their respective complexes with polyazacarboxylate macrocycles (dota and teta) and a series of hexaaza cages (diamsar, sarar and bis-(p-aminobenzyl)-diamsar), from the hollow silica shells was monitored using their radioisotopic analogues. The coordination chemistry of the metal (M) species and subtle alterations in the molecular architecture of the ligands (Ligand) and their resultant complexes (M-Ligand) were found to significantly influence uptake over pH 3 to 9 at room temperature. Positively charged species were selectively and rapidly (within 10 min) absorbed at pH 7 to 9. Negatively charged species were preferentially absorbed at low pH (3 to 5). Rates of release varied for each nuclear sensor, and the time to establish equilibrium varied from minutes to days. The subtle changes in the design of the nuclear sensors proved to be a valuable tool for determining the binding properties of porous materials. The data support the development of a library of nuclear sensors for screening porous materials, for use in optimising their design, and demonstrate the potential of nuclear sensors for high-throughput screening of materials.
NASA Astrophysics Data System (ADS)
Yuval; Bekhor, Shlomo; Broday, David M.
2013-11-01
Spatially detailed estimation of exposure to air pollutants in the urban environment is needed for many air pollution epidemiological studies. To benefit studies of acute effects of air pollution, such exposure maps are required at high temporal resolution. This study introduces a nonlinear optimisation framework that produces high resolution spatiotemporal exposure maps. An extensive traffic model output, serving as a proxy for traffic emissions, is fitted via a nonlinear model embodying basic dispersion properties to high temporal resolution routine observations of a traffic-related air pollutant. An optimisation problem is formulated and solved at each time point to recover the unknown model parameters. These parameters are then used to produce a detailed concentration map of the pollutant for the whole area covered by the traffic model. Repeating the process for multiple time points yields the spatiotemporal concentration field. The exposure at any location and for any span of time can then be computed by temporal integration of the concentration time series at selected receptor locations over the desired periods. The methodology is demonstrated for NO2 exposure using the output of a traffic model for the greater Tel Aviv area, Israel, and the half-hourly monitoring and meteorological data from the local air quality network. A leave-one-out cross-validation resulted in simulated half-hourly concentrations that are almost unbiased compared to the observations, with a mean error (ME) of 5.2 ppb, a normalised mean error (NME) of 32%, 78% of the simulated values within a factor of two (FAC2) of the observations, and a coefficient of determination (R2) of 0.6. The whole-study-period integrated exposure estimates are also unbiased compared with their corresponding observations, with an ME of 2.5 ppb, an NME of 18%, a FAC2 of 100% and an R2 of 0.62.
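The validation statistics quoted above can be computed in a few lines. Note that the definitions below (ME as mean absolute error, NME normalised by the mean observation) are common conventions and may differ in detail from those used in the study; the obs/sim values are hypothetical:

```python
def validation_stats(obs, sim):
    """Model validation metrics: mean error (ME, here mean absolute error),
    normalised mean error (NME), fraction of pairs within a factor of two
    (FAC2), and coefficient of determination (R2)."""
    n = len(obs)
    mean_o = sum(obs) / n
    me = sum(abs(s - o) for o, s in zip(obs, sim)) / n
    nme = me / mean_o
    fac2 = sum(1 for o, s in zip(obs, sim) if 0.5 <= s / o <= 2.0) / n
    ss_res = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_o) ** 2 for o in obs)
    return me, nme, fac2, 1.0 - ss_res / ss_tot

obs = [10.0, 20.0, 30.0, 40.0]   # hypothetical half-hourly observations (ppb)
sim = [12.0, 18.0, 33.0, 38.0]   # corresponding model simulations
me, nme, fac2, r2 = validation_stats(obs, sim)
```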
Dangerous nutrients: evolution of phytoplankton resource uptake subject to virus attack.
Menge, Duncan N L; Weitz, Joshua S
2009-03-07
Phytoplankton need multiple resources to grow and reproduce (such as nitrogen, phosphorus, and iron), but the receptors through which they acquire resources are, in many cases, the same channels through which viruses attack. Therefore, phytoplankton can face a bottom-up vs. top-down tradeoff in receptor allocation: Optimize resource uptake or minimize virus attack? We investigate this top-down vs. bottom-up tradeoff using an evolutionary ecology model of multiple essential resources, specialist viruses that attack through the resource receptors, and a phytoplankton population that can evolve to alter the fraction of receptors used for each resource/virus type. Without viruses present the singular continuously stable strategy is to allocate receptors such that resources are co-limiting, which also minimizes the equilibrium concentrations of both resources. Only one virus type can be present at equilibrium (because phytoplankton, in this model, are a single resource for viruses), and when a virus type is present, it controls the equilibrium phytoplankton population size. Despite this top-down control on equilibrium densities, bottom-up control determines the evolutionary outcome. Regardless of which virus type is present, the allocation strategy that yields co-limitation between the two resources is continuously stable. This is true even when the virus type attacking through the limiting resource channel is present, even though selection for co-limitation in this case decreases the equilibrium phytoplankton population and does not decrease the equilibrium concentration of the limiting resource. Therefore, although moving toward co-limitation and decreasing the equilibrium concentration of the limiting resource often co-occur in models, it is co-limitation, and not necessarily the lowest equilibrium concentration of the limiting resource, that is the result of selection. 
This result adds to the growing body of literature suggesting that co-limitation at equilibrium is a winning strategy.
Multi-period equilibrium/near-equilibrium in electricity markets based on locational marginal prices
NASA Astrophysics Data System (ADS)
Garcia Bertrand, Raquel
In this dissertation we propose an equilibrium procedure that coordinates the points of view of every market agent, resulting in an equilibrium that simultaneously maximizes the independent objective of every market agent and satisfies network constraints. Therefore, the activities of the generating companies, consumers and an independent system operator are modeled: (1) The generating companies seek to maximize profits by specifying hourly step functions of productions and minimum selling prices, and bounds on productions. (2) The goals of the consumers are to maximize their economic utilities by specifying hourly step functions of demands and maximum buying prices, and bounds on demands. (3) The independent system operator then clears the market taking into account consistency conditions as well as capacity limits and line losses so as to achieve maximum social welfare. Then, we approach this equilibrium problem using complementarity theory in order to have the capability of imposing constraints on dual variables, i.e., on prices, such as minimum profit conditions for the generating units or maximum cost conditions for the consumers. In this way, given the form of the individual optimization problems, the Karush-Kuhn-Tucker conditions for the generating companies, the consumers and the independent system operator are both necessary and sufficient. The simultaneous solution to all these conditions constitutes a mixed linear complementarity problem. We include minimum profit constraints imposed by the units in the market equilibrium model. These constraints are added to the equivalent quadratic programming problem of the mixed linear complementarity problem previously described. For the sake of clarity, the proposed equilibrium or near-equilibrium is first developed for the particular case of a single time period. Afterwards, we consider an equilibrium or near-equilibrium applied to a multi-period framework.
This model embodies binary decisions, i.e., on/off status for the units, and therefore optimality conditions cannot be directly applied. To avoid the limitations imposed by binary variables, while retaining the advantages of using optimality conditions, we define the multi-period market equilibrium using Benders decomposition, which computes the binary variables through the master problem and the continuous variables through the subproblem. Finally, we illustrate these market equilibrium concepts through several case studies.
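For intuition, the single-period, lossless, network-unconstrained special case of the operator's market clearing reduces to intersecting stepwise supply and demand curves. A toy sketch with hypothetical bid data; with a network and losses, the full problem yields genuinely locational prices, and the midpoint pricing rule below is just one convention:

```python
def clear_market(supply_bids, demand_bids):
    """Clear a single-period market with stepwise (price, quantity) bids.

    Supply offers are taken in increasing price order, demand bids in
    decreasing price order; blocks trade while the demand bid price is at
    least the supply offer price. Returns (traded quantity, clearing price)."""
    sup = sorted(supply_bids)                 # ascending offer price
    dem = sorted(demand_bids, reverse=True)   # descending bid price
    qty, price = 0.0, None
    si = di = 0
    s_left = sup[0][1] if sup else 0.0
    d_left = dem[0][1] if dem else 0.0
    while si < len(sup) and di < len(dem) and dem[di][0] >= sup[si][0]:
        traded = min(s_left, d_left)
        qty += traded
        price = 0.5 * (sup[si][0] + dem[di][0])  # midpoint of marginal offer/bid
        s_left -= traded
        d_left -= traded
        if s_left == 0.0:
            si += 1
            s_left = sup[si][1] if si < len(sup) else 0.0
        if d_left == 0.0:
            di += 1
            d_left = dem[di][1] if di < len(dem) else 0.0
    return qty, price

supply = [(10.0, 50.0), (20.0, 50.0), (40.0, 50.0)]   # (price, MW) offers - hypothetical
demand = [(50.0, 60.0), (30.0, 40.0), (15.0, 20.0)]   # (price, MW) bids - hypothetical
qty, price = clear_market(supply, demand)
```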
Testing and Implementation of Advanced Reynolds Stress Models
NASA Technical Reports Server (NTRS)
Speziale, Charles G.
1997-01-01
A research program was proposed for the testing and implementation of advanced turbulence models for non-equilibrium turbulent flows of aerodynamic importance that are of interest to NASA. Turbulence models being developed in connection with the Office of Naval Research (ONR) ARI on non-equilibrium turbulence were provided for implementation and testing in aerodynamic flows at NASA Langley Research Center. Close interactions were established with researchers at NASA Langley Research Center, and refinements to the models were made based on the results of these tests. The models that have been considered include two-equation models with an anisotropic eddy viscosity as well as full second-order closures. Three types of non-equilibrium corrections to the models have been considered in connection with the ARI on non-equilibrium turbulence conducted for ONR.
Integration of environmental aspects in modelling and optimisation of water supply chains.
Koleva, Mariya N; Calderón, Andrés J; Zhang, Di; Styan, Craig A; Papageorgiou, Lazaros G
2018-04-26
Climate change is becoming increasingly relevant in the context of water systems planning. Tools are necessary to identify the most economic investment option while considering the reliability of the infrastructure from technical and environmental perspectives. Accordingly, in this work, an optimisation approach, formulated as a spatially-explicit multi-period Mixed Integer Linear Programming (MILP) model, is proposed for the design of water supply chains at regional and national scales. The optimisation framework encompasses decisions such as installation of new purification plants, capacity expansion, and raw water trading schemes. The objective is to minimise the total cost incurred from capital and operating expenditures. Assessment of available resources for withdrawal is performed based on hydrological balances, governmental rules and sustainable limits. In the light of the increasing importance of reliability of water supply, a second objective, seeking to maximise the reliability of the supply chains, is introduced. The epsilon-constraint method is used as a solution procedure for the multi-objective formulation. A Nash bargaining approach is applied to investigate fair trade-offs between the two objectives and to find Pareto-optimal solutions. The model's capability is demonstrated through a case study based on Australia. The impact of variability in key input parameters is addressed through a rigorous global sensitivity analysis (GSA). The findings suggest that variations in water demand can be more disruptive for the water supply chain than scenarios in which rainfall is reduced. The framework can facilitate governmental multi-aspect decision-making processes for adequate and strategic investment in regional water supply infrastructure. Copyright © 2018. Published by Elsevier B.V.
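The epsilon-constraint method can be sketched on a discrete set of candidate designs: minimise cost subject to a reliability floor, then sweep the floor to trace the cost/reliability Pareto front. The (cost, reliability) pairs below are hypothetical, and a real application would solve a MILP at each epsilon rather than enumerate:

```python
def epsilon_constraint(designs, epsilons):
    """Epsilon-constraint sweep over a discrete design set: for each
    epsilon, minimise cost subject to reliability >= epsilon; collecting
    the minimisers traces points on the Pareto front."""
    front = []
    for eps in epsilons:
        feasible = [d for d in designs if d[1] >= eps]
        if feasible:
            best = min(feasible, key=lambda d: d[0])
            if best not in front:
                front.append(best)
    return front

# Hypothetical (cost, reliability) pairs for candidate supply-chain designs:
designs = [(100, 0.80), (120, 0.90), (180, 0.95), (130, 0.85), (250, 0.99), (160, 0.90)]
pareto = epsilon_constraint(designs, [0.80, 0.85, 0.90, 0.95, 0.99])
```

Dominated designs, such as (130, 0.85) here, never enter the front; the Nash bargaining step would then pick one point from it.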
Bifurcation analysis in SIR epidemic model with treatment
NASA Astrophysics Data System (ADS)
Balamuralitharan, S.; Radha, M.
2018-04-01
We investigate the bifurcation behaviour of a nonlinear SIR epidemic model with treatment. The treatment is assumed to be proportional to the number of infectives when that number is below a threshold, and constant once the number of infectives reaches the threshold. We analyse the transcritical bifurcation occurring at the disease-free equilibrium point and the Hopf bifurcation occurring at the endemic equilibrium point. Using MATLAB, we illustrate the bifurcation at the disease-free equilibrium point.
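The saturated treatment function is easy to explore numerically; a minimal sketch with hypothetical parameter values, not taken from the paper, comparing the peak infectious fraction with and without treatment:

```python
def simulate_sir(beta=0.3, gamma=0.1, r=0.1, i_thresh=0.05,
                 s0=0.99, i0=0.01, dt=0.01, t_end=300.0, treat=True):
    """Euler integration of an SIR model with saturated treatment:
    T(I) = r*I for I below the threshold, T(I) = r*i_thresh above it.
    Returns the peak infectious fraction."""
    s, i, peak = s0, i0, i0
    for _ in range(int(t_end / dt)):
        t_rate = r * min(i, i_thresh) if treat else 0.0
        new_inf = beta * s * i
        s += dt * (-new_inf)
        i += dt * (new_inf - gamma * i - t_rate)
        i = max(i, 0.0)
        peak = max(peak, i)
    return peak

peak_untreated = simulate_sir(treat=False)
peak_treated = simulate_sir(treat=True)
```

The kink in T(I) at the threshold is exactly what makes the equilibrium analysis piecewise and the bifurcation structure richer than in the standard SIR model.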
Thermodynamic evolution far from equilibrium
NASA Astrophysics Data System (ADS)
Khantuleva, Tatiana A.
2018-05-01
The presented model of thermodynamic evolution of an open system far from equilibrium is based on modern results of nonequilibrium statistical mechanics, the nonlocal theory of nonequilibrium transport developed by the author, and the Speed Gradient principle introduced in the theory of adaptive control. The transition to a description of the evolution of the system's internal structure at the mesoscopic level allows new insight into the stability problem of non-equilibrium processes. The new model is applied to a number of specific tasks.
Equilibrium Droplets on Deformable Substrates: Equilibrium Conditions.
Koursari, Nektaria; Ahmed, Gulraiz; Starov, Victor M
2018-05-15
Equilibrium conditions of droplets on deformable substrates are investigated, and it is proven using Jacobi's sufficient condition that the obtained solutions really provide equilibrium profiles of both the droplet and the deformed support. At equilibrium, the excess free energy of the system should have a minimum value, which means that both the necessary and the sufficient conditions for a minimum should be fulfilled; only in this case do the obtained profiles provide the minimum of the excess free energy. The necessary condition of equilibrium means that the first variation of the excess free energy should vanish and the second variation should be positive. However, these two conditions alone do not prove that the obtained profiles correspond to the minimum of the excess free energy; it is also necessary to check whether the sufficient condition of equilibrium (Jacobi's condition) is satisfied. To the best of our knowledge, Jacobi's condition has never been verified for any previously published equilibrium profiles of droplets on deformable substrates. A simple model of an equilibrium droplet on a deformable substrate is considered, and it is shown that the deduced profiles of the droplet and the deformable substrate satisfy Jacobi's condition, that is, they really provide the minimum of the excess free energy of the system. To simplify the calculations, a simplified linear disjoining/conjoining pressure isotherm is adopted. It is shown that both the necessary and sufficient conditions for equilibrium are satisfied; the validity of Jacobi's condition is verified for the first time. The latter proves that the developed model really provides (i) the minimum of the excess free energy of the droplet/deformable substrate system and (ii) equilibrium profiles of both the droplet and the deformable substrate.
Non-equilibrium dynamics from RPMD and CMD.
Welsch, Ralph; Song, Kai; Shi, Qiang; Althorpe, Stuart C; Miller, Thomas F
2016-11-28
We investigate the calculation of approximate non-equilibrium quantum time correlation functions (TCFs) using two popular path-integral-based molecular dynamics methods, ring-polymer molecular dynamics (RPMD) and centroid molecular dynamics (CMD). It is shown that for the cases of a sudden vertical excitation and an initial momentum impulse, both RPMD and CMD yield non-equilibrium TCFs for linear operators that are exact for high temperatures, in the t = 0 limit, and for harmonic potentials; the subset of these conditions that are preserved for non-equilibrium TCFs of non-linear operators is also discussed. Furthermore, it is shown that for these non-equilibrium initial conditions, both methods retain the connection to Matsubara dynamics that has previously been established for equilibrium initial conditions. Comparison of non-equilibrium TCFs from RPMD and CMD to Matsubara dynamics at short times reveals the orders in time to which the methods agree. Specifically, for the position-autocorrelation function associated with sudden vertical excitation, RPMD and CMD agree with Matsubara dynamics up to O(t^4) and O(t^1), respectively; for the position-autocorrelation function associated with an initial momentum impulse, RPMD and CMD agree with Matsubara dynamics up to O(t^5) and O(t^2), respectively. Numerical tests using model potentials for a wide range of non-equilibrium initial conditions show that RPMD and CMD yield non-equilibrium TCFs with an accuracy that is comparable to that for equilibrium TCFs. RPMD is also used to investigate excited-state proton transfer in a system-bath model, and it is compared to numerically exact calculations performed using a recently developed version of the Liouville space hierarchical equation of motion approach; again, similar accuracy is observed for non-equilibrium and equilibrium initial conditions.
Exploring the use of multiple analogical models when teaching and learning chemical equilibrium
NASA Astrophysics Data System (ADS)
Harrison, Allan G.; de Jong, Onno
2005-12-01
This study describes the multiple analogical models used to introduce and teach Grade 12 chemical equilibrium. We examine the teacher's reasons for using models, explain each model's development during the lessons, and analyze the understandings students derived from the models. A case study approach was used and the data were drawn from the observation of three consecutive Grade 12 lessons on chemical equilibrium, pre- and post-lesson interviews, and delayed student interviews. The key analogical models used in teaching were: the school dance; the sugar in a teacup; the pot of curry; and the busy highway. The lesson and interview data were subject to multiple, independent analyses and yielded the following outcomes: The teacher planned to use the students' prior knowledge wherever possible and he responded to student questions with stories and extended and enriched analogies. He planned to discuss where each analogy broke down but did not. The students enjoyed the teaching but built variable mental models of equilibrium and some of their analogical mappings were unreliable. A female student disliked masculine analogies, other students tended to see elements of the multiple models in isolation, and some did not recognize all the analogical mappings embedded in the teaching plan. Most students learned that equilibrium reactions are dynamic, occur in closed systems, and the forward and reverse reactions are balanced. We recommend the use of multiple analogies like these and insist that teachers always show where the analogy breaks down and carefully negotiate the conceptual outcomes.
Optimisation of environmental remediation: how to select and use the reference levels.
Balonov, M; Chipiga, L; Kiselev, S; Sneve, M; Yankovich, T; Proehl, G
2018-06-01
A number of past industrial activities and accidents have resulted in the radioactive contamination of large areas at many sites around the world, giving rise to a need for remediation. According to the International Commission on Radiological Protection (ICRP) and International Atomic Energy Agency (IAEA), such situations should be managed as existing exposure situations (ExESs). Control of exposure to the public in ExESs is based on the application of appropriate reference levels (RLs) for residual doses. The implementation of this potentially fruitful concept for the optimisation of remediation in various regions is hampered by a lack of practical experience and relevant guidance. This paper suggests a generic methodology for the selection of numeric values of relevant RLs both in terms of residual annual effective dose and derived RLs (DRLs) based on an appropriate dose assessment. The value for an RL should be selected in the range of the annual residual effective dose of 1-20 mSv, depending on the prevailing circumstances for the exposure under consideration. Within this range, RL values should be chosen by the following assessment steps: (a) assessment of the projected dose, i.e. the dose to a representative person without remedial actions by means of a realistic model as opposed to a conservative model; (b) modelling of the residual dose to a representative person following application of feasible remedial actions; and (c) selection of an RL value between the projected and residual doses, taking account of the prevailing social and economic conditions. This paper also contains some recommendations for practical implementation of the selected RLs for the optimisation of public protection. 
The suggested methodology used for the selection of RLs (in terms of dose) and the calculation of DRLs (in terms of activity concentration in food, ambient dose rate, etc) has been illustrated by a retrospective analysis of post-Chernobyl monitoring and modelling data from the Bryansk region, Russia, 2001. From this example, it follows that analysis of real data leads to the selection of an RL from a relatively narrow annual dose range (in this case, about 2-3 mSv), from which relevant DRLs can be calculated and directly used for optimisation of the remediation programme.
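The dose-to-DRL step is a one-line calculation once a pathway model is fixed. As a hedged sketch (the RL and annual intake below are assumed round numbers, not the Bryansk data; the adult ingestion dose coefficient for Cs-137 of about 1.3e-8 Sv/Bq is the ICRP value):

```python
# Illustrative derivation of a derived reference level (DRL) for activity
# concentration in food from a dose-based reference level (RL).
# All numeric inputs are assumed example values, not data from the study.

def drl_food_bq_per_kg(rl_sv, intake_kg_per_year, dose_coeff_sv_per_bq):
    """DRL (Bq/kg) such that the annual ingestion dose equals the RL."""
    return rl_sv / (intake_kg_per_year * dose_coeff_sv_per_bq)

# Assumptions: RL of 3 mSv/year, 100 kg/year of the monitored foodstuff,
# ICRP adult ingestion dose coefficient for Cs-137 (~1.3e-8 Sv/Bq).
drl = drl_food_bq_per_kg(3e-3, 100.0, 1.3e-8)
print(round(drl), "Bq/kg")
```

In practice the ingestion pathway would be combined with external dose and a diet model before converting the remaining dose budget into a concentration limit.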
Optimisation and establishment of diagnostic reference levels in paediatric plain radiography
NASA Astrophysics Data System (ADS)
Paulo, Graciano do Nascimento Nobre
Purpose: This study aimed to propose Diagnostic Reference Levels (DRLs) in paediatric plain radiography and to optimise the most frequent paediatric plain radiography examinations in Portugal following an analysis and evaluation of current practice. Methods and materials: Anthropometric data (weight, patient height and thickness of the irradiated anatomy) were collected from 9,935 patients referred for a radiography procedure to one of the three dedicated paediatric hospitals in Portugal. National DRLs were calculated for the three most frequent X-ray procedures at the three hospitals: chest AP/PA projection; abdomen AP projection; pelvis AP projection. Exposure factors and patient dose were collected prospectively at the clinical sites. In order to analyse the relationship between exposure factors, the use of technical features and dose, experimental tests were made using two anthropomorphic phantoms: a) CIRS ATOM model 705 (height: 110 cm, weight: 19 kg) and b) Kyoto Kagaku model PBU-60 (height: 165 cm, weight: 50 kg). After phantom data collection, an objective image analysis was performed by analysing the variation of the mean value of the standard deviation, measured with OsiriX software (Pixmeo, Switzerland). After proposing new exposure criteria, a Visual Grading Characteristic image quality evaluation was performed blindly by four paediatric radiologists, each with a minimum of 10 years of professional experience, using anatomical criteria scoring. Results: DRLs by patient weight group have been established for the first time. ESAK P75 DRLs for both patient age and weight groups were also obtained and are described in the thesis.
Significant dose reduction was achieved through the implementation of an optimisation programme: an average reduction of 41% and 18% in KAP P75 and ESAK P75, respectively, for chest plain radiography; an average reduction of 58% and 53% in KAP P75 and ESAK P75, respectively, for abdomen plain radiography; and an average reduction of 47% and 48% in KAP P75 and ESAK P75, respectively, for pelvis plain radiography. Conclusion: Portuguese DRLs were obtained for paediatric plain radiography (chest AP/PA, abdomen and pelvis). Experimental phantom tests identified adequate plain radiography exposure criteria, validated by objective and subjective image quality analysis. The new exposure criteria were put into practice in one of the paediatric hospitals by introducing an optimisation programme. The implementation of the optimisation programme allowed a significant dose reduction to paediatric patients without compromising image quality. (Abstract shortened by ProQuest.)
Pupils' Pressure Models and Their Implications for Instruction.
ERIC Educational Resources Information Center
Kariotoglou, P.; Psillos, D.
1993-01-01
Discusses a study designed to investigate pupils' conceptions about fluids and particularly liquids in equilibrium, with reference to the concept of pressure. Based upon the results obtained, several mental models of how pupils understand liquids in equilibrium were proposed. (ZWH)
NASA Astrophysics Data System (ADS)
Chen, Jiliang; Jiang, Fangming
2016-02-01
With a previously developed numerical model, we perform a detailed study of the heat extraction process in enhanced or engineered geothermal systems (EGS). This model takes the EGS subsurface heat reservoir as an equivalent porous medium while considering local thermal non-equilibrium between the rock matrix and the fluid flowing in the fractured rock mass. The application of the local thermal non-equilibrium model highlights the temperature-difference heat exchange process occurring in EGS reservoirs, enabling a better understanding of the heat extraction process involved. The simulation results unravel the mechanism of preferential flow or short-circuit flow forming in homogeneously fractured reservoirs of different permeability values. EGS performance, e.g. production temperature and lifetime, is found to be tightly related to the flow pattern in the reservoir. Thermal compensation from rocks surrounding the reservoir contributes little heat to the heat transmission fluid if the operation time of an EGS is shorter than 15 years. We also find that the local thermal equilibrium model generally overestimates EGS performance, and that for an EGS with better heat exchange conditions in the heat reservoir, the heat extraction process behaves more like a local thermal equilibrium process.
Statistical optimisation techniques in fatigue signal editing problem
NASA Astrophysics Data System (ADS)
Nopiah, Z. M.; Osman, M. H.; Baharin, N.; Abdullah, S.
2015-02-01
Success in fatigue signal editing is determined by the level of length reduction achieved without compromising statistical constraints. A great reduction rate can be achieved by removing small amplitude cycles from the recorded signal. The long recorded signal sometimes renders the cycle-to-cycle editing process daunting. This has encouraged researchers to focus on the segment-based approach. This paper discusses the joint application of the Running Damage Extraction (RDE) technique and a single constrained Genetic Algorithm (GA) in fatigue signal editing optimisation. In the first section, the RDE technique is used to restructure and summarise the fatigue strain. This technique combines the overlapping window and fatigue strain-life models. It is designed to identify and isolate the fatigue events that exist in the variable amplitude strain data into different segments whereby the retention of statistical parameters and the vibration energy are considered. In the second section, the fatigue data editing problem is formulated as a constrained single optimisation problem that can be solved using the GA method. The GA produces the shortest edited fatigue signal by selecting appropriate segments from a pool of labelled segments. Challenges arise due to constraints on the segment selection by deviation level over three signal properties, namely cumulative fatigue damage, root mean square and kurtosis values. Experimental results over several case studies show that the idea of solving fatigue signal editing within a framework of optimisation is effective and automatic, and that the GA is robust for constrained segment selection.
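The constrained segment-selection step can be sketched with a textbook GA. This is an illustrative reconstruction, not the authors' implementation: the segment data are synthetic and only a single damage-retention constraint is enforced, whereas the paper also constrains root mean square and kurtosis.

```python
import random

random.seed(1)

# Synthetic fatigue segments: (length, fatigue damage) pairs, values illustrative.
segments = [(random.uniform(5, 15), random.expovariate(1.0)) for _ in range(20)]
total_dmg = sum(d for _, d in segments)

def fitness(mask):
    """Minimise retained length; heavy penalty for losing >5% of the damage."""
    length = sum(l for (l, _), keep in zip(segments, mask) if keep)
    damage = sum(d for (_, d), keep in zip(segments, mask) if keep)
    return length + (1e6 if damage < 0.95 * total_dmg else 0.0)

def evolve(pop_size=40, gens=150, pm=0.05):
    # Seed with the full signal so a feasible individual always exists.
    pop = [[True] * len(segments)] + [
        [random.random() < 0.9 for _ in segments] for _ in range(pop_size - 1)]
    best = min(pop, key=fitness)
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            a, b = random.sample(pop, 2)                    # binary tournament
            parent = min(a, b, key=fitness)
            nxt.append([bit ^ (random.random() < pm) for bit in parent])
        pop = nxt
        cand = min(pop, key=fitness)
        if fitness(cand) < fitness(best):
            best = cand
    return best

best = evolve()
kept_len = sum(l for (l, _), keep in zip(segments, best) if keep)
kept_dmg = sum(d for (_, d), keep in zip(segments, best) if keep)
```

The penalty formulation is the simplest way to make the GA honour the deviation constraints; the paper's multi-property constraints would add further penalty terms of the same shape.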
O'Hagan, Steve; Knowles, Joshua; Kell, Douglas B.
2012-01-01
Comparatively few studies have addressed directly the question of quantifying the benefits to be had from using molecular genetic markers in experimental breeding programmes (e.g. for improved crops and livestock), nor the question of which organisms should be mated with each other to best effect. We argue that this requires in silico modelling, an approach for which there is a large literature in the field of evolutionary computation (EC), but which has not really been applied in this way to experimental breeding programmes. EC seeks to optimise measurable outcomes (phenotypic fitnesses) by optimising in silico the mutation, recombination and selection regimes that are used. We review some of the approaches from EC, and compare experimentally, using a biologically relevant in silico landscape, some algorithms that have knowledge of where they are in the (genotypic) search space (G-algorithms) with some (albeit well-tuned ones) that do not (F-algorithms). For the present kinds of landscapes, F- and G-algorithms were broadly comparable in quality and effectiveness, although we recognise that the G-algorithms were not equipped with any ‘prior knowledge’ of epistatic pathway interactions. This use of algorithms based on machine learning has important implications for the optimisation of experimental breeding programmes in the post-genomic era when we shall potentially have access to the full genome sequence of every organism in a breeding population. The non-proprietary code that we have used is made freely available (via Supplementary information). PMID:23185279
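The EC framing can be made concrete with a minimal in silico breeding loop: a purely additive genetic model (each favourable allele adds one unit of phenotypic fitness), truncation selection, single-point recombination and rare point mutation. This is a deliberately simplified sketch, not one of the paper's tuned F- or G-algorithms, and it ignores the epistatic interactions that the landscape study addresses.

```python
import random

random.seed(0)

L, N, GENS = 40, 60, 30      # loci, population size, generations

def phenotype(g):
    """Additive genetic model: one fitness unit per favourable allele."""
    return sum(g)

def cross(a, b):
    """Single-point recombination of two parental genotypes."""
    cut = random.randrange(1, L)
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(N)]
start = sum(map(phenotype, pop)) / N
for _ in range(GENS):
    pop.sort(key=phenotype, reverse=True)
    parents = pop[:N // 2]                       # truncation selection
    pop = []
    for _ in range(N):
        child = cross(random.choice(parents), random.choice(parents))
        if random.random() < 0.1:                # rare point mutation
            i = random.randrange(L)
            child[i] ^= 1
        pop.append(child)
end = sum(map(phenotype, pop)) / N
```

Mean phenotypic fitness rises steadily over the generations; a G-algorithm in the paper's sense would additionally exploit the genotypes (marker information) when choosing the matings.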
The venous equilibrium model is widely used to describe hepatic clearance (CLH) of chemicals metabolized by the liver. If chemical delivery to the tissue does not limit CLH, this model predicts that CLH will approximately equal the product of intrinsic metabolic clearance and a t...
Optimisation of nano-silica modified self-compacting high-Volume fly ash mortar
NASA Astrophysics Data System (ADS)
Achara, Bitrus Emmanuel; Mohammed, Bashar S.; Fadhil Nuruddin, Muhd
2017-05-01
The effects of nano-silica amount and superplasticizer (SP) dosage on the compressive strength, porosity and slump flow of high-volume fly ash self-consolidating mortar were investigated. A multiobjective optimisation technique using Design-Expert software was applied to obtain a solution based on a desirability function that simultaneously optimises the variables and the responses. A desirability value of 0.811 gave the optimised solution. The experimental and predicted results showed minimal errors in all the measured responses.
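The desirability approach (Derringer-Suich, as implemented in Design-Expert) scales each response onto [0, 1] and combines the scores through a geometric mean, so a candidate scoring zero on any response is rejected outright. A hedged sketch with made-up response values and limits (not the study's data):

```python
# Derringer-Suich-style desirability aggregation; all numbers are assumed.

def desirability(y, lo, hi, maximise=True):
    """Map a response onto [0, 1] with a linear ramp between the limits."""
    d = (y - lo) / (hi - lo) if maximise else (hi - y) / (hi - lo)
    return min(1.0, max(0.0, d))

# Assumed responses for one candidate mix: compressive strength in MPa
# (maximise), porosity in % (minimise), slump flow in mm (maximise).
d_strength = desirability(62.0, 40.0, 70.0, maximise=True)
d_porosity = desirability(11.0, 9.0, 16.0, maximise=False)
d_slump    = desirability(250.0, 200.0, 280.0, maximise=True)

D = (d_strength * d_porosity * d_slump) ** (1 / 3)   # overall desirability
```

The optimiser then searches the factor space (nano-silica amount, SP dosage) for the settings whose predicted responses maximise D, which is how a single figure such as 0.811 arises.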
NASA Astrophysics Data System (ADS)
Ai, Cheng; Zhou, Jian; Zhang, Heng; Zhao, Xinbao; Pei, Yanling; Li, Shusuo; Gong, Shengkai
2016-01-01
The non-equilibrium solidification behaviors of five Ni-Al-Ta ternary model single crystal alloys with different Al contents were investigated by experimental analysis and theoretical calculation (by JMatPro) in this study. These model alloys respectively represented the γ' phase with various volume fractions (100%, 75%, 50%, 25% and 0%) at 900 °C. It was found that with decreasing Al content, the liquidus temperature of the experimental alloys first decreased and then increased. Meanwhile, the solidification range showed a continued downward trend. In addition, with decreasing Al content, the primary phases of the non-equilibrium solidified model alloys gradually transformed from γ' phase to γ phase, and the area fraction of the primary phase first decreased and then increased. Moreover, the interdendritic/intercellular precipitation of the model alloys changed from β phase (for 100% γ') to (γ+γ')Eutectic (for 75% γ'), (γ+γ')Eutectic+γ' (for 50% γ' and 25% γ') and no interdendritic precipitation (for 0% γ'), and the last-stage non-equilibrium solidification sequence of the model alloys was determined by the nominal Al content and the different microsegregation behaviors of Al.
The modeling and analysis of the word-of-mouth marketing
NASA Astrophysics Data System (ADS)
Li, Pengdeng; Yang, Xiaofan; Yang, Lu-Xing; Xiong, Qingyu; Wu, Yingbo; Tang, Yuan Yan
2018-03-01
Compared to traditional advertising, word-of-mouth (WOM) communications have striking advantages such as significantly lower cost and much faster propagation, and this is especially the case with the popularity of online social networks. This paper focuses on the modeling and analysis of WOM marketing. A dynamic model, known as the SIPNS model, capturing the WOM marketing processes with both positive and negative comments is established. On this basis, a measure of the overall profit of a WOM marketing campaign is proposed. The SIPNS model is shown to admit a unique equilibrium, and the equilibrium is determined. The impact of different factors on the equilibrium of the SIPNS model is illuminated through theoretical analysis. Extensive experimental results suggest that the equilibrium is very likely to be globally attracting. Finally, the influence of different factors on the expected overall profit of a WOM marketing campaign is ascertained both theoretically and experimentally. Thereby, some promotion strategies are recommended. To our knowledge, this is the first time WOM marketing has been treated in this way.
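The SIPNS equations themselves are not reproduced in this summary, so the flavour of the equilibrium analysis can only be shown on an assumed minimal analogue: susceptibles hear about the product, undecided hearers turn into positive or negative commenters, and commenters eventually lose interest. The compartments, equations and rates below are all illustrative assumptions, not the authors' model.

```python
# Minimal WOM compartment sketch (assumed equations, not the SIPNS model):
# S: unaware, I: undecided hearers, P/N: positive/negative commenters.

def step(s, i, p, n, dt=0.01, beta=0.6, q=0.7, g=0.3, mu=0.2):
    infect = beta * s * (p + n)          # hearing a comment of either kind
    ds = -infect + mu * (p + n)          # commenters eventually lose interest
    di = infect - g * i
    dp = q * g * i - mu * p              # a fraction q comment positively
    dn = (1 - q) * g * i - mu * n
    return s + ds * dt, i + di * dt, p + dp * dt, n + dn * dt

state = (0.99, 0.01, 0.0, 0.0)
for _ in range(200000):                  # forward Euler to t = 2000
    state = step(*state)
```

For these rates the dynamics settle to a unique positive equilibrium (here s* = mu/beta = 1/3, p* = 0.28, n* = 0.12), which is the kind of fixed point whose existence, uniqueness and attractivity the paper establishes for the SIPNS model.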
NASA Astrophysics Data System (ADS)
Li, Ni; Huai, Wenqing; Wang, Shaodan
2017-08-01
C2 (command and control) has been understood to be a critical military component in meeting an increasing demand for rapid information gathering and real-time decision-making in a dynamically changing battlefield environment. In this article, to improve a C2 behaviour model's reusability and interoperability, a behaviour modelling framework was proposed to specify a C2 model's internal modules and a set of interoperability interfaces based on the C-BML (coalition battle management language). WTA (weapon target assignment) is a typical C2 autonomous decision-making behaviour modelling problem. Unlike most WTA problem descriptions, here sensors were considered to be available detection resources and the relationship constraints between weapons and sensors were also taken into account, which brought the formulation much closer to actual application. A modified differential evolution (MDE) algorithm was developed to solve this high-dimensional optimisation problem and obtained an optimal assignment plan with high efficiency. In a case study, we built a simulation system to validate the proposed C2 modelling framework and interoperability interface specification. The new optimisation solution was also used to solve the WTA problem efficiently and successfully.
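The WTA objective and an evolutionary search over it can be sketched as follows. This is a toy instance with assumed values, using textbook DE/rand/1/bin rather than the authors' modified DE, and omitting the sensor constraints:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy WTA instance (all values assumed): 3 weapons, 3 targets.
V = np.array([5.0, 8.0, 3.0])                  # target values
P = np.array([[0.6, 0.3, 0.1],                 # kill probability of weapon i
              [0.2, 0.7, 0.4],                 # against target j
              [0.5, 0.5, 0.5]])

def cost(x):
    """Expected surviving target value for a (rounded) assignment vector x."""
    assign = np.clip(np.rint(x), 0, 2).astype(int)
    surv = V.copy()
    for i, j in enumerate(assign):
        surv[j] *= 1.0 - P[i, j]
    return float(surv.sum())

# Plain DE/rand/1/bin over a continuous relaxation of the assignment.
NP_, D_, F, CR = 20, 3, 0.7, 0.9
pop = rng.uniform(-0.49, 2.49, size=(NP_, D_))
costs = np.array([cost(x) for x in pop])
init_best = costs.min()
for _ in range(60):
    for k in range(NP_):
        a, b, c = pop[rng.choice(NP_, 3, replace=False)]
        trial = np.where(rng.random(D_) < CR, a + F * (b - c), pop[k])
        if cost(trial) <= costs[k]:            # greedy one-to-one selection
            pop[k], costs[k] = trial, cost(trial)
best = float(costs.min())
```

For this instance the global optimum (weapon 1 on target 0, weapon 2 on target 1, weapon 3 on target 2) has cost 5.9; DE's greedy selection guarantees the best cost never worsens during the run.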
NASA Astrophysics Data System (ADS)
Dittmar, N.; Haberstroh, Ch.; Hesse, U.; Krzyzowski, M.
2016-10-01
In part one of this publication experimental results for a single-channel transfer line used at liquid helium (LHe) decant stations are presented. The transfer of LHe into mobile dewars is an unavoidable process since the places of storage and usage are generally located apart from each other. The experimental results have shown that reasonable amounts of LHe evaporate due to heat leak and pressure drop. The helium cold gas thus generated has to be collected and reliquefied, demanding a large amount of electrical energy. Although this transfer process is common in cryogenic laboratories, no existing code could be found to model it. Therefore, a thermohydraulic model has been developed to model the LHe flow at operating conditions using published heat transfer and pressure drop correlations. This paper covers the basic equations used to calculate heat transfer and pressure drop, as well as the validation of the thermohydraulic code, and its application within the optimisation process. The final transfer line design features reduced heat leak and pressure drop values based on a combined measurement and modelling campaign in the range of 0.112 < p_in < 0.148 MPa, 190 < G < 450 kg/(m^2 s), and 0.04 < x_out < 0.12.
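The order of magnitude of the frictional contribution can be checked with the single-phase Darcy-Weisbach form written in terms of the mass flux G. All numbers below (geometry, density, friction factor) are assumed for illustration; the paper itself uses dedicated two-phase correlations.

```python
# Back-of-envelope single-phase frictional pressure drop for a transfer line,
# Darcy-Weisbach form with mass flux G. All inputs are assumed values.

def dp_friction(f, L, D, G, rho):
    """Pressure drop (Pa): dp = f * (L/D) * G^2 / (2*rho)."""
    return f * (L / D) * G**2 / (2.0 * rho)

# Assumptions: f = 0.02, 10 m line of 10 mm bore, G = 300 kg/(m^2 s),
# saturated-liquid helium density of roughly 125 kg/m^3.
dp = dp_friction(0.02, 10.0, 0.01, 300.0, 125.0)
print(dp, "Pa")
```

The resulting ~7 kPa is small but not negligible against the quoted 0.112-0.148 MPa inlet pressures, which is why pressure drop enters the optimisation alongside heat leak.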
ASHEE: a compressible, Equilibrium-Eulerian model for volcanic ash plumes
NASA Astrophysics Data System (ADS)
Cerminara, M.; Esposti Ongaro, T.; Berselli, L. C.
2015-10-01
A new fluid-dynamic model is developed to numerically simulate the non-equilibrium dynamics of polydisperse gas-particle mixtures forming volcanic plumes. Starting from the three-dimensional N-phase Eulerian transport equations (Neri et al., 2003) for a mixture of gases and solid dispersed particles, we adopt an asymptotic expansion strategy to derive a compressible version of the first-order non-equilibrium model (Ferry and Balachandar, 2001), valid for low concentration regimes (particle volume fraction less than 10^-3) and particle Stokes numbers (St, i.e., the ratio between their relaxation time and the flow characteristic time) not exceeding about 0.2. The new model, which is called ASHEE (ASH Equilibrium Eulerian), is significantly faster than the N-phase Eulerian model while retaining the capability to describe gas-particle non-equilibrium effects. Direct numerical simulations accurately reproduce the dynamics of isotropic, compressible turbulence in the subsonic regime. For gas-particle mixtures, the model describes the main features of density fluctuations and the preferential concentration and clustering of particles by turbulence, thus verifying its reliability and suitability for the numerical simulation of high-Reynolds number and high-temperature regimes in the presence of a dispersed phase. Large-Eddy Simulations of forced plumes are able to reproduce their observed averaged and instantaneous flow properties. In particular, the self-similar Gaussian radial profile and the development of large-scale coherent structures are reproduced, including the rate of turbulent mixing and entrainment of atmospheric air. Application to the Large-Eddy Simulation of the injection of the eruptive mixture in a stratified atmosphere describes some of the important features of turbulent volcanic plumes, including air entrainment, buoyancy reversal, and maximum plume height.
For very fine particles (St → 0, when non-equilibrium effects are negligible) the model reduces to the so-called dusty-gas model. However, coarse particles partially decouple from the gas phase within eddies (thus modifying the turbulent structure) and preferentially concentrate at the eddy periphery, eventually being lost from the plume margins due to the concurrent effect of gravity. By these mechanisms, gas-particle non-equilibrium processes are able to influence the large-scale behavior of volcanic plumes.
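The St < ~0.2 validity bound can be checked per particle class from the Stokes drag relaxation time tau_p = rho_p d^2 / (18 mu). A hedged example with assumed ash properties (not values from the paper):

```python
# Stokes number check for the Equilibrium-Eulerian validity range (St < ~0.2).
# Particle density, diameter, gas viscosity and flow time are assumed values.

def stokes_number(rho_p, d, mu_gas, tau_flow):
    """St = tau_p / tau_flow with Stokes drag relaxation time tau_p."""
    tau_p = rho_p * d**2 / (18.0 * mu_gas)
    return tau_p / tau_flow

# 50-micron ash (rho_p ~ 2500 kg/m^3) in gas (mu ~ 1.8e-5 Pa s),
# against an assumed eddy turnover time of ~1 s:
st = stokes_number(2500.0, 50e-6, 1.8e-5, 1.0)
```

For these assumed values St is about 0.02, comfortably inside the regime where the model applies; coarser lapilli quickly exceed the bound because tau_p grows with the square of the diameter.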
Calibrating reaction rates for the CREST model
NASA Astrophysics Data System (ADS)
Handley, Caroline A.; Christie, Michael A.
2017-01-01
The CREST reactive-burn model uses entropy-dependent reaction rates that, until now, have been manually tuned to fit shock-initiation and detonation data in hydrocode simulations. This paper describes the initial development of an automatic method for calibrating CREST reaction-rate coefficients, using particle swarm optimisation. The automatic method is applied to EDC32, to help develop the first CREST model for this conventional high explosive.
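The calibration loop can be sketched with a minimal particle swarm optimiser fitting coefficients of a rate law to data. This is a hedged stand-in: the synthetic power-law rate and its data below are assumptions for illustration, not CREST's entropy-dependent rate forms or EDC32 data.

```python
import random

random.seed(3)

# Synthetic "experimental" data from a hidden power-law rate r = a * s**b;
# the swarm recovers (a, b) by minimising the squared residuals.
target = (0.8, 2.5)
data = [(s, target[0] * s ** target[1]) for s in (0.5, 1.0, 1.5, 2.0)]

def loss(x):
    a, b = x
    return sum((a * s ** b - r) ** 2 for s, r in data)

n, dims, w, c1, c2 = 15, 2, 0.7, 1.5, 1.5
pos = [[random.uniform(0, 3) for _ in range(dims)] for _ in range(n)]
vel = [[0.0] * dims for _ in range(n)]
pbest = [p[:] for p in pos]                       # personal bests
gbest = min(pbest, key=loss)                      # global best
for _ in range(300):
    for i in range(n):
        for d in range(dims):
            vel[i][d] = (w * vel[i][d]
                         + c1 * random.random() * (pbest[i][d] - pos[i][d])
                         + c2 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if loss(pos[i]) < loss(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=loss)
```

In the paper's setting the loss would instead measure the mismatch between hydrocode shock-initiation simulations and experiment, with each particle encoding a full set of CREST reaction-rate coefficients.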
ERIC Educational Resources Information Center
Reinfried, Sibylle; Tempelmann, Sebastian
2014-01-01
This paper provides a video-based learning process study that investigates the kinds of mental models of the atmospheric greenhouse effect 13-year-old learners have and how these mental models change with a learning environment, which is optimised in regard to instructional psychology. The objective of this explorative study was to observe and…
Emergency Management Operations Process Mapping: Public Safety Technical Program Study
2011-02-01
Enterprise Architectures in industry, and have been successfully applied to assist companies to optimise interdependencies and relationships between...model for more in-depth analysis of EM processes, and for use in tandem with other studies that apply modeling and simulation to assess EM operational effectiveness before and after changing elements
CFD analysis of laboratory scale phase equilibrium cell operation
NASA Astrophysics Data System (ADS)
Jama, Mohamed Ali; Nikiforow, Kaj; Qureshi, Muhammad Saad; Alopaeus, Ville
2017-10-01
For the modeling of multiphase chemical reactors or separation processes, it is essential to accurately predict chemical equilibrium data, such as vapor-liquid or liquid-liquid equilibria [M. Šoóš et al., Chem. Eng. Process.: Process Intensif. 42(4), 273-284 (2003)]. The instruments used in these experiments are typically designed based on previous experience, and their operation verified based on known equilibria of standard components. However, mass transfer limitations with different chemical systems may be very different, potentially falsifying the measured equilibrium compositions. In this work, computational fluid dynamics is utilized to design and analyze a laboratory scale experimental gas-liquid equilibrium cell for the first time, to augment the traditional analysis based on the plug flow assumption. A two-phase dilutor cell, used for measuring limiting activity coefficients at infinite dilution, is used as a test case for the analysis. The Lagrangian discrete model is used to track each bubble and to study the residence time distribution of the carrier gas bubbles in the dilutor cell. This analysis is necessary to assess whether the gas leaving the cell is in equilibrium with the liquid, as required in the traditional analysis of such apparatus. Mass transfer for six different bio-oil compounds is calculated to determine the approach to equilibrium concentration. Also, residence times assuming plug flow and ideal mixing are used as reference cases to evaluate the influence of mixing on the approach to equilibrium in the dilutor. Results show that the model can be used to predict the dilutor operating conditions for which each of the studied gas-liquid systems reaches equilibrium.
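Under the traditional plug-flow picture referred to above, a bubble's approach to equilibrium follows a simple first-order saturation law: S = 1 - exp(-t_res / t_mt), where t_res is the bubble residence time and t_mt a mass-transfer time constant. The numbers below are assumed for illustration only, not values from the study.

```python
import math

# Plug-flow approach-to-equilibrium estimate for a rising bubble in a
# dilutor cell; all numeric inputs are illustrative assumptions.

def saturation(t_res, t_mt):
    """Fractional approach to the equilibrium concentration at the outlet."""
    return 1.0 - math.exp(-t_res / t_mt)

s_fast = saturation(t_res=2.0, t_mt=0.2)   # transfer much faster than the rise
s_slow = saturation(t_res=2.0, t_mt=4.0)   # slow transfer: equilibrium missed
```

The CFD analysis in the paper effectively replaces the single t_res with the full residence time distribution of the tracked bubbles, which is what exposes systems that fail to equilibrate.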
Advanced treatment planning using direct 4D optimisation for pencil-beam scanned particle therapy
NASA Astrophysics Data System (ADS)
Bernatowicz, Kinga; Zhang, Ye; Perrin, Rosalind; Weber, Damien C.; Lomax, Antony J.
2017-08-01
We report on the development of a new four-dimensional (4D) optimisation approach for scanned proton beams, which incorporates both irregular motion patterns and the delivery dynamics of the treatment machine into the plan optimiser. Furthermore, we assess the effectiveness of this technique in reducing dose to critical structures in proximity to moving targets, while maintaining effective target dose homogeneity and coverage. The proposed approach has been tested using both a simulated phantom and a clinical liver cancer case, and allows for realistic 4D calculations and optimisation using irregular breathing patterns extracted from e.g. 4DCT-MRI (4D computed tomography-magnetic resonance imaging). 4D dose distributions resulting from our 4D optimisation can achieve almost the same quality as static plans, independent of the studied geometry/anatomy or selected motion (regular and irregular). Additionally, the current implementation of the 4D optimisation approach requires less than 3 min to find the solution for a single field planned on the 4DCT of a liver cancer patient. Although 4D optimisation allows for realistic calculations using irregular breathing patterns, it is very sensitive to variations from the planned motion. Based on a sensitivity analysis, target dose homogeneity comparable to static plans (D5-D95 < 5%) has been found only for differences in amplitude of up to 1 mm, for changes in respiratory phase of <200 ms, and for changes in the breathing period of <20 ms in comparison to the motions used during optimisation. As such, methods to robustly deliver 4D optimised plans employing 4D intensity-modulated delivery are discussed.
NASA Technical Reports Server (NTRS)
Gnoffo, Peter A.; Johnston, Christopher O.; Thompson, Richard A.
2009-01-01
A description of models and boundary conditions required for coupling radiation and ablation physics to a hypersonic flow simulation is provided. Chemical equilibrium routines for varying elemental mass fraction are required in the flow solver to integrate with the equilibrium chemistry assumption employed in the ablation models. The capability also enables an equilibrium catalytic wall boundary condition in the non-ablating case. The paper focuses on numerical implementation issues using FIRE II, Mars return, and Apollo 4 applications to provide context for discussion. Variable relaxation factors applied to the Jacobian elements of partial equilibrium relations required for convergence are defined. Challenges of strong radiation coupling in a shock capturing algorithm are addressed. Results are presented to show how the current suite of models responds to a wide variety of conditions involving coupled radiation and ablation.
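The role of a relaxation factor can be illustrated on a scalar stand-in for a partial-equilibrium relation: an under-relaxed Newton update x <- x + omega * dx trades convergence speed for robustness. The equilibrium relation and numbers below are illustrative assumptions, not the coupled chemistry of the FIRE II or Apollo 4 cases.

```python
# Damped (under-relaxed) Newton iteration on a toy equilibrium relation
# x^2 / (1 - x) = K, mimicking the use of a relaxation factor omega.

def solve_equilibrium(K=1.0, omega=0.8, x=0.5, iters=100):
    for _ in range(iters):
        f = x * x / (1.0 - x) - K
        dfdx = (2.0 * x - x * x) / (1.0 - x) ** 2   # d/dx of x^2/(1-x)
        x += omega * (-f / dfdx)                    # under-relaxed Newton step
    return x

x_eq = solve_equilibrium()   # converges to (-1 + sqrt(5)) / 2
```

Full Newton (omega = 1) converges fastest near the root, but damping the update is the standard remedy when stiff source terms or strong radiation coupling would otherwise make the early iterations diverge, which is the situation the variable relaxation factors address.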
Eberl, Gérard
2016-08-01
The classical model of immunity posits that the immune system reacts to pathogens and injury and restores homeostasis. Indeed, a century of research has uncovered the means and mechanisms by which the immune system recognizes danger and regulates its own activity. However, this classical model does not fully explain complex phenomena, such as tolerance, allergy, the increased prevalence of inflammatory pathologies in industrialized nations and immunity to multiple infections. In this Essay, I propose a model of immunity that is based on equilibrium, in which the healthy immune system is always active and in a state of dynamic equilibrium between antagonistic types of response. This equilibrium is regulated both by the internal milieu and by the microbial environment. As a result, alteration of the internal milieu or microbial environment leads to immune disequilibrium, which determines tolerance, protective immunity and inflammatory pathology.
Dynamics of epidemic spreading model with drug-resistant variation on scale-free networks
NASA Astrophysics Data System (ADS)
Wan, Chen; Li, Tao; Zhang, Wu; Dong, Jing
2018-03-01
Considering the influence of the virus' drug-resistant variation, a novel SIVRS (susceptible-infected-variant-recovered-susceptible) epidemic spreading model with a variation characteristic on scale-free networks is proposed in this paper. Using mean-field theory, the spreading dynamics of the model is analyzed in detail, and the basic reproductive number R0 and the equilibria are derived. Studies show that the existence of the disease-free equilibrium is determined by the basic reproductive number R0. The relationships between the basic reproductive number R0, the variation characteristic and the topology of the underlying networks are studied in detail. Furthermore, our studies prove the global stability of the disease-free equilibrium, the permanence of the epidemic and the global attractivity of the endemic equilibrium. Numerical simulations are performed to confirm the analytical results.
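The threshold behaviour can be illustrated with a homogeneous mean-field analogue. The paper works degree-by-degree on scale-free networks, where R0 also involves the degree distribution; the compartments and rates below are assumed values for a well-mixed sketch, not the authors' equations.

```python
# Homogeneous mean-field sketch of an SIVRS-type model with drug-resistant
# variation: I converts to the variant V at rate eps. All rates are assumed.

def simulate(beta, beta_v, eps=0.1, gamma=0.3, gamma_v=0.25, delta=0.1,
             steps=200000, dt=0.01):
    s, i, v, r = 0.9, 0.1, 0.0, 0.0
    for _ in range(steps):                       # forward Euler to t = 2000
        new_i, new_v = beta * s * i, beta_v * s * v
        ds = -new_i - new_v + delta * r          # recovered lose immunity
        di = new_i - (gamma + eps) * i
        dv = new_v + eps * i - gamma_v * v
        dr = gamma * i + gamma_v * v - delta * r
        s, i, v, r = s + ds * dt, i + di * dt, v + dv * dt, r + dr * dt
    return s, i, v, r

low = simulate(beta=0.2, beta_v=0.1)    # both reproduction ratios below one
high = simulate(beta=1.0, beta_v=0.5)   # above threshold: endemic state
```

Below threshold the trajectory approaches the disease-free equilibrium (1, 0, 0, 0); above it, both the original strain and the variant persist at an endemic equilibrium, mirroring the R0 dichotomy the paper proves.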
Overshoot in biological systems modelled by Markov chains: a non-equilibrium dynamic phenomenon.
Jia, Chen; Qian, Minping; Jiang, Daquan
2014-08-01
A number of biological systems can be modelled by Markov chains. Recently, there has been increasing interest in when biological systems modelled by Markov chains will exhibit a dynamic phenomenon called overshoot. In this study, the authors found that the steady-state behaviour of the system has a great effect on the occurrence of overshoot. They showed that overshoot in general cannot occur in systems that will finally approach an equilibrium steady state. They further classified overshoot into two types, named simple overshoot and oscillating overshoot. They showed that, except in extreme cases, oscillating overshoot will occur if the system is far from equilibrium. All these results clearly show that overshoot is a non-equilibrium dynamic phenomenon with energy consumption. In addition, the main result in this study is validated with real experimental data.
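A small non-equilibrium chain makes the phenomenon concrete. The transition matrix below is an assumed example chosen to violate detailed balance (probability circulates 0 -> 1 -> 2 -> 0), not a system from the paper.

```python
import numpy as np

# 3-state chain with a net probability cycle; it is doubly stochastic, so the
# stationary distribution is uniform (1/3 per state), but detailed balance fails.
P = np.array([[0.05, 0.90, 0.05],
              [0.05, 0.05, 0.90],
              [0.90, 0.05, 0.05]])

p = np.array([1.0, 0.0, 0.0])      # start fully in state 0
traj = [p[1]]                      # track the occupancy of state 1
for _ in range(60):
    p = p @ P
    traj.append(p[1])

pi = 1.0 / 3.0                     # stationary occupancy of each state
overshoot = max(traj) - pi         # transient excess above the steady state
```

The occupancy of state 1 jumps to 0.9 after one step, far above its stationary value of 1/3, and then relaxes back: an overshoot of the kind the result above associates with non-equilibrium steady states.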
Gladman, John; Buckell, John; Young, John; Smith, Andrew; Hulme, Clare; Saggu, Satti; Godfrey, Mary; Enderby, Pam; Teale, Elizabeth; Longo, Roberto; Gannon, Brenda; Holditch, Claire; Eardley, Heather; Tucker, Helen
2017-01-01
Introduction To understand the variation in performance between community hospitals, our objectives are: to measure the relative performance (cost efficiency) of rehabilitation services in community hospitals; to identify the characteristics of community hospital rehabilitation that optimise performance; to investigate the current impact of community hospital inpatient rehabilitation for older people on secondary care and the potential impact if community hospital rehabilitation was optimised to best practice nationally; to examine the relationship between the configuration of intermediate care and secondary care bed use; and to develop toolkits for commissioners and community hospital providers to optimise performance. Methods and analysis 4 linked studies will be performed. Study 1: cost efficiency modelling will apply econometric techniques to data sets from the National Health Service (NHS) Benchmarking Network surveys of community hospital and intermediate care. This will identify community hospitals' performance and estimate the gap between high and low performers. Analyses will determine the potential impact if the performance of all community hospitals nationally was optimised to best performance, and examine the association between community hospital configuration and secondary care bed use. Study 2: a national community hospital survey gathering detailed cost data and efficiency variables will be performed. Study 3: in-depth case studies of 3 community hospitals, 2 high and 1 low performing, will be undertaken. Case studies will gather routine hospital and local health economy data. Ward culture will be surveyed. Content and delivery of treatment will be observed. Patients and staff will be interviewed. Study 4: co-designed web-based quality improvement toolkits for commissioners and providers will be developed, including indicators of performance and the gap between local and best community hospitals performance. 
Ethics and dissemination Publications will be in peer-reviewed journals, and reports will be distributed through stakeholder organisations. Ethical approval was obtained from the Bradford Research Ethics Committee (reference: 15/YH/0062). PMID:28242766
Bahia, Daljit; Cheung, Robert; Buchs, Mirjam; Geisse, Sabine; Hunt, Ian
2005-01-01
This report describes a method to culture insect cells in 24 deep-well blocks for the routine small-scale optimisation of baculovirus-mediated protein expression experiments. Miniaturisation of this process provides the necessary reduction in terms of resource allocation, reagents, and labour to allow extensive and rapid optimisation of expression conditions, with the concomitant reduction in lead-time before commencement of large-scale bioreactor experiments. This therefore greatly simplifies the optimisation process and allows the use of liquid handling robotics in much of the initial optimisation stages of the process, thereby greatly increasing the throughput of the laboratory. We present several examples of the use of deep-well block expression studies in the optimisation of therapeutically relevant protein targets. We also discuss how the enhanced throughput offered by this approach can be adapted to robotic handling systems and the implications this has for the capacity to conduct multi-parallel protein expression studies.
A novel global Harmony Search method based on Ant Colony Optimisation algorithm
NASA Astrophysics Data System (ADS)
Fouad, Allouani; Boukhetala, Djamel; Boudjema, Fares; Zenger, Kai; Gao, Xiao-Zhi
2016-03-01
The Global-best Harmony Search (GHS) is a recently developed stochastic optimisation algorithm which hybridises the Harmony Search (HS) method with the swarm-intelligence concept of particle swarm optimisation (PSO) to enhance its performance. In this article, a new optimisation algorithm called GHSACO is developed by incorporating the GHS with the Ant Colony Optimisation algorithm (ACO). Our method introduces a novel improvisation process, which differs from that of the GHS in the following aspects: (i) a modified harmony memory (HM) representation and design; (ii) a global random switching mechanism to control the choice between the ACO and GHS; (iii) an additional memory consideration selection rule using the ACO random proportional transition rule with a pheromone trail update mechanism. The proposed GHSACO algorithm has been applied to various benchmark functions and constrained optimisation problems. Simulation results demonstrate that it can find significantly better solutions when compared with the original HS and some of its variants.
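For orientation, the baseline GHS improvisation step that GHSACO modifies can be sketched as follows. This is a minimal reading of the usual global-best scheme; the parameter names and the sphere test function are illustrative, and none of the ACO machinery described in the abstract is included:

```python
import random

def ghs(obj, dim=5, hm_size=10, hmcr=0.9, par=0.3, iters=2000,
        lo=-5.0, hi=5.0, seed=1):
    """Minimal Global-best Harmony Search sketch (no ACO components)."""
    rng = random.Random(seed)
    hm = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hm_size)]
    cost = [obj(h) for h in hm]
    for _ in range(iters):
        best = hm[min(range(hm_size), key=cost.__getitem__)]
        new = []
        for d in range(dim):
            if rng.random() < hmcr:
                x = hm[rng.randrange(hm_size)][d]   # memory consideration
                if rng.random() < par:
                    x = best[rng.randrange(dim)]    # global-best "pitch adjust"
            else:
                x = rng.uniform(lo, hi)             # random selection
            new.append(x)
        worst = max(range(hm_size), key=cost.__getitem__)
        c = obj(new)
        if c < cost[worst]:                         # replace worst harmony
            hm[worst], cost[worst] = new, c
    return min(cost)

result = ghs(lambda v: sum(x * x for x in v))       # sphere test function
print(result)
```

GHSACO replaces the memory consideration rule with an ACO-style pheromone-weighted selection; the skeleton above only shows where that rule plugs in.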
Delay induced stability switch, multitype bistability and chaos in an intraguild predation model.
Shu, Hongying; Hu, Xi; Wang, Lin; Watmough, James
2015-12-01
In many predator-prey models, delay has a destabilizing effect and induces oscillations; while in many competition models, delay does not induce oscillations. By analyzing a rather simple delayed intraguild predation model, which combines both the predator-prey relation and competition, we show that delay in intraguild predation models promotes very complex dynamics. The delay can induce stability switches exhibiting a destabilizing role as well as a stabilizing role. It is shown that three types of bistability are possible: one stable equilibrium coexists with another stable equilibrium (node-node bistability); one stable equilibrium coexists with a stable periodic solution (node-cycle bistability); one stable periodic solution coexists with another stable periodic solution (cycle-cycle bistability). Numerical simulations suggest that delay can also induce chaos in intraguild predation models.
NASA Technical Reports Server (NTRS)
Paquette, John A.; Nuth, Joseph A., III
2011-01-01
Classical nucleation theory has been used in models of dust nucleation in circumstellar outflows around oxygen-rich asymptotic giant branch stars. One objection to the application of classical nucleation theory (CNT) to astrophysical systems of this sort is that an equilibrium distribution of clusters (assumed by CNT) is unlikely to exist under such conditions due to a low collision rate of condensable species. A model of silicate grain nucleation and growth was modified to evaluate the effect of a nucleation flux orders of magnitude below the equilibrium value. The results show that a lack of chemical equilibrium has only a small effect on the ultimate grain distribution.
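The CNT quantities at stake can be sketched directly from the standard formulas for the critical radius and nucleation barrier. The material parameters below are rough, water-like values chosen for illustration, not the silicate values of the model:

```python
# Classical nucleation theory scalings: critical cluster radius and the
# barrier height versus supersaturation S. Illustrative parameters only.
import math

k_B = 1.380649e-23      # J/K
T = 300.0               # K
sigma = 0.072           # J/m^2, surface tension (illustrative)
v = 3.0e-29             # m^3, molecular volume (illustrative)

def critical_radius(S):
    """Kelvin radius r* = 2 sigma v / (k_B T ln S)."""
    return 2 * sigma * v / (k_B * T * math.log(S))

def barrier(S):
    """CNT barrier dG* = 16 pi sigma^3 v^2 / (3 (k_B T ln S)^2)."""
    return 16 * math.pi * sigma ** 3 * v ** 2 / (3 * (k_B * T * math.log(S)) ** 2)

for S in (2.0, 5.0):
    print(S, critical_radius(S), barrier(S) / (k_B * T))
```

The steep dependence of the barrier on ln S is why the predicted nucleation flux is so sensitive to whether an equilibrium cluster distribution actually exists.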
A regressive storm model for extreme space weather
NASA Astrophysics Data System (ADS)
Terkildsen, Michael; Steward, Graham; Neudegg, Dave; Marshall, Richard
2012-07-01
Extreme space weather events, while rare, pose significant risk to society in the form of impacts on critical infrastructure such as power grids, and the disruption of high end technological systems such as satellites and precision navigation and timing systems. There has been an increased focus on modelling the effects of extreme space weather, as well as improving the ability of space weather forecast centres to identify, with sufficient lead time, solar activity with the potential to produce extreme events. This paper describes the development of a data-based model for predicting the occurrence of extreme space weather events from solar observation. The motivation for this work was to develop a tool to assist space weather forecasters in early identification of solar activity conditions with the potential to produce extreme space weather, and with sufficient lead time to notify relevant customer groups. Data-based modelling techniques were used to construct the model, and an extensive archive of solar observation data used to train, optimise and test the model. The optimisation of the base model aimed to eliminate false negatives (missed events) at the expense of a tolerable increase in false positives, under the assumption of an iterative improvement in forecast accuracy during progression of the solar disturbance, as subsequent data becomes available.
Kinematic models of the upper limb joints for multibody kinematics optimisation: An overview.
Duprey, Sonia; Naaim, Alexandre; Moissenet, Florent; Begon, Mickaël; Chèze, Laurence
2017-09-06
Soft tissue artefact (STA), i.e. the motion of the skin, fat and muscles gliding on the underlying bone, may lead to a marker position error reaching up to 8.7 cm for the particular case of the scapula. Multibody kinematics optimisation (MKO) is one of the most efficient approaches used to reduce STA. It consists in minimising the distance between the positions of experimental markers on a subject's skin and the simulated positions of the same markers embedded on a kinematic model. However, the efficiency of MKO directly relies on the chosen kinematic model. This paper provides an overview of the different upper limb models available in the literature and discusses their applicability to MKO. The advantages of each joint model with respect to its biofidelity to functional anatomy are detailed both for the shoulder and the forearm areas. The models' capabilities for personalisation and adaptation to pathological cases are also discussed. Concerning model efficiency in terms of STA reduction in MKO algorithms, a lack of quantitative assessment in the literature is noted. As a priority, future studies should address the evaluation and quantification of STA reduction depending on upper limb joint constraints. Copyright © 2016 Elsevier Ltd. All rights reserved.
Integral Equation for the Equilibrium State of Colliding Electron Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warnock, Robert L.
2002-11-11
We study a nonlinear integral equation for the equilibrium phase distribution of stored colliding electron beams. It is analogous to the Haissinski equation, being derived from Vlasov-Fokker-Planck theory, but is quite different in form. We prove existence of a unique solution, thus the existence of a unique equilibrium state, for sufficiently small current. This is done for the Chao-Ruth model of the beam-beam interaction in one degree of freedom. We expect no difficulty in generalizing the argument to more realistic models.
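The fixed-point structure of such an equation can be illustrated with a damped Picard iteration on a toy smooth kernel. This is not the Chao-Ruth beam-beam kernel, and the "current" strength is an arbitrary small value chosen so the map contracts:

```python
# Damped Picard iteration for a Haissinski-type normalised fixed point
#   lam(q) = exp(-q^2/2 + I*(K lam)(q)) / Z,  (K lam)(q) = sum_j k(q-q_j) lam_j dq
# with a toy kernel k(s) = exp(-s^2). Illustrates existence of a unique
# solution at small "current" I via contraction, not the paper's proof.
import math

n, L, I = 121, 6.0, 0.2   # grid points, half-width, toy current strength
q = [-L + 2 * L * i / (n - 1) for i in range(n)]
dq = q[1] - q[0]

def apply_map(lam):
    """One Picard step: convolve, exponentiate, renormalise."""
    Kl = [sum(math.exp(-(qi - qj) ** 2) * lam[j] * dq
              for j, qj in enumerate(q)) for qi in q]
    raw = [math.exp(-qi ** 2 / 2 + I * Kl[i]) for i, qi in enumerate(q)]
    Z = sum(raw) * dq
    return [r / Z for r in raw]

lam = [math.exp(-qi ** 2 / 2) for qi in q]
Z0 = sum(lam) * dq
lam = [x / Z0 for x in lam]
for _ in range(100):
    lam = [0.5 * a + 0.5 * b for a, b in zip(lam, apply_map(lam))]  # damped

residual = max(abs(a - b) for a, b in zip(lam, apply_map(lam)))
print(residual, sum(lam) * dq)
```

For sufficiently small I the map is a contraction and the iteration converges to the unique normalised equilibrium density, mirroring the small-current existence result in the abstract.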
Equilibrium and nonequilibrium models on Solomon networks with two square lattices
NASA Astrophysics Data System (ADS)
Lima, F. W. S.
We investigate the critical properties of equilibrium and nonequilibrium two-dimensional (2D) systems on Solomon networks with both nearest and random neighbors. The equilibrium and nonequilibrium 2D systems studied here by Monte Carlo simulations are the Ising and Majority-vote 2D models, respectively. We calculate the critical points as well as the critical exponent ratios γ/ν, β/ν, and 1/ν. We find numerically that both systems present the same exponents on Solomon networks (2D) and belong to a different universality class than the regular 2D ferromagnetic model. Our results are in agreement with the Grinstein criterion for models with up-down symmetry on regular lattices.
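As a point of reference, the regular-lattice Ising half of such a study reduces to standard Metropolis Monte Carlo. The sketch below uses only nearest neighbours on a periodic square lattice (the Solomon-network random links are omitted), showing ordered versus disordered behaviour on either side of the transition:

```python
# Minimal Metropolis simulation of the 2D Ising model on a periodic square
# lattice. Cold runs (T well below Tc ~ 2.27) stay ordered; hot runs disorder.
import math, random

def metropolis_abs_m(T, n=16, sweeps=200, measure_after=100, seed=0):
    """Average |magnetisation| per spin over the second half of the run."""
    rng = random.Random(seed)
    s = [[1] * n for _ in range(n)]          # ordered start
    ms = []
    for sweep in range(sweeps):
        for _ in range(n * n):
            i, j = rng.randrange(n), rng.randrange(n)
            nb = (s[(i + 1) % n][j] + s[(i - 1) % n][j]
                  + s[i][(j + 1) % n] + s[i][(j - 1) % n])
            dE = 2 * s[i][j] * nb            # energy change of flipping s[i][j]
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                s[i][j] *= -1
        if sweep >= measure_after:
            ms.append(abs(sum(map(sum, s))) / (n * n))
    return sum(ms) / len(ms)

m_cold, m_hot = metropolis_abs_m(T=1.0), metropolis_abs_m(T=5.0)
print(m_cold, m_hot)
```

Extracting the exponent ratios quoted in the abstract would additionally require finite-size scaling over several lattice sizes near the critical point.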
PeTTSy: a computational tool for perturbation analysis of complex systems biology models.
Domijan, Mirela; Brown, Paul E; Shulgin, Boris V; Rand, David A
2016-03-10
Over the last decade sensitivity analysis techniques have been shown to be very useful to analyse complex and high dimensional Systems Biology models. However, many of the currently available toolboxes have either used parameter sampling, been focused on a restricted set of model observables of interest, studied optimisation of an objective function, or have not dealt with multiple simultaneous model parameter changes where the changes can be permanent or temporary. Here we introduce our new, freely downloadable toolbox, PeTTSy (Perturbation Theory Toolbox for Systems). PeTTSy is a package for MATLAB which implements a wide array of techniques for the perturbation theory and sensitivity analysis of large and complex ordinary differential equation (ODE) based models. PeTTSy is a comprehensive modelling framework that introduces a number of new approaches and that fully addresses analysis of oscillatory systems. It examines sensitivity analysis of the models to perturbations of parameters, where the perturbation timing, strength, length and overall shape can be controlled by the user. This can be done in a system-global setting, namely, the user can determine how many parameters to perturb, by how much and for how long. PeTTSy also offers the user the ability to explore the effect of the parameter perturbations on many different types of outputs: period, phase (timing of peak) and model solutions. PeTTSy can be employed on a wide range of mathematical models including free-running and forced oscillators and signalling systems. To enable experimental optimisation using the Fisher Information Matrix, it efficiently allows one to combine multiple variants of a model (i.e. a model with multiple experimental conditions) in order to determine the value of new experiments. It is especially useful in the analysis of large and complex models involving many variables and parameters.
PeTTSy is a comprehensive tool for analysing large and complex models of regulatory and signalling systems. It allows for simulation and analysis of models under a variety of environmental conditions and for experimental optimisation of complex combined experiments. With its unique set of tools it makes a valuable addition to the current library of sensitivity analysis toolboxes. We believe that this software will be of great use to the wider biological, systems biology and modelling communities.
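The core operation such a toolbox automates, the sensitivity of a model output to a parameter perturbation, can be illustrated on a one-line ODE. This sketch is ours, not PeTTSy's MATLAB implementation; it checks a central-difference sensitivity against the analytic answer for dx/dt = -k·x:

```python
# Finite-difference parameter sensitivity of an ODE solution. For
# dx/dt = -k*x with x(0)=1, x(t) = exp(-k t), so d x(t)/dk = -t exp(-k t).
import math

def solve(k, t_end=2.0, n=2000):
    """Integrate dx/dt = -k*x, x(0)=1, with classical RK4 (fixed step count)."""
    dt = t_end / n
    x = 1.0
    for _ in range(n):
        k1 = -k * x
        k2 = -k * (x + 0.5 * dt * k1)
        k3 = -k * (x + 0.5 * dt * k2)
        k4 = -k * (x + dt * k3)
        x += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return x

k0, h, t_end = 0.7, 1e-5, 2.0
sens_fd = (solve(k0 + h) - solve(k0 - h)) / (2 * h)   # central difference
sens_exact = -t_end * math.exp(-k0 * t_end)           # analytic d/dk
print(sens_fd, sens_exact)
```

Toolboxes like PeTTSy generalise this to shaped, time-limited perturbations and to derived outputs such as period and phase of an oscillation.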
Zarb, Francis; McEntee, Mark F; Rainford, Louise
2015-06-01
To evaluate visual grading characteristics (VGC) and ordinal regression analysis during head CT optimisation as a potential alternative to visual grading assessment (VGA), traditionally employed to score anatomical visualisation. Patient images (n = 66) were obtained using current and optimised imaging protocols from two CT suites: a 16-slice scanner at the national Maltese centre for trauma and a 64-slice scanner in a private centre. Local resident radiologists (n = 6) performed VGA followed by VGC and ordinal regression analysis. VGC alone indicated that optimised protocols had image quality similar to that of current protocols. Ordinal logistic regression analysis provided an in-depth evaluation, criterion by criterion, allowing the selective implementation of the protocols. The local radiology review panel supported the implementation of optimised protocols for brain CT examinations (including trauma) in one centre, achieving radiation dose reductions ranging from 24 % to 36 %. In the second centre a 29 % reduction in radiation dose was achieved for follow-up cases. The combined use of VGC and ordinal logistic regression analysis led to clinical decisions being taken on the implementation of the optimised protocols. This improved method of image quality analysis provided the evidence to support imaging protocol optimisation, resulting in significant radiation dose savings. • There is a need for scientifically based image quality evaluation during CT optimisation. • VGC and ordinal regression analysis in combination led to better-informed clinical decisions. • VGC and ordinal regression analysis led to dose reductions without compromising diagnostic efficacy.
State-to-State Internal Energy Relaxation Following the Quantum-Kinetic Model in DSMC
NASA Technical Reports Server (NTRS)
Liechty, Derek S.
2014-01-01
A new model for chemical reactions, the Quantum-Kinetic (Q-K) model of Bird, has recently been introduced that does not depend on macroscopic rate equations or values of local flow field data. Subsequently, the Q-K model has been extended to include reactions involving charged species and electronic energy level transitions. Although this is a phenomenological model, it has been shown to accurately reproduce both equilibrium and non-equilibrium reaction rates. The usefulness of this model becomes clear as local flow conditions either exceed the conditions used to build previous models or when they depart from an equilibrium distribution. Presently, the applicability of the relaxation technique is investigated for the vibrational internal energy mode. The Forced Harmonic Oscillator (FHO) theory for vibrational energy level transitions is combined with the Q-K energy level transition model to accurately reproduce energy level transitions at a reduced computational cost compared to the older FHO models.
NASA Astrophysics Data System (ADS)
Couvidat, F.; Sartelet, K.
2014-01-01
The Secondary Organic Aerosol Processor (SOAP v1.0) model is presented. This model is designed to be modular with different user options depending on the computing time and the complexity required by the user. This model is based on the molecular surrogate approach, in which each surrogate compound is associated with a molecular structure to estimate some properties and parameters (hygroscopicity, absorption on the aqueous phase of particles, activity coefficients, phase separation). Each surrogate can be hydrophilic (condenses only on the aqueous phase of particles), hydrophobic (condenses only on the organic phase of particles) or both (condenses on both the aqueous and the organic phases of particles). Activity coefficients are computed with the UNIFAC thermodynamic model for short-range interactions and with the AIOMFAC parameterization for medium and long-range interactions between electrolytes and organic compounds. Phase separation is determined by Gibbs energy minimization. The user can choose between an equilibrium and a dynamic representation of the organic aerosol. In the equilibrium representation, compounds in the particle phase are assumed to be at equilibrium with the gas phase. However, recent studies show that the organic aerosol (OA) is not at equilibrium with the gas phase because the organic phase could be semi-solid (very viscous liquid phase). The condensation or evaporation of organic compounds could then be limited by the diffusion in the organic phase due to the high viscosity. A dynamic representation of secondary organic aerosols (SOA) is used with OA divided into layers, the first layer at the center of the particle (slowly reaches equilibrium) and the final layer near the interface with the gas phase (quickly reaches equilibrium).
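In its simplest ideal form, the equilibrium representation reduces to absorptive partitioning solved self-consistently for the organic aerosol mass. A sketch with illustrative volatilities follows (no activity coefficients or phase separation, unlike SOAP):

```python
# Self-consistent absorptive equilibrium partitioning (standard Pankow form).
# The particle fraction of species i is F_i = C_OA / (C*_i + C_OA), and the
# absorbing organic mass C_OA itself depends on what has condensed, so it is
# solved by fixed-point iteration. All concentrations are illustrative.

c_star = [0.1, 1.0, 10.0, 100.0]   # saturation concentrations, ug/m3
c_tot  = [1.0, 2.0, 4.0, 8.0]      # total (gas + particle) loadings, ug/m3
c_seed = 0.5                        # non-volatile seed organic mass, ug/m3

c_oa = c_seed
for _ in range(200):                # monotone fixed-point iteration
    c_oa = c_seed + sum(ct * c_oa / (cs + c_oa)
                        for cs, ct in zip(c_star, c_tot))

residual = abs(c_oa - (c_seed + sum(ct * c_oa / (cs + c_oa)
                                    for cs, ct in zip(c_star, c_tot))))
fractions = [c_oa / (cs + c_oa) for cs in c_star]   # particle fraction per species
print(c_oa, fractions)
```

The dynamic representation described in the abstract replaces this instantaneous equilibrium with diffusion-limited condensation through the layered, possibly viscous particle.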
NASA Technical Reports Server (NTRS)
Hawley, Suzanne L.; Fisher, George H.
1993-01-01
Solar flare model atmospheres computed under the assumption of energetic equilibrium in the chromosphere are presented. The models use a static, one-dimensional plane parallel geometry and are designed within a physically self-consistent coronal loop. Assumed flare heating mechanisms include collisions from a flux of non-thermal electrons and x-ray heating of the chromosphere by the corona. The heating by energetic electrons accounts explicitly for variations of the ionized fraction with depth in the atmosphere. X-ray heating of the chromosphere by the corona incorporates a flare loop geometry by approximating distant portions of the loop with a series of point sources, while treating the loop leg closest to the chromospheric footpoint in the plane-parallel approximation. Coronal flare heating leads to increased heat conduction, chromospheric evaporation and subsequent changes in coronal pressure; these effects are included self-consistently in the models. Cooling in the chromosphere is computed in detail for the important optically thick HI, CaII and MgII transitions using the non-LTE prescription in the program MULTI. Hydrogen ionization rates from x-ray photo-ionization and collisional ionization by non-thermal electrons are included explicitly in the rate equations. The models are computed in the 'impulsive' and 'equilibrium' limits, and in a set of intermediate 'evolving' states. The impulsive atmospheres have the density distribution frozen in pre-flare configuration, while the equilibrium models assume the entire atmosphere is in hydrostatic and energetic equilibrium. The evolving atmospheres represent intermediate stages where hydrostatic equilibrium has been established in the chromosphere and corona, but the corona is not yet in energetic equilibrium with the flare heating source. Thus, for example, chromospheric evaporation is still in the process of occurring.
Modeling Secondary Organic Aerosols over Europe: Impact of Activity Coefficients and Viscosity
NASA Astrophysics Data System (ADS)
Kim, Y.; Sartelet, K.; Couvidat, F.
2014-12-01
Semi-volatile organic species (SVOC) can condense on suspended particulate materials (PM) in the atmosphere. The modeling of condensation/evaporation of SVOC often assumes that gas-phase and particle-phase concentrations are at equilibrium. However, recent studies show that secondary organic aerosols (SOA) may not be accurately represented by an equilibrium approach between the gas and particle phases, because organic aerosols in the particle phase may be very viscous. The condensation in the viscous liquid phase is limited by the diffusion from the surface of PM to its core. Using a surrogate approach to represent SVOC, depending on the user's choice, the secondary organic aerosol processor (SOAP) may assume equilibrium or model dynamically the condensation/evaporation between the gas and particle phases to take into account the viscosity of organic aerosols. The model is implemented in the three-dimensional chemistry-transport model of POLYPHEMUS. In SOAP, activity coefficients for organic mixtures can be computed using UNIFAC for short-range interactions between molecules and AIOMFAC to also take into account the effect of inorganic species on activity coefficients. Simulations over Europe are performed and POLYPHEMUS/SOAP is compared to POLYPHEMUS/H2O, which was previously used to model SOA using the equilibrium approach with activity coefficients from UNIFAC. Impacts of the dynamic approach on modeling SOA over Europe are evaluated. The concentrations of SOA using the dynamic approach are compared with those using the equilibrium approach. The increase of computational cost is also evaluated.
A Synthesis of Equilibrium and Historical Models of Landform Development.
ERIC Educational Resources Information Center
Renwick, William H.
1985-01-01
The synthesis of two approaches that can be used in teaching geomorphology is described. The equilibrium approach explains landforms and landform change in terms of equilibrium between landforms and controlling processes. The historical approach draws on climatic geomorphology to describe the effects of Quaternary climatic and tectonic events on…
With the aid of three atmospheric aerosol equilibrium models, we quantify the effect of metastable equilibrium states (efflorescence branch) in comparison to stable (deliquescence branch) on the partitioning of total nitrate between the gas and aerosol phases. On average, efflore...
Multi-Group Maximum Entropy Model for Translational Non-Equilibrium
NASA Technical Reports Server (NTRS)
Jayaraman, Vegnesh; Liu, Yen; Panesi, Marco
2017-01-01
The aim of the current work is to describe a new model for flows in translational non-equilibrium. Starting from the statistical description of a gas proposed by Boltzmann, the model relies on a domain decomposition technique in velocity space. Using the maximum entropy principle, the logarithm of the distribution function in each velocity sub-domain (group) is expressed with a power series in molecular velocity. New governing equations are obtained using the method of weighted residuals by taking the velocity moments of the Boltzmann equation. The model is applied to a spatially homogeneous Boltzmann equation with a Bhatnagar-Gross-Krook (BGK) model collision operator, and the relaxation of an initial non-equilibrium distribution to a Maxwellian is studied using the model. In addition, numerical results obtained using the model for a 1D shock tube problem are also reported.
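The BGK relaxation studied here has a closed-form benchmark: with the moments held fixed, the distribution decays exponentially toward the Maxwellian, f(v,t) = M(v) + (f0(v) - M(v))·exp(-t/τ). A sketch with an illustrative counter-streaming initial condition:

```python
# Exact BGK relaxation of a bimodal velocity distribution to the Maxwellian
# sharing its density, drift, and temperature. Grid and initial state are
# illustrative; this is the benchmark, not the multi-group model itself.
import math

vs = [-5 + 0.1 * i for i in range(101)]
dv = 0.1

def maxwellian(v, n, u, T):
    return n / math.sqrt(2 * math.pi * T) * math.exp(-(v - u) ** 2 / (2 * T))

# initial condition: two counter-streaming beams with zero net drift
f0 = [0.5 * (maxwellian(v, 1, -2, 0.5) + maxwellian(v, 1, 2, 0.5)) for v in vs]

# moments of f0 (conserved by BGK) define the target Maxwellian
n = sum(f0) * dv
u = sum(f * v for f, v in zip(f0, vs)) * dv / n
T = sum(f * (v - u) ** 2 for f, v in zip(f0, vs)) * dv / n
M = [maxwellian(v, n, u, T) for v in vs]

def dist_to_eq(t, tau=1.0):
    """L1 distance of f(., t) from the Maxwellian, from the exact solution."""
    decay = math.exp(-t / tau)
    return sum(abs((a - b) * decay) for a, b in zip(f0, M)) * dv

d0, d3 = dist_to_eq(0.0), dist_to_eq(3.0)
print(d0, d3, d3 / d0)
```

A velocity-space multi-group method like the one in the abstract should reproduce this exponential approach to equilibrium as its homogeneous limiting case.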
The lagRST Model: A Turbulence Model for Non-Equilibrium Flows
NASA Technical Reports Server (NTRS)
Lillard, Randolph P.; Oliver, A. Brandon; Olsen, Michael E.; Blaisdell, Gregory A.; Lyrintzis, Anastasios S.
2011-01-01
This study presents a new class of turbulence model designed for wall-bounded, high Reynolds number flows with separation. The model addresses deficiencies seen in the modeling of nonequilibrium turbulent flows. These flows generally have variable adverse pressure gradients which cause the turbulent quantities to react at a finite rate to changes in the mean flow quantities. This "lag" in the response of the turbulent quantities cannot be modeled by most standard turbulence models, which are designed to model equilibrium turbulent boundary layers. The model presented uses a standard 2-equation model as the baseline for turbulent equilibrium calculations, but adds transport equations to account directly for non-equilibrium effects in the Reynolds Stress Tensor (RST) that are seen in large pressure gradients involving shock waves and separation. Comparisons are made to several standard turbulence modeling validation cases, including an incompressible boundary layer (both neutral and adverse pressure gradients), an incompressible mixing layer and a transonic bump flow. In addition, a hypersonic Shock Wave Turbulent Boundary Layer Interaction (SWTBLI) with separation is assessed along with a transonic capsule flow. Results show a substantial improvement over the baseline models for transonic separated flows. The results are mixed for the SWTBLI flows assessed. Separation predictions are not as good as the baseline models, but the over-prediction of the peak heat flux downstream of the reattachment shock that plagues many models is reduced.
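The "lag" idea can be caricatured with a single relaxation equation: the turbulence quantity approaches its equilibrium value at a finite rate and therefore trails a step change in the mean flow. This scalar sketch is a caricature of the concept, not the actual lagRST transport equations:

```python
# Scalar caricature of a lag model: dR/dt = a * (R_eq - R). After a step
# change in R_eq at t = 0, R relaxes exponentially and lags behind it.
import math

a, dt, steps = 2.0, 1e-3, 2000           # lag rate, time step, 2 s of time
R, R_eq = 0.0, 1.0                        # equilibrium value steps to 1 at t=0
for _ in range(steps):
    R += dt * a * (R_eq - R)              # forward-Euler integration

exact = R_eq * (1 - math.exp(-a * steps * dt))   # analytic solution
print(R, exact)
```

In the actual model the relaxed quantity is the full Reynolds stress tensor and the equilibrium target comes from the baseline 2-equation closure.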
NASA Astrophysics Data System (ADS)
Leal, Allan M. M.; Kulik, Dmitrii A.; Kosakowski, Georg
2016-02-01
We present a numerical method for multiphase chemical equilibrium calculations based on a Gibbs energy minimization approach. The method can accurately and efficiently determine the stable phase assemblage at equilibrium independently of the type of phases and species that constitute the chemical system. We have successfully applied our chemical equilibrium algorithm in reactive transport simulations to demonstrate its effective use in computationally intensive applications. We used FEniCS to solve the governing partial differential equations of mass transport in porous media using finite element methods in unstructured meshes. Our equilibrium calculations were benchmarked with GEMS3K, the numerical kernel of the geochemical package GEMS. This allowed us to compare our results with a well-established Gibbs energy minimization algorithm, as well as their performance on every mesh node, at every time step of the transport simulation. The benchmark shows that our novel chemical equilibrium algorithm is accurate, robust, and efficient for reactive transport applications, and it is an improvement over the Gibbs energy minimization algorithm used in GEMS3K. The proposed chemical equilibrium method has been implemented in Reaktoro, a unified framework for modeling chemically reactive systems, which is now used as an alternative numerical kernel of GEMS.
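The minimisation principle itself can be shown on a toy ideal two-species equilibrium with a known answer. This illustrates Gibbs energy minimisation only, not Reaktoro's multiphase algorithm; the chemical potentials are invented so that the analytic equilibrium is x_B = 2/3:

```python
# Toy Gibbs energy minimisation for an ideal equilibrium A <-> B with
# mu0_B - mu0_A = -RT ln 2, solved by golden-section search. Setting
# dG/dx = 0 gives x_B/(1 - x_B) = 2, i.e. x_B = 2/3.
import math

RT = 1.0
mu0 = {"A": 0.0, "B": -RT * math.log(2.0)}

def gibbs(x_b):
    """Molar Gibbs energy of an ideal A/B mixture, mole fraction x_b of B."""
    x_a = 1.0 - x_b
    return (x_a * (mu0["A"] + RT * math.log(x_a))
            + x_b * (mu0["B"] + RT * math.log(x_b)))

def golden_min(f, lo, hi, iters=100):
    """Golden-section search for the minimiser of a unimodal function."""
    g = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    c, d = b - g * (b - a), a + g * (b - a)
    for _ in range(iters):
        if f(c) < f(d):
            b, d = d, c
            c = b - g * (b - a)
        else:
            a, c = c, d
            d = a + g * (b - a)
    return (a + b) / 2

x_eq = golden_min(gibbs, 1e-9, 1 - 1e-9)
print(x_eq)   # analytic equilibrium: x_B = 2/3
```

Real multiphase solvers replace the scalar search with constrained minimisation over all species amounts subject to elemental mass balance, which is what makes phase-assemblage prediction hard.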
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pusateri, Elise N.; Morris, Heidi E.; Nelson, Eric
2016-10-17
Here, atmospheric electromagnetic pulse (EMP) events are important physical phenomena that occur through both man-made and natural processes. Radiation-induced currents and voltages in EMP can couple with electrical systems, such as those found in satellites, and cause significant damage. Due to the disruptive nature of EMP, it is important to accurately predict EMP evolution and propagation with computational models. CHAP-LA (Compton High Altitude Pulse-Los Alamos) is a state-of-the-art EMP code that solves Maxwell's equations for gamma source-induced electromagnetic fields in the atmosphere. In EMP, low-energy conduction electrons constitute a conduction current that limits the EMP by opposing the Compton current. CHAP-LA calculates the conduction current using an equilibrium ohmic model. The equilibrium model works well at low altitudes, where the electron energy equilibration time is short compared to the rise time or duration of the EMP. At high altitudes, the equilibration time increases beyond the EMP rise time and the predicted equilibrium ionization rate becomes very large. The ohmic model predicts an unphysically large production of conduction electrons which prematurely and abruptly shorts the EMP in the simulation code. An electron swarm model, which implicitly accounts for the time evolution of the conduction electron energy distribution, can be used to overcome the limitations exhibited by the equilibrium ohmic model. We have developed and validated an electron swarm model previously in Pusateri et al. (2015). Here we demonstrate EMP damping behavior caused by the ohmic model at high altitudes and show improvements on high-altitude, upward EMP modeling obtained by integrating a swarm model into CHAP-LA.
NASA Astrophysics Data System (ADS)
Carvalho, David Joao da Silva
Portugal's high dependence on foreign energy sources (mainly fossil fuels), together with its international commitments and national energy-policy strategy, as well as resource sustainability and climate change issues, inevitably forces Portugal to invest in its energy self-sufficiency. The 20/20/20 Strategy defined by the European Union requires that, by 2020, 60% of total electricity consumption must come from renewable energy sources. Wind energy is currently a major source of electricity generation in Portugal, producing about 23% of the national total electricity consumption in 2013. The National Energy Strategy 2020 (ENE2020), which aims to ensure national compliance with the European 20/20/20 Strategy, states that about half of this 60% target will be provided by wind energy. This work aims to implement and optimise a numerical weather prediction model in the simulation and modelling of the wind energy resource in Portugal, both in offshore and onshore areas. The numerical model optimisation consisted of determining which initial and boundary conditions and planetary boundary layer physical parameterization options provide wind power flux (or energy density), wind speed and direction simulations closest to in situ measured wind data. Specifically for offshore areas, it is also intended to evaluate if the numerical model, once optimised, is able to produce power flux, wind speed and direction simulations more consistent with in situ measured data than wind measurements collected by satellites. This work also aims to study and analyse possible impacts that anthropogenic climate changes may have on the future wind energy resource in Europe.
The results show that the ECMWF reanalysis ERA-Interim are those that, among all the forcing databases currently available to drive numerical weather prediction models, allow wind power flux, wind speed and direction simulations more consistent with in situ wind measurements. It was also found that the Pleim-Xiu and ACM2 planetary boundary layer parameterizations are the ones that showed the best performance in terms of wind power flux, wind speed and direction simulations. This model optimisation allowed a significant reduction of the wind power flux, wind speed and direction simulations errors and, specifically for offshore areas, wind power flux, wind speed and direction simulations more consistent with in situ wind measurements than data obtained from satellites, which is a very valuable and interesting achievement. This work also revealed that future anthropogenic climate changes can negatively impact future European wind energy resource, due to tendencies towards a reduction in future wind speeds especially by the end of the current century and under stronger radiative forcing conditions.
Bahlman, Joseph W.; Swartz, Sharon M.; Riskin, Daniel K.; Breuer, Kenneth S.
2013-01-01
Gliding is an efficient form of travel found in every major group of terrestrial vertebrates. Gliding is often modelled in equilibrium, where aerodynamic forces exactly balance body weight resulting in constant velocity. Although the equilibrium model is relevant for long-distance gliding, such as soaring by birds, it may not be realistic for shorter distances between trees. To understand the aerodynamics of inter-tree gliding, we used direct observation and mathematical modelling. We used videography (60–125 fps) to track and reconstruct the three-dimensional trajectories of northern flying squirrels (Glaucomys sabrinus) in nature. From their trajectories, we calculated velocities, aerodynamic forces and force coefficients. We determined that flying squirrels do not glide at equilibrium, and instead demonstrate continuously changing velocities, forces and force coefficients, and generate more lift than needed to balance body weight. We compared observed glide performance with mathematical simulations that use constant force coefficients, a characteristic of equilibrium glides. Simulations with varying force coefficients, such as those of live squirrels, demonstrated better whole-glide performance compared with the theoretical equilibrium state. Using results from both the observed glides and the simulation, we describe the mechanics and execution of inter-tree glides, and then discuss how gliding behaviour may relate to the evolution of flapping flight. PMID:23256188
Mahboobi-Ardakan, Payman; Kazemian, Mahmood; Mehraban, Sattar
2017-01-01
CONTEXT: Across successive planning periods, human resources in the health-care sector have increased considerably. AIMS: The main goal is to determine the economic planning conditions and equilibrium growth for service levels and specialised workforce resources in the health-care sector, and to determine the gap between the levels of health-care services and specialised workforce resources under equilibrium growth conditions and their actual levels during the first to fourth development plans in Iran. MATERIALS AND METHODS: After data collection, econometric methods and EViews version 8.0 were used for data processing. The model used was based on the neoclassical economic growth model. RESULTS: The results indicated that, although the specialised workforce in the health-care sector increased significantly during the former planning periods, a lack of attention to equilibrium growth conditions caused an imbalance between the product level and the specialised workforce in the health-care sector. CONCLUSIONS: The past development plans for health services did not consider equilibrium conditions based on full employment of the capital stock and specialised labour. By choosing policies determined by the growth model, the government could achieve the equilibrium level of human resources and services during the next planning periods. PMID:28616419
Statistical physics of the spatial Prisoner's Dilemma with memory-aware agents
NASA Astrophysics Data System (ADS)
Javarone, Marco Alberto
2016-02-01
We introduce an analytical model to study the evolution towards equilibrium in spatial games, with `memory-aware' agents, i.e., agents that accumulate their payoff over time. In particular, we focus our attention on the spatial Prisoner's Dilemma, as it constitutes an emblematic example of a game whose Nash equilibrium is defection. Previous investigations showed that, under opportune conditions, it is possible to reach, in the evolutionary Prisoner's Dilemma, an equilibrium of cooperation. Notably, it seems that mechanisms like motion may lead a population to become cooperative. In the proposed model, we map agents to particles of a gas so that, on varying the system temperature, they randomly move. In doing so, we are able to identify a relation between the temperature and the final equilibrium of the population, explaining how it is possible to break the classical Nash equilibrium in the spatial Prisoner's Dilemma when considering agents able to increase their payoff over time. Moreover, we introduce a formalism to study order-disorder phase transitions in these dynamics. As a result, we highlight that the proposed model allows us to explain analytically how a population whose interactions are based on the Prisoner's Dilemma can reach an equilibrium far from the expected one, also opening the way to a direct link between evolutionary game theory and statistical physics.
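The accumulated-payoff dynamics described above can be sketched numerically. The following toy lattice simulation is not the paper's analytical gas-like model; the lattice size, payoff values and the Fermi imitation rule are illustrative assumptions. Agents accumulate payoff over rounds (the "memory-aware" ingredient) and copy a neighbour with a temperature-like noise parameter:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20                              # lattice side
T_, R, P, S = 1.5, 1.0, 0.1, 0.0    # PD payoffs: temptation, reward, punishment, sucker
beta = 2.0                          # inverse "temperature" controlling imitation noise

strategy = rng.integers(0, 2, size=(N, N))   # 1 = cooperate, 0 = defect
accumulated = np.zeros((N, N))               # memory-aware: payoffs add up over time

def round_payoff(s):
    """Payoff of each site against its four lattice neighbours (periodic)."""
    pay = np.zeros(s.shape)
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nb = np.roll(s, shift, axis=(0, 1))
        pay += np.where(s == 1, np.where(nb == 1, R, S),
                                np.where(nb == 1, T_, P))
    return pay

for step in range(200):
    accumulated += round_payoff(strategy)
    # Fermi-rule imitation of one random neighbour, based on accumulated payoff
    shift = [(1, 0), (-1, 0), (0, 1), (0, -1)][rng.integers(4)]
    nb_strat = np.roll(strategy, shift, axis=(0, 1))
    nb_pay = np.roll(accumulated, shift, axis=(0, 1))
    p_copy = 1.0 / (1.0 + np.exp(-beta * (nb_pay - accumulated)))
    copy = rng.random((N, N)) < p_copy
    strategy = np.where(copy, nb_strat, strategy)

coop_fraction = strategy.mean()
print(f"final cooperation fraction: {coop_fraction:.2f}")
```

Sweeping `beta` is the natural experiment here: low `beta` mimics noisy, gas-like agents, while high `beta` approaches deterministic imitation.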
NASA Astrophysics Data System (ADS)
Balog, Ivan; Tarjus, Gilles; Tissier, Matthieu
2018-03-01
We show that, contrary to previous suggestions based on computer simulations or erroneous theoretical treatments, the critical points of the random-field Ising model out of equilibrium, when quasistatically changing the applied source at zero temperature, and in equilibrium are not in the same universality class below some critical dimension d_DR ≈ 5.1. We demonstrate this by implementing a nonperturbative functional renormalization group for the associated dynamical field theory. Above d_DR, the avalanches, which characterize the evolution of the system at zero temperature, become irrelevant at large distance, and hysteresis and equilibrium critical points are then controlled by the same fixed point. We explain how to use computer simulation and finite-size scaling to check the correspondence between in- and out-of-equilibrium criticality in a far less ambiguous way than done so far.
Analytical expressions for the evolution of many-body quantum systems quenched far from equilibrium
NASA Astrophysics Data System (ADS)
Santos, Lea F.; Torres-Herrera, E. Jonathan
2017-12-01
Possible strategies to describe analytically the dynamics of many-body quantum systems out of equilibrium include the use of solvable models and of full random matrices. Neither approach represents an actual realistic system, but both serve as references for studies of realistic systems. We take the second path and obtain analytical expressions for the survival probability, density imbalance, and out-of-time-ordered correlator. Using these findings, we then propose an approximate expression that matches very well numerical results for the evolution of realistic finite quantum systems that are strongly chaotic and quenched far from equilibrium. In the case of the survival probability, the expression proposed covers all time scales, from the moment the system is taken out of equilibrium to the moment it reaches a new equilibrium. The realistic systems considered are described by one-dimensional spin-1/2 models.
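The full-random-matrix route to the survival probability can be illustrated directly. This sketch (the matrix dimension and the choice of the Gaussian orthogonal ensemble are illustrative assumptions) computes SP(t) = |<psi(0)|exp(-iHt)|psi(0)>|^2 in the eigenbasis of a sampled matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 200                                   # Hilbert-space dimension

# Full random matrix from the Gaussian orthogonal ensemble (GOE)
A = rng.normal(size=(D, D))
H = (A + A.T) / 2.0

E, V = np.linalg.eigh(H)                  # eigenvalues / eigenvectors
psi0 = np.zeros(D); psi0[0] = 1.0         # initial (quenched) basis state
c = V.T @ psi0                            # components of psi0 in the eigenbasis

def survival_probability(t):
    """SP(t) = |<psi0| e^{-iHt} |psi0>|^2, evaluated in the eigenbasis."""
    amp = np.sum(np.abs(c) ** 2 * np.exp(-1j * E * t))
    return np.abs(amp) ** 2

times = np.linspace(0.0, 2.0, 50)
sp = np.array([survival_probability(t) for t in times])
print(f"SP(0) = {sp[0]:.3f}, SP(t_final) = {sp[-1]:.3e}")
```

SP(0) = 1 by construction, and averaging over many sampled matrices would reveal the characteristic decay and correlation-hole structure studied analytically in the paper.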
Game-theoretic equilibrium analysis applications to deregulated electricity markets
NASA Astrophysics Data System (ADS)
Joung, Manho
This dissertation examines game-theoretic equilibrium analysis applications to deregulated electricity markets. In particular, three specific applications are discussed: analyzing the competitive effects of ownership of financial transmission rights, developing a dynamic game model considering the ramp rate constraints of generators, and analyzing strategic behavior in electricity capacity markets. In the financial transmission right application, an investigation is made of how generators' ownership of financial transmission rights may influence the effects of the transmission lines on competition. In the second application, the ramp rate constraints of generators are explicitly modeled using a dynamic game framework, and the equilibrium is characterized as the Markov perfect equilibrium. Finally, the strategic behavior of market participants in electricity capacity markets is analyzed and it is shown that the market participants may exaggerate their available capacity in a Nash equilibrium. It is also shown that the more conservative the independent system operator's capacity procurement, the higher the risk of exaggerated capacity offers.
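As a generic illustration of how equilibria in such market models are computed (this is a textbook Cournot duopoly, not any of the dissertation's specific capacity- or transmission-market models), best-response iteration converges to the Nash equilibrium quantity:

```python
# Cournot duopoly with inverse demand p = a - b*(q1 + q2) and constant
# marginal cost c. Best response: q_i = (a - c - b*q_j) / (2*b).
a, b, c = 100.0, 1.0, 10.0

def best_response(q_other):
    return max(0.0, (a - c - b * q_other) / (2.0 * b))

q1 = q2 = 0.0
for _ in range(100):                  # iterate best responses to a fixed point
    q1, q2 = best_response(q2), best_response(q1)

q_star = (a - c) / (3.0 * b)          # analytic Cournot-Nash quantity
print(f"q1 = {q1:.3f}, q2 = {q2:.3f}, analytic = {q_star:.3f}")
```

The iteration is a contraction here (the best-response slope is -1/2), which is why the fixed point, the Nash equilibrium, is reached from any starting quantities.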
NASA Astrophysics Data System (ADS)
Donato, M. B.; Milasi, M.; Vitanza, C.
2010-09-01
An existence result of a Walrasian equilibrium for an integrated model of exchange, consumption and production is obtained. The equilibrium model is characterized in terms of a suitable generalized quasi-variational inequality; so the existence result comes from an original technique which takes into account tools of convex and set-valued analysis.
The partitioning of total nitrate (TNO3) and total ammonium (TNH4) between gas and aerosol phases is studied with two thermodynamic equilibrium models, ISORROPIA and AIM, and three datasets: high time-resolution measurement data from the 1999 Atlanta SuperSite Experiment and from...
Liu, Y; Allen, R
2002-09-01
The study aimed to model the cerebrovascular system, using a linear ARX model based on data simulated by a comprehensive physiological model, and to assess the range of applicability of linear parametric models. Arterial blood pressure (ABP) and middle cerebral arterial blood flow velocity (MCAV) were measured from 11 subjects non-invasively, following step changes in ABP, using the thigh cuff technique. By optimising parameters associated with autoregulation, using a non-linear optimisation technique, the physiological model showed a good performance (r=0.83+/-0.14) in fitting MCAV. An additional five sets of measured ABP of length 236+/-154 s were acquired from a subject at rest. These were normalised and rescaled to coefficients of variation (CV=SD/mean) of 2% and 10% for model comparisons. Randomly generated Gaussian noise with standard deviation (SD) from 1% to 5% was added to both ABP and physiologically simulated MCAV (SMCAV), with 'normal' and 'impaired' cerebral autoregulation, to simulate the real measurement conditions. ABP and SMCAV were fitted by ARX modelling, and cerebral autoregulation was quantified by a 5 s recovery percentage R5% of the step responses of the ARX models. The study suggests that cerebral autoregulation can be assessed by computing the R5% of the step response of an ARX model of appropriate order, even when measurement noise is considerable.
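The ARX identification and step-response analysis can be sketched as follows. All signals are synthetic, and the 5-s recovery percentage R5% is computed with one plausible definition (percentage return of the step response towards baseline within 5 s), which may differ from the paper's exact formula:

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 10.0                                   # sampling rate in Hz (assumed)
n = 500
u = rng.normal(size=n)                      # synthetic ABP fluctuations

# Synthetic "true" dynamics: y(k) = 0.8 y(k-1) + 0.5 u(k) - 0.45 u(k-1)
y = np.zeros(n)
for k in range(1, n):
    y[k] = 0.8 * y[k - 1] + 0.5 * u[k] - 0.45 * u[k - 1]
y = y + 0.01 * rng.normal(size=n)           # measurement noise

# Fit an ARX(1, 2) model by ordinary least squares on the regressor matrix
Phi = np.column_stack([y[:-1], u[1:], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a1, b0, b1 = theta

# Step response of the fitted model, and a 5-s recovery percentage
steps = int(5 * fs)
ys = np.zeros(steps)
ys[0] = b0                                  # unit step applied from k = 0
for k in range(1, steps):
    ys[k] = a1 * ys[k - 1] + b0 + b1
R5 = 100.0 * (ys[0] - ys[-1]) / ys[0]       # % return towards baseline in 5 s
print(f"a1={a1:.2f}, b0={b0:.2f}, b1={b1:.2f}, R5%={R5:.1f}")
```

The synthetic system is built so that flow jumps with a pressure step and then "autoregulates" back towards baseline, mimicking the shape the R5% index is meant to quantify.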
Dynamical System Analysis of Reynolds Stress Closure Equations
NASA Technical Reports Server (NTRS)
Girimaji, Sharath S.
1997-01-01
In this paper, we establish the causality between the model coefficients in the standard pressure-strain correlation model and the predicted equilibrium states for homogeneous turbulence. We accomplish this by performing a comprehensive fixed point analysis of the modeled Reynolds stress and dissipation rate equations. The results from this analysis will be very useful for developing improved pressure-strain correlation models to yield observed equilibrium behavior.
GROUNDWATER MASS TRANSPORT AND EQUILIBRIUM CHEMISTRY MODEL FOR MULTICOMPONENT SYSTEMS
A mass transport model, TRANQL, for a multicomponent solution system has been developed. The equilibrium interaction chemistry is posed independently of the mass transport equations which leads to a set of algebraic equations for the chemistry coupled to a set of differential equ...
Trinh, T T; van Erp, T S; Bedeaux, D; Kjelstrup, S; Grande, C A
2015-03-28
Thermodynamic equilibrium for adsorption means that the chemical potential of gas and adsorbed phase are equal. A precise knowledge of the chemical potential is, however, often lacking, because the activity coefficient of the adsorbate is not known. Adsorption isotherms are therefore commonly fitted to ideal models such as the Langmuir, Sips or Henry models. We propose here a new procedure to find the activity coefficient and the equilibrium constant for adsorption which uses the thermodynamic factor. Instead of fitting the data to a model, we calculate the thermodynamic factor and use this to find first the activity coefficient. We show, using published molecular simulation data, how this procedure gives the thermodynamic equilibrium constant and enthalpies of adsorption for CO2(g) on graphite. We also use published experimental data to find similar thermodynamic properties of CO2(g) and of CH4(g) adsorbed on activated carbon. The procedure gives a higher accuracy in the determination of enthalpies of adsorption than ideal models do.
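The central quantity of the proposed procedure, the thermodynamic factor Gamma = d ln p / d ln q, can be computed numerically from isotherm data. This sketch uses synthetic Langmuir data (the parameter values are illustrative, not the paper's) and checks the numerical Gamma against the known Langmuir result 1/(1 - theta):

```python
import numpy as np

q_max, K = 10.0, 0.5                     # Langmuir parameters (illustrative)
p = np.geomspace(0.01, 20.0, 400)        # gas pressure
q = q_max * K * p / (1.0 + K * p)        # Langmuir loading
theta = q / q_max                        # fractional coverage

# Thermodynamic factor Gamma = d ln p / d ln q, computed numerically
Gamma = np.gradient(np.log(p)) / np.gradient(np.log(q))
Gamma_langmuir = 1.0 / (1.0 - theta)     # analytic result for Langmuir
print(f"Gamma at theta={theta[200]:.2f}: "
      f"numeric {Gamma[200]:.3f}, analytic {Gamma_langmuir[200]:.3f}")
```

With measured or simulated isotherm data in place of the Langmuir curve, the same numerical derivative gives Gamma without assuming an ideal model, which is the point of the authors' procedure.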
NASA Astrophysics Data System (ADS)
Zimmerling, Clemens; Dörr, Dominik; Henning, Frank; Kärger, Luise
2018-05-01
Due to their high mechanical performance, continuous fibre reinforced plastics (CoFRP) are becoming increasingly important for load-bearing structures. In many cases, manufacturing CoFRPs comprises a forming process of textiles. To predict and optimise the forming behaviour of a component, numerical simulations are applied. However, for maximum part quality, both the geometry and the process parameters must match in mutual regard, which in turn requires numerous numerically expensive optimisation iterations. In both textile and metal forming, much research has focused on determining optimum process parameters whilst regarding the geometry as invariable. In this work, a meta-model-based approach at the component level is proposed that provides a rapid estimation of the formability of variable geometries based on pre-sampled, physics-based draping data. Initially, a geometry recognition algorithm scans the geometry and extracts a set of doubly curved regions with relevant geometry parameters. If the relevant parameter space is not part of an underlying database, additional samples are drawn via Finite-Element draping simulations according to a suitable design table for computer experiments. Time-saving parallel runs of the physical simulations accelerate the data acquisition. Ultimately, a Gaussian regression meta-model is built from the database. The method is demonstrated on a box-shaped generic structure. The predicted results are in good agreement with physics-based draping simulations. Since evaluations of the established meta-model are numerically inexpensive, any further design exploration (e.g. robustness analysis or design optimisation) can be performed in short time. It is expected that the proposed method also offers great potential for future applications along virtual process chains: for each process step along the chain, a meta-model can be set up to predict the impact of design variations on manufacturability and part performance.
Thus, the method is considered to facilitate a lean and economic part and process design under consideration of manufacturing effects.
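A minimal version of such a surrogate can be sketched with Gaussian process regression on a single geometry parameter. The kernel, length scale and the synthetic "draping data" below are all illustrative assumptions, not the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(3)

def rbf_kernel(X1, X2, length=0.3, variance=1.0):
    """Squared-exponential covariance between two 1-D parameter sets."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / length ** 2)

# Pre-sampled "draping data": geometry parameter x -> formability measure y
X = np.linspace(0.0, 1.0, 12)
y = np.sin(2 * np.pi * X) + 0.05 * rng.normal(size=X.size)

noise = 1e-3
K = rbf_kernel(X, X) + noise * np.eye(X.size)
alpha = np.linalg.solve(K, y)             # precomputed weights

def predict(x_new):
    """GP posterior mean at new geometry parameters."""
    return rbf_kernel(np.atleast_1d(x_new), X) @ alpha

print(f"prediction at 0.25: {predict(0.25)[0]:.3f}")
```

Once `alpha` is precomputed, each surrogate evaluation is a single kernel product, which is what makes design exploration on the meta-model so much cheaper than rerunning draping simulations.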
On the design and optimisation of new fractal antenna using PSO
NASA Astrophysics Data System (ADS)
Rani, Shweta; Singh, A. P.
2013-10-01
An optimisation technique for newly shaped fractal structure using particle swarm optimisation with curve fitting is presented in this article. The aim of particle swarm optimisation is to find the geometry of the antenna for the required user-defined frequency. To assess the effectiveness of the presented method, a set of representative numerical simulations have been done and the results are compared with the measurements from experimental prototypes built according to the design specifications coming from the optimisation procedure. The proposed fractal antenna resonates at the 5.8 GHz industrial, scientific and medical band which is suitable for wireless telemedicine applications. The antenna characteristics have been studied using extensive numerical simulations and are experimentally verified. The antenna exhibits well-defined radiation patterns over the band.
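A standard particle swarm optimiser of the kind used here can be sketched in a few lines. The objective below is a placeholder sphere function standing in for the electromagnetic model that maps antenna geometry to the 5.8 GHz target; the inertia and acceleration constants are common textbook values, not the article's settings:

```python
import numpy as np

rng = np.random.default_rng(4)

def objective(x):
    """Placeholder cost: a real run would score the distance of the simulated
    resonant frequency from the 5.8 GHz target; a sphere function is used here."""
    return np.sum(x ** 2, axis=-1)

n_particles, dim, iters = 30, 4, 200
w, c1, c2 = 0.72, 1.49, 1.49                     # common PSO constants

pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = objective(pos)
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = objective(pos)
    improved = val < pbest_val
    pbest[improved] = pos[improved]
    pbest_val[improved] = val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print(f"best cost after {iters} iterations: {pbest_val.min():.2e}")
```

Swapping `objective` for a call into an electromagnetic solver (plus the curve-fitting step the article describes) turns this skeleton into the geometry search described above.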
Equilibrium and Effective Climate Sensitivity
NASA Astrophysics Data System (ADS)
Rugenstein, M.; Bloch-Johnson, J.
2016-12-01
Atmosphere-ocean general circulation models, as well as the real world, take thousands of years to equilibrate to CO2-induced radiative perturbations. Equilibrium climate sensitivity - the response to a fully equilibrated 2xCO2 perturbation - has been used for decades as a benchmark in model intercomparisons, as a test of our understanding of the climate system and paleo proxies, and to predict or project future climate change. Computational costs and limited time lead to the widespread practice of extrapolating equilibrium conditions from just a few decades of coupled simulations. The most common workaround is the "effective climate sensitivity" - defined through an extrapolation of a 150-year abrupt2xCO2 simulation, under the assumption of linear climate feedbacks. The definitions of effective and equilibrium climate sensitivity are often mixed up and used interchangeably, and it is argued that "transient climate sensitivity" is the more relevant measure for predicting the next decades. We present an ongoing model intercomparison, the "LongRunMIP", to study century and millennial time scales of AOGCM equilibration and the linearity assumptions around feedback analysis. As a true ensemble of opportunity, it has no protocol, and the only condition for participation is a coupled model simulation of any stabilizing scenario run for more than 1000 years. Many of the submitted simulations took several years to conduct. As of July 2016 the contribution comprises 27 scenario simulations of 13 different models originating from 7 modeling centers, each between 1000 and 6000 years. To contribute, please contact the authors as soon as possible. We present preliminary results, discussing differences between effective and equilibrium climate sensitivity, the usefulness of transient climate sensitivity, extrapolation methods, and the state of the coupled climate system close to equilibrium.
[Figure: Evolution of temperature anomaly and radiative imbalance of 22 simulations with 12 models (color indicates the model); 20-year moving average.]
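The extrapolation that defines effective climate sensitivity is commonly done as a Gregory-style regression of radiative imbalance against temperature anomaly. This sketch applies it to a synthetic abrupt2xCO2 series; the forcing, feedback and warming-trajectory values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic abrupt2xCO2 series: N = F + lam*T with F = 3.7 W/m^2, lam = -1.2 W/m^2/K
F_true, lam_true = 3.7, -1.2
T = 3.0 * (1.0 - np.exp(-np.arange(150) / 30.0))    # warming trajectory (K)
N = F_true + lam_true * T + 0.3 * rng.normal(size=T.size)   # noisy TOA imbalance

# Gregory regression: fit N = F + lam*T, then extrapolate to N = 0
lam, F = np.polyfit(T, N, 1)
effective_cs = -F / lam
print(f"effective climate sensitivity ~ {effective_cs:.2f} K")
```

The assumption of constant (linear) feedbacks is exactly the point of contention the abstract raises: in long coupled runs `lam` drifts over time, so this 150-year extrapolation and the true equilibrium warming can differ.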
Non-Equilibrium Dynamics with Quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Dong, Qiaoyuan
This work is motivated by the fact that the investigation of non-equilibrium phenomena in strongly correlated electron systems has developed into one of the most active and exciting branches of condensed matter physics, as it provides rich new insights that could not be obtained from the study of equilibrium situations. However, a theoretical description of those phenomena is missing. Therefore, in this thesis, we develop a numerical method that can be used to study two minimal models, the Hubbard model and the Anderson impurity model, with general parameter range and time dependence. We begin by introducing the theoretical framework and the general features of the Hubbard model. We then describe the dynamical mean field theory (DMFT), which was first invented by Georges in 1992. It provides a feasible way to approach strongly correlated electron systems and reduces the complexity of the calculations via a mapping of lattice models onto quantum impurity models subject to a self-consistency condition. We employ the non-equilibrium extension of DMFT and map the Hubbard model to the single impurity Anderson model (SIAM). Since the fundamental component of the DMFT method is a solver of the single impurity Anderson model, we continue with a description of the formalism to study the real-time dynamics of the impurity model starting from its thermal equilibrium state. We utilize the non-equilibrium strong-coupling perturbation theory and derive semi-analytical approximation methods such as the non-crossing approximation (NCA) and the one-crossing approximation (OCA). We then use the quantum Monte Carlo method (QMC) as a numerically exact method and present proper measurements of local observables, current and Green's functions. We perform simulations of the current after a quantum quench from equilibrium by rapidly applying a bias voltage in a wide range of initial temperatures.
The current exhibits short equilibrium times and saturates upon the decrease of temperature at all times, indicating Kondo behavior both in the transient regime and in the steady state. However, this bare QMC solver suffers from a dynamical sign problem for long time propagations. To overcome the limitations of this bare treatment, we introduce the "Inchworm algorithm", based on iteratively reusing the information obtained in previous steps to extend the propagation to longer times and stabilize the calculations. We show that this algorithm greatly reduces the required order for each simulation and rescales the exponential challenge to quadratic in time. We introduce a method to compute Green's functions, spectral functions, and currents for inchworm Monte Carlo and show how systematic error assessments in real time can be obtained. We illustrate the capabilities of the algorithm with a study of the behavior of quantum impurities after an instantaneous voltage quench from a thermal equilibrium state. We conclude with the applications of the unbiased inchworm impurity solver to DMFT calculations. We employ the methods for a study of the one-band paramagnetic Hubbard model on the Bethe lattice in equilibrium, where the DMFT approximation becomes exact. We begin with a brief introduction of the Mott metal-insulator phase diagram. We present the results of both real-time Green's functions and spectral functions from our nonequilibrium calculations. We observe the metal-insulator crossover as the on-site interaction is increased and the formation of a quasi-particle peak as the temperature is lowered. We also illustrate the convergence of our algorithms in different aspects.
Once more on the equilibrium-point hypothesis (lambda model) for motor control.
Feldman, A G
1986-03-01
The equilibrium control hypothesis (lambda model) is considered with special reference to the following concepts: (a) the length-force invariant characteristic (IC) of the muscle together with central and reflex systems subserving its activity; (b) the tonic stretch reflex threshold (lambda) as an independent measure of central commands descending to alpha and gamma motoneurons; (c) the equilibrium point, defined in terms of lambda, IC and static load characteristics, which is associated with the notion that posture and movement are controlled by a single mechanism; and (d) the muscle activation area (a reformulation of the "size principle")--the area of kinematic and command variables in which a rank-ordered recruitment of motor units takes place. The model is used for the interpretation of various motor phenomena, particularly electromyographic patterns. The stretch reflex in the lambda model has no mechanism to follow up a certain muscle length prescribed by central commands. Rather, its task is to bring the system to an equilibrium, load-dependent position. Another currently popular version defines the equilibrium point concept in terms of alpha motoneuron activity alone (the alpha model). Although the model imitates (as does the lambda model) spring-like properties of motor performance, it nevertheless is inconsistent with a substantial data base on intact motor control. An analysis of alpha models, including their treatment of motor performance in deafferented animals, reveals that they suffer from grave shortcomings. It is concluded that parameterization of the stretch reflex is a basis for intact motor control. Muscle deafferentation impairs this graceful mechanism though it does not remove the possibility of movement.
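The equilibrium point defined in (c) can be made concrete with a piecewise-linear invariant characteristic. All numbers below are illustrative; the point is that shifting the central command lambda shifts the equilibrium, so posture and movement can be controlled by one mechanism:

```python
# Equilibrium point in the lambda model: the invariant characteristic (IC)
# intersects the static load characteristic. A piecewise-linear IC is assumed
# here: F(x) = a*(x - lam) for x > lam, else 0 (all values illustrative).
a = 2.0            # IC stiffness
lam = 0.5          # tonic stretch reflex threshold (the central command)
load = 1.2         # constant external load

x_eq = lam + load / a            # equilibrium length where F(x) = load

# Shifting the central command lam shifts the equilibrium point, i.e. it
# produces movement without changing the reflex machinery itself.
x_eq_shifted = (lam - 0.3) + load / a
print(x_eq, x_eq_shifted)
```

With a constant load, the equilibrium length tracks lambda one-to-one, which is the sense in which a single descending parameter specifies both posture and movement endpoints.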
Non-Equilibrium Turbulence and Two-Equation Modeling
NASA Technical Reports Server (NTRS)
Rubinstein, Robert
2011-01-01
Two-equation turbulence models are analyzed from the perspective of spectral closure theories. Kolmogorov theory provides useful information for models, but it is limited to equilibrium conditions in which the energy spectrum has relaxed to a steady state consistent with the forcing at large scales; it does not describe transient evolution between such states. Transient evolution is necessarily through nonequilibrium states, which can only be found from a theory of turbulence evolution, such as one provided by a spectral closure. When the departure from equilibrium is small, perturbation theory can be used to approximate the evolution by a two-equation model. The perturbation theory also gives explicit conditions under which this model can be valid, and when it will fail. Implications of the non-equilibrium corrections for the classic Tennekes-Lumley balance in the dissipation rate equation are drawn: it is possible to establish both the cancellation of the leading-order Re^(1/2) divergent contributions to vortex stretching and enstrophy destruction, and the existence of a nonzero difference which is finite in the limit of infinite Reynolds number.
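The equilibrium behaviour that two-equation models relax towards can be seen in the simplest setting, decaying homogeneous turbulence, where the model k and epsilon equations admit a power-law solution. This sketch (standard dissipation-equation constant, forward Euler integration) checks the numerical solution against that analytic equilibrium; it illustrates the model equations only, not the paper's spectral closure analysis:

```python
# Decaying homogeneous turbulence in the two-equation model:
#   dk/dt   = -eps
#   deps/dt = -C_eps2 * eps^2 / k
# Analytic solution (k0 = eps0 = 1): k(t) = (1 + (C_eps2 - 1) t)^(-n),
# with decay exponent n = 1 / (C_eps2 - 1).
C_eps2 = 1.92                       # standard model constant
k, eps = 1.0, 1.0                   # initial TKE and dissipation rate
dt, t_end = 1e-4, 20.0
t = 0.0
while t < t_end:
    k, eps = k - dt * eps, eps - dt * C_eps2 * eps ** 2 / k   # forward Euler
    t += dt

n = 1.0 / (C_eps2 - 1.0)
k_exact = (1.0 + (C_eps2 - 1.0) * t_end) ** (-n)
print(f"numerical k = {k:.4f}, analytic k = {k_exact:.4f}")
```

The power-law decay is the "equilibrium" self-similar state of the model; the paper's point is precisely that transients between such states require non-equilibrium corrections that the bare two-equation form cannot supply.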
NASA Astrophysics Data System (ADS)
Miza, A. T. N. A.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.
2017-09-01
In this study, Computer-Aided Engineering was used for injection moulding simulation. A Design of Experiments (DOE) method was applied according to a Latin square orthogonal array. The relationships between the injection moulding parameters and warpage were identified from the experimental data. Response Surface Methodology (RSM) was used to validate the model accuracy. The RSM and GA methods were then combined to determine the optimum injection moulding process parameters. The optimisation of injection moulding is thereby largely improved, and the results show increased accuracy and reliability. The proposed method of combining RSM and GA also contributes to minimising warpage.
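The RSM-plus-GA combination can be sketched generically: fit a quadratic response surface to DOE samples, then let a GA search the fitted surface. The "warpage" function, the DOE design and the GA settings below are invented for illustration and are not the study's actual moulding model:

```python
import numpy as np

rng = np.random.default_rng(6)

def warpage(x):
    """Synthetic 'experiment' with its minimum at (0.3, -0.2) (illustrative)."""
    return 1.0 + (x[..., 0] - 0.3) ** 2 + 2.0 * (x[..., 1] + 0.2) ** 2

# DOE samples over the (normalised) process window
X = rng.uniform(-1, 1, (25, 2))
y = warpage(X) + 0.01 * rng.normal(size=25)

# Quadratic response surface: y ~ 1, x1, x2, x1^2, x2^2, x1*x2
def features(X):
    x1, x2 = X[..., 0], X[..., 1]
    return np.stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2], axis=-1)

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)
surface = lambda X: features(X) @ coef

# Simple GA minimising the fitted surface
pop = rng.uniform(-1, 1, (40, 2))
for _ in range(100):
    fit = surface(pop)
    parents = pop[np.argsort(fit)[:20]]                  # truncation selection
    children = parents[rng.integers(0, 20, 40)] + 0.05 * rng.normal(size=(40, 2))
    children = np.clip(children, -1, 1)
    children[0] = parents[0]                             # elitism: keep the best
    pop = children

best = pop[np.argmin(surface(pop))]
print(f"GA optimum on the response surface: {best}")
```

In the real workflow the DOE rows come from moulding simulations and the surface stands in for the expensive solver, so the GA only ever queries the cheap fitted model.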
A new bio-inspired optimisation algorithm: Bird Swarm Algorithm
NASA Astrophysics Data System (ADS)
Meng, Xian-Bing; Gao, X. Z.; Lu, Lihua; Liu, Yu; Zhang, Hengzhen
2016-07-01
A new bio-inspired algorithm, namely Bird Swarm Algorithm (BSA), is proposed for solving optimisation applications. BSA is based on the swarm intelligence extracted from the social behaviours and social interactions in bird swarms. Birds mainly have three kinds of behaviours: foraging behaviour, vigilance behaviour and flight behaviour. Birds may forage for food and escape from the predators by the social interactions to obtain a high chance of survival. By modelling these social behaviours, social interactions and the related swarm intelligence, four search strategies associated with five simplified rules are formulated in BSA. Simulations and comparisons based on eighteen benchmark problems demonstrate the effectiveness, superiority and stability of BSA. Some proposals for future research about BSA are also discussed.
Rodríguez, Araceli; García, Juan; Ovejero, Gabriel; Mestanza, María
2009-12-30
Activated carbon was utilized as an adsorbent to remove the anionic dye Orange II (OII) and the cationic dye methylene blue (MB) from aqueous solutions. Batch experiments were conducted to study the effects of temperature (30-65 degrees C), initial concentration of adsorbate (300-500 mg L(-1)) and pH (3.0-9.0) on dye adsorption. Equilibrium adsorption isotherms and kinetics were investigated. The equilibrium experimental data were analyzed with the Langmuir, Freundlich, Toth and Redlich-Peterson models. The kinetic data obtained with different carbon masses were analyzed using pseudo-first-order, pseudo-second-order, intraparticle diffusion, Bangham and Chien-Clayton equations. The best results were achieved with the Langmuir equilibrium isotherm model and with the pseudo-second-order kinetic model. The activated carbon was found to be very effective as an adsorbent for MB and OII from aqueous solutions.
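The Langmuir-isotherm and pseudo-second-order fits reported above are commonly done through their linearised forms. This sketch generates noise-free synthetic data and recovers the parameters exactly (the parameter values are arbitrary, not the paper's):

```python
import numpy as np

# Synthetic equilibrium data following a Langmuir isotherm
q_max, K_L = 250.0, 0.05                   # mg/g, L/mg (illustrative)
Ce = np.array([10, 25, 50, 100, 200, 400], dtype=float)   # mg/L
qe = q_max * K_L * Ce / (1.0 + K_L * Ce)

# Linearised Langmuir: Ce/qe = Ce/q_max + 1/(K_L*q_max)
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
q_max_fit = 1.0 / slope
K_L_fit = slope / intercept

# Pseudo-second-order kinetics: t/qt = 1/(k2*qe^2) + t/qe
k2, qe_k = 0.001, 180.0                    # illustrative rate constant and capacity
t = np.array([5, 10, 20, 40, 80, 160], dtype=float)       # min
qt = k2 * qe_k ** 2 * t / (1.0 + k2 * qe_k * t)
s2, i2 = np.polyfit(t, t / qt, 1)
qe_fit, k2_fit = 1.0 / s2, s2 ** 2 / i2

print(f"Langmuir: q_max={q_max_fit:.1f}, K_L={K_L_fit:.3f}; "
      f"PSO kinetics: qe={qe_fit:.1f}, k2={k2_fit:.4f}")
```

With real batch data the same two straight-line fits yield the capacities and rate constants, and the quality of each linearisation (its r-squared) is what selects the best-fitting model.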
NASA Astrophysics Data System (ADS)
Bao, Shuangyou; Li, Kai; Ning, Ping; Peng, Jinhui; Jin, Xu; Tang, Lihong
2017-01-01
A novel hybrid material was fabricated using mercaptoamine-functionalised silica-coated magnetic nanoparticles (MAF-SCMNPs) and was effective in the extraction and recovery of mercury and lead ions from wastewater. The properties of this new magnetic material were explored using various characterisation and analysis methods. Adsorbent amounts, pH levels and initial concentrations were optimised to improve removal efficiency. Additionally, kinetics, thermodynamics and adsorption isotherms were investigated to determine the mechanism by which the fabricated MAF-SCMNPs adsorb heavy metal ions. The results revealed that MAF-SCMNPs were acid-resistant. Sorption likely occurred by chelation through the amine group and ion exchange between heavy metal ions and thiol functional groups on the nanoadsorbent surface. The equilibrium was attained within 120 min, and the adsorption kinetics followed a pseudo-second-order model (R2 > 0.99). The mercury and lead adsorption isotherms were in agreement with the Freundlich model, displaying maximum adsorption capacities of 355 and 292 mg/g, respectively. The maximum adsorptions took place at pH 5-6 and 6-7 for Hg(II) and Pb(II), respectively. The maximum adsorptions were observed at 10 mg and 12 mg adsorbent quantities for Hg(II) and Pb(II), respectively. The adsorption process was endothermic and spontaneous within the temperature range of 298-318 K. This work demonstrates a unique magnetic nano-adsorbent for the removal of Hg(II) and Pb(II) from wastewater.
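Endothermic-and-spontaneous conclusions like the one above are typically established from a van 't Hoff analysis over the studied temperature range. The sketch below uses assumed ΔH and ΔS values (not the paper's) to show the slope/intercept bookkeeping:

```python
import numpy as np

R = 8.314                                   # gas constant, J/(mol K)
dH, dS = 20e3, 120.0                        # assumed: endothermic, positive entropy
T = np.array([298.0, 308.0, 318.0])         # studied temperature range (K)
lnK = -dH / (R * T) + dS / R                # van 't Hoff relation

# Recover dH and dS from the slope/intercept of lnK versus 1/T
slope, intercept = np.polyfit(1.0 / T, lnK, 1)
dH_fit, dS_fit = -slope * R, intercept * R

dG = dH - T * dS                            # Gibbs energy; negative => spontaneous
print(dH_fit, dS_fit, dG)
```

A positive fitted ΔH (endothermic) together with ΔG < 0 at every studied temperature is exactly the combination the abstract reports.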
Neng, N R; Nogueira, J M F
2012-01-01
The combination of bar adsorptive micro-extraction using activated carbon (AC) and polystyrene-divinylbenzene copolymer (PS-DVB) sorbent phases, followed by liquid desorption and large-volume injection gas chromatography coupled to mass spectrometry, under selected ion monitoring mode acquisition, was developed for the first time to monitor pharmaceutical and personal care products (PPCPs) in environmental water matrices. Assays performed on 25 mL water samples spiked (100 ng L(-1)) with caffeine, gemfibrozil, triclosan, propranolol, carbamazepine and diazepam, selected as model compounds, yielded recoveries ranging from 74% to 99% under optimised experimental conditions (equilibrium time, 16 h (1,000 rpm); matrix characteristics: pH 5, 5% NaCl for AC phase; LD: methanol/acetonitrile (1:1), 45 min). The analytical performance showed good precision (RSD < 18%), convenient detection limits (5-20 ng L(-1)) and excellent linear dynamic range (20-800 ng L(-1)) with remarkable determination coefficients (r(2) > 0.99), where the PS-DVB sorbent phase showed a much better efficiency. By using the standard addition methodology, the application of the present analytical approach on tap, ground, sea, estuary and wastewater samples allowed very good performance at the trace level. The proposed method proved to be a suitable sorption-based micro-extraction alternative for the analysis of priority pollutants with medium-polar to polar characteristics, showing to be easy to implement, reliable, sensitive and requiring a low sample volume to monitor PPCPs in water matrices.
General Equilibrium Models: Improving the Microeconomics Classroom
ERIC Educational Resources Information Center
Nicholson, Walter; Westhoff, Frank
2009-01-01
General equilibrium models now play important roles in many fields of economics including tax policy, environmental regulation, international trade, and economic development. The intermediate microeconomics classroom has not kept pace with these trends, however. Microeconomics textbooks primarily focus on the insights that can be drawn from the…
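As a minimal classroom-style illustration of the kind of general equilibrium model the abstract refers to, the sketch below computes the competitive equilibrium of a two-consumer, two-good pure-exchange economy with Cobb-Douglas preferences; all names and numbers are illustrative assumptions, not drawn from the article.

```python
def exchange_equilibrium(a_A, a_B, w_A, w_B):
    """Competitive equilibrium of a 2x2 Cobb-Douglas pure-exchange economy.
    a_i: Cobb-Douglas weight on good x for consumer i; w_i = (x, y) endowment.
    With p_y normalised to 1, clearing the x market gives p_x in closed form:
    p_x = (a_A*w_A_y + a_B*w_B_y) / ((1-a_A)*w_A_x + (1-a_B)*w_B_x)."""
    p_x = (a_A * w_A[1] + a_B * w_B[1]) / ((1 - a_A) * w_A[0] + (1 - a_B) * w_B[0])
    demand = lambda a, w: (a * (p_x * w[0] + w[1]) / p_x,   # demand for good x
                           (1 - a) * (p_x * w[0] + w[1]))   # demand for good y
    return p_x, demand(a_A, w_A), demand(a_B, w_B)

# Symmetric economy: each consumer owns one unit of one good
p_x, (xA, yA), (xB, yB) = exchange_equilibrium(0.5, 0.5, (1.0, 0.0), (0.0, 1.0))
print(p_x, (xA, yA), (xB, yB))  # p_x = 1.0; each consumer consumes (0.5, 0.5)
```

Varying the preference weights or endowments and re-solving is exactly the kind of comparative-statics exercise such models bring to the intermediate classroom.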
Tan, Weng Chun; Mat Isa, Nor Ashidi
2016-01-01
In human sperm motility analysis, sperm segmentation plays an important role in determining the locations of multiple sperm cells. To improve the segmentation result, a Laplacian of Gaussian filter is applied as a pre-processing kernel before the image segmentation process that automatically segments and detects human spermatozoa. This study proposes an intersecting cortical model (ICM), derived from several visual cortex models, to segment the sperm head region. Because the proposed method is sensitive to parameter selection, the ICM network is optimised using particle swarm optimisation (PSO), with feature mutual information introduced as the new fitness function. The final results showed that the proposed method is more accurate and robust than four state-of-the-art segmentation methods, achieving rates of 98.14%, 98.82%, 86.46% and 99.81% in accuracy, sensitivity, specificity and precision, respectively, in tests on 1200 sperm cells. The proposed algorithm is expected to be applied in sperm motility analysis because of its robustness and capability.
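A minimal particle swarm optimisation loop of the kind used to tune the ICM parameters can be sketched as follows. The three-dimensional toy objective here merely stands in for the feature-mutual-information fitness of the paper, and every constant is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(params):
    # Toy stand-in for the feature-mutual-information fitness in the paper:
    # we simply maximise closeness to a known optimum at (0.5, 0.2, 0.9).
    return -np.sum((params - np.array([0.5, 0.2, 0.9])) ** 2, axis=-1)

def pso(n_particles=30, dim=3, iters=100, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(0, 1, (n_particles, dim))   # positions = candidate parameter sets
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), fitness(x)
    gbest = pbest[np.argmax(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, 0, 1)
        f = fitness(x)
        better = f > pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmax(pbest_f)].copy()
    return gbest

best = pso()
print(best)  # converges near (0.5, 0.2, 0.9)
```

In the study, each particle would instead encode ICM network parameters, and evaluating the fitness would require running the segmentation and computing the mutual-information score.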
NASA Astrophysics Data System (ADS)
Isingizwe Nturambirwe, J. Frédéric; Perold, Willem J.; Opara, Umezuruike L.
2016-02-01
Near infrared (NIR) spectroscopy has gained extensive use in quality evaluation. It is arguably one of the most advanced spectroscopic tools in non-destructive quality testing of foodstuffs, from measurement to data analysis and interpretation. NIR spectral data are typically interpreted through multivariate statistical analysis, sometimes combined with optimisation techniques for model improvement. The objective of this research was to explore the extent to which genetic algorithms (GAs) can be used to enhance model development for predicting fruit quality. Apple fruit were used, and NIR spectra in the range from 12,000 to 4,000 cm⁻¹ were acquired on both bruised and healthy tissues with different degrees of mechanical damage. GAs were used in combination with partial least squares (PLS) regression to develop bruise severity prediction models, which were compared to PLS models developed using the full NIR spectrum. A classification model was developed that clearly separated bruised from unbruised apple tissue. GAs improved prediction models by over 10% relative to full-spectrum models, as evaluated in terms of the root mean square error of cross-validation. PLS models to predict internal quality attributes, such as sugar content and acidity, were also developed and compared to versions optimised by the genetic algorithm. Overall, the results highlighted the potential of the GA method to improve the speed and accuracy of fruit quality prediction.
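The GA-based variable selection idea can be illustrated on synthetic data. In the sketch below, binary chromosomes mark which "wavelengths" enter a linear model, and a simple training-MSE fitness with a per-wavelength penalty stands in for the cross-validated PLS error used in the study; every constant is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "spectra": 200 samples x 40 wavelengths; only 5 carry signal.
X = rng.normal(size=(200, 40))
true_idx = [3, 8, 15, 22, 31]
y = X[:, true_idx] @ np.array([2.0, -1.0, 1.5, 0.5, -2.0]) + 0.05 * rng.normal(size=200)

def fitness(mask):
    """Negative training MSE of least squares on the selected wavelengths,
    minus a small penalty per wavelength to reward parsimony."""
    if not mask.any():
        return -np.inf
    Xs = X[:, mask]
    coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    return -np.mean((y - Xs @ coef) ** 2) - 0.01 * mask.sum()

def ga(pop_size=40, gens=80, p_mut=0.02):
    pop = rng.random((pop_size, X.shape[1])) < 0.3
    for _ in range(gens):
        scores = np.array([fitness(m) for m in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]  # elitist truncation
        cut = rng.integers(1, X.shape[1], pop_size // 2)
        kids = np.array([np.concatenate([parents[i % len(parents)][:c],
                                         parents[(i + 1) % len(parents)][c:]])
                         for i, c in enumerate(cut)])             # one-point crossover
        kids ^= rng.random(kids.shape) < p_mut                    # bit-flip mutation
        pop = np.vstack([parents, kids])
    scores = np.array([fitness(m) for m in pop])
    return pop[np.argmax(scores)]

best = ga()
print(sorted(np.flatnonzero(best)))
```

With the informative wavelengths carrying far more fitness than the penalty, the elitist GA reliably concentrates the selection on the signal-bearing columns, mirroring the >10% error reduction the abstract reports for GA-selected variables.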
Bryant, Maria; Burton, Wendy; Cundill, Bonnie; Farrin, Amanda J; Nixon, Jane; Stevens, June; Roberts, Kim; Foy, Robbie; Rutter, Harry; Hartley, Suzanne; Tubeuf, Sandy; Collinson, Michelle; Brown, Julia
2017-01-24
Family-based interventions to prevent childhood obesity depend upon parents taking action to improve diet and other lifestyle behaviours in their families. Programmes that attract and retain high numbers of parents provide an enhanced opportunity to improve public health and are also likely to be more cost-effective than those that do not. We have developed a theory-informed optimisation intervention to promote parent engagement within an existing childhood obesity prevention group programme, HENRY (Health Exercise Nutrition for the Really Young). Here, we describe a proposal to evaluate the effectiveness and cost-effectiveness of this optimisation intervention with regard to parent engagement. The Optimising Family Engagement in HENRY (OFTEN) trial is a cluster randomised controlled trial being conducted across 24 local authorities (approximately 144 children's centres) which currently deliver HENRY programmes. The primary outcome will be parental enrolment and attendance at the HENRY programme, assessed using routinely collected process data. Cost-effectiveness will be presented in terms of the primary outcomes using acceptability curves and by eliciting commissioners' willingness to pay for the optimisation. Secondary outcomes include the longitudinal impact of the optimisation, parent-reported infant intake of fruits and vegetables (as a proxy for compliance) and other parent-reported family habits and lifestyle. This innovative trial will provide evidence on the implementation of a theory-informed optimisation intervention to promote parent engagement in HENRY, a community-based childhood obesity prevention programme. The findings will be generalisable to other interventions delivered to parents in other community-based environments. This research meets the expressed needs of commissioners, children's centres and parents to optimise the potential impact that HENRY has on obesity prevention.
A subsequent cluster randomised controlled pilot trial is planned to determine the practicality of undertaking a definitive trial to robustly evaluate the effectiveness and cost-effectiveness of the optimised intervention on childhood obesity prevention. ClinicalTrials.gov identifier: NCT02675699 . Registered on 4 February 2016.
Emergent equilibrium in many-body optical bistability
NASA Astrophysics Data System (ADS)
Foss-Feig, Michael; Niroula, Pradeep; Young, Jeremy; Hafezi, Mohammad; Gorshkov, Alexey; Wilson, Ryan; Maghrebi, Mohammad
2017-04-01
Many-body systems constructed of quantum-optical building blocks can now be realized in experimental platforms ranging from exciton-polariton fluids to Rydberg gases, establishing a fascinating interface between traditional many-body physics and the non-equilibrium setting of cavity-QED. At this interface the standard intuitions of both fields are called into question, obscuring issues as fundamental as the role of fluctuations, dimensionality, and symmetry on the nature of collective behavior and phase transitions. We study the driven-dissipative Bose-Hubbard model, a minimal description of atomic, optical, and solid-state systems in which particle loss is countered by coherent driving. Despite being a lattice version of optical bistability-a foundational and patently non-equilibrium model of cavity-QED-the steady state possesses an emergent equilibrium description in terms of an Ising model. We establish this picture by identifying a limit in which the quantum dynamics is asymptotically equivalent to non-equilibrium Langevin equations, which support a phase transition described by model A of the Hohenberg-Halperin classification. Simulations of the Langevin equations corroborate this picture, producing results consistent with the behavior of a finite-temperature Ising model. M.F.M., J.T.Y., and A.V.G. acknowledge support by ARL CDQI, ARO MURI, NSF QIS, ARO, NSF PFC at JQI, and AFOSR. R.M.W. acknowledges partial support from the NSF under Grant No. PHYS-1516421. M.H. acknowledges support by AFOSR-MURI, ONR and Sloan Foundation.
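A schematic numerical realisation of model-A (purely relaxational) Langevin dynamics, of the kind the abstract says was simulated, might look as follows. The φ⁴ free energy, lattice size and parameters below are illustrative assumptions, not the paper's driven-dissipative equations; a small positive bias in the initial condition is added so a single ordered domain forms.

```python
import numpy as np

rng = np.random.default_rng(2)

def laplacian(phi):
    """Nearest-neighbour lattice Laplacian with periodic boundaries."""
    return (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
            + np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4 * phi)

def model_a(r=-1.0, u=1.0, T=0.05, dt=0.01, steps=5000, n=32):
    """Euler-Maruyama integration of model-A dynamics
    d(phi)/dt = lap(phi) - r*phi - u*phi**3 + sqrt(2T)*xi
    for the free energy F = int [ |grad phi|^2/2 + r*phi^2/2 + u*phi^4/4 ]."""
    phi = 0.1 + 0.01 * rng.normal(size=(n, n))   # biased start (assumption)
    for _ in range(steps):
        force = laplacian(phi) - r * phi - u * phi ** 3
        phi += dt * force + np.sqrt(2 * T * dt) * rng.normal(size=(n, n))
    return phi

phi = model_a()
print(round(abs(phi.mean()), 2))  # in the ordered phase |<phi>| relaxes toward sqrt(-r/u) = 1
```

Sweeping r through zero and tracking the order parameter against temperature is the numerical analogue of locating the Ising-like transition described in the abstract.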
Group Contribution Methods for Phase Equilibrium Calculations.
Gmehling, Jürgen; Constantinescu, Dana; Schmid, Bastian
2015-01-01
The development and design of chemical processes are carried out by solving the balance equations of a mathematical model for sections of, or the whole of, a chemical plant with the help of process simulators. For process simulation, besides kinetic data for the chemical reactions, various pure-component and mixture properties are required. Because of the great importance of separation processes for a chemical plant in particular, reliable knowledge of the phase equilibrium behavior is required. The phase equilibrium behavior can be calculated with the help of modern equations of state or gE (excess Gibbs energy) models using only binary parameters. Unfortunately, only a very small fraction of the experimental data needed for fitting the required binary model parameters is available, so these models often cannot be applied directly. To solve this problem, powerful predictive thermodynamic models have been developed. Group contribution methods allow the prediction of the required phase equilibrium data using only a limited number of group interaction parameters. A prerequisite for fitting the required group interaction parameters is a comprehensive database; that is why, for the development of powerful group contribution methods, almost all published pure-component properties, phase equilibrium data, excess properties, etc., were stored in computerized form in the Dortmund Data Bank. In this review, the present status, weaknesses, advantages and disadvantages, possible applications, and typical results of the different group contribution methods for the calculation of phase equilibria are presented.
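As a minimal illustration of the group-contribution idea (a property estimated as a sum over functional-group increments), the sketch below uses a few commonly tabulated Joback contributions to the normal boiling point; the group values are quoted as assumptions, and real phase-equilibrium methods such as UNIFAC use group interaction parameters rather than simple additive increments.

```python
# Joback-style group contributions to the normal boiling point (K);
# a few common groups only -- values quoted as assumptions.
TB_GROUPS = {"-CH3": 23.58, "-CH2-": 22.88, "-OH (alcohol)": 92.88}

def joback_tb(group_counts):
    """Estimate Tb = 198.2 K + sum of group contributions (Joback method)."""
    return 198.2 + sum(TB_GROUPS[g] * n for g, n in group_counts.items())

# Ethanol = CH3-CH2-OH
tb = joback_tb({"-CH3": 1, "-CH2-": 1, "-OH (alcohol)": 1})
print(round(tb, 1))  # 337.5 K with these increments (experimental value ~351 K)
```

The gap between the estimate and experiment is typical of purely additive schemes and motivates the fitted group-interaction parameters, and the comprehensive databases behind them, that the review discusses.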
Using a Microcomputer in the Teaching of Gas-Phase Equilibria: A Numerical Simulation.
ERIC Educational Resources Information Center
Hayward, Roger
1995-01-01
Describes a computer program that can model the equilibrium processes in the production of ammonia from hydrogen and nitrogen, sulfur trioxide from sulfur dioxide and oxygen, and the nitrogen dioxide-dinitrogen tetroxide equilibrium. Provides information about downloading the program ChemEquilibrium from the World Wide Web. (JRH)
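A numerical simulation in the spirit of the program described above reduces, for a single reaction, to solving the equilibrium-constant equation. The sketch below treats the nitrogen dioxide-dinitrogen tetroxide equilibrium; the Kc value is an assumed round number near the room-temperature literature value.

```python
import math

def n2o4_equilibrium(c0, kc):
    """Equilibrium concentrations for N2O4 <=> 2 NO2 starting from c0 mol/L N2O4.
    Kc = (2x)^2 / (c0 - x) rearranges to 4x^2 + Kc*x - Kc*c0 = 0,
    solved with the quadratic formula (positive root)."""
    x = (-kc + math.sqrt(kc * kc + 16 * kc * c0)) / 8
    return {"N2O4": c0 - x, "NO2": 2 * x}

# Kc ~ 4.6e-3 mol/L for N2O4 dissociation near room temperature (assumed value)
conc = n2o4_equilibrium(c0=0.10, kc=4.6e-3)
print({k: round(v, 4) for k, v in conc.items()})
```

Repeating the calculation across a range of Kc values (i.e. temperatures) reproduces the Le Chatelier behaviour such classroom simulations are designed to demonstrate.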