Principles of Experimental Design for Big Data Analysis.
Drovandi, Christopher C; Holmes, Christopher; McGree, James M; Mengersen, Kerrie; Richardson, Sylvia; Ryan, Elizabeth G
2017-08-01
Big Datasets are endemic, but are often notoriously difficult to analyse because of their size, heterogeneity and quality. The purpose of this paper is to open a discourse on the potential for modern decision theoretic optimal experimental design methods, which by their very nature have traditionally been applied prospectively, to improve the analysis of Big Data through retrospective designed sampling in order to answer particular questions of interest. By appealing to a range of examples, it is suggested that this perspective on Big Data modelling and analysis has the potential for wide generality and advantageous inferential and computational properties. We highlight current hurdles and open research questions surrounding efficient computational optimisation in using retrospective designs, and in part this paper is a call to the optimisation and experimental design communities to work together in the field of Big Data analysis.
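As a loose illustration of the retrospective designed-sampling idea discussed in this abstract, the sketch below greedily selects a D-optimal subsample of a large covariate matrix for a linear model. It is an assumption-laden toy, not the authors' method: the greedy rank-one criterion, the ridge initialisation and all names are illustrative only.

```python
import numpy as np

def greedy_d_optimal_subsample(X, n_select, ridge=1e-6):
    """Greedy D-optimal retrospective subsampling for a linear model.

    A minimal sketch, not the authors' algorithm: rows are added one at a
    time to maximise det(X_S^T X_S + ridge*I), using the rank-one identity
    det(M + x x^T) = det(M) * (1 + x^T M^{-1} x), so each step simply picks
    the remaining row with the largest leverage-like score x^T M^{-1} x.
    """
    n, p = X.shape
    selected = []
    M_inv = np.eye(p) / ridge                 # inverse of the (ridged) information matrix
    available = np.ones(n, dtype=bool)
    for _ in range(n_select):
        idx = np.flatnonzero(available)
        Xa = X[idx]
        scores = np.einsum('ij,jk,ik->i', Xa, M_inv, Xa)   # x^T M^{-1} x per candidate
        best = idx[np.argmax(scores)]
        selected.append(best)
        available[best] = False
        x = X[best][:, None]
        # Sherman-Morrison update of the inverse information matrix.
        M_inv -= (M_inv @ x @ x.T @ M_inv) / (1.0 + (x.T @ M_inv @ x).item())
    return np.array(selected)

# Toy usage: pick 50 informative rows out of 100,000 covariate vectors.
X = np.random.default_rng(1).normal(size=(100_000, 5))
rows = greedy_d_optimal_subsample(X, n_select=50)
```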
Transforming fragments into candidates: small becomes big in medicinal chemistry.
de Kloe, Gerdien E; Bailey, David; Leurs, Rob; de Esch, Iwan J P
2009-07-01
Fragment-based drug discovery (FBDD) represents a logical and efficient approach to lead discovery and optimisation. It can draw on structural, biophysical and biochemical data, incorporating a wide range of inputs, from precise mode-of-binding information on specific fragments to wider ranging pharmacophoric screening surveys using traditional HTS approaches. It is truly an enabling technology for the imaginative medicinal chemist. In this review, we analyse a representative set of 23 published FBDD studies that describe how low molecular weight fragments are being identified and efficiently transformed into higher molecular weight drug candidates. FBDD is now becoming warmly endorsed by industry as well as academia and the focus on small interacting molecules is making a big scientific impact.
Cesic: manufacturing study for next generation telescopes
NASA Astrophysics Data System (ADS)
Kroedel, M.; Lichtscheindl, J.; Mair, Hp.
2005-08-01
Under an ESO (European Southern Observatory) contract, ECM has performed a feasibility study for the manufacturing of Cesic primary and secondary mirror segments for the OWL Telescope. The main aims of this study were to demonstrate the feasibility of serial production (~2550 segments) of Cesic mirror segments under given schedule and cost-optimisation constraints. Part of this study was also a pre-design of a manufacturing facility for this large number of mirror segments. The study is limited to the manufacturing of a polishable surface; the feasibility of the polishing itself is not part of this study.
Biocatalysis engineering: the big picture.
Sheldon, Roger A; Pereira, Pedro C
2017-05-22
In this tutorial review we describe a holistic approach to the invention, development and optimisation of biotransformations utilising isolated enzymes. Increasing attention to applied biocatalysis is motivated by its numerous economic and environmental benefits. Biocatalysis engineering concerns the development of enzymatic systems as a whole, which entails engineering its different components: substrate engineering, medium engineering, protein (enzyme) engineering, biocatalyst (formulation) engineering, biocatalytic cascade engineering and reactor engineering.
Intelligent inversion method for pre-stack seismic big data based on MapReduce
NASA Astrophysics Data System (ADS)
Yan, Xuesong; Zhu, Zhixin; Wu, Qinghua
2018-01-01
Seismic exploration is a method of oil exploration that uses seismic information: by inverting the seismic data, useful information about the reservoir parameters can be obtained, allowing exploration to be carried out effectively. Pre-stack data are characterised by a large volume and rich information content, and their inversion yields detailed estimates of the reservoir parameters. Owing to the sheer amount of pre-stack seismic data, existing single-machine environments can no longer meet the computational demands, so an efficient and fast method for solving the pre-stack seismic inversion problem is urgently needed. Optimising the elastic parameters with a genetic algorithm easily falls into local optima, which weakens the inversion results, especially for the density. Therefore, an intelligent optimisation algorithm is proposed in this paper and used for the elastic parameter inversion of pre-stack seismic data. The algorithm improves the population initialisation strategy by using the Gardner formula and modifies the genetic operators, and the improved algorithm obtains better inversion results in a model test with logging data. All of the elastic parameters obtained by inversion fit the logging curves of the theoretical model well, which effectively improves the inversion precision of the density. The algorithm was implemented with a MapReduce model to solve the seismic big data inversion problem. The experimental results show that the parallel model can effectively reduce the running time of the algorithm.
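The abstract mentions improving the population initialisation with the Gardner formula. The sketch below shows one plausible way to centre initial density candidates on Gardner's empirical velocity-density relation; the constants, the perturbation scheme and the function name are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def init_population_gardner(vp_init, pop_size, spread=0.05, a=0.31, b=0.25, seed=0):
    """Sketch of a Gardner-constrained initial population for density inversion.

    Assumption: Gardner's empirical relation rho ~= a * Vp**b (Vp in m/s,
    rho in g/cm^3) centres the initial density guesses, as the abstract
    suggests; the exact constants and perturbation used by the authors are
    not given, so this is illustrative only.
    """
    rng = np.random.default_rng(seed)
    vp_init = np.asarray(vp_init, dtype=float)           # one Vp value per layer
    rho_gardner = a * vp_init ** b                        # Gardner prediction per layer
    # Each individual perturbs the Gardner densities by a few percent.
    noise = rng.normal(1.0, spread, size=(pop_size, vp_init.size))
    return rho_gardner[None, :] * noise                   # shape (pop_size, n_layers)

# Toy usage: 40 candidate density models for a 3-layer medium.
pop = init_population_gardner([2500.0, 3200.0, 4000.0], pop_size=40)
```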
NASA Astrophysics Data System (ADS)
Faizrahnemoon, Mahsa; Schlote, Arieh; Maggi, Lorenzo; Crisostomi, Emanuele; Shorten, Robert
2015-11-01
This paper describes a Markov-chain-based approach to modelling multi-modal transportation networks. An advantage of the model is its ability to accommodate complex dynamics and handle huge amounts of data. The transition matrix of the Markov chain is built and the model is validated using data extracted from a traffic simulator. A realistic test case using multi-modal data from the city of London is given to further support the ability of the proposed methodology to handle large quantities of data. Then, we use the Markov chain as a control tool to improve the overall efficiency of a transportation network, and some practical examples are described to illustrate the potential of the approach.
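To make the Markov-chain idea concrete, the following minimal sketch builds a transition matrix from observed origin-destination trips and computes the long-run node occupancy; mode-specific dynamics and the simulator-based validation described in the abstract are deliberately omitted, and the trip data are invented.

```python
import numpy as np

def transition_matrix(trips, n_nodes):
    """Row-stochastic transition matrix estimated from observed (origin, destination) trips."""
    counts = np.zeros((n_nodes, n_nodes))
    for origin, destination in trips:
        counts[origin, destination] += 1
    row_sums = counts.sum(axis=1)
    for i in np.flatnonzero(row_sums == 0):   # nodes with no observed departures
        counts[i, i] = 1.0                    # treat them as absorbing
        row_sums[i] = 1.0
    return counts / row_sums[:, None]

def stationary_distribution(P):
    """Long-run occupancy of each node: the left eigenvector of P for eigenvalue 1."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    pi = np.abs(pi)
    return pi / pi.sum()

# Toy usage: three zones and a handful of observed trips.
P = transition_matrix([(0, 1), (1, 2), (2, 0), (1, 0), (0, 1)], n_nodes=3)
print(stationary_distribution(P))
```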
NASA Astrophysics Data System (ADS)
Wang, Qianren; Chen, Xing; Yin, Yuehong; Lu, Jian
2017-08-01
With the increasing complexity of mechatronic products, traditional empirical or step-by-step design methods are facing great challenges, with various factors and different stages having become inevitably coupled during the design process. Management of massive information, or big data, as well as the efficient operation of the information flow, is deeply involved in the process of coupled design. Designers have to address increasingly sophisticated situations when coupled optimisation is also engaged. Aiming to overcome these difficulties in the design of the spindle box system of an ultra-precision optical grinding machine, this paper proposes a coupled optimisation design method based on state-space analysis, with the design knowledge represented by ontologies and their semantic networks. An electromechanical coupled model integrating the mechanical structure, control system and motor driving system is established, mainly concerning the stiffness matrices of the hydrostatic bearings, ball screw nut and rolling guide sliders. The effectiveness and precision of the method are validated by simulation results for the natural frequency and deformation of the spindle box when an impact force is applied to the grinding wheel.
NASA Astrophysics Data System (ADS)
McCarthy, Darragh; Trappe, Neil; Murphy, J. Anthony; O'Sullivan, Créidhe; Gradziel, Marcin; Doherty, Stephen; Huggard, Peter G.; Polegro, Arturo; van der Vorst, Maarten
2016-05-01
In order to investigate the origins of the Universe, it is necessary to carry out full-sky surveys of the temperature and polarisation of the Cosmic Microwave Background (CMB) radiation, the remnant of the Big Bang. Missions such as COBE and Planck have previously mapped the CMB temperature; however, in order to further constrain evolutionary and inflationary models, it is necessary to measure the polarisation of the CMB with greater accuracy and sensitivity than before. Missions undertaking such observations require large arrays of feed horn antennas to feed the detector arrays. Corrugated horns provide the best performance; however, owing to the large number required (circa 5000 in the case of the proposed COrE+ mission), such horns are prohibitive in terms of thermal, mechanical and cost constraints. In this paper we consider the optimisation of an alternative smooth-walled piecewise conical profiled horn, using the mode-matching technique alongside a genetic algorithm. The technique is optimised to return a suitable design using efficient modelling software and standard desktop computing power. A design is presented showing a directional beam pattern and low levels of return loss, cross-polar power and sidelobes, as required by future CMB missions. This design was manufactured and the measured results are compared with simulation, showing excellent agreement and meeting the required performance criteria. The optimisation process described here is robust and can be applied to many other applications where specific performance characteristics are required, with the user simply defining the beam requirements.
End-to-end System Performance Simulation: A Data-Centric Approach
NASA Astrophysics Data System (ADS)
Guillaume, Arnaud; Laffitte de Petit, Jean-Luc; Auberger, Xavier
2013-08-01
In the early days of the space industry, the feasibility of Earth observation missions was directly driven by what could be achieved by the satellite. It was clear to everyone that the ground segment would be able to deal with the small amount of data sent by the payload. Over the years, the amount of data processed by spacecraft has increased drastically, placing more and more constraints on ground segment performance, in particular on timeliness. Nowadays, many space systems require high data throughputs and short response times, with information coming from multiple sources and involving complex algorithms. It has become necessary to perform thorough end-to-end analyses of the full system in order to optimise its cost and efficiency, and sometimes even to assess the feasibility of the mission. This paper presents a novel framework developed by Astrium Satellites in order to meet these needs of timeliness evaluation and optimisation. This framework, named ETOS (for "End-to-end Timeliness Optimisation of Space systems"), provides a modelling process with associated tools, models and GUIs. These are integrated thanks to a common data model and suitable adapters, with the aim of building space-system simulators of the full end-to-end chain. A big challenge of such an environment is to integrate heterogeneous tools (each one well adapted to part of the chain) into a relevant timeliness simulation.
Xiao, Fuyuan; Aritsugi, Masayoshi; Wang, Qing; Zhang, Rong
2016-09-01
For efficient and sophisticated analysis of complex event patterns that appear in streams of big data from health care information systems, and to support decision-making, a triaxial hierarchical model is proposed in this paper. Our triaxial hierarchical model is developed by focusing on hierarchies among nested event pattern queries with an event concept hierarchy, thereby allowing us to identify the relationships among the expressions and sub-expressions of the queries extensively. We devise a cost-based heuristic, by means of the triaxial hierarchical model, to find an optimised query execution plan in terms of the costs of both the operators and the communications between them. Using the triaxial hierarchical model, we can also determine how to reuse the results of common sub-expressions in multiple queries. By integrating the optimised query execution plan with the reuse schemes, a multi-query optimisation strategy is developed to accomplish efficient processing of multiple nested event pattern queries. We present empirical studies in which the performance of the multi-query optimisation strategy was examined under various stream input rates and workloads. Specifically, the workloads of pattern queries can be used to support monitoring of patients' conditions, while experiments with varying stream input rates correspond to changes in the number of patients that a system must manage, and burst input rates correspond to rushes of patients to be taken care of. The experimental results show that, in Workload 1, our proposal improves throughput by about 4 and 2 times compared with the related works, respectively; in Workload 2, it improves throughput by about 3 and 2 times compared with the related works, respectively; and in Workload 3, it improves throughput by about 6 times compared with the related work. The experimental results demonstrate that our proposal is able to process complex queries efficiently, which can support health information systems and further decision-making. Copyright © 2016 Elsevier B.V. All rights reserved.
Tunesi, Simonetta; Baroni, Sergio; Boarini, Sandro
2016-09-01
The results of this case study are used to argue that waste management planning should follow a detailed process, adequately confronting the complexity of the waste management problems and the specificity of each urban area and of regional/national situations. To support the development or completion of integrated waste management systems, this article proposes a planning method based on: (1) the detailed analysis of waste flows and (2) the application of a life cycle assessment to compare alternative scenarios and optimise solutions. The evolution of the City of Bologna waste management system is used to show how this approach can be applied to assess which elements improve environmental performance. The assessment of the contribution of each waste management phase in the Bologna integrated waste management system has proven that the changes applied from 2013 to 2017 result in a significant improvement of the environmental performance, mainly as a consequence of the optimised integration between materials and energy recovery: the Global Warming Potential at 100 years (GWP100) diminishes from 21,949 to -11,169 t CO2-eq per year and abiotic resource depletion from -403 to -520 t antimony-eq per year. This study analyses the collection phase in great detail. Outcomes provide specific operational recommendations to policy makers, showing: (a) the relevance of the choice of the materials forming the bags for 'door to door' collection (for non-recycled low-density polyethylene bags, 22 kg CO2-eq per tonne of waste); (b) the relatively low environmental impacts associated with underground tanks (3.9 kg CO2-eq per tonne of waste); (c) the relatively low impact of big street containers with respect to plastic bags (2.6 kg CO2-eq per tonne of waste). © The Author(s) 2016.
Subsampling for dataset optimisation
NASA Astrophysics Data System (ADS)
Ließ, Mareike
2017-04-01
Soil-landscapes have formed by the interaction of soil-forming factors and pedogenic processes. To model these landscapes in their pedodiversity and the underlying processes, a representative, unbiased dataset is required. This concerns model input as well as output data. However, very often big datasets are available which are highly heterogeneous and were gathered for various purposes, but not to model a particular process or data space. As a first step, the overall data space and/or landscape section to be modelled needs to be identified, including considerations regarding scale and resolution. Then the available dataset needs to be optimised via subsampling to represent this n-dimensional data space well. A couple of well-known sampling designs may be adapted to suit this purpose. The overall approach follows three main strategies: (1) the data space may be condensed and de-correlated by a factor analysis to facilitate the subsampling process; (2) different methods of pattern recognition serve to structure the n-dimensional data space to be modelled into units which then form the basis for the optimisation of an existing dataset through a sensible selection of samples, and along the way, data units for which there is currently insufficient soil data available may be identified; and (3) random samples from the n-dimensional data space may be replaced by similar samples from the available dataset. While being a prerequisite for developing data-driven statistical models, this approach may also help to develop universal process models and identify limitations in existing models.
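A toy version of strategy (3), replacing random draws from the data space by the nearest available samples, could look like the sketch below. The uniform sampling of the feature ranges and the use of a k-d tree are illustrative assumptions rather than the author's exact procedure, and in practice the data space might first be condensed by factor analysis as described above.

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_available_subsample(data, n_target, seed=0):
    """Draw random points from the data space and replace each by its nearest real sample.

    Sketch only: the data space is approximated by the per-feature range of
    the available dataset; duplicates collapse, so fewer than n_target rows
    may be returned.
    """
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    lo, hi = data.min(axis=0), data.max(axis=0)
    targets = rng.uniform(lo, hi, size=(n_target, data.shape[1]))
    tree = cKDTree(data)
    _, idx = tree.query(targets)             # nearest existing sample for each target
    return np.unique(idx)                     # indices into the available dataset

# Toy usage: pick roughly 100 representative rows from a heterogeneous legacy dataset.
legacy = np.random.default_rng(2).normal(size=(5000, 4))
subset = nearest_available_subsample(legacy, n_target=100)
```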
Kassem, Abdulsalam M; Ibrahim, Hany M; Samy, Ahmed M
2017-05-01
The objective of this study was to develop and optimise a self-nanoemulsifying drug delivery system (SNEDDS) of atorvastatin calcium (ATC) for improving the dissolution rate and, eventually, oral bioavailability. Ternary phase diagrams were constructed on the basis of solubility and emulsification studies. The composition of ATC-SNEDDS was optimised using the Box-Behnken optimisation design. The optimised ATC-SNEDDS was characterised for various physicochemical properties. Pharmacokinetic, pharmacodynamic and histological studies were performed in rats. The optimised ATC-SNEDDS resulted in a droplet size of 5.66 nm, a zeta potential of -19.52 mV and a t90 of 5.43 min, and completely released ATC within 30 min irrespective of the pH of the medium. The area under the curve of the optimised ATC-SNEDDS in rats was 2.34-fold higher than that of the ATC suspension. Pharmacodynamic studies revealed a significant reduction in the serum lipids of rats with fatty liver. Photomicrographs showed improvement in hepatocyte structure. In this study, we confirmed that ATC-SNEDDS would be a promising approach for improving the oral bioavailability of ATC.
Application of Three Existing Stope Boundary Optimisation Methods in an Operating Underground Mine
NASA Astrophysics Data System (ADS)
Erdogan, Gamze; Yavuz, Mahmut
2017-12-01
The underground mine planning and design optimisation process has received little attention because of the complexity and variability of the problems in underground mines. Although a number of optimisation studies and software tools are available, and some of them in particular have been implemented effectively to determine the ultimate pit limits in open pit mines, there is still a lack of studies on the optimisation of ultimate stope boundaries in underground mines. The proposed approaches for this purpose aim at maximising the economic profit by selecting the best possible layout under operational, technical and physical constraints. In this paper, three existing heuristic techniques, namely the Floating Stope Algorithm, the Maximum Value Algorithm and the Mineable Shape Optimiser (MSO), are examined for optimisation of the stope layout in a case study. Each technique is assessed in terms of applicability, algorithm capabilities and limitations, considering the underground mine planning challenges. Finally, the results are evaluated and compared.
A big data approach for climate change indicators processing in the CLIP-C project
NASA Astrophysics Data System (ADS)
D'Anca, Alessandro; Conte, Laura; Palazzo, Cosimo; Fiore, Sandro; Aloisio, Giovanni
2016-04-01
Defining and implementing processing chains with multiple (e.g. tens or hundreds of) data analytics operators can be a real challenge in many practical scientific use cases such as climate change indicators. This is usually done via scripts (e.g. bash) on the client side and requires climate scientists to take care of, implement and replicate workflow-like control logic (which may be error-prone too) in their scripts, along with the expected application-level part. Moreover, the big amount of data and the strong I/O demand pose additional performance challenges. In this regard, production-level tools for climate data analysis are mostly sequential, and there is a lack of big data analytics solutions implementing fine-grain data parallelism or adopting stronger parallel I/O strategies, data locality, workflow optimisation, etc. High-level solutions leveraging workflow-enabled big data analytics frameworks for eScience could help scientists define and implement the workflows related to their experiments by exploiting a more declarative, efficient and powerful approach. This talk will start by introducing the main needs and challenges regarding big data analytics workflow management for eScience and will then provide some insights into the implementation of some real use cases related to climate change indicators on large datasets produced in the context of the CLIP-C project, an EU FP7 project aiming at providing access to climate information of direct relevance to a wide variety of users, from scientists to policy makers and private sector decision makers. All of the proposed use cases have been implemented exploiting the Ophidia big data analytics framework. The software stack includes an internal workflow management system, which coordinates, orchestrates and optimises the execution of multiple scientific data analytics and visualisation tasks. Real-time monitoring of workflow execution is also supported through a graphical user interface. In order to address the challenges of the use cases, the implemented data analytics workflows include parallel data analysis, metadata management, virtual file system tasks, map generation, rolling of datasets, and import/export of datasets in NetCDF format. The use cases have been implemented on an HPC cluster of 8 nodes (16 cores/node) of the Athena Cluster available at the CMCC Supercomputing Centre. Benchmark results will also be presented during the talk.
NASA Astrophysics Data System (ADS)
Rayhana, N.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.; Sazli, M.; Yahya, Z. R.
2017-09-01
This study presents the application of an optimisation method to reduce the warpage of a side arm part. Autodesk Moldflow Insight software was integrated into this study to analyse the warpage. A Design of Experiments (DOE) for Response Surface Methodology (RSM) was constructed, and Particle Swarm Optimisation (PSO) was then applied using the equation obtained from RSM. The optimisation method results in optimised processing parameters with minimum warpage. Mould temperature, melt temperature, packing pressure, packing time and cooling time were selected as the variable parameters. Parameter selection was based on the most significant factors affecting warpage reported by previous researchers. The results show that warpage was improved by 28.16% for RSM and 28.17% for PSO; the additional improvement of PSO over RSM is only 0.01%. Thus, optimisation using RSM is already sufficient to give the best combination of parameters and an optimum warpage value for the side arm part. The most significant parameter affecting warpage is packing pressure.
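As a rough illustration of the RSM step described above (fit a quadratic response surface to designed simulation runs, then search it for the minimum-warpage setting), the sketch below fits a full quadratic surface to hypothetical Box-Behnken runs in coded units. The factor set, warpage values and grid search are invented for illustration and do not reproduce the study's data; a PSO run could replace the grid search.

```python
import numpy as np
from itertools import combinations

def quadratic_design_matrix(X):
    """Full second-order RSM terms: intercept, linear, squared and two-way interactions."""
    n, k = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
    return np.column_stack(cols)

# Hypothetical Box-Behnken runs in coded units for three factors
# (melt temperature, mould temperature, packing pressure) and the
# warpage (mm) a flow simulation might return for each run.
X_doe = np.array([[-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
                  [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
                  [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1],
                  [0, 0, 0], [0, 0, 0], [0, 0, 0]], dtype=float)
warpage = np.array([0.42, 0.38, 0.45, 0.40, 0.39, 0.35, 0.34, 0.31,
                    0.41, 0.43, 0.33, 0.36, 0.37, 0.36, 0.37])

# Least-squares fit of the response-surface coefficients.
beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X_doe), warpage, rcond=None)

def predicted_warpage(x):
    return quadratic_design_matrix(np.atleast_2d(x)) @ beta

# A swarm or a simple grid search can then minimise the fitted surface.
grid = np.array(np.meshgrid(*[np.linspace(-1, 1, 21)] * 3)).reshape(3, -1).T
best_setting = grid[np.argmin(predicted_warpage(grid))]
```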
Optimisation of the hybrid renewable energy system by HOMER, PSO and CPSO for the study area
NASA Astrophysics Data System (ADS)
Khare, Vikas; Nema, Savita; Baredar, Prashant
2017-04-01
This study is based on simulation and optimisation of the renewable energy system of the police control room at Sagar in central India. To analyse this hybrid system, the meteorological data of solar insolation and hourly wind speeds of Sagar in central India (longitude 78°45′ and latitude 23°50′) have been considered. The pattern of load consumption is studied and suitably modelled for optimisation of the hybrid energy system using HOMER software. The results are compared with those of the particle swarm optimisation and the chaotic particle swarm optimisation algorithms. The use of these two algorithms to optimise the hybrid system leads to a higher quality result with faster convergence. Based on the optimisation result, it has been found that replacing conventional energy sources by the solar-wind hybrid renewable energy system will be a feasible solution for the distribution of electric power as a stand-alone application at the police control room. This system is more environmentally friendly than the conventional diesel generator. The fuel cost reduction is approximately 70-80% more than that of the conventional diesel generator.
Bahia, Daljit; Cheung, Robert; Buchs, Mirjam; Geisse, Sabine; Hunt, Ian
2005-01-01
This report describes a method to culture insect cells in 24 deep-well blocks for the routine small-scale optimisation of baculovirus-mediated protein expression experiments. Miniaturisation of this process provides the necessary reduction in resource allocation, reagents and labour to allow extensive and rapid optimisation of expression conditions, with a concomitant reduction in lead time before commencement of large-scale bioreactor experiments. This greatly simplifies the optimisation process and allows the use of liquid handling robotics in much of the initial optimisation stages, thereby greatly increasing the throughput of the laboratory. We present several examples of the use of deep-well block expression studies in the optimisation of therapeutically relevant protein targets. We also discuss how the enhanced throughput offered by this approach can be adapted to robotic handling systems and the implications this has for the capacity to conduct multi-parallel protein expression studies.
NASA Astrophysics Data System (ADS)
Hazwan, M. H. M.; Shayfull, Z.; Sharif, S.; Nasir, S. M.; Zainal, N.
2017-09-01
In the injection moulding process, quality and productivity are notably important and must be controlled for each product type produced. Quality is measured as the extent of warpage of the moulded parts, while productivity is measured as the duration of the moulding cycle time. To control the quality, many researchers have introduced various optimisation approaches which have been proven to enhance the quality of the moulded parts produced. To improve the productivity of the injection moulding process, some researchers have proposed the application of conformal cooling channels, which have been proven to reduce the duration of the moulding cycle time. Therefore, this paper presents an alternative optimisation approach, namely Response Surface Methodology (RSM) with Glowworm Swarm Optimisation (GSO), applied to a moulded part with straight-drilled and conformal cooling channel moulds. This study examined the warpage condition of the moulded parts before and after the optimisation work for both cooling channel types. A front panel housing was selected as the specimen, and the performance of the proposed optimisation approach was analysed on the conventional straight-drilled cooling channels compared to the Milled Groove Square Shape (MGSS) conformal cooling channels by simulation analysis using Autodesk Moldflow Insight (AMI) 2013. Based on the results, for the straight-drilled cooling channels melt temperature is the most significant factor contributing to warpage, and warpage was improved by 39.1% after optimisation; for the MGSS conformal cooling channels cooling time is the most significant factor contributing to warpage, and warpage was improved by 38.7% after optimisation. In addition, the findings show that applying the optimisation work to the conformal cooling channels offers better quality and productivity of the moulded part produced.
Optimisation study of a vehicle bumper subsystem with fuzzy parameters
NASA Astrophysics Data System (ADS)
Farkas, L.; Moens, D.; Donders, S.; Vandepitte, D.
2012-10-01
This paper deals with the design and optimisation for crashworthiness of a vehicle bumper subsystem, which is a key scenario for vehicle component design. The automotive manufacturers and suppliers have to find optimal design solutions for such subsystems that comply with the conflicting requirements of the regulatory bodies regarding functional performance (safety and repairability) and regarding environmental impact (mass). For the bumper design challenge, an integrated methodology for multi-attribute design engineering of mechanical structures is set up. The integrated process captures the various tasks that are usually performed manually, thereby facilitating automated design iterations for optimisation. Subsequently, an optimisation process is applied that takes the effect of parametric uncertainties into account, such that the system-level failure possibility is acceptable. This optimisation process is referred to as possibility-based design optimisation and integrates the fuzzy FE analysis applied for the uncertainty treatment in crash simulations. This process is the counterpart of the reliability-based design optimisation used in a probabilistic context with statistically defined parameters (variabilities).
CAMELOT: Computational-Analytical Multi-fidElity Low-thrust Optimisation Toolbox
NASA Astrophysics Data System (ADS)
Di Carlo, Marilena; Romero Martin, Juan Manuel; Vasile, Massimiliano
2018-03-01
Computational-Analytical Multi-fidElity Low-thrust Optimisation Toolbox (CAMELOT) is a toolbox for the fast preliminary design and optimisation of low-thrust trajectories. It solves highly complex combinatorial problems to plan multi-target missions characterised by long spirals, including different perturbations. To do so, CAMELOT implements a novel multi-fidelity approach combining analytical surrogate modelling and accurate computational estimations of the mission cost. Decisions are then made using two optimisation engines included in the toolbox: a single-objective global optimiser and a combinatorial optimisation algorithm. CAMELOT has been applied to a variety of case studies: from the design of interplanetary trajectories to the optimal de-orbiting of space debris, and from the deployment of constellations to on-orbit servicing. In this paper, the main elements of CAMELOT are described and two examples, solved using the toolbox, are presented.
Asselineau, Charles-Alexis; Zapata, Jose; Pye, John
2015-06-01
A stochastic optimisation method adapted to illumination and radiative heat transfer problems involving Monte-Carlo ray-tracing is presented. A solar receiver shape optimisation case study illustrates the advantages of the method and its potential: efficient receivers are identified using a moderate computational cost.
Design Optimisation of a Magnetic Field Based Soft Tactile Sensor
Raske, Nicholas; Kow, Junwai; Alazmani, Ali; Ghajari, Mazdak; Culmer, Peter; Hewson, Robert
2017-01-01
This paper investigates the design optimisation of a magnetic field based soft tactile sensor, comprised of a magnet and Hall effect module separated by an elastomer. The aim was to minimise sensitivity of the output force with respect to the input magnetic field; this was achieved by varying the geometry and material properties. Finite element simulations determined the magnetic field and structural behaviour under load. Genetic programming produced phenomenological expressions describing these responses. Optimisation studies constrained by a measurable force and stable loading conditions were conducted; these produced Pareto sets of designs from which the optimal sensor characteristics were selected. The optimisation demonstrated a compromise between sensitivity and the measurable force, and a fabricated version of the optimised sensor validated the improvements made using this methodology. The approach presented can be applied in general for optimising soft tactile sensor designs over a range of applications and sensing modes. PMID:29099787
Alejo, L; Corredoira, E; Sánchez-Muñoz, F; Huerga, C; Aza, Z; Plaza-Núñez, R; Serrada, A; Bret-Zurita, M; Parrón, M; Prieto-Areyano, C; Garzón-Moll, G; Madero, R; Guibelalde, E
2018-04-09
Objective: The new 2013/59 EURATOM Directive (ED) demands dosimetric optimisation procedures without undue delay. The aim of this study was to optimise paediatric conventional radiology examinations applying the ED without compromising the clinical diagnosis. Automatic dose management software (ADMS) was used to analyse 2678 studies of children from birth to 5 years of age, obtaining local diagnostic reference levels (DRLs) in terms of entrance surface air kerma. Given that the local DRL for infants and chest examinations exceeded the European Commission (EC) DRL, an optimisation was performed by decreasing the kVp and applying automatic exposure control. To assess the image quality, an analysis of high-contrast resolution (HCSR), signal-to-noise ratio (SNR) and figure of merit (FOM) was performed, as well as a blind test based on the generalised estimating equations method. For newborns and chest examinations, the local DRL exceeded the EC DRL by 113%. After the optimisation, a reduction of 54% was obtained. No significant differences were found in the image quality blind test. A decrease in SNR (-37%) and HCSR (-68%), and an increase in FOM (42%), were observed. ADMS allows the fast calculation of local DRLs and the performance of optimisation procedures in babies without delay. However, physical and clinical analyses of image quality are still needed to ensure diagnostic integrity after the optimisation process. Advances in knowledge: ADMS are useful for detecting radiation protection problems and for performing optimisation procedures in paediatric conventional imaging without undue delay, as the ED requires.
Sequential data access with Oracle and Hadoop: a performance comparison
NASA Astrophysics Data System (ADS)
Baranowski, Zbigniew; Canali, Luca; Grancher, Eric
2014-06-01
The Hadoop framework has proven to be an effective and popular approach for dealing with "Big Data" and, thanks to its scaling ability and optimised storage access, Hadoop Distributed File System-based projects such as MapReduce or HBase are seen as candidates to replace traditional relational database management systems whenever scalable speed of data processing is a priority. But do these projects deliver in practice? Does migrating to Hadoop's "shared nothing" architecture really improve data access throughput? And, if so, at what cost? The authors answer these questions, addressing cost/performance as well as raw performance, based on a performance comparison between an Oracle-based relational database and Hadoop's distributed solutions such as MapReduce or HBase for sequential data access. A key feature of our approach is the use of an unbiased data model, as certain data models can significantly favour one of the technologies tested.
Gordon, G T; McCann, B P
2015-01-01
This paper describes the basis of a stakeholder-based sustainable optimisation indicator (SOI) system to be developed for small-to-medium sized activated sludge (AS) wastewater treatment plants (WwTPs) in the Republic of Ireland (ROI). Key technical publications relating to best practice plant operation, performance audits and optimisation, and indicator and benchmarking systems for wastewater services are identified. Optimisation studies were developed at a number of Irish AS WwTPs and key findings are presented. A national AS WwTP manager/operator survey was carried out to verify the applied operational findings and identify the key operator stakeholder requirements for this proposed SOI system. It was found that most plants require more consistent operational data-based decision-making, monitoring and communication structures to facilitate optimised, sustainable and continuous performance improvement. The applied optimisation and stakeholder consultation phases form the basis of the proposed stakeholder-based SOI system. This system will allow for continuous monitoring and rating of plant performance, facilitate optimised operation and encourage the prioritisation of performance improvement through tracking key operational metrics. Plant optimisation has become a major focus due to the transfer of all ROI water services to a national water utility from individual local authorities and the implementation of the EU Water Framework Directive.
NASA Astrophysics Data System (ADS)
Hadia, Sarman K.; Thakker, R. A.; Bhatt, Kirit R.
2016-05-01
The study proposes an application of evolutionary algorithms, specifically an artificial bee colony (ABC), a variant ABC and particle swarm optimisation (PSO), to extract the parameters of a metal oxide semiconductor field effect transistor (MOSFET) model. These algorithms are applied to the MOSFET parameter extraction problem using a Pennsylvania surface potential model. MOSFET parameter extraction procedures involve reducing the error between measured and modelled data. This study shows that the ABC algorithm optimises the parameter values based on the intelligent activities of honey bee swarms. Some modifications have also been applied to the basic ABC algorithm. Particle swarm optimisation is a population-based stochastic optimisation method based on bird flocking activities. The performances of these algorithms are compared with respect to the quality of the solutions. The simulation results of this study show that the PSO algorithm performs better than the variant ABC and basic ABC algorithms for the parameter extraction of the MOSFET model; the implementation of the ABC algorithm is also shown to be simpler than that of the PSO algorithm.
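The extraction loop described here (minimising the error between measured and modelled device data) can be sketched with a plain global-best PSO as below. The surface-potential model itself is far more involved, so a hypothetical square-law stand-in is used, and the coefficients, bounds and names are illustrative assumptions rather than the study's settings.

```python
import numpy as np

def pso_extract(model, v_meas, i_meas, bounds, n_particles=30, iters=200, seed=0):
    """Global-best PSO fitting model parameters to measured data (RMS error)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T                       # bounds: list of (low, high) per parameter
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_particles, dim))  # particle positions = parameter sets
    v = np.zeros_like(x)
    def cost(p):                                      # RMS error of the fit
        return np.sqrt(np.mean((model(v_meas, p) - i_meas) ** 2))
    pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_cost)].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                         # standard PSO coefficients (assumed)
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        better = c < pbest_cost
        pbest[better], pbest_cost[better] = x[better], c[better]
        g = pbest[np.argmin(pbest_cost)].copy()
    return g, pbest_cost.min()

# Toy usage with a stand-in square-law model i = k * (vgs - vth)^2.
vgs = np.linspace(1.0, 3.0, 50)
i_meas = 2e-4 * (vgs - 0.6) ** 2
params, err = pso_extract(lambda v, p: p[0] * (v - p[1]) ** 2,
                          vgs, i_meas, bounds=[(1e-5, 1e-3), (0.2, 1.0)])
```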
On the design and optimisation of new fractal antenna using PSO
NASA Astrophysics Data System (ADS)
Rani, Shweta; Singh, A. P.
2013-10-01
An optimisation technique for a newly shaped fractal structure using particle swarm optimisation with curve fitting is presented in this article. The aim of the particle swarm optimisation is to find the geometry of the antenna for the required user-defined frequency. To assess the effectiveness of the presented method, a set of representative numerical simulations has been performed and the results are compared with measurements from experimental prototypes built according to the design specifications coming from the optimisation procedure. The proposed fractal antenna resonates at the 5.8 GHz industrial, scientific and medical band, which is suitable for wireless telemedicine applications. The antenna characteristics have been studied using extensive numerical simulations and are experimentally verified. The antenna exhibits well-defined radiation patterns over the band.
A supportive architecture for CFD-based design optimisation
NASA Astrophysics Data System (ADS)
Li, Ni; Su, Zeya; Bi, Zhuming; Tian, Chao; Ren, Zhiming; Gong, Guanghong
2014-03-01
Multi-disciplinary design optimisation (MDO) is one of the critical methodologies for the implementation of enterprise systems (ES). MDO requiring the analysis of fluid dynamics raises a special challenge due to its extremely intensive computation. The rapid development of computational fluid dynamics (CFD) techniques has led to a rise in their applications in various fields. Especially for the exterior design of vehicles, CFD has become one of the three main design tools, comparable to analytical approaches and wind tunnel experiments. CFD-based design optimisation is an effective way to achieve the desired performance under the given constraints. However, due to the complexity of CFD, integrating CFD analysis into an intelligent optimisation algorithm is not straightforward. It is a challenge to solve a CFD-based design problem, which usually has high dimensionality and multiple objectives and constraints. It is desirable to have an integrated architecture for CFD-based design optimisation; however, our review of existing works has found that very few researchers have studied assistive tools to facilitate CFD-based design optimisation. In this paper, a multi-layer architecture and a general procedure are proposed to integrate different CFD toolsets with intelligent optimisation algorithms, parallel computing techniques and other techniques for efficient computation. In the proposed architecture, the integration is performed either at the code level or the data level to fully utilise the capabilities of different assistive tools. Two intelligent algorithms are developed and embedded with parallel computing. These algorithms, together with the supportive architecture, lay a solid foundation for various applications of CFD-based design optimisation. To illustrate the effectiveness of the proposed architecture and algorithms, case studies on the aerodynamic shape design of a hypersonic cruising vehicle are provided, and the results show that the proposed architecture and developed algorithms perform successfully and efficiently in dealing with a design optimisation involving over 200 design variables.
Das, Anup Kumar; Mandal, Vivekananda; Mandal, Subhash C
2014-01-01
Extraction forms the very basic step in research on natural products for drug discovery; a poorly optimised and planned extraction methodology can jeopardise the entire mission. The aim is to provide a vivid picture of different chemometric tools and planning for process optimisation and method development in the extraction of botanical material, with emphasis on microwave-assisted extraction (MAE). A review of studies involving the application of chemometric tools in combination with MAE of botanical materials was undertaken in order to discover what the significant extraction factors were. To optimise a response by fine-tuning those factors, experimental design or statistical design of experiments (DoE), a core area of study in chemometrics, was then used for statistical analysis and interpretation. In this review, a brief explanation of the different aspects and methodologies related to MAE of botanical materials that were subjected to experimental design, along with some general chemometric tools and the steps involved in the practice of MAE, is presented. A detailed study of the various factors and responses involved in the optimisation is also presented. This article will assist in obtaining a better insight into the chemometric strategies of process optimisation and method development, which will in turn improve the decision-making process in selecting influential extraction parameters. Copyright © 2013 John Wiley & Sons, Ltd.
Topology optimisation for natural convection problems
NASA Astrophysics Data System (ADS)
Alexandersen, Joe; Aage, Niels; Andreasen, Casper Schousboe; Sigmund, Ole
2014-12-01
This paper demonstrates the application of the density-based topology optimisation approach for the design of heat sinks and micropumps based on natural convection effects. The problems are modelled under the assumptions of steady-state laminar flow using the incompressible Navier-Stokes equations coupled to the convection-diffusion equation through the Boussinesq approximation. In order to facilitate topology optimisation, the Brinkman approach is taken to penalise velocities inside the solid domain and the effective thermal conductivity is interpolated in order to accommodate differences in thermal conductivity of the solid and fluid phases. The governing equations are discretised using stabilised finite elements and topology optimisation is performed for two different problems using discrete adjoint sensitivity analysis. The study shows that topology optimisation is a viable approach for designing heat sink geometries cooled by natural convection and micropumps powered by natural convection.
NASA Astrophysics Data System (ADS)
Grady, A.; Makarigakis, A.; Gersonius, B.
2015-09-01
This paper investigates how to optimise decentralisation for effective disaster risk reduction (DRR) in developing states. There is currently limited literature on empirical analysis of decentralisation for DRR. This paper evaluates decentralised governance for DRR in the case study of Indonesia and provides recommendations for its optimisation. Wider implications are drawn to optimise decentralisation for DRR in developing states more generally. A framework to evaluate the institutional and policy setting was developed which necessitated the use of a gap analysis, desk study and field investigation. Key challenges to decentralised DRR include capacity gaps at lower levels, low compliance with legislation, disconnected policies, issues in communication and coordination and inadequate resourcing. DRR authorities should lead coordination and advocacy on DRR. Sustainable multistakeholder platforms and civil society organisations should fill the capacity gap at lower levels. Dedicated and regulated resources for DRR should be compulsory.
Optimisation of process parameters on thin shell part using response surface methodology (RSM)
NASA Astrophysics Data System (ADS)
Faiz, J. M.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Rashidi, M. M.
2017-09-01
This study focuses on the optimisation of process parameters by simulation using Autodesk Moldflow Insight (AMI) software. The process parameters are taken as the input in order to analyse the warpage value, which is the output of this study. The significant parameters used are melt temperature, mould temperature, packing pressure and cooling time. A plastic part made of polypropylene (PP) was selected as the study part. Optimisation of the process parameters was performed in Design Expert software with the aim of minimising the warpage value. Response Surface Methodology (RSM) was applied in this study together with Analysis of Variance (ANOVA) in order to investigate the interactions between the parameters that are significant to the warpage value. Thus, an optimised warpage value can be obtained from the RSM model owing to its minimal error. This study demonstrates an improved warpage value obtained using RSM.
Person-centred medicines optimisation policy in England: an agenda for research on polypharmacy.
Heaton, Janet; Britten, Nicky; Krska, Janet; Reeve, Joanne
2017-01-01
Aim To examine how patient perspectives and person-centred care values have been represented in documents on medicines optimisation policy in England. There has been growing support in England for a policy of medicines optimisation as a response to the rise of problematic polypharmacy. Conceptually, medicines optimisation differs from the medicines management model of prescribing in being based around the patient rather than processes and systems. This critical examination of current official and independent policy documents questions how central the patient is in them and whether relevant evidence has been utilised in their development. A documentary analysis of reports on medicines optimisation published by the Royal Pharmaceutical Society (RPS), The King's Fund and National Institute for Health and Social Care Excellence since 2013. The analysis draws on a non-systematic review of research on patient experiences of using medicines. Findings The reports varied in their inclusion of patient perspectives and person-centred care values, and in the extent to which they drew on evidence from research on patients' experiences of polypharmacy and medicines use. In the RPS report, medicines optimisation is represented as being a 'step change' from medicines management, in contrast to the other documents which suggest that it is facilitated by the systems and processes that comprise the latter model. Only The King's Fund report considered evidence from qualitative studies of people's use of medicines. However, these studies are not without their limitations. We suggest five ways in which researchers could improve this evidence base and so inform the development of future policy: by facilitating reviews of existing research; conducting studies of patient experiences of polypharmacy and multimorbidity; evaluating medicines optimisation interventions; making better use of relevant theories, concepts and tools; and improving patient and public involvement in research and in guideline development.
Optimisation of lateral car dynamics taking into account parameter uncertainties
NASA Astrophysics Data System (ADS)
Busch, Jochen; Bestle, Dieter
2014-02-01
Simulation studies on an active all-wheel-steering car show that disturbances of vehicle parameters have a high influence on lateral car dynamics. This motivates the need for robust design against such parameter uncertainties. A specific parametrisation is established combining deterministic, velocity-dependent steering control parameters with partly uncertain, velocity-independent vehicle parameters for simultaneous use in a numerical optimisation process. Model-based objectives are formulated and summarised in a multi-objective optimisation problem, where especially the lateral steady-state behaviour is improved by an adaptation strategy based on measurable uncertainties. The normally distributed uncertainties are generated by optimal Latin hypercube sampling, and a response-surface-based strategy helps to cut down time-consuming model evaluations, which offers the possibility of using a genetic optimisation algorithm. Optimisation results are discussed in different criterion spaces, and the achieved improvements confirm the validity of the proposed procedure.
Advanced treatment planning using direct 4D optimisation for pencil-beam scanned particle therapy
NASA Astrophysics Data System (ADS)
Bernatowicz, Kinga; Zhang, Ye; Perrin, Rosalind; Weber, Damien C.; Lomax, Antony J.
2017-08-01
We report on the development of a new four-dimensional (4D) optimisation approach for scanned proton beams, which incorporates both irregular motion patterns and the delivery dynamics of the treatment machine into the plan optimiser. Furthermore, we assess the effectiveness of this technique in reducing dose to critical structures in proximity to moving targets, while maintaining effective target dose homogeneity and coverage. The proposed approach has been tested using both a simulated phantom and a clinical liver cancer case, and allows for realistic 4D calculations and optimisation using irregular breathing patterns extracted from e.g. 4DCT-MRI (4D computed tomography-magnetic resonance imaging). 4D dose distributions resulting from our 4D optimisation can achieve almost the same quality as static plans, independent of the studied geometry/anatomy or selected motion (regular and irregular). Additionally, the current implementation of the 4D optimisation approach requires less than 3 min to find the solution for a single field planned on the 4DCT of a liver cancer patient. Although 4D optimisation allows for realistic calculations using irregular breathing patterns, it is very sensitive to variations from the planned motion. Based on a sensitivity analysis, target dose homogeneity comparable to static plans (D5-D95 <5%) was found only for differences in amplitude of up to 1 mm, for changes in respiratory phase of <200 ms and for changes in the breathing period of <20 ms in comparison to the motions used during optimisation. As such, methods to robustly deliver 4D optimised plans employing 4D intensity-modulated delivery are discussed.
Ribera, Esteban; Martínez-Sesmero, José Manuel; Sánchez-Rubio, Javier; Rubio, Rafael; Pasquau, Juan; Poveda, José Luis; Pérez-Mitru, Alejandro; Roldán, Celia; Hernández-Novoa, Beatriz
2018-03-01
The objective of this study is to estimate the economic impact associated with the optimisation of triple antiretroviral treatment (ART) in patients with undetectable viral load, according to the recommendations from the GeSIDA/PNS (2015) Consensus, and its applicability in Spanish clinical practice. A pharmacoeconomic model was developed based on data from a National Hospital Prescription Survey on ART (2014) and the A-I evidence recommendations for the optimisation of ART from the GeSIDA/PNS (2015) Consensus. The optimisation model took into account the willingness to optimise a particular regimen and other assumptions, and the results were validated by an expert panel on HIV infection (infectious disease specialists and hospital pharmacists). The analysis was conducted from the NHS perspective, considering the annual wholesale price and accounting for the deductions stated in RD-Law 8/2010 and the VAT. The expert panel selected six optimisation strategies and estimated that 10,863 (13.4%) of the 80,859 patients in Spain currently on triple ART would be candidates to optimise their ART, leading to savings of €15.9M/year (2.4% of the total triple ART drug cost). The most feasible strategies (>40% of patients being candidates for optimisation, n=4,556) would be optimisations to ATV/r+3TC therapy. These would produce savings between €653 and €4,797 per patient per year depending on the baseline triple ART. Implementation of the main optimisation strategies recommended in the GeSIDA/PNS (2015) Consensus in Spanish clinical practice would lead to considerable savings, especially those based on dual therapy with ATV/r+3TC, thus contributing to the control of pharmaceutical expenditure and NHS sustainability. Copyright © 2016 Elsevier España, S.L.U. and Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica. All rights reserved.
Distributed convex optimisation with event-triggered communication in networked systems
NASA Astrophysics Data System (ADS)
Liu, Jiayun; Chen, Weisheng
2016-12-01
This paper studies the distributed convex optimisation problem over directed networks. Motivated by practical considerations, we propose a novel distributed zero-gradient-sum optimisation algorithm with event-triggered communication. Communication and control updates therefore occur only at discrete instants when some predefined condition is satisfied. Thus, compared with time-driven distributed optimisation algorithms, the proposed algorithm has the advantages of lower energy consumption and lower communication cost. Based on Lyapunov approaches, we show that the proposed algorithm makes the system states asymptotically converge to the solution of the problem exponentially fast and that Zeno behaviour is excluded. Finally, a simulation example is given to illustrate the effectiveness of the proposed algorithm.
Rethinking big data: A review on the data quality and usage issues
NASA Astrophysics Data System (ADS)
Liu, Jianzheng; Li, Jie; Li, Weifeng; Wu, Jiansheng
2016-05-01
The recent explosive publication of big data studies has well documented the rise of big data and its ongoing prevalence. Different types of 'big data' have emerged and have greatly enriched spatial information sciences and related fields in terms of breadth and granularity. Studies that were difficult to conduct in the past due to data availability can now be carried out. However, big data brings lots of 'big errors' in data quality and data usage, and it cannot be used as a substitute for sound research design and solid theories. We identify and summarise the problems faced by current big data studies with regard to data collection, processing and analysis: inauthentic data collection, information incompleteness and noise of big data, unrepresentativeness, consistency and reliability, and ethical issues. Cases of empirical studies are provided as evidence for each problem. We propose that big data research should closely follow good scientific practice to provide reliable and scientific 'stories', as well as explore and develop techniques and methods to mitigate or rectify those 'big errors' brought by big data.
Microfluidic converging/diverging channels optimised for homogeneous extensional deformation.
Zografos, K; Pimenta, F; Alves, M A; Oliveira, M S N
2016-07-01
In this work, we optimise microfluidic converging/diverging geometries in order to produce constant strain-rates along the centreline of the flow, for performing studies under homogeneous extension. The design is examined for both two-dimensional and three-dimensional flows where the effects of aspect ratio and dimensionless contraction length are investigated. Initially, pressure driven flows of Newtonian fluids under creeping flow conditions are considered, which is a reasonable approximation in microfluidics, and the limits of the applicability of the design in terms of Reynolds numbers are investigated. The optimised geometry is then used for studying the flow of viscoelastic fluids and the practical limitations in terms of Weissenberg number are reported. Furthermore, the optimisation strategy is also applied for electro-osmotic driven flows, where the development of a plug-like velocity profile allows for a wider region of homogeneous extensional deformation in the flow field.
Microfluidic converging/diverging channels optimised for homogeneous extensional deformation
Zografos, K.; Oliveira, M. S. N.
2016-01-01
In this work, we optimise microfluidic converging/diverging geometries in order to produce constant strain-rates along the centreline of the flow, for performing studies under homogeneous extension. The design is examined for both two-dimensional and three-dimensional flows where the effects of aspect ratio and dimensionless contraction length are investigated. Initially, pressure driven flows of Newtonian fluids under creeping flow conditions are considered, which is a reasonable approximation in microfluidics, and the limits of the applicability of the design in terms of Reynolds numbers are investigated. The optimised geometry is then used for studying the flow of viscoelastic fluids and the practical limitations in terms of Weissenberg number are reported. Furthermore, the optimisation strategy is also applied for electro-osmotic driven flows, where the development of a plug-like velocity profile allows for a wider region of homogeneous extensional deformation in the flow field. PMID:27478523
Telecom Big Data for Urban Transport Analysis - a Case Study of Split-Dalmatia County in Croatia
NASA Astrophysics Data System (ADS)
Baučić, M.; Jajac, N.; Bućan, M.
2017-09-01
Today, big data has become widely available and new technologies are being developed for big data storage architectures and big data analytics. An ongoing challenge is how to incorporate big data into GIS applications supporting the various domains. The International Transport Forum explains how the arrival of big data and real-time data, together with new data processing algorithms, leads to new insights and operational improvements in transport. Based on telecom customer data, the Study of Tourist Movement and Traffic in Split-Dalmatia County in Croatia was carried out as a part of the "IPA Adriatic CBC//N.0086/INTERMODAL" project. This paper briefly explains the big data used in the study and the results of the study. Furthermore, this paper investigates the main considerations when using telecom customer big data: data privacy and data quality. The paper concludes with GIS visualisation and proposes further uses of the big data employed in the study.
Improving Vector Evaluated Particle Swarm Optimisation by Incorporating Nondominated Solutions
Lim, Kian Sheng; Ibrahim, Zuwairie; Buyamin, Salinda; Ahmad, Anita; Naim, Faradila; Ghazali, Kamarul Hawari; Mokhtar, Norrima
2013-01-01
The Vector Evaluated Particle Swarm Optimisation algorithm is widely used to solve multiobjective optimisation problems. This algorithm optimises one objective using a swarm of particles where their movements are guided by the best solution found by another swarm. However, the best solution of a swarm is only updated when a newly generated solution has better fitness than the best solution at the objective function optimised by that swarm, yielding poor solutions for the multiobjective optimisation problems. Thus, an improved Vector Evaluated Particle Swarm Optimisation algorithm is introduced by incorporating the nondominated solutions as the guidance for a swarm rather than using the best solution from another swarm. In this paper, the performance of improved Vector Evaluated Particle Swarm Optimisation algorithm is investigated using performance measures such as the number of nondominated solutions found, the generational distance, the spread, and the hypervolume. The results suggest that the improved Vector Evaluated Particle Swarm Optimisation algorithm has impressive performance compared with the conventional Vector Evaluated Particle Swarm Optimisation algorithm. PMID:23737718
Improving Vector Evaluated Particle Swarm Optimisation by incorporating nondominated solutions.
Lim, Kian Sheng; Ibrahim, Zuwairie; Buyamin, Salinda; Ahmad, Anita; Naim, Faradila; Ghazali, Kamarul Hawari; Mokhtar, Norrima
2013-01-01
The Vector Evaluated Particle Swarm Optimisation algorithm is widely used to solve multiobjective optimisation problems. This algorithm optimises one objective using a swarm of particles where their movements are guided by the best solution found by another swarm. However, the best solution of a swarm is only updated when a newly generated solution has better fitness than the best solution at the objective function optimised by that swarm, yielding poor solutions for the multiobjective optimisation problems. Thus, an improved Vector Evaluated Particle Swarm Optimisation algorithm is introduced by incorporating the nondominated solutions as the guidance for a swarm rather than using the best solution from another swarm. In this paper, the performance of improved Vector Evaluated Particle Swarm Optimisation algorithm is investigated using performance measures such as the number of nondominated solutions found, the generational distance, the spread, and the hypervolume. The results suggest that the improved Vector Evaluated Particle Swarm Optimisation algorithm has impressive performance compared with the conventional Vector Evaluated Particle Swarm Optimisation algorithm.
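The key modification described above, drawing the social guide from an archive of nondominated solutions instead of from the other swarm's best particle, can be sketched on a simple bi-objective test problem. The swarm sizes, coefficients and test functions below are illustrative assumptions, not the settings used in the paper.

```python
# Sketch of a Vector Evaluated PSO variant in which each swarm's social
# guide is drawn from a shared archive of nondominated solutions.
# Bi-objective test problem: f1(x) = x^2, f2(x) = (x - 2)^2 on [-5, 5].
import random

def f1(x): return x * x
def f2(x): return (x - 2.0) ** 2

def dominates(a, b):
    return (a[0] <= b[0] and a[1] <= b[1]) and (a[0] < b[0] or a[1] < b[1])

def update_archive(archive, cand, max_size=50):
    if any(dominates(a, cand) for a in archive):
        return
    archive[:] = [a for a in archive if not dominates(cand, a)]
    archive.append(cand)
    if len(archive) > max_size:
        archive.pop(random.randrange(len(archive)))

random.seed(1)
swarms = [[{"x": random.uniform(-5, 5), "v": 0.0, "pbest": None} for _ in range(20)]
          for _ in range(2)]
objs = [f1, f2]
archive = []                                  # entries: (f1, f2, x)

for it in range(200):
    for s, obj in enumerate(objs):            # evaluate and update archive
        for p in swarms[s]:
            fx = obj(p["x"])
            if p["pbest"] is None or fx < p["pbest"][0]:
                p["pbest"] = (fx, p["x"])
            update_archive(archive, (f1(p["x"]), f2(p["x"]), p["x"]))
    for s, obj in enumerate(objs):            # move particles
        for p in swarms[s]:
            guide = random.choice(archive)[2]          # nondominated social guide
            r1, r2 = random.random(), random.random()
            p["v"] = (0.6 * p["v"] + 1.5 * r1 * (p["pbest"][1] - p["x"])
                      + 1.5 * r2 * (guide - p["x"]))
            p["x"] = max(-5.0, min(5.0, p["x"] + p["v"]))

front = sorted((a[0], a[1]) for a in archive)
print(len(archive), "nondominated points; sample:", front[:3])
```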
A New Multiconstraint Method for Determining the Optimal Cable Stresses in Cable-Stayed Bridges
Asgari, B.; Osman, S. A.; Adnan, A.
2014-01-01
Cable-stayed bridges are one of the most popular types of long-span bridges. The structural behaviour of cable-stayed bridges is sensitive to the load distribution between the girder, pylons, and cables. The determination of pretensioning cable stresses is critical in the cable-stayed bridge design procedure. By finding the optimum stresses in cables, the load and moment distribution of the bridge can be improved. In recent years, different research works have studied iterative and modern methods to find optimum stresses of cables. However, most of the proposed methods have limitations in optimising the structural performance of cable-stayed bridges. This paper presents a multiconstraint optimisation method to specify the optimum cable forces in cable-stayed bridges. The proposed optimisation method produces less bending moments and stresses in the bridge members and requires shorter simulation time than other proposed methods. The results of comparative study show that the proposed method is more successful in restricting the deck and pylon displacements and providing uniform deck moment distribution than unit load method (ULM). The final design of cable-stayed bridges can be optimised considerably through proposed multiconstraint optimisation method. PMID:25050400
A new multiconstraint method for determining the optimal cable stresses in cable-stayed bridges.
Asgari, B; Osman, S A; Adnan, A
2014-01-01
Cable-stayed bridges are one of the most popular types of long-span bridges. The structural behaviour of cable-stayed bridges is sensitive to the load distribution between the girder, pylons, and cables. The determination of pretensioning cable stresses is critical in the cable-stayed bridge design procedure. By finding the optimum stresses in cables, the load and moment distribution of the bridge can be improved. In recent years, different research works have studied iterative and modern methods to find optimum stresses of cables. However, most of the proposed methods have limitations in optimising the structural performance of cable-stayed bridges. This paper presents a multiconstraint optimisation method to specify the optimum cable forces in cable-stayed bridges. The proposed optimisation method produces less bending moments and stresses in the bridge members and requires shorter simulation time than other proposed methods. The results of comparative study show that the proposed method is more successful in restricting the deck and pylon displacements and providing uniform deck moment distribution than unit load method (ULM). The final design of cable-stayed bridges can be optimised considerably through proposed multiconstraint optimisation method.
Optimisation of active suspension control inputs for improved vehicle handling performance
NASA Astrophysics Data System (ADS)
Čorić, Mirko; Deur, Joško; Kasać, Josip; Tseng, H. Eric; Hrovat, Davor
2016-11-01
Active suspension is commonly considered under the framework of vertical vehicle dynamics control aimed at improvements in ride comfort. This paper uses a collocation-type control variable optimisation tool to investigate to what extent the fully active suspension (FAS) application can be broadened to the task of vehicle handling/cornering control. The optimisation approach is first applied to FAS-only actuator configurations and three types of double lane-change manoeuvres. The obtained optimisation results are used to gain insights into the different control mechanisms that FAS uses to improve the handling performance in terms of path-following error reduction. For the same manoeuvres the FAS performance is compared with the performance of different active steering and active differential actuators. The optimisation study is finally extended to combined FAS and active front- and/or rear-steering configurations to investigate whether they can use their complementary control authorities (over the vertical and lateral vehicle dynamics, respectively) to further improve the handling performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hossain, Ekhtear; Islam, Khairul; Yeasmin, Fouzia
Chronic arsenic (As) exposure affects the endothelial system causing several diseases. Big endothelin-1 (Big ET-1), the biological precursor of endothelin-1 (ET-1), is a more accurate indicator of the degree of activation of the endothelial system. The effect of As exposure on plasma Big ET-1 levels and its physiological implications have not yet been documented. We evaluated plasma Big ET-1 levels and their relation to hypertension and skin lesions in As-exposed individuals in Bangladesh. A total of 304 study subjects from As-endemic and non-endemic areas in Bangladesh were recruited for this study. As concentrations in water, hair and nails were measured by Inductively Coupled Plasma Mass Spectroscopy (ICP-MS). The plasma Big ET-1 levels were measured using a one-step sandwich enzyme immunoassay kit. A significant increase in Big ET-1 levels was observed with increasing concentrations of As in drinking water, hair and nails. Further, before and after adjusting for different covariates, plasma Big ET-1 levels were found to be significantly associated with the water, hair and nail As concentrations of the study subjects. Big ET-1 levels were also higher in the higher exposure groups compared to the lowest (reference) group. Interestingly, we observed that Big ET-1 levels were significantly higher in the hypertensive and skin lesion groups compared to their normotensive and without-skin-lesion counterparts, respectively, among the study subjects in As-endemic areas. Thus, this study demonstrated a novel dose–response relationship between As exposure and plasma Big ET-1 levels, indicating the possible involvement of plasma Big ET-1 levels in As-induced hypertension and skin lesions. Highlights: • Plasma Big ET-1 is an indicator of endothelial damage. • Plasma Big ET-1 level increases dose-dependently in arsenic exposed individuals. • Study subjects in arsenic-endemic areas with hypertension have elevated Big ET-1 levels. • Study subjects with arsenic-induced skin lesions show elevated plasma Big ET-1 levels. • Arsenic-induced hypertension and skin lesions may be linked to plasma Big ET-1 levels.
NASA Astrophysics Data System (ADS)
Fouladi, Ehsan; Mojallali, Hamed
2018-01-01
In this paper, an adaptive backstepping controller is tuned to synchronise two chaotic Colpitts oscillators in a master-slave configuration. The parameters of the controller are determined using the shark smell optimisation (SSO) algorithm. Numerical results are presented and compared with those of the particle swarm optimisation (PSO) algorithm. Simulation results show better performance in terms of accuracy and convergence for the proposed optimised method compared to the PSO-optimised controller or a non-optimised backstepping controller.
Efficient embedding of complex networks to hyperbolic space via their Laplacian
Alanis-Lobato, Gregorio; Mier, Pablo; Andrade-Navarro, Miguel A.
2016-01-01
The different factors involved in the growth process of complex networks imprint valuable information in their observable topologies. How to exploit this information to accurately predict structural network changes is the subject of active research. A recent model of network growth sustains that the emergence of properties common to most complex systems is the result of certain trade-offs between node birth-time and similarity. This model has a geometric interpretation in hyperbolic space, where distances between nodes abstract this optimisation process. Current methods for network hyperbolic embedding search for node coordinates that maximise the likelihood that the network was produced by the afore-mentioned model. Here, a different strategy is followed in the form of the Laplacian-based Network Embedding, a simple yet accurate, efficient and data driven manifold learning approach, which allows for the quick geometric analysis of big networks. Comparisons against existing embedding and prediction techniques highlight its applicability to network evolution and link prediction. PMID:27445157
The evolution of acute burn care - retiring the split skin graft.
Greenwood, J E
2017-07-01
The skin graft was born in 1869 and since then, surgeons have been using split skin grafts for wound repair. Nevertheless, this asset fails the big burn patient, who deserves an elastic, mobile and robust outcome but who receives the poorest possible outcome based on donor site paucity. Negating the need for the skin graft requires an autologous composite cultured skin and a material capable of temporising the burn wound for four weeks until the composite is produced. A novel, biodegradable polyurethane chemistry has been used to create two such products. This paper describes the design, production, optimisation and evaluation of several iterations of these products. The evaluation has occurred in a variety of models, both in vitro and in vivo, employing Hunterian scientific principles, and embracing Hunter's love and appreciation of comparative anatomy. The process has culminated in significant human experience in complex wounds and extensive burn injury. Used serially, the products offer robust and elastic healing in deep burns of any size within 6 weeks of injury.
Tier-2 Optimisation for Computational Density/Diversity and Big Data
NASA Astrophysics Data System (ADS)
Fay, R. B.; Bland, J.
2014-06-01
As the number of cores on chip continues to trend upwards and new CPU architectures emerge, increasing CPU density and diversity presents multiple challenges to site administrators. These include scheduling for massively multi-core systems (potentially including integrated and dedicated Graphical Processing Units (GPUs) and Many Integrated Core (MIC) devices) to ensure a balanced throughput of jobs while preserving overall cluster throughput, the increasing complexity of developing for these heterogeneous platforms, and the challenge of managing this more complex mix of resources. In addition, meeting data demands as both dataset sizes increase and as the rate of demand scales with increased computational power requires additional performance from the associated storage elements. In this report, we evaluate one emerging technology, Solid State Drive (SSD) caching for RAID controllers, with consideration of its potential to assist in meeting evolving demand. We also briefly consider the broader developing trends outlined above in order to identify issues that may develop and assess what actions should be taken in the immediate term to address them.
Efficient embedding of complex networks to hyperbolic space via their Laplacian
NASA Astrophysics Data System (ADS)
Alanis-Lobato, Gregorio; Mier, Pablo; Andrade-Navarro, Miguel A.
2016-07-01
The different factors involved in the growth process of complex networks imprint valuable information in their observable topologies. How to exploit this information to accurately predict structural network changes is the subject of active research. A recent model of network growth sustains that the emergence of properties common to most complex systems is the result of certain trade-offs between node birth-time and similarity. This model has a geometric interpretation in hyperbolic space, where distances between nodes abstract this optimisation process. Current methods for network hyperbolic embedding search for node coordinates that maximise the likelihood that the network was produced by the afore-mentioned model. Here, a different strategy is followed in the form of the Laplacian-based Network Embedding, a simple yet accurate, efficient and data driven manifold learning approach, which allows for the quick geometric analysis of big networks. Comparisons against existing embedding and prediction techniques highlight its applicability to network evolution and link prediction.
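A rough sense of what a Laplacian-based embedding does can be had from the generic Laplacian-eigenmaps sketch below: the eigenvectors of the symmetric normalised Laplacian with the smallest non-zero eigenvalues supply low-dimensional node coordinates, from which angular (hyperbolic-style) positions can be read off. The toy graph and the final angular mapping are assumptions for illustration, not the paper's exact Laplacian-based Network Embedding procedure.

```python
# Generic Laplacian-eigenmaps-style embedding of a small graph: the two
# eigenvectors of the symmetric normalised Laplacian with the smallest
# non-zero eigenvalues are used as 2-D node coordinates; an angular
# coordinate per node is then derived from them.  Illustrative only.
import numpy as np

edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5), (3, 5), (5, 6)]
n = 7
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

deg = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
L_sym = np.eye(n) - D_inv_sqrt @ A @ D_inv_sqrt   # symmetric normalised Laplacian

vals, vecs = np.linalg.eigh(L_sym)                # eigenvalues in ascending order
coords = vecs[:, 1:3]                             # skip the trivial eigenvector
angles = np.arctan2(coords[:, 1], coords[:, 0])   # angular coordinate per node

for node in range(n):
    print(f"node {node}: xy = {coords[node].round(3)}, angle = {angles[node]:.2f} rad")
```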
Bryant, Maria; Burton, Wendy; Cundill, Bonnie; Farrin, Amanda J; Nixon, Jane; Stevens, June; Roberts, Kim; Foy, Robbie; Rutter, Harry; Hartley, Suzanne; Tubeuf, Sandy; Collinson, Michelle; Brown, Julia
2017-01-24
Family-based interventions to prevent childhood obesity depend upon parents' taking action to improve diet and other lifestyle behaviours in their families. Programmes that attract and retain high numbers of parents provide an enhanced opportunity to improve public health and are also likely to be more cost-effective than those that do not. We have developed a theory-informed optimisation intervention to promote parent engagement within an existing childhood obesity prevention group programme, HENRY (Health Exercise Nutrition for the Really Young). Here, we describe a proposal to evaluate the effectiveness of this optimisation intervention in regard to the engagement of parents and cost-effectiveness. The Optimising Family Engagement in HENRY (OFTEN) trial is a cluster randomised controlled trial being conducted across 24 local authorities (approximately 144 children's centres) which currently deliver HENRY programmes. The primary outcome will be parental enrolment and attendance at the HENRY programme, assessed using routinely collected process data. Cost-effectiveness will be presented in terms of primary outcomes using acceptability curves and through eliciting the willingness to pay for the optimisation from HENRY commissioners. Secondary outcomes include the longitudinal impact of the optimisation, parent-reported infant intake of fruits and vegetables (as a proxy to compliance) and other parent-reported family habits and lifestyle. This innovative trial will provide evidence on the implementation of a theory-informed optimisation intervention to promote parent engagement in HENRY, a community-based childhood obesity prevention programme. The findings will be generalisable to other interventions delivered to parents in other community-based environments. This research meets the expressed needs of commissioners, children's centres and parents to optimise the potential impact that HENRY has on obesity prevention. A subsequent cluster randomised controlled pilot trial is planned to determine the practicality of undertaking a definitive trial to robustly evaluate the effectiveness and cost-effectiveness of the optimised intervention on childhood obesity prevention. ClinicalTrials.gov identifier: NCT02675699 . Registered on 4 February 2016.
ERIC Educational Resources Information Center
Brijlall, Deonarain; Ndlovu, Zanele
2013-01-01
This qualitative case study in a rural school in Umgungundlovu District in KwaZulu-Natal, South Africa, explored Grade 12 learners' mental constructions of mathematical knowledge during engagement with optimisation problems. Ten Grade 12 learners who do pure Mathemat-ics participated, and data were collected through structured activity sheets and…
Bird habitat relationships along a Great Basin elevational gradient
Dean E. Medin; Bruce L. Welch; Warren P. Clary
2000-01-01
Bird censuses were taken on 11 study plots along an elevational gradient ranging from 5,250 to 11,400 feet. Each plot represented a different vegetative type or zone: shadscale, shadscale-Wyoming big sagebrush, Wyoming big sagebrush, Wyoming big sagebrush-pinyon/juniper, pinyon/juniper, pinyon/juniper-mountain big sagebrush, mountain big sagebrush, mountain big...
NASA Astrophysics Data System (ADS)
Faiz, J. M.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.
2017-09-01
This study conducts a simulation-based optimisation of injection moulding process parameters using Autodesk Moldflow Insight (AMI) software. The study applies several process parameters, namely melt temperature, mould temperature, packing pressure and cooling time, in order to analyse the warpage value of the part. A part made of polypropylene (PP) was selected for the study. The combination of process parameters is analysed using Analysis of Variance (ANOVA) and the optimised values are obtained using Response Surface Methodology (RSM). RSM as well as a Genetic Algorithm are applied in Design Expert software in order to minimise the warpage value. The outcome of this study shows that the warpage value is improved by using RSM and GA.
Rani, K; Jahnen, A; Noel, A; Wolf, D
2015-07-01
In the last decade, several studies have emphasised the need to understand and optimise computed tomography (CT) procedures in order to reduce the radiation dose applied to paediatric patients. To evaluate the influence of the technical parameters on the radiation dose and the image quality, a statistical model has been developed using the design of experiments (DOE) method, which has been successfully used in various fields (industry, biology and finance), applied here to abdominal CT procedures for paediatric patients. A Box-Behnken DOE was used in this study. Three mathematical models (contrast-to-noise ratio, noise and CTDIvol) depending on three factors (tube current, tube voltage and level of iterative reconstruction) were developed and validated. They will serve as a basis for the development of a CT protocol optimisation model. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
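For readers unfamiliar with the layout of a Box-Behnken design, the sketch below generates the design matrix for three factors: every pair of factors is run at all four (±1, ±1) combinations with the third factor held at its centre level, plus replicated centre points. The factor names and the number of centre points are placeholders, not the study's exact protocol.

```python
# Generating a three-factor Box-Behnken design: 12 edge runs plus centre
# replicates.  Factor names stand in for the study's tube current, tube
# voltage and iterative-reconstruction level.
from itertools import combinations, product

factors = ["tube_current", "tube_voltage", "ir_level"]
n_centre = 3

runs = []
for pair in combinations(range(len(factors)), 2):
    for levels in product((-1, 1), repeat=2):
        run = [0] * len(factors)
        run[pair[0]], run[pair[1]] = levels
        runs.append(run)
runs.extend([[0] * len(factors) for _ in range(n_centre)])

print(f"{len(runs)} runs")                 # 12 edge runs + 3 centre runs
for r in runs:
    print(dict(zip(factors, r)))
```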
Meyer, Vera; Fiedler, Markus; Nitsche, Benjamin; King, Rudibert
2015-01-01
Living with limits. Getting more from less. Producing commodities and high-value products from renewable resources including waste. What constitutes the driving force and quintessence of the bioeconomy also outlines the lifestyle and product portfolio of Aspergillus, a saprophytic genus to which some of the top-performing microbial cell factories belong: Aspergillus niger, Aspergillus oryzae and Aspergillus terreus. What makes them so interesting for exploitation in biotechnology and how can they help us to address key challenges of the twenty-first century? How can these strains be trimmed for better growth on second-generation feedstocks and how can we enlarge their product portfolio by genetic and metabolic engineering to get more from less? On the other hand, what makes it so challenging to deduce biological meaning from the wealth of Aspergillus -omics data? And which hurdles hinder us from modelling and engineering industrial strains for higher productivity and better rheological performance under industrial cultivation conditions? In this review, we address these issues by highlighting the most recent findings from Aspergillus research with a focus on fungal growth, physiology, morphology and product formation. Indeed, the last years have brought us many surprising insights into model and industrial strains. They clearly told us that similar is not the same: there are different ways to make a hypha, there are more protein secretion routes than anticipated, and there are different molecular and physical mechanisms which control polar growth and the development of hyphal networks. We discuss new conceptual frameworks derived from these insights and the future scientific advances necessary to create value from Aspergillus Big Data.
NASA Astrophysics Data System (ADS)
Sundaramoorthy, Kumaravel
2017-02-01
The hybrid energy system (HES) based electricity generation has become a more attractive solution for rural electrification nowadays. Economically feasible and technically reliable HESs are solidly based on an optimisation stage. This article discusses an optimal unit sizing model with the objective of minimising the total cost of the HES. Three typical rural sites from the southern part of India have been selected for the application of the developed optimisation methodology. Feasibility studies and sensitivity analysis on the optimal HES are discussed elaborately in this article. A comparison has been carried out with the Hybrid Optimization Model for Electric Renewable (HOMER) optimisation model for the three sites. The optimal HES is found to have a lower total net present cost and cost of energy compared with the existing method.
Health Informatics Scientists' Perception About Big Data Technology.
Minou, John; Routsis, Fotios; Gallos, Parisis; Mantas, John
2017-01-01
The aim of this paper is to present the perceptions of Health Informatics Scientists about Big Data Technology in Healthcare. An empirical study was conducted among 46 scientists to assess their knowledge about Big Data Technology and their perceptions about using this technology in healthcare. Based on the study findings, 86.7% of the scientists had knowledge of Big Data Technology. Furthermore, 59.1% of the scientists believed that Big Data Technology refers to structured data. Additionally, 100% of the population believed that Big Data Technology can be implemented in healthcare. Finally, the majority did not know of any cases of use of Big Data Technology in Greece, while 57.8% of them mentioned that they knew of use cases of Big Data Technology abroad.
Toward a Literature-Driven Definition of Big Data in Healthcare
Baro, Emilie; Degoul, Samuel; Beuscart, Régis; Chazard, Emmanuel
2015-01-01
Objective. The aim of this study was to provide a definition of big data in healthcare. Methods. A systematic search of PubMed literature published until May 9, 2014, was conducted. We noted the number of statistical individuals (n) and the number of variables (p) for all papers describing a dataset. These papers were classified into fields of study. Characteristics attributed to big data by authors were also considered. Based on this analysis, a definition of big data was proposed. Results. A total of 196 papers were included. Big data can be defined as datasets with Log(n∗p) ≥ 7. Properties of big data are its great variety and high velocity. Big data raises challenges on veracity, on all aspects of the workflow, on extracting meaningful information, and on sharing information. Big data requires new computational methods that optimize data management. Related concepts are data reuse, false knowledge discovery, and privacy issues. Conclusion. Big data is defined by volume. Big data should not be confused with data reuse: data can be big without being reused for another purpose, for example, in omics. Inversely, data can be reused without being necessarily big, for example, secondary use of Electronic Medical Records (EMR) data. PMID:26137488
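The proposed volume criterion is easy to apply in practice; the snippet below checks it for two hypothetical datasets, interpreting Log as the base-10 logarithm.

```python
# Worked check of the proposed volume criterion: a dataset is "big" when
# log10(n * p) >= 7, with n statistical individuals and p variables.
import math

def is_big_data(n_individuals: int, p_variables: int) -> bool:
    return math.log10(n_individuals * p_variables) >= 7

print(is_big_data(2_000_000, 10))    # log10(2e7) ~ 7.3  -> True
print(is_big_data(50_000, 30))       # log10(1.5e6) ~ 6.2 -> False
```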
Toward a Literature-Driven Definition of Big Data in Healthcare.
Baro, Emilie; Degoul, Samuel; Beuscart, Régis; Chazard, Emmanuel
2015-01-01
The aim of this study was to provide a definition of big data in healthcare. A systematic search of PubMed literature published until May 9, 2014, was conducted. We noted the number of statistical individuals (n) and the number of variables (p) for all papers describing a dataset. These papers were classified into fields of study. Characteristics attributed to big data by authors were also considered. Based on this analysis, a definition of big data was proposed. A total of 196 papers were included. Big data can be defined as datasets with Log(n∗p) ≥ 7. Properties of big data are its great variety and high velocity. Big data raises challenges on veracity, on all aspects of the workflow, on extracting meaningful information, and on sharing information. Big data requires new computational methods that optimize data management. Related concepts are data reuse, false knowledge discovery, and privacy issues. Big data is defined by volume. Big data should not be confused with data reuse: data can be big without being reused for another purpose, for example, in omics. Inversely, data can be reused without being necessarily big, for example, secondary use of Electronic Medical Records (EMR) data.
A Meta-Analysis of the Reliability of Free and For-Pay Big Five Scales.
Hamby, Tyler; Taylor, Wyn; Snowden, Audrey K; Peterson, Robert A
2016-01-01
The present study meta-analytically compared coefficient alpha reliabilities reported for free and for-pay Big Five scales. We collected 288 studies from five previous meta-analyses of Big Five traits and harvested 1,317 alphas from these studies. We found that free and for-pay scales measuring Big Five traits possessed comparable reliabilities. However, after we controlled for the numbers of items in the scales with the Spearman-Brown formula, we found that free scales possessed significantly higher alpha coefficients than for-pay scales for each of the Big Five traits. Thus, the study offers initial evidence that Big Five scales that are free more efficiently measure these traits for research purposes than do for-pay scales.
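The length correction referred to above is conventionally done with the Spearman-Brown prophecy formula; a small sketch is given below, with the example numbers being purely illustrative.

```python
# Spearman-Brown prophecy formula: predicted reliability when a scale is
# lengthened (or shortened) by a factor k = n_target_items / n_actual_items.
def spearman_brown(alpha: float, n_actual_items: int, n_target_items: int) -> float:
    k = n_target_items / n_actual_items
    return (k * alpha) / (1.0 + (k - 1.0) * alpha)

# Example: a 10-item scale with alpha = 0.78, projected to a 20-item length.
print(round(spearman_brown(0.78, 10, 20), 3))   # ~0.876
```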
Optimisation techniques in vaginal cuff brachytherapy.
Tuncel, N; Garipagaoglu, M; Kizildag, A U; Andic, F; Toy, A
2009-11-01
The aim of this study was to explore whether an in-house dosimetry protocol and optimisation method are able to produce a homogeneous dose distribution in the target volume, and how often optimisation is required in vaginal cuff brachytherapy. Treatment planning was carried out for 109 fractions in 33 patients who underwent high dose rate iridium-192 (Ir(192)) brachytherapy using Fletcher ovoids. Dose prescription and normalisation were performed to catheter-oriented lateral dose points (dps) within a range of 90-110% of the prescribed dose. The in-house vaginal apex point (Vk), alternative vaginal apex point (Vk'), International Commission on Radiation Units and Measurements (ICRU) rectal point (Rg) and bladder point (Bl) doses were calculated. Time-position optimisations were made considering dps, Vk and Rg doses. Keeping the Vk dose higher than 95% and the Rg dose less than 85% of the prescribed dose was intended. Target dose homogeneity, optimisation frequency and the relationship between prescribed dose, Vk, Vk', Rg and ovoid diameter were investigated. The mean target dose was 99+/-7.4% of the prescription dose. Optimisation was required in 92 out of 109 (83%) fractions. Ovoid diameter had a significant effect on Rg (p = 0.002), Vk (p = 0.018), Vk' (p = 0.034), minimum dps (p = 0.021) and maximum dps (p<0.001). Rg, Vk and Vk' doses with 2.5 cm diameter ovoids were significantly higher than with 2 cm and 1.5 cm ovoids. Catheter-oriented dose point normalisation provided a homogeneous dose distribution with a 99+/-7.4% mean dose within the target volume, requiring time-position optimisation.
Using Optimisation Techniques to Granulise Rough Set Partitions
NASA Astrophysics Data System (ADS)
Crossingham, Bodie; Marwala, Tshilidzi
2007-11-01
This paper presents an approach to optimise rough set partition sizes using various optimisation techniques. Three optimisation techniques are implemented to perform the granularisation process, namely, genetic algorithm (GA), hill climbing (HC) and simulated annealing (SA). These optimisation methods maximise the classification accuracy of the rough sets. The proposed rough set partition method is tested on a set of demographic properties of individuals obtained from the South African antenatal survey. The three techniques are compared in terms of their computational time, accuracy and number of rules produced when applied to the Human Immunodeficiency Virus (HIV) data set. The optimised methods' results are compared to a well-known non-optimised discretisation method, equal-width-bin partitioning (EWB). The accuracies achieved after optimising the partitions using GA, HC and SA are 66.89%, 65.84% and 65.48% respectively, compared to the EWB accuracy of 59.86%. In addition to rough sets providing the plausibilities of the estimated HIV status, they also provide the linguistic rules describing how the demographic parameters drive the risk of HIV.
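The baseline method (EWB) and the flavour of the search-based refinement can be sketched as follows. The scoring function and the synthetic data are stand-ins; in the study each candidate partition is scored by the classification accuracy of the induced rough-set rules, and the search is performed by GA, HC or SA rather than the toy hill climber shown here.

```python
# Equal-width-bin partitioning (the EWB baseline) plus a toy hill-climbing
# refinement of the cut points on synthetic one-dimensional data.
import random

def equal_width_cuts(lo, hi, n_bins):
    width = (hi - lo) / n_bins
    return [lo + width * i for i in range(1, n_bins)]

def score(cuts, data, labels):
    # Placeholder purity proxy: fraction of adjacent (sorted) points with
    # different labels that fall on opposite sides of some cut.  In the
    # study this role is played by rough-set classification accuracy.
    pairs = sorted(zip(data, labels))
    changes = [(x1, x2) for (x1, y1), (x2, y2) in zip(pairs, pairs[1:]) if y1 != y2]
    if not changes:
        return 1.0
    return sum(any(x1 < c <= x2 for c in cuts) for x1, x2 in changes) / len(changes)

random.seed(0)
data = [random.uniform(0, 100) for _ in range(200)]
labels = [int(35 < x <= 70) for x in data]          # synthetic two-band outcome

cuts = equal_width_cuts(0, 100, 4)                   # EWB baseline: cuts at 25, 50, 75
best = score(cuts, data, labels)
for _ in range(500):                                 # toy hill climbing on cut positions
    cand = sorted(c + random.gauss(0, 2.0) for c in cuts)
    s = score(cand, data, labels)
    if s > best:
        cuts, best = cand, s

print("refined cuts:", [round(c, 1) for c in cuts], "score:", round(best, 3))
```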
The Study of “big data” to support internal business strategists
NASA Astrophysics Data System (ADS)
Ge, Mei
2018-01-01
How is big data different from previous data analysis systems? The primary purpose behind the traditional small data analytics that all managers are more or less familiar with is to support internal business strategies. But big data also offers a promising new dimension: to discover new opportunities to offer customers high-value products and services. This study focuses on introducing some of the strategies that big data can support. Business decisions using big data can also draw on several areas of analytics, including customer satisfaction, customer journeys, supply chains, risk management, competitive intelligence, pricing, discovery and experimentation, and facilitating big data discovery.
NASA Astrophysics Data System (ADS)
Kaliszewski, M.; Mazuro, P.
2016-09-01
The Simulated Annealing method of optimisation for the sealing piston ring geometry is tested. The aim of the optimisation is to develop a ring geometry which exerts the demanded pressure on a cylinder while being bent to fit the cylinder. A method of FEM analysis of an arbitrary piston ring geometry is applied in ANSYS software. The demanded pressure function (based on formulae presented by A. Iskra) as well as the objective function are introduced. A geometry definition constructed from polynomials in a radial coordinate system is delivered and discussed. A possible application of the Simulated Annealing method to a piston ring optimisation task is proposed and visualised. Difficulties leading to a possible lack of convergence of the optimisation are presented. An example of an unsuccessful optimisation performed in APDL is discussed. A possible line of further optimisation improvement is proposed.
Cultural-based particle swarm for dynamic optimisation problems
NASA Astrophysics Data System (ADS)
Daneshyari, Moayed; Yen, Gary G.
2012-07-01
Many practical optimisation problems involve uncertainties, among which a significant number belong to the dynamic optimisation problem (DOP) category, in which the fitness function changes through time. In this study, we propose a cultural-based particle swarm optimisation (PSO) to solve DOPs. A cultural framework is adopted, incorporating the required information from the PSO into five sections of the belief space, namely situational, temporal, domain, normative and spatial knowledge. The stored information is used to detect changes in the environment and assists the response to change through a diversity-based repulsion among particles and migration among swarms in the population space; it also helps in selecting the leading particles at three different levels: personal, swarm and global. Comparison of the proposed heuristic over several difficult dynamic benchmark problems demonstrates better or equal performance with respect to most other selected state-of-the-art dynamic PSO heuristics.
NASA Astrophysics Data System (ADS)
Bhansali, Gaurav; Singh, Bhanu Pratap; Kumar, Rajesh
2016-09-01
In this paper, the problem of microgrid optimisation with storage is addressed in a broader way rather than being confined to loss minimisation. Unitised regenerative fuel cell (URFC) systems have been studied and employed in microgrids to store energy and feed it back into the system when required. A value function depending on line losses, URFC system operational cost and stored energy at the end of the day is defined here. The function is highly complex, nonlinear and multidimensional in nature. Therefore, heuristic optimisation techniques in combination with load flow analysis are used here to resolve the network and time-domain complexity associated with the problem. Particle swarm optimisation with the forward/backward sweep algorithm ensures optimal operation of the microgrid, thereby minimising its operational cost. Results are shown and are found to improve consistently as the solution strategy evolves.
Optimal design and operation of a photovoltaic-electrolyser system using particle swarm optimisation
NASA Astrophysics Data System (ADS)
Sayedin, Farid; Maroufmashat, Azadeh; Roshandel, Ramin; Khavas, Sourena Sattari
2016-07-01
In this study, hydrogen generation is maximised by optimising the size and the operating conditions of an electrolyser (EL) directly connected to a photovoltaic (PV) module at different irradiance. Due to the variations of maximum power points of the PV module during a year and the complexity of the system, a nonlinear approach is considered. A mathematical model has been developed to determine the performance of the PV/EL system. The optimisation methodology presented here is based on the particle swarm optimisation algorithm. By this method, for the given number of PV modules, the optimal size and operating condition of a PV/EL system are achieved. The approach can be applied for different sizes of PV systems, various ambient temperatures and different locations with various climatic conditions. The results show that for the given location and the PV system, the energy transfer efficiency of the PV/EL system can reach up to 97.83%.
Optimisation of confinement in a fusion reactor using a nonlinear turbulence model
NASA Astrophysics Data System (ADS)
Highcock, E. G.; Mandell, N. R.; Barnes, M.
2018-04-01
The confinement of heat in the core of a magnetic fusion reactor is optimised using a multidimensional optimisation algorithm. For the first time in such a study, the loss of heat due to turbulence is modelled at every stage using first-principles nonlinear simulations which accurately capture the turbulent cascade and large-scale zonal flows. The simulations utilise a novel approach, with gyrofluid treatment of the small-scale drift waves and gyrokinetic treatment of the large-scale zonal flows. A simple near-circular equilibrium with standard parameters is chosen as the initial condition. The figure of merit, fusion power per unit volume, is calculated, and then two control parameters, the elongation and triangularity of the outer flux surface, are varied, with the algorithm seeking to optimise the chosen figure of merit. A twofold increase in the plasma power per unit volume is achieved by moving to higher elongation and strongly negative triangularity.
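The outer optimisation loop described above can be sketched with a derivative-free optimiser adjusting the two shape controls; the figure-of-merit function below is a smooth placeholder with a maximum at moderately high elongation and negative triangularity, since the real evaluation requires a nonlinear turbulence simulation at every step.

```python
# Outer-loop sketch: a derivative-free optimiser adjusts two shape controls
# (elongation kappa, triangularity delta) to maximise a figure of merit.
# fusion_power_density() is a smooth placeholder, not a plasma model.
from scipy.optimize import minimize

def fusion_power_density(params):
    kappa, delta = params
    # Placeholder surrogate peaking at kappa ~ 1.8 and delta ~ -0.4.
    return 1.0 - 0.3 * (kappa - 1.8) ** 2 - 0.8 * (delta + 0.4) ** 2

def objective(params):
    return -fusion_power_density(params)      # minimise the negative

result = minimize(objective, x0=[1.0, 0.0], method="Nelder-Mead",
                  options={"xatol": 1e-3, "fatol": 1e-4})
kappa_opt, delta_opt = result.x
print(f"optimal elongation ~ {kappa_opt:.2f}, triangularity ~ {delta_opt:.2f}")
```

In the study itself each objective evaluation is a full gyrofluid/gyrokinetic turbulence simulation, which is what makes the choice of optimisation algorithm and the number of evaluations so critical.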
Koo, B K; O'Connell, P E
2006-04-01
The site-specific land use optimisation methodology, suggested by the authors in the first part of this two-part paper, has been applied to the River Kennet catchment at Marlborough, Wiltshire, UK, for a case study. The Marlborough catchment (143 km²) is an agriculture-dominated rural area over a deep chalk aquifer that is vulnerable to nitrate pollution from agricultural diffuse sources. For evaluation purposes, the catchment was discretised into a network of 1 km × 1 km grid cells. For each of the arable-land grid cells, seven land use alternatives (four arable-land alternatives and three grassland alternatives) were evaluated for their environmental and economic potential. For environmental evaluation, nitrate leaching rates of land use alternatives were estimated using SHETRAN simulations and groundwater pollution potential was evaluated using the DRASTIC index. For economic evaluation, economic gross margins were estimated using a simple agronomic model based on nitrogen response functions and agricultural land classification grades. In order to see whether the site-specific optimisation is efficient at the catchment scale, land use optimisation was carried out for four optimisation schemes (i.e. using four sets of criterion weights). Consequently, four land use scenarios were generated and the site-specifically optimised land use scenario was evaluated as the best compromise solution between long-term nitrate pollution and agronomy at the catchment scale.
Pandey, Sonia; Swamy, S M Vijayendra; Gupta, Arti; Koli, Akshay; Patel, Swagat; Maulvi, Furqan; Vyas, Bhavin
2018-04-29
To optimise Eudragit/Surelease®-coated pH-sensitive pellets for controlled and targeted drug delivery to the colon tissue and to avoid the frequent high dosing and associated side effects which restrict its use in colorectal cancer therapy. The pellets were prepared using an extrusion-spheronisation technique. Box-Behnken and 3² full factorial designs were applied to optimise the process parameters (extruder sieve size, spheroniser speed and spheroniser time) and the coating levels (%w/v of Eudragit S100/Eudragit L100 and Surelease®), respectively, to achieve smooth pellets of optimised size with sustained drug delivery and without prior drug release in the upper gastrointestinal tract (GIT). The design proposed the optimised batch by selecting the independent variables at extruder sieve size (X1 = 1 mm), spheroniser speed (X2 = 900 revolutions per minute, rpm) and spheroniser time (X3 = 15 min) to achieve a pellet size of 0.96 mm, an aspect ratio of 0.98 and a roundness of 97.42%. The 16% w/v coating strength of Surelease® and 13% w/v coating strength of Eudragit showed pH-dependent sustained release up to 22.35 h (t99%). The organ distribution study showed the absence of the drug in the upper GIT tissue and the presence of a high level of capecitabine in the caecum and colon tissue. Thus, the outer Eudragit coat prevents the release of drug in the stomach and the inner Surelease® coat provides sustained drug release in the colon tissue. The study demonstrates the potential of the optimised Eudragit/Surelease®-coated capecitabine pellets as an effective colon-targeted delivery system to avoid the frequent high dosing and associated systemic side effects of the drug.
The Big Six Information Skills as a Metacognitive Scaffold: A Case Study.
ERIC Educational Resources Information Center
Wolf, Sara; Brush, Thomas; Saye, John
2003-01-01
Discussion of the Big Six information skills model focuses on a case study that examines the effect of Big6 on a class of eighth-grade students doing research on the African-American Civil Rights movement. Topics include information problem solving; metacognition; scaffolding; and Big6 as a metacognitive scaffold. (Author/LRW)
ERIC Educational Resources Information Center
Pettinger, Clare; Parsons, Julie M.; Cunningham, Miranda; Withers, Lyndsey; D'Aprano, Gia; Letherby, Gayle; Sutton, Carole; Whiteford, Andrew; Ayres, Richard
2017-01-01
Objective: High levels of social and economic deprivation are apparent in many UK cities, where there is evidence of certain "marginalised" communities suffering disproportionately from poor nutrition, threatening health. Finding ways to engage with these communities is essential to identify strategies to optimise wellbeing and life…
Santonastaso, Giovanni Francesco; Bortone, Immacolata; Chianese, Simeone; Di Nardo, Armando; Di Natale, Michele; Erto, Alessandro; Karatza, Despina; Musmarra, Dino
2017-09-19
The following paper presents a method to optimise a discontinuous permeable adsorptive barrier (PAB-D). This method is based on the comparison of different PAB-D configurations obtained by changing some of the main PAB-D design parameters. In particular, the well diameters, the distance between two consecutive passive wells and the distance between two consecutive well lines were varied, and a cost analysis for each configuration was carried out in order to define the best performing and most cost-effective PAB-D configuration. As a case study, a benzene-contaminated aquifer located in an urban area in the north of Naples (Italy) was considered. The PAB-D configuration with a well diameter of 0.8 m resulted in the best optimised layout in terms of performance and cost-effectiveness. Moreover, in order to identify the best configuration for the remediation of the studied aquifer, a comparison with a continuous permeable adsorptive barrier (PAB-C) was added. In particular, this showed a 40% reduction of the total remediation costs when using the optimised PAB-D.
NASA Astrophysics Data System (ADS)
Hsu, Chih-Ming
2014-12-01
Portfolio optimisation is an important issue in the field of investment/financial decision-making and has received considerable attention from both researchers and practitioners. However, besides portfolio optimisation, a complete investment procedure should also include the selection of profitable investment targets and determine the optimal timing for buying/selling the investment targets. In this study, an integrated procedure using data envelopment analysis (DEA), artificial bee colony (ABC) and genetic programming (GP) is proposed to resolve a portfolio optimisation problem. The proposed procedure is evaluated through a case study on investing in stocks in the semiconductor sub-section of the Taiwan stock market for 4 years. The potential average 6-month return on investment of 9.31% from 1 November 2007 to 31 October 2011 indicates that the proposed procedure can be considered a feasible and effective tool for making outstanding investment plans, and thus making profits in the Taiwan stock market. Moreover, it is a strategy that can help investors to make profits even when the overall stock market suffers a loss.
NASA Astrophysics Data System (ADS)
Fritzsche, Matthias; Kittel, Konstantin; Blankenburg, Alexander; Vajna, Sándor
2012-08-01
The focus of this paper is to present a method of multidisciplinary design optimisation based on the autogenetic design theory (ADT), which provides methods that are partially implemented in the optimisation software described here. The main thesis of the ADT is that biological evolution and the process of developing products are substantially similar, i.e. procedures from biological evolution can be transferred into product development. In order to fulfil requirements and boundary conditions of any kind (which may change at any time), both biological evolution and product development look for appropriate solution possibilities in a certain area and try to optimise those that are actually promising by varying parameters and combinations of these solutions. As the time necessary for multidisciplinary design optimisations is a critical aspect in product development, ways to distribute the optimisation process, making effective use of unused computing capacity, can reduce the optimisation time drastically. Finally, a practical example shows how ADT methods and distributed optimisation are applied to improve a product.
Big data are coming to psychiatry: a general introduction.
Monteith, Scott; Glenn, Tasha; Geddes, John; Bauer, Michael
2015-12-01
Big data are coming to the study of bipolar disorder and all of psychiatry. Data are coming from providers and payers (including EMR, imaging, insurance claims and pharmacy data), from omics (genomic, proteomic, and metabolomic data), and from patients and non-providers (data from smart phone and Internet activities, sensors and monitoring tools). Analysis of the big data will provide unprecedented opportunities for exploration, descriptive observation, hypothesis generation, and prediction, and the results of big data studies will be incorporated into clinical practice. Technical challenges remain in the quality, analysis and management of big data. This paper discusses some of the fundamental opportunities and challenges of big data for psychiatry.
Systemic solutions for multi-benefit water and environmental management.
Everard, Mark; McInnes, Robert
2013-09-01
The environmental and financial costs of inputs to, and unintended consequences arising from narrow consideration of outputs from, water and environmental management technologies highlight the need for low-input solutions that optimise outcomes across multiple ecosystem services. Case studies examining the inputs and outputs associated with several ecosystem-based water and environmental management technologies reveal a range from those that differ little from conventional electro-mechanical engineering techniques through methods, such as integrated constructed wetlands (ICWs), designed explicitly as low-input systems optimising ecosystem service outcomes. All techniques present opportunities for further optimisation of outputs, and hence for greater cumulative public value. We define 'systemic solutions' as "…low-input technologies using natural processes to optimise benefits across the spectrum of ecosystem services and their beneficiaries". They contribute to sustainable development by averting unintended negative impacts and optimising benefits to all ecosystem service beneficiaries, increasing net economic value. Legacy legislation addressing issues in a fragmented way, associated 'ring-fenced' budgets and established management assumptions represent obstacles to implementing 'systemic solutions'. However, flexible implementation of legacy regulations recognising their primary purpose, rather than slavish adherence to detailed sub-clauses, may achieve greater overall public benefit through optimisation of outcomes across ecosystem services. Systemic solutions are not a panacea if applied merely as 'downstream' fixes, but are part of, and a means to accelerate, broader culture change towards more sustainable practice. This necessarily entails connecting a wider network of interests in the formulation and design of mutually-beneficial systemic solutions, including for example spatial planners, engineers, regulators, managers, farming and other businesses, and researchers working on ways to quantify and optimise delivery of ecosystem services. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Munk, David J.; Kipouros, Timoleon; Vio, Gareth A.; Steven, Grant P.; Parks, Geoffrey T.
2017-11-01
Recently, the study of microfluidic devices has gained much interest in various fields from biology to engineering. In the constant development cycle, the need to optimise the topology of the interior of these devices, where there are two or more optimality criteria, is always present. In this work, twin physical situations, whereby optimal fluid mixing in the form of vorticity maximisation is accompanied by the requirement that the casing in which the mixing takes place has the best structural performance in terms of the greatest specific stiffness, are considered. In the steady state of mixing this also means that the stresses in the casing are as uniform as possible, thus giving a desired operating life with minimum weight. The ultimate aim of this research is to couple two key disciplines, fluids and structures, into a topology optimisation framework which shows fast convergence for multidisciplinary optimisation problems. This is achieved by developing a bi-directional evolutionary structural optimisation algorithm that is directly coupled to the Lattice Boltzmann method, used for simulating the flow in the microfluidic device, for the objectives of minimum compliance and maximum vorticity. The need to explore larger design spaces and to produce innovative designs makes meta-heuristic algorithms, such as genetic algorithms, particle swarms and Tabu Searches, less efficient for this task. The multidisciplinary topology optimisation framework presented in this article is shown to increase the stiffness of the structure from the datum case and produce physically acceptable designs. Furthermore, the topology optimisation method outperforms a Tabu Search algorithm in designing the baffle to maximise the mixing of the two fluids.
Big Data and Large Sample Size: A Cautionary Note on the Potential for Bias
Chambers, David A.; Glasgow, Russell E.
2014-01-01
A number of commentaries have suggested that large studies are more reliable than smaller studies and there is a growing interest in the analysis of "big data" that integrates information from many thousands of persons and/or different data sources. We consider a variety of biases that are likely in the era of big data, including sampling error, measurement error, multiple comparisons errors, aggregation error, and errors associated with the systematic exclusion of information. Using examples from epidemiology, health services research, studies on determinants of health, and clinical trials, we conclude that it is necessary to exercise greater caution to be sure that big sample size does not lead to big inferential errors. Despite the advantages of big studies, large sample size can magnify the bias associated with error resulting from sampling or study design. PMID:25043853
Comparative Validity of Brief to Medium-Length Big Five and Big Six Personality Questionnaires
ERIC Educational Resources Information Center
Thalmayer, Amber Gayle; Saucier, Gerard; Eigenhuis, Annemarie
2011-01-01
A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five inventories when conducting a study and are…
Optimisation of the supercritical extraction of toxic elements in fish oil.
Hajeb, P; Jinap, S; Shakibazadeh, Sh; Afsah-Hejri, L; Mohebbi, G H; Zaidul, I S M
2014-01-01
This study aims to optimise the operating conditions for the supercritical fluid extraction (SFE) of toxic elements from fish oil. The SFE operating parameters of pressure, temperature, CO₂ flow rate and extraction time were optimised using a central composite design (CCD) of response surface methodology (RSM). High coefficients of determination (R²) (0.897-0.988) for the predicted response surface models confirmed a satisfactory adjustment of the polynomial regression models with the operating conditions. The results showed that the linear and quadratic terms of pressure and temperature were the most significant (p < 0.05) variables affecting the overall responses. The optimum conditions for the simultaneous elimination of toxic elements comprised a pressure of 61 MPa, a temperature of 39.8°C, a CO₂ flow rate of 3.7 ml min⁻¹ and an extraction time of 4 h. These optimised SFE conditions were able to produce fish oil with the contents of lead, cadmium, arsenic and mercury reduced by up to 98.3%, 96.1%, 94.9% and 93.7%, respectively. The fish oil extracted under the optimised SFE operating conditions was of good quality in terms of its fatty acid constituents.
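The response-surface step of such a study can be illustrated in miniature: fit a second-order polynomial in two coded factors to measured responses by least squares and locate the fitted stationary point. The design points and removal percentages below are made up for illustration and are not the study's data.

```python
# Response-surface step in miniature: fit the second-order model
# y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
# to illustrative removal percentages at coded levels of two factors
# (pressure, temperature), then locate the fitted stationary point.
import numpy as np

X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-1, 0], [1, 0], [0, -1], [0, 1], [0, 0]])     # coded design points
y = np.array([80.1, 90.3, 84.5, 93.2, 85.0, 94.1, 88.2, 90.0, 95.5])  # made-up responses

x1, x2 = X[:, 0], X[:, 1]
A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
b = np.linalg.lstsq(A, y, rcond=None)[0]

# stationary point: set the gradient of the fitted quadratic to zero
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
grad0 = np.array([b[1], b[2]])
stationary = np.linalg.solve(H, -grad0)
print("fitted coefficients:", b.round(3))
print("stationary point (coded units):", stationary.round(3))
```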
The Role of Gender in Youth Mentoring Relationship Formation and Duration
ERIC Educational Resources Information Center
Rhodes, Jean; Lowe, Sarah R.; Litchfield, Leon; Walsh-Samp, Kathy
2008-01-01
The role of gender in shaping the course and quality of adult-youth mentoring relationships was examined. The study drew on data from a large, random assignment evaluation of Big Brothers Big Sisters of America (BBSA) programs [Grossman, J. B., & Tierney, J. P. (1998). Does mentoring work? An impact study of the Big Brothers Big Sisters program.…
Optimisation of cavity parameters for lasers based on AlGaInAsP/InP solid solutions (λ = 1470 nm)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veselov, D A; Ayusheva, K R; Shashkin, I S
2015-10-31
We have studied the effect of laser cavity parameters on the light–current characteristics of lasers based on the AlGaInAs/GaInAsP/InP solid solution system that emit in the spectral range 1400 – 1600 nm. It has been shown that optimisation of cavity parameters (chip length and front facet reflectivity) allows one to improve heat removal from the laser, without changing other laser characteristics. An increase in the maximum output optical power of the laser by 0.5 W has been demonstrated due to cavity design optimisation. (lasers)
Aungkulanon, Pasura; Luangpaiboon, Pongchanun
2016-01-01
Response surface methods via first- or second-order models are important in manufacturing processes. This study, however, proposes differently structured mechanisms of vertical transportation systems (VTS) embedded in a shuffled frog leaping-based approach. There are three VTS scenarios: a motion reaching a normal operating velocity, and motions both reaching and not reaching the transitional motion. These variants were performed to simultaneously inspect multiple responses affected by machining parameters in multi-pass turning processes. The numerical results of two machining optimisation problems demonstrated the high performance measures of the proposed methods when compared to other optimisation algorithms for an actual deep cut design.
Optimisation in radiotherapy. III: Stochastic optimisation algorithms and conclusions.
Ebert, M
1997-12-01
This is the final article in a three part examination of optimisation in radiotherapy. Previous articles have established the bases and form of the radiotherapy optimisation problem, and examined certain types of optimisation algorithm, namely, those which perform some form of ordered search of the solution space (mathematical programming), and those which attempt to find the closest feasible solution to the inverse planning problem (deterministic inversion). The current paper examines algorithms which search the space of possible irradiation strategies by stochastic methods. The resulting iterative search methods move about the solution space by sampling random variates, which gradually become more constricted as the algorithm converges upon the optimal solution. This paper also discusses the implementation of optimisation in radiotherapy practice.
Application and Prospect of Big Data in Water Resources
NASA Astrophysics Data System (ADS)
Xi, Danchi; Xu, Xinyi
2017-04-01
Because of developed information technology and affordable data storage, we have entered the era of data explosion. The term "Big Data" and the technology related to it have been created and are commonly applied in many fields. However, academic studies have only recently turned their attention to Big Data applications in water resources. As a result, water resource Big Data technology has not been fully developed. This paper introduces the concept of Big Data and its key technologies, including the Hadoop system and MapReduce. In addition, this paper focuses on the significance of applying Big Data in water resources and summarizes prior research by others. Most studies in this field only set up a theoretical framework, but we define "Water Big Data" and explain its tridimensional properties: the time dimension, the spatial dimension and the intelligent dimension. Based on HBase, a classification system for Water Big Data is introduced: hydrology data, ecology data and socio-economic data. Then, after analyzing the challenges in water resources management, a series of solutions using Big Data technologies, such as data mining and web crawlers, are proposed. Finally, the prospect of applying Big Data in water resources is discussed; it can be predicted that, as Big Data technology keeps developing, "3D" (Data Driven Decision) will be utilized more in water resources management in the future.
An improved design method based on polyphase components for digital FIR filters
NASA Astrophysics Data System (ADS)
Kumar, A.; Kuldeep, B.; Singh, G. K.; Lee, Heung No
2017-11-01
This paper presents an efficient design of digital finite impulse response (FIR) filters based on polyphase components and swarm optimisation techniques (SOTs). For this purpose, the design problem is formulated as the mean square error between the actual and ideal responses in the frequency domain, using the polyphase components of a prototype filter. To achieve a more precise frequency response at some specified frequency, fractional derivative constraints (FDCs) have been applied, and optimal FDCs are computed using SOTs such as the cuckoo search and modified cuckoo search algorithms. A comparative study with well-proven swarm optimisers, namely particle swarm optimisation and the artificial bee colony algorithm, is made. The performance of the proposed method is evaluated using several important attributes of a filter, and the comparative study evidences its effectiveness for FIR filter design.
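The following minimal Python sketch shows the kind of mean-square frequency-response error that such designs minimise. The filter length, band edge and candidate coefficients are assumptions, and the polyphase decomposition and fractional derivative constraints of the paper are omitted; a swarm optimiser would repeatedly call this error function as its fitness.

```python
import numpy as np
from scipy.signal import freqz

def lowpass_mse(h, cutoff=0.4, n_grid=512):
    """Mean-square error between the actual and ideal low-pass magnitude responses.

    h      : FIR coefficients of the candidate (prototype) filter
    cutoff : normalised cut-off frequency (1.0 = Nyquist), assumed value
    """
    w, H = freqz(h, worN=n_grid)                    # actual frequency response
    ideal = (w <= cutoff * np.pi).astype(float)     # ideal brick-wall magnitude
    return np.mean((np.abs(H) - ideal) ** 2)

# Example: score a crude 21-tap windowed-sinc candidate (illustrative only)
n = np.arange(-10, 11)
h0 = 0.4 * np.sinc(0.4 * n) * np.hamming(21)
print("MSE of candidate filter:", lowpass_mse(h0))
```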
Robustness analysis of bogie suspension components Pareto optimised values
NASA Astrophysics Data System (ADS)
Mousavi Bideleh, Seyed Milad
2017-08-01
The bogie suspension system of high-speed trains can significantly affect vehicle performance. Multiobjective optimisation problems are often formulated and solved to find the Pareto optimised values of the suspension components and improve cost efficiency in railway operations from different perspectives. Uncertainties in the design parameters of the suspension system can negatively influence the dynamic behaviour of railway vehicles. In this regard, robustness analysis of a bogie dynamics response with respect to uncertainties in the suspension design parameters is considered. A one-car railway vehicle model with 50 degrees of freedom and wear/comfort Pareto optimised values of bogie suspension components is chosen for the analysis. Longitudinal and lateral primary stiffnesses, longitudinal and vertical secondary stiffnesses, as well as yaw damping are considered as five design parameters. The effects of parameter uncertainties on wear, ride comfort, track shift force, stability, and risk of derailment are studied by varying the design parameters around their respective Pareto optimised values according to a lognormal distribution with different coefficients of variation (COVs). The robustness analysis is carried out based on the maximum entropy concept. The multiplicative dimensional reduction method is utilised to simplify the calculation of fractional moments and improve the computational efficiency. The results showed that the dynamic response of the vehicle with wear/comfort Pareto optimised values of bogie suspension is robust against uncertainties in the design parameters and the probability of failure is small for parameter uncertainties with COV up to 0.1.
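The study varies each suspension parameter around its Pareto optimised value according to a lognormal distribution with a given COV. A minimal Python sketch of generating such perturbed parameter sets is given below; the nominal values are placeholders, not the vehicle model's actual optimised parameters.

```python
import numpy as np

def lognormal_perturbations(nominal, cov, n_samples, seed=0):
    """Draw lognormal samples with mean equal to the nominal value and the requested COV.

    For a lognormal variable, COV^2 = exp(sigma^2) - 1, and the mean equals
    exp(mu + sigma^2 / 2), which fixes mu once sigma is known.
    """
    sigma = np.sqrt(np.log(1.0 + cov**2))
    mu = np.log(nominal) - 0.5 * sigma**2
    rng = np.random.default_rng(seed)
    return rng.lognormal(mean=mu, sigma=sigma, size=(n_samples, len(nominal)))

# Placeholder nominal values for the five design parameters (stiffnesses, damping)
nominal = np.array([5.0e6, 8.0e6, 2.0e5, 4.0e5, 1.0e5])
samples = lognormal_perturbations(nominal, cov=0.1, n_samples=1000)
print(samples.mean(axis=0) / nominal)                 # close to 1: target mean preserved
print(samples.std(axis=0) / samples.mean(axis=0))     # close to the requested COV
```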
Optimising the Blended Learning Environment: The Arab Open University Experience
ERIC Educational Resources Information Center
Hamdi, Tahrir; Abu Qudais, Mohammed
2018-01-01
This paper will offer some insights into possible ways to optimise the blended learning environment based on experience with this modality of teaching at Arab Open University/Jordan branch and also by reflecting upon the results of several meta-analytical studies, which have shown blended learning environments to be more effective than their face…
Mentoring in Schools: An Impact Study of Big Brothers Big Sisters School-Based Mentoring
ERIC Educational Resources Information Center
Herrera, Carla; Grossman, Jean Baldwin; Kauh, Tina J.; McMaken, Jennifer
2011-01-01
This random assignment impact study of Big Brothers Big Sisters School-Based Mentoring involved 1,139 9- to 16-year-old students in 10 cities nationwide. Youth were randomly assigned to either a treatment group (receiving mentoring) or a control group (receiving no mentoring) and were followed for 1.5 school years. At the end of the first school…
Concurrence of big data analytics and healthcare: A systematic review.
Mehta, Nishita; Pandit, Anil
2018-06-01
The application of Big Data analytics in healthcare has immense potential for improving the quality of care, reducing waste and error, and reducing the cost of care. This systematic review of the literature aims to determine the scope of Big Data analytics in healthcare, including its applications and the challenges in its adoption, and to identify strategies to overcome those challenges. A systematic search of articles was carried out on five major scientific databases: ScienceDirect, PubMed, Emerald, IEEE Xplore and Taylor & Francis. Articles on Big Data analytics in healthcare published in the English-language literature from January 2013 to January 2018 were considered. Descriptive articles and usability studies of Big Data analytics in healthcare and medicine were selected. Two reviewers independently extracted information on definitions of Big Data analytics; sources and applications of Big Data analytics in healthcare; and challenges and strategies to overcome them. A total of 58 articles were selected as per the inclusion criteria and analyzed. The analyses of these articles found that: (1) researchers lack consensus about the operational definition of Big Data in healthcare; (2) Big Data in healthcare comes from internal sources within hospitals or clinics as well as external sources including government, laboratories, pharma companies, data aggregators, medical journals etc.; (3) natural language processing (NLP) is the most widely used Big Data analytical technique in healthcare, and most of the processing tools used for analytics are based on Hadoop; (4) Big Data analytics finds application in clinical decision support, optimization of clinical operations and reduction of the cost of care; and (5) the major challenge in the adoption of Big Data analytics is the non-availability of evidence of its practical benefits in healthcare. This review unveils a paucity of information on evidence of real-world use of Big Data analytics in healthcare, because the usability studies have taken only a qualitative approach, which describes potential benefits but does not include quantitative evaluation. Also, the majority of the studies were from developed countries, which brings out the need to promote research on healthcare Big Data analytics in developing countries. Copyright © 2018 Elsevier B.V. All rights reserved.
Metaheuristic optimisation methods for approximate solving of singular boundary value problems
NASA Astrophysics Data System (ADS)
Sadollah, Ali; Yadav, Neha; Gao, Kaizhou; Su, Rong
2017-07-01
This paper presents a novel approximation technique based on metaheuristics and weighted residual function (WRF) for tackling singular boundary value problems (BVPs) arising in engineering and science. With the aid of certain fundamental concepts of mathematics, Fourier series expansion, and metaheuristic optimisation algorithms, singular BVPs can be approximated as an optimisation problem with boundary conditions as constraints. The target is to minimise the WRF (i.e. error function) constructed in approximation of BVPs. The scheme involves generational distance metric for quality evaluation of the approximate solutions against exact solutions (i.e. error evaluator metric). Four test problems including two linear and two non-linear singular BVPs are considered in this paper to check the efficiency and accuracy of the proposed algorithm. The optimisation task is performed using three different optimisers including the particle swarm optimisation, the water cycle algorithm, and the harmony search algorithm. Optimisation results obtained show that the suggested technique can be successfully applied for approximate solving of singular BVPs.
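The following Python sketch illustrates the weighted residual idea on one simple singular BVP. The test problem, the cosine trial basis and the use of differential evolution as the metaheuristic are assumptions made here for illustration; the paper's own test problems and optimisers (particle swarm, water cycle, harmony search) differ.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Illustrative singular BVP:  y'' + (1/x) y' = -1 on (0, 1],  y'(0) = 0, y(1) = 0.
# Exact solution: y(x) = (1 - x^2) / 4.
# Trial solution: y(x) = sum_k c_k cos((2k-1) pi x / 2), which satisfies both
# boundary conditions for any coefficients c_k.
K = 4
x = np.linspace(0.05, 1.0, 60)                 # collocation points (avoid the singular x = 0)
freqs = (2 * np.arange(1, K + 1) - 1) * np.pi / 2

def weighted_residual(c):
    y   = np.cos(np.outer(x, freqs)) @ c
    dy  = -np.sin(np.outer(x, freqs)) @ (c * freqs)
    d2y = -np.cos(np.outer(x, freqs)) @ (c * freqs**2)
    residual = d2y + dy / x + 1.0              # vanishes for the exact solution
    return np.sum(residual**2)                 # the WRF (error function) to minimise

result = differential_evolution(weighted_residual, bounds=[(-1, 1)] * K, seed=1)
y_approx = np.cos(np.outer(x, freqs)) @ result.x
print("max abs error vs exact:", np.max(np.abs(y_approx - (1 - x**2) / 4)))
```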
Gu, Fu; Ma, Buqing; Guo, Jianfeng; Summers, Peter A; Hall, Philip
2017-10-01
Management of Waste Electrical and Electronic Equipment (WEEE) is a vital part of solid waste management, but some difficult issues still require attention. This paper investigates the potential of applying the Internet of Things (IoT) and Big Data as solutions to WEEE management problems. The massive data generated during the production, consumption and disposal of Electrical and Electronic Equipment (EEE) fit the characteristics of Big Data. Through the use of state-of-the-art communication technologies, the IoT derives the WEEE "Big Data" from the life cycle of EEE, and Big Data technologies process the WEEE "Big Data" to support decision making in WEEE management. A framework for implementing the IoT and Big Data technologies is proposed, and its multiple layers are illustrated. Case studies with potential application scenarios of the framework are presented and discussed. As an unprecedented exploration, the combined application of the IoT and Big Data technologies in WEEE management brings a series of opportunities as well as new challenges. This study provides insights and visions for stakeholders in solving WEEE management problems in the context of IoT and Big Data. Copyright © 2017 Elsevier Ltd. All rights reserved.
Little ice bodies, huge ice lands, and the up-going of the big water body
NASA Astrophysics Data System (ADS)
Ultee, E.; Bassis, J. N.
2017-12-01
Ice moving out of the huge ice lands causes the big water body to go up. That can cause bad things to happen in places close to the big water body - the land might even disappear! If that happens, people living close to the big water body might lose their homes. Knowing how much ice will come out of the huge ice lands, and when, can help the world plan for the up-going of the big water body. We study the huge ice land closest to us. All around the edge of that huge ice land, there are smaller ice bodies that control how much ice makes it into the big water body. Most ways of studying the huge ice land with computers struggle to tell the computer about those little ice bodies, but we have found a new way. We will talk about our way of studying little ice bodies and how their moving brings about up-going of the big water.
Santamaría, Eva; Estévez, Javier Alejandro; Riba, Jordi; Izquierdo, Iñaki; Valle, Marta
2017-01-01
To optimise a pharmacokinetic (PK) study design of rupatadine for 2-5 year olds by using a population PK model developed with data from a study in 6-11 year olds. The design optimisation was driven by the need to avoid children's discomfort in the study. PK data from 6-11 year olds with allergic rhinitis available from a previous study were used to construct a population PK model which we used in simulations to assess the dose to administer in a study in 2-5 year olds. In addition, an optimal design approach was used to determine the most appropriate number of sampling groups, sampling days, total samples and sampling times. A two-compartmental model with first-order absorption and elimination, with clearance dependent on weight adequately described the PK of rupatadine for 6-11 year olds. The dose selected for a trial in 2-5 year olds was 2.5 mg, as it provided a Cmax below the 3 ng/ml threshold. The optimal study design consisted of four groups of children (10 children each), a maximum sampling window of 2 hours in two clinic visits for drawing three samples on day 14 and one on day 28 coinciding with the final examination of the study. A PK study design was optimised in order to prioritise avoidance of discomfort for enrolled 2-5 year olds by taking only four blood samples from each child and minimising the length of hospital stays.
Current applications of big data in obstetric anesthesiology.
Klumpner, Thomas T; Bauer, Melissa E; Kheterpal, Sachin
2017-06-01
The narrative review aims to highlight several recently published 'big data' studies pertinent to the field of obstetric anesthesiology. Big data has been used to study rare outcomes, to identify trends within the healthcare system, to identify variations in practice patterns, and to highlight potential inequalities in obstetric anesthesia care. Big data studies have helped define the risk of rare complications of obstetric anesthesia, such as the risk of neuraxial hematoma in thrombocytopenic parturients. Also, large national databases have been used to better understand trends in anesthesia-related adverse events during cesarean delivery as well as outline potential racial/ethnic disparities in obstetric anesthesia care. Finally, real-time analysis of patient data across a number of disparate health information systems through the use of sophisticated clinical decision support and surveillance systems is one promising application of big data technology on the labor and delivery unit. 'Big data' research has important implications for obstetric anesthesia care and warrants continued study. Real-time electronic surveillance is a potentially useful application of big data technology on the labor and delivery unit.
Big Data Management in US Hospitals: Benefits and Barriers.
Schaeffer, Chad; Booton, Lawrence; Halleck, Jamey; Studeny, Jana; Coustasse, Alberto
Big data has been considered as an effective tool for reducing health care costs by eliminating adverse events and reducing readmissions to hospitals. The purposes of this study were to examine the emergence of big data in the US health care industry, to evaluate a hospital's ability to effectively use complex information, and to predict the potential benefits that hospitals might realize if they are successful in using big data. The findings of the research suggest that there were a number of benefits expected by hospitals when using big data analytics, including cost savings and business intelligence. By using big data, many hospitals have recognized that there have been challenges, including lack of experience and cost of developing the analytics. Many hospitals will need to invest in the acquiring of adequate personnel with experience in big data analytics and data integration. The findings of this study suggest that the adoption, implementation, and utilization of big data technology will have a profound positive effect among health care providers.
NASA Astrophysics Data System (ADS)
Li, Jing; Guo, Tiantian; Gao, Shuai; Jiang, Lin; Zhao, Zhijun; Wang, Yalin
2018-03-01
Big recycled aggregate self-compacting concrete is a new type of recycled concrete, which has the advantages of low hydration heat and green environmental protection, but its bending behaviour can be affected by different replacement rates. Therefore, in this paper, the research status of big recycled aggregate self-compacting concrete is systematically introduced, and the effects of different replacement rates of big recycled aggregate on the failure mode, crack distribution and bending strength of beams were studied through bending tests of four big recycled aggregate self-compacting concrete beams. The results show that: the crack distribution of the beam is affected by the replacement rate; the failure modes of big recycled aggregate beams are the same as those of ordinary concrete; the plane section assumption is applicable to big recycled aggregate self-compacting concrete beams; and the higher the replacement rate, the lower the bending strength of big recycled aggregate self-compacting concrete beams.
ERIC Educational Resources Information Center
Herrera, Carla; Kauh, Tina J.; Cooney, Siobhan M.; Grossman, Jean Baldwin; McMaken, Jennifer
2008-01-01
High schools have recently become a popular source of mentors for school-based mentoring (SBM) programs. The high school Bigs program of Big Brothers Big Sisters of America, for example, currently involves close to 50,000 high-school-aged mentors across the country. While the use of these young mentors has several potential advantages, their age…
Aubin, P; Le Brun, G; Moldovan, F; Villette, J M; Créminon, C; Dumas, J; Homyrda, L; Soliman, H; Azizi, M; Fiet, J
1997-01-01
A sandwich-type enzyme immunoassay has been developed for measuring human big endothelin-1 (big ET-1) in human plasma and supernatant fluids from human cell cultures. Big ET-1 is the precursor of endothelin 1 (ET-1), the most potent vasoconstrictor known. A rabbit antibody raised against the big ET-1 COOH-terminus fragment was used as an immobilized antibody (anti-P16). The Fab' fragment of a monoclonal antibody (1B3) raised against the ET-1 loop fragment was used as the enzyme-labeled antibody, after being coupled to acetylcholinesterase. The lowest detectable value in the assay was 1.2 pg/mL (0.12 pg/well). The assay was highly specific for big ET-1, demonstrating no cross-reactivity with ET-1, <0.4% cross-reactivity with big endothelin-2 (big ET-2), and <0.1% with big endothelin-3 (big ET-3). We used this assay to evaluate the effect of two different postural positions (supine and standing) on plasma big ET-1 concentrations in 11 male and 11 female healthy subjects. Data analysis revealed that neither sex nor body position influenced plasma big ET-1 concentrations. This assay should thus permit the detection of possible variations in plasma concentrations of big ET-1 in certain pathologies and, in association with ET-1 assay, make possible in vitro study of endothelin-converting enzyme activity in cell models. Such studies could clarify the physiological and clinical roles of this family of peptides.
Optimised in vitro applicable loads for the simulation of lateral bending in the lumbar spine.
Dreischarf, Marcel; Rohlmann, Antonius; Bergmann, Georg; Zander, Thomas
2012-07-01
In in vitro studies of the lumbar spine simplified loading modes (compressive follower force, pure moment) are usually employed to simulate the standard load cases flexion-extension, axial rotation and lateral bending of the upper body. However, the magnitudes of these loads vary widely in the literature. Thus the results of current studies may lead to unrealistic values and are hardly comparable. It is still unknown which load magnitudes lead to a realistic simulation of maximum lateral bending. A validated finite element model of the lumbar spine was used in an optimisation study to determine which magnitudes of the compressive follower force and bending moment deliver results that fit best with averaged in vivo data. The best agreement with averaged in vivo measured data was found for a compressive follower force of 700 N and a lateral bending moment of 7.8 Nm. These results show that loading modes that differ strongly from the optimised one may not realistically simulate maximum lateral bending. The simplified but in vitro applicable loading cannot perfectly mimic the in vivo situation. However, the optimised magnitudes are those which agree best with averaged in vivo measured data. Its consequent application would lead to a better comparability of different investigations. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.
Optimisation of warpage on thin shell part by using particle swarm optimisation (PSO)
NASA Astrophysics Data System (ADS)
Norshahira, R.; Shayfull, Z.; Nasir, S. M.; Saad, S. M. Sazli; Fathullah, M.
2017-09-01
As products nowadays move towards thinner designs, the production of plastic parts faces many difficulties. This is because the possibility of defects increases as the wall thickness gets thinner, and demand for techniques that reduce these defects is therefore increasing. These defects arise from several factors in the injection moulding process. In this study, Moldflow software was used to simulate the injection moulding process, while RSM was used to produce the mathematical model serving as the fitness function input for the Matlab software. The particle swarm optimisation (PSO) technique was then used to optimise the processing conditions to reduce the shrinkage and warpage of the plastic part. The results show a warpage reduction of 17.60% in the x direction, 18.15% in the y direction and 10.25% in the z direction, demonstrating the reliability of this artificial intelligence method in minimising product warpage.
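A minimal Python sketch of the PSO step is given below. The quadratic warpage model and the parameter bounds are invented placeholders standing in for the RSM model fitted from Moldflow simulations, and the PSO itself is a generic textbook implementation rather than the study's exact settings.

```python
import numpy as np

# Placeholder RSM-style warpage model in two process variables
# (e.g. melt temperature, packing pressure), standing in for the fitted polynomial.
def warpage(x):
    t, p = x[..., 0], x[..., 1]
    return 0.8 + 0.002*(t - 230)**2 + 0.0005*(p - 80)**2 - 1e-5*(t - 230)*(p - 80)

def pso(objective, bounds, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain global-best particle swarm optimisation."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    pos = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), objective(pos)
    gbest = pbest[np.argmin(pbest_val)]
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w*vel + c1*r1*(pbest - pos) + c2*r2*(gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        val = objective(pos)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        gbest = pbest[np.argmin(pbest_val)]
    return gbest, objective(gbest[None, :])[0]

best_x, best_warp = pso(warpage, bounds=[(200, 260), (50, 110)])
print("optimised process settings:", best_x, "predicted warpage:", best_warp)
```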
Hill, Holger
2015-01-01
In a case study, Schaffert and Mattes reported the application of acoustic feedback (sonification) to optimise the time course of boat acceleration. The authors attributed an increased boat speed in the feedback condition to an optimised boat acceleration (mainly during the recovery phase). However, in rowing it is biomechanically impossible to increase the boat speed significantly by reducing the fluctuations in boat acceleration during the rowing cycle. To assess such a potentially small optimising effect experimentally, the confounding variables must be controlled very accurately (that is, the propulsive forces in particular must be kept constant between experimental conditions, or the differences in propulsive forces between conditions must be much smaller than the effects on boat speed resulting from an optimised movement pattern). However, this was not controlled adequately by the authors. Instead, the presented boat acceleration data show that the increased boat speed under acoustic feedback was due to increased propulsive forces.
Optimisation of composite bone plates for ulnar transverse fractures.
Chakladar, N D; Harper, L T; Parsons, A J
2016-04-01
Metallic bone plates are commonly used for arm bone fractures where conservative treatment (casts) cannot provide adequate support and compression at the fracture site. These plates, made of stainless steel or titanium alloys, tend to shield stress transfer at the fracture site and delay the bone healing rate. This study investigates the feasibility of adopting advanced composite materials to overcome stress shielding effects by optimising the geometry and mechanical properties of the plate to match more closely to the bone. An ulnar transverse fracture is characterised and finite element techniques are employed to investigate the feasibility of a composite-plated fractured bone construct over a stainless steel equivalent. Numerical models of intact and fractured bones are analysed and the mechanical behaviour is found to agree with experimental data. The mechanical properties are tailored to produce an optimised composite plate, offering a 25% reduction in length and a 70% reduction in mass. The optimised design may help to reduce stress shielding and increase bone healing rates. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kharbouch, Yassine; Mimet, Abdelaziz; El Ganaoui, Mohammed; Ouhsaine, Lahoucine
2018-07-01
This study investigates the thermal energy potential and economic feasibility of integrating phase change materials (PCMs) into an air-conditioned family household, considering different climate zones in Morocco. A simulation-based optimisation was carried out to define the optimal design of a PCM-enhanced household envelope in terms of thermal energy effectiveness and cost-effectiveness of predefined candidate solutions. The optimisation methodology is based on coupling EnergyPlus® as a dynamic simulation tool with GenOpt® as an optimisation tool. Based on the optimum design strategies obtained, a thermal energy and economic analysis is carried out to investigate the feasibility of PCM integration in Moroccan constructions. The results show that the PCM-integrated household envelope minimises the cooling/heating thermal energy demand compared with a reference household without PCM. For the cost-effectiveness optimisation, however, it was found that economic feasibility is still insufficient under current PCM market conditions. The optimal design parameter results are also analysed.
Optimisation of sensing time and transmission time in cognitive radio-based smart grid networks
NASA Astrophysics Data System (ADS)
Yang, Chao; Fu, Yuli; Yang, Junjie
2016-07-01
Cognitive radio (CR)-based smart grid (SG) networks have been widely recognised as emerging communication paradigms in power grids. However, sufficient spectrum resources and reliability are two major challenges for real-time applications in CR-based SG networks. In this article, we study the traffic data collection problem. Based on the two-stage power pricing model, the power price is associated with the efficiently received traffic data in a meter data management system (MDMS). In order to minimise the system power price, a wideband hybrid access strategy is proposed and analysed, to share the spectrum between the SG nodes and CR networks. The sensing time and transmission time are jointly optimised, while both the interference to primary users and the spectrum opportunity loss of secondary users are considered. Two algorithms are proposed to solve the joint optimisation problem. Simulation results show that the proposed joint optimisation algorithms outperform the fixed-parameter (sensing time and transmission time) algorithms, and the power cost is reduced effectively.
Discovery and optimisation studies of antimalarial phenotypic hits
Mital, Alka; Murugesan, Dinakaran; Kaiser, Marcel; Yeates, Clive; Gilbert, Ian H.
2015-01-01
There is an urgent need for the development of new antimalarial compounds. As a result of a phenotypic screen, several compounds with potent activity against the parasite Plasmodium falciparum were identified. Characterization of these compounds is discussed, along with approaches to optimise the physicochemical properties. The in vitro antimalarial activity of these compounds against P. falciparum K1 had EC50 values in the range of 0.09–29 μM, and generally good selectivity (typically >100-fold) compared to a mammalian cell line (L6). One example showed no significant activity against a rodent model of malaria, and more work is needed to optimise these compounds. PMID:26408453
Design and optimisation of wheel-rail profiles for adhesion improvement
NASA Astrophysics Data System (ADS)
Liu, B.; Mei, T. X.; Bruni, S.
2016-03-01
This paper describes a study for the optimisation of the wheel profile in the wheel-rail system to increase the overall level of adhesion available at the contact interface, in particular to investigate how the wheel and rail profile combination may be designed to ensure the improved delivery of tractive/braking forces even in poor contact conditions. The research focuses on the geometric combination of both wheel and rail profiles to establish how the contact interface may be optimised to increase the adhesion level, but also to investigate how the change in the property of the contact mechanics at the wheel-rail interface may also lead to changes in the vehicle dynamic behaviour.
Improving Vector Evaluated Particle Swarm Optimisation Using Multiple Nondominated Leaders
Lim, Kian Sheng; Buyamin, Salinda; Ahmad, Anita; Shapiai, Mohd Ibrahim; Naim, Faradila; Mubin, Marizan; Kim, Dong Hwa
2014-01-01
The vector evaluated particle swarm optimisation (VEPSO) algorithm was previously improved by incorporating nondominated solutions for solving multiobjective optimisation problems. However, the obtained solutions did not converge close to the Pareto front and also did not distribute evenly over the Pareto front. Therefore, in this study, the concept of multiple nondominated leaders is incorporated to further improve the VEPSO algorithm. Hence, multiple nondominated solutions that are best at a respective objective function are used to guide particles in finding optimal solutions. The improved VEPSO is measured by the number of nondominated solutions found, generational distance, spread, and hypervolume. The results from the conducted experiments show that the proposed VEPSO significantly improved the existing VEPSO algorithms. PMID:24883386
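One of the quality measures named above, the generational distance, can be computed as in the short Python sketch below. Conventions for this metric vary in the literature; the form used here (root-mean-style aggregation divided by the front size) and the toy reference front are assumptions for illustration.

```python
import numpy as np

def generational_distance(front, reference, p=2):
    """Generational distance of an obtained front to a reference Pareto front.

    Uses the common convention GD = (sum_i d_i^p)^(1/p) / n, where d_i is the
    Euclidean distance from each obtained solution to its nearest reference point.
    """
    front = np.asarray(front, dtype=float)
    reference = np.asarray(reference, dtype=float)
    d = np.min(np.linalg.norm(front[:, None, :] - reference[None, :, :], axis=2), axis=1)
    return (np.sum(d**p) ** (1.0 / p)) / len(front)

# Toy example on a convex bi-objective front f2 = 1 - sqrt(f1)
f1 = np.linspace(0, 1, 101)
ref = np.column_stack([f1, 1 - np.sqrt(f1)])
obtained = ref[::10] + 0.02          # solutions slightly off the true front
print("GD =", generational_distance(obtained, ref))
```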
Potentiality of Big Data in the Medical Sector: Focus on How to Reshape the Healthcare System
Jee, Kyoungyoung
2013-01-01
Objectives: The main purpose of this study was to explore whether the use of big data can effectively reduce healthcare concerns, such as the selection of appropriate treatment paths, improvement of healthcare systems, and so on. Methods: By providing an overview of the current state of big data applications in the healthcare environment, this study has explored the current challenges that governments and healthcare stakeholders are facing as well as the opportunities presented by big data. Results: Insightful consideration of the current state of big data applications could help follower countries or healthcare stakeholders in their plans for deploying big data to resolve healthcare issues. The advantage for such follower countries and healthcare stakeholders is that they can possibly leapfrog the leaders' big data applications by conducting a careful analysis of the leaders' successes and failures and exploiting the expected future opportunities in mobile services. Conclusions: First, all big data projects undertaken by leading countries' governments and healthcare industries have similar general common goals. Second, for medical data that cuts across departmental boundaries, a top-down approach is needed to effectively manage and integrate big data. Third, real-time analysis of in-motion big data should be carried out, while protecting privacy and security. PMID:23882412
Potentiality of big data in the medical sector: focus on how to reshape the healthcare system.
Jee, Kyoungyoung; Kim, Gang-Hoon
2013-06-01
The main purpose of this study was to explore whether the use of big data can effectively reduce healthcare concerns, such as the selection of appropriate treatment paths, improvement of healthcare systems, and so on. By providing an overview of the current state of big data applications in the healthcare environment, this study has explored the current challenges that governments and healthcare stakeholders are facing as well as the opportunities presented by big data. Insightful consideration of the current state of big data applications could help follower countries or healthcare stakeholders in their plans for deploying big data to resolve healthcare issues. The advantage for such follower countries and healthcare stakeholders is that they can possibly leapfrog the leaders' big data applications by conducting a careful analysis of the leaders' successes and failures and exploiting the expected future opportunities in mobile services. First, all big data projects undertaken by leading countries' governments and healthcare industries have similar general common goals. Second, for medical data that cuts across departmental boundaries, a top-down approach is needed to effectively manage and integrate big data. Third, real-time analysis of in-motion big data should be carried out, while protecting privacy and security.
Optimising fuel treatments over time and space
Woodam Chung; Greg Jones; Kurt Krueger; Jody Bramel; Marco Contreras
2013-01-01
Fuel treatments have been widely used as a tool to reduce catastrophic wildland fire risks in many forests around the world. However, it is a challenging task for forest managers to prioritise where, when and how to implement fuel treatments across a large forest landscape. In this study, an optimisation model was developed for long-term fuel management decisions at a...
NASA Astrophysics Data System (ADS)
Huang, Guoqin; Zhang, Meiqin; Huang, Hui; Guo, Hua; Xu, Xipeng
2018-04-01
Circular sawing is an important method for the processing of natural stone. The ability to predict sawing power is important in the optimisation, monitoring and control of the sawing process. In this paper, a predictive model (PFD) of sawing power, which is based on the tangential force distribution at the sawing contact zone, was proposed, experimentally validated and modified. With regard to the influence of sawing speed on tangential force distribution, the modified PFD (MPFD) performed with high predictive accuracy across a wide range of sawing parameters, including sawing speed. The mean maximum absolute error rate was within 6.78%, and the maximum absolute error rate was within 11.7%. The practicability of predicting sawing power by the MPFD with few initial experimental samples was proved in case studies. On the premise of high sample measurement accuracy, only two samples are required for a fixed sawing speed. The feasibility of applying the MPFD to optimise sawing parameters while lowering the energy consumption of the sawing system was validated. The case study shows that energy use was reduced 28% by optimising the sawing parameters. The MPFD model can be used to predict sawing power, optimise sawing parameters and control energy.
A New Computational Technique for the Generation of Optimised Aircraft Trajectories
NASA Astrophysics Data System (ADS)
Chircop, Kenneth; Gardi, Alessandro; Zammit-Mangion, David; Sabatini, Roberto
2017-12-01
A new computational technique based on Pseudospectral Discretisation (PSD) and adaptive bisection ɛ-constraint methods is proposed to solve multi-objective aircraft trajectory optimisation problems formulated as nonlinear optimal control problems. This technique is applicable to a variety of next-generation avionics and Air Traffic Management (ATM) Decision Support Systems (DSS) for strategic and tactical replanning operations. These include the future Flight Management Systems (FMS) and the 4-Dimensional Trajectory (4DT) planning and intent negotiation/validation tools envisaged by SESAR and NextGen for a global implementation. In particular, after describing the PSD method, the adaptive bisection ɛ-constraint method is presented to allow an efficient solution of problems in which two or multiple performance indices are to be minimized simultaneously. Initial simulation case studies were performed adopting suitable aircraft dynamics models and addressing a classical vertical trajectory optimisation problem with two objectives simultaneously. Subsequently, a more advanced 4DT simulation case study is presented with a focus on representative ATM optimisation objectives in the Terminal Manoeuvring Area (TMA). The simulation results are analysed in-depth and corroborated by flight performance analysis, supporting the validity of the proposed computational techniques.
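A minimal Python sketch of the epsilon-constraint scalarisation with bisection over epsilon is shown below. The analytic bi-objective toy problem and the use of SLSQP are assumptions for illustration; the paper's adaptive bisection epsilon-constraint method operates on pseudospectrally discretised optimal control problems and is considerably more elaborate.

```python
import numpy as np
from scipy.optimize import minimize

# Toy bi-objective problem standing in for two trajectory performance indices:
#   f1(x) = x^2  (e.g. fuel),  f2(x) = (x - 2)^2  (e.g. time).
f1 = lambda x: float(x[0]**2)
f2 = lambda x: float((x[0] - 2.0)**2)

def eps_constraint_point(eps):
    """Minimise f1 subject to f2 <= eps (the epsilon-constraint scalarisation)."""
    res = minimize(f1, x0=[2.0], method="SLSQP",
                   constraints=[{"type": "ineq", "fun": lambda x: eps - f2(x)}])
    return res.x[0], f1(res.x), f2(res.x)

# Bisection over eps to find, e.g., the Pareto point where the two objectives balance.
lo, hi = 0.0, 4.0                        # bounds on attainable values of f2
for _ in range(40):
    eps = 0.5 * (lo + hi)
    _, v1, v2 = eps_constraint_point(eps)
    if v1 > v2:                          # f1 still too large: relax the f2 constraint
        lo = eps
    else:
        hi = eps
print("balanced Pareto point: f1 =", v1, " f2 =", v2)
```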
Boundary element based multiresolution shape optimisation in electrostatics
NASA Astrophysics Data System (ADS)
Bandara, Kosala; Cirak, Fehmi; Of, Günther; Steinbach, Olaf; Zapletal, Jan
2015-09-01
We consider the shape optimisation of high-voltage devices subject to electrostatic field equations by combining fast boundary elements with multiresolution subdivision surfaces. The geometry of the domain is described with subdivision surfaces and different resolutions of the same geometry are used for optimisation and analysis. The primal and adjoint problems are discretised with the boundary element method using a sufficiently fine control mesh. For shape optimisation the geometry is updated starting from the coarsest control mesh with increasingly finer control meshes. The multiresolution approach effectively prevents the appearance of non-physical geometry oscillations in the optimised shapes. Moreover, there is no need for mesh regeneration or smoothing during the optimisation due to the absence of a volume mesh. We present several numerical experiments and one industrial application to demonstrate the robustness and versatility of the developed approach.
Tail mean and related robust solution concepts
NASA Astrophysics Data System (ADS)
Ogryczak, Włodzimierz
2014-01-01
Robust optimisation might be viewed as a multicriteria optimisation problem where objectives correspond to the scenarios, although their probabilities are unknown or imprecise. The simplest robust solution concept represents a conservative approach focused on optimisation of the worst-case scenario results. A softer concept allows one to optimise the tail mean, thus combining performances under multiple worst scenarios. We show that, when considering robust models allowing the probabilities to vary only within given intervals, the tail mean represents the robust solution only for upper-bounded probabilities. For arbitrary intervals of probabilities, the corresponding robust solution may be expressed by the optimisation of appropriately combined mean and tail mean criteria, thus remaining easily implementable with auxiliary linear inequalities. Moreover, we use the tail mean concept to develop linear-programming-implementable robust solution concepts related to risk-averse optimisation criteria.
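One standard way to express the tail mean with auxiliary linear inequalities, consistent with the abstract's description but not necessarily the author's exact notation, is the following (here y_i(x) is the outcome under scenario i with probability p_i, and beta is the tail size):

```latex
% Tail (worst-case) beta-mean of the scenario outcomes, maximised over decisions x,
% made LP-implementable through the auxiliary variables t and d_i:
\begin{align}
\max_{x,\, t,\, d_i}\quad & t - \frac{1}{\beta} \sum_{i=1}^{m} p_i \, d_i \\
\text{s.t.}\quad & d_i \ge t - y_i(x), \qquad d_i \ge 0, \qquad i = 1,\dots,m, \\
& x \in \mathcal{X}.
\end{align}
```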
Almén, Anja; Båth, Magnus
2016-06-01
The overall aim of the present work was to develop a conceptual framework for managing radiation dose in diagnostic radiology with the intention to support optimisation. An optimisation process was first derived. The framework for managing radiation dose, based on the derived optimisation process, was then outlined. The outset of the optimisation process is four stages: providing equipment, establishing methodology, performing examinations and ensuring quality. The optimisation process comprises a series of activities and actions at these stages. The current system of diagnostic reference levels is an activity in the last stage, ensuring quality. The system becomes a reactive activity only to a certain extent engaging the core activity in the radiology department, performing examinations. Three reference dose levels-possible, expected and established-were assigned to the three stages in the optimisation process, excluding ensuring quality. A reasonably achievable dose range is also derived, indicating an acceptable deviation from the established dose level. A reasonable radiation dose for a single patient is within this range. The suggested framework for managing radiation dose should be regarded as one part of the optimisation process. The optimisation process constitutes a variety of complementary activities, where managing radiation dose is only one part. This emphasises the need to take a holistic approach integrating the optimisation process in different clinical activities. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Jin, Chenxia; Li, Fachao; Tsang, Eric C. C.; Bulysheva, Larissa; Kataev, Mikhail Yu
2017-01-01
In many real industrial applications, the integration of raw data with a methodology can support economically sound decision-making. Furthermore, most of these tasks involve complex optimisation problems. Seeking better solutions is critical. As an intelligent search optimisation algorithm, the genetic algorithm (GA) is an important technique for complex system optimisation, but it has internal drawbacks such as low computational efficiency and prematurity. Improving the performance of GAs is a vital topic in academic and applied research. In this paper, a new real-coded crossover operator, called the compound arithmetic crossover operator (CAC), is proposed. CAC is used in conjunction with a uniform mutation operator to define a new genetic algorithm, CAC10-GA. This GA is compared with an existing genetic algorithm (AC10-GA) that comprises an arithmetic crossover operator and a uniform mutation operator. To judge the performance of CAC10-GA, two kinds of analysis are performed: first, the convergence of CAC10-GA is analysed using Markov chain theory; second, a pair-wise comparison is carried out between CAC10-GA and AC10-GA through two test problems available in the global optimisation literature. The overall comparative study shows that the CAC performs quite well and that CAC10-GA outperforms AC10-GA.
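The exact definition of the compound operator CAC is not given in the abstract, so the Python sketch below only illustrates the plain arithmetic crossover and uniform mutation it builds on, as used in the baseline AC10-GA; the bounds and chromosome length are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def arithmetic_crossover(p1, p2):
    """Plain arithmetic crossover for real-coded GAs: children are convex
    combinations of the two parents (the compound operator CAC builds on this)."""
    lam = rng.random()
    c1 = lam * p1 + (1.0 - lam) * p2
    c2 = (1.0 - lam) * p1 + lam * p2
    return c1, c2

def uniform_mutation(child, lower, upper, rate=0.1):
    """Replace each gene, with probability `rate`, by a uniform draw within its bounds."""
    mask = rng.random(child.shape) < rate
    return np.where(mask, rng.uniform(lower, upper, size=child.shape), child)

# Example on a 3-gene chromosome with assumed bounds
lower, upper = np.array([0.0, 0.0, 0.0]), np.array([10.0, 10.0, 10.0])
p1, p2 = rng.uniform(lower, upper), rng.uniform(lower, upper)
c1, c2 = arithmetic_crossover(p1, p2)
print(uniform_mutation(c1, lower, upper), uniform_mutation(c2, lower, upper))
```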
Advantages of Task-Specific Multi-Objective Optimisation in Evolutionary Robotics.
Trianni, Vito; López-Ibáñez, Manuel
2015-01-01
The application of multi-objective optimisation to evolutionary robotics is receiving increasing attention. A survey of the literature reveals the different possibilities it offers to improve the automatic design of efficient and adaptive robotic systems, and points to the successful demonstrations available for both task-specific and task-agnostic approaches (i.e., with or without reference to the specific design problem to be tackled). However, the advantages of multi-objective approaches over single-objective ones have not been clearly spelled out and experimentally demonstrated. This paper fills this gap for task-specific approaches: starting from well-known results in multi-objective optimisation, we discuss how to tackle commonly recognised problems in evolutionary robotics. In particular, we show that multi-objective optimisation (i) allows evolving a more varied set of behaviours by exploring multiple trade-offs of the objectives to optimise, (ii) supports the evolution of the desired behaviour through the introduction of objectives as proxies, (iii) avoids the premature convergence to local optima possibly introduced by multi-component fitness functions, and (iv) solves the bootstrap problem exploiting ancillary objectives to guide evolution in the early phases. We present an experimental demonstration of these benefits in three different case studies: maze navigation in a single robot domain, flocking in a swarm robotics context, and a strictly collaborative task in collective robotics.
Van Geit, Werner; Gevaert, Michael; Chindemi, Giuseppe; Rössert, Christian; Courcol, Jean-Denis; Muller, Eilif B; Schürmann, Felix; Segev, Idan; Markram, Henry
2016-01-01
At many scales in neuroscience, appropriate mathematical models take the form of complex dynamical systems. Parameterizing such models to conform to the multitude of available experimental constraints is a global non-linear optimisation problem with a complex fitness landscape, requiring numerical techniques to find suitable approximate solutions. Stochastic optimisation approaches, such as evolutionary algorithms, have been shown to be effective, but often the setting up of such optimisations and the choice of a specific search algorithm and its parameters is non-trivial, requiring domain-specific expertise. Here we describe BluePyOpt, a Python package targeted at the broad neuroscience community to simplify this task. BluePyOpt is an extensible framework for data-driven model parameter optimisation that wraps and standardizes several existing open-source tools. It simplifies the task of creating and sharing these optimisations, and the associated techniques and knowledge. This is achieved by abstracting the optimisation and evaluation tasks into various reusable and flexible discrete elements according to established best-practices. Further, BluePyOpt provides methods for setting up both small- and large-scale optimisations on a variety of platforms, ranging from laptops to Linux clusters and cloud-based compute infrastructures. The versatility of the BluePyOpt framework is demonstrated by working through three representative neuroscience specific use cases.
ERIC Educational Resources Information Center
Jucovy, Linda; Herrera, Carla
2009-01-01
This issue of "Public/Private Ventures (P/PV) In Brief" is based on "High School Students as Mentors," a report that examined the efficacy of high school mentors using data from P/PV's large-scale random assignment impact study of Big Brothers Big Sisters school-based mentoring programs. The brief presents an overview of the findings, which…
Sampling design optimisation for rainfall prediction using a non-stationary geostatistical model
NASA Astrophysics Data System (ADS)
Wadoux, Alexandre M. J.-C.; Brus, Dick J.; Rico-Ramirez, Miguel A.; Heuvelink, Gerard B. M.
2017-09-01
The accuracy of spatial predictions of rainfall by merging rain-gauge and radar data is partly determined by the sampling design of the rain-gauge network. Optimising the locations of the rain-gauges may increase the accuracy of the predictions. Existing spatial sampling design optimisation methods are based on minimisation of the spatially averaged prediction error variance under the assumption of intrinsic stationarity. Over the past years, substantial progress has been made to deal with non-stationary spatial processes in kriging. Various well-documented geostatistical models relax the assumption of stationarity in the mean, while recent studies show the importance of considering non-stationarity in the variance for environmental processes occurring in complex landscapes. We optimised the sampling locations of rain-gauges using an extension of the Kriging with External Drift (KED) model for prediction of rainfall fields. The model incorporates both non-stationarity in the mean and in the variance, which are modelled as functions of external covariates such as radar imagery, distance to radar station and radar beam blockage. Spatial predictions are made repeatedly over time, each time recalibrating the model. The space-time averaged KED variance was minimised by Spatial Simulated Annealing (SSA). The methodology was tested using a case study predicting daily rainfall in the north of England for a one-year period. Results show that (i) the proposed non-stationary variance model outperforms the stationary variance model, and (ii) a small but significant decrease of the rainfall prediction error variance is obtained with the optimised rain-gauge network. In particular, it pays off to place rain-gauges at locations where the radar imagery is inaccurate, while keeping the distribution over the study area sufficiently uniform.
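A minimal Python sketch of spatial simulated annealing for gauge placement is given below. The design criterion used here (mean distance from the prediction grid to the nearest gauge) is only a simple proxy for the space-time averaged KED prediction variance minimised in the study, and the unit-square study area and gauge count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.column_stack([g.ravel() for g in np.meshgrid(np.linspace(0, 1, 40),
                                                       np.linspace(0, 1, 40))])

def criterion(gauges):
    """Proxy design criterion: mean distance from the prediction grid to the nearest gauge.
    In the study this would be the space-time averaged KED prediction variance."""
    d = np.linalg.norm(grid[:, None, :] - gauges[None, :, :], axis=2)
    return d.min(axis=1).mean()

def spatial_simulated_annealing(gauges, n_iter=2000, t0=0.01, shift0=0.2):
    """Shift one gauge at a time; accept worse designs with decreasing probability."""
    best = current = gauges.copy()
    f_cur = f_best = criterion(current)
    for i in range(n_iter):
        temp = t0 * (1 - i / n_iter)          # linear cooling schedule
        shift = shift0 * (1 - i / n_iter)     # shrinking perturbation radius
        cand = current.copy()
        j = rng.integers(len(cand))
        cand[j] = np.clip(cand[j] + rng.normal(0, shift, 2), 0, 1)
        f_cand = criterion(cand)
        if f_cand < f_cur or rng.random() < np.exp(-(f_cand - f_cur) / max(temp, 1e-12)):
            current, f_cur = cand, f_cand
            if f_cur < f_best:
                best, f_best = current, f_cur
    return best, f_best

initial = rng.random((25, 2))                 # 25 gauges in a unit square (assumed)
optimised, value = spatial_simulated_annealing(initial)
print("criterion before:", criterion(initial), "after:", value)
```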
Big Data and Biomedical Informatics: A Challenging Opportunity
2014-01-01
Summary Big data are receiving an increasing attention in biomedicine and healthcare. It is therefore important to understand the reason why big data are assuming a crucial role for the biomedical informatics community. The capability of handling big data is becoming an enabler to carry out unprecedented research studies and to implement new models of healthcare delivery. Therefore, it is first necessary to deeply understand the four elements that constitute big data, namely Volume, Variety, Velocity, and Veracity, and their meaning in practice. Then, it is mandatory to understand where big data are present, and where they can be beneficially collected. There are research fields, such as translational bioinformatics, which need to rely on big data technologies to withstand the shock wave of data that is generated every day. Other areas, ranging from epidemiology to clinical care, can benefit from the exploitation of the large amounts of data that are nowadays available, from personal monitoring to primary care. However, building big data-enabled systems carries on relevant implications in terms of reproducibility of research studies and management of privacy and data access; proper actions should be taken to deal with these issues. An interesting consequence of the big data scenario is the availability of new software, methods, and tools, such as map-reduce, cloud computing, and concept drift machine learning algorithms, which will not only contribute to big data research, but may be beneficial in many biomedical informatics applications. The way forward with the big data opportunity will require properly applied engineering principles to design studies and applications, to avoid preconceptions or over-enthusiasms, to fully exploit the available technologies, and to improve data processing and data management regulations. PMID:24853034
Big data and biomedical informatics: a challenging opportunity.
Bellazzi, R
2014-05-22
Big data are receiving an increasing attention in biomedicine and healthcare. It is therefore important to understand the reason why big data are assuming a crucial role for the biomedical informatics community. The capability of handling big data is becoming an enabler to carry out unprecedented research studies and to implement new models of healthcare delivery. Therefore, it is first necessary to deeply understand the four elements that constitute big data, namely Volume, Variety, Velocity, and Veracity, and their meaning in practice. Then, it is mandatory to understand where big data are present, and where they can be beneficially collected. There are research fields, such as translational bioinformatics, which need to rely on big data technologies to withstand the shock wave of data that is generated every day. Other areas, ranging from epidemiology to clinical care, can benefit from the exploitation of the large amounts of data that are nowadays available, from personal monitoring to primary care. However, building big data-enabled systems carries on relevant implications in terms of reproducibility of research studies and management of privacy and data access; proper actions should be taken to deal with these issues. An interesting consequence of the big data scenario is the availability of new software, methods, and tools, such as map-reduce, cloud computing, and concept drift machine learning algorithms, which will not only contribute to big data research, but may be beneficial in many biomedical informatics applications. The way forward with the big data opportunity will require properly applied engineering principles to design studies and applications, to avoid preconceptions or over-enthusiasms, to fully exploit the available technologies, and to improve data processing and data management regulations.
76 FR 64341 - Big Sandy Pipeline, LLC; Notice of Cost and Revenue Study
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-18
DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. CP06-275-003] Big Sandy Pipeline, LLC; Notice of Cost and Revenue Study Take notice that on April 8, 2011, Big Sandy Pipeline, LLC filed its cost and revenue study in compliance with the Commission's November 15, 2006 Order Issuing...
ERIC Educational Resources Information Center
National Endowment for the Arts, 2009
2009-01-01
The Big Read evaluation included a series of 35 case studies designed to gather more in-depth information on the program's implementation and impact. The case studies gave readers a valuable first-hand look at The Big Read in context. Both formal and informal interviews, focus groups, attendance at a wide range of events--all showed how…
Linking of uniform random polygons in confined spaces
NASA Astrophysics Data System (ADS)
Arsuaga, J.; Blackstone, T.; Diao, Y.; Karadayi, E.; Saito, M.
2007-03-01
In this paper, we study the topological entanglement of uniform random polygons in a confined space. We derive the formula for the mean squared linking number of such polygons. For a fixed simple closed curve in the confined space, we rigorously show that the linking probability between this curve and a uniform random polygon of $n$ vertices is at least $1-O\big(\frac{1}{\sqrt{n}}\big)$. Our numerical study also indicates that the linking probability between two uniform random polygons (in a confined space), of $m$ and $n$ vertices respectively, is bounded below by $1-O\big(\frac{1}{\sqrt{mn}}\big)$. In particular, the linking probability between two uniform random polygons, both of $n$ vertices, is bounded below by $1-O\big(\frac{1}{n}\big)$.
A Big Data Analytics Methodology Program in the Health Sector
ERIC Educational Resources Information Center
Lawler, James; Joseph, Anthony; Howell-Barber, H.
2016-01-01
The benefits of Big Data Analytics are cited frequently in the literature. However, the difficulties of implementing Big Data Analytics can limit the number of organizational projects. In this study, the authors evaluate business, procedural and technical factors in the implementation of Big Data Analytics, applying a methodology program. Focusing…
Classical and quantum Big Brake cosmology for scalar field and tachyonic models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamenshchik, A. Yu.; Manti, S.
We study a relation between the cosmological singularities in classical and quantum theory, comparing the classical and quantum dynamics in some models possessing the Big Brake singularity: the model based on a scalar field and two models based on a tachyon-pseudo-tachyon field. It is shown that the effect of quantum avoidance is absent for the soft singularities of the Big Brake type while it is present for the Big Bang and Big Crunch singularities. Thus, there is some kind of a classical-quantum correspondence, because soft singularities are traversable in classical cosmology, while the strong Big Bang and Big Crunch singularities are not traversable.
Haering, Diane; Huchez, Aurore; Barbier, Franck; Holvoët, Patrice; Begon, Mickaël
2017-01-01
Introduction: Teaching acrobatic skills with a minimal amount of repetition is a major challenge for coaches. Biomechanical, statistical or computer simulation tools can help them identify the most determinant factors of performance. Release parameters, change in moment of inertia and segmental momentum transfers have been identified as predictors of acrobatic success. The purpose of the present study was to evaluate the relative contribution of these parameters to performance across expertise- or optimisation-based improvements. The counter movement forward in flight (CMFIF) was chosen for its intrinsic dichotomy between the accessibility of its attempt and the complexity of its mastery. Methods: Three repetitions of the CMFIF performed by eight novice and eight advanced female gymnasts were recorded using a motion capture system. Optimal aerial techniques that maximise rotation potential at regrasp were also computed. A 14-segment multibody model defined through the Rigid Body Dynamics Library was used to compute recorded and optimal kinematics and biomechanical parameters. A stepwise multiple linear regression was used to determine the relative contribution of these parameters in novice recorded, novice optimised, advanced recorded and advanced optimised trials. Finally, fixed effects of expertise and optimisation were tested through a mixed-effects analysis. Results and discussion: Variation in release state contributed to performance only in novice recorded trials. The contribution of moment of inertia to performance increased from novice recorded, to novice optimised, advanced recorded, and advanced optimised trials. The contribution of momentum transfer to the trunk during flight prevailed in all recorded trials. Although optimisation decreased this transfer contribution, momentum transfer to the arms appeared. Conclusion: Findings suggest that novices should be coached on both contact and aerial technique. Conversely, it was mainly improved aerial technique that helped advanced gymnasts increase their performance. For both, reduction of the moment of inertia should be a focus. The method proposed in this article could be generalized to any aerial skill learning investigation. PMID:28422954
Quality Attribute-Guided Evaluation of NoSQL Databases: A Case Study
2015-01-16
evaluations of NoSQL databases specifically, and big data systems in general, that have become apparent during our study. Keywords—NoSQL, distributed...technology, namely that of big data, software systems [1]. At the heart of big data systems are a collection of database technologies that are more...born organizations such as Google and Amazon [3][4], along with those of numerous other big data innovators, have created a variety of open source and
Multi-objective optimisation of aircraft flight trajectories in the ATM and avionics context
NASA Astrophysics Data System (ADS)
Gardi, Alessandro; Sabatini, Roberto; Ramasamy, Subramanian
2016-05-01
The continuous increase of air transport demand worldwide and the push for a more economically viable and environmentally sustainable aviation are driving significant evolutions of aircraft, airspace and airport systems design and operations. Although extensive research has been performed on the optimisation of aircraft trajectories and very efficient algorithms were widely adopted for the optimisation of vertical flight profiles, it is only in the last few years that higher levels of automation were proposed for integrated flight planning and re-routing functionalities of innovative Communication Navigation and Surveillance/Air Traffic Management (CNS/ATM) and Avionics (CNS+A) systems. In this context, the implementation of additional environmental targets and of multiple operational constraints introduces the need to efficiently deal with multiple objectives as part of the trajectory optimisation algorithm. This article provides a comprehensive review of Multi-Objective Trajectory Optimisation (MOTO) techniques for transport aircraft flight operations, with a special focus on the recent advances introduced in the CNS+A research context. In the first section, a brief introduction is given, together with an overview of the main international research initiatives where this topic has been studied, and the problem statement is provided. The second section introduces the mathematical formulation and the third section reviews the numerical solution techniques, including discretisation and optimisation methods for the specific problem formulated. The fourth section summarises the strategies to articulate the preferences and to select optimal trajectories when multiple conflicting objectives are introduced. The fifth section introduces a number of models defining the optimality criteria and constraints typically adopted in MOTO studies, including fuel consumption, air pollutant and noise emissions, operational costs, condensation trails, airspace and airport operations. A brief overview of atmospheric and weather modelling is also included. Key equations describing the optimality criteria are presented, with a focus on the latest advancements in the respective application areas. In the sixth section, a number of MOTO implementations in the CNS+A systems context are mentioned with relevant simulation case studies addressing different operational tasks. The final section draws some conclusions and outlines guidelines for future research on MOTO and associated CNS+A system implementations.
2014-10-02
hadoop / Bradicich, T. & Orci, S. (2012). Moore's Law of Big Data. National Instruments Instrumentation News. December 2012...accurate and meaningful conclusions from such a large amount of data is a growing problem, and the term “Big Data” describes this phenomenon. Big Data...is “Big Data”. 2. HISTORY OF BIG DATA The technology research firm International Data Corporation (IDC) recently performed a study on digital
Big data and clinicians: a review on the state of the science.
Wang, Weiqi; Krishnan, Eswar
2014-01-17
In the past few decades, medically related data collection saw a huge increase, referred to as big data. These huge datasets bring challenges in storage, processing, and analysis. In clinical medicine, big data is expected to play an important role in identifying causality of patient symptoms, in predicting hazards of disease incidence or reoccurrence, and in improving primary-care quality. The objective of this review was to provide an overview of the features of clinical big data, describe a few commonly employed computational algorithms, statistical methods, and software toolkits for data manipulation and analysis, and discuss the challenges and limitations in this realm. We conducted a literature review to identify studies on big data in medicine, especially clinical medicine. We used different combinations of keywords to search PubMed, Science Direct, Web of Knowledge, and Google Scholar for literature of interest from the past 10 years. This paper reviewed studies that analyzed clinical big data and discussed issues related to storage and analysis of this type of data. Big data is becoming a common feature of biological and clinical studies. Researchers who use clinical big data face multiple challenges, and the data itself has limitations. It is imperative that methodologies for data analysis keep pace with our ability to collect and store data.
Cahyaningrum, Fitrianna; Permadhi, Inge; Ansari, Muhammad Ridwan; Prafiantini, Erfi; Rachman, Purnawati Hustina; Agustina, Rina
2016-12-01
Diets with a specific omega-6/omega-3 fatty acid ratio have been reported to have favourable effects in controlling obesity in adults. However, the development of a local food-based diet that considers the ratio of these fatty acids to improve the nutritional status of overweight and obese children is lacking. Therefore, using linear programming, we developed an affordable optimised diet focusing on the ratio of omega-6/omega-3 fatty acid intake for obese children aged 12-23 months. A cross-sectional study was conducted in two subdistricts of East Jakarta involving 42 normal-weight and 29 overweight and obese children, grouped on the basis of their body mass index-for-age Z scores and selected through multistage random sampling. A 24-h recall was performed for 3 nonconsecutive days to assess the children's dietary intake levels and food patterns. We conducted group and structured interviews as well as market surveys to identify food availability, accessibility and affordability. Three types of affordable optimised 7-day diet meal plans were developed on the basis of breastfeeding status. The optimised diet plan fulfilled energy and macronutrient intake requirements within the acceptable macronutrient distribution range. The omega-6/omega-3 fatty acid ratio in the children was between 4 and 10. Moreover, the micronutrient intake level was within the range of the recommended daily allowance or estimated average recommendation and tolerable upper intake level. The optimisation model used in this study provides a mathematical solution for economical diet meal plans that approximate the nutrient requirements for overweight and obese children.
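The linear-programming idea behind such a diet plan can be sketched with scipy's linprog. The foods, prices, nutrient values and requirement levels below are invented for illustration and are not the study's food list or targets; note that a ratio constraint such as omega-6/omega-3 <= 6 can be written in linear form.

```python
import numpy as np
from scipy.optimize import linprog

# Columns: hypothetical local foods; rows of A give per-100 g energy (kcal),
# protein (g), omega-6 (g) and omega-3 (g). All numbers are illustrative only.
foods = ["rice", "tempeh", "fish", "vegetable oil"]
cost  = np.array([0.05, 0.10, 0.30, 0.08])           # price per 100 g
A     = np.array([[130, 190, 120, 880],              # energy
                  [2.5, 19, 22, 0],                   # protein
                  [0.1, 4, 0.2, 30],                  # omega-6
                  [0.0, 0.2, 1.5, 4]])                # omega-3

# Requirements: energy >= 900 kcal, protein >= 20 g, omega-6 - 6*omega-3 <= 0
# (i.e. an omega-6/omega-3 ratio of at most 6, expressed linearly).
A_ub = np.vstack([-A[0], -A[1], A[2] - 6 * A[3]])
b_ub = np.array([-900.0, -20.0, 0.0])

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 5)] * 4, method="highs")
print(dict(zip(foods, np.round(res.x, 2))))   # optimal servings in 100 g units
```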
Miyauchi, Yumi; Sakai, Satoshi; Maeda, Seiji; Shimojo, Nobutake; Watanabe, Shigeyuki; Honma, Satoshi; Kuga, Keisuke; Aonuma, Kazutaka; Miyauchi, Takashi
2012-10-15
Big endothelins (pro-endothelin; inactive-precursor) are converted to biologically active endothelins (ETs). Mammals and humans produce three ET family members: ET-1, ET-2 and ET-3, from three different genes. Although ET-1 is produced by vascular endothelial cells, these cells do not produce ET-3, which is produced by neuronal cells and organs such as the thyroid, salivary gland and the kidney. In patients with end-stage renal disease, abnormal vascular endothelial cell function and elevated plasma ET-1 and big ET-1 levels have been reported. It is unknown whether big ET-2 and big ET-3 plasma levels are altered in these patients. The purpose of the present study was to determine whether endogenous ET-1, ET-2, and ET-3 systems including big ETs are altered in patients with end-stage renal disease. We measured plasma levels of ET-1, ET-3 and big ET-1, big ET-2, and big ET-3 in patients on chronic hemodialysis (n=23) and age-matched healthy subjects (n=17). In patients on hemodialysis, plasma levels (measured just before hemodialysis) of both ET-1 and ET-3 and big ET-1, big ET-2, and big ET-3 were markedly elevated, and the increase was higher for big ETs (Big ET-1, 4-fold; big ET-2, 6-fold; big ET-3: 5-fold) than for ETs (ET-1, 1.7-fold; ET-3, 2-fold). In hemodialysis patients, plasma levels of the inactive precursors big ET-1, big ET-2, and big ET-3 levels are markedly increased, yet there is only a moderate increase in plasma levels of the active products, ET-1 and ET-3. This suggests that the activity of endothelin converting enzyme contributing to circulating levels of ET-1 and ET-3 may be decreased in patients on chronic hemodialysis. Copyright © 2012 Elsevier Inc. All rights reserved.
Van Geit, Werner; Gevaert, Michael; Chindemi, Giuseppe; Rössert, Christian; Courcol, Jean-Denis; Muller, Eilif B.; Schürmann, Felix; Segev, Idan; Markram, Henry
2016-01-01
At many scales in neuroscience, appropriate mathematical models take the form of complex dynamical systems. Parameterizing such models to conform to the multitude of available experimental constraints is a global non-linear optimisation problem with a complex fitness landscape, requiring numerical techniques to find suitable approximate solutions. Stochastic optimisation approaches, such as evolutionary algorithms, have been shown to be effective, but often the setting up of such optimisations and the choice of a specific search algorithm and its parameters is non-trivial, requiring domain-specific expertise. Here we describe BluePyOpt, a Python package targeted at the broad neuroscience community to simplify this task. BluePyOpt is an extensible framework for data-driven model parameter optimisation that wraps and standardizes several existing open-source tools. It simplifies the task of creating and sharing these optimisations, and the associated techniques and knowledge. This is achieved by abstracting the optimisation and evaluation tasks into various reusable and flexible discrete elements according to established best-practices. Further, BluePyOpt provides methods for setting up both small- and large-scale optimisations on a variety of platforms, ranging from laptops to Linux clusters and cloud-based compute infrastructures. The versatility of the BluePyOpt framework is demonstrated by working through three representative neuroscience specific use cases. PMID:27375471
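A generic illustration of the kind of evolutionary parameter search that tools such as BluePyOpt automate. This sketch deliberately does not use BluePyOpt's own API; the fitness function is a stand-in for running a neuron simulation and comparing it against experimental features.

```python
import numpy as np

def evaluate(params):
    """Stand-in fitness: distance of a model response from target features.
    In a real neuron-model optimisation this would run a simulation."""
    target = np.array([1.0, -0.5, 2.0])
    return float(np.sum((params - target) ** 2))

def evolve(n_params=3, pop_size=40, n_gen=50, sigma=0.3, seed=0):
    """Very simple (mu + lambda)-style evolutionary loop with Gaussian mutation."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-3, 3, size=(pop_size, n_params))
    for _ in range(n_gen):
        fitness = np.array([evaluate(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]   # truncation selection
        children = parents + rng.normal(scale=sigma, size=parents.shape)
        pop = np.vstack([parents, children])                  # elitist replacement
    return pop[np.argmin([evaluate(ind) for ind in pop])]

print(evolve())   # converges near the target parameter vector
```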
Big Data Analytics Methodology in the Financial Industry
ERIC Educational Resources Information Center
Lawler, James; Joseph, Anthony
2017-01-01
Firms in industry continue to be attracted by the benefits of Big Data Analytics. The benefits of Big Data Analytics projects may not be as evident as frequently indicated in the literature. The authors of the study evaluate factors in a customized methodology that may increase the benefits of Big Data Analytics projects. Evaluating firms in the…
Reservoir optimisation using El Niño information. Case study of Daule Peripa (Ecuador)
NASA Astrophysics Data System (ADS)
Gelati, Emiliano; Madsen, Henrik; Rosbjerg, Dan
2010-05-01
The optimisation of water resources systems requires the ability to produce runoff scenarios that are consistent with available climatic information. We approach stochastic runoff modelling with a Markov-modulated autoregressive model with exogenous input, which belongs to the class of Markov-switching models. The model assumes runoff parameterisation to be conditioned on a hidden climatic state following a Markov chain, whose state transition probabilities depend on climatic information. This approach allows stochastic modeling of non-stationary runoff, as runoff anomalies are described by a mixture of autoregressive models with exogenous input, each one corresponding to a climate state. We calibrate the model on the inflows of the Daule Peripa reservoir located in western Ecuador, where the occurrence of El Niño leads to anomalously heavy rainfall caused by positive sea surface temperature anomalies along the coast. El Niño - Southern Oscillation (ENSO) information is used to condition the runoff parameterisation. Inflow predictions are realistic, especially at the occurrence of El Niño events. The Daule Peripa reservoir serves a hydropower plant and a downstream water supply facility. Using historical ENSO records, synthetic monthly inflow scenarios are generated for the period 1950-2007. These scenarios are used as input to perform stochastic optimisation of the reservoir rule curves with a multi-objective Genetic Algorithm (MOGA). The optimised rule curves are assumed to be the reservoir base policy. ENSO standard indices are currently forecasted at monthly time scale with nine-month lead time. These forecasts are used to perform stochastic optimisation of reservoir releases at each monthly time step according to the following procedure: (i) nine-month inflow forecast scenarios are generated using ENSO forecasts; (ii) a MOGA is set up to optimise the upcoming nine monthly releases; (iii) the optimisation is carried out by simulating the releases on the inflow forecasts, and by applying the base policy on a subsequent synthetic inflow scenario in order to account for long-term costs; (iv) the optimised release for the first month is implemented; (v) the state of the system is updated and (i), (ii), (iii), and (iv) are iterated for the following time step. The results highlight the advantages of using a climate-driven stochastic model to produce inflow scenarios and forecasts for reservoir optimisation, showing potential improvements with respect to the current management. Dynamic programming was used to find the best possible release time series given the inflow observations, in order to benchmark any possible operational improvement.
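The receding-horizon release procedure described in steps (i)-(v) can be illustrated with a toy sketch. Here a plain random search over candidate release plans stands in for the MOGA and for the long-term base policy, and the reservoir model, benefit function and inflow scenario generator are invented for illustration only.

```python
import numpy as np

def simulate(storage0, inflows, releases, s_max=100.0):
    """Mass-balance simulation returning release benefit minus a spill penalty."""
    s, benefit = storage0, 0.0
    for q_in, r in zip(inflows, releases):
        s = s + q_in - r
        spill = max(s - s_max, 0.0)
        s = min(max(s, 0.0), s_max)
        benefit += np.sqrt(max(r, 0.0)) - 2.0 * spill   # concave value of release
    return benefit

def receding_horizon(storage0, scenario_gen, horizon=9, months=24, n_cand=200, seed=1):
    """Each month: sample candidate release plans, keep the best on average over
    forecast scenarios, implement only the first release, then roll forward."""
    rng = np.random.default_rng(seed)
    s, plan = storage0, []
    for t in range(months):
        scenarios = scenario_gen(t, horizon)            # (n_scen, horizon) inflows
        candidates = rng.uniform(0, 20, size=(n_cand, horizon))
        scores = [np.mean([simulate(s, sc, c) for sc in scenarios]) for c in candidates]
        best = candidates[int(np.argmax(scores))]
        r = best[0]                                     # implement first month only
        s = min(max(s + scenarios[:, 0].mean() - r, 0.0), 100.0)  # proxy for observed inflow
        plan.append(float(r))
    return plan

inflow_scenarios = lambda t, h: np.random.default_rng(t).gamma(2.0, 5.0, size=(20, h))
print(receding_horizon(60.0, inflow_scenarios)[:6])
```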
Recurrent personality dimensions in inclusive lexical studies: indications for a big six structure.
Saucier, Gerard
2009-10-01
Previous evidence for both the Big Five and the alternative six-factor model has been drawn from lexical studies with relatively narrow selections of attributes. This study examined factors from previous lexical studies using a wider selection of attributes in 7 languages (Chinese, English, Filipino, Greek, Hebrew, Spanish, and Turkish) and found 6 recurrent factors, each with common conceptual content across most of the studies. The previous narrow-selection-based six-factor model outperformed the Big Five in capturing the content of the 6 recurrent wideband factors. Adjective markers of the 6 recurrent wideband factors showed substantial incremental prediction of important criterion variables over and above the Big Five. Correspondence between wideband 6 and narrowband 6 factors indicate they are variants of a "Big Six" model that is more general across variable-selection procedures and may be more general across languages and populations.
The eastern states exposition: an exploration of Big E tourist expenditures
Robert S. Bristow; Heather Cantillon
2001-01-01
The purpose of this paper is to prepare a visitor economic expenditure study for the 1999 Eastern States Exposition, better known as the Big E. The study was executed as part of a class project in Recreation Geography offered in the Fall 1999 semester at Westfield State College. The students undertook an economic expenditure study at the Big E by studying tourism...
Formulation and optimisation of raft-forming chewable tablets containing H2 antagonist
Prajapati, Shailesh T; Mehta, Anant P; Modhia, Ishan P; Patel, Chhagan N
2012-01-01
Purpose: The purpose of this research work was to formulate raft-forming chewable tablets of an H2 antagonist (Famotidine) using a raft-forming agent along with an antacid- and gas-generating agent. Materials and Methods: Tablets were prepared by wet granulation and evaluated for raft strength, acid neutralisation capacity, weight variation, % drug content, thickness, hardness, friability and in vitro drug release. Various raft-forming agents were used in preliminary screening. A 2³ full-factorial design was used in the present study for optimisation. The amount of sodium alginate, amount of calcium carbonate and amount of sodium bicarbonate were selected as independent variables. Raft strength, acid neutralisation capacity and drug release at 30 min were selected as responses. Results: Tablets containing sodium alginate had the maximum raft strength compared with other raft-forming agents. Acid neutralisation capacity and in vitro drug release of all factorial batches were found to be satisfactory. The F5 batch was optimised based on maximum raft strength and good acid neutralisation capacity. A drug–excipient compatibility study showed no interaction between the drug and excipients. A stability study of the optimised formulation showed that the tablets were stable at accelerated environmental conditions. Conclusion: It was concluded that raft-forming chewable tablets prepared using an optimum amount of sodium alginate, calcium carbonate and sodium bicarbonate could be an efficient dosage form in the treatment of gastro-oesophageal reflux disease. PMID:23580933
Formulation and optimisation of raft-forming chewable tablets containing H2 antagonist.
Prajapati, Shailesh T; Mehta, Anant P; Modhia, Ishan P; Patel, Chhagan N
2012-10-01
The purpose of this research work was to formulate raft-forming chewable tablets of an H2 antagonist (Famotidine) using a raft-forming agent along with an antacid- and gas-generating agent. Tablets were prepared by wet granulation and evaluated for raft strength, acid neutralisation capacity, weight variation, % drug content, thickness, hardness, friability and in vitro drug release. Various raft-forming agents were used in preliminary screening. A 2³ full-factorial design was used in the present study for optimisation. The amount of sodium alginate, amount of calcium carbonate and amount of sodium bicarbonate were selected as independent variables. Raft strength, acid neutralisation capacity and drug release at 30 min were selected as responses. Tablets containing sodium alginate had the maximum raft strength compared with other raft-forming agents. Acid neutralisation capacity and in vitro drug release of all factorial batches were found to be satisfactory. The F5 batch was optimised based on maximum raft strength and good acid neutralisation capacity. A drug-excipient compatibility study showed no interaction between the drug and excipients. A stability study of the optimised formulation showed that the tablets were stable at accelerated environmental conditions. It was concluded that raft-forming chewable tablets prepared using an optimum amount of sodium alginate, calcium carbonate and sodium bicarbonate could be an efficient dosage form in the treatment of gastro-oesophageal reflux disease.
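The 2³ full-factorial analysis used in the two reports above can be sketched in a few lines: code the three factor levels as -1/+1, run the eight combinations, and estimate main effects and two-factor interactions by least squares. The response values below are invented, not the measured raft strengths.

```python
import numpy as np
from itertools import product

# Coded levels (-1/+1) for three factors: sodium alginate, calcium carbonate,
# sodium bicarbonate. The eight runs of a 2^3 full-factorial design:
design = np.array(list(product([-1, 1], repeat=3)), dtype=float)

# Hypothetical measured raft strength (g) for each run, in design order.
raft_strength = np.array([18, 25, 20, 29, 22, 31, 26, 38], dtype=float)

# Fit main effects and two-factor interactions by least squares.
X = np.column_stack([
    np.ones(8),
    design,                          # A, B, C main effects
    design[:, 0] * design[:, 1],     # AB interaction
    design[:, 0] * design[:, 2],     # AC interaction
    design[:, 1] * design[:, 2],     # BC interaction
])
coef, *_ = np.linalg.lstsq(X, raft_strength, rcond=None)
for name, c in zip(["mean", "A", "B", "C", "AB", "AC", "BC"], coef):
    print(f"{name:>4}: {c:+.2f}")
```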
Dalvadi, Hitesh; Patel, Nikita; Parmar, Komal
2017-05-01
The aim of the present investigation was to improve the dissolution rate of the poorly soluble drug Zotepine by means of a self-microemulsifying drug delivery system (SMEDDS). A ternary phase diagram with oil (oleic acid), surfactant (Tween 80) and co-surfactant (PEG 400) at the apices was used to identify the efficient self-microemulsifying region. A Box-Behnken design was implemented to study the influence of the independent variables. Principal Component Analysis was used for scrutinising critical variables. The liquid SMEDDS were characterised by macroscopic evaluation, % transmission, emulsification time and in vitro drug release studies. The optimised formulation OL1 was converted into solid SMEDDS (S-SMEDDS) by using Aerosil® 200 as an adsorbent in the ratio of 3:1. The S-SMEDDS was characterised by SEM, DSC, globule size (152.1 nm), zeta potential (-28.1 mV), % transmission (98.75%) and in vitro release (86.57%) at 30 min. The optimised solid SMEDDS formulation showed faster drug release compared to a conventional tablet of Zotepine.
Optimisation of nano-silica modified self-compacting high-Volume fly ash mortar
NASA Astrophysics Data System (ADS)
Achara, Bitrus Emmanuel; Mohammed, Bashar S.; Fadhil Nuruddin, Muhd
2017-05-01
The effects of nano-silica amount and superplasticizer (SP) dosage on the compressive strength, porosity and slump flow of high-volume fly ash self-consolidating mortar were investigated. A multiobjective optimisation technique using Design-Expert software was applied to obtain a solution based on a desirability function that simultaneously optimises the variables and the responses. A desirability value of 0.811 gave the optimised solution. The experimental and predicted results showed minimal errors in all the measured responses.
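The desirability-function approach behind this kind of multiobjective optimisation can be sketched directly: each response is mapped onto a [0, 1] desirability scale and the overall score is their geometric mean. The response values and limits below are illustrative, not the study's data.

```python
import numpy as np

def desirability_maximise(y, low, high):
    """Derringer-type desirability for a response to be maximised:
    0 below `low`, 1 above `high`, linear in between."""
    return np.clip((y - low) / (high - low), 0.0, 1.0)

def desirability_minimise(y, low, high):
    """Desirability for a response to be minimised."""
    return np.clip((high - y) / (high - low), 0.0, 1.0)

# Hypothetical responses for one candidate mix: strength (MPa), porosity (%),
# slump flow (mm). Overall desirability is the geometric mean of the three.
strength, porosity, slump = 52.0, 11.5, 265.0
d = np.array([
    desirability_maximise(strength, 40.0, 60.0),
    desirability_minimise(porosity, 8.0, 15.0),
    desirability_maximise(slump, 240.0, 280.0),
])
print(d, d.prod() ** (1.0 / len(d)))   # individual and overall desirability
```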
Evaluation of a High Throughput Starch Analysis Optimised for Wood
Bellasio, Chandra; Fini, Alessio; Ferrini, Francesco
2014-01-01
Starch is the most important long-term reserve in trees, and the analysis of starch is therefore a useful source of physiological information. Currently published protocols for wood starch analysis impose several limitations, such as long procedures and a neutralization step. The high-throughput standard protocols for starch analysis in food and feed represent a valuable alternative. However, they have not been optimised or tested with woody samples. These have particular chemical and structural characteristics, including the presence of interfering secondary metabolites, low reactivity of starch, and low starch content. In this study, a standard method for starch analysis used for food and feed (AOAC standard method 996.11) was optimised to improve precision and accuracy for the analysis of starch in wood. Key modifications were introduced in the digestion conditions and in the glucose assay. The optimised protocol was then evaluated through 430 starch analyses of standards at known starch content, matrix polysaccharides, and wood collected from three organs (roots, twigs, mature wood) of four species (coniferous and flowering plants). The optimised protocol proved to be remarkably precise and accurate (3%), suitable for high-throughput routine analysis (35 samples a day) of specimens with a starch content between 40 mg and 21 µg. Samples may include lignified organs of coniferous and flowering plants and non-lignified organs, such as leaves, fruits and rhizomes. PMID:24523863
Advantages of Task-Specific Multi-Objective Optimisation in Evolutionary Robotics
Trianni, Vito; López-Ibáñez, Manuel
2015-01-01
The application of multi-objective optimisation to evolutionary robotics is receiving increasing attention. A survey of the literature reveals the different possibilities it offers to improve the automatic design of efficient and adaptive robotic systems, and points to the successful demonstrations available for both task-specific and task-agnostic approaches (i.e., with or without reference to the specific design problem to be tackled). However, the advantages of multi-objective approaches over single-objective ones have not been clearly spelled out and experimentally demonstrated. This paper fills this gap for task-specific approaches: starting from well-known results in multi-objective optimisation, we discuss how to tackle commonly recognised problems in evolutionary robotics. In particular, we show that multi-objective optimisation (i) allows evolving a more varied set of behaviours by exploring multiple trade-offs of the objectives to optimise, (ii) supports the evolution of the desired behaviour through the introduction of objectives as proxies, (iii) avoids the premature convergence to local optima possibly introduced by multi-component fitness functions, and (iv) solves the bootstrap problem exploiting ancillary objectives to guide evolution in the early phases. We present an experimental demonstration of these benefits in three different case studies: maze navigation in a single robot domain, flocking in a swarm robotics context, and a strictly collaborative task in collective robotics. PMID:26295151
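One building block shared by such multi-objective approaches is the extraction of the Pareto (non-dominated) front from a set of candidate solutions. A minimal sketch with invented objective values, assuming both objectives are to be minimised:

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated rows of `points`, assuming every objective is
    to be minimised. O(n^2) but easy to follow."""
    n = points.shape[0]
    dominated = np.zeros(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(points[j] <= points[i]) and np.any(points[j] < points[i]):
                dominated[i] = True
                break
    return points[~dominated]

# Hypothetical objective values for candidate controllers: (tracking error, energy use)
objs = np.array([[0.9, 5.0], [0.5, 7.0], [0.7, 6.0],
                 [0.4, 9.0], [0.8, 5.5], [0.95, 7.5]])
print(pareto_front(objs))   # the last point is dominated and drops out
```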
Multiobjective optimisation of bogie suspension to boost speed on curves
NASA Astrophysics Data System (ADS)
Milad Mousavi-Bideleh, Seyed; Berbyuk, Viktor
2016-01-01
To improve safety and maximum admissible speed on different operational scenarios, multiobjective optimisation of bogie suspension components of a one-car railway vehicle model is considered. The vehicle model has 50 degrees of freedom and is developed in multibody dynamics software SIMPACK. Track shift force, running stability, and risk of derailment are selected as safety objective functions. The improved maximum admissible speeds of the vehicle on curves are determined based on the track plane accelerations up to 1.5 m/s2. To attenuate the number of design parameters for optimisation and improve the computational efficiency, a global sensitivity analysis is accomplished using the multiplicative dimensional reduction method (M-DRM). A multistep optimisation routine based on genetic algorithm (GA) and MATLAB/SIMPACK co-simulation is executed at three levels. The bogie conventional secondary and primary suspension components are chosen as the design parameters in the first two steps, respectively. In the last step semi-active suspension is in focus. The input electrical current to magnetorheological yaw dampers is optimised to guarantee an appropriate safety level. Semi-active controllers are also applied and the respective effects on bogie dynamics are explored. The safety Pareto optimised results are compared with those associated with in-service values. The global sensitivity analysis and multistep approach significantly reduced the number of design parameters and improved the computational efficiency of the optimisation. Furthermore, using the optimised values of design parameters give the possibility to run the vehicle up to 13% faster on curves while a satisfactory safety level is guaranteed. The results obtained can be used in Pareto optimisation and active bogie suspension design problems.
Ebersbach, Georg; Grust, Ute; Ebersbach, Almut; Wegner, Brigitte; Gandor, Florin; Kühn, Andrea A
2015-02-01
LSVT-BIG is an exercise programme for patients with Parkinson's disease (PD) comprising 16 one-hour sessions within 4 weeks. LSVT-BIG was compared with a 2-week short protocol (AOT-SP) consisting of 10 sessions with identical exercises in 42 patients with PD. The UPDRS-III score was reduced by -6.6 in LSVT-BIG and -5.7 in AOT-SP at follow-up after 16 weeks (p < 0.001). Measures of motor performance were equally improved by LSVT-BIG and AOT-SP, but high-intensity LSVT-BIG was more effective in obtaining patient-perceived benefit.
Xiaoli Sun; Wengang Li; Jian Li; Yuangang Zu; Chung-Yun Hse; Jiulong Xie; Xiuhua Zhao
2016-01-01
A microwave-assisted extraction (MAE) method using an ethanol and hexane solvent mixture was applied to extract peony (Paeonia suffruticosa Andr.) seed oil (PSO). The aim of the study was to optimise the extraction for both yield and energy consumption in mixed-solvent MAE. The highest oil yield (34.49%) and lowest unit energy consumption (14,125.4 J g⁻¹)...
Statistical optimisation of diclofenac sustained release pellets coated with polymethacrylic films.
Kramar, A; Turk, S; Vrecer, F
2003-04-30
The objective of the present study was to evaluate three formulation parameters for the application of polymethacrylic films from aqueous dispersions in order to obtain multiparticulate sustained release of diclofenac sodium. Film coating of pellet cores was performed in a laboratory fluid bed apparatus. The chosen independent variables, i.e. the concentration of plasticizer (triethyl citrate), methacrylate polymers ratio (Eudragit RS:Eudragit RL) and the quantity of coating dispersion were optimised with a three-factor, three-level Box-Behnken design. The chosen dependent variables were cumulative percentage values of diclofenac dissolved in 3, 4 and 6 h. Based on the experimental design, different diclofenac release profiles were obtained. Response surface plots were used to relate the dependent and the independent variables. The optimisation procedure generated an optimum of 40% release in 3 h. The levels of plasticizer concentration, quantity of coating dispersion and polymer to polymer ratio (Eudragit RS:Eudragit RL) were 25% w/w, 400 g and 3/1, respectively. The optimised formulation prepared according to computer-determined levels provided a release profile, which was close to the predicted values. We also studied thermal and surface characteristics of the polymethacrylic films to understand the influence of plasticizer concentration on the drug release from the pellets.
Big data science: A literature review of nursing research exemplars.
Westra, Bonnie L; Sylvia, Martha; Weinfurter, Elizabeth F; Pruinelli, Lisiane; Park, Jung In; Dodd, Dianna; Keenan, Gail M; Senk, Patricia; Richesson, Rachel L; Baukner, Vicki; Cruz, Christopher; Gao, Grace; Whittenburg, Luann; Delaney, Connie W
Big data and cutting-edge analytic methods in nursing research challenge nurse scientists to extend the data sources and analytic methods used for discovering and translating knowledge. The purpose of this study was to identify, analyze, and synthesize exemplars of big data nursing research applied to practice and disseminated in key nursing informatics, general biomedical informatics, and nursing research journals. A literature review of studies published between 2009 and 2015. There were 650 journal articles identified in 17 key nursing informatics, general biomedical informatics, and nursing research journals in the Web of Science database. After screening for inclusion and exclusion criteria, 17 studies published in 18 articles were identified as big data nursing research applied to practice. Nurses clearly are beginning to conduct big data research applied to practice. These studies represent multiple data sources and settings. Although numerous analytic methods were used, the fundamental issue remains to define the types of analyses consistent with big data analytic methods. There are needs to increase the visibility of big data and data science research conducted by nurse scientists, further examine the use of state of the science in data analytics, and continue to expand the availability and use of a variety of scientific, governmental, and industry data resources. A major implication of this literature review is whether nursing faculty and preparation of future scientists (PhD programs) are prepared for big data and data science. Copyright © 2016 Elsevier Inc. All rights reserved.
Analysis of some seismic expressions of Big Injun sandstone and its adjacent interval
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiangdong, Zou; Wilson, T.A.; Donaldson, A.C.
1991-08-01
The Big Injun sandstone is an important oil and gas reservoir in western West Virginia. The pre-Greenbrier unconformity has complicated correlations, and hydrocarbon explorationists commonly have misidentified the Big Injun in the absence of a regional stratigraphic study. Paleogeologic maps on this unconformity show the West Virginia dome, with the Price/Pocono units truncated, resulting in pinch-outs of different sandstones against the overlying Big Lime (Greenbrier Limestone). Drillers have named the first sandstone below the Big Lime as Big Injun, and miscorrelated the real Big Injun with Squaw, upper Weir, and even the Berea sandstone. In this report, an 8-mi (13-km) seismic section extending from Kanawha to Clay counties was interpreted. The study area is near the pinch-out of the Big Injun sandstone. A stratigraphic cross section was constructed from gamma-ray logs for comparison with the seismic interpretation. The modeling and interpretation of the seismic section recognized the relief on the unconformity and demonstrated the ability to determine facies changes as well. Both geophysical wireline and seismic data can be used for detailed stratigraphic analysis within the Granny Creek oil field of Clay and Roane counties.
Abu, Mary Ladidi; Nooh, Hisham Mohd; Oslan, Siti Nurbaya; Salleh, Abu Bakar
2017-11-10
Pichia guilliermondii was found capable of expressing the recombinant thermostable lipase without methanol under the control of methanol dependent alcohol oxidase 1 promoter (AOXp 1). In this study, statistical approaches were employed for the screening and optimisation of physical conditions for T1 lipase production in P. guilliermondii. The screening of six physical conditions by Plackett-Burman Design has identified pH, inoculum size and incubation time as exerting significant effects on lipase production. These three conditions were further optimised using, Box-Behnken Design of Response Surface Methodology, which predicted an optimum medium comprising pH 6, 24 h incubation time and 2% inoculum size. T1 lipase activity of 2.0 U/mL was produced with a biomass of OD 600 23.0. The process of using RSM for optimisation yielded a 3-fold increase of T1 lipase over medium before optimisation. Therefore, this result has proven that T1 lipase can be produced at a higher yield in P. guilliermondii.
NASA Astrophysics Data System (ADS)
Kies, Alexander
2018-02-01
To meet European decarbonisation targets by 2050, the electrification of the transport sector is mandatory. Most electric vehicles rely on lithium-ion batteries, because they have a higher energy/power density and longer life span compared to other practical batteries such as zinc-carbon batteries. Electric vehicles can thus provide energy storage to support the system integration of generation from highly variable renewable sources, such as wind and photovoltaics (PV). However, charging/discharging causes batteries to degrade progressively, with reduced capacity. In this study, we investigate the impact of the joint optimisation of arbitrage revenue and battery degradation of electric vehicle batteries in a simplified setting, where historical prices allow for market participation of battery electric vehicle owners. It is shown that the joint optimisation of both leads to stronger gains than the sum of the two separate optimisation strategies, and that including battery degradation in the model avoids states of charge close to the maximum at times. It can be concluded that degradation is an important aspect to consider in power system models that incorporate any kind of lithium-ion battery storage.
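A minimal sketch of how arbitrage revenue and a degradation penalty can be optimised jointly, formulated here as a linear programme in scipy. The prices, battery capacity, efficiency and per-kWh degradation cost are illustrative assumptions, not values or the formulation from the study.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical hourly prices (EUR/kWh) and a simple EV battery model.
prices = np.array([0.05, 0.04, 0.03, 0.06, 0.12, 0.15, 0.10, 0.07])
T, cap, p_max, eff = len(prices), 40.0, 7.0, 0.9
deg_cost = 0.02                    # assumed degradation cost per kWh of throughput

# Decision variables: charge c_t and discharge d_t (kWh), stacked as [c; d].
# Objective: minimise sum(prices*c) - sum(prices*d) + deg_cost*sum(c + d).
c_obj = np.concatenate([prices + deg_cost, -prices + deg_cost])

# State of charge must stay within [0, cap] at every hour (cumulative sums).
L = np.tril(np.ones((T, T)))
A_soc = np.hstack([eff * L, -L])                 # SoC_t = eff*cum(c) - cum(d)
A_ub = np.vstack([A_soc, -A_soc])
b_ub = np.concatenate([np.full(T, cap), np.zeros(T)])

res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, p_max)] * (2 * T), method="highs")
charge, discharge = res.x[:T], res.x[T:]
print(np.round(charge, 2), np.round(discharge, 2))
```

Including the degradation term in the objective is what discourages schedules that cycle the battery for marginal price spreads, which is the qualitative effect the abstract describes.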
Big Five personality traits, job satisfaction and subjective wellbeing in China.
Zhai, Qingguo; Willis, Mike; O'Shea, Bob; Zhai, Yubo; Yang, Yuwen
2013-01-01
This paper examines the effect of the Big Five personality traits on job satisfaction and subjective wellbeing (SWB). The paper also examines the mediating role of job satisfaction on the Big Five-SWB relationship. Data were collected from a sample of 818 urban employees from five Chinese cities: Harbin, Changchun, Shenyang, Dalian, and Fushun. All the study variables were measured with well-established multi-item scales that have been validated both in English-speaking populations and in China. The study found only extraversion to have an effect on job satisfaction, suggesting that there could be cultural difference in the relationships between the Big Five and job satisfaction in China and in the West. The study found that three factors in the Big Five--extraversion, conscientiousness, and neuroticism--have an effect on SWB. This finding is similar to findings in the West, suggesting convergence in the relationship between the Big Five and SWB in different cultural contexts. The research found that only the relationship between extraversion and SWB is partially mediated by job satisfaction, implying that the effect of the Big Five on SWB is mainly direct, rather than indirect via job satisfaction. The study also found that extraversion was the strongest predictor of both job satisfaction and SWB. This finding implies that extraversion could be more important than other factors in the Big Five in predicting job satisfaction and SWB in a "high collectivism" and "high power distance" country such as China. The research findings are discussed in the Chinese cultural context. The study also offers suggestions on the directions for future research.
Mutual information-based LPI optimisation for radar network
NASA Astrophysics Data System (ADS)
Shi, Chenguang; Zhou, Jianjiang; Wang, Fei; Chen, Jun
2015-07-01
Radar network can offer significant performance improvement for target detection and information extraction employing spatial diversity. For a fixed number of radars, the achievable mutual information (MI) for estimating the target parameters may extend beyond a predefined threshold with full power transmission. In this paper, an effective low probability of intercept (LPI) optimisation algorithm is presented to improve LPI performance for radar network. Based on radar network system model, we first provide Schleher intercept factor for radar network as an optimisation metric for LPI performance. Then, a novel LPI optimisation algorithm is presented, where for a predefined MI threshold, Schleher intercept factor for radar network is minimised by optimising the transmission power allocation among radars in the network such that the enhanced LPI performance for radar network can be achieved. The genetic algorithm based on nonlinear programming (GA-NP) is employed to solve the resulting nonconvex and nonlinear optimisation problem. Some simulations demonstrate that the proposed algorithm is valuable and effective to improve the LPI performance for radar network.
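The power-allocation idea can be sketched as a small constrained optimisation: minimise a proxy for the intercept factor (here simply total transmitted power) subject to a mutual-information threshold, using a simplified Gaussian-channel MI expression. The gains, threshold and bounds are invented, and this is not the paper's exact formulation or its GA-NP solver.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n_radar = 4
gain = rng.uniform(0.5, 2.0, size=n_radar)   # hypothetical target-channel gains
mi_threshold = 3.0                            # required mutual information (nats)

def mutual_information(p):
    # Simplified Gaussian-channel MI model: sum_i log(1 + g_i * p_i)
    return np.sum(np.log1p(gain * p))

result = minimize(
    fun=lambda p: p.sum(),                    # proxy for intercept factor / total power
    x0=np.full(n_radar, 1.0),
    constraints=[{"type": "ineq", "fun": lambda p: mutual_information(p) - mi_threshold}],
    bounds=[(0.0, 5.0)] * n_radar,
    method="SLSQP",
)
print(np.round(result.x, 3), mutual_information(result.x))
```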
A novel global Harmony Search method based on Ant Colony Optimisation algorithm
NASA Astrophysics Data System (ADS)
Fouad, Allouani; Boukhetala, Djamel; Boudjema, Fares; Zenger, Kai; Gao, Xiao-Zhi
2016-03-01
The Global-best Harmony Search (GHS) is a stochastic optimisation algorithm recently developed, which hybridises the Harmony Search (HS) method with the concept of swarm intelligence in the particle swarm optimisation (PSO) to enhance its performance. In this article, a new optimisation algorithm called GHSACO is developed by incorporating the GHS with the Ant Colony Optimisation algorithm (ACO). Our method introduces a novel improvisation process, which is different from that of the GHS in the following aspects. (i) A modified harmony memory (HM) representation and conception. (ii) The use of a global random switching mechanism to monitor the choice between the ACO and GHS. (iii) An additional memory consideration selection rule using the ACO random proportional transition rule with a pheromone trail update mechanism. The proposed GHSACO algorithm has been applied to various benchmark functions and constrained optimisation problems. Simulation results demonstrate that it can find significantly better solutions when compared with the original HS and some of its variants.
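A compact sketch of the global-best Harmony Search improvisation loop on which GHSACO builds, without the ACO memory-consideration rule or pheromone update described in the abstract; parameter values and the sphere test function are illustrative only.

```python
import numpy as np

def ghs(objective, bounds, hm_size=20, hmcr=0.9, par=0.3, n_iter=2000, seed=0):
    """Simplified global-best Harmony Search: with probability `hmcr` pick a stored
    value (and with probability `par` replace it by the best harmony's value),
    otherwise draw a fresh random value. The worst harmony is replaced if improved."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    hm = rng.uniform(lo, hi, size=(hm_size, dim))        # harmony memory
    fit = np.array([objective(h) for h in hm])
    for _ in range(n_iter):
        new = np.empty(dim)
        best = hm[np.argmin(fit)]
        for d in range(dim):
            if rng.random() < hmcr:
                new[d] = hm[rng.integers(hm_size), d]     # memory consideration
                if rng.random() < par:
                    new[d] = best[d]                      # "global best" adjustment
            else:
                new[d] = rng.uniform(lo[d], hi[d])        # random selection
        f_new = objective(new)
        worst = np.argmax(fit)
        if f_new < fit[worst]:
            hm[worst], fit[worst] = new, f_new
    return hm[np.argmin(fit)], float(fit.min())

sphere = lambda x: float(np.sum(x ** 2))
print(ghs(sphere, np.array([[-5.0, 5.0]] * 5)))
```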
NASA Astrophysics Data System (ADS)
Wang, Hui; Chen, Huansheng; Wu, Qizhong; Lin, Junmin; Chen, Xueshun; Xie, Xinwei; Wang, Rongrong; Tang, Xiao; Wang, Zifa
2017-08-01
The Global Nested Air Quality Prediction Modeling System (GNAQPMS) is the global version of the Nested Air Quality Prediction Modeling System (NAQPMS), which is a multi-scale chemical transport model used for air quality forecast and atmospheric environmental research. In this study, we present the porting and optimisation of GNAQPMS on a second-generation Intel Xeon Phi processor, codenamed Knights Landing (KNL). Compared with the first-generation Xeon Phi coprocessor (codenamed Knights Corner, KNC), KNL has many new hardware features such as a bootable processor, high-performance in-package memory and ISA compatibility with Intel Xeon processors. In particular, we describe the five optimisations we applied to the key modules of GNAQPMS, including the CBM-Z gas-phase chemistry, advection, convection and wet deposition modules. These optimisations work well on both the KNL 7250 processor and the Intel Xeon E5-2697 V4 processor. They include (1) updating the pure Message Passing Interface (MPI) parallel mode to the hybrid parallel mode with MPI and OpenMP in the emission, advection, convection and gas-phase chemistry modules; (2) fully employing the 512 bit wide vector processing units (VPUs) on the KNL platform; (3) reducing unnecessary memory access to improve cache efficiency; (4) reducing the thread local storage (TLS) in the CBM-Z gas-phase chemistry module to improve its OpenMP performance; and (5) changing the global communication from writing/reading interface files to MPI functions to improve the performance and the parallel scalability. These optimisations greatly improved the GNAQPMS performance. The same optimisations also work well for the Intel Xeon Broadwell processor, specifically E5-2697 v4. Compared with the baseline version of GNAQPMS, the optimised version was 3.51 × faster on KNL and 2.77 × faster on the CPU. Moreover, the optimised version ran at 26 % lower average power on KNL than on the CPU. With the combined performance and energy improvement, the KNL platform was 37.5 % more efficient in power consumption compared with the CPU platform. The optimisations also enabled much better parallel scalability on both the CPU cluster and the KNL cluster, scaling to 40 CPU nodes and 30 KNL nodes with a parallel efficiency of 70.4 and 42.2 %, respectively.
Preece, Stephen J; Chapman, Jonathan D; Braunstein, Bjoern; Brüggemann, Gert-Peter; Nester, Christopher J
2017-01-01
Appropriate footwear for individuals with diabetes but no ulceration history could reduce the risk of first ulceration. However, individuals who deem themselves at low risk are unlikely to seek out bespoke footwear which is personalised. Therefore, our primary aim was to investigate whether group-optimised footwear designs, which could be prefabricated and delivered in a retail setting, could achieve appropriate pressure reduction, or whether footwear selection must be on a patient-by-patient basis. A second aim was to compare responses to footwear design between healthy participants and people with diabetes in order to understand the transferability of previous footwear research, performed in healthy populations. Plantar pressures were recorded from 102 individuals with diabetes, considered at low risk of ulceration. This cohort included 17 individuals with peripheral neuropathy. We also collected data from 66 healthy controls. Each participant walked in 8 rocker shoe designs (4 apex positions × 2 rocker angles). ANOVA analysis was then used to understand the effect of two design features and descriptive statistics used to identify the group-optimised design. Using 200 kPa as a target, this group-optimised design was then compared to the design identified as the best for each participant (using plantar pressure data). Peak plantar pressure increased significantly as apex position was moved distally and rocker angle reduced ( p < 0.001). The group-optimised design incorporated an apex at 52% of shoe length, a 20° rocker angle and an apex angle of 95°. With this design 71-81% of peak pressures were below the 200 kPa threshold, both in the full cohort of individuals with diabetes and also in the neuropathic subgroup. Importantly, only small increases (<5%) in this proportion were observed when participants wore footwear which was individually selected. In terms of optimised footwear designs, healthy participants demonstrated the same response as participants with diabetes, despite having lower plantar pressures. This is the first study demonstrating that a group-optimised, generic rocker shoe might perform almost as well as footwear selected on a patient by patient basis in a low risk patient group. This work provides a starting point for clinical evaluation of generic versus personalised pressure reducing footwear.
The Need for a Definition of Big Data for Nursing Science: A Case Study of Disaster Preparedness.
Wong, Ho Ting; Chiang, Vico Chung Lim; Choi, Kup Sze; Loke, Alice Yuen
2016-10-17
The rapid development of technology has made enormous volumes of data available and achievable anytime and anywhere around the world. Data scientists call this change a data era and have introduced the term "Big Data", which has drawn the attention of nursing scholars. Nevertheless, the concept of Big Data is quite fuzzy and there is no agreement on its definition among researchers of different disciplines. Without a clear consensus on this issue, nursing scholars who are relatively new to the concept may consider Big Data to be merely a dataset of a bigger size. Having a suitable definition for nurse researchers in their context of research and practice is essential for the advancement of nursing research. In view of the need for a better understanding on what Big Data is, the aim in this paper is to explore and discuss the concept. Furthermore, an example of a Big Data research study on disaster nursing preparedness involving six million patient records is used for discussion. The example demonstrates that a Big Data analysis can be conducted from many more perspectives than would be possible in traditional sampling, and is superior to traditional sampling. Experience gained from the process of using Big Data in this study will shed light on future opportunities for conducting evidence-based nursing research to achieve competence in disaster nursing.
Big Data and Clinicians: A Review on the State of the Science
Wang, Weiqi
2014-01-01
Background In the past few decades, medically related data collection saw a huge increase, referred to as big data. These huge datasets bring challenges in storage, processing, and analysis. In clinical medicine, big data is expected to play an important role in identifying causality of patient symptoms, in predicting hazards of disease incidence or reoccurrence, and in improving primary-care quality. Objective The objective of this review was to provide an overview of the features of clinical big data, describe a few commonly employed computational algorithms, statistical methods, and software toolkits for data manipulation and analysis, and discuss the challenges and limitations in this realm. Methods We conducted a literature review to identify studies on big data in medicine, especially clinical medicine. We used different combinations of keywords to search PubMed, Science Direct, Web of Knowledge, and Google Scholar for literature of interest from the past 10 years. Results This paper reviewed studies that analyzed clinical big data and discussed issues related to storage and analysis of this type of data. Conclusions Big data is becoming a common feature of biological and clinical studies. Researchers who use clinical big data face multiple challenges, and the data itself has limitations. It is imperative that methodologies for data analysis keep pace with our ability to collect and store data. PMID:25600256
The Need for a Definition of Big Data for Nursing Science: A Case Study of Disaster Preparedness
Wong, Ho Ting; Chiang, Vico Chung Lim; Choi, Kup Sze; Loke, Alice Yuen
2016-01-01
The rapid development of technology has made enormous volumes of data available and achievable anytime and anywhere around the world. Data scientists call this change a data era and have introduced the term “Big Data”, which has drawn the attention of nursing scholars. Nevertheless, the concept of Big Data is quite fuzzy and there is no agreement on its definition among researchers of different disciplines. Without a clear consensus on this issue, nursing scholars who are relatively new to the concept may consider Big Data to be merely a dataset of a bigger size. Having a suitable definition for nurse researchers in their context of research and practice is essential for the advancement of nursing research. In view of the need for a better understanding on what Big Data is, the aim in this paper is to explore and discuss the concept. Furthermore, an example of a Big Data research study on disaster nursing preparedness involving six million patient records is used for discussion. The example demonstrates that a Big Data analysis can be conducted from many more perspectives than would be possible in traditional sampling, and is superior to traditional sampling. Experience gained from the process of using Big Data in this study will shed light on future opportunities for conducting evidence-based nursing research to achieve competence in disaster nursing. PMID:27763525
Rooshenas, Leila; Fairhurst, Katherine; Rees, Jonathan; Gamble, Carrol; Blazeby, Jane M
2018-01-01
Objectives To examine the design and findings of recruitment studies in randomised controlled trials (RCTs) involving patients with an unscheduled hospital admission (UHA), to consider how to optimise recruitment in future RCTs of this nature. Design Studies within the ORRCA database (Online Resource for Recruitment Research in Clinical TriAls; www.orrca.org.uk) that reported on recruitment to RCTs involving UHAs in patients >18 years were included. Extracted data included trial clinical details, and the rationale and main findings of the recruitment study. Results Of 3114 articles populating ORRCA, 39 recruitment studies were eligible, focusing on 68 real and 13 hypothetical host RCTs. Four studies were prospectively planned investigations of recruitment interventions, one of which was a nested RCT. Most recruitment papers were reports of recruitment experiences from one or more ‘real’ RCTs (n=24) or studies using hypothetical RCTs (n=11). Rationales for conducting recruitment studies included limited time for informed consent (IC) and patients being too unwell to provide IC. Methods to optimise recruitment included providing patients with trial information in the prehospital setting, technology to allow recruiters to cover multiple sites, screening logs to uncover recruitment barriers, and verbal rather than written information and consent. Conclusion There is a paucity of high-quality research into recruitment in RCTs involving UHAs with only one nested randomised study evaluating a recruitment intervention. Among the remaining studies, methods to optimise recruitment focused on how to improve information provision in the prehospital setting and use of screening logs. Future research in this setting should focus on the prospective evaluation of the well-developed interventions to optimise recruitment. PMID:29420230
Rowlands, Ceri; Rooshenas, Leila; Fairhurst, Katherine; Rees, Jonathan; Gamble, Carrol; Blazeby, Jane M
2018-02-02
To examine the design and findings of recruitment studies in randomised controlled trials (RCTs) involving patients with an unscheduled hospital admission (UHA), to consider how to optimise recruitment in future RCTs of this nature. Studies within the ORRCA database (Online Resource for Recruitment Research in Clinical TriAls; www.orrca.org.uk) that reported on recruitment to RCTs involving UHAs in patients >18 years were included. Extracted data included trial clinical details, and the rationale and main findings of the recruitment study. Of 3114 articles populating ORRCA, 39 recruitment studies were eligible, focusing on 68 real and 13 hypothetical host RCTs. Four studies were prospectively planned investigations of recruitment interventions, one of which was a nested RCT. Most recruitment papers were reports of recruitment experiences from one or more 'real' RCTs (n=24) or studies using hypothetical RCTs (n=11). Rationales for conducting recruitment studies included limited time for informed consent (IC) and patients being too unwell to provide IC. Methods to optimise recruitment included providing patients with trial information in the prehospital setting, technology to allow recruiters to cover multiple sites, screening logs to uncover recruitment barriers, and verbal rather than written information and consent. There is a paucity of high-quality research into recruitment in RCTs involving UHAs with only one nested randomised study evaluating a recruitment intervention. Among the remaining studies, methods to optimise recruitment focused on how to improve information provision in the prehospital setting and use of screening logs. Future research in this setting should focus on the prospective evaluation of the well-developed interventions to optimise recruitment. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Zarb, Francis; McEntee, Mark F; Rainford, Louise
2015-06-01
To evaluate visual grading characteristics (VGC) and ordinal regression analysis during head CT optimisation as a potential alternative to visual grading assessment (VGA), traditionally employed to score anatomical visualisation. Patient images (n = 66) were obtained using current and optimised imaging protocols from two CT suites: a 16-slice scanner at the national Maltese centre for trauma and a 64-slice scanner in a private centre. Local resident radiologists (n = 6) performed VGA followed by VGC and ordinal regression analysis. VGC alone indicated that optimised protocols had similar image quality as current protocols. Ordinal logistic regression analysis provided an in-depth evaluation, criterion by criterion allowing the selective implementation of the protocols. The local radiology review panel supported the implementation of optimised protocols for brain CT examinations (including trauma) in one centre, achieving radiation dose reductions ranging from 24 % to 36 %. In the second centre a 29 % reduction in radiation dose was achieved for follow-up cases. The combined use of VGC and ordinal logistic regression analysis led to clinical decisions being taken on the implementation of the optimised protocols. This improved method of image quality analysis provided the evidence to support imaging protocol optimisation, resulting in significant radiation dose savings. • There is need for scientifically based image quality evaluation during CT optimisation. • VGC and ordinal regression analysis in combination led to better informed clinical decisions. • VGC and ordinal regression analysis led to dose reductions without compromising diagnostic efficacy.
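VGC analysis compares ordinal image-quality scores from two protocols in much the same way an ROC analysis compares ratings, and the area under the VGC curve can be estimated nonparametrically from pairwise comparisons. A small sketch with invented scores, not the study's data:

```python
import numpy as np

def vgc_auc(scores_ref, scores_opt):
    """Nonparametric area under the VGC curve: probability that a randomly chosen
    optimised-protocol rating exceeds a reference-protocol rating (ties count 0.5)."""
    ref = np.asarray(scores_ref, dtype=float)
    opt = np.asarray(scores_opt, dtype=float)
    greater = (opt[:, None] > ref[None, :]).sum()
    ties = (opt[:, None] == ref[None, :]).sum()
    return (greater + 0.5 * ties) / (len(ref) * len(opt))

# Hypothetical 1-5 visual-grading scores for one anatomical criterion.
reference = [3, 4, 3, 5, 4, 3, 4, 2, 3, 4]
optimised = [3, 4, 4, 4, 3, 3, 4, 3, 3, 4]
print(round(vgc_auc(reference, optimised), 3))   # ~0.5 suggests comparable image quality
```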
NASA Astrophysics Data System (ADS)
Kingdon, Andrew; Nayembil, Martin L.; Richardson, Anne E.; Smith, A. Graham
2016-11-01
New requirements to understand geological properties in three dimensions have led to the development of PropBase, a data structure and a set of delivery tools to meet this need. At the BGS, relational database management systems (RDBMS) have facilitated effective data management using normalised, subject-based database designs with business rules in a centralised, vocabulary-controlled architecture. These have delivered effective data storage in a secure environment. However, isolated subject-oriented designs prevented efficient cross-domain querying of datasets. Additionally, the tools provided often did not enable effective data discovery, as they struggled to resolve the complex underlying normalised structures, resulting in poor data access speeds. Users developed bespoke access tools for structures they did not fully understand, sometimes obtaining incorrect results. Therefore, BGS has developed PropBase, a generic denormalised data structure within an RDBMS to store property data, to facilitate rapid and standardised data discovery and access, incorporating 2D and 3D physical and chemical property data, with associated metadata. This includes scripts to populate and synchronise the layer with its data sources through structured input and transcription standards. A core component of the architecture is an optimised query object that delivers geoscience information from a structure equivalent to a data warehouse. This enables optimised query performance to deliver data in multiple standardised formats using a web discovery tool. Semantic interoperability is enforced through vocabularies combined from all data sources, facilitating searching of related terms. PropBase holds 28.1 million spatially enabled property data points from 10 source databases, incorporating over 50 property data types with a vocabulary set that includes 557 property terms. By enabling property data searches across multiple databases, PropBase has facilitated new scientific research previously considered impractical. PropBase is easily extended to incorporate 4D data (time series) and is providing a baseline for new "big data" monitoring projects.
ERIC Educational Resources Information Center
Preckel, Franzis; Zeidner, Moshe; Goetz, Thomas; Schleyer, Esther Jane
2008-01-01
This study takes a second look at the "big-fish-little-pond effect" (BFLPE) on a national sample of 769 gifted Israeli students (32% female) previously investigated by Zeidner and Schleyer (Zeidner, M., & Schleyer, E. J., (1999a). "The big-fish-little-pond effect for academic self-concept, test anxiety, and school grades in…
Are Behavior Problems in Preschool Children Related to Big-Five Markers.
ERIC Educational Resources Information Center
Martin, Roy P.
This study investigated whether the "Big-5" structure (a 5-factor model used to capture variance in adult personality) can be obtained from parental ratings of 4-year-old children using traditional markers of this structure that are derived primarily from research on adult personality. The study also examined whether Big-5 markers can be…
NASA Astrophysics Data System (ADS)
Rüther, Heinz; Martine, Hagai M.; Mtalo, E. G.
This paper presents a novel approach to semiautomatic building extraction in informal settlement areas from aerial photographs. The proposed approach uses a strategy of delineating buildings by optimising their approximate building contour position. Approximate building contours are derived automatically by locating elevation blobs in digital surface models. Building extraction is then effected by means of the snakes algorithm and the dynamic programming optimisation technique. With dynamic programming, the building contour optimisation problem is realized through a discrete multistage process and solved by the "time-delayed" algorithm, as developed in this work. The proposed building extraction approach is a semiautomatic process, with user-controlled operations linking fully automated subprocesses. Inputs into the proposed building extraction system are ortho-images and digital surface models, the latter being generated through image matching techniques. Buildings are modeled as "lumps" or elevation blobs in the digital surface models, which are derived by altimetric thresholding of those models. Initial windows for building extraction are provided by projecting the elevation blobs' centre points onto an ortho-image. In the next step, approximate building contours are extracted from the ortho-image by region growing constrained by edges. The approximate building contours thus derived are inputs into the dynamic programming optimisation process in which the final building contours are established. The proposed system is tested on two study areas: Marconi Beam in Cape Town, South Africa, and Manzese in Dar es Salaam, Tanzania. Sixty percent of the buildings in the study areas were extracted and verified, and it is concluded that the proposed approach contributes meaningfully to the extraction of buildings in moderately complex and crowded informal settlement areas.
Mekuto, Lukhanyo; Ntwampe, Seteno Karabo Obed; Jackson, Vanessa Angela
2015-07-01
A mesophilic alkali-tolerant bacterial consortium belonging to the Bacillus genus was evaluated for its ability to biodegrade a high free cyanide (CN(-)) concentration (up to 500 mg CN(-)/L), with subsequent oxidation of the formed ammonium and nitrates, in a continuous bioreactor system solely supplemented with whey waste. Furthermore, an optimisation study for successful cyanide biodegradation by this consortium was evaluated in batch bioreactors (BBs) using response surface methodology (RSM). The input variables, that is, pH, temperature and whey-waste concentration, were optimised using a numerical optimisation technique, and the optimum conditions were found to be as follows: pH 9.88, temperature 33.60 °C and whey-waste concentration of 14.27 g/L, under which the microbial species can biodegrade 206.53 mg CN(-)/L within 96 h from an initial cyanide concentration of 500 mg CN(-)/L. Furthermore, using the optimised data, cyanide biodegradation in a continuous mode was evaluated in a dual-stage packed-bed bioreactor (PBB) connected in series to a pneumatic bioreactor system (PBS) used for simultaneous nitrification, including aerobic denitrification. The whey-supported Bacillus sp. culture was not inhibited by free cyanide concentrations of up to 500 mg CN(-)/L, with an overall degradation efficiency of ≥ 99 % and subsequent nitrification and aerobic denitrification of the formed ammonium and nitrates over a period of 80 days. This is the first study to report free cyanide biodegradation at concentrations of up to 500 mg CN(-)/L in a continuous system using whey waste as a microbial feedstock. The results showed that the process has the potential for the bioremediation of cyanide-containing wastewaters.
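The response surface methodology step described above can be sketched compactly: fit a quadratic surface to a small factorial design and locate its predicted optimum numerically. The Python sketch below simplifies to two factors (pH and temperature), and all design points and response values are invented for illustration; they are not the study's data.

```python
import numpy as np
from scipy.optimize import minimize

# Made-up 3 x 3 factorial design in pH and temperature, with invented responses
# (mg cyanide degraded per litre).
ph   = np.repeat([8.0, 9.5, 11.0], 3)
temp = np.tile([25.0, 34.0, 40.0], 3)
resp = np.array([120.0, 160.0, 140.0, 150.0, 205.0, 170.0, 130.0, 165.0, 135.0])

# Fit a full quadratic response surface: 1, pH, T, pH*T, pH^2, T^2.
design = np.column_stack([np.ones_like(ph), ph, temp, ph * temp, ph ** 2, temp ** 2])
coef, *_ = np.linalg.lstsq(design, resp, rcond=None)

def neg_predicted(x):
    # Negative predicted response at (pH, T), so that minimisation maximises it.
    row = np.array([1.0, x[0], x[1], x[0] * x[1], x[0] ** 2, x[1] ** 2])
    return -(row @ coef)

opt = minimize(neg_predicted, x0=[9.5, 34.0], bounds=[(8.0, 11.0), (25.0, 40.0)])
print("predicted optimum (pH, temperature):", opt.x)
```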
Acoustic Resonator Optimisation for Airborne Particle Manipulation
NASA Astrophysics Data System (ADS)
Devendran, Citsabehsan; Billson, Duncan R.; Hutchins, David A.; Alan, Tuncay; Neild, Adrian
Advances in micro-electromechanical systems (MEMS) technology and biomedical research necessitate micro-machined manipulators to capture, handle and position delicate micron-sized particles. To this end, a parallel plate acoustic resonator system has been investigated for the purposes of manipulation and entrapment of micron-sized particles in air. Numerical and finite element modelling was performed to optimise the design of the layered acoustic resonator. To obtain an optimised resonator design, careful consideration of the effects of layer thickness and material properties is required. Furthermore, the effect of acoustic attenuation, which is frequency dependent, is also considered within this study, leading to an optimum operational frequency range. Finally, experimental results demonstrated good levitation and capture of particles of various properties and sizes, down to as small as 14.8 μm.
Comparative validity of brief to medium-length Big Five and Big Six Personality Questionnaires.
Thalmayer, Amber Gayle; Saucier, Gerard; Eigenhuis, Annemarie
2011-12-01
A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five inventories when conducting a study and are faced with a variety of options as to inventory length. Furthermore, a 6-factor model has been proposed to extend and update the Big Five model, in part by adding a dimension of Honesty/Humility or Honesty/Propriety. In this study, 3 popular brief to medium-length Big Five measures (NEO Five Factor Inventory, Big Five Inventory [BFI], and International Personality Item Pool), and 3 six-factor measures (HEXACO Personality Inventory, Questionnaire Big Six Scales, and a 6-factor version of the BFI) were placed in competition to best predict important student life outcomes. The effect of test length was investigated by comparing brief versions of most measures (subsets of items) with original versions. Personality questionnaires were administered to undergraduate students (N = 227). Participants' college transcripts and student conduct records were obtained 6-9 months after data was collected. Six-factor inventories demonstrated better predictive ability for life outcomes than did some Big Five inventories. Additional behavioral observations made on participants, including their Facebook profiles and cell-phone text usage, were predicted similarly by Big Five and 6-factor measures. A brief version of the BFI performed surprisingly well; across inventory platforms, increasing test length had little effect on predictive validity. Comparative validity of the models and measures in terms of outcome prediction and parsimony is discussed.
Chapman, Benjamin P; Elliot, Ari J
2017-08-01
Controversy exists over the use of brief Big Five scales in health studies. We investigated links between an ultra-brief measure, the Big Five Inventory-10, and mortality in the General Social Survey. The Agreeableness scale was associated with elevated mortality risk (hazard ratio = 1.26, p = .017). This effect was attributable to the reversed-scored item "Tends to find fault with others," so that greater fault-finding predicted lower mortality risk. The Conscientiousness scale approached meta-analytic estimates, which were not precise enough for significance. Those seeking Big Five measurement in health studies should be aware that the Big Five Inventory-10 may yield unusual results.
Analyzing Big Data in Psychology: A Split/Analyze/Meta-Analyze Approach
Cheung, Mike W.-L.; Jak, Suzanne
2016-01-01
Big data is a field that has traditionally been dominated by disciplines such as computer science and business, where mainly data-driven analyses have been performed. Psychology, a discipline in which a strong emphasis is placed on behavioral theories and empirical research, has the potential to contribute greatly to the big data movement. However, one challenge to psychologists—and probably the most crucial one—is that most researchers may not have the necessary programming and computational skills to analyze big data. In this study we argue that psychologists can also conduct big data research and that, rather than trying to acquire new programming and computational skills, they should focus on their strengths, such as performing psychometric analyses and testing theories using multivariate analyses to explain phenomena. We propose a split/analyze/meta-analyze approach that allows psychologists to easily analyze big data. Two real datasets are used to demonstrate the proposed procedures in R. A new research agenda related to the analysis of big data in psychology is outlined at the end of the study. PMID:27242639
Analyzing Big Data in Psychology: A Split/Analyze/Meta-Analyze Approach.
Cheung, Mike W-L; Jak, Suzanne
2016-01-01
Big data is a field that has traditionally been dominated by disciplines such as computer science and business, where mainly data-driven analyses have been performed. Psychology, a discipline in which a strong emphasis is placed on behavioral theories and empirical research, has the potential to contribute greatly to the big data movement. However, one challenge to psychologists-and probably the most crucial one-is that most researchers may not have the necessary programming and computational skills to analyze big data. In this study we argue that psychologists can also conduct big data research and that, rather than trying to acquire new programming and computational skills, they should focus on their strengths, such as performing psychometric analyses and testing theories using multivariate analyses to explain phenomena. We propose a split/analyze/meta-analyze approach that allows psychologists to easily analyze big data. Two real datasets are used to demonstrate the proposed procedures in R. A new research agenda related to the analysis of big data in psychology is outlined at the end of the study.
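The split/analyze/meta-analyze procedure summarised in the two records above can be illustrated in a few lines. The demonstrations in the paper are in R; the Python sketch below uses simulated data and a fixed-effect (inverse-variance) pooling step, and the variable names and split counts are arbitrary choices for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, beta_true = 1_000_000, 0.3
x = rng.normal(size=n)
y = 2.0 + beta_true * x + rng.normal(size=n)      # simulated "big" dataset

# Split: partition the data into manageable parts; Analyze: fit the same model
# in each part and record the estimate and its variance.
estimates, variances = [], []
for x_part, y_part in zip(np.array_split(x, 20), np.array_split(y, 20)):
    fit = sm.OLS(y_part, sm.add_constant(x_part)).fit()
    estimates.append(fit.params[1])
    variances.append(fit.bse[1] ** 2)

# Meta-analyze: pool the per-split slopes with inverse-variance weights.
est = np.asarray(estimates)
w = 1.0 / np.asarray(variances)
pooled = np.sum(w * est) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
print(f"pooled slope = {pooled:.4f} (95% CI +/- {1.96 * pooled_se:.4f})")
```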
A Study Of The Internet Of Things And Rfid Technology: Big Data In Navy Medicine
2017-12-01
MBA professional report, Naval Postgraduate School, Monterey, California, December 2017. Author: Gill S. Trainor, Lieutenant. Distribution is unlimited.
Gergei, Ingrid; Krämer, Bernhard K; Scharnagl, Hubert; Stojakovic, Tatjana; März, Winfried; Mondorf, Ulrich
The endothelin system (Big-ET-1) is a key regulator in cardiovascular (CV) disease and congestive heart failure (CHF). We have examined the incremental value of Big-ET-1 in predicting total and CV mortality next to the well-established CV risk marker N-Terminal Pro-B-Type Natriuretic Peptide (NT-proBNP). Big-ET-1 and NT-proBNP were determined in 2829 participants referred for coronary angiography (follow-up 9.9 years). Big-ET-1 is an independent predictor of total and CV mortality and of death due to CHF. The conjunct use of Big-ET-1 and NT-proBNP improves the risk stratification of patients with intermediate to high risk of CV death and CHF. Big-ET-1 improves risk stratification in patients referred for coronary angiography.
Personality and job performance: the Big Five revisited.
Hurtz, G M; Donovan, J J
2000-12-01
Prior meta-analyses investigating the relation between the Big 5 personality dimensions and job performance have all contained a threat to construct validity, in that much of the data included within these analyses was not derived from actual Big 5 measures. In addition, these reviews did not address the relations between the Big 5 and contextual performance. Therefore, the present study sought to provide a meta-analytic estimate of the criterion-related validity of explicit Big 5 measures for predicting job performance and contextual performance. The results for job performance closely paralleled 2 of the previous meta-analyses, whereas analyses with contextual performance showed more complex relations among the Big 5 and performance. A more critical interpretation of the Big 5-performance relationship is presented, and suggestions for future research aimed at enhancing the validity of personality predictors are provided.
A proposed framework of big data readiness in public sectors
NASA Astrophysics Data System (ADS)
Ali, Raja Haslinda Raja Mohd; Mohamad, Rosli; Sudin, Suhizaz
2016-08-01
Growing interest in big data is mainly linked to its great potential to unveil unforeseen patterns or profiles that support an organisation's key business decisions. Following the private sector's moves to embrace big data, the government sector is now getting on the bandwagon. Big data has been considered one of the potential tools to enhance service delivery of the public sector within its financial resource constraints. The Malaysian government, in particular, has considered big data one of the main items on the national agenda. Regardless of government commitment to promote big data amongst government agencies, the degree of readiness of the government agencies as well as their employees is crucial in ensuring successful deployment of big data. This paper, therefore, proposes a conceptual framework to investigate perceived readiness for big data potentials amongst Malaysian government agencies. Perceived readiness of 28 ministries and their respective employees will be assessed using both qualitative (interview) and quantitative (survey) approaches. The outcome of the study is expected to offer meaningful insight into the factors affecting change readiness among public agencies regarding big data potentials and the expected outcomes of greater or lower change readiness among the public sector.
The Ophidia Stack: Toward Large Scale, Big Data Analytics Experiments for Climate Change
NASA Astrophysics Data System (ADS)
Fiore, S.; Williams, D. N.; D'Anca, A.; Nassisi, P.; Aloisio, G.
2015-12-01
The Ophidia project is a research effort on big data analytics addressing scientific data analysis challenges in multiple domains (e.g. climate change). It provides a "datacube-oriented" framework responsible for atomically processing and manipulating scientific datasets, by providing a common way to run distributed tasks on large sets of data fragments (chunks). Ophidia provides declarative, server-side, and parallel data analysis, jointly with an internal storage model able to efficiently deal with multidimensional data and a hierarchical data organization to manage large data volumes. The project relies on a strong background in high performance database management and On-Line Analytical Processing (OLAP) systems to manage large scientific datasets. The Ophidia analytics platform provides about 50 data operators to manipulate datacubes and more than 100 array-based primitives to perform data analysis on large scientific data arrays. To address interoperability, Ophidia provides multiple server interfaces (e.g. OGC-WPS). From a client standpoint, a Python interface enables the exploitation of the framework within Python-based ecosystems/applications (e.g. IPython) and the straightforward adoption of a strong set of related libraries (e.g. SciPy, NumPy). The talk will highlight a key feature of the Ophidia framework stack: the "Analytics Workflow Management System" (AWfMS). The Ophidia AWfMS coordinates, orchestrates, optimises and monitors the execution of multiple scientific data analytics and visualization tasks, thus supporting "complex analytics experiments". Some real use cases related to the CMIP5 experiment will be discussed. In particular, with regard to the "Climate models intercomparison data analysis" case study proposed in the EU H2020 INDIGO-DataCloud project, workflows related to (i) anomalies, (ii) trend, and (iii) climate change signal analysis will be presented. Such workflows will be distributed across multiple sites - according to the distribution of the datasets - and will include intercomparison, ensemble, and outlier analysis. The two-level workflow solution envisioned in INDIGO (coarse grain for distributed task orchestration, and fine grain at the level of a single data analytics cluster instance) will be presented and discussed.
Macedonia, Christian R; Johnson, Clark T; Rajapakse, Indika
2017-02-01
Technical advances in science have had broad implications in reproductive and women's health care. Recent innovations in population-level data collection and storage have made available an unprecedented amount of data for analysis while computational technology has evolved to permit processing of data previously thought too dense to study. "Big data" is a term used to describe data that are a combination of dramatically greater volume, complexity, and scale. The number of variables in typical big data research can readily be in the thousands, challenging the limits of traditional research methodologies. Regardless of what it is called, advanced data methods, predictive analytics, or big data, this unprecedented revolution in scientific exploration has the potential to dramatically assist research in obstetrics and gynecology broadly across subject matter. Before implementation of big data research methodologies, however, potential researchers and reviewers should be aware of strengths, strategies, study design methods, and potential pitfalls. Examination of big data research examples contained in this article provides insight into the potential and the limitations of this data science revolution and practical pathways for its useful implementation.
Kievit, Wietske; van Herwaarden, Noortje; van den Hoogen, Frank Hj; van Vollenhoven, Ronald F; Bijlsma, Johannes Wj; van den Bemt, Bart Jf; van der Maas, Aatke; den Broeder, Alfons A
2016-11-01
A disease activity-guided dose optimisation strategy for adalimumab or etanercept (TNFi (tumour necrosis factor inhibitors)) has been shown to be non-inferior in maintaining disease control in patients with rheumatoid arthritis (RA) compared with usual care. However, the cost-effectiveness of this strategy is still unknown. This is a preplanned cost-effectiveness analysis of the Dose REduction Strategy of Subcutaneous TNF inhibitors (DRESS) study, a randomised controlled, open-label, non-inferiority trial performed in two Dutch rheumatology outpatient clinics. Patients with low disease activity using TNF inhibitors were included. Total healthcare costs were measured and quality-adjusted life years (QALY) were based on EQ5D utility scores. Decremental cost-effectiveness analyses were performed using bootstrap analyses; incremental net monetary benefit (iNMB) was used to express cost-effectiveness. 180 patients were included; 121 were allocated to the dose optimisation strategy and 59 to control. The dose optimisation strategy resulted in a mean cost saving of -€12 280 (95 percentile -€10 502; -€14 104) per patient per 18 months. There is an 84% chance that the dose optimisation strategy results in a QALY loss, with a mean QALY loss of -0.02 (-0.07 to 0.02). The decremental cost-effectiveness ratio (DCER) was €390 493 (€5 085 184; dominant) of savings per QALY lost. The mean iNMB was €10 467 (€6553-€14 037). Sensitivity analyses using 30% and 50% lower prices for TNFi remained cost-effective. Disease activity-guided dose optimisation of TNFi results in considerable cost savings while no relevant loss of quality of life was observed. When the minimal QALY loss is compensated with the upper limit of what society is willing to pay or accept in the Netherlands, the net savings are still high. NTR3216; Post-results. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Hermans, Michel P; Brotons, Carlos; Elisaf, Moses; Michel, Georges; Muls, Erik; Nobels, Frank
2013-12-01
Micro- and macrovascular complications of type 2 diabetes have an adverse impact on survival, quality of life and healthcare costs. The OPTIMISE (OPtimal Type 2 dIabetes Management Including benchmarking and Standard trEatment) trial, which compares physicians' individual performances with those of a peer group, evaluates the hypothesis that benchmarking, using assessments of change in three critical quality indicators of vascular risk: glycated haemoglobin (HbA1c), low-density lipoprotein-cholesterol (LDL-C) and systolic blood pressure (SBP), may improve quality of care in type 2 diabetes in the primary care setting. This was a randomised, controlled study of 3980 patients with type 2 diabetes. Six European countries participated in the OPTIMISE study (NCT00681850). Quality of care was assessed by the percentage of patients achieving pre-set targets for the three critical quality indicators over 12 months. Physicians were randomly assigned to receive either benchmarked or non-benchmarked feedback. All physicians received feedback on six of their patients' modifiable outcome indicators (HbA1c, fasting glycaemia, total cholesterol, high-density lipoprotein-cholesterol (HDL-C), LDL-C and triglycerides). Physicians in the benchmarking group additionally received information on the levels of control achieved for the three critical quality indicators compared with colleagues. At baseline, the percentage of evaluable patients (N = 3980) achieving pre-set targets was 51.2% (HbA1c; n = 2028/3964); 34.9% (LDL-C; n = 1350/3865); 27.3% (systolic blood pressure; n = 911/3337). OPTIMISE confirms that target achievement in the primary care setting is suboptimal for all three critical quality indicators. This represents an unmet but modifiable need to revisit the mechanisms and management of improving care in type 2 diabetes. OPTIMISE will help to assess whether benchmarking is a useful clinical tool for improving outcomes in type 2 diabetes.
Big Data in Health: a Literature Review from the Year 2005.
de la Torre Díez, Isabel; Cosgaya, Héctor Merino; Garcia-Zapirain, Begoña; López-Coronado, Miguel
2016-09-01
The information stored in healthcare systems has increased over the last ten years, leading it to be considered Big Data. There is a wealth of health information ready to be analysed. However, the sheer volume raises a challenge for traditional methods. The aim of this article is to conduct a cutting-edge study on Big Data in healthcare from 2005 to the present. This literature review will help researchers to know how Big Data has developed in the health industry and open up new avenues for research. Information searches have been made on various scientific databases such as Pubmed, Science Direct, Scopus and Web of Science for Big Data in healthcare. The search criteria were "Big Data" and "health" with a date range from 2005 to the present. A total of 9724 articles were found on the databases. 9515 articles were discarded as duplicates or for not having a title of interest to the study. 209 articles were read, with the resulting decision that 46 were useful for this study. 52.6 % of the articles used were found in Science Direct, 23.7 % in Pubmed, 22.1 % through Scopus and the remaining 2.6 % through the Web of Science. Big Data has undergone extremely high growth since 2011 and its use is becoming compulsory in developed nations and in an increasing number of developing nations. Big Data is a step forward and a cost reducer for public and private healthcare.
Zhou, C; Li, C; Li, D; Wang, Y; Shao, W; You, Y; Peng, J; Zhang, X; Lu, L; Shen, X
2013-12-19
The elongation of neurons is highly dependent on membrane trafficking. Brefeldin A (BFA)-inhibited guanine nucleotide-exchange protein 1 (BIG1) functions in the membrane trafficking between the Golgi apparatus and the plasma membrane. BFA, an uncompetitive inhibitor of BIG1, can inhibit neurite outgrowth and polarity development. In this study, we aimed to define the possible role of BIG1 in neurite development and to further investigate the potential mechanism. By immunostaining, we found that BIG1 was extensively colocalized with synaptophysin, a marker for synaptic vesicles, in soma and partly in neurites. The amounts of both BIG1 protein and mRNA were up-regulated during rat brain development. BIG1 depletion significantly decreased the neurite length and inhibited the phosphorylation of phosphatidylinositide 3-kinase (PI3K) and protein kinase B (AKT). Inhibition of BIG1 guanine nucleotide-exchange factor (GEF) activity by BFA or overexpression of the dominant-negative BIG1 reduced PI3K and AKT phosphorylation, indicating that the regulatory effects of BIG1 on the PI3K-AKT signaling pathway are dependent on its GEF activity. BIG1 siRNA or BFA treatment also significantly reduced extracellular signal-regulated kinase (ERK) phosphorylation. Overexpression of wild-type BIG1 significantly increased ERK phosphorylation, but the dominant-negative BIG1 had no effect on ERK phosphorylation, indicating that the involvement of BIG1 in ERK signaling regulation may not be dependent on its GEF activity. Our results identify a novel function of BIG1 in neurite development. The newly recognized function integrates the role of BIG1 in membrane trafficking with the activation of the PI3K-AKT and ERK signaling pathways, which are critical in neurite development. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
van Haveren, Rens; Ogryczak, Włodzimierz; Verduijn, Gerda M.; Keijzer, Marleen; Heijmen, Ben J. M.; Breedveld, Sebastiaan
2017-06-01
Previously, we have proposed Erasmus-iCycle, an algorithm for fully automated IMRT plan generation based on prioritised (lexicographic) multi-objective optimisation with the 2-phase ɛ-constraint (2pɛc) method. For each patient, the output of Erasmus-iCycle is a clinically favourable, Pareto optimal plan. The 2pɛc method uses a list of objective functions that are consecutively optimised, following a strict, user-defined prioritisation. The novel lexicographic reference point method (LRPM) is capable of solving multi-objective problems in a single optimisation, using a fuzzy prioritisation of the objectives. Trade-offs are made globally, aiming for large favourable gains for lower prioritised objectives at the cost of only slight degradations for higher prioritised objectives, or vice versa. In this study, the LRPM is validated for 15 head and neck cancer patients receiving bilateral neck irradiation. The generated plans using the LRPM are compared with the plans resulting from the 2pɛc method. Both methods were capable of automatically generating clinically relevant treatment plans for all patients. For some patients, the LRPM allowed large favourable gains in some treatment plan objectives at the cost of only small degradations for the others. Moreover, because of the applied single optimisation instead of multiple optimisations, the LRPM reduced the average computation time from 209.2 to 9.5 min, a speed-up factor of 22 relative to the 2pɛc method.
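The 2-phase ɛ-constraint idea referred to above can be sketched on a toy problem: the highest-priority objective is optimised first, and the next objective is then optimised subject to the first staying within a small slack of its best value. The quadratic objectives, slack value and solver choice below are illustrative assumptions and are not part of Erasmus-iCycle or the LRPM.

```python
import numpy as np
from scipy.optimize import minimize

def f1(x):  # highest-priority objective (stand-in for, e.g., target coverage)
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

def f2(x):  # lower-priority objective (stand-in for, e.g., organ-at-risk dose)
    return x[0] ** 2 + x[1] ** 2

x0 = np.zeros(2)

# Phase 1: optimise the top-priority objective on its own.
res1 = minimize(f1, x0)
f1_best = res1.fun

# Phase 2: optimise the next objective, constraining f1 to stay near its optimum.
slack = 0.05
cons = {"type": "ineq", "fun": lambda x: f1_best + slack - f1(x)}
res2 = minimize(f2, res1.x, constraints=[cons])

print("solution:", res2.x, " f1:", round(f1(res2.x), 3), " f2:", round(f2(res2.x), 3))
```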
Rosa, Rafael D; Santini, Adrien; Fievet, Julie; Bulet, Philippe; Destoumieux-Garzón, Delphine; Bachère, Evelyne
2011-01-01
Big defensin is an antimicrobial peptide composed of a highly hydrophobic N-terminal region and a cationic C-terminal region containing six cysteine residues involved in three internal disulfide bridges. While big defensin sequences have been reported in various mollusk species, few studies have been devoted to their sequence diversity, gene organization and their expression in response to microbial infections. Using the high-throughput Digital Gene Expression approach, we have identified in Crassostrea gigas oysters several sequences coding for big defensins induced in response to a Vibrio infection. We showed that the oyster big defensin family is composed of three members (named Cg-BigDef1, Cg-BigDef2 and Cg-BigDef3) that are encoded by distinct genomic sequences. All Cg-BigDefs contain a hydrophobic N-terminal domain and a cationic C-terminal domain that resembles vertebrate β-defensins. Both domains are encoded by separate exons. We found that big defensins form a group predominantly present in mollusks and closer to vertebrate defensins than to invertebrate and fungi CSαβ-containing defensins. Moreover, we showed that Cg-BigDefs are expressed in oyster hemocytes only and follow different patterns of gene expression. While Cg-BigDef3 is non-regulated, both Cg-BigDef1 and Cg-BigDef2 transcripts are strongly induced in response to bacterial challenge. Induction was dependent on pathogen associated molecular patterns but not damage-dependent. The inducibility of Cg-BigDef1 was confirmed by HPLC and mass spectrometry, since ions with a molecular mass compatible with mature Cg-BigDef1 (10.7 kDa) were present in immune-challenged oysters only. From our biochemical data, native Cg-BigDef1 would result from the elimination of a prepropeptide sequence and the cyclization of the resulting N-terminal glutamine residue into a pyroglutamic acid. We provide here the first report showing that big defensins form a family of antimicrobial peptides diverse not only in terms of sequences but also in terms of genomic organization and regulation of gene expression.
Rosa, Rafael D.; Santini, Adrien; Fievet, Julie; Bulet, Philippe; Destoumieux-Garzón, Delphine; Bachère, Evelyne
2011-01-01
Background Big defensin is an antimicrobial peptide composed of a highly hydrophobic N-terminal region and a cationic C-terminal region containing six cysteine residues involved in three internal disulfide bridges. While big defensin sequences have been reported in various mollusk species, few studies have been devoted to their sequence diversity, gene organization and their expression in response to microbial infections. Findings Using the high-throughput Digital Gene Expression approach, we have identified in Crassostrea gigas oysters several sequences coding for big defensins induced in response to a Vibrio infection. We showed that the oyster big defensin family is composed of three members (named Cg-BigDef1, Cg-BigDef2 and Cg-BigDef3) that are encoded by distinct genomic sequences. All Cg-BigDefs contain a hydrophobic N-terminal domain and a cationic C-terminal domain that resembles vertebrate β-defensins. Both domains are encoded by separate exons. We found that big defensins form a group predominantly present in mollusks and closer to vertebrate defensins than to invertebrate and fungi CSαβ-containing defensins. Moreover, we showed that Cg-BigDefs are expressed in oyster hemocytes only and follow different patterns of gene expression. While Cg-BigDef3 is non-regulated, both Cg-BigDef1 and Cg-BigDef2 transcripts are strongly induced in response to bacterial challenge. Induction was dependent on pathogen associated molecular patterns but not damage-dependent. The inducibility of Cg-BigDef1 was confirmed by HPLC and mass spectrometry, since ions with a molecular mass compatible with mature Cg-BigDef1 (10.7 kDa) were present in immune-challenged oysters only. From our biochemical data, native Cg-BigDef1 would result from the elimination of a prepropeptide sequence and the cyclization of the resulting N-terminal glutamine residue into a pyroglutamic acid. Conclusions We provide here the first report showing that big defensins form a family of antimicrobial peptides diverse not only in terms of sequences but also in terms of genomic organization and regulation of gene expression. PMID:21980497
The effect of big endothelin-1 in the proximal tubule of the rat kidney
Beara-Lasić, Lada; Knotek, Mladen; Čejvan, Kenan; Jakšić, Ozren; Lasić, Zoran; Skorić, Boško; Brkljačić, Vera; Banfić, Hrvoje
1997-01-01
An obligatory step in the biosynthesis of endothelin-1 (ET-1) is the conversion of its inactive precursor, big ET-1, into the mature form by the action of specific, phosphoramidon-sensitive, endothelin converting enzyme(s) (ECE). Disparate effects of big ET-1 and ET-1 on renal tubule function suggest that big ET-1 might directly influence renal tubule function. Therefore, the role of the enzymatic conversion of big ET-1 into ET-1 in eliciting the functional response (generation of 1,2-diacylglycerol) to big ET-1 was studied in the rat proximal tubules. In renal cortical slices incubated with big ET-1, pretreatment with phosphoramidon (an ECE inhibitor) reduced tissue immunoreactive ET-1 to a level similar to that of cortical tissue not exposed to big ET-1. This confirms the presence and effectiveness of ECE inhibition by phosphoramidon. In freshly isolated proximal tubule cells, big ET-1 stimulated the generation of 1,2-diacylglycerol (DAG) in a time- and dose-dependent manner. Neither phosphoramidon nor chymostatin, a chymase inhibitor, influenced the generation of DAG evoked by big ET-1. Big ET-1-dependent synthesis of DAG was found in the brush-border membrane. It was unaffected by BQ123, an ETA receptor antagonist, but was blocked by bosentan, an ETA,B-nonselective endothelin receptor antagonist. These results suggest that the proximal tubule is a site for the direct effect of big ET-1 in the rat kidney. The effect of big ET-1 is confined to the brush-border membrane of the proximal tubule, which may be the site of big ET-1-sensitive receptors. PMID:9051300
Walker, Mirella; Vetter, Thomas
2016-04-01
General, spontaneous evaluations of strangers based on their faces have been shown to reflect judgments of these persons' intention and ability to harm. These evaluations can be mapped onto a 2D space defined by the dimensions trustworthiness (intention) and dominance (ability). Here we go beyond general evaluations and focus on more specific personality judgments derived from the Big Two and Big Five personality concepts. In particular, we investigate whether Big Two/Big Five personality judgments can be mapped onto the 2D space defined by the dimensions trustworthiness and dominance. Results indicate that judgments of the Big Two personality dimensions almost perfectly map onto the 2D space. In contrast, at least 3 of the Big Five dimensions (i.e., neuroticism, extraversion, and conscientiousness) go beyond the 2D space, indicating that additional dimensions are necessary to describe more specific face-based personality judgments accurately. Building on this evidence, we model the Big Two/Big Five personality dimensions in real facial photographs. Results from 2 validation studies show that the Big Two/Big Five are perceived reliably across different samples of faces and participants. Moreover, results reveal that participants differentiate reliably between the different Big Two/Big Five dimensions. Importantly, this high level of agreement and differentiation in personality judgments from faces likely creates a subjective reality which may have serious consequences for those being perceived-notably, these consequences ensue because the subjective reality is socially shared, irrespective of the judgments' validity. The methodological approach introduced here might prove useful in various psychological disciplines. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
The Value of Big Endothelin-1 in the Assessment of the Severity of Coronary Artery Calcification.
Wang, Fang; Li, Tiewei; Cong, Xiangfeng; Hou, Zhihui; Lu, Bin; Zhou, Zhou; Chen, Xi
2018-01-01
Progression of coronary artery calcification (CAC) was significantly associated with all-cause mortality, and a high coronary artery calcium score (CACS) portends a particularly high risk of cardiovascular events. But how often one should rescan is still an unanswered question. Preliminary screening by testing a circulating biomarker may be an alternative before a repeat computed tomography (CT) scan. The aim of this study was to investigate the value of big endothelin-1 (bigET-1), the precursor of endothelin-1 (ET-1), in predicting the severity of CAC. A total of 428 consecutive patients who underwent coronary computed tomography angiography (CCTA) due to chest pain in Fuwai Hospital were included in the study. The clinical characteristics, CACS, and laboratory data were collected, and plasma bigET-1 was detected by enzyme-linked immunosorbent assay (ELISA). BigET-1 was positively correlated with the CACS (r = .232, P < .001), and the prevalence of CACS >400 increased significantly in the highest bigET-1 tertile compared with the lowest tertile. Multivariate analysis showed that bigET-1 was an independent predictor of the presence of CACS >400 (odds ratio [OR] = 1.721, 95% confidence interval [CI], 1.002-2.956, P = .049). The receiver operating characteristic (ROC) curve analysis showed that the optimal cutoff value of bigET-1 for predicting CACS >400 was 0.38 pmol/L, with a sensitivity of 59% and specificity of 68% (area under curve [AUC] = 0.65, 95% CI, 0.58-0.72, P < .001). The present study demonstrated that circulating bigET-1 is valuable in the assessment of the severity of CAC.
Association of Big Endothelin-1 with Coronary Artery Calcification.
Qing, Ping; Li, Xiao-Lin; Zhang, Yan; Li, Yi-Lin; Xu, Rui-Xia; Guo, Yuan-Lin; Li, Sha; Wu, Na-Qiong; Li, Jian-Jun
2015-01-01
Coronary artery calcification (CAC) is clinically considered one of the important predictors of atherosclerosis. Several studies have confirmed that endothelin-1 (ET-1) plays an important role in the process of atherosclerosis formation. The aim of this study was to investigate whether big ET-1 is associated with CAC. A total of 510 consecutively admitted patients from February 2011 to May 2012 in Fu Wai Hospital were analyzed. All patients had received coronary computed tomography angiography and were then divided into two groups based on the results of the coronary artery calcium score (CACS). The clinical characteristics, including traditional and calcification-related risk factors, were collected and the plasma big ET-1 level was measured by ELISA. Patients with CAC had a significantly elevated big ET-1 level compared with those without CAC (0.5 ± 0.4 vs. 0.2 ± 0.2, P<0.001). In the multivariate analysis, big ET-1 (Tertile 2, HR = 3.09, 95% CI 1.66-5.74, P<0.001; Tertile 3, HR = 10.42, 95% CI 3.62-29.99, P<0.001) appeared as an independent predictive factor for the presence of CAC. There was a positive correlation of the big ET-1 level with CACS (r = 0.567, p<0.001). The 10-year Framingham risk (%) was higher in the group with CACS>0 and the highest tertile of big ET-1 (P<0.01). The area under the receiver operating characteristic curve for the big ET-1 level in predicting CAC was 0.83 (95% CI 0.79-0.87, p<0.001), with a sensitivity of 70.6% and specificity of 87.7%. These data demonstrate for the first time that the plasma big ET-1 level was a valuable independent predictor of CAC in our study.
ELM Meets Urban Big Data Analysis: Case Studies
Chen, Huajun; Chen, Jiaoyan
2016-01-01
In recent years, the rapid progress of urban computing has engendered big issues, which create both opportunities and challenges. The heterogeneity and sheer volume of data, and the big difference between the physical and virtual worlds, have made it difficult to quickly solve practical problems in urban computing. In this paper, we propose a general application framework of ELM for urban computing. We present several real case studies of the framework, such as smog-related health hazard prediction and optimal retail store placement. Experiments involving urban data in China show the efficiency, accuracy, and flexibility of our proposed framework. PMID:27656203
The big five factors of personality and their relationship to personality disorders.
Dyce, J A
1997-10-01
Articles examining the relationship between the Big Five factors of personality and personality disorders (PDs) are reviewed. A survey of these studies indicates that there is some agreement regarding the relationship between the Big Five and PDs. However, the level of agreement varies and may be a function of instrumentation, the method of report, or how data have been analyzed. Future research should consider the role of peer-ratings, examine the relationship between PDs and the first-order factors of the Big Five, consider dimensions over and above the Big Five as predictors of PDs.
Joseph, Bellal; Friese, Randall S; Sadoun, Moutamn; Aziz, Hassan; Kulvatunyou, Narong; Pandit, Viraj; Wynne, Julie; Tang, Andrew; O'Keeffe, Terence; Rhee, Peter
2014-04-01
It is becoming a standard practice that any "positive" identification of a radiographic intracranial injury requires transfer of the patient to a trauma center for observation and repeat head computed tomography (RHCT). The purpose of this study was to define guidelines (based on each patient's history, physical examination, and initial head CT findings) regarding which patients require a period of observation, RHCT, or neurosurgical consultation. In our retrospective cohort analysis, we reviewed the records of 3,803 blunt traumatic brain injury patients during a 4-year period. We classified patients according to neurologic examination results, use of intoxicants, anticoagulation status, and initial head CT findings. We then developed brain injury guidelines (BIG) based on the individual patient's need for observation or hospitalization, RHCT, or neurosurgical consultation. A total of 1,232 patients had an abnormal head CT finding. In the BIG 1 category, no patients worsened clinically or radiographically or required any intervention. The BIG 2 category had radiographic worsening in 2.6% of the patients. All patients who required neurosurgical intervention (13%) were in BIG 3. There was excellent agreement between assigned BIG and verified BIG (κ = 0.98). We have proposed the BIG based on the patient's history, neurologic examination, and findings of the initial head CT scan. These guidelines must be used as a supplement to good clinical examination while managing patients with traumatic brain injury. Prospective validation of the BIG is warranted before its widespread implementation. Epidemiologic study, level III.
Raman, Rajeev; Rajanikanth, V; Palaniappan, Raghavan U M; Lin, Yi-Pin; He, Hongxuan; McDonough, Sean P; Sharma, Yogendra; Chang, Yung-Fu
2010-12-29
Many bacterial surface exposed proteins mediate the host-pathogen interaction more effectively in the presence of Ca²+. Leptospiral immunoglobulin-like (Lig) proteins, LigA and LigB, are surface exposed proteins containing Bacterial immunoglobulin-like (Big) domains. The function of proteins which contain the Big fold is not known. Based on the possible similarities of the immunoglobulin and βγ-crystallin folds, we here explore the important question of whether Ca²+ binds to a Big domain, which would provide a novel functional role for the proteins containing the Big fold. We selected six individual Big domains for this study (three from the conserved part of LigA and LigB, denoted as Lig A3, Lig A4, and LigBCon5; two from the variable region of LigA, i.e., the 9(th) (Lig A9) and 10(th) repeats (Lig A10); and one from the variable region of LigB, i.e., LigBCen2). We have also studied the conserved region covering the three and six repeats (LigBCon1-3 and LigCon). All these proteins bind the calcium-mimic dye Stains-all. All the selected four domains bind Ca²+ with dissociation constants of 2-4 µM. Lig A9 and Lig A10 domains fold well with moderate thermal stability, have β-sheet conformation and form homodimers. Fluorescence spectra of the Big domains show a specific doublet (at 317 and 330 nm), probably due to Trp interaction with a Phe residue. Equilibrium unfolding of the selected Big domains is similar and follows a two-state model, suggesting the similarity in their fold. We demonstrate that the Lig proteins are Ca²+-binding proteins, with the Big domains harbouring the binding motif. We conclude that despite differences in sequence, a Big motif binds Ca²+. This work thus sets up a strong possibility for classifying the proteins containing Big domains as a novel family of Ca²+-binding proteins. Since the Big domain is a part of many proteins in the bacterial kingdom, we suggest a possible function for these proteins via Ca²+ binding.
Palaniappan, Raghavan U. M.; Lin, Yi-Pin; He, Hongxuan; McDonough, Sean P.; Sharma, Yogendra; Chang, Yung-Fu
2010-01-01
Background Many bacterial surface exposed proteins mediate the host-pathogen interaction more effectively in the presence of Ca2+. Leptospiral immunoglobulin-like (Lig) proteins, LigA and LigB, are surface exposed proteins containing Bacterial immunoglobulin-like (Big) domains. The function of proteins which contain the Big fold is not known. Based on the possible similarities of the immunoglobulin and βγ-crystallin folds, we here explore the important question of whether Ca2+ binds to a Big domain, which would provide a novel functional role for the proteins containing the Big fold. Principal Findings We selected six individual Big domains for this study (three from the conserved part of LigA and LigB, denoted as Lig A3, Lig A4, and LigBCon5; two from the variable region of LigA, i.e., the 9th (Lig A9) and 10th repeats (Lig A10); and one from the variable region of LigB, i.e., LigBCen2). We have also studied the conserved region covering the three and six repeats (LigBCon1-3 and LigCon). All these proteins bind the calcium-mimic dye Stains-all. All the selected four domains bind Ca2+ with dissociation constants of 2–4 µM. Lig A9 and Lig A10 domains fold well with moderate thermal stability, have β-sheet conformation and form homodimers. Fluorescence spectra of the Big domains show a specific doublet (at 317 and 330 nm), probably due to Trp interaction with a Phe residue. Equilibrium unfolding of the selected Big domains is similar and follows a two-state model, suggesting the similarity in their fold. Conclusions We demonstrate that the Lig proteins are Ca2+-binding proteins, with the Big domains harbouring the binding motif. We conclude that despite differences in sequence, a Big motif binds Ca2+. This work thus sets up a strong possibility for classifying the proteins containing Big domains as a novel family of Ca2+-binding proteins. Since the Big domain is a part of many proteins in the bacterial kingdom, we suggest a possible function for these proteins via Ca2+ binding. PMID:21206924
3D interlock design 100% PVDF piezoelectric to improve energy harvesting
NASA Astrophysics Data System (ADS)
Talbourdet, Anaëlle; Rault, François; Lemort, Guillaume; Cochrane, Cédric; Devaux, Eric; Campagne, Christine
2018-07-01
Piezoelectric textile structures based on 100% poly(vinylidene fluoride) (PVDF) were developed and characterised. Multifilaments of 246 tex were produced by melt spinning. The mechanical stretching during the process provides PVDF fibres with a piezoelectric β-phase content of up to 97%, as measured by FTIR experiments. Several studies have been carried out on piezoelectric PVDF-based flexible structures (films or textiles); the aim of this study is to investigate the differences between 2D and 3D woven fabrics made from 100% PVDF multifilament yarns optimised for piezoelectric crystalline phase content. The textile structures were poled after the weaving process, and a maximum output voltage of 2.3 V was observed on the 3D woven fabric under compression in DMA tests. Energy harvesting is optimised in a 3D interlock thanks to the stressing of the multifilaments through the thickness. The addition of a resistor makes it possible to measure an energy of 10.5 μJ.m‑2 over 10 compression cycles of 5 s each.
NASA Astrophysics Data System (ADS)
Sheikhan, Mansour; Abbasnezhad Arabi, Mahdi; Gharavian, Davood
2015-10-01
Artificial neural networks are efficient models in pattern recognition applications, but their performance is dependent on employing suitable structure and connection weights. This study used a hybrid method for obtaining the optimal weight set and architecture of a recurrent neural emotion classifier based on gravitational search algorithm (GSA) and its binary version (BGSA), respectively. By considering the features of speech signal that were related to prosody, voice quality, and spectrum, a rich feature set was constructed. To select more efficient features, a fast feature selection method was employed. The performance of the proposed hybrid GSA-BGSA method was compared with similar hybrid methods based on particle swarm optimisation (PSO) algorithm and its binary version, PSO and discrete firefly algorithm, and hybrid of error back-propagation and genetic algorithm that were used for optimisation. Experimental tests on Berlin emotional database demonstrated the superior performance of the proposed method using a lighter network structure.
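The gravitational search algorithm (GSA) used above treats candidate solutions as masses that attract each other in proportion to their fitness, so better solutions pull the population towards them. The following compact Python sketch runs a simplified GSA on a toy sphere function; the parameter values are arbitrary, and this is not the authors' emotion-classifier setup (nor the binary variant used for architecture selection).

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(X):                      # lower is better (toy sphere function)
    return np.sum(X ** 2, axis=1)

n_agents, dim, n_iter, G0, eps = 20, 5, 200, 100.0, 1e-12
X = rng.uniform(-5.0, 5.0, (n_agents, dim))   # agent positions
V = np.zeros_like(X)                           # agent velocities

for t in range(n_iter):
    f = fitness(X)
    best, worst = f.min(), f.max()
    m = (f - worst) / (best - worst + eps)     # relative mass: best agent -> 1
    M = m / (m.sum() + eps)                    # normalised masses
    G = G0 * np.exp(-20.0 * t / n_iter)        # decaying gravitational constant

    A = np.zeros_like(X)                       # accelerations
    for i in range(n_agents):
        diff = X - X[i]
        dist = np.linalg.norm(diff, axis=1) + eps
        coef = G * M / dist                    # the agent's own mass cancels out
        A[i] = (rng.random(n_agents)[:, None] * coef[:, None] * diff).sum(axis=0)

    V = rng.random(X.shape) * V + A            # stochastic velocity update
    X = X + V

print("best solution found:", X[fitness(X).argmin()])
```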
Mediani, Ahmed; Abas, Faridah; Khatib, Alfi; Tan, Chin Ping
2013-08-29
The aim of the study was to analyze the influence of oven thermal processing of Cosmos caudatus on the total polyphenolic content (TPC) and antioxidant capacity (DPPH) of two different solvent extracts (80% methanol, and 80% ethanol). Sonication was used to extract bioactive compounds from this herb. The results showed that the optimised conditions for the oven drying method for 80% methanol and 80% ethanol were 44.5 °C for 4 h with an IC₅₀ of 0.045 mg/mL and 43.12 °C for 4.05 h with an IC₅₀ of 0.055 mg/mL, respectively. The predicted values for TPC under the optimised conditions for 80% methanol and 80% ethanol were 16.5 and 15.8 mg GAE/100 g DW, respectively. The results obtained from this study demonstrate that Cosmos caudatus can be used as a potential source of antioxidants for food and medicinal applications.
Optimisation of SOA-REAMs for hybrid DWDM-TDMA PON applications.
Naughton, Alan; Antony, Cleitus; Ossieur, Peter; Porto, Stefano; Talli, Giuseppe; Townsend, Paul D
2011-12-12
We demonstrate how loss-optimised, gain-saturated SOA-REAM based reflective modulators can reduce the burst to burst power variations due to differential access loss in the upstream path in carrier distributed passive optical networks by 18 dB compared to fixed linear gain modulators. We also show that the loss optimised device has a high tolerance to input power variations and can operate in deep saturation with minimal patterning penalties. Finally, we demonstrate that an optimised device can operate across the C-Band and also over a transmission distance of 80 km. © 2011 Optical Society of America
Lin, Sisi; Zhou, Chun; Neufeld, Edward; Wang, Yu-Hua; Xu, Suo-Wen; Lu, Liang; Wang, Ying; Liu, Zhi-Ping; Li, Dong; Li, Cuixian; Chen, Shaorui; Le, Kang; Huang, Heqing; Liu, Peiqing; Moss, Joel; Vaughan, Martha; Shen, Xiaoyan
2013-01-01
Objective Cell surface localization and intracellular trafficking of ATP-binding cassette transporter A-1 (ABCA1) are essential for its function. However, the regulation of these activities is still largely unknown. Brefeldin A (BFA), an uncompetitive inhibitor of brefeldin A-inhibited guanine nucleotide-exchange proteins (BIGs), disturbs the intracellular distribution of ABCA1 and thus inhibits cholesterol efflux. This study aimed to define the possible roles of BIGs in regulating ABCA1 trafficking and cholesterol efflux, and further to explore the potential mechanism. Methods and Results By vesicle immunoprecipitation, we found that BIG1 was associated with ABCA1 in vesicle preparations from rat liver. BIG1 depletion reduced surface ABCA1 on HepG2 cells and inhibited cholesterol release by 60%. In contrast, BIG1 over-expression increased surface ABCA1 and cholesterol secretion. With partial restoration of BIG1 through over-expression in BIG1-depleted cells, surface ABCA1 was also restored. Biotinylation and glutathione cleavage revealed that BIG1 siRNA dramatically decreased the internalization and recycling of ABCA1. This novel function of BIG1 was dependent on the guanine nucleotide-exchange activity and was achieved through activation of ADP-ribosylation factor 1 (ARF1). Conclusions BIG1, through its ability to activate ARF1, regulates cell surface levels and function of ABCA1, indicating a transcription-independent mechanism for controlling ABCA1 action. PMID:23220274
MacBean, Natasha; Maignan, Fabienne; Bacour, Cédric; Lewis, Philip; Peylin, Philippe; Guanter, Luis; Köhler, Philipp; Gómez-Dans, Jose; Disney, Mathias
2018-01-31
Accurate terrestrial biosphere model (TBM) simulations of gross carbon uptake (gross primary productivity - GPP) are essential for reliable future terrestrial carbon sink projections. However, uncertainties in TBM GPP estimates remain. Newly available satellite-derived sun-induced chlorophyll fluorescence (SIF) data offer a promising direction for addressing this issue by constraining regional-to-global scale modelled GPP. Here, we use monthly 0.5° GOME-2 SIF data from 2007 to 2011 to optimise GPP parameters of the ORCHIDEE TBM. The optimisation reduces GPP magnitude across all vegetation types except C4 plants. Global mean annual GPP therefore decreases from 194 ± 57 PgC yr-1 to 166 ± 10 PgC yr-1, bringing the model more in line with an up-scaled flux tower estimate of 133 PgC yr-1. The strongest reductions in GPP are seen in boreal forests: the result is a shift in the global GPP distribution, with a ~50% increase in the tropical to boreal productivity ratio. The optimisation resulted in a greater reduction in GPP than similar ORCHIDEE parameter optimisation studies using satellite-derived NDVI from MODIS and eddy covariance measurements of net CO2 fluxes from the FLUXNET network. Our study shows that SIF data will be instrumental in constraining TBM GPP estimates, with a consequent improvement in global carbon cycle projections.
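Stripped of all model detail, the optimisation described above is a least-squares adjustment of model parameters so that simulated GPP matches a SIF-derived constraint. The sketch below fits a single scaling parameter of a toy seasonal model to synthetic observations; it involves neither the ORCHIDEE model nor the GOME-2 data and is only meant to show the shape of the calculation.

```python
import numpy as np
from scipy.optimize import least_squares

months = np.arange(12)
phenology = np.sin(np.pi * months / 11.0)        # toy unitless seasonal cycle

def modelled_gpp(params):
    gpp_max = params[0]                          # the one parameter being optimised
    return gpp_max * phenology

rng = np.random.default_rng(42)
sif_gpp = 7.0 * phenology + rng.normal(0.0, 0.3, 12)   # synthetic SIF-derived "observations"

def residuals(params):
    return modelled_gpp(params) - sif_gpp

prior = np.array([10.0])                         # deliberately overestimating prior
result = least_squares(residuals, prior)
print("optimised gpp_max:", result.x[0])         # pulled down towards the observations
```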
Badham, George E; Dos Santos, Scott J; Lloyd, Lucinda Ba; Holdstock, Judy M; Whiteley, Mark S
2018-06-01
Background In previous in vitro and ex vivo studies, we have shown increased thermal spread can be achieved with radiofrequency-induced thermotherapy when using a low power and slower, discontinuous pullback. We aimed to determine the clinical success rate of radiofrequency-induced thermotherapy using this optimised protocol for the treatment of superficial venous reflux in truncal veins. Methods Sixty-three patients were treated with radiofrequency-induced thermotherapy using the optimised protocol and were followed up after one year (mean 16.3 months). Thirty-five patients returned for audit, giving a response rate of 56%. Duplex ultrasonography was employed to check for truncal reflux and compared to initial scans. Results In the 35 patients studied, there were 48 legs, with 64 truncal veins treated by radiofrequency-induced thermotherapy (34 great saphenous, 15 small saphenous and 15 anterior accessory saphenous veins). One year post-treatment, complete closure of all previously refluxing truncal veins was demonstrated on ultrasound, giving a success rate of 100%. Conclusions Using a previously reported optimised, low power/slow pullback radiofrequency-induced thermotherapy protocol, we have shown it is possible to achieve a 100% ablation at one year. This compares favourably with results reported at one year post-procedure using the high power/fast pullback protocols that are currently recommended for this device.
Ceberio, Josu; Calvo, Borja; Mendiburu, Alexander; Lozano, Jose A
2018-02-15
In the last decade, many works in combinatorial optimisation have shown that, due to the advances in multi-objective optimisation, the algorithms from this field could be used for solving single-objective problems as well. In this sense, a number of papers have proposed multi-objectivising single-objective problems in order to use multi-objective algorithms in their optimisation. In this article, we follow up this idea by presenting a methodology for multi-objectivising combinatorial optimisation problems based on elementary landscape decompositions of their objective function. Under this framework, each of the elementary landscapes obtained from the decomposition is considered as an independent objective function to optimise. In order to illustrate this general methodology, we consider four problems from different domains: the quadratic assignment problem and the linear ordering problem (permutation domain), the 0-1 unconstrained quadratic optimisation problem (binary domain), and the frequency assignment problem (integer domain). We implemented two widely known multi-objective algorithms, NSGA-II and SPEA2, and compared their performance with that of a single-objective GA. The experiments conducted on a large benchmark of instances of the four problems show that the multi-objective algorithms clearly outperform the single-objective approaches. Furthermore, a discussion on the results suggests that the multi-objective space generated by this decomposition enhances the exploration ability, thus permitting NSGA-II and SPEA2 to obtain better results in the majority of the tested instances.
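The multi-objectivisation idea above can be made concrete with a toy sketch: a single objective is split into additive components, each component is treated as its own objective, and solutions are compared by Pareto dominance instead of by the scalar sum. The additive split below is only a stand-in for the elementary landscape decomposition used in the article, and the archive-based random search stands in for NSGA-II/SPEA2.

```python
import numpy as np

rng = np.random.default_rng(3)

def components(x):
    # Toy decomposition of a pseudo-boolean objective (to be minimised) into two
    # parts whose sum is the original single objective.
    f1 = np.sum((x[::2] - 1) ** 2)    # contribution of even positions
    f2 = np.sum(x[1::2] ** 2)         # contribution of odd positions
    return np.array([f1, f2])

def dominates(a, b):
    return bool(np.all(a <= b) and np.any(a < b))

# Keep an archive of mutually non-dominated bit strings found by random sampling.
archive = []
for _ in range(500):
    x = rng.integers(0, 2, 10)
    fx = components(x)
    if any(dominates(f, fx) for _, f in archive):
        continue
    archive = [(y, f) for y, f in archive if not dominates(fx, f)] + [(x, fx)]

best = min(archive, key=lambda item: item[1].sum())   # best on the original sum
print("non-dominated points:", len(archive), "| best original objective:", best[1].sum())
```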
Big Data, Big Problems: A Healthcare Perspective.
Househ, Mowafa S; Aldosari, Bakheet; Alanazi, Abdullah; Kushniruk, Andre W; Borycki, Elizabeth M
2017-01-01
Much has been written on the benefits of big data for healthcare, such as improving patient outcomes, public health surveillance, and healthcare policy decisions. Over the past five years, Big Data, and the data sciences field in general, has been hyped as the "Holy Grail" for the healthcare industry, promising a more efficient healthcare system and improved healthcare outcomes. However, more recently, healthcare researchers are exposing the potentially harmful effects Big Data can have on patient care, associating it with increased medical costs, patient mortality, and misguided decision making by clinicians and healthcare policy makers. In this paper, we review the current Big Data trends with a specific focus on the inadvertent negative impacts that Big Data could have on healthcare in general and on patient and clinical care specifically. Our study results show that although Big Data is built up to be the "Holy Grail" for healthcare, small data techniques using traditional statistical methods are, in many cases, more accurate and can lead to better healthcare outcomes than Big Data methods. In sum, Big Data for healthcare may cause more problems for the healthcare industry than solutions, and in short, when it comes to the use of data in healthcare, "size isn't everything."
Marengo, Emilio; Robotti, Elisa; Gennaro, Maria Carla; Bertetto, Mariella
2003-03-01
The optimisation of the formulation of a commercial bubble bath was performed by chemometric analysis of Panel Test results. A first Panel Test was performed to choose the best essence among the four proposed to the consumers; the best essence chosen was used in the revised commercial bubble bath. Afterwards, the effect of changing the amount of four components of the bubble bath (the primary surfactant, the essence, the hydratant and the colouring agent) was studied by a fractional factorial design. The segmentation of the bubble bath market was performed by a second Panel Test, in which the consumers were requested to evaluate the samples coming from the experimental design. The results were then treated by Principal Component Analysis. The market had two segments: people preferring a product with a rich formulation and people preferring a poor product. The final target, i.e. the optimisation of the formulation for each segment, was achieved by calculating regression models relating the subjective evaluations given by the Panel to the compositions of the samples. The regression models made it possible to identify the best formulations for the two segments of the market.
Chatzistergos, Panagiotis E; Naemi, Roozbeh; Healy, Aoife; Gerth, Peter; Chockalingam, Nachiappan
2017-08-01
Current selection of cushioning materials for therapeutic footwear and orthoses is based on empirical and anecdotal evidence. The aim of this investigation is to assess the biomechanical properties of carefully selected cushioning materials and to establish the basis for patient-specific material optimisation. For this purpose, bespoke cushioning materials with qualitatively similar mechanical behaviour but different stiffness were produced. Healthy volunteers were asked to stand and walk on materials with varying stiffness and their capacity for pressure reduction was assessed. Mechanical testing using a surrogate heel model was employed to investigate the effect of loading on optimum stiffness. Results indicated that optimising the stiffness of cushioning materials improved pressure reduction during standing and walking by at least 16 and 19% respectively. Moreover, the optimum stiffness was strongly correlated to body mass (BM) and body mass index (BMI), with stiffer materials needed in the case of people with higher BM or BMI. Mechanical testing confirmed that optimum stiffness increases with the magnitude of compressive loading. For the first time, this study provides quantitative data to support the importance of stiffness optimisation in cushioning materials and sets the basis for methods to inform optimum material selection in the clinic.
Ashrafi, Parivash; Sun, Yi; Davey, Neil; Adams, Roderick G; Wilkinson, Simon C; Moss, Gary Patrick
2018-03-01
The aim of this study was to investigate how to improve predictions from Gaussian Process models by optimising the model hyperparameters. Optimisation methods, including Grid Search, Conjugate Gradient, Random Search, Evolutionary Algorithm and Hyper-prior, were evaluated and applied to previously published data. Data sets were also altered in a structured manner to reduce their size, which retained the range, or 'chemical space', of the key descriptors to assess the effect of the data range on model quality. The Hyper-prior Smoothbox kernel results in the best models for the majority of data sets, and these exhibited significantly better performance than benchmark quantitative structure-permeability relationship (QSPR) models. When the data sets were systematically reduced in size, the different optimisation methods generally retained their statistical quality, whereas benchmark QSPR models performed poorly. The design of the data set, and possibly also the approach to validation of the model, is critical in the development of improved models. The size of the data set, if carefully controlled, was not generally a significant factor for these models, and models of excellent statistical quality could be produced from substantially smaller data sets. © 2018 Royal Pharmaceutical Society.
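The following sketch illustrates one of the simpler strategies named above, Grid Search, using scikit-learn's Gaussian Process regressor; the data, kernel family and grid values are assumptions for illustration and are unrelated to the permeability data sets in the study.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(60, 2))                   # illustrative descriptors
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(60)    # illustrative response

best = (None, -np.inf)
for length_scale in [0.1, 0.5, 1.0, 2.0]:
    for noise in [1e-3, 1e-2, 1e-1]:
        kernel = RBF(length_scale=length_scale) + WhiteKernel(noise_level=noise)
        # optimizer=None keeps the hyperparameters fixed at the grid values
        gp = GaussianProcessRegressor(kernel=kernel, optimizer=None, normalize_y=True)
        score = cross_val_score(gp, X, y, cv=5).mean()  # R^2 by default
        if score > best[1]:
            best = ((length_scale, noise), score)
print("best (length_scale, noise):", best[0], "cross-validated R^2:", round(best[1], 3))
```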
A new effective operator for the hybrid algorithm for solving global optimisation problems
NASA Astrophysics Data System (ADS)
Duc, Le Anh; Li, Kenli; Nguyen, Tien Trong; Yen, Vu Minh; Truong, Tung Khac
2018-04-01
Hybrid algorithms have recently been used to solve complex single-objective optimisation problems. The ultimate goal is to find an optimised global solution by using these algorithms. Based on the existing algorithms (HP_CRO, PSO, RCCRO), this study proposes a new hybrid algorithm called MPC (Mean-PSO-CRO), which utilises a new Mean-Search Operator. By employing this new operator, the proposed algorithm improves the search ability on areas of the solution space that the operators of previous algorithms do not explore. Specifically, the Mean-Search Operator helps find better solutions than those obtained by the other algorithms. Moreover, the authors propose two parameters for balancing between local and global search, and among the various types of local search. In addition, three versions of this operator, which use different constraints, are introduced. The experimental results on 23 benchmark functions, which are used in previous works, show that our framework can find better optimal or close-to-optimal solutions with faster convergence speed for most of the benchmark functions, especially the high-dimensional ones. Thus, the proposed algorithm is more effective in solving single-objective optimisation problems than the other existing algorithms.
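A hedged sketch of a "mean-based" search move of the kind described above: a new candidate is sampled around the centroid of the current best solutions, pulling the search towards regions that individual perturbation operators might miss. The weights, step size and benchmark function are assumptions, not the exact MPC operator.

```python
import numpy as np

def sphere(x):                      # standard benchmark objective (illustrative)
    return float(np.sum(x ** 2))

rng = np.random.default_rng(1)
pop = rng.uniform(-5, 5, size=(30, 10))
for _ in range(200):
    fitness = np.apply_along_axis(sphere, 1, pop)
    elite = pop[np.argsort(fitness)[:5]]             # best 5 solutions
    # mean-based move: centroid of the elite plus a small perturbation
    mean_candidate = elite.mean(axis=0) + rng.normal(0, 0.1, size=10)
    worst = np.argmax(fitness)
    if sphere(mean_candidate) < fitness[worst]:      # replace worst if better
        pop[worst] = mean_candidate
print("best objective:", np.min(np.apply_along_axis(sphere, 1, pop)))
```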
NASA Astrophysics Data System (ADS)
Ighravwe, D. E.; Oke, S. A.; Adebiyi, K. A.
2016-06-01
The growing interest in research on technicians' workloads is probably associated with the recent surge in competition, prompted by unprecedented technological developments that trigger changes in customer tastes and preferences for industrial goods. In the quest for business improvement, this intense worldwide competition has stimulated theories and practical frameworks that seek to optimise workplace performance. In line with this drive, the present paper proposes an optimisation model that considers technicians' reliability and complements the factory information obtained. The information used comes from technicians' productivity and earned values, combined within a multi-objective modelling approach. Since technicians are expected to carry out routine and stochastic maintenance work, we consider these workloads as constraints. The influence of training, fatigue and experiential knowledge of technicians on workload management was considered. These workloads were combined with maintenance policy in optimising reliability, productivity and earned values using the goal programming approach. Practical datasets were utilised in studying the applicability of the proposed model in practice. It was observed that our model was able to generate information that practising maintenance engineers can apply in making more informed decisions on technicians' management.
Hybrid multicore/vectorisation technique applied to the elastic wave equation on a staggered grid
NASA Astrophysics Data System (ADS)
Titarenko, Sofya; Hildyard, Mark
2017-07-01
In modern physics it has become common to find the solution of a problem by numerically solving a set of PDEs. Whether solving them on a finite difference grid or by a finite element approach, the main calculations are often applied to a stencil structure. In the last decade it has become usual to work with so-called big data problems, where calculations are very heavy and accelerators and modern architectures are widely used. Although CPU and GPU clusters are often used to solve such problems, parallelisation of any calculation ideally starts from single-processor optimisation. Unfortunately, it is impossible to vectorise a stencil-structured loop with high-level instructions. In this paper we suggest a new approach to rearranging the data structure which makes it possible to apply high-level vectorisation instructions to a stencil loop and which results in significant acceleration. The suggested method allows further acceleration if shared-memory APIs are used. We show the effectiveness of the method by applying it to an elastic wave propagation problem on a finite difference grid. We have chosen Intel architecture for the test problem and OpenMP (Open Multi-Processing) since they are extensively used in many applications.
Boosting the FM-Index on the GPU: Effective Techniques to Mitigate Random Memory Access.
Chacón, Alejandro; Marco-Sola, Santiago; Espinosa, Antonio; Ribeca, Paolo; Moure, Juan Carlos
2015-01-01
The recent advent of high-throughput sequencing machines producing big amounts of short reads has boosted the interest in efficient string searching techniques. As of today, many mainstream sequence alignment software tools rely on a special data structure, called the FM-index, which allows for fast exact searches in large genomic references. However, such searches translate into a pseudo-random memory access pattern, thus making memory access the limiting factor of all computation-efficient implementations, both on CPUs and GPUs. Here, we show that several strategies can be put in place to remove the memory bottleneck on the GPU: more compact indexes can be implemented by having more threads work cooperatively on larger memory blocks, and a k-step FM-index can be used to further reduce the number of memory accesses. The combination of those and other optimisations yields an implementation that is able to process about two Gbases of queries per second on our test platform, being about 8 × faster than a comparable multi-core CPU version, and about 3 × to 5 × faster than the FM-index implementation on the GPU provided by the recently announced Nvidia NVBIO bioinformatics library.
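For orientation, the sketch below shows a plain CPU-side FM-index backward search (no k-step index, bit-packing or GPU optimisations): it counts pattern occurrences using the Burrows-Wheeler transform, a C table of cumulative symbol counts and a naive rank (Occ) function. It is an independent illustration of the data structure, not the authors' implementation.

```python
def bwt(text):
    # Burrows-Wheeler transform via sorted rotations (fine for short examples)
    text += "$"
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(r[-1] for r in rotations)

def fm_count(text, pattern):
    L = bwt(text)
    C, total = {}, 0
    for s in sorted(set(L)):        # C[s]: first row starting with symbol s
        C[s] = total
        total += L.count(s)
    occ = lambda s, i: L[:i].count(s)   # naive rank; real indexes precompute this
    lo, hi = 0, len(L)
    for s in reversed(pattern):         # backward search, one symbol at a time
        if s not in C:
            return 0
        lo = C[s] + occ(s, lo)
        hi = C[s] + occ(s, hi)
        if lo >= hi:
            return 0
    return hi - lo

print(fm_count("ACGTACGTACGA", "ACG"))  # -> 3 occurrences
```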
ERIC Educational Resources Information Center
Mooij, Ton
2004-01-01
Specific combinations of educational and ICT conditions including computer use may optimise learning processes, particularly for learners at risk. This position paper asks which curricular, instructional, and ICT characteristics can be expected to optimise learning processes and outcomes, and how to best achieve this optimization. A theoretical…
Big-data-based edge biomarkers: study on dynamical drug sensitivity and resistance in individuals.
Zeng, Tao; Zhang, Wanwei; Yu, Xiangtian; Liu, Xiaoping; Li, Meiyi; Chen, Luonan
2016-07-01
Big-data-based edge biomarker is a new concept to characterize disease features based on biomedical big data in a dynamical and network manner, which also provides alternative strategies to indicate disease status in single samples. This article gives a comprehensive review on big-data-based edge biomarkers for complex diseases in an individual patient, which are defined as biomarkers based on network information and high-dimensional data. Specifically, we first introduce the sources and structures of biomedical big data publicly accessible for edge biomarker and disease study. We show that biomedical big data are typically 'small-sample size in high-dimension space', i.e. small samples but with high dimensions on features (e.g. omics data) for each individual, in contrast to traditional big data in many other fields characterized as 'large-sample size in low-dimension space', i.e. big samples but with low dimensions on features. Then, we demonstrate the concept, model and algorithm for edge biomarkers and further big-data-based edge biomarkers. Unlike conventional biomarkers, edge biomarkers, e.g. module biomarkers in module network rewiring-analysis, are able to predict the disease state by learning differential associations between molecules rather than differential expressions of molecules during disease progression or treatment in individual patients. In particular, in contrast to using the information of the common molecules or edges (i.e. molecule-pairs) across a population in traditional biomarkers, including network and edge biomarkers, big-data-based edge biomarkers are specific for each individual and thus can accurately evaluate the disease state by considering the individual heterogeneity. Therefore, the measurement of big data in a high-dimensional space is required not only in the learning process but also in the diagnosing or predicting process for the tested individual. Finally, we provide a case study on analyzing the temporal expression data from a malaria vaccine trial by big-data-based edge biomarkers from module network rewiring-analysis. The illustrative results show that the identified module biomarkers can accurately distinguish vaccines with or without protection and outperform previously reported gene signatures in terms of effectiveness and efficiency. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
Automated model optimisation using the Cylc workflow engine (Cyclops v1.0)
NASA Astrophysics Data System (ADS)
Gorman, Richard M.; Oliver, Hilary J.
2018-06-01
Most geophysical models include many parameters that are not fully determined by theory, and can be tuned to improve the model's agreement with available data. We might attempt to automate this tuning process in an objective way by employing an optimisation algorithm to find the set of parameters that minimises a cost function derived from comparing model outputs with measurements. A number of algorithms are available for solving optimisation problems, in various programming languages, but interfacing such software to a complex geophysical model simulation presents certain challenges. To tackle this problem, we have developed an optimisation suite (Cyclops) based on the Cylc workflow engine that implements a wide selection of optimisation algorithms from the NLopt Python toolbox (Johnson, 2014). The Cyclops optimisation suite can be used to calibrate any modelling system that has itself been implemented as a (separate) Cylc model suite, provided it includes computation and output of the desired scalar cost function. A growing number of institutions are using Cylc to orchestrate complex distributed suites of interdependent cycling tasks within their operational forecast systems, and in such cases application of the optimisation suite is particularly straightforward. As a test case, we applied Cyclops to calibrate a global implementation of the WAVEWATCH III (v4.18) third-generation spectral wave model, forced by ERA-Interim input fields. This was calibrated over a 1-year period (1997), before applying the calibrated model to a full (1979-2016) wave hindcast. The chosen error metric was the spatial average of the root mean square error of hindcast significant wave height compared with collocated altimeter records. We describe the results of a calibration in which up to 19 parameters were optimised.
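As an independent illustration of the NLopt Python toolbox that the suite builds on (Johnson, 2014), the snippet below minimises a scalar cost with a derivative-free algorithm; in Cyclops the cost would come from a full model run (for example, wave-height RMSE against altimeter data), whereas here a simple analytic stand-in is used.

```python
import nlopt

def cost(x, grad):
    # grad is unused by derivative-free algorithms such as Nelder-Mead;
    # stand-in for a model-vs-observation error metric
    return (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2

opt = nlopt.opt(nlopt.LN_NELDERMEAD, 2)   # 2 tunable parameters
opt.set_min_objective(cost)
opt.set_lower_bounds([-5.0, -5.0])
opt.set_upper_bounds([5.0, 5.0])
opt.set_xtol_rel(1e-6)
x_opt = opt.optimize([0.0, 0.0])          # initial parameter guess
print("optimised parameters:", x_opt, "cost:", opt.last_optimum_value())
```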
Bourne, Richard S; Shulman, Rob; Tomlin, Mark; Borthwick, Mark; Berry, Will; Mills, Gary H
2017-04-01
To identify between and within profession-rater reliability of clinical impact grading for common critical care prescribing error and optimisation cases. To identify representative clinical impact grades for each individual case. Electronic questionnaire. 5 UK NHS Trusts. 30 Critical care healthcare professionals (doctors, pharmacists and nurses). Participants graded severity of clinical impact (5-point categorical scale) of 50 error and 55 optimisation cases. Case between and within profession-rater reliability and modal clinical impact grading. Between and within profession rater reliability analysis used linear mixed model and intraclass correlation, respectively. The majority of error and optimisation cases (both 76%) had a modal clinical severity grade of moderate or higher. Error cases: doctors graded clinical impact significantly lower than pharmacists (-0.25; P < 0.001) and nurses (-0.53; P < 0.001), with nurses significantly higher than pharmacists (0.28; P < 0.001). Optimisation cases: doctors graded clinical impact significantly lower than nurses and pharmacists (-0.39 and -0.5; P < 0.001, respectively). Within profession reliability grading was excellent for pharmacists (0.88 and 0.89; P < 0.001) and doctors (0.79 and 0.83; P < 0.001) but only fair to good for nurses (0.43 and 0.74; P < 0.001), for optimisation and error cases, respectively. Representative clinical impact grades for over 100 common prescribing error and optimisation cases are reported for potential clinical practice and research application. The between professional variability highlights the importance of multidisciplinary perspectives in assessment of medication error and optimisation cases in clinical practice and research. © The Author 2017. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
Zipfel, Stephan; Wild, Beate; Groß, Gaby; Friederich, Hans-Christoph; Teufel, Martin; Schellberg, Dieter; Giel, Katrin E; de Zwaan, Martina; Dinkel, Andreas; Herpertz, Stephan; Burgmer, Markus; Löwe, Bernd; Tagay, Sefik; von Wietersheim, Jörn; Zeeck, Almut; Schade-Brittinger, Carmen; Schauenburg, Henning; Herzog, Wolfgang
2014-01-11
Psychotherapy is the treatment of choice for patients with anorexia nervosa, although evidence of efficacy is weak. The Anorexia Nervosa Treatment of OutPatients (ANTOP) study aimed to assess the efficacy and safety of two manual-based outpatient treatments for anorexia nervosa--focal psychodynamic therapy and enhanced cognitive behaviour therapy--versus optimised treatment as usual. The ANTOP study is a multicentre, randomised controlled efficacy trial in adults with anorexia nervosa. We recruited patients from ten university hospitals in Germany. Participants were randomly allocated to 10 months of treatment with either focal psychodynamic therapy, enhanced cognitive behaviour therapy, or optimised treatment as usual (including outpatient psychotherapy and structured care from a family doctor). The primary outcome was weight gain, measured as increased body-mass index (BMI) at the end of treatment. A key secondary outcome was rate of recovery (based on a combination of weight gain and eating disorder-specific psychopathology). Analysis was by intention to treat. This trial is registered at http://isrctn.org, number ISRCTN72809357. Of 727 adults screened for inclusion, 242 underwent randomisation: 80 to focal psychodynamic therapy, 80 to enhanced cognitive behaviour therapy, and 82 to optimised treatment as usual. At the end of treatment, 54 patients (22%) were lost to follow-up, and at 12-month follow-up a total of 73 (30%) had dropped out. At the end of treatment, BMI had increased in all study groups (focal psychodynamic therapy 0·73 kg/m(2), enhanced cognitive behaviour therapy 0·93 kg/m(2), optimised treatment as usual 0·69 kg/m(2)); no differences were noted between groups (mean difference between focal psychodynamic therapy and enhanced cognitive behaviour therapy -0·45, 95% CI -0·96 to 0·07; focal psychodynamic therapy vs optimised treatment as usual -0·14, -0·68 to 0·39; enhanced cognitive behaviour therapy vs optimised treatment as usual -0·30, -0·22 to 0·83). At 12-month follow-up, the mean gain in BMI had risen further (1·64 kg/m(2), 1·30 kg/m(2), and 1·22 kg/m(2), respectively), but no differences between groups were recorded (0·10, -0·56 to 0·76; 0·25, -0·45 to 0·95; 0·15, -0·54 to 0·83, respectively). No serious adverse events attributable to weight loss or trial participation were recorded. Optimised treatment as usual, combining psychotherapy and structured care from a family doctor, should be regarded as solid baseline treatment for adult outpatients with anorexia nervosa. Focal psychodynamic therapy proved advantageous in terms of recovery at 12-month follow-up, and enhanced cognitive behaviour therapy was more effective with respect to speed of weight gain and improvements in eating disorder psychopathology. Long-term outcome data will be helpful to further adapt and improve these novel manual-based treatment approaches. German Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung, BMBF), German Eating Disorders Diagnostic and Treatment Network (EDNET). Copyright © 2014 Elsevier Ltd. All rights reserved.
Distributed optimisation problem with communication delay and external disturbance
NASA Astrophysics Data System (ADS)
Tran, Ngoc-Tu; Xiao, Jiang-Wen; Wang, Yan-Wu; Yang, Wu
2017-12-01
This paper investigates the distributed optimisation problem for multi-agent systems (MASs) with the simultaneous presence of external disturbance and communication delay. To solve this problem, a two-step design scheme is introduced. In the first step, based on the internal model principle, an internal model term is constructed to compensate for the disturbance asymptotically. In the second step, a distributed optimisation algorithm is designed to solve the distributed optimisation problem for MASs with the simultaneous presence of disturbance and communication delay. Moreover, in the proposed algorithm, each agent interacts with its neighbours through the connected topology, and delay occurs during the information exchange. By utilising a Lyapunov-Krasovskii functional, delay-dependent conditions are derived for both slowly and fast time-varying delays, respectively, to ensure the convergence of the algorithm to the optimal solution of the optimisation problem. Several numerical simulation examples are provided to illustrate the effectiveness of the theoretical results.
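The sketch below is a generic consensus-plus-gradient scheme for distributed optimisation, with the delay and disturbance terms omitted: each agent holds a private quadratic cost and repeatedly averages with its neighbours while descending its own gradient, so all states converge near the minimiser of the summed cost. The topology, gains and costs are illustrative assumptions, not the algorithm of the paper.

```python
import numpy as np

a = np.array([1.0, 3.0, -2.0, 4.0])          # private targets: f_i(x) = (x - a_i)^2
A = np.array([[0, 1, 0, 1],                  # ring communication topology
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
x = np.zeros(4)                              # one scalar decision per agent
for k in range(500):
    step = 1.0 / (k + 10)                    # diminishing gradient step
    consensus = A @ x / A.sum(axis=1) - x    # pull towards neighbour average
    grad = 2.0 * (x - a)                     # local gradients
    x = x + 0.5 * consensus - step * grad
print("agent states:", np.round(x, 3), "minimiser of summed cost:", a.mean())
```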
NASA Astrophysics Data System (ADS)
Xiao, Long; Liu, Xinggao; Ma, Liang; Zhang, Zeyin
2018-03-01
Dynamic optimisation problems with characteristic times, which arise widely in many areas, are among the frontiers and hotspots of dynamic optimisation research. This paper considers a class of dynamic optimisation problems with constraints that depend on interior points, either fixed or variable, and presents a novel direct pseudospectral method using Legendre-Gauss (LG) collocation points for solving these problems. The formula for the state at the terminal time of each subdomain is derived, expressed as a linear combination of the state at the LG points in the subdomain, so as to avoid a complex nonlinear integral. The sensitivities of the state at the collocation points with respect to the variable characteristic times are derived to improve the efficiency of the method. Three well-known characteristic-time dynamic optimisation problems are solved and compared in detail against methods reported in the literature. The research results show the effectiveness of the proposed method.
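A small, hedged illustration of the Legendre-Gauss machinery such a method rests on: numpy's leggauss returns the LG nodes and weights on [-1, 1], and quadrature with them is exact for low-order polynomials, which is what allows the state at a subdomain's end to be written as a linear combination of values at interior LG points.

```python
import numpy as np

nodes, weights = np.polynomial.legendre.leggauss(5)   # 5 LG collocation points
poly = lambda t: 3 * t**4 + t**2 + 1                   # degree-4 test integrand
quad = float(np.sum(weights * poly(nodes)))            # LG quadrature on [-1, 1]
exact = 2 * (3 / 5 + 1 / 3 + 1)                        # analytic integral
print(nodes)                                           # interior points only, no +/-1
print(quad, exact)                                     # agree to machine precision
```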
Medicines optimisation: priorities and challenges.
Kaufman, Gerri
2016-03-23
Medicines optimisation is promoted in a guideline published in 2015 by the National Institute for Health and Care Excellence. Four guiding principles underpin medicines optimisation: aim to understand the patient's experience; ensure evidence-based choice of medicines; ensure medicines use is as safe as possible; and make medicines optimisation part of routine practice. Understanding the patient experience is important to improve adherence to medication regimens. This involves communication, shared decision making and respect for patient preferences. Evidence-based choice of medicines is important for clinical and cost effectiveness. Systems and processes for the reporting of medicines-related safety incidents have to be improved if medicines use is to be as safe as possible. Ensuring safe practice in medicines use when patients are transferred between organisations, and managing the complexities of polypharmacy are imperative. A medicines use review can help to ensure that medicines optimisation forms part of routine practice.
Zhou, Bing-Yang; Guo, Yuan-Lin; Wu, Na-Qiong; Zhu, Cheng-Gang; Gao, Ying; Qing, Ping; Li, Xiao-Lin; Wang, Yao; Dong, Qian; Liu, Geng; Xu, Rui Xia; Cui, Chuan-Jue; Sun, Jing; Li, Jian-Jun
2017-03-01
Big endothelin-1 (ET-1) has been proposed as a novel prognostic indicator of acute coronary syndrome, while its role in predicting cardiovascular outcomes in patients with stable coronary artery disease (CAD) is unclear. A total of 3154 consecutive patients with stable CAD were enrolled and followed up for 24 months. The outcomes included all-cause death, non-fatal myocardial infarction, stroke and unplanned revascularization (percutaneous coronary intervention and coronary artery bypass grafting). Baseline big ET-1 was measured using a sandwich enzyme immunoassay method. Cox proportional hazard regression analysis and Kaplan-Meier analysis were used to evaluate the prognostic value of big ET-1 for cardiovascular outcomes. One hundred and eighty-nine (5.99%) events occurred during follow-up. Patients were divided into two groups: events group (n=189) and non-events group (n=2965). The results indicated that the events group had higher levels of big ET-1 compared to the non-events group. Multivariable Cox proportional hazard regression analysis showed that big ET-1 was positively and significantly correlated with clinical outcomes (Hazard Ratio: 1.656, 95% confidence interval: 1.099-2.496, p=0.016). Additionally, the Kaplan-Meier analysis revealed that patients with higher big ET-1 presented lower event-free survival (p=0.016). The present study is the first to suggest that big ET-1 is an independent risk marker of cardiovascular outcomes in patients with stable CAD; more studies are needed to confirm our findings. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Integrative methods for analyzing big data in precision medicine.
Gligorijević, Vladimir; Malod-Dognin, Noël; Pržulj, Nataša
2016-03-01
We provide an overview of recent developments in big data analyses in the context of precision medicine and health informatics. With the advance in technologies capturing molecular and medical data, we entered the area of "Big Data" in biology and medicine. These data offer many opportunities to advance precision medicine. We outline key challenges in precision medicine and present recent advances in data integration-based methods to uncover personalized information from big data produced by various omics studies. We survey recent integrative methods for disease subtyping, biomarkers discovery, and drug repurposing, and list the tools that are available to domain scientists. Given the ever-growing nature of these big data, we highlight key issues that big data integration methods will face. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Rammstedt, Beatrice; Goldberg, Lewis R; Borg, Ingwer
2010-02-01
Previous findings suggest that the Big-Five factor structure is not guaranteed in samples with lower educational levels. The present study investigates the Big-Five factor structure in two large samples representative of the German adult population. In both samples, the Big-Five factor structure emerged only in a blurry way at lower educational levels, whereas for highly educated persons it emerged with textbook-like clarity. Because well-educated persons are most comparable to the usual subjects of psychological research, it might be asked if the Big Five are limited to such persons. Our data contradict this conclusion. There are strong individual differences in acquiescence response tendencies among less highly educated persons. After controlling for this bias the Big-Five model holds at all educational levels.
How to Use TCM Informatics to Study Traditional Chinese Medicine in Big Data Age.
Shi, Cheng; Gong, Qing-Yue; Zhou, Jinhai
2017-01-01
This paper introduces the characteristics and complexity of traditional Chinese medicine (TCM) data, considers that modern big data processing technology has brought new opportunities for the research of TCM, and gives some ideas and methods to apply big data technology in TCM.
ERIC Educational Resources Information Center
De Feyter, Tim; Caers, Ralf; Vigna, Claudia; Berings, Dries
2012-01-01
The main purpose of this study is to unravel the impact of the Big Five personality factors on academic performance. We propose a theoretical model with conditional indirect effects of the Big Five personality factors on academic performance through their impact upon academic motivation. To clarify the mixed results of previous studies concerning…
Jessica S. Lucas; Susan C. Loeb; Patrick G. R. Jodice
2015-01-01
Although several studies have described roost use by Rafinesque's big-eared bats (Corynorhinus rafinesquii), few studies have examined roost selection. We examined roost use and selection by Rafinesque's big-eared bat at the tree, stand, and landscape scales during the maternity season in pristine old-growth habitat in the Coastal Plain of South...
Application and Exploration of Big Data Mining in Clinical Medicine
Zhang, Yue; Guo, Shu-Li; Han, Li-Na; Li, Tie-Ling
2016-01-01
Objective: To review theories and technologies of big data mining and their application in clinical medicine. Data Sources: Literatures published in English or Chinese regarding theories and technologies of big data mining and the concrete applications of data mining technology in clinical medicine were obtained from PubMed and Chinese Hospital Knowledge Database from 1975 to 2015. Study Selection: Original articles regarding big data mining theory/technology and big data mining's application in the medical field were selected. Results: This review characterized the basic theories and technologies of big data mining including fuzzy theory, rough set theory, cloud theory, Dempster–Shafer theory, artificial neural network, genetic algorithm, inductive learning theory, Bayesian network, decision tree, pattern recognition, high-performance computing, and statistical analysis. The application of big data mining in clinical medicine was analyzed in the fields of disease risk assessment, clinical decision support, prediction of disease development, guidance of rational use of drugs, medical management, and evidence-based medicine. Conclusion: Big data mining has the potential to play an important role in clinical medicine. PMID:26960378
Multi-Optimisation Consensus Clustering
NASA Astrophysics Data System (ADS)
Li, Jian; Swift, Stephen; Liu, Xiaohui
Ensemble Clustering has been developed to provide an alternative way of obtaining more stable and accurate clustering results. It aims to avoid the biases of individual clustering algorithms. However, it is still a challenge to develop an efficient and robust method for Ensemble Clustering. Based on an existing ensemble clustering method, Consensus Clustering (CC), this paper introduces an advanced Consensus Clustering algorithm called Multi-Optimisation Consensus Clustering (MOCC), which utilises an optimised Agreement Separation criterion and a Multi-Optimisation framework to improve the performance of CC. Fifteen different data sets are used for evaluating the performance of MOCC. The results reveal that MOCC can generate more accurate clustering results than the original CC algorithm.
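A hedged sketch of the agreement idea at the heart of consensus clustering: an ensemble of base clusterings is summarised in a co-association (agreement) matrix counting how often two points share a cluster, and a final clustering is derived from that matrix. KMeans restarts stand in for the ensemble and average-linkage hierarchical clustering for the consensus step; neither is the specific MOCC optimisation.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(2, 0.3, (30, 2))])

n_runs, n = 20, len(X)
agreement = np.zeros((n, n))
for seed in range(n_runs):                           # ensemble of base clusterings
    labels = KMeans(n_clusters=2, n_init=1, random_state=seed).fit_predict(X)
    agreement += labels[:, None] == labels[None, :]
agreement /= n_runs

# consensus step: hierarchical clustering on distance = 1 - agreement
Z = linkage(squareform(1 - agreement, checks=False), method="average")
consensus = fcluster(Z, t=2, criterion="maxclust")
print(np.bincount(consensus)[1:])                    # sizes of the consensus clusters
```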
NASA Astrophysics Data System (ADS)
Chu, Xiaoyu; Zhang, Jingrui; Lu, Shan; Zhang, Yao; Sun, Yue
2016-11-01
This paper presents a trajectory planning algorithm to optimise the collision avoidance of a chasing spacecraft operating in an ultra-close proximity to a failed satellite. The complex configuration and the tumbling motion of the failed satellite are considered. The two-spacecraft rendezvous dynamics are formulated based on the target body frame, and the collision avoidance constraints are detailed, particularly concerning the uncertainties. An optimisation solution of the approaching problem is generated using the Gauss pseudospectral method. A closed-loop control is used to track the optimised trajectory. Numerical results are provided to demonstrate the effectiveness of the proposed algorithms.
On some properties of bone functional adaptation phenomenon useful in mechanical design.
Nowak, Michał
2010-01-01
The paper discusses some unique properties of trabecular bone functional adaptation phenomenon, useful in mechanical design. On the basis of the biological process observations and the principle of constant strain energy density on the surface of the structure, the generic structural optimisation system has been developed. Such approach allows fulfilling mechanical theorem for the stiffest design, comprising the optimisations of size, shape and topology, using the concepts known from biomechanical studies. Also the biomimetic solution of multiple load problems is presented.
Ontogeny of Big endothelin-1 effects in newborn piglet pulmonary vasculature.
Liben, S; Stewart, D J; De Marte, J; Perreault, T
1993-07-01
Endothelin-1 (ET-1), a 21-amino acid peptide produced by endothelial cells, results from the cleavage of preproendothelin, generating Big ET-1, which is then cleaved by the ET-converting enzyme (ECE) to form ET-1. Big ET-1, like ET-1, is released by endothelial cells. Big ET-1 is equipotent to ET-1 in vivo, whereas its vasoactive effects are less in vitro. It has been suggested that the effects of Big ET-1 depend on its conversion to ET-1. ET-1 has potent vasoactive effects in the newborn pig pulmonary circulation, however, the effects of Big ET-1 remain unknown. Therefore, we studied the effects of Big ET-1 in isolated perfused lungs from 1- and 7-day-old piglets using the ECE inhibitor, phosphoramidon, and the ETA receptor antagonist, BQ-123Na. The rate of conversion of Big ET-1 to ET-1 was measured using radioimmunoassay. ET-1 (10(-13) to 10(-8) M) produced an initial vasodilation, followed by a dose-dependent potent vasoconstriction (P < 0.001), which was equal at both ages. Big ET-1 (10(-11) to 10(-8) M) also produced a dose-dependent vasoconstriction (P < 0.001). The constrictor effects of Big ET-1 and ET-1 were similar in the 1-day-old, whereas in the 7-day-old, the constrictor effect of Big ET-1 was less than that of ET-1 (P < 0.017).(ABSTRACT TRUNCATED AT 250 WORDS)
Flint, Lorraine E.; Brandt, Justin; Christensen, Allen H.; Flint, Alan L.; Hevesi, Joseph A.; Jachens, Robert; Kulongoski, Justin T.; Martin, Peter; Sneed, Michelle
2012-01-01
The Big Bear Valley, located in the San Bernardino Mountains of southern California, has increased in population in recent years. Most of the water supply for the area is pumped from the alluvial deposits that form the Big Bear Valley groundwater basin. This study was conducted to better understand the thickness and structure of the groundwater basin in order to estimate the quantity and distribution of natural recharge to Big Bear Valley. A gravity survey was used to estimate the thickness of the alluvial deposits that form the Big Bear Valley groundwater basin. This determined that the alluvial deposits reach a maximum thickness of 1,500 to 2,000 feet beneath the center of Big Bear Lake and the area between Big Bear and Baldwin Lakes, and decrease to less than 500 feet thick beneath the eastern end of Big Bear Lake. Interferometric Synthetic Aperture Radar (InSAR) was used to measure pumping-induced land subsidence and to locate structures, such as faults, that could affect groundwater movement. The measurements indicated small amounts of land deformation (uplift and subsidence) in the area between Big Bear Lake and Baldwin Lake, the area near the city of Big Bear Lake, and the area near Sugarloaf, California. Both the gravity and InSAR measurements indicated the possible presence of subsurface faults in subbasins between Big Bear and Baldwin Lakes, but additional data are required for confirmation. The distribution and quantity of groundwater recharge in the area were evaluated by using a regional water-balance model (Basin Characterization Model, or BCM) and a daily rainfall-runoff model (INFILv3). The BCM calculated spatially distributed potential recharge in the study area of approximately 12,700 acre-feet per year (acre-ft/yr) of potential in-place recharge and 30,800 acre-ft/yr of potential runoff. Using the assumption that only 10 percent of the runoff becomes recharge, this approach indicated there is approximately 15,800 acre-ft/yr of total recharge in Big Bear Valley. The INFILv3 model was modified for this study to include a perched zone beneath the root zone to better simulate lateral seepage and recharge in the shallow subsurface in mountainous terrain. The climate input used in the INFILv3 model was developed by using daily climate data from 84 National Climatic Data Center stations and published Parameter Regression on Independent Slopes Model (PRISM) average monthly precipitation maps to match the drier average monthly precipitation measured in the Baldwin Lake drainage basin. This model resulted in a good representation of localized rain-shadow effects and calibrated well to measured lake volumes at Big Bear and Baldwin Lakes. The simulated average annual recharge was about 5,480 acre-ft/yr in the Big Bear study area, with about 2,800 acre-ft/yr in the Big Bear Lake surface-water drainage basin and about 2,680 acre-ft/yr in the Baldwin Lake surface-water drainage basin. One spring and eight wells were sampled and analyzed for chemical and isotopic data in 2005 and 2006 to determine if isotopic techniques could be used to assess the sources and ages of groundwater in the Big Bear Valley. This approach showed that the predominant source of recharge to the Big Bear Valley is winter precipitation falling on the surrounding mountains. 
The tritium and uncorrected carbon-14 ages of samples collected from wells for this study indicated that the groundwater basin contains water of different ages, ranging from modern to about 17,200 years old. The results of these investigations provide an understanding of the lateral and vertical extent of the groundwater basin, the spatial distribution of groundwater recharge, the processes responsible for the recharge, and the source and age of groundwater in the groundwater basin. Although the studies do not provide an understanding of the detailed water-bearing properties necessary to determine the groundwater availability of the basin, they do provide a framework for the future development of a groundwater model that would help to improve the understanding of the potential hydrologic effects of water-management alternatives in Big Bear Valley.
Lu, Jia-Yang; Cheung, Michael Lok-Man; Huang, Bao-Tian; Wu, Li-Li; Xie, Wen-Jia; Chen, Zhi-Jian; Li, De-Rui; Xie, Liang-Xi
2015-01-01
To assess the performance of a simple optimisation method for improving target coverage and organ-at-risk (OAR) sparing in intensity-modulated radiotherapy (IMRT) for cervical oesophageal cancer. For 20 selected patients, clinically acceptable original IMRT plans (Original plans) were created, and two optimisation methods were adopted to improve the plans: 1) a base dose function (BDF)-based method, in which the treatment plans were re-optimised based on the original plans, and 2) a dose-controlling structure (DCS)-based method, in which the original plans were re-optimised by assigning additional constraints for hot and cold spots. The Original, BDF-based and DCS-based plans were compared with regard to target dose homogeneity, conformity, OAR sparing, planning time and monitor units (MUs). Dosimetric verifications were performed and delivery times were recorded for the BDF-based and DCS-based plans. The BDF-based plans provided significantly superior dose homogeneity and conformity compared with both the DCS-based and Original plans. The BDF-based method further reduced the doses delivered to the OARs by approximately 1-3%. The re-optimisation time was reduced by approximately 28%, but the MUs and delivery time were slightly increased. All verification tests were passed and no significant differences were found. The BDF-based method for the optimisation of IMRT for cervical oesophageal cancer can achieve significantly better dose distributions with better planning efficiency at the expense of slightly more MUs.
NASA Astrophysics Data System (ADS)
Fung, Kenneth K. H.; Lewis, Geraint F.; Wu, Xiaofeng
2017-04-01
A vast wealth of literature exists on the topic of rocket trajectory optimisation, particularly in the area of interplanetary trajectories due to its relevance today. Studies on optimising interstellar and intergalactic trajectories are usually performed in flat spacetime using an analytical approach, with very little focus on optimising interstellar trajectories in a general relativistic framework. This paper examines the use of low-acceleration rockets to reach galactic destinations in the least possible time, with a genetic algorithm being employed for the optimisation process. The fuel required for each journey was calculated for various types of propulsion systems to determine the viability of low-acceleration rockets to colonise the Milky Way. The results showed that to limit the amount of fuel carried on board, an antimatter propulsion system would likely be the minimum technological requirement to reach star systems tens of thousands of light years away. However, using a low-acceleration rocket would require several hundreds of thousands of years to reach these star systems, with minimal time dilation effects since maximum velocities only reached about 0.2 c . Such transit times are clearly impractical, and thus, any kind of colonisation using low acceleration rockets would be difficult. High accelerations, on the order of 1 g, are likely required to complete interstellar journeys within a reasonable time frame, though they may require prohibitively large amounts of fuel. So for now, it appears that humanity's ultimate goal of a galactic empire may only be possible at significantly higher accelerations, though the propulsion technology requirement for a journey that uses realistic amounts of fuel remains to be determined.
Díaz-Dinamarca, Diego A; Jerias, José I; Soto, Daniel A; Soto, Jorge A; Díaz, Natalia V; Leyton, Yessica Y; Villegas, Rodrigo A; Kalergis, Alexis M; Vásquez, Abel E
2018-03-01
Group B Streptococcus (GBS) is the leading cause of neonatal meningitis and a common pathogen in livestock and aquaculture industries around the world. Conjugate polysaccharide and protein-based vaccines are under development. The surface immunogenic protein (SIP) is conserved in all GBS serotypes and has been shown to be a good target for vaccine development. The expression of recombinant proteins in Escherichia coli cells has been shown to be useful in the development of vaccines, and protein purification is a factor affecting their immunogenicity. Response surface methodology (RSM) with a Box-Behnken design can optimise the performance of recombinant protein expression. However, the biological effect in mice immunised with an immunogenic protein that is optimised by RSM and purified by low-affinity chromatography is unknown. In this study, we used RSM to optimise the expression of the recombinant SIP (rSIP), and we evaluated the SIP-specific humoral response and the ability to decrease GBS colonisation of the vaginal tract in female mice. Ni-NTA chromatography showed that RSM increases the yield of rSIP expression, leading to a better purification process. This improvement in rSIP purification suggests better induction of the anti-SIP IgG immune response and a positive effect on decreasing GBS intravaginal colonisation. RSM applied to optimise the expression of recombinant proteins with immunogenic capacity is an interesting alternative in the preclinical evaluation of vaccines, which could improve their immune response.
Arzuman, Hafiza; Yusoff, Muhamad Saiful Bahri; Chit, Som Phong
2010-07-01
A cross-sectional descriptive study was conducted among Big Sib students to explore their perceptions of the educational environment at the School of Medical Sciences, Universiti Sains Malaysia (USM) and its weak areas using the Dundee Ready Educational Environment Measure (DREEM) inventory. The DREEM inventory is a validated global instrument for measuring educational environments in undergraduate medical and health professional education. The English version of the DREEM inventory was administered to all Year 2 Big Sib students (n = 67) at a regular Big Sib session. The purpose of the study as well as confidentiality and ethical issues were explained to the students before the questionnaire was administered. The response rate was 62.7% (42 out of 67 students). The overall DREEM score was 117.9/200 (SD 14.6). The DREEM indicated that the Big Sib students' perception of educational environment of the medical school was more positive than negative. Nevertheless, the study also revealed some problem areas within the educational environment. This pilot study revealed that Big Sib students perceived a positive learning environment at the School of Medical Sciences, USM. It also identified some low-scored areas that require further exploration to pinpoint the exact problems. The relatively small study population selected from a particular group of students was the major limitation of the study. This small sample size also means that the study findings cannot be generalised.
NASA Astrophysics Data System (ADS)
Asyirah, B. N.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.
2017-09-01
Plastic injection moulding is widely used in manufacturing a variety of parts. The injection moulding process parameters play an important role in the product's quality and productivity. There are many approaches to minimising warpage and shrinkage, such as artificial neural networks, genetic algorithms, glowworm swarm optimisation and hybrid approaches. In this paper, a systematic methodology for determining warpage and shrinkage in the injection moulding process, especially for thin-shell plastic parts, is presented. To identify the effects of the machining parameters on the warpage and shrinkage values, response surface methodology is applied. In this study, a part of an electronic night lamp is chosen as the model. Firstly, an experimental design was used to determine the effect of the injection parameters on warpage for different thickness values. The software used to analyse the warpage is Autodesk Moldflow Insight (AMI) 2012.
Tesfaye, Tamrat; Sithole, Bruce; Ramjugernath, Deresh; Ndlela, Luyanda
2018-02-01
Commercially processed, untreated chicken feathers are biologically hazardous due to the presence of blood-borne pathogens. Prior to valorisation, it is crucial that they are decontaminated to remove the microbial contamination. The present study focuses on evaluating the best technologies to decontaminate and pre-treat chicken feathers in order to make them suitable for valorisation. Waste chicken feathers were washed with three surfactants (sodium dodecyl sulphate, dimethyl dioctadecyl ammonium chloride, and polyoxyethylene (40) stearate) using statistically designed experiments. Process conditions were optimised using response surface methodology with a Box-Behnken experimental design. The data were compared with decontamination using an autoclave. Under optimised conditions, the microbial counts of the decontaminated and pre-treated chicken feathers were significantly reduced, making them safe for handling and use in valorisation applications. Copyright © 2017 Elsevier Ltd. All rights reserved.
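A generic sketch of the response-surface step mentioned above: a quadratic model is fitted by least squares to coded factor settings from a Box-Behnken-style design. The three factors and the synthetic response below are placeholders, not the study's washing data.

```python
import numpy as np

rng = np.random.default_rng(2)
# coded factor levels (-1, 0, +1) for 3 factors: 12 edge midpoints + 3 centre runs
X = np.array([[a, b, 0] for a in (-1, 1) for b in (-1, 1)] +
             [[a, 0, c] for a in (-1, 1) for c in (-1, 1)] +
             [[0, b, c] for b in (-1, 1) for c in (-1, 1)] +
             [[0, 0, 0]] * 3, dtype=float)
true = lambda x: 5 + 2 * x[0] - 1.5 * x[1] + x[0] ** 2 + 0.5 * x[1] * x[2]
y = np.array([true(x) for x in X]) + rng.normal(0, 0.05, len(X))  # synthetic response

def expand(x):
    # full quadratic model terms: 1, x_i, x_i^2, x_i * x_j
    x1, x2, x3 = x
    return [1, x1, x2, x3, x1*x1, x2*x2, x3*x3, x1*x2, x1*x3, x2*x3]

A = np.array([expand(x) for x in X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(coef, 2))     # estimated response-surface coefficients
```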
Biomass supply chain optimisation for Organosolv-based biorefineries.
Giarola, Sara; Patel, Mayank; Shah, Nilay
2014-05-01
This work aims at providing a Mixed Integer Linear Programming modelling framework to help define planning strategies for the development of sustainable biorefineries. The up-scaling of an Organosolv biorefinery was addressed via optimisation of the whole system economics. Three real world case studies were addressed to show the high-level flexibility and wide applicability of the tool to model different biomass typologies (i.e. forest fellings, cereal residues and energy crops) and supply strategies. Model outcomes have revealed how supply chain optimisation techniques could help shed light on the development of sustainable biorefineries. Feedstock quality, quantity, temporal and geographical availability are crucial to determine biorefinery location and the cost-efficient way to supply the feedstock to the plant. Storage costs are relevant for biorefineries based on cereal stubble, while wood supply chains present dominant pretreatment operations costs. Copyright © 2014 Elsevier Ltd. All rights reserved.
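A toy mixed-integer programme in the same spirit (hedged, not the paper's model): choose whether to build each candidate biorefinery site (binary decisions) and how much feedstock to ship from each supply region (continuous decisions) to meet demand at minimum capital-plus-transport cost. All numbers and the PuLP solver choice are illustrative.

```python
import pulp

sites, regions = ["siteA", "siteB"], ["north", "south"]
capital = {"siteA": 100.0, "siteB": 80.0}
transport = {("north", "siteA"): 2.0, ("north", "siteB"): 5.0,
             ("south", "siteA"): 4.0, ("south", "siteB"): 1.5}
supply = {"north": 60.0, "south": 50.0}
demand = 70.0

m = pulp.LpProblem("biorefinery_siting", pulp.LpMinimize)
build = pulp.LpVariable.dicts("build", sites, cat="Binary")
ship = pulp.LpVariable.dicts("ship", list(transport), lowBound=0)

# objective: capital cost of built sites plus transport cost of shipments
m += (pulp.lpSum(capital[s] * build[s] for s in sites)
      + pulp.lpSum(transport[k] * ship[k] for k in transport))
m += pulp.lpSum(ship[k] for k in transport) >= demand           # meet demand
for r in regions:
    m += pulp.lpSum(ship[(r, s)] for s in sites) <= supply[r]   # regional supply
for s in sites:
    m += pulp.lpSum(ship[(r, s)] for r in regions) <= demand * build[s]  # only built sites receive

m.solve(pulp.PULP_CBC_CMD(msg=False))
print({s: build[s].value() for s in sites},
      {k: ship[k].value() for k in transport})
```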
A target recognition method for maritime surveillance radars based on hybrid ensemble selection
NASA Astrophysics Data System (ADS)
Fan, Xueman; Hu, Shengliang; He, Jingbo
2017-11-01
In order to improve the generalisation ability of the maritime surveillance radar, a novel ensemble selection technique, termed Optimisation and Dynamic Selection (ODS), is proposed. During the optimisation phase, the non-dominated sorting genetic algorithm II for multi-objective optimisation is used to find the Pareto front, i.e. a set of ensembles of classifiers representing different tradeoffs between the classification error and diversity. During the dynamic selection phase, the meta-learning method is used to predict whether a candidate ensemble is competent enough to classify a query instance based on three different aspects, namely, feature space, decision space and the extent of consensus. The classification performance and time complexity of ODS are compared against nine other ensemble methods using a self-built full polarimetric high resolution range profile data-set. The experimental results clearly show the effectiveness of ODS. In addition, the influence of the selection of diversity measures is studied concurrently.
Optimisation of the mean boat velocity in rowing.
Rauter, G; Baumgartner, L; Denoth, J; Riener, R; Wolf, P
2012-01-01
In rowing, motor learning may be facilitated by augmented feedback that displays the ratio between actual mean boat velocity and maximal achievable mean boat velocity. To provide this ratio, the aim of this work was to develop and evaluate an algorithm calculating an individual maximal mean boat velocity. The algorithm optimised the horizontal oar movement under constraints such as the individual range of the horizontal oar displacement, individual timing of catch and release and an individual power-angle relation. Immersion and turning of the oar were simplified, and the seat movement of a professional rower was implemented. The feasibility of the algorithm, and of the associated ratio between actual boat velocity and optimised boat velocity, was confirmed by a study on four subjects: as expected, advanced rowing skills resulted in higher ratios, and the maximal mean boat velocity depended on the range of the horizontal oar displacement.
Single-cell Transcriptome Study as Big Data
Yu, Pingjian; Lin, Wei
2016-01-01
The rapid growth of single-cell RNA-seq studies (scRNA-seq) demands efficient data storage, processing, and analysis. Big-data technology provides a framework that facilitates the comprehensive discovery of biological signals from inter-institutional scRNA-seq datasets. The strategies to solve the stochastic and heterogeneous single-cell transcriptome signal are discussed in this article. After extensively reviewing the available big-data applications of next-generation sequencing (NGS)-based studies, we propose a workflow that accounts for the unique characteristics of scRNA-seq data and primary objectives of single-cell studies. PMID:26876720
The Relationship between the Big-Five Model of Personality and Self-Regulated Learning Strategies
ERIC Educational Resources Information Center
Bidjerano, Temi; Dai, David Yun
2007-01-01
The study examined the relationship between the big-five model of personality and the use of self-regulated learning strategies. Measures of self-regulated learning strategies and big-five personality traits were administered to a sample of undergraduate students. Results from canonical correlation analysis indicated an overlap between the…
ERIC Educational Resources Information Center
Wang, Yinying
2017-01-01
Despite abundant data and increasing data availability brought by technological advances, there has been very limited education policy studies that have capitalized on big data--characterized by large volume, wide variety, and high velocity. Drawing on the recent progress of using big data in public policy and computational social science…
Big Earth Data Initiative: Metadata Improvement: Case Studies
NASA Technical Reports Server (NTRS)
Kozimor, John; Habermann, Ted; Farley, John
2016-01-01
The Big Earth Data Initiative (BEDI) invests in standardizing and optimizing the collection, management and delivery of the U.S. Government's civil Earth observation data to improve discovery, access, use, and understanding of Earth observations by the broader user community. Complete and consistent standard metadata helps address all of these goals.
Big Data Analytics Solutions: The Implementation Challenges in the Financial Services Industry
ERIC Educational Resources Information Center
Ojo, Michael O.
2016-01-01
The challenges of Big Data (BD) and Big Data Analytics (BDA) have attracted disproportionately less attention than the overwhelmingly espoused benefits and game-changing promises. While many studies have examined BD challenges across multiple industry verticals, very few have focused on the challenges of implementing BDA solutions. Fewer of these…
Big Data access and infrastructure for modern biology: case studies in data repository utility.
Boles, Nathan C; Stone, Tyler; Bergeron, Charles; Kiehl, Thomas R
2017-01-01
Big Data is no longer solely the purview of big organizations with big resources. Today's routine tools and experimental methods can generate large slices of data. For example, high-throughput sequencing can quickly interrogate biological systems for the expression levels of thousands of different RNAs, examine epigenetic marks throughout the genome, and detect differences in the genomes of individuals. Multichannel electrophysiology platforms produce gigabytes of data in just a few minutes of recording. Imaging systems generate videos capturing biological behaviors over the course of days. Thus, any researcher now has access to a veritable wealth of data. However, the ability of any given researcher to utilize that data is limited by her/his own resources and skills for downloading, storing, and analyzing the data. In this paper, we examine the necessary resources required to engage Big Data, survey the state of modern data analysis pipelines, present a few data repository case studies, and touch on current institutions and programs supporting the work that relies on Big Data. © 2016 New York Academy of Sciences.
NASA Astrophysics Data System (ADS)
Ling, Eric
The big bang theory is a model of the universe which makes the striking prediction that the universe began a finite amount of time in the past at the so called "Big Bang singularity." We explore the physical and mathematical justification of this surprising result. After laying down the framework of the universe as a spacetime manifold, we combine physical observations with global symmetrical assumptions to deduce the FRW cosmological models which predict a big bang singularity. Next we prove a couple theorems due to Stephen Hawking which show that the big bang singularity exists even if one removes the global symmetrical assumptions. Lastly, we investigate the conditions one needs to impose on a spacetime if one wishes to avoid a singularity. The ideas and concepts used here to study spacetimes are similar to those used to study Riemannian manifolds, therefore we compare and contrast the two geometries throughout.
Fabrication of Organic Radar Absorbing Materials: A Report on the TIF Project
2005-05-01
thickness, permittivity and permeability. The ability to measure the permittivity and permeability is an essential requirement for designing an optimised...absorber. Good optimisation codes are also required in order to achieve the best possible absorber designs. In this report, the results from a...through measurement of their conductivity and permittivity at microwave frequencies. Methods were then developed for optimising the design of
Clinical research of traditional Chinese medicine in big data era.
Zhang, Junhua; Zhang, Boli
2014-09-01
With the advent of the big data era, our thinking, technology and methodology are being transformed. Data-intensive scientific discovery based on big data, named "The Fourth Paradigm," has become a new paradigm of scientific research. Along with the development and application of Internet information technology in the field of healthcare, individual health records, clinical data of diagnosis and treatment, and genomic data have been accumulated dramatically, which generates big data in the medical field for clinical research and assessment. With the support of big data, the defects and weaknesses of the conventional sampling-based clinical evaluation methodology may be overcome. Our research target shifts from "causality inference" to "correlativity analysis." This not only facilitates the evaluation of individualized treatment, disease prediction, prevention and prognosis, but is also suitable for the practice of preventive healthcare and symptom pattern differentiation for treatment in terms of traditional Chinese medicine (TCM), and for the post-marketing evaluation of Chinese patent medicines. To conduct clinical studies involving big data in the TCM domain, top-level design is needed and should be carried out in an orderly manner. Fundamental construction and innovation studies should be strengthened in the areas of data platform creation, data analysis technology, and the fostering and training of big data professionals.
Doubal, Fergus N; Ali, Myzoon; Batty, G David; Charidimou, Andreas; Eriksdotter, Maria; Hofmann-Apitius, Martin; Kim, Yun-Hee; Levine, Deborah A; Mead, Gillian; Mucke, Hermann A M; Ritchie, Craig W; Roberts, Charlotte J; Russ, Tom C; Stewart, Robert; Whiteley, William; Quinn, Terence J
2017-04-17
Traditional approaches to clinical research have, as yet, failed to provide effective treatments for vascular dementia (VaD). Novel approaches to the collation and synthesis of data may allow for time- and cost-efficient hypothesis generating and testing. These approaches may have particular utility in helping us understand and treat a complex condition such as VaD. We present an overview of new uses for existing data to progress VaD research. The overview is the result of consultation with various stakeholders, focused literature review and learning from the group's experience of successful approaches to data repurposing. In particular, we benefitted from the expert discussion and input of delegates at the 9th International Congress on Vascular Dementia (Ljubljana, 16-18th October 2015). We agreed on key areas that could be of relevance to VaD research: systematic review of existing studies; individual patient-level analyses of existing trials and cohorts; and linking electronic health record data to other datasets. We illustrated each theme with a case study of an existing project that has utilised this approach. There are many opportunities for the VaD research community to make better use of existing data. The volume of potentially available data is increasing and the opportunities for using these resources to progress the VaD research agenda are exciting. Of course, these approaches come with inherent limitations and biases, as bigger datasets are not necessarily better datasets, and maintaining rigour and critical analysis will be key to optimising data use.
Analysis of financing efficiency of big data industry in Guizhou province based on DEA models
NASA Astrophysics Data System (ADS)
Li, Chenggang; Pan, Kang; Luo, Cong
2018-03-01
Taking 20 listed enterprises in the big data industry of Guizhou province as samples, this paper uses the DEA method to evaluate the financing efficiency of the big data industry in Guizhou province. The results show that the pure technical efficiency of big data enterprises in Guizhou province is high, with a mean value of 0.925. The mean value of scale efficiency is 0.749 and the mean value of comprehensive efficiency is 0.693, so the comprehensive financing efficiency is low. Based on these results, this paper puts forward some policy recommendations to improve the financing efficiency of the big data industry in Guizhou.
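The abstract does not show the DEA formulation used; as a hedged illustration, an input-oriented CCR efficiency score of the kind such studies report can be computed with a small linear program. The firm data and the choice of inputs and outputs below are invented for the sketch and are not taken from the study.

```python
# Hypothetical input-oriented CCR DEA sketch (not the study's own code).
# Inputs X: (n_inputs, n_dmus); outputs Y: (n_outputs, n_dmus).
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU index o (envelopment form)."""
    m, n = X.shape          # inputs x DMUs
    s, _ = Y.shape          # outputs x DMUs
    # Decision vector z = [theta, lambda_1..lambda_n]; minimise theta.
    c = np.r_[1.0, np.zeros(n)]
    # X @ lam - theta * x_o <= 0  (inputs scaled down by theta)
    A_in = np.hstack([-X[:, [o]], X])
    b_in = np.zeros(m)
    # -Y @ lam <= -y_o  (outputs at least those of DMU o)
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    b_out = -Y[:, o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

# Toy data: 2 inputs (assets, staff) and 1 output (revenue) for 5 hypothetical firms.
X = np.array([[5.0, 8.0, 6.0, 9.0, 4.0],
              [20., 35., 25., 40., 18.]])
Y = np.array([[12., 18., 16., 19., 10.]])
scores = [ccr_efficiency(X, Y, o) for o in range(X.shape[1])]
print(np.round(scores, 3))   # 1.0 marks efficient firms
```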
Seeing the big picture in nursing: a source of human and professional pride.
Sørensen, Erik E; Hall, Elisabeth O C
2011-10-01
This article presents a discussion of the meaning of the phenomenon of seeing the big picture in nursing. Seeing the big picture is a frequent expression among Danish nurses. It is used when trying to understand a situation in its wider context. However, it has a rather imprecise meaning that might lead to misunderstandings. This paper draws on studies undertaken in the mid 1990s and the early 2000s, but with the current discussion developed in the context of contemporary nursing. Seeing the big picture indicates a desire to do good for patients and staff. This desire, expressed through saying 'I need to see the big picture', is discussed as a backbone of nursing and nursing leadership and a source of human and professional pride. There is, however, a dilemma if nurses overlook the needs of patients that require immediate action, or if a nurse leader does not intercept staff members in crisis. The pride oscillates between seeing the here-and-now and seeing the long-term in the big picture. We assumed seeing the big picture had to do with practical knowledge. Wonder and reasoning, however, brought us to virtues. Seeing the big picture, as mentioned among nursing leaders and clinical nurses, demonstrates human and professional pride. The study is useful in organizational, clinical and educational settings for updating policies for nursing, enlarging nurses' understanding of practice and training students in understanding nursing practice. © 2011 Blackwell Publishing Ltd.
Bisele, Maria; Bencsik, Martin; Lewis, Martin G C; Barnett, Cleveland T
2017-01-01
Assessment methods in human locomotion often involve the description of normalised graphical profiles and/or the extraction of discrete variables. Whilst useful, these approaches may not represent the full complexity of gait data. Multivariate statistical methods, such as Principal Component Analysis (PCA) and Discriminant Function Analysis (DFA), have been adopted since they have the potential to overcome these data handling issues. The aim of the current study was to develop and optimise a specific machine learning algorithm for processing human locomotion data. Twenty participants ran at a self-selected speed across a 15m runway in barefoot and shod conditions. Ground reaction forces (BW) and kinematics were measured at 1000 Hz and 100 Hz, respectively from which joint angles (°), joint moments (N.m.kg-1) and joint powers (W.kg-1) for the hip, knee and ankle joints were calculated in all three anatomical planes. Using PCA and DFA, power spectra of the kinematic and kinetic variables were used as a training database for the development of a machine learning algorithm. All possible combinations of 10 out of 20 participants were explored to find the iteration of individuals that would optimise the machine learning algorithm. The results showed that the algorithm was able to successfully predict whether a participant ran shod or barefoot in 93.5% of cases. To the authors' knowledge, this is the first study to optimise the development of a machine learning algorithm.
Bisele, Maria; Bencsik, Martin; Lewis, Martin G. C.
2017-01-01
Assessment methods in human locomotion often involve the description of normalised graphical profiles and/or the extraction of discrete variables. Whilst useful, these approaches may not represent the full complexity of gait data. Multivariate statistical methods, such as Principal Component Analysis (PCA) and Discriminant Function Analysis (DFA), have been adopted since they have the potential to overcome these data handling issues. The aim of the current study was to develop and optimise a specific machine learning algorithm for processing human locomotion data. Twenty participants ran at a self-selected speed across a 15m runway in barefoot and shod conditions. Ground reaction forces (BW) and kinematics were measured at 1000 Hz and 100 Hz, respectively from which joint angles (°), joint moments (N.m.kg-1) and joint powers (W.kg-1) for the hip, knee and ankle joints were calculated in all three anatomical planes. Using PCA and DFA, power spectra of the kinematic and kinetic variables were used as a training database for the development of a machine learning algorithm. All possible combinations of 10 out of 20 participants were explored to find the iteration of individuals that would optimise the machine learning algorithm. The results showed that the algorithm was able to successfully predict whether a participant ran shod or barefoot in 93.5% of cases. To the authors’ knowledge, this is the first study to optimise the development of a machine learning algorithm. PMID:28886059
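The PCA-plus-discriminant-function pipeline described in these two records maps naturally onto standard tooling. The sketch below uses synthetic "power spectra" and scikit-learn's PCA and linear discriminant analysis as stand-ins for the study's own implementation; the data, dimensions and class separation are invented, and the study's participant-wise subset search is omitted.

```python
# Minimal sketch of the PCA + discriminant-analysis idea described above,
# using synthetic spectra in place of the study's gait power spectra.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_class, n_features = 40, 200          # e.g. power-spectrum bins
barefoot = rng.normal(0.0, 1.0, (n_per_class, n_features))
shod = rng.normal(0.3, 1.0, (n_per_class, n_features))   # small mean shift
X = np.vstack([barefoot, shod])
y = np.r_[np.zeros(n_per_class), np.ones(n_per_class)]   # 0 = barefoot, 1 = shod

# PCA reduces the spectra to a few components; LDA acts as the discriminant function.
model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())
```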
Shao, Qingsong; Huang, Yuqiu; Zhou, Aicun; Guo, Haipeng; Zhang, Ailian; Wang, Yong
2014-05-01
Crocus sativus has been used as a traditional Chinese medicine for a long time. The volatile compounds of C. sativus appear biologically active and may act as antioxidants as well as anticonvulsants, antidepressants and antitumour agents. In order to obtain the highest possible yield of essential oils from C. sativus, response surface methodology was employed to optimise the conditions of supercritical fluid carbon dioxide extraction of the volatile compounds from C. sativus. Four factors were investigated: temperature, pressure, extraction time and carbon dioxide flow rate. Furthermore, the chemical compositions of the volatile compounds extracted by supercritical fluid extraction were compared with those obtained by hydro-distillation and Soxhlet extraction. The optimum extraction conditions were found to be: optimised temperature 44.9°C, pressure 34.9 MPa, extraction time 150.2 min and CO₂ flow rate 10.1 L h⁻¹. Under these conditions, the mean extraction yield was 10.94 g kg⁻¹. The volatile compounds extracted by supercritical fluid extraction and Soxhlet extraction contained a large amount of unsaturated fatty acids. Response surface methodology was successfully applied for supercritical fluid CO₂ extraction optimisation of the volatile compounds from C. sativus. The study showed that pressure and CO₂ flow rate had a significant effect on the volatile compound yield produced by supercritical fluid extraction. This study is beneficial for further research operating on a large scale. © 2013 Society of Chemical Industry.
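As a rough illustration of the response-surface idea used above, a second-order polynomial can be fitted to coded factor levels and its maximum located numerically. The two factors, design points and yield values below are fabricated for the sketch and are not the study's data.

```python
# Illustrative response-surface fit: a full quadratic model in two of the four
# factors (temperature, pressure), with made-up yield data, then a grid search
# for the predicted maximum. Not the study's data or code.
import numpy as np

# Coded factor levels (-1, 0, +1) and hypothetical extraction yields (g/kg).
T = np.array([-1, -1, 1, 1, 0, 0, 0, -1, 1])
P = np.array([-1, 1, -1, 1, 0, -1, 1, 0, 0])
y = np.array([8.2, 9.1, 9.0, 10.2, 10.9, 9.6, 10.4, 9.3, 10.0])

# Design matrix for y = b0 + b1*T + b2*P + b3*T^2 + b4*P^2 + b5*T*P
A = np.column_stack([np.ones_like(T), T, P, T**2, P**2, T * P])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Locate the predicted maximum on a fine grid of coded levels.
g = np.linspace(-1, 1, 201)
TT, PP = np.meshgrid(g, g)
pred = (coef[0] + coef[1]*TT + coef[2]*PP
        + coef[3]*TT**2 + coef[4]*PP**2 + coef[5]*TT*PP)
i = np.unravel_index(pred.argmax(), pred.shape)
print("coded optimum (T, P):", (round(TT[i], 2), round(PP[i], 2)),
      "predicted yield:", round(pred[i], 2))
```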
Escher, Graziela Bragueto; Santos, Jânio Sousa; Rosso, Neiva Deliberali; Marques, Mariza Boscacci; Azevedo, Luciana; do Carmo, Mariana Araújo Vieira; Daguer, Heitor; Molognoni, Luciano; Prado-Silva, Leonardo do; Sant'Ana, Anderson S; da Silva, Marcia Cristina; Granato, Daniel
2018-05-19
This study aimed to optimise the experimental conditions of extraction of the phytochemical compounds and functional properties of Centaurea cyanus petals. The following parameters were determined: the chemical composition (LC-ESI-MS/MS), the effects of pH on the stability and antioxidant activity of anthocyanins, the inhibition of lipid peroxidation, antioxidant activity, anti-hemolytic activity, antimicrobial, anti-hypertensive, and cytotoxic/cytoprotective effects, and the measurements of intracellular reactive oxygen species. Results showed that the temperature and time influenced (p ≤ 0.05) the content of flavonoids, anthocyanins, and FRAP. Only the temperature influenced the total phenolic content, non-anthocyanin flavonoids, and antioxidant activity (DPPH). The statistical approach made it possible to obtain the optimised experimental extraction conditions to increase the level of bioactive compounds. Chlorogenic, caffeic, ferulic, and p-coumaric acids, isoquercitrin, and coumarin were identified as the major compounds in the optimised extract. The optimised extract presented anti-hemolytic and anti-hypertensive activity in vitro, in addition to showing stability and reversibility of anthocyanins and antioxidant activity with pH variation. The C. cyanus petal aqueous extract exhibited high IC50 and GI50 (>900 μg/mL) values for all cell lines, meaning low cytotoxicity. Based on the oxidative stress assay, the extract exhibited pro-oxidant action (10-100 μg/mL) but did not cause damage or cell death. Copyright © 2018 Elsevier Ltd. All rights reserved.
Morton, Katherine; Band, Rebecca; van Woezik, Anne; Grist, Rebecca; McManus, Richard J.; Little, Paul; Yardley, Lucy
2018-01-01
Background For behaviour-change interventions to be successful they must be acceptable to users and overcome barriers to behaviour change. The Person-Based Approach can help to optimise interventions to maximise acceptability and engagement. This article presents a novel, efficient and systematic method that can be used as part of the Person-Based Approach to rapidly analyse data from development studies to inform intervention modifications. We describe how we used this approach to optimise a digital intervention for patients with hypertension (HOME BP), which aims to implement medication and lifestyle changes to optimise blood pressure control. Methods In study 1, hypertensive patients (N = 12) each participated in three think-aloud interviews, providing feedback on a prototype of HOME BP. In study 2 patients (N = 11) used HOME BP for three weeks and were then interviewed about their experiences. Studies 1 and 2 were used to identify detailed changes to the intervention content and potential barriers to engagement with HOME BP. In study 3 (N = 7) we interviewed hypertensive patients who were not interested in using an intervention like HOME BP to identify potential barriers to uptake, which informed modifications to our recruitment materials. Analysis in all three studies involved detailed tabulation of patient data and comparison to our modification criteria. Results Studies 1 and 2 indicated that the HOME BP procedures were generally viewed as acceptable and feasible, but also highlighted concerns about monitoring blood pressure correctly at home and making medication changes remotely. Patients in study 3 had additional concerns about the safety and security of the intervention. Modifications improved the acceptability of the intervention and recruitment materials. Conclusions This paper provides a detailed illustration of how to use the Person-Based Approach to refine a digital intervention for hypertension. The novel, efficient approach to analysis and criteria for deciding when to implement intervention modifications described here may be useful to others developing interventions. PMID:29723262
Gladman, John; Buckell, John; Young, John; Smith, Andrew; Hulme, Clare; Saggu, Satti; Godfrey, Mary; Enderby, Pam; Teale, Elizabeth; Longo, Roberto; Gannon, Brenda; Holditch, Claire; Eardley, Heather; Tucker, Helen
2017-01-01
Introduction To understand the variation in performance between community hospitals, our objectives are: to measure the relative performance (cost efficiency) of rehabilitation services in community hospitals; to identify the characteristics of community hospital rehabilitation that optimise performance; to investigate the current impact of community hospital inpatient rehabilitation for older people on secondary care and the potential impact if community hospital rehabilitation was optimised to best practice nationally; to examine the relationship between the configuration of intermediate care and secondary care bed use; and to develop toolkits for commissioners and community hospital providers to optimise performance. Methods and analysis 4 linked studies will be performed. Study 1: cost efficiency modelling will apply econometric techniques to data sets from the National Health Service (NHS) Benchmarking Network surveys of community hospital and intermediate care. This will identify community hospitals' performance and estimate the gap between high and low performers. Analyses will determine the potential impact if the performance of all community hospitals nationally was optimised to best performance, and examine the association between community hospital configuration and secondary care bed use. Study 2: a national community hospital survey gathering detailed cost data and efficiency variables will be performed. Study 3: in-depth case studies of 3 community hospitals, 2 high and 1 low performing, will be undertaken. Case studies will gather routine hospital and local health economy data. Ward culture will be surveyed. Content and delivery of treatment will be observed. Patients and staff will be interviewed. Study 4: co-designed web-based quality improvement toolkits for commissioners and providers will be developed, including indicators of performance and the gap between local and best community hospitals performance. Ethics and dissemination Publications will be in peer-reviewed journals, reports will be distributed through stakeholder organisations. Ethical approval was obtained from the Bradford Research Ethics Committee (reference: 15/YH/0062). PMID:28242766
A Method for Decentralised Optimisation in Networks
NASA Astrophysics Data System (ADS)
Saramäki, Jari
2005-06-01
We outline a method for distributed Monte Carlo optimisation of computational problems in networks of agents, such as peer-to-peer networks of computers. The optimisation and messaging procedures are inspired by gossip protocols and epidemic data dissemination, and are decentralised, i.e. no central overseer is required. In the outlined method, each agent follows simple local rules and seeks better solutions to the optimisation problem by Monte Carlo trials, as well as by querying other agents in its local neighbourhood. With a proper network topology, good solutions spread rapidly through the network for further improvement. Furthermore, the system retains its functionality even in realistic settings where agents are randomly switched on and off.
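A toy version of the decentralised scheme described above, assuming a ring topology, a one-dimensional objective and Gaussian perturbations (all illustrative choices, not details from the paper), might look like this:

```python
# Sketch of gossip-style decentralised Monte Carlo optimisation: each agent
# holds a candidate solution, improves it by random trials, and also copies
# better solutions from neighbours. Objective, topology and parameters are
# illustrative assumptions only.
import random

def cost(x):                       # simple 1-D objective to minimise
    return (x - 3.7) ** 2

N, ROUNDS = 30, 200
random.seed(1)
neighbours = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}   # ring network
solution = {i: random.uniform(-10, 10) for i in range(N)}

for _ in range(ROUNDS):
    for i in range(N):
        # Local Monte Carlo trial: keep a random perturbation if it is better.
        trial = solution[i] + random.gauss(0, 0.5)
        if cost(trial) < cost(solution[i]):
            solution[i] = trial
        # Gossip step: query one random neighbour and adopt a better solution.
        j = random.choice(neighbours[i])
        if cost(solution[j]) < cost(solution[i]):
            solution[i] = solution[j]

best = min(solution.values(), key=cost)
print("best solution found:", round(best, 3))
```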
Thermal buckling optimisation of composite plates using firefly algorithm
NASA Astrophysics Data System (ADS)
Kamarian, S.; Shakeri, M.; Yas, M. H.
2017-07-01
Composite plates play a very important role in engineering applications, especially in the aerospace industry. The thermal buckling of such components is of great importance and must be known to achieve an appropriate design. This paper deals with the stacking sequence optimisation of laminated composite plates for maximising the critical buckling temperature using a powerful meta-heuristic called the firefly algorithm (FA), which is based on the flashing behaviour of fireflies. The main objective of the present work is to show the ability of FA in the optimisation of composite structures. The performance of FA is compared with results reported in previously published works using other algorithms, which shows the efficiency of FA in the stacking sequence optimisation of laminated composite structures.
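For readers unfamiliar with the firefly algorithm, a bare-bones continuous version with textbook default parameters is sketched below; the objective is a simple test function standing in for the (negated) critical buckling temperature, and the paper's discrete stacking-sequence encoding is not reproduced.

```python
# Bare-bones firefly algorithm on a continuous test function. Parameters are
# the usual textbook defaults, not those used in the study.
import numpy as np

def objective(x):                      # to be minimised (e.g. -buckling temperature)
    return np.sum(x ** 2)

rng = np.random.default_rng(42)
n_fireflies, dim, iters = 20, 4, 100
alpha, beta0, gamma = 0.2, 1.0, 1.0
X = rng.uniform(-5, 5, (n_fireflies, dim))
f = np.apply_along_axis(objective, 1, X)

for _ in range(iters):
    for i in range(n_fireflies):
        for j in range(n_fireflies):
            if f[j] < f[i]:            # firefly i moves towards brighter firefly j
                r2 = np.sum((X[i] - X[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(dim) - 0.5)
                f[i] = objective(X[i])

best = X[np.argmin(f)]
print("best solution:", np.round(best, 3), "objective:", round(float(f.min()), 4))
```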
Tse, Laura; Nicholson, Tom
2014-01-01
The purpose of this study was to improve the literacy achievement of lower socioeconomic status (SES) children by combining explicit phonics with Big Book reading. Big Book reading is a component of the text-centered (or book reading) approach used in New Zealand schools. It involves the teacher in reading an enlarged book to children and demonstrating how to use semantic, syntactic, and grapho-phonic cues to learn to read. There has been little research, however, to find out whether the effectiveness of Big Book reading is enhanced by adding explicit phonics. In this study, a group of 96 second graders from three lower SES primary schools in New Zealand were taught in 24 small groups of four, tracked into three different reading ability levels. All pupils were randomly assigned to one of four treatment conditions: a control group who received math instruction, Big Book reading enhanced with phonics (BB/EP), Big Book reading on its own, and Phonics on its own. The results showed that the BB/EP group made significantly better progress than the Big Book and Phonics groups in word reading, reading comprehension, spelling, and phonemic awareness. In reading accuracy, the BB/EP and Big Book groups scored similarly. In basic decoding skills the BB/EP and Phonics groups scored similarly. The combined instruction, compared with Big Book reading and phonics, appeared to have no comparative disadvantages and considerable advantages. The present findings could be a model for New Zealand and other countries in their efforts to increase the literacy achievement of disadvantaged pupils.
Tse, Laura; Nicholson, Tom
2014-01-01
The purpose of this study was to improve the literacy achievement of lower socioeconomic status (SES) children by combining explicit phonics with Big Book reading. Big Book reading is a component of the text-centered (or book reading) approach used in New Zealand schools. It involves the teacher in reading an enlarged book to children and demonstrating how to use semantic, syntactic, and grapho-phonic cues to learn to read. There has been little research, however, to find out whether the effectiveness of Big Book reading is enhanced by adding explicit phonics. In this study, a group of 96 second graders from three lower SES primary schools in New Zealand were taught in 24 small groups of four, tracked into three different reading ability levels. All pupils were randomly assigned to one of four treatment conditions: a control group who received math instruction, Big Book reading enhanced with phonics (BB/EP), Big Book reading on its own, and Phonics on its own. The results showed that the BB/EP group made significantly better progress than the Big Book and Phonics groups in word reading, reading comprehension, spelling, and phonemic awareness. In reading accuracy, the BB/EP and Big Book groups scored similarly. In basic decoding skills the BB/EP and Phonics groups scored similarly. The combined instruction, compared with Big Book reading and phonics, appeared to have no comparative disadvantages and considerable advantages. The present findings could be a model for New Zealand and other countries in their efforts to increase the literacy achievement of disadvantaged pupils. PMID:25431560
Optimising operational amplifiers by evolutionary algorithms and gm/Id method
NASA Astrophysics Data System (ADS)
Tlelo-Cuautle, E.; Sanabria-Borbon, A. C.
2016-10-01
The evolutionary algorithm called the non-dominated sorting genetic algorithm (NSGA-II) is applied herein to the optimisation of operational transconductance amplifiers. NSGA-II is accelerated by applying the gm/Id method to estimate reduced search spaces associated with the widths (W) and lengths (L) of the metal-oxide-semiconductor field-effect transistors (MOSFETs), and to guarantee their appropriate bias conditions. In addition, we introduce an integer encoding for the W/L sizes of the MOSFETs to avoid a post-processing step for rounding off their values to multiples of the integrated circuit fabrication technology. Finally, from the feasible solutions generated by NSGA-II, we introduce a second optimisation stage to guarantee that the final feasible W/L solutions support process, voltage and temperature (PVT) variations. The optimisation results lead us to conclude that the gm/Id method and the integer encoding are quite useful for accelerating the convergence of NSGA-II, while the second optimisation stage guarantees robustness of the feasible solutions to PVT variations.
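The integer-encoding idea is simple to illustrate: if genes are integer multiples of the fabrication grid, decoded W/L values never need rounding. The grid pitch and transistor count below are assumptions made for the sketch, not values from the paper.

```python
# Sketch of the integer-encoding idea for transistor sizes: the optimiser works
# on integer multiples of the fabrication grid, so decoded W/L values never
# need a rounding post-processing step. Grid size and ranges are illustrative.
GRID = 0.09e-6                     # hypothetical layout grid (90 nm), in metres

def decode(genes, grid=GRID):
    """Map integer genes to physical W/L sizes (alternating W, L per MOSFET)."""
    return [g * grid for g in genes]

def encode(sizes, grid=GRID):
    """Map physical sizes back to the nearest integer number of grid units."""
    return [round(s / grid) for s in sizes]

# A candidate with two MOSFETs: (W1, L1, W2, L2) expressed in grid units.
chromosome = [40, 4, 120, 2]
print(decode(chromosome))          # e.g. [3.6e-06, 3.6e-07, 1.08e-05, 1.8e-07]
print(encode(decode(chromosome)))  # round-trips back to the integer genes
```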
A Bayesian Approach for Sensor Optimisation in Impact Identification
Mallardo, Vincenzo; Sharif Khodaei, Zahra; Aliabadi, Ferri M. H.
2016-01-01
This paper presents a Bayesian approach for optimizing the position of sensors aimed at impact identification in composite structures under operational conditions. The uncertainty in the sensor data has been represented by statistical distributions of the recorded signals. An optimisation strategy based on the genetic algorithm is proposed to find the best sensor combination aimed at locating impacts on composite structures. A Bayesian-based objective function is adopted in the optimisation procedure as an indicator of the performance of meta-models developed for different sensor combinations to locate various impact events. To represent a real structure under operational load and to increase the reliability of the Structural Health Monitoring (SHM) system, the probability of malfunctioning sensors is included in the optimisation. The reliability and the robustness of the procedure is tested with experimental and numerical examples. Finally, the proposed optimisation algorithm is applied to a composite stiffened panel for both the uniform and non-uniform probability of impact occurrence. PMID:28774064
NASA Astrophysics Data System (ADS)
Wang, Congsi; Wang, Yan; Wang, Zhihai; Wang, Meng; Yuan, Shuai; Wang, Weifeng
2018-04-01
It is well known that calculating and reducing the radar cross section (RCS) of an active phased array antenna (APAA) is both difficult and complicated, and how to balance radiating and scattering performance while the RCS is reduced remains unresolved. Therefore, this paper develops a coupled structure and scattering array factor model of the APAA based on the phase errors of the radiating elements generated by structural distortion and installation errors of the array. To obtain optimal radiating and scattering performance, an integrated optimisation model is built to optimise the installation height of all radiating elements in the normal direction of the array, in which the particle swarm optimisation method is adopted and the gain loss and scattering array factor are selected as the fitness function. The simulations indicate that the proposed coupling model and integrated optimisation method can effectively decrease the RCS while the necessary radiating performance is simultaneously guaranteed, which demonstrates important application value in the engineering design and structural evaluation of APAA.
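A generic particle swarm optimisation loop of the kind the paper applies to the element installation heights is sketched below; the fitness function is a placeholder, not the coupled gain-loss/scattering-array-factor objective of the study, and all parameters are illustrative.

```python
# Generic PSO loop; fitness() stands in for the study's coupled objective.
import numpy as np

def fitness(h):                     # placeholder for gain loss + scattering array factor
    return np.sum(h ** 2)

rng = np.random.default_rng(7)
n_particles, n_elems, iters = 30, 16, 200
w, c1, c2 = 0.7, 1.5, 1.5           # inertia and acceleration coefficients
x = rng.uniform(-1e-3, 1e-3, (n_particles, n_elems))   # element heights in metres
v = np.zeros_like(x)
pbest = x.copy()
pbest_f = np.apply_along_axis(fitness, 1, x)
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = np.apply_along_axis(fitness, 1, x)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best fitness:", float(pbest_f.min()))
```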
Andrighetto, Luke M; Stevenson, Paul G; Pearson, James R; Henderson, Luke C; Conlan, Xavier A
2014-11-01
In-silico optimised two-dimensional high performance liquid chromatographic (2D-HPLC) separations of a model methamphetamine seizure sample are described, where an excellent match between simulated and real separations was observed. Targeted separation of model compounds was completed with significantly reduced method development time. This separation was completed in the heart-cutting mode of 2D-HPLC where C18 columns were used in both dimensions, taking advantage of the selectivity difference of methanol and acetonitrile as the mobile phases. This method development protocol is most significant when optimising the separation of chemically similar compounds as it eliminates potentially hours of trial-and-error injections to identify the optimised experimental conditions. After only four screening injections the gradient profile for both 2D-HPLC dimensions could be optimised via simulations, ensuring the baseline resolution of diastereomers (ephedrine and pseudoephedrine) in 9.7 min. Depending on which diastereomer is present, the potential synthetic pathway can be categorized.
The development of response surface pathway design to reduce animal numbers in toxicity studies
2014-01-01
Background This study describes the development of Response Surface Pathway (RSP) design, assesses its performance and effectiveness in estimating LD50, and compares RSP with Up and Down Procedures (UDPs) and Random Walk (RW) design. Methods A basic 4-level RSP design was used on 36 male ICR mice given intraperitoneal doses of Yessotoxin. Simulations were performed to optimise the design. A k-adjustment factor was introduced to ensure coverage of the dose window and calculate the dose steps. Instead of using equal numbers of mice on all levels, the number of mice was increased at each design level. Additionally, the binomial outcome variable was changed to multinomial. The performance of the RSP designs and a comparison of UDPs and RW were assessed by simulations. The optimised 4-level RSP design was used on 24 female NMRI mice given Azaspiracid-1 intraperitoneally. Results The in vivo experiment with basic 4-level RSP design estimated the LD50 of Yessotoxin to be 463 μg/kgBW (95% CI: 383–535). By inclusion of the k-adjustment factor with equal or increasing numbers of mice on increasing dose levels, the estimate changed to 481 μg/kgBW (95% CI: 362–566) and 447 μg/kgBW (95% CI: 378–504 μg/kgBW), respectively. The optimised 4-level RSP estimated the LD50 to be 473 μg/kgBW (95% CI: 442–517). A similar increase in power was demonstrated using the optimised RSP design on real Azaspiracid-1 data. The simulations showed that the inclusion of the k-adjustment factor, reduction in sample size by increasing the number of mice on higher design levels and incorporation of a multinomial outcome gave estimates of the LD50 that were as good as those with the basic RSP design. Furthermore, optimised RSP design performed on just three levels reduced the number of animals from 36 to 15 without loss of information, when compared with the 4-level designs. Simulated comparison of the RSP design with UDPs and RW design demonstrated the superiority of RSP. Conclusion Optimised RSP design reduces the number of animals needed. The design converges rapidly on the area of interest and is at least as efficient as both the UDPs and RW design. PMID:24661560
Young people's perception of sexual and reproductive health services in Kenya.
Godia, Pamela M; Olenja, Joyce M; Hofman, Jan J; van den Broek, Nynke
2014-04-15
Addressing the Sexual and Reproductive Health (SRH) needs of young people remains a big challenge. This study explored experiences and perceptions of young people in Kenya aged 10-24 with regard to their SRH needs and whether these are met by the available healthcare services. 18 focus group discussions and 39 in-depth interviews were conducted at health care facilities and youth centres across selected urban and rural settings in Kenya. All interviews were tape recorded and transcribed. Data was analysed using the thematic framework approach. Young people's perceptions are not uniform and show variation between boys and girls as well as for type of service delivery. Girls seeking antenatal care and family planning services at health facilities characterise the available services as good and staff as helpful. However, boys perceive services at health facilities as designed for women and children, and therefore feel uncomfortable seeking services. At youth centres, young people value the non-health benefits including availability of recreational facilities, prevention of idleness, building of confidence, improving interpersonal communication skills, vocational training and facilitation of career progression. Providing young people with SRH information and services through the existing healthcare system, presents an opportunity that should be further optimised. Providing recreational activities via youth centres is reported by young people themselves to not lead to increased uptake of SRH healthcare services. There is need for more research to evaluate how perceived non-health benefits young people do gain from youth centres could lead to improved SRH of young people.
De Gussem, K; Wambecq, T; Roels, J; Fenu, A; De Gueldre, G; Van De Steene, B
2011-01-01
An ASM2da model of the full-scale wastewater treatment plant of Bree (Belgium) has been built. It showed very good agreement with reference operational data. This basic model has been extended to include an accurate calculation of the environmental footprint and the operational costs (energy consumption, dosing of chemicals and sludge treatment). Two optimisation strategies were compared: lowest cost meeting the effluent consent versus lowest environmental footprint. Six optimisation scenarios have been studied, namely (i) implementation of an online control system based on ammonium and nitrate sensors, (ii) implementation of a control on MLSS concentration, (iii) evaluation of the internal recirculation flow, (iv) the oxygen set point, (v) installation of mixing in the aeration tank, and (vi) evaluation of the nitrate set point for post-denitrification. Both the cost-based and the environmental impact or Life Cycle Assessment (LCA) based optimisation approaches are able to significantly lower the cost and the environmental footprint. However, the LCA approach has some advantages over cost minimisation for an existing full-scale plant: LCA tends to choose control settings that are more logical, resulting in safer operation of the plant with fewer risks regarding the consents, and in a better effluent at a slightly increased cost.
Electroconvulsive therapy stimulus titration: Not all it seems.
Rosenman, Stephen J
2018-05-01
To examine the provenance and implications of seizure threshold titration in electroconvulsive therapy. Titration of seizure threshold has become a virtual standard for electroconvulsive therapy. It is justified as individualisation and optimisation of the balance between efficacy and unwanted effects. Present day threshold estimation is significantly different from the 1960 studies of Cronholm and Ottosson that are its usual justification. The present form of threshold estimation is unstable and too uncertain for valid optimisation or individualisation of dose. Threshold stimulation (lowest dose that produces a seizure) has proven therapeutically ineffective, and the multiples applied to threshold to attain efficacy have never been properly investigated or standardised. The therapeutic outcomes of threshold estimation (or its multiples) have not been separated from simple dose effects. Threshold estimation does not optimise dose due to its own uncertainties and the different short-term and long-term cognitive and memory effects. Potential harms of titration have not been examined. Seizure threshold titration in electroconvulsive therapy is not a proven technique of dose optimisation. It is widely held and practiced; its benefit and harmlessness assumed but unproven. It is a prematurely settled answer to an unsettled question that discourages further enquiry. It is an example of how practices, assumed scientific, enter medicine by obscure paths.
Shape Optimisation of Holes in Loaded Plates by Minimisation of Multiple Stress Peaks
2015-04-01
Witold Waldman and Manfred...minimising the peak tangential stresses on multiple segments around the boundary of a hole in a uniaxially-loaded or biaxially-loaded plate. It is based...
NASA Astrophysics Data System (ADS)
Harré, Michael S.
2013-02-01
Two aspects of modern economic theory have dominated the recent discussion on the state of the global economy: Crashes in financial markets and whether or not traditional notions of economic equilibrium have any validity. We have all seen the consequences of market crashes: plummeting share prices, businesses collapsing and considerable uncertainty throughout the global economy. This seems contrary to what might be expected of a system in equilibrium where growth dominates the relatively minor fluctuations in prices. Recent work from within economics as well as by physicists, psychologists and computational scientists has significantly improved our understanding of the more complex aspects of these systems. With this interdisciplinary approach in mind, a behavioural economics model of local optimisation is introduced and three general properties are proven. The first is that under very specific conditions local optimisation leads to a conventional macro-economic notion of a global equilibrium. The second is that if both global optimisation and economic growth are required then under very mild assumptions market catastrophes are an unavoidable consequence. Third, if only local optimisation and economic growth are required then there is sufficient parametric freedom for macro-economic policy makers to steer an economy around catastrophes without overtly disrupting local optimisation.
ERIC Educational Resources Information Center
Giacumo, Lisa A.; Breman, Jeroen
2016-01-01
This article provides a systematic literature review about nonprofit and for-profit organizations using "big data" to inform performance improvement initiatives. The review of literature resulted in 4 peer-reviewed articles and an additional 33 studies covering the topic for these contexts. The review found that big data and analytics…
Teoh, Jia-Jie; Iwano, Tomohiko; Kunii, Masataka; Atik, Nur; Avriyanti, Erda; Yoshimura, Shin-ichiro; Moriwaki, Kenta
2017-01-01
BIG1, an activator protein of the small GTPase, Arf, and encoded by the Arfgef1 gene, is one of candidate genes for epileptic encephalopathy. To know the involvement of BIG1 in epileptic encephalopathy, we analyzed BIG1-deficient mice and found that BIG1 regulates neurite outgrowth and brain development in vitro and in vivo. The loss of BIG1 decreased the size of the neocortex and hippocampus. In BIG1-deficient mice, the neuronal progenitor cells (NPCs) and the interneurons were unaffected. However, Tbr1+ and Ctip2+ deep layer (DL) neurons showed spatial-temporal dependent apoptosis. This apoptosis gradually progressed from the piriform cortex (PIR), peaked in the neocortex, and then progressed into the hippocampus from embryonic day 13.5 (E13.5) to E17.5. The upper layer (UL) and DL order in the neocortex was maintained in BIG1-deficient mice, but the excitatory neurons tended to accumulate before their destination layers. Further pulse-chase migration assay showed that the migration defect was non-cell autonomous and secondary to the progression of apoptosis into the BIG1-deficient neocortex after E15.5. In BIG1-deficient mice, we observed an ectopic projection of corticothalamic axons from the primary somatosensory cortex (S1) into the dorsal lateral geniculate nucleus (dLGN). The thalamocortical axons were unable to cross the diencephalon–telencephalon boundary (DTB). In vitro, BIG1-deficient neurons showed a delay in neuronal polarization. BIG1-deficient neurons were also hypersensitive to low dose glutamate (5 μM), and died via apoptosis. This study showed the role of BIG1 in the survival of DL neurons in developing embryonic brain and in the generation of neuronal polarity. PMID:28414797
Medical big data: promise and challenges.
Lee, Choong Ho; Yoon, Hyung-Jin
2017-03-01
The concept of big data, commonly characterized by volume, variety, velocity, and veracity, goes far beyond the data type and includes the aspects of data analysis, such as hypothesis-generating, rather than hypothesis-testing. Big data focuses on temporal stability of the association, rather than on causal relationship and underlying probability distribution assumptions are frequently not required. Medical big data as material to be analyzed has various features that are not only distinct from big data of other disciplines, but also distinct from traditional clinical epidemiology. Big data technology has many areas of application in healthcare, such as predictive modeling and clinical decision support, disease or safety surveillance, public health, and research. Big data analytics frequently exploits analytic methods developed in data mining, including classification, clustering, and regression. Medical big data analyses are complicated by many technical issues, such as missing values, curse of dimensionality, and bias control, and share the inherent limitations of observation study, namely the inability to test causality resulting from residual confounding and reverse causation. Recently, propensity score analysis and instrumental variable analysis have been introduced to overcome these limitations, and they have accomplished a great deal. Many challenges, such as the absence of evidence of practical benefits of big data, methodological issues including legal and ethical issues, and clinical integration and utility issues, must be overcome to realize the promise of medical big data as the fuel of a continuous learning healthcare system that will improve patient outcome and reduce waste in areas including nephrology.
Medical big data: promise and challenges
Lee, Choong Ho; Yoon, Hyung-Jin
2017-01-01
The concept of big data, commonly characterized by volume, variety, velocity, and veracity, goes far beyond the data type and includes the aspects of data analysis, such as hypothesis-generating, rather than hypothesis-testing. Big data focuses on temporal stability of the association, rather than on causal relationship and underlying probability distribution assumptions are frequently not required. Medical big data as material to be analyzed has various features that are not only distinct from big data of other disciplines, but also distinct from traditional clinical epidemiology. Big data technology has many areas of application in healthcare, such as predictive modeling and clinical decision support, disease or safety surveillance, public health, and research. Big data analytics frequently exploits analytic methods developed in data mining, including classification, clustering, and regression. Medical big data analyses are complicated by many technical issues, such as missing values, curse of dimensionality, and bias control, and share the inherent limitations of observation study, namely the inability to test causality resulting from residual confounding and reverse causation. Recently, propensity score analysis and instrumental variable analysis have been introduced to overcome these limitations, and they have accomplished a great deal. Many challenges, such as the absence of evidence of practical benefits of big data, methodological issues including legal and ethical issues, and clinical integration and utility issues, must be overcome to realize the promise of medical big data as the fuel of a continuous learning healthcare system that will improve patient outcome and reduce waste in areas including nephrology. PMID:28392994
Perspectives on Policy and the Value of Nursing Science in a Big Data Era.
Gephart, Sheila M; Davis, Mary; Shea, Kimberly
2018-01-01
As data volume explodes, nurse scientists grapple with ways to adapt to the big data movement without jeopardizing its epistemic values and theoretical focus that celebrate while acknowledging the authority and unity of its body of knowledge. In this article, the authors describe big data and emphasize ways that nursing science brings value to its study. Collective nursing voices that call for more nursing engagement in the big data era are answered with ways to adapt and integrate theoretical and domain expertise from nursing into data science.
NASA Astrophysics Data System (ADS)
Li, Chenggang; Feng, Yujia
2018-03-01
This paper mainly studies the factors influencing the financing efficiency of the Guizhou big data industry, using the financial and macroeconomic data of 20 Guizhou big data enterprises from 2010 to 2016. The DEA model is used to obtain the financing efficiency of the Guizhou big data enterprises, and a panel data model is constructed to analyse six macro- and micro-level influencing factors. The results show that the external economic environment, the total asset turnover rate of the enterprises, growth in operating income, and growth in revenue per share have a positive impact on the financing efficiency of the big data industry in Guizhou. The key to improving the financing efficiency of Guizhou big data enterprises is to improve…
Optimisation of strain selection in evolutionary continuous culture
NASA Astrophysics Data System (ADS)
Bayen, T.; Mairet, F.
2017-12-01
In this work, we study a minimal time control problem for a perfectly mixed continuous culture with n ≥ 2 species and one limiting resource. The model that we consider includes a mutation factor for the microorganisms. Our aim is to provide optimal feedback control laws to optimise the selection of the species of interest. Thanks to Pontryagin's Principle, we derive optimality conditions on optimal controls and introduce a sub-optimal control law based on a most rapid approach to a singular arc that depends on the initial condition. Using adaptive dynamics theory, we also study a simplified version of this model which allows us to introduce a near-optimal strategy.
Optimising predictor domains for spatially coherent precipitation downscaling
NASA Astrophysics Data System (ADS)
Radanovics, S.; Vidal, J.-P.; Sauquet, E.; Ben Daoud, A.; Bontron, G.
2013-10-01
Statistical downscaling is widely used to overcome the scale gap between predictors from numerical weather prediction models or global circulation models and predictands like local precipitation, required for example for medium-term operational forecasts or climate change impact studies. The predictors are considered over a given spatial domain which is rarely optimised with respect to the target predictand location. In this study, an extended version of the growing rectangular domain algorithm is proposed to provide an ensemble of near-optimum predictor domains for a statistical downscaling method. This algorithm is applied to find five-member ensembles of near-optimum geopotential predictor domains for an analogue downscaling method for 608 individual target zones covering France. Results first show that very similar downscaling performances based on the continuous ranked probability score (CRPS) can be achieved by different predictor domains for any specific target zone, demonstrating the need for considering alternative domains in this context of high equifinality. A second result is the large diversity of optimised predictor domains over the country that questions the commonly made hypothesis of a common predictor domain for large areas. The domain centres are mainly distributed following the geographical location of the target location, but there are apparent differences between the windward and the lee side of mountain ridges. Moreover, domains for target zones located in southeastern France are centred more east and south than the ones for target locations on the same longitude. The size of the optimised domains tends to be larger in the southeastern part of the country, while domains with a very small meridional extent can be found in an east-west band around 47° N. Sensitivity experiments finally show that results are rather insensitive to the starting point of the optimisation algorithm except for zones located in the transition area north of this east-west band. Results also appear generally robust with respect to the archive length considered for the analogue method, except for zones with high interannual variability like in the Cévennes area. This study paves the way for defining regions with homogeneous geopotential predictor domains for precipitation downscaling over France, and therefore de facto ensuring the spatial coherence required for hydrological applications.
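The growing-rectangle idea can be illustrated with a greedy search that expands a predictor domain one grid row or column at a time in whichever direction most improves a score; the score below is a random placeholder field, not the CRPS-based skill used in the study, and the grid and seed are arbitrary.

```python
# Greedy "growing rectangle" sketch: starting from a seed cell, repeatedly grow
# the predictor domain in whichever direction most improves a skill score.
import numpy as np

rng = np.random.default_rng(3)
skill_map = rng.random((20, 30))            # toy per-cell "usefulness" field

def score(domain):
    """Placeholder skill of a predictor domain (i0, i1, j0, j1), inclusive."""
    i0, i1, j0, j1 = domain
    return skill_map[i0:i1 + 1, j0:j1 + 1].mean()

def grow_domain(seed, shape, steps=50):
    i, j = seed
    domain = (i, i, j, j)
    for _ in range(steps):
        i0, i1, j0, j1 = domain
        candidates = [(max(i0 - 1, 0), i1, j0, j1),
                      (i0, min(i1 + 1, shape[0] - 1), j0, j1),
                      (i0, i1, max(j0 - 1, 0), j1),
                      (i0, i1, j0, min(j1 + 1, shape[1] - 1))]
        best = max(candidates, key=score)
        if score(best) <= score(domain):    # stop when no growth direction helps
            break
        domain = best
    return domain

print(grow_domain(seed=(10, 15), shape=skill_map.shape))
```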
Gopinathan, Unni; Lewin, Simon; Glenton, Claire
2014-12-01
To identify factors affecting the implementation of large-scale programmes to optimise the health workforce in low- and middle-income countries. We conducted a multicountry case study synthesis. Eligible programmes were identified through consultation with experts and using Internet searches. Programmes were selected purposively to match the inclusion criteria. Programme documents were gathered via Google Scholar and PubMed and from key informants. The SURE Framework - a comprehensive list of factors that may influence the implementation of health system interventions - was used to organise the data. Thematic analysis was used to identify the key issues that emerged from the case studies. Programmes from Brazil, Ethiopia, India, Iran, Malawi, Venezuela and Zimbabwe were selected. Key system-level factors affecting the implementation of the programmes were related to health worker training and continuing education, management and programme support structures, the organisation and delivery of services, community participation, and the sociopolitical environment. Existing weaknesses in health systems may undermine the implementation of large-scale programmes to optimise the health workforce. Changes in the roles and responsibilities of cadres may also, in turn, impact the health system throughout. © 2014 John Wiley & Sons Ltd.
Holroyd, Kenneth A; Cottrell, Constance K; O'Donnell, Francis J; Cordingley, Gary E; Drew, Jana B; Carlson, Bruce W; Himawan, Lina
2010-09-29
To determine if the addition of preventive drug treatment (β blocker), brief behavioural migraine management, or their combination improves the outcome of optimised acute treatment in the management of frequent migraine. Randomised placebo controlled trial over 16 months from July 2001 to November 2005. Two outpatient sites in Ohio, USA. 232 adults (mean age 38 years; 79% female) with diagnosis of migraine with or without aura according to International Headache Society classification of headache disorders criteria, who recorded at least three migraines with disability per 30 days (mean 5.5 migraines/30 days), during an optimised run-in of acute treatment. Addition of one of four preventive treatments to optimised acute treatment: β blocker (n=53), matched placebo (n=55), behavioural migraine management plus placebo (n=55), or behavioural migraine management plus β blocker (n=69). The primary outcome was change in migraines/30 days; secondary outcomes included change in migraine days/30 days and change in migraine specific quality of life scores. Mixed model analysis showed statistically significant (P≤0.05) differences in outcomes among the four added treatments for both the primary outcome (migraines/30 days) and the two secondary outcomes (change in migraine days/30 days and change in migraine specific quality of life scores). The addition of combined β blocker and behavioural migraine management (-3.3 migraines/30 days, 95% confidence interval -3.2 to -3.5), but not the addition of β blocker alone (-2.1 migraines/30 days, -1.9 to -2.2) or behavioural migraine management alone (-2.2 migraines/30 days, -2.0 to -2.4), improved outcomes compared with optimised acute treatment alone (-2.1 migraines/30 days, -1.9 to -2.2). For a clinically significant reduction (≥50%) in migraines/30 days, the number needed to treat for optimised acute treatment plus combined β blocker and behavioural migraine management was 3.1 compared with optimised acute treatment alone, 2.6 compared with optimised acute treatment plus β blocker, and 3.1 compared with optimised acute treatment plus behavioural migraine management. Results were consistent for the two secondary outcomes, and at both month 10 (the primary endpoint) and month 16. The addition of combined β blocker plus behavioural migraine management, but not the addition of β blocker alone or behavioural migraine management alone, improved outcomes of optimised acute treatment. Combined β blocker treatment and behavioural migraine management may improve outcomes in the treatment of frequent migraine. Clinical trials NCT00910689.
NASA Astrophysics Data System (ADS)
Nickless, A.; Rayner, P. J.; Erni, B.; Scholes, R. J.
2018-05-01
The design of an optimal network of atmospheric monitoring stations for the observation of carbon dioxide (CO2) concentrations can be obtained by applying an optimisation algorithm to a cost function based on minimising posterior uncertainty in the CO2 fluxes obtained from a Bayesian inverse modelling solution. Two candidate optimisation methods were assessed: an evolutionary algorithm, the genetic algorithm (GA), and a deterministic algorithm, the incremental optimisation (IO) routine. This paper assessed the ability of the IO routine, in comparison to the more computationally demanding GA routine, to optimise the placement of a five-member network of CO2 monitoring sites located in South Africa. The comparison considered the reduction in uncertainty of the overall flux estimate, the spatial similarity of solutions, and computational requirements. Although the IO routine failed to find the solution with the global maximum uncertainty reduction, the resulting solution had only fractionally lower uncertainty reduction compared with the GA, and at only a quarter of the computational resources used by the lowest specified GA algorithm. The GA solution set showed more inconsistency if the number of iterations or population size was small, and more so for a complex prior flux covariance matrix. If the GA completed with a sub-optimal solution, these solutions were similar in fitness to the best available solution. Two additional scenarios were considered, with the objective of creating circumstances where the GA may outperform the IO. The first scenario considered an established network, where the optimisation was required to add an additional five stations to an existing five-member network. In the second scenario the optimisation was based only on the uncertainty reduction within a subregion of the domain. The GA was able to find a better solution than the IO under both scenarios, but with only a marginal improvement in the uncertainty reduction. These results suggest that, for the network design problem, resources would be better spent on improving the prior estimates of the flux uncertainties than on running a complex evolutionary optimisation algorithm. The authors recommend that, if time and computational resources allow, multiple optimisation techniques be used as part of a comprehensive suite of sensitivity tests when performing such an optimisation exercise. This will provide a selection of best solutions which could be ranked based on their utility and practicality.
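A toy version of the incremental (greedy) routine is easy to state for a linear Gaussian inversion: at each step, add the candidate station whose inclusion most reduces the trace of the posterior flux covariance. The sensitivities, prior covariance and noise variance below are synthetic assumptions, not the study's transport model or data.

```python
# Toy incremental-optimisation network design: greedily add the station whose
# inclusion most reduces posterior flux uncertainty in a linear Bayesian inversion.
import numpy as np

rng = np.random.default_rng(0)
n_flux, n_candidates = 25, 40
H_all = rng.normal(size=(n_candidates, n_flux))   # station-to-flux sensitivities
B = np.eye(n_flux)                                # prior flux error covariance
noise_var = 0.5                                   # observation error variance

def total_posterior_uncertainty(selected):
    if not selected:
        return np.trace(B)
    H = H_all[selected]
    R_inv = np.eye(len(selected)) / noise_var
    A_post = np.linalg.inv(H.T @ R_inv @ H + np.linalg.inv(B))
    return np.trace(A_post)

network, remaining = [], list(range(n_candidates))
for _ in range(5):                                # build a five-member network
    best = min(remaining, key=lambda s: total_posterior_uncertainty(network + [s]))
    network.append(best)
    remaining.remove(best)

reduction = 1 - total_posterior_uncertainty(network) / np.trace(B)
print("selected stations:", network, "uncertainty reduction:", round(reduction, 3))
```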
Optimisation of wire-cut EDM process parameter by Grey-based response surface methodology
NASA Astrophysics Data System (ADS)
Kumar, Amit; Soota, Tarun; Kumar, Jitendra
2018-03-01
Wire electric discharge machining (WEDM) is one of the advanced machining processes. Response surface methodology coupled with Grey relational analysis has been proposed and used to optimise the machining parameters of WEDM. A face-centred cubic design is used for conducting experiments on high-speed steel (HSS) M2 grade workpiece material. A regression model of the significant factors, namely pulse-on time, pulse-off time, peak current, and wire feed, is considered for optimising the response variables: material removal rate (MRR), surface roughness and kerf width. The optimal machining parameter settings were obtained using the Grey relational grade, and ANOVA is applied to determine the significance of the input parameters for the Grey relational grade.
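The Grey relational grade that combines several responses into a single index can be sketched as follows; the response values, the larger-is-better/smaller-is-better assignments and the distinguishing coefficient of 0.5 are illustrative choices, not the paper's data.

```python
# Grey relational grade of the kind used to combine MRR, surface roughness and
# kerf width into one index; the run data below are made up.
import numpy as np

# Each row is one experimental run: [MRR, surface roughness, kerf width].
runs = np.array([[2.1, 2.8, 0.31],
                 [2.9, 3.2, 0.35],
                 [3.4, 2.5, 0.30],
                 [2.6, 2.2, 0.28]])
larger_better = np.array([True, False, False])    # MRR up; roughness and kerf down
zeta = 0.5                                        # distinguishing coefficient

norm = np.empty_like(runs, dtype=float)
for k in range(runs.shape[1]):
    col = runs[:, k]
    if larger_better[k]:
        norm[:, k] = (col - col.min()) / (col.max() - col.min())
    else:
        norm[:, k] = (col.max() - col) / (col.max() - col.min())

delta = 1.0 - norm                                # deviation from the ideal (= 1)
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
grade = grc.mean(axis=1)                          # grey relational grade per run
print("grades:", np.round(grade, 3), "best run:", int(grade.argmax()) + 1)
```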
Syed, Zeeshan; Moscucci, Mauro; Share, David; Gurm, Hitinder S
2015-01-01
Background Clinical tools to stratify patients for emergency coronary artery bypass graft (ECABG) after percutaneous coronary intervention (PCI) create the opportunity to selectively assign patients undergoing procedures to hospitals with and without onsite surgical facilities for dealing with potential complications while balancing load across providers. The goal of our study was to investigate the feasibility of a computational model directly optimised for cohort-level performance to predict ECABG in PCI patients for this application. Methods Blue Cross Blue Shield of Michigan Cardiovascular Consortium registry data with 69 pre-procedural and angiographic risk variables from 68 022 PCI procedures in 2004–2007 were used to develop a support vector machine (SVM) model for ECABG. The SVM model was optimised for the area under the receiver operating characteristic curve (AUROC) at the level of the training cohort and validated on 42 310 PCI procedures performed in 2008–2009. Results There were 87 cases of ECABG (0.21%) in the validation cohort. The SVM model achieved an AUROC of 0.81 (95% CI 0.76 to 0.86). Patients in the predicted top decile were at a significantly increased risk relative to the remaining patients (OR 9.74, 95% CI 6.39 to 14.85, p<0.001) for ECABG. The SVM model optimised for the AUROC on the training cohort significantly improved discrimination, net reclassification and calibration over logistic regression and traditional SVM classification optimised for univariate performance. Conclusions Computational risk stratification directly optimising cohort-level performance holds the potential of high levels of discrimination for ECABG following PCI. This approach has value in selectively referring PCI patients to hospitals with and without onsite surgery. PMID:26688738
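The study's cohort-level AUROC optimisation is more involved than off-the-shelf tooling, but a rough analogue is to select SVM hyperparameters by cross-validated AUROC on a strongly imbalanced synthetic cohort, as sketched below; the data, event rate and parameter grid are simulated assumptions, not the registry data.

```python
# Rough analogue: train an SVM and pick its regularisation by cross-validated
# AUROC on an imbalanced synthetic cohort (~0.5% event rate, loosely like ECABG).
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=4000, n_features=20, weights=[0.995, 0.005],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

search = GridSearchCV(SVC(kernel="linear", class_weight="balanced"),
                      param_grid={"C": [0.01, 0.1, 1.0]},
                      scoring="roc_auc", cv=3)
search.fit(X_tr, y_tr)

scores = search.decision_function(X_te)           # continuous risk scores
print("validation AUROC:", round(roc_auc_score(y_te, scores), 3))
```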
Omar, J; Boix, A; Kerckhove, G; von Holst, C
2016-12-01
Titanium dioxide (TiO2) has various applications in consumer products and is also used as an additive in food and feeding stuffs. For the characterisation of this product, including the determination of nanoparticles, there is a strong need for the availability of corresponding methods of analysis. This paper presents an optimisation process for the characterisation of polydisperse-coated TiO2 nanoparticles. As a first step, probe ultrasonication was optimised using a central composite design in which the amplitude and time were the selected variables to disperse, i.e., to break up agglomerates and/or aggregates of the material. The results showed that high amplitudes (60%) favoured a better dispersion, and time was fixed at mid-values (5 min). In the next step, key factors of asymmetric flow field-flow fractionation (AF4), namely cross-flow (CF), detector flow (DF), exponential decay of the cross-flow (CFexp) and focus time (Ft), were studied through experimental design. Firstly, a full-factorial design was employed to establish the statistically significant factors (p < 0.05). Then, the information obtained from the full-factorial design was utilised by applying a central composite design to obtain the following optimum conditions of the system: CF, 1.6 ml min-1; DF, 0.4 ml min-1; Ft, 5 min; and CFexp, 0.6. Once the optimum conditions were obtained, the stability of the dispersed sample was measured for 24 h by analysing 10 replicates with AF4 in order to assess the performance of the optimised dispersion protocol. Finally, the recovery of the optimised method, particle shape and particle size distribution were estimated.
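A face-centred central composite design of the kind mentioned can be generated directly; the sketch below assumes the usual construction (2^k factorial points, axial points at alpha = 1 and replicated centre points) and uses illustrative amplitude and time ranges rather than the authors' settings.

```python
"""Sketch of a face-centred central composite design in two coded factors
(ultrasonication amplitude and time).  Factor ranges are illustrative only."""
from itertools import product
import numpy as np

def face_centred_ccd(n_centre=3):
    factorial = list(product([-1, 1], repeat=2))    # corner points
    axial = [(-1, 0), (1, 0), (0, -1), (0, 1)]      # alpha = 1 (on the faces)
    centre = [(0, 0)] * n_centre
    return np.array(factorial + axial + centre, dtype=float)

def decode(coded, lo, hi):
    """Map a coded level in [-1, 1] back to the real factor range."""
    return lo + (coded + 1) * (hi - lo) / 2

design = face_centred_ccd()
amplitude = decode(design[:, 0], 20, 60)   # % amplitude (illustrative range)
time_min  = decode(design[:, 1], 1, 9)     # sonication time in minutes
for a, t in zip(amplitude, time_min):
    print(f"run: amplitude={a:4.0f}%  time={t:3.1f} min")
```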
Omar, J.; Boix, A.; Kerckhove, G.; von Holst, C.
2016-01-01
Titanium dioxide (TiO2) has various applications in consumer products and is also used as an additive in food and feeding stuffs. For the characterisation of this product, including the determination of nanoparticles, there is a strong need for the availability of corresponding methods of analysis. This paper presents an optimisation process for the characterisation of polydisperse-coated TiO2 nanoparticles. As a first step, probe ultrasonication was optimised using a central composite design in which the amplitude and time were the selected variables to disperse, i.e., to break up agglomerates and/or aggregates of the material. The results showed that high amplitudes (60%) favoured a better dispersion, and time was fixed at mid-values (5 min). In the next step, key factors of asymmetric flow field-flow fractionation (AF4), namely cross-flow (CF), detector flow (DF), exponential decay of the cross-flow (CFexp) and focus time (Ft), were studied through experimental design. Firstly, a full-factorial design was employed to establish the statistically significant factors (p < 0.05). Then, the information obtained from the full-factorial design was utilised by applying a central composite design to obtain the following optimum conditions of the system: CF, 1.6 ml min–1; DF, 0.4 ml min–1; Ft, 5 min; and CFexp, 0.6. Once the optimum conditions were obtained, the stability of the dispersed sample was measured for 24 h by analysing 10 replicates with AF4 in order to assess the performance of the optimised dispersion protocol. Finally, the recovery of the optimised method, particle shape and particle size distribution were estimated. PMID:27650879
Leucht, Stefan; Winter-van Rossum, Inge; Heres, Stephan; Arango, Celso; Fleischhacker, W Wolfgang; Glenthøj, Birte; Leboyer, Marion; Leweke, F Markus; Lewis, Shôn; McGuire, Phillip; Meyer-Lindenberg, Andreas; Rujescu, Dan; Kapur, Shitij; Kahn, René S; Sommer, Iris E
2015-05-01
Most of the 13 542 trials contained in the Cochrane Schizophrenia Group's register just tested the general efficacy of pharmacological or psychosocial interventions. Studies on the subsequent treatment steps, which are essential to guide clinicians, are largely missing. This knowledge gap leaves important questions unanswered. For example, when a first antipsychotic failed, is switching to another drug effective? And when should we use clozapine? The aim of this article is to review the efficacy of switching antipsychotics in case of nonresponse. We also present the European Commission sponsored "Optimization of Treatment and Management of Schizophrenia in Europe" (OPTiMiSE) trial which aims to provide a treatment algorithm for patients with a first episode of schizophrenia. We searched Pubmed (October 29, 2014) for randomized controlled trials (RCTs) that examined switching the drug in nonresponders to another antipsychotic. We described important methodological choices of the OPTiMiSE trial. We found 10 RCTs on switching antipsychotic drugs. No trial was conclusive and none was concerned with first-episode schizophrenia. In OPTiMiSE, 500 first episode patients are treated with amisulpride for 4 weeks, followed by a 6-week double-blind RCT comparing continuation of amisulpride with switching to olanzapine and ultimately a 12-week clozapine treatment in nonremitters. A subsequent 1-year RCT validates psychosocial interventions to enhance adherence. Current literature fails to provide basic guidance for the pharmacological treatment of schizophrenia. The OPTiMiSE trial is expected to provide a basis for clinical guidelines to treat patients with a first episode of schizophrenia. © The Author 2015. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center. All rights reserved. For permissions, please email: journals.permissions@oup.com.
Bonmati, Ester; Hu, Yipeng; Gibson, Eli; Uribarri, Laura; Keane, Geri; Gurusami, Kurinchi; Davidson, Brian; Pereira, Stephen P; Clarkson, Matthew J; Barratt, Dean C
2018-06-01
Navigation of endoscopic ultrasound (EUS)-guided procedures of the upper gastrointestinal (GI) system can be technically challenging due to the small fields-of-view of ultrasound and optical devices, as well as the anatomical variability and limited number of orienting landmarks during navigation. Co-registration of an EUS device and a pre-procedure 3D image can enhance the ability to navigate. However, the fidelity of this contextual information depends on the accuracy of registration. The purpose of this study was to develop and test the feasibility of a simulation-based planning method for pre-selecting patient-specific EUS-visible anatomical landmark locations to maximise the accuracy and robustness of a feature-based multimodality registration method. A registration approach was adopted in which landmarks are registered to anatomical structures segmented from the pre-procedure volume. The predicted target registration errors (TREs) of EUS-CT registration were estimated using simulated visible anatomical landmarks and a Monte Carlo simulation of landmark localisation error. The optimal planes were selected based on the 90th percentile of TREs, which provide a robust and more accurate EUS-CT registration initialisation. The method was evaluated by comparing the accuracy and robustness of registrations initialised using optimised planes versus non-optimised planes using manually segmented CT images and simulated ([Formula: see text]) or retrospective clinical ([Formula: see text]) EUS landmarks. The results show a lower 90th percentile TRE when registration is initialised using the optimised planes compared with a non-optimised initialisation approach (p value [Formula: see text]). The proposed simulation-based method to find optimised EUS planes and landmarks for EUS-guided procedures may have the potential to improve registration accuracy. Further work will investigate applying the technique in a clinical setting.
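The planning idea, estimating a TRE distribution by Monte Carlo perturbation of landmark picks, can be sketched as follows; the rigid registration is a standard Kabsch/Procrustes fit, and the landmark coordinates, error magnitude and target location are hypothetical rather than taken from the study.

```python
"""Illustrative Monte Carlo sketch: perturb landmarks with a localisation error,
fit a rigid registration, and report the 90th percentile of the target
registration error (TRE) at a hypothetical clinical target."""
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rotation R and translation t mapping src -> dst (Kabsch)."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def tre_percentile(landmarks, target, sigma=2.0, n_trials=2000, q=90, seed=0):
    rng = np.random.default_rng(seed)
    tres = []
    for _ in range(n_trials):
        noisy = landmarks + rng.normal(0.0, sigma, landmarks.shape)
        R, t = rigid_fit(noisy, landmarks)      # registration from noisy picks
        tres.append(np.linalg.norm((R @ target + t) - target))
    return np.percentile(tres, q)

landmarks = np.array([[0, 0, 0], [40, 5, 0], [20, 30, 10], [10, 15, 35]], float)
target = np.array([25.0, 20.0, 15.0])           # hypothetical target location (mm)
print("90th percentile TRE (mm):", tre_percentile(landmarks, target))
```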
Olivier, A; Girerd, N; Michel, J B; Ketelslegers, J M; Fay, R; Vincent, J; Bramlage, P; Pitt, B; Zannad, F; Rossignol, P
2017-08-15
Increased levels of neuro-hormonal biomarkers predict poor prognosis in patients with acute myocardial infarction (AMI) complicated by left ventricular systolic dysfunction (LVSD). The predictive value of repeated (one-month interval) brain natriuretic peptide (BNP) and big-endothelin 1 (BigET-1) measurements was investigated in patients with LVSD after AMI. In a sub-study of the Eplerenone Post-Acute Myocardial Infarction Heart Failure Efficacy and Survival Study (EPHESUS trial), BNP and BigET-1 were measured at baseline and at 1 month in 476 patients. When included in the same Cox regression model, baseline BNP (p=0.0003) and BigET-1 (p=0.026) as well as the relative changes (after 1 month) from baseline in BNP (p=0.049) and BigET-1 (p=0.045) were predictive of the composite of cardiovascular death or hospitalization for worsening heart failure. Adding baseline and changes in BigET-1 to baseline and changes in BNP led to a significant increase in prognostic reclassification as assessed by integrated discrimination improvement index (5.0%, p=0.01 for the primary endpoint). Both increased baseline and changes after one month in BigET-1 concentrations were shown to be associated with adverse clinical outcomes, independently of baseline BNP levels and one-month changes, in patients after recent AMI complicated by LVSD. This novel result may be of clinical interest since such combined biomarker assessment could improve risk stratification and open new avenues for biomarker-guided targeted therapies. In the present study, we report for the first time in a population of patients with reduced LVEF after AMI and signs or symptoms of congestive HF, that increased baseline values of BNP and BigET-1 as well as a further rise of these markers over the first month after AMI, were independently predictive of future cardiovascular events. This approach may therefore be of clinical interest with the potential of improving risk stratification after AMI with reduced LVEF while further opening new avenues for biomarker-guided targeted therapies. Copyright © 2017 Elsevier B.V. All rights reserved.
Big data: survey, technologies, opportunities, and challenges.
Khan, Nawsher; Yaqoob, Ibrar; Hashem, Ibrahim Abaker Targio; Inayat, Zakira; Ali, Waleed Kamaleldin Mahmoud; Alam, Muhammad; Shiraz, Muhammad; Gani, Abdullah
2014-01-01
Big Data has gained much attention from the academia and the IT industry. In the digital and computing world, information is generated and collected at a rate that rapidly exceeds the boundary range. Currently, over 2 billion people worldwide are connected to the Internet, and over 5 billion individuals own mobile phones. By 2020, 50 billion devices are expected to be connected to the Internet. At this point, predicted data production will be 44 times greater than that in 2009. As information is transferred and shared at light speed on optic fiber and wireless networks, the volume of data and the speed of market growth increase. However, the fast growth rate of such large data generates numerous challenges, such as the rapid growth of data, transfer speed, diverse data, and security. Nonetheless, Big Data is still in its infancy stage, and the domain has not been reviewed in general. Hence, this study comprehensively surveys and classifies the various attributes of Big Data, including its nature, definitions, rapid growth rate, volume, management, analysis, and security. This study also proposes a data life cycle that uses the technologies and terminologies of Big Data. Future research directions in this field are determined based on opportunities and several open issues in Big Data domination. These research directions facilitate the exploration of the domain and the development of optimal techniques to address Big Data.
Big Data: Survey, Technologies, Opportunities, and Challenges
Khan, Nawsher; Yaqoob, Ibrar; Hashem, Ibrahim Abaker Targio; Inayat, Zakira; Mahmoud Ali, Waleed Kamaleldin; Alam, Muhammad; Shiraz, Muhammad; Gani, Abdullah
2014-01-01
Big Data has gained much attention from the academia and the IT industry. In the digital and computing world, information is generated and collected at a rate that rapidly exceeds the boundary range. Currently, over 2 billion people worldwide are connected to the Internet, and over 5 billion individuals own mobile phones. By 2020, 50 billion devices are expected to be connected to the Internet. At this point, predicted data production will be 44 times greater than that in 2009. As information is transferred and shared at light speed on optic fiber and wireless networks, the volume of data and the speed of market growth increase. However, the fast growth rate of such large data generates numerous challenges, such as the rapid growth of data, transfer speed, diverse data, and security. Nonetheless, Big Data is still in its infancy stage, and the domain has not been reviewed in general. Hence, this study comprehensively surveys and classifies the various attributes of Big Data, including its nature, definitions, rapid growth rate, volume, management, analysis, and security. This study also proposes a data life cycle that uses the technologies and terminologies of Big Data. Future research directions in this field are determined based on opportunities and several open issues in Big Data domination. These research directions facilitate the exploration of the domain and the development of optimal techniques to address Big Data. PMID:25136682
McKee, Hamish D; Irion, Luciane C D; Carley, Fiona M; Jhanji, Vishal; Brahma, Arun K
2011-10-01
To determine if residual corneal stroma remains on the recipient posterior lamella in big-bubble deep anterior lamellar keratoplasty (DALK). Pneumodissection using the big-bubble technique was carried out on eye-bank corneas mounted on an artificial anterior chamber. Samples that had a successful big-bubble formation were sent for histological evaluation to determine if any residual stroma remained on the Descemet membrane (DM). Big-bubble formation was achieved in 32 donor corneas. Two distinct types of big-bubble were seen: the bubble had either a white margin (30 corneas) or a clear margin (two corneas). The posterior lamellae of all the white margin corneas showed residual stroma on DM with a mean central thickness of 7.0 μm (range 2.6-17.4 μm). The clear margin corneas showed no residual stroma on DM. It should no longer be assumed that big-bubble DALK, where the bubble has a white margin, routinely bares DM. True baring of DM may only occur with the less commonly seen clear margin bubble.
Wang, Tao; Zhang, Jiahai; Zhang, Xuecheng; Xu, Chao; Tu, Xiaoming
2013-01-01
Streptococcus pneumoniae is a pathogen causing acute respiratory infection, otitis media and some other severe diseases in human. In this study, the solution structure of a bacterial immunoglobulin-like (Big) domain from a putative S. pneumoniae surface protein SP0498 was determined by NMR spectroscopy. SP0498 Big domain adopts an eight-β-strand barrel-like fold, which is different in some aspects from the two-sheet sandwich-like fold of the canonical Ig-like domains. Intriguingly, we identified that the SP0498 Big domain was a Ca2+ binding domain. The structure of the Big domain is different from those of the well known Ca2+ binding domains, therefore revealing a novel Ca2+-binding module. Furthermore, we identified the critical residues responsible for the binding to Ca2+. We are the first to report the interactions between the Big domain and Ca2+ in terms of structure, suggesting an important role of the Big domain in many essential calcium-dependent cellular processes such as pathogenesis. PMID:23326635
Device Data Ingestion for Industrial Big Data Platforms with a Case Study †
Ji, Cun; Shao, Qingshi; Sun, Jiao; Liu, Shijun; Pan, Li; Wu, Lei; Yang, Chenglei
2016-01-01
Despite having played a significant role in the Industry 4.0 era, the Internet of Things is currently faced with the challenge of how to ingest large-scale heterogeneous and multi-type device data. In response to this problem we present a heterogeneous device data ingestion model for an industrial big data platform. The model includes device templates and four strategies for data synchronization, data slicing, data splitting and data indexing, respectively. We can ingest device data from multiple sources with this heterogeneous device data ingestion model, which has been verified on our industrial big data platform. In addition, we present a case study on device data-based scenario analysis of industrial big data. PMID:26927121
Lebel, N.; D'Orléans-Juste, P.; Fournier, A.; Sirois, P.
1996-01-01
1. We have studied the conversion of big endothelin-1 (big ET-1), big endothelin-2 (big ET-2) and big endothelin-3 (big ET-3) and characterized the enzyme involved in the conversion of the three peptides in guinea-pig lung parenchyma (GPLP). 2. Endothelin-1 (ET-1), endothelin-2 (ET-2) and endothelin-3 (ET-3) (10 nM to 100 nM) caused similar concentration-dependent contractions of strips of GPLP. 3. Big ET-1 and big ET-2 also elicited concentration-dependent contractions of GPLP strips. In contrast, big ET-3, up to a concentration of 100 nM, failed to induce a contraction of the GPLP. 4. Incubation of strips of GPLP with the dual endothelin converting enzyme (ECE) and neutral endopeptidase (NEP) inhibitor, phosphoramidon (10 microM), as well as two other NEP inhibitors thiorphan (10 microM) or SQ 28,603 (10 microM) decreased by 43% (P < 0.05), 42% (P < 0.05) and 40% (P < 0.05) the contractions induced by 30 nM of big ET-1 respectively. Captopril (10 microM), an angiotensin-converting enzyme inhibitor, had no effect on the contractions induced by big ET-1. 5. The incubation of strips of GPLP with phosphoramidon (10 microM), thiorphan (10 microM) or SQ 28,603 (10 microM) also decreased by 74% (P < 0.05), 34% and 50% (P < 0.05) the contractions induced by 30 nM big ET-2 respectively. As for the contractions induced by big ET-1, captopril (10 microM) had no effect on the concentration-dependent contractions induced by big ET-2. 6. Phosphoramidon (10 microM), thiorphan (10 microM) and SQ 28,603 (10 microM) significantly potentiated the contractions of strips of GPLP induced by both ET-1 (30 nM) and ET-3 (30 nM). However, the enzymatic inhibitors did not significantly affect the contractions induced by ET-2 (30 nM) in this tissue. 7. These results suggest that the effects of big ET-1 and big ET-2 result from the conversion to ET-1 and ET-2 by at least one enzyme sensitive to phosphoramidon, thiorphan and SQ 28,603. This enzyme corresponds possibly to EC 3.4.24.11 (NEP 24.11) and could also be responsible for the degradation of ETs in the GPLP. PMID:8825361
Revisiting the Big Six and the Big Five among Hong Kong University Students
ERIC Educational Resources Information Center
Zhang, Li-fang
2008-01-01
The present study replicated investigation of the link between Holland's six career interest types and Costa and McCrae's big five personality traits in a Chinese context. A sample of 79 university students from Hong Kong evaluated their own abilities and responded to the Short-Version Self-Directed Search (SVSDS) and the NEO Five-Factor…
ERIC Educational Resources Information Center
Vreeke, Leonie J.; Muris, Peter
2012-01-01
This study examined the relations between behavioral inhibition, Big Five personality traits, and anxiety disorder symptoms in non-clinical children (n = 147) and clinically anxious children (n = 45) aged 6-13 years. Parents completed the Behavioral Inhibition Questionnaire-Short Form, the Big Five Questionnaire for Children, and the Screen for…
NASA Astrophysics Data System (ADS)
Rodigast, M.; Mutzel, A.; Iinuma, Y.; Haferkorn, S.; Herrmann, H.
2015-01-01
Carbonyl compounds are ubiquitous in the atmosphere and are either emitted primarily from anthropogenic and biogenic sources or produced secondarily from the oxidation of volatile organic compounds (VOC). Despite a number of studies on the quantification of carbonyl compounds, a comprehensive description of optimised methods for atmospherically relevant carbonyl compounds is scarce. Thus, a method was systematically characterised and improved to quantify carbonyl compounds. Quantification with the present method can be carried out for each carbonyl compound sampled in the aqueous phase regardless of its source. The method optimisation was conducted for seven atmospherically relevant carbonyl compounds including acrolein, benzaldehyde, glyoxal, methyl glyoxal, methacrolein, methyl vinyl ketone and 2,3-butanedione. O-(2,3,4,5,6-pentafluorobenzyl)hydroxylamine hydrochloride (PFBHA) was used as derivatisation reagent and the formed oximes were detected by gas chromatography/mass spectrometry (GC/MS). The main advantage of the improved method presented in this study is the low detection limit, in the range of 0.01 to 0.17 μmol L-1 depending on the carbonyl compound. Furthermore, best results were found for extraction with dichloromethane for 30 min followed by derivatisation with PFBHA for 24 h with 0.43 mg mL-1 PFBHA at a pH value of 3. The optimised method was evaluated in the present study by the OH radical initiated oxidation of 3-methylbutanone in the aqueous phase. Methyl glyoxal and 2,3-butanedione were found to be oxidation products in the samples with a yield of 2% for methyl glyoxal and 14% for 2,3-butanedione.
Kalid, Naser; Zaidan, A A; Zaidan, B B; Salman, Omar H; Hashim, M; Muzammil, H
2017-12-29
The growing worldwide population has increased the need for technologies, computerised software algorithms and smart devices that can monitor and assist patients anytime and anywhere and thus enable them to lead independent lives. The real-time remote monitoring of patients is an important issue in telemedicine. In the provision of healthcare services, patient prioritisation poses a significant challenge because of the complex decision-making process it involves when patients are considered 'big data'. To our knowledge, no study has highlighted the link between 'big data' characteristics and real-time remote healthcare monitoring in the patient prioritisation process, as well as the inherent challenges involved. Thus, we present comprehensive insights into the elements of big data characteristics according to the six 'Vs': volume, velocity, variety, veracity, value and variability. Each of these elements is presented and connected to a related part in the study of the connection between patient prioritisation and real-time remote healthcare monitoring systems. Then, we determine the weak points and recommend solutions as potential future work. This study makes the following contributions. (1) The link between big data characteristics and real-time remote healthcare monitoring in the patient prioritisation process is described. (2) The open issues and challenges for big data used in the patient prioritisation process are emphasised. (3) As a recommended solution, decision making using multiple criteria, such as vital signs and chief complaints, is utilised to prioritise the big data of patients with chronic diseases on the basis of the most urgent cases.
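As a purely illustrative companion to the multi-criteria prioritisation discussed above, the toy sketch below combines hypothetical vital-sign thresholds and a chief-complaint urgency table into a single weighted priority score; the criteria, weights and cut-offs are invented for demonstration only and do not come from the review.

```python
"""Toy multi-criteria triage scoring sketch: streamed vital signs and a chief
complaint are mapped to sub-scores and combined into one priority value."""

COMPLAINT_URGENCY = {"chest pain": 1.0, "shortness of breath": 0.9,
                     "dizziness": 0.5, "routine check": 0.1}

def vital_subscore(heart_rate, spo2, systolic_bp):
    score = 0.0
    score += 1.0 if heart_rate > 120 or heart_rate < 45 else 0.0
    score += 1.0 if spo2 < 90 else 0.0
    score += 1.0 if systolic_bp < 90 else 0.0
    return score / 3.0                       # normalise to [0, 1]

def priority(patient, w_vitals=0.7, w_complaint=0.3):
    v = vital_subscore(patient["hr"], patient["spo2"], patient["sbp"])
    c = COMPLAINT_URGENCY.get(patient["complaint"], 0.2)
    return w_vitals * v + w_complaint * c

patients = [
    {"id": "A", "hr": 130, "spo2": 88, "sbp": 85, "complaint": "chest pain"},
    {"id": "B", "hr": 80,  "spo2": 97, "sbp": 120, "complaint": "dizziness"},
]
for p in sorted(patients, key=priority, reverse=True):
    print(p["id"], round(priority(p), 2))    # most urgent patients first
```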
Gender differences in visuospatial planning: an eye movements study.
Cazzato, Valentina; Basso, Demis; Cutini, Simone; Bisiacchi, Patrizia
2010-01-20
Gender studies report a male advantage in several visuospatial abilities. Only few studies however, have evaluated differences in visuospatial planning behaviour with regard to gender. This study was aimed at exploring whether gender may affect the choice of cognitive strategies in a visuospatial planning task and, if oculomotor measures could assist in disentangling the cognitive processes involved. A computerised task based on the travelling salesperson problem paradigm, the Maps test, was used to investigate these issues. Participants were required to optimise time and space of a path travelling among a set of sub-goals in a spatially constrained environment. Behavioural results suggest that there are no gender differences in the initial visual processing of the stimuli, but rather during the execution of the plan, with males showing a shorter execution time and a higher path length optimisation than females. Males often showed changes of heuristics during the execution while females seemed to prefer a constant strategy. Moreover, a better performance in behavioural and oculomotor measures seemed to suggest that males are more able than females in either the optimisation of spatial features or the realisation of the planned scheme. Despite inconclusive findings, the results support previous research and provide insight into the level of cognitive processing involved in navigation and planning tasks, with regard to the influence of gender.
Abdelbary, A.; El-gendy, N. A.; Hosny, A.
2012-01-01
Glipizide is an effective antidiabetic agent; however, it suffers from a relatively short biological half-life. To overcome this limitation, it is a prospective candidate for fabrication as glipizide extended-release microcapsules. Microencapsulation of glipizide with a coat of alginate alone or in combination with chitosan or carbomer 934P was carried out employing an ionotropic gelation process. The prepared microcapsules were evaluated in vitro by microscopical examination and determination of the particle size, yield and microencapsulation efficiency. The filled capsules were assessed for content uniformity and drug release characteristics. A stability study of the optimised formulas was carried out at three different temperatures over 12 weeks. An in vivo bioavailability study and the hypoglycemic activity of C9 microcapsules were assessed in albino rabbits. All formulas achieved high yield, microencapsulation efficiency and extended t1/2. C9 and C19 microcapsules attained the most optimised results in all tests and complied with the dissolution requirements for extended-release dosage forms. These two formulas were selected for stability studies. C9 exhibited a longer shelf-life and hence was chosen for in vivo studies. C9 microcapsules showed an improvement in drug bioavailability and significant hypoglycemic activity compared to immediate-release tablets (Minidiab® 5 mg). The optimised microcapsule formulation developed was found to produce extended antidiabetic activity. PMID:23626387
The Impact of Big Data on Chronic Disease Management.
Bhardwaj, Niharika; Wodajo, Bezawit; Spano, Anthony; Neal, Symaron; Coustasse, Alberto
Population health management and specifically chronic disease management depend on the ability of providers to prevent development of high-cost and high-risk conditions such as diabetes, heart failure, and chronic respiratory diseases and to control them. The advent of big data analytics has potential to empower health care providers to make timely and truly evidence-based informed decisions to provide more effective and personalized treatment while reducing the costs of this care to patients. The goal of this study was to identify real-world health care applications of big data analytics to determine its effectiveness in both patient outcomes and the relief of financial burdens. The methodology for this study was a literature review utilizing 49 articles. Evidence of big data analytics being largely beneficial in the areas of risk prediction, diagnostic accuracy and patient outcome improvement, hospital readmission reduction, treatment guidance, and cost reduction was noted. Initial applications of big data analytics have proved useful in various phases of chronic disease management and could help reduce the chronic disease burden.
Big data in forensic science and medicine.
Lefèvre, Thomas
2018-07-01
In less than a decade, big data in medicine has become quite a phenomenon and many biomedical disciplines have gained their own tribune on the topic. Perspectives and debates are flourishing while a consensual definition of big data is still lacking. The 3Vs paradigm is frequently evoked to define the big data principles and stands for Volume, Variety and Velocity. Even according to this paradigm, genuine big data studies are still scarce in medicine and may not meet all expectations. On the one hand, techniques usually presented as specific to big data, such as machine learning, are supposed to support the ambition of personalized, predictive and preventive medicine. Most of these techniques are far from new; the most ancient are more than 50 years old. On the other hand, several issues closely related to the properties of big data and inherited from other scientific fields such as artificial intelligence are often underestimated if not ignored. Besides, a few papers temper the almost unanimous big data enthusiasm and are worth attention since they delineate what is at stake. In this context, forensic science is still awaiting its position papers as well as a comprehensive outline of what kind of contribution big data could bring to the field. The present situation calls for definitions and actions to rationally guide research and practice in big data. It is an opportunity for grounding a true interdisciplinary approach in forensic science and medicine that is mainly based on evidence. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
The BIG Score and Prediction of Mortality in Pediatric Blunt Trauma.
Davis, Adrienne L; Wales, Paul W; Malik, Tahira; Stephens, Derek; Razik, Fathima; Schuh, Suzanne
2015-09-01
To examine the association between in-hospital mortality and the BIG (composed of the base deficit [B], International normalized ratio [I], Glasgow Coma Scale [G]) score measured on arrival to the emergency department in pediatric blunt trauma patients, adjusted for pre-hospital intubation, volume administration, and presence of hypotension and head injury. We also examined the association between the BIG score and mortality in patients requiring admission to the intensive care unit (ICU). A retrospective 2001-2012 trauma database review of patients with blunt trauma ≤ 17 years old with an Injury Severity score ≥ 12. Charts were reviewed for in-hospital mortality, components of the BIG score upon arrival to the emergency department, prehospital intubation, crystalloids ≥ 20 mL/kg, presence of hypotension, head injury, and disposition. 50/621 (8%) of the study patients died. Independent mortality predictors were the BIG score (OR 11, 95% CI 6-25), prior fluid bolus (OR 3, 95% CI 1.3-9), and prior intubation (OR 8, 95% CI 2-40). The area under the receiver operating characteristic curve was 0.95 (CI 0.93-0.98), with the optimal BIG cutoff of 16. With BIG <16, death rate was 3/496 (0.006, 95% CI 0.001-0.007) vs 47/125 (0.38, 95% CI 0.15-0.7) with BIG ≥ 16, (P < .0001). In patients requiring admission to the ICU, the BIG score remained predictive of mortality (OR 14.3, 95% CI 7.3-32, P < .0001). The BIG score accurately predicts mortality in a population of North American pediatric patients with blunt trauma independent of pre-hospital interventions, presence of head injury, and hypotension, and identifies children with a high probability of survival (BIG <16). The BIG score is also associated with mortality in pediatric patients with trauma requiring admission to the ICU. Copyright © 2015 Elsevier Inc. All rights reserved.
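For orientation, the BIG score is commonly summarised in the paediatric trauma literature as base deficit plus 2.5 times the INR plus (15 minus the GCS); the sketch below implements that commonly cited formulation, which should be checked against the study's methods section before any use.

```python
"""Sketch of the BIG score as it is commonly reported in the literature
(base deficit + 2.5 x INR + (15 - GCS)); confirm the exact formulation
against the original description before relying on it."""

def big_score(base_deficit, inr, gcs):
    if not 3 <= gcs <= 15:
        raise ValueError("GCS must be between 3 and 15")
    return base_deficit + 2.5 * inr + (15 - gcs)

# Example: base deficit 8 mmol/L, INR 1.4, GCS 7 gives a BIG of 19.5,
# above the cutoff of 16 associated with higher mortality in this study.
score = big_score(base_deficit=8.0, inr=1.4, gcs=7)
print(score, "high risk" if score >= 16 else "lower risk")
```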
Schutyser, M A I; Straatsma, J; Keijzer, P M; Verschueren, M; De Jong, P
2008-11-30
In the framework of a cooperative EU research project (MILQ-QC-TOOL), a web-based modelling tool (Websim-MILQ) was developed for optimisation of thermal treatments in the dairy industry. The web-based tool enables optimisation of thermal treatments with respect to product safety, quality and costs. It can be applied to existing products and processes but also to reduce time to market for new products. Important aspects of the tool are its user-friendliness and its specifications customised to the needs of small dairy companies. To challenge the web-based tool, it was applied for optimisation of thermal treatments in 16 dairy companies producing yoghurt, fresh cream, chocolate milk and cheese. Optimisation with WebSim-MILQ resulted in concrete improvements with respect to risk of microbial contamination, cheese yield, fouling and production costs. In this paper we illustrate the use of WebSim-MILQ for optimisation of a cheese milk pasteurisation process where we could increase the cheese yield (1 extra cheese for each 100 produced cheeses from the same amount of milk) and reduce the risk of contamination of pasteurised cheese milk with thermoresistant streptococci from critical to negligible. In another case we demonstrate the advantage of changing from an indirect to a direct heating method for a UHT process resulting in 80% less fouling, while improving product quality and maintaining product safety.
Smart Information Management in Health Big Data.
Muteba A, Eustache
2017-01-01
The smart information management system (SIMS) is concerned with the organization of anonymous patient records in big data and their extraction in order to provide the real-time intelligence that is needed. The purpose of the present study is to highlight the design and the implementation of the smart information management system. We emphasise, on the one hand, the organization of big data in a flat file simulating a NoSQL database and, on the other hand, the extraction of information based on a lookup table and a cache mechanism. In health big data, the SIMS aims at the identification of new therapies and approaches to delivering care.
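A minimal sketch of the two mechanisms named, a lookup table over a flat file plus an in-memory cache, is given below; the record layout, file name and field contents are hypothetical and only illustrate the general approach, not the SIMS implementation.

```python
"""Illustrative flat-file record store with a byte-offset lookup table and an
LRU cache for repeated real-time reads.  File format is hypothetical."""
from functools import lru_cache

RECORD_FILE = "patients.flat"   # hypothetical: one 'id|field|field...' record per line

# create a tiny demo file so the sketch runs end to end
with open(RECORD_FILE, "w") as f:
    f.write("P000123|1958|diabetes\nP000124|1972|asthma\n")

def build_lookup(path):
    """Scan the flat file once and index record id -> byte offset."""
    lookup = {}
    with open(path, "rb") as f:
        while True:
            offset = f.tell()
            line = f.readline()
            if not line:
                break
            lookup[line.split(b"|", 1)[0].decode()] = offset
    return lookup

LOOKUP = build_lookup(RECORD_FILE)

@lru_cache(maxsize=10_000)      # cache hot records for repeated real-time queries
def get_record(record_id):
    with open(RECORD_FILE, "rb") as f:
        f.seek(LOOKUP[record_id])
        return f.readline().decode().rstrip("\n").split("|")

print(get_record("P000123"))    # ['P000123', '1958', 'diabetes']
print(get_record.cache_info())  # repeat lookups are served from the cache
```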
Chang, Luye; Connelly, Brian S; Geeza, Alexis A
2012-02-01
Though most personality researchers now recognize that ratings of the Big Five are not orthogonal, the field has been divided about whether these trait intercorrelations are substantive (i.e., driven by higher order factors) or artifactual (i.e., driven by correlated measurement error). We used a meta-analytic multitrait-multirater study to estimate trait correlations after common method variance was controlled. Our results indicated that common method variance substantially inflates trait correlations, and, once controlled, correlations among the Big Five became relatively modest. We then evaluated whether two different theories of higher order factors could account for the pattern of Big Five trait correlations. Our results did not support Rushton and colleagues' (Rushton & Irwing, 2008; Rushton et al., 2009) proposed general factor of personality, but Digman's (1997) α and β metatraits (relabeled by DeYoung, Peterson, and Higgins (2002) as Stability and Plasticity, respectively) produced viable fit. However, our models showed considerable overlap between Stability and Emotional Stability and between Plasticity and Extraversion, raising the question of whether these metatraits are redundant with their dominant Big Five traits. This pattern of findings was robust when we included only studies whose observers were intimately acquainted with targets. Our results underscore the importance of using a multirater approach to studying personality and the need to separate the causes and outcomes of higher order metatraits from those of the Big Five. We discussed the implications of these findings for the array of research fields in which personality is studied.
NASA Astrophysics Data System (ADS)
Jian, Le; Cao, Wang; Jintao, Yang; Yinge, Wang
2018-04-01
This paper describes the design of a dynamic voltage restorer (DVR) that can simultaneously protect several sensitive loads from voltage sags in a region of an MV distribution network. A novel reference voltage calculation method based on zero-sequence voltage optimisation is proposed for this DVR to optimise cost-effectiveness in compensation of voltage sags with different characteristics in an ungrounded neutral system. Based on a detailed analysis of the characteristics of voltage sags caused by different types of faults and the effect of the wiring mode of the transformer on these characteristics, the optimisation target of the reference voltage calculation is presented with several constraints. The reference voltages under all types of voltage sags are calculated by optimising the zero-sequence component, which can reduce the degree of swell in the phase-to-ground voltage after compensation to the maximum extent and can improve the symmetry degree of the output voltages of the DVR, thereby effectively increasing the compensation ability. The validity and effectiveness of the proposed method are verified by simulation and experimental results.
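The zero-sequence quantity being optimised comes from the standard symmetrical-component transform; the short sketch below computes it for an illustrative single-phase sag (the voltage values are made up and the DVR control logic itself is not shown).

```python
"""Worked sketch: zero-, positive- and negative-sequence components of a set of
(possibly unbalanced) phase voltages.  Phasor values are illustrative."""
import numpy as np

a = np.exp(1j * 2 * np.pi / 3)          # 120-degree rotation operator

def symmetrical_components(va, vb, vc):
    v0 = (va + vb + vc) / 3                       # zero sequence
    v1 = (va + a * vb + a**2 * vc) / 3            # positive sequence
    v2 = (va + a**2 * vb + a * vc) / 3            # negative sequence
    return v0, v1, v2

# Sagged phase A, e.g. caused by an upstream single-phase-to-ground fault
va = 0.6 * np.exp(1j * np.deg2rad(0)) * 230
vb = 1.0 * np.exp(1j * np.deg2rad(-120)) * 230
vc = 1.0 * np.exp(1j * np.deg2rad(120)) * 230

v0, v1, v2 = symmetrical_components(va, vb, vc)
print("zero-sequence magnitude (V):", abs(v0))   # the term the reference-voltage
                                                 # calculation seeks to optimise
```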
A Call to Investigate the Relationship Between Education and Health Outcomes Using Big Data.
Chahine, Saad; Kulasegaram, Kulamakan Mahan; Wright, Sarah; Monteiro, Sandra; Grierson, Lawrence E M; Barber, Cassandra; Sebok-Syer, Stefanie S; McConnell, Meghan; Yen, Wendy; De Champlain, Andre; Touchie, Claire
2018-06-01
There exists an assumption that improving medical education will improve patient care. While seemingly logical, this premise has rarely been investigated. In this Invited Commentary, the authors propose the use of big data to test this assumption. The authors present a few example research studies linking education and patient care outcomes and argue that using big data may more easily facilitate the process needed to investigate this assumption. The authors also propose that collaboration is needed to link educational and health care data. They then introduce a grassroots initiative, inclusive of universities in one Canadian province and national licensing organizations that are working together to collect, organize, link, and analyze big data to study the relationship between pedagogical approaches to medical training and patient care outcomes. While the authors acknowledge the possible challenges and issues associated with harnessing big data, they believe that the benefits outweigh these challenges. There is a need for medical education research to go beyond the outcomes of training to study practice and clinical outcomes as well. Without a coordinated effort to harness big data, policy makers, regulators, medical educators, and researchers are left with sometimes costly guesses and assumptions about what works and what does not. As the social, time, and financial investments in medical education continue to increase, it is imperative to understand the relationship between education and health outcomes.
NASA Astrophysics Data System (ADS)
Rodigast, M.; Mutzel, A.; Iinuma, Y.; Haferkorn, S.; Herrmann, H.
2015-06-01
Carbonyl compounds are ubiquitous in the atmosphere and are either emitted primarily from anthropogenic and biogenic sources or produced secondarily from the oxidation of volatile organic compounds. Despite a number of studies on the quantification of carbonyl compounds, a comprehensive description of optimised methods for atmospherically relevant carbonyl compounds is scarce. The method optimisation was conducted for seven atmospherically relevant carbonyl compounds including acrolein, benzaldehyde, glyoxal, methyl glyoxal, methacrolein, methyl vinyl ketone and 2,3-butanedione. O-(2,3,4,5,6-pentafluorobenzyl)hydroxylamine hydrochloride (PFBHA) was used as derivatisation reagent and the formed oximes were detected by gas chromatography/mass spectrometry (GC/MS). With the present method, quantification can be carried out for each carbonyl compound originating from fog, cloud and rain or sampled from the gas- and particle phase in water. Detection limits between 0.01 and 0.17 μmol L-1 were found, depending on the carbonyl compound. Furthermore, best results were found for the derivatisation with a PFBHA concentration of 0.43 mg mL-1 for 24 h followed by a subsequent extraction with dichloromethane for 30 min at pH = 1. The optimised method was evaluated in the present study by the OH radical initiated oxidation of 3-methylbutanone in the aqueous phase. Methyl glyoxal and 2,3-butanedione were found to be oxidation products in the samples with a yield of 2% for methyl glyoxal and 14% for 2,3-butanedione after a reaction time of 5 h.
Baumgardner, David E.; Bowles, David E.
2005-01-01
The mayfly (Insecta: Ephemeroptera) and caddisfly (Insecta: Trichoptera) fauna of Big Bend National Park and Big Bend Ranch State Park are reported based upon numerous records. For mayflies, sixteen species representing four families and twelve genera are reported. By comparison, thirty-five species of caddisflies were collected during this study representing seventeen genera and nine families. Although the Rio Grande supports the greatest diversity of mayflies (n=9) and caddisflies (n=14), numerous spring-fed creeks throughout the park also support a wide variety of species. A general lack of data on the distribution and abundance of invertebrates in Big Bend National and State Park is discussed, along with the importance of continuing this type of research. PMID:17119610
Meta-analysis of Big Five personality traits in autism spectrum disorder.
Lodi-Smith, Jennifer; Rodgers, Jonathan D; Cunningham, Sara A; Lopata, Christopher; Thomeer, Marcus L
2018-04-01
The present meta-analysis synthesizes the emerging literature on the relationship of Big Five personality traits to autism spectrum disorder. Studies were included if they (1) either (a) measured autism spectrum disorder characteristics using a metric that yielded a single score quantification of the magnitude of autism spectrum disorder characteristics and/or (b) studied individuals with an autism spectrum disorder diagnosis compared to individuals without an autism spectrum disorder diagnosis and (2) measured Big Five traits in the same sample or samples. Fourteen reviewed studies include both correlational analyses and group comparisons. Eighteen effect sizes per Big Five trait were used to calculate two overall effect sizes per trait. Meta-analytic effects were calculated using random effects models. Twelve effects (per trait) from nine studies reporting correlations yielded a negative association between each Big Five personality trait and autism spectrum disorder characteristics (Fisher's z ranged from -.21 (conscientiousness) to -.50 (extraversion)). Six group contrasts (per trait) from six studies comparing individuals diagnosed with autism spectrum disorder to neurotypical individuals were also substantial (Hedges' g ranged from -.88 (conscientiousness) to -1.42 (extraversion)). The potential impact of personality on important life outcomes and new directions for future research on personality in autism spectrum disorder are discussed in light of results.
ERIC Educational Resources Information Center
Gil, Einat; Gibbs, Alison L.
2017-01-01
In this study, we follow students' modeling and covariational reasoning in the context of learning about big data. A three-week unit was designed to allow 12th grade students in a mathematics course to explore big and mid-size data using concepts such as trend and scatter to describe the relationships between variables in multivariate settings.…
Roger W. Perry; Ronald E. Thill
2008-01-01
Although Eptesicus fuscus (Big Brown Bat) has been widely studied, information on tree-roosting in forests by males is rare, and little information is available on tree roosting in the southeastern United States. Our objectives were to characterize diurnal summer roosts, primarily for male Big Brown Bats, and to determine relationships between forest...
Swami, Viren; Tran, Ulrich S; Brooks, Louise Hoffmann; Kanaan, Laura; Luesse, Ellen-Marlene; Nader, Ingo W; Pietschnig, Jakob; Stieger, Stefan; Voracek, Martin
2013-04-01
Studies have suggested associations between personality dimensions and body image constructs, but these have not been conclusively established. In two studies, we examined direct associations between the Big Five dimensions and two body image constructs, actual-ideal weight discrepancy and body appreciation. In Study 1, 950 women completed measures of both body image constructs and a brief measure of the Big Five dimensions. In Study 2, 339 women completed measures of the body image constructs and a more reliable measure of the Big Five. Both studies showed that Neuroticism was significantly associated with actual-ideal weight discrepancy (positively) and body appreciation (negatively) once the effects of body mass index and social status had been accounted for. These results are consistent with the suggestion that Neuroticism is a trait of public health significance requiring attention by body image scholars. © 2012 The Authors. Scandinavian Journal of Psychology © 2012 The Scandinavian Psychological Associations.
NASA Astrophysics Data System (ADS)
Astley, R. J.; Sugimoto, R.; Mustafi, P.
2011-08-01
Novel techniques are presented to reduce noise from turbofan aircraft engines by optimising the acoustic treatment in engine ducts. The application of Computational Aero-Acoustics (CAA) to predict acoustic propagation and absorption in turbofan ducts is reviewed and a critical assessment of performance indicates that validated and accurate techniques are now available for realistic engine predictions. A procedure for integrating CAA methods with state of the art optimisation techniques is proposed in the remainder of the article. This is achieved by embedding advanced computational methods for noise prediction within automated and semi-automated optimisation schemes. Two different strategies are described and applied to realistic nacelle geometries and fan sources to demonstrate the feasibility of this approach for industry scale problems.
Sheridan, Juliette; Coe, Carol Ann; Doran, Peter; Egan, Laurence; Cullen, Garret; Kevans, David; Leyden, Jan; Galligan, Marie; O’Toole, Aoibhlinn; McCarthy, Jane; Doherty, Glen
2018-01-01
Introduction Ulcerative colitis (UC) is a chronic inflammatory bowel disease (IBD), often leading to an impaired quality of life in affected patients. Current treatment modalities include antitumour necrosis factor (anti-TNF) monoclonal antibodies (mAbs) including infliximab, adalimumab and golimumab (GLM). Several recent retrospective and prospective studies have demonstrated that fixed dosing schedules of anti-TNF agents often fail to consistently achieve adequate circulating therapeutic drug levels (DL), with a consequent risk of immunogenicity, treatment failure and a potential risk of hospitalisation and colectomy in patients with UC. The design of GLM dose Optimisation to Adequate Levels to Achieve Response in Colitis aims to address the impact of dose escalation of GLM immediately following induction and during the subsequent maintenance phase in response to suboptimal DL or persisting inflammatory burden as represented by raised faecal calprotectin (FCP). Aim The primary aim of the study is to ascertain if monitoring of FCP and DL of GLM to guide dose optimisation (during maintenance) improves rates of patient continuous clinical response and reduces disease activity in UC. Methods and analysis A randomised, multicentred two-arm trial studying the effect of dose optimisation of GLM based on FCP and DL versus treatment as per SMPC. Eligible patients will be randomised in a 1:1 ratio to 1 of 2 treatment groups and shall be treated over a period of 46 weeks. Ethics and dissemination The study protocol was approved by the Research Ethics committee of St. Vincent’s University Hospital. The results will be published in a peer-reviewed journal and shared with the worldwide medical community. Trial registration numbers EudraCT number: 2015-004724-62; Clinicaltrials.gov Identifier: NCT0268772; Pre-results. PMID:29379609
Consideration of plant behaviour in optimal servo-compensator design
NASA Astrophysics Data System (ADS)
Moase, W. H.; Manzie, C.
2016-07-01
Where the most prevalent optimal servo-compensator formulations penalise the behaviour of an error system, this paper considers the problem of additionally penalising the actual states and inputs of the plant. Doing so has the advantage of enabling the penalty function to better resemble an economic cost. This is especially true of problems where control effort needs to be sensibly allocated across weakly redundant inputs or where one wishes to use penalties to soft-constrain certain states or inputs. It is shown that, although the resulting cost function grows unbounded as its horizon approaches infinity, it is possible to formulate an equivalent optimisation problem with a bounded cost. The resulting optimisation problem is similar to those in earlier studies but has an additional 'correction term' in the cost function, and a set of equality constraints that arise when there are redundant inputs. A numerical approach to solve the resulting optimisation problem is presented, followed by simulations on a micro-macro positioner that illustrate the benefits of the proposed servo-compensator design approach.
NASA Astrophysics Data System (ADS)
Ferreira, Ana C. M.; Teixeira, Senhorinha F. C. F.; Silva, Rui G.; Silva, Ângela M.
2018-04-01
Cogeneration allows the optimal use of the primary energy sources and significant reductions in carbon emissions. Its use has great potential for applications in the residential sector. This study aims to develop a methodology for thermal-economic optimisation of a small-scale micro-gas turbine for cogeneration purposes, able to fulfil domestic energy needs with a thermal power output of 125 kW. A constrained non-linear optimisation model was built. The objective function is the maximisation of the annual worth of the combined heat and power system, representing the balance between the annual income and expenditure subject to physical and economic constraints. A genetic algorithm coded in the Java programming language was developed. An optimal micro-gas turbine able to produce 103.5 kW of electrical power with a positive annual profit (i.e. 11,925 €/year) was identified. The investment can be recovered in 4 years and 9 months, which is less than half of the system's expected lifetime.
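To illustrate the optimisation machinery (the authors' implementation is in Java and evaluates a full thermo-economic model), the sketch below runs a minimal genetic algorithm over two design variables against a stand-in objective with a known optimum and a soft size constraint; every number in it is hypothetical.

```python
"""Minimal genetic-algorithm sketch: maximise a stand-in 'annual worth' of two
design variables (electrical power, efficiency) with a penalised constraint."""
import random
random.seed(1)

def annual_worth(x):
    """Stand-in objective with a known optimum near (110, 0.32); the real study
    evaluates a detailed thermo-economic model here."""
    p_el, eff = x
    value = -(p_el - 110.0) ** 2 / 50 - 8e4 * (eff - 0.32) ** 2 + 12000
    penalty = 1e3 * max(0.0, 100.0 - p_el)        # soft lower bound on unit size
    return value - penalty

BOUNDS = [(50, 150), (0.2, 0.4)]                   # kW, net efficiency

def random_ind():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def mutate(ind, rate=0.2):
    return [min(hi, max(lo, g + random.gauss(0, 0.05 * (hi - lo))))
            if random.random() < rate else g
            for g, (lo, hi) in zip(ind, BOUNDS)]

def crossover(p1, p2):
    return [random.choice(pair) for pair in zip(p1, p2)]

pop = [random_ind() for _ in range(40)]
for _ in range(200):                               # generations
    pop.sort(key=annual_worth, reverse=True)
    parents = pop[:10]                             # simple truncation selection
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(30)]

best = max(pop, key=annual_worth)
print("best design:", best, "annual worth:", round(annual_worth(best)))
```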
Semantic distance as a critical factor in icon design for in-car infotainment systems.
Silvennoinen, Johanna M; Kujala, Tuomo; Jokinen, Jussi P P
2017-11-01
In-car infotainment systems require icons that enable fluent cognitive information processing and safe interaction while driving. An important issue is how to find an optimised set of icons for different functions in terms of semantic distance. In an optimised icon set, every icon needs to be semantically as close as possible to the function it visually represents and semantically as far as possible from the other functions represented concurrently. In three experiments (N = 21 each), semantic distances of 19 icons to four menu functions were studied with preference rankings, verbal protocols, and the primed product comparisons method. The results show that the primed product comparisons method can be efficiently utilised for finding an optimised set of icons for time-critical applications out of a larger set of icons. The findings indicate the benefits of the novel methodological perspective into the icon design for safety-critical contexts in general. Copyright © 2017 Elsevier Ltd. All rights reserved.
Bokhari, Awais; Chuah, Lai Fatt; Yusup, Suzana; Klemeš, Jiří Jaromír; Kamil, Ruzaimah Nik M
2016-01-01
Pretreatment of the high free fatty acid rubber seed oil (RSO) via esterification reaction has been investigated by using a pilot scale hydrodynamic cavitation (HC) reactor. Four newly designed orifice plate geometries are studied. Cavities are induced by an assisted double-diaphragm pump at inlet pressures in the range of 1-3.5 bar. An optimised plate with 21 holes of 1 mm diameter and an inlet pressure of 3 bar resulted in RSO acid value reduction from 72.36 to 2.64 mg KOH/g within 30 min of reaction time. Reaction parameters have been optimised by using response surface methodology and found to be a methanol-to-oil ratio of 6:1, a catalyst concentration of 8 wt%, a reaction time of 30 min and a reaction temperature of 55°C. The reaction time with HC was three-fold shorter, and the esterification efficiency four-fold higher, than with mechanical stirring. This makes the HC process more environmentally friendly. Copyright © 2015 Elsevier Ltd. All rights reserved.
[Discussion of the implementation of MIMIC database in emergency medical study].
Li, Kaiyuan; Feng, Cong; Jia, Lijing; Chen, Li; Pan, Fei; Li, Tanshi
2018-05-01
To introduce the Medical Information Mart for Intensive Care (MIMIC) database and to elaborate an approach to research on critical and emergency illness with big data, based on the features of MIMIC and recent studies both domestic and overseas, we put forward the feasibility and necessity of introducing medical big data into emergency research. We then discuss the role of the MIMIC database in emergency clinical studies, as well as the principles and key points of experimental design and implementation in a medical big data setting. The use of the MIMIC database in emergency medical research opens a brand new field for the early diagnosis, risk warning and prognosis of critical illness; however, there are also limitations. To meet the era of big data, an emergency medical database in accordance with our national conditions is needed, which will provide new momentum for the development of emergency medicine.
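As a minimal illustration of working with MIMIC, the sketch below assumes the MIMIC-III CSV export and its ADMISSIONS table (columns such as SUBJECT_ID, HADM_ID, ADMITTIME, DISCHTIME and ADMISSION_TYPE); column names should be verified against the database version actually used.

```python
"""Minimal sketch, assuming MIMIC-III CSV files: filter emergency admissions and
derive length of stay, a typical first step in retrospective emergency studies."""
import pandas as pd

adm = pd.read_csv("ADMISSIONS.csv", parse_dates=["ADMITTIME", "DISCHTIME"])

emergency = adm[adm["ADMISSION_TYPE"] == "EMERGENCY"].copy()
emergency["LOS_DAYS"] = (emergency["DISCHTIME"]
                         - emergency["ADMITTIME"]).dt.total_seconds() / 86400

print("emergency admissions:", len(emergency))
print(emergency[["SUBJECT_ID", "HADM_ID", "LOS_DAYS"]].describe())
```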
Fernald, K D S; Pennings, H P G; van den Bosch, J F; Commandeur, H R; Claassen, E
2017-01-01
In the context of increased pharmaceutical innovation deficits and Big Pharma blockbusters' patent expirations, this paper examines the moderating role of firms' absorptive capacity in external innovation activities of Big Pharma firms. The study indicates a rising interest of Big Pharma in acquisitions of and alliances with biotechnology companies. Unfortunately, this increased interest is not reflected in the number of new drugs generated by Big Pharma. We find that acquisitions of biotech companies have negatively affected Big Pharma firms' innovation performance on average but these acquisitions might have a positive effect at higher levels of acquiring firms' absorptive capacity. Moreover, also acquisitions of pharma companies and alliances with biotech companies only have a positive effect on innovation performance at sufficiently high levels of absorptive capacity. The moderating role of absorptive capacity implicates that a tight integration of internal R&D efforts and (unrelated) external knowledge is crucial for harnessing complementarity effects.
Fernald, K. D. S.; Pennings, H. P. G.; van den Bosch, J. F.; Commandeur, H. R.; Claassen, E.
2017-01-01
In the context of increased pharmaceutical innovation deficits and Big Pharma blockbusters’ patent expirations, this paper examines the moderating role of firms’ absorptive capacity in external innovation activities of Big Pharma firms. The study indicates a rising interest of Big Pharma in acquisitions of and alliances with biotechnology companies. Unfortunately, this increased interest is not reflected in the number of new drugs generated by Big Pharma. We find that acquisitions of biotech companies have negatively affected Big Pharma firms’ innovation performance on average but these acquisitions might have a positive effect at higher levels of acquiring firms’ absorptive capacity. Moreover, also acquisitions of pharma companies and alliances with biotech companies only have a positive effect on innovation performance at sufficiently high levels of absorptive capacity. The moderating role of absorptive capacity implicates that a tight integration of internal R&D efforts and (unrelated) external knowledge is crucial for harnessing complementarity effects. PMID:28231332
Sweetapple, Christine; Fu, Guangtao; Butler, David
2014-05-15
This study investigates the potential of control strategy optimisation for the reduction of operational greenhouse gas emissions from wastewater treatment in a cost-effective manner, and demonstrates that significant improvements can be realised. A multi-objective evolutionary algorithm, NSGA-II, is used to derive sets of Pareto optimal operational and control parameter values for an activated sludge wastewater treatment plant, with objectives including minimisation of greenhouse gas emissions, operational costs and effluent pollutant concentrations, subject to legislative compliance. Different problem formulations are explored, to identify the most effective approach to emissions reduction, and the sets of optimal solutions enable identification of trade-offs between conflicting objectives. It is found that multi-objective optimisation can facilitate a significant reduction in greenhouse gas emissions without the need for plant redesign or modification of the control strategy layout, but there are trade-offs to consider: most importantly, if operational costs are not to be increased, reduction of greenhouse gas emissions is likely to incur an increase in effluent ammonia and total nitrogen concentrations. Design of control strategies for a high effluent quality and low costs alone is likely to result in an inadvertent increase in greenhouse gas emissions, so it is of key importance that effects on emissions are considered in control strategy development and optimisation. Copyright © 2014 Elsevier Ltd. All rights reserved.
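The trade-off logic behind a Pareto-optimal solution set can be illustrated independently of NSGA-II; the sketch below keeps the non-dominated candidates among hypothetical control settings evaluated on three minimisation objectives (emissions, cost, effluent ammonia).

```python
"""Illustrative Pareto filter over hypothetical control-parameter settings
evaluated on three minimisation objectives."""

def dominates(a, b):
    """True if a is at least as good as b on every objective and better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other is not s)]

# (emissions [kg CO2e/d], cost [EUR/d], effluent NH4 [mg/L]) per candidate setting
candidates = [
    (5200, 1400, 1.2),
    (4800, 1550, 2.0),   # lowest emissions, but cost and ammonia rise
    (5300, 1350, 1.1),
    (5600, 1300, 0.9),
    (5250, 1450, 1.3),   # dominated by the first candidate
]
for s in pareto_front(candidates):
    print(s)
```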
Optimisation of phase ratio in the triple jump using computer simulation.
Allen, Sam J; King, Mark A; Yeadon, M R Fred
2016-04-01
The triple jump is an athletic event comprising three phases in which the optimal proportion of each phase to the total distance jumped, termed the phase ratio, is unknown. This study used a whole-body torque-driven computer simulation model of all three phases of the triple jump to investigate optimal technique. The technique of the simulation model was optimised by varying torque generator activation parameters using a Genetic Algorithm in order to maximise total jump distance, resulting in a hop-dominated technique (35.7%:30.8%:33.6%) and a distance of 14.05m. Optimisations were then run with penalties forcing the model to adopt hop and jump phases of 33%, 34%, 35%, 36%, and 37% of the optimised distance, resulting in total distances of: 13.79m, 13.87m, 13.95m, 14.05m, and 14.02m; and 14.01m, 14.02m, 13.97m, 13.84m, and 13.67m respectively. These results indicate that in this subject-specific case there is a plateau in optimum technique encompassing balanced and hop-dominated techniques, but that a jump-dominated technique is associated with a decrease in performance. Hop-dominated techniques are associated with higher forces than jump-dominated techniques; therefore optimal phase ratio may be related to a combination of strength and approach velocity. Copyright © 2016 Elsevier B.V. All rights reserved.
O'Brien, Rosaleen; Fitzpatrick, Bridie; Higgins, Maria; Guthrie, Bruce; Watt, Graham; Wyke, Sally
2016-01-01
Objectives To develop and optimise a primary care-based complex intervention (CARE Plus) to enhance the quality of life of patients with multimorbidity in deprived areas. Methods Six co-design discussion groups involving 32 participants were held separately with multimorbid patients from deprived areas, voluntary organisations, general practitioners and practice nurses working in deprived areas. This was followed by piloting in two practices and further optimisation based on interviews with 11 general practitioners, 2 practice nurses and 6 participating multimorbid patients. Results Participants endorsed the need for longer consultations, relational continuity and a holistic approach. All felt that training and support of the health care staff was important. Most participants welcomed the idea of additional self-management support, though some practitioners were dubious about whether patients would use it. The pilot study led to changes including a revised care plan, the inclusion of mindfulness-based stress reduction techniques in the support of practitioners and patients, and the streamlining of the written self-management support material for patients. Discussion We have co-designed and optimised an augmented primary care intervention involving a whole-system approach to enhance quality of life in multimorbid patients living in deprived areas. CARE Plus will next be tested in a phase 2 cluster randomised controlled trial. PMID:27068113
Bassick, M.D.; Jones, M.L.
1992-01-01
The study area (see index map of Idaho), part of the Big Lost River drainage basin, is at the northern side of the eastern Snake River Plain. The lower Big Lost River Valley extends from the confluence of Antelope Creek and the Big Lost River to about 4 mi south of Arco and encompasses about 145 mi² (see map showing water-level contours). The study area is about 18 mi long and, at its narrowest, 4 mi wide. Arco, Butte City, and Moore, with populations of 1,016, 59, and 190, respectively, in 1990, are the only incorporated towns. The entire study area, except the extreme northwestern part, is in Butte County. The study area boundary is where alluvium and colluvium pinch out and abut against the White Knob Mountains (chiefly undifferentiated sedimentary rock with lesser amounts of volcanic rock) on the west and the Lost River Range (chiefly sedimentary rock) on the east. Gravel and sand in the valley fill compose the main aquifer. The southern boundary is approximately where Big Lost River valley fill intercalates with or abuts against basalt of the Snake River Group. Spring ground-water levels and flow in the Big Lost River depend primarily on temperature and the amount and timing of precipitation within the entire drainage basin. Periods of abundant water supply and water shortages are, therefore, related to the amount of annual precipitation. Surface reservoir capacity in the valley (Mackay Reservoir, about 20 mi northwest of Moore) is only 20 percent of the average annual flow of the Big Lost River (Crosthwaite and others, 1970, p. 3). Stored surface water is generally unavailable for carryover from years of abundant water supply to help relieve drought conditions in subsequent years. Many farmers have drilled irrigation wells to supplement surface-water supplies and to increase irrigated acreage. Average annual flow of the Big Lost River below Mackay Reservoir near Mackay (gaging station 13127000, not shown) in water years 1905, 1913-14, and 1920-90 was about 224,600 acre-ft; average annual flow of the Big Lost River near Arco (gaging station 13132500; see map showing water-level contours) in water years 1947-61, 1967-80, and 1983-90 was about 79,000 acre-ft (Harenberg and others, 1991, p. 254-255). Moore Canal and East Side Ditch divert water from the Big Lost River at the Moore Diversion, 3 mi north of Moore (see map showing water-level contours) and supply water for irrigation near the margins of the valley. When water supply is average or greater, water in the Big Lost River flows through the study area and onto the Snake River Plain, where it evaporates or infiltrates into the Snake River Plain aquifer. When water supply is below average, water in the Big Lost River commonly does not reach Arco; rather, it is diverted for irrigation in the interior of the valley, evaporates, or infiltrates to the valley-fill aquifer. This report describes the results of a study by the U.S. Geological Survey, in cooperation with the Idaho Department of Water Resources, to collect hydrologic data needed to help address water-supply problems in the Big Lost River Valley. Work involved (1) field inventory of 81 wells, including 46 irrigation wells; (2) measurement of water levels in 154 wells in March 1991; (3) estimation of annual ground-water pumpage for irrigation from 1984 through 1990; and (4) analysis of results of an aquifer test conducted southwest of Moore. All data obtained during this study may be inspected at the U.S. Geological Survey, Idaho District office, Boise.
First Born amplitude for transitions from a circular state to a state of large (l, m)
NASA Astrophysics Data System (ADS)
Dewangan, D. P.
2005-01-01
The use of cylindrical polar coordinates instead of the conventional spherical polar coordinates enables us to derive compact expressions of the first Born amplitude for some selected sets of transitions from an arbitrary initial circular $|\psi_{n_i,n_i-1,n_i-1}\rangle$ state to a final $|\psi_{n_f,l_f,m_f}\rangle$ state of large $(l_f, m_f)$. The formulae for $|\psi_{n_i,n_i-1,n_i-1}\rangle \longrightarrow |\psi_{n_f,n_f-1,n_f-2}\rangle$ and $|\psi_{n_i,n_i-1,n_i-1}\rangle \longrightarrow |\psi_{n_f,n_f-1,n_f-3}\rangle$ transitions are expressed in terms of the Jacobi polynomials, which serve as suitable starting points for constructing complete solutions over the bound energy levels of hydrogen-like atoms. The formulae for $|\psi_{n_i,n_i-1,n_i-1}\rangle \longrightarrow |\psi_{n_f,n_f-1,-(n_f-2)}\rangle$ and $|\psi_{n_i,n_i-1,n_i-1}\rangle \longrightarrow |\psi_{n_f,n_f-1,-(n_f-3)}\rangle$ transitions are in simple algebraic forms and are directly applicable to all possible values of $n_i$ and $n_f$. It emerges that the method can be extended to evaluate the first Born amplitude for many other transitions involving states of large $(l, m)$.
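For orientation, the matrix element underlying such calculations can be written in a standard textbook form (atomic units). The expression below is the generic first Born form factor with assumed notation, not the compact closed-form results derived in the paper, and the projectile-dependent prefactor is deliberately left unspecified.

```latex
% Generic first Born amplitude for an inelastic transition i -> f of a
% hydrogen-like atom, up to a prefactor depending on the projectile charge;
% q is the momentum transfer. The initial circular state has l = m = n_i - 1.
f^{\mathrm{B1}}_{i \to f}(\mathbf{q}) \;\propto\; \frac{1}{q^{2}}\,
  \big\langle \psi_{n_f, l_f, m_f} \,\big|\, e^{\,i\mathbf{q}\cdot\mathbf{r}} \,\big|\,
  \psi_{n_i,\, n_i-1,\, n_i-1} \big\rangle ,
\qquad
\psi_{n,\,n-1,\,n-1}(\mathbf{r}) = R_{n,\,n-1}(r)\, Y_{n-1}^{\,n-1}(\theta,\phi).
```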
Luan, Xiali; Han, Shanrui; Zhou, Wen
2017-01-01
Big data have contributed to deepening our understanding of many human systems, particularly human mobility patterns and the structure and functioning of transportation systems. Resonating with the recent call for ‘open big data,’ big data from various sources on a range of scales have become increasingly accessible to the public. However, open big data relevant to travelers within public transit tools remain scarce, hindering any further in-depth study on human mobility patterns. Here, we explore ticketing-website derived data that are publicly available but have been largely neglected. We demonstrate the power, potential and limitations of this open big data, using the Chinese high-speed rail (HSR) system as an example. Using an application programming interface, we automatically collected the data on the remaining tickets (RTD) for scheduled trains at the last second before departure in order to retrieve information on unused transit capacity, occupancy rate of trains, and passenger flux at stations. We show that this information is highly useful in characterizing the spatiotemporal patterns of traveling behaviors on the Chinese HSR, such as weekend traveling behavior, imbalanced commuting behavior, and station functionality. Our work facilitates the understanding of human traveling patterns along the Chinese HSR, and the functionality of the largest HSR system in the world. We expect our work to attract attention regarding this unique open big data source for the study of analogous transportation systems. PMID:28574991
Wei, Sheng; Yuan, Jinfu; Qiu, Yanning; Luan, Xiali; Han, Shanrui; Zhou, Wen; Xu, Chi
2017-01-01
Big data have contributed to deepening our understanding of many human systems, particularly human mobility patterns and the structure and functioning of transportation systems. Resonating with the recent call for 'open big data,' big data from various sources on a range of scales have become increasingly accessible to the public. However, open big data relevant to travelers within public transit tools remain scarce, hindering any further in-depth study on human mobility patterns. Here, we explore ticketing-website derived data that are publicly available but have been largely neglected. We demonstrate the power, potential and limitations of this open big data, using the Chinese high-speed rail (HSR) system as an example. Using an application programming interface, we automatically collected the data on the remaining tickets (RTD) for scheduled trains at the last second before departure in order to retrieve information on unused transit capacity, occupancy rate of trains, and passenger flux at stations. We show that this information is highly useful in characterizing the spatiotemporal patterns of traveling behaviors on the Chinese HSR, such as weekend traveling behavior, imbalanced commuting behavior, and station functionality. Our work facilitates the understanding of human traveling patterns along the Chinese HSR, and the functionality of the largest HSR system in the world. We expect our work to attract attention regarding this unique open big data source for the study of analogous transportation systems.
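The derived quantity at the core of the two abstracts above is simple: occupancy is one minus the fraction of seats still unsold just before departure. The sketch below is illustrative only; the endpoint URL and JSON fields are hypothetical placeholders, since the actual ticketing-website interface is not specified in the abstracts, and the printed example uses made-up numbers instead of a live call.

```python
# Illustrative sketch: turning remaining-ticket counts polled shortly before
# departure into train occupancy rates. The API URL and response fields are
# hypothetical placeholders, not the real ticketing-website interface.
import json
from urllib.request import urlopen

API_URL = "https://example.org/api/remaining_tickets?train={train_id}"  # placeholder

def occupancy_rate(remaining_seats: int, total_seats: int) -> float:
    """Fraction of seats sold when `remaining_seats` are left unsold."""
    return 1.0 - remaining_seats / total_seats

def fetch_remaining(train_id: str) -> dict:
    # Would require a real endpoint; shown only to indicate where polling fits in.
    with urlopen(API_URL.format(train_id=train_id)) as resp:
        return json.load(resp)

# Offline example with made-up numbers instead of a live request:
snapshot = {"train": "G101", "total_seats": 1050, "remaining_seats": 120}
rate = occupancy_rate(snapshot["remaining_seats"], snapshot["total_seats"])
print(f"{snapshot['train']}: occupancy about {rate:.1%}")
```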
Is ICRP guidance on the use of reference levels consistent?
Hedemann-Jensen, Per; McEwan, Andrew C
2011-12-01
In ICRP 103, which has replaced ICRP 60, it is stated that no fundamental changes have been introduced compared with ICRP 60. This is true except that the application of reference levels in emergency and existing exposure situations seems to be applied inconsistently, and also in the related publications ICRP 109 and ICRP 111. ICRP 103 emphasises that focus should be on the residual doses after the implementation of protection strategies in emergency and existing exposure situations. If possible, the result of an optimised protection strategy should bring the residual dose below the reference level. Thus the reference level represents the maximum acceptable residual dose after an optimised protection strategy has been implemented. It is not an 'off-the-shelf item' that can be set free of the prevailing situation. It should be determined as part of the process of optimising the protection strategy. If not, protection would be sub-optimised. However, in ICRP 103 some inconsistent concepts have been introduced, e.g. in paragraph 279 which states: 'All exposures above or below the reference level should be subject to optimisation of protection, and particular attention should be given to exposures above the reference level'. If, in fact, all exposures above and below reference levels are subject to the process of optimisation, reference levels appear superfluous. It could be considered that if optimisation of protection below a fixed reference level is necessary, then the reference level has been set too high at the outset. Up until the last phase of the preparation of ICRP 103 the concept of a dose constraint was recommended to constrain the optimisation of protection in all types of exposure situations. In the final phase, the term 'dose constraint' was changed to 'reference level' for emergency and existing exposure situations. However, it seems as if in ICRP 103 it was not fully recognised that dose constraints and reference levels are conceptually different. The use of reference levels in radiological protection is reviewed. It is concluded that the recommendations in ICRP 103 and related ICRP publications seem to be inconsistent regarding the use of reference levels in existing and emergency exposure situations.
NASA Astrophysics Data System (ADS)
Schmitt, R.; Pavim, A.
2009-06-01
The demand for achieving smaller and more flexible production series with a considerable diversity of products complicates the control of the manufacturing tasks, leading to big challenges for the quality assurance systems. The quality assurance strategy that is nowadays used for mass production is unable to cope with the inspection flexibility needed in automated small series production, because the measuring strategy is totally dependent on the fixed features of the few manufactured object variants and on process parameters that can be controlled/compensated during production time. The major challenge faced by a quality assurance system applied to small series production facilities is to guarantee the needed quality level already at the first run, and therefore, the quality assurance system has to adapt itself constantly to the new manufacturing conditions. The small series production culture requires a change of paradigms, because its strategies are totally different from those of mass production. This work discusses the tight inspection requirements of small series production and presents flexible metrology strategies based on optical sensor data fusion techniques, agent-based systems as well as cognitive and self-optimised systems for assuring the needed quality level of flexible small series. Examples of application scenarios are provided for the automated assembly of solid state lasers and the flexible inspection of automotive headlights.
Design Status of the Cryogenic System and Operation Modes Analysis of the JT-60SA Tokamak
NASA Astrophysics Data System (ADS)
Roussel, P.; Hoa, C.; Lamaison, V.; Michel, F.; Reynaud, P.; Wanner, M.
2010-04-01
The JT-60SA project is part of the Broader Approach Programme signed between Japan and Europe. This superconducting upgrade of the existing JT-60U tokamak in Naka, Japan shall start operation in 2016 and shall support ITER exploitation and research towards DEMO fusion reactor. JT-60SA is currently in the basic design phase. The cryogenic system of JT-60SA shall provide supercritical helium to cool the superconducting magnets and their structures at 4.4 K, and the divertor cryopumps at a temperature of 3.7 K. In addition it shall provide refrigeration for the thermal shields at 80 K and deliver helium at 50 K for the current leads. The equivalent refrigeration capacity at 4.5 K will be about 10 kW. The refrigeration process has to be optimised for different operation modes. During the day, in plasma operation state, the refrigerator will cope with the pulsed heat loads which may increase up to 100% of the average power, representing a big challenge compared to other tokamaks. Fast discharge quenches of the magnets, the impact from baking of the vacuum vessel, cool down and warm up modes are presented from the cryogenic system point of view and their impact on the cryogenic design is described.
NASA Astrophysics Data System (ADS)
Muthuraja, P.; Joselin Beaula, T.; Balachandar, S.; Bena Jothy, V.; Dhandapani, M.
2017-10-01
2-aminoguanidinium 4-methyl benzene sulphonate (AGMS), an organic compound with a big assembly of hydrogen bonding interactions, was crystallized at room temperature. The structure of the compound was confirmed by FT-IR, NMR and single crystal X-ray diffraction analysis. Numerous hydrogen bonded interactions were found to form supramolecular assemblies in the molecular structure. Fingerprint plots of Hirshfeld surface analysis spell out the interactions in various directions. The molecular structure of AGMS was optimised by HF, MP2 and DFT (B3LYP and CAM-B3LYP) methods with the 6-311G(d,p) basis set and the geometrical parameters were compared. Electrostatic potential calculations of the reactants and product confirm the proton transfer. Optical properties of AGMS were ascertained by UV-Vis absorbance and reflectance spectra. The band gap of AGMS is found to be 2.689 eV. Due to numerous hydrogen bonds, the crystal is thermally stable up to 200 °C. Hyperconjugative interactions which are responsible for the second hyperpolarizabilities were accounted for by NBO analysis. Static and frequency dependent optical properties were calculated with HF and DFT methods. The hyperpolarizabilities of AGMS increase rapidly at frequencies 0.0428 and 0.08 a.u. compared with the static values. The compound exhibits violet and blue emission.
Visualizing the knowledge structure and evolution of big data research in healthcare informatics.
Gu, Dongxiao; Li, Jingjing; Li, Xingguo; Liang, Changyong
2017-02-01
In recent years, the literature associated with healthcare big data has grown rapidly, but few studies have used bibliometrics and a visualization approach to conduct deep mining and reveal a panorama of the healthcare big data field. To explore the foundational knowledge and research hotspots of big data research in the field of healthcare informatics, this study conducted a series of bibliometric analyses on the related literature, including papers' production trends in the field and the trend of each paper's co-author number, the distribution of core institutions and countries, the core literature distribution, the related information of prolific authors and innovation paths in the field, a keyword co-occurrence analysis, and research hotspots and trends for the future. By conducting a literature content analysis and structure analysis, we found the following: (a) In the early stage, researchers from the United States, the People's Republic of China, the United Kingdom, and Germany made the most contributions to the literature associated with healthcare big data research and the innovation path in this field. (b) The innovation path in healthcare big data consists of three stages: the disease early detection, diagnosis, treatment, and prognosis phase, the life and health promotion phase, and the nursing phase. (c) Research hotspots are mainly concentrated in three dimensions: the disease dimension (e.g., epidemiology, breast cancer, obesity, and diabetes), the technical dimension (e.g., data mining and machine learning), and the health service dimension (e.g., customized service and elderly nursing). This study will provide scholars in the healthcare informatics community with panoramic knowledge of healthcare big data research, as well as research hotspots and future research directions. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Quality Attribute-Guided Evaluation of NoSQL Databases: An Experience Report
2014-10-18
detailed technical evaluations of NoSQL databases specifically, and big data systems in general, that have become apparent during our study... big data, software systems [Agarwal 2011]. Internet-born organizations such as Google and Amazon are at the cutting edge of this revolution... Chang 2008], along with those of numerous other big data innovators, have made a variety of open source and commercial data management technologies
Can the big five factors of personality predict lymphocyte counts?
Ožura, Ana; Ihan, Alojz; Musek, Janek
2012-03-01
Psychological stress is known to affect the immune system. The Limbic Hypothalamic Pituitary Adrenal (LHPA) axis has been identified as the principal path of the bidirectional communication between the immune system and the central nervous system with significant psychological activators. Personality traits acted as moderators of the relationship between life conflicts and psychological distress. This study focuses on the relationship between the Big Five factors of personality and immune regulation as indicated by Lymphocyte counts. Our study included 32 professional soldiers from the Slovenian Army that completed the Big Five questionnaire (Goldberg IPIP-300). We also assessed their white blood cell counts with a detailed lymphocyte analysis using flow cytometry. The correlations between personality variables and immune system parameters were calculated. Furthermore, regression analyses were performed using personality variables as predictors and immune parameters as criteria. The results demonstrated that the model using the Big Five factors as predictors of Lymphocyte counts is significant in predicting the variance in NK and B cell counts. Agreeableness showed the strongest predictive function. The results offer support for the theoretical models that stressed the essential links between personality and immune regulation. Further studies with larger samples examining the Big five factors and immune system parameters are needed.
The seasonal behaviour of carbon fluxes in the Amazon: fusion of FLUXNET data and the ORCHIDEE model
NASA Astrophysics Data System (ADS)
Verbeeck, H.; Peylin, P.; Bacour, C.; Ciais, P.
2009-04-01
Eddy covariance measurements at the Santarém (km 67) site revealed an unexpected seasonal pattern in carbon fluxes which could not be simulated by existing state-of-the-art global ecosystem models (Saleska et al., Science 2003). An unexpectedly high carbon uptake was measured during the dry season. In contrast, carbon release was observed in the wet season. There are several possible (combined) underlying mechanisms of this phenomenon: (1) an increased soil respiration due to soil moisture in the wet season, (2) increased photosynthesis during the dry season due to deep rooting, hydraulic lift, increased radiation and/or a leaf flush. The objective of this study is to optimise the ORCHIDEE model using eddy covariance data in order to be able to mimic the seasonal response of carbon fluxes to dry/wet conditions in tropical forest ecosystems. By doing this, we try to identify the underlying mechanisms of this seasonal response. The ORCHIDEE model is a state of the art mechanistic global vegetation model that can be run at local or global scale. It calculates the carbon and water cycle in the different soil and vegetation pools and resolves the diurnal cycle of fluxes. ORCHIDEE is built on the concept of plant functional types (PFT) to describe vegetation. To bring the different carbon pool sizes to realistic values, spin-up runs are used. ORCHIDEE uses climate variables as drivers together with a number of ecosystem parameters that have been assessed from laboratory and in situ experiments. These parameters are still associated with a large uncertainty and may vary between and within PFTs in a way that is currently not informed or captured by the model. Recently, the development of assimilation techniques has allowed the objective use of eddy covariance data to improve our knowledge of these parameters in a statistically coherent approach. We use a Bayesian optimisation approach. This approach is based on the minimization of a cost function containing the mismatch between simulated model output and observations as well as the mismatch between a priori and optimized parameters. The parameters can be optimized on different time scales (annually, monthly, daily). For this study the model is optimised at local scale for 5 eddy flux sites: 4 sites in Brazil and one in French Guyana. The seasonal behaviour of C fluxes in response to wet and dry conditions differs among these sites. Key processes that are optimised include: the effect of the soil water on heterotrophic soil respiration, the effect of soil water availability on stomatal conductance and photosynthesis, and phenology. By optimising several key parameters we could improve the simulation of the seasonal pattern of NEE significantly. Nevertheless, posterior parameters should be interpreted with care, because resulting parameter values might compensate for uncertainties in the model structure or other parameters. Moreover, several critical issues appeared during this study, e.g. how to assimilate latent and sensible heat data when the energy balance is not closed in the data? Optimisation of the Q10 parameter showed that on some sites respiration was not sensitive at all to temperature, which shows only small variations in this region. Considering this, one could question the reliability of the partitioned fluxes (GPP/Reco) at these sites. This study also tests if there is coherence between optimised parameter values of different sites within the tropical forest PFT and if the forward model response to climate variations is similar between sites.
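The Bayesian cost function mentioned in the preceding abstract has a standard generic form, reproduced below with assumed notation (the paper itself does not spell out its matrices here); it balances the model-data mismatch against departure of the optimised parameters from their priors.

```latex
% Generic Bayesian cost function of the kind described: misfit between observed
% fluxes y and model output H(x), plus misfit between optimised parameters x and
% their priors x_b. R and B are assumed observation- and prior-error covariances.
J(\mathbf{x}) =
  \tfrac{1}{2}\,\big(\mathbf{y} - H(\mathbf{x})\big)^{\!\top} \mathbf{R}^{-1}
                \big(\mathbf{y} - H(\mathbf{x})\big)
  + \tfrac{1}{2}\,\big(\mathbf{x} - \mathbf{x}_b\big)^{\!\top} \mathbf{B}^{-1}
                \big(\mathbf{x} - \mathbf{x}_b\big)
```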
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grigorievsky, V I; Tezadov, Ya A
2016-03-31
The reported study is aimed at increasing the power in the transmission path of a lidar with Raman amplification for long-path sensing of methane by optimising the frequency-modulated characteristics of the output radiation. The pump current of the used distributed-feedback master laser was modulated by a linear-frequency signal with simultaneous application of a non-synchronous high-frequency signal. For such a modulation regime, the Raman amplifier provided the mean output power of 2.5 W at a wavelength of 1650 nm. The spectral broadening did not significantly decrease the lidar sensitivity at long paths. (lidars)
Stochastic optimisation of water allocation on a global scale
NASA Astrophysics Data System (ADS)
Schmitz, Oliver; Straatsma, Menno; Karssenberg, Derek; Bierkens, Marc F. P.
2014-05-01
Climate change, increasing population and further economic developments are expected to increase water scarcity for many regions of the world. Optimal water management strategies are required to minimise the water gap between water supply and domestic, industrial and agricultural water demand. A crucial aspect of water allocation is the spatial scale of optimisation. Blue water supply peaks at the upstream parts of large catchments, whereas demands are often largest at the industrialised downstream parts. Two extremes exist in water allocation: (i) 'First come, first serve,' which allows the upstream water demands to be fulfilled without consideration of downstream demands, and (ii) 'All for one, one for all,' which satisfies water allocation over the whole catchment. In practice, water treaties govern intermediate solutions. The objective of this study is to determine the effect of these two end members on water allocation optimisation with respect to water scarcity. We conduct this study on a global scale with the year 2100 as temporal horizon. Water supply is calculated using the hydrological model PCR-GLOBWB, operating at a 5 arcminutes resolution and a daily time step. PCR-GLOBWB is forced with temperature and precipitation fields from the Hadgem2-ES global circulation model that participated in the latest coupled model intercomparison project (CMIP5). Water demands are calculated for representative concentration pathway 6.0 (RCP 6.0) and shared socio-economic pathway scenario 2 (SSP2). To enable the fast computation of the optimisation, we developed a hydrologically correct network of 1800 basin segments with an average size of 100 000 square kilometres. The maximum number of nodes in a network was 140 for the Amazon Basin. Water demands and supplies are aggregated to cubic kilometres per month per segment. A new open-source implementation is developed for the stochastic optimisation of the water allocation. We apply a Genetic Algorithm for each segment to estimate the set of parameters that distribute the water supply for each node. We use the Python programming language and a flexible software architecture that allows us to straightforwardly 1) exchange the process description for the nodes such that different water allocation schemes can be tested, 2) exchange the objective function, 3) apply the optimisation either to the whole catchment or to different sub-levels, and 4) use multi-core CPUs concurrently, thereby reducing computation time. We demonstrate the application of the scientific workflow to the model outputs of PCR-GLOBWB and present first results on how water scarcity depends on the choice between the two extremes in water allocation.
Olivero, Sergio J Pérez; Trujillo, Juan P Pérez
2011-06-24
A new analytical method for the determination of nine short-chain fatty acids (acetic, propionic, isobutyric, butyric, isovaleric, 2-methylbutyric, hexanoic, octanoic and decanoic acids) in wines using the automated HS/SPME-GC-ITMS technique was developed and optimised. Five different SPME fibers were tested and the influence of different factors such as temperature and time of extraction, temperature and time of desorption, pH, ionic strength, tannins, anthocyans, SO(2), sugar and ethanol content was studied and optimised using model solutions. Some analytes showed a matrix effect, so a study of recoveries was performed. The proposed HS/SPME-GC-ITMS method, which covers the concentration range of the different analytes in wines, showed wide linear ranges, values of repeatability and reproducibility lower than 4.0% of RSD and detection limits between 3 and 257 μgL(-1), lower than the olfactory thresholds. The optimised method is a suitable technique for the quantitative analysis of short-chain fatty acids from the aliphatic series in real samples of white, rose and red wines. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Behera, Kishore Kumar; Pal, Snehanshu
2018-03-01
This paper describes a new approach towards optimum utilisation of ferrochrome added during stainless steel making in an AOD converter. The objective of optimisation is to enhance the end-blow chromium content of steel and reduce the ferrochrome addition during refining. By developing a thermodynamics-based mathematical model, a study has been conducted to compute the optimum trade-off between ferrochrome addition and end-blow chromium content of stainless steel using a predator-prey genetic algorithm trained on a dataset of 100 records, considering different input and output variables such as oxygen, argon and nitrogen blowing rates, duration of blowing, initial bath temperature, chromium and carbon content, and weight of ferrochrome added during refining. Optimisation is performed within constraints imposed on the input parameters, whose values fall within certain ranges. The analysis of Pareto fronts is observed to generate a set of feasible optimal solutions between the two conflicting objectives that provides an effective guideline for better ferrochrome utilisation. It is found that after a certain critical range, further addition of ferrochrome does not affect the chromium percentage of steel. Single-variable response analysis is performed to study the variation and interaction of all individual input parameters on output variables.
Prediction of road traffic death rate using neural networks optimised by genetic algorithm.
Jafari, Seyed Ali; Jahandideh, Sepideh; Jahandideh, Mina; Asadabadi, Ebrahim Barzegari
2015-01-01
Road traffic injuries (RTIs) are recognised as a main cause of public health problems at global, regional and national levels. Therefore, prediction of the road traffic death rate will be helpful in its management. Based on this fact, we used an artificial neural network model optimised through a Genetic algorithm to predict mortality. In this study, a five-fold cross-validation procedure on a data set containing a total of 178 countries was used to verify the performance of models. The best-fit model was selected according to the root mean square error (RMSE). The Genetic algorithm, as a powerful model which has not been introduced in prediction of mortality to this extent in previous studies, showed high performance. The lowest RMSE obtained was 0.0808. Such satisfactory results could be attributed to the use of the Genetic algorithm as a powerful optimiser which selects the best input feature set to be fed into the neural networks. Seven factors were identified as the most influential on the road traffic mortality rate and were predicted with high accuracy. The results show that our model is very promising and may play a useful role in developing a better method for assessing the influence of road traffic mortality risk factors.
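The evaluation protocol described above (five-fold cross-validation with RMSE as the selection criterion) can be sketched as follows. This is not the authors' model: their network was GA-optimised, whereas the sketch uses a plain multilayer perceptron on synthetic data, and the use of scikit-learn is an assumption made purely for illustration.

```python
# Sketch of five-fold cross-validation with RMSE as the model-selection metric,
# on synthetic data (178 "countries", 7 candidate risk factors). Not the study's
# GA-optimised network; just the evaluation loop.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(178, 7))                                   # synthetic predictors
y = X @ rng.normal(size=7) + rng.normal(scale=0.1, size=178)    # synthetic death rates

rmses = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    rmses.append(np.sqrt(mean_squared_error(y[test_idx], pred)))

print(f"mean cross-validated RMSE: {np.mean(rmses):.4f}")
```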
Formulation of multiparticulate systems as lyophilised orally disintegrating tablets.
Alhusban, Farhan; Perrie, Yvonne; Mohammed, Afzal R
2011-11-01
The current study aimed to exploit the electrostatic associative interaction between carrageenan and gelatin to optimise a formulation of lyophilised orally disintegrating tablets (ODTs) suitable for multiparticulate delivery. A central composite face centred (CCF) design was applied to study the influence of formulation variables (gelatin, carrageenan and alanine concentrations) on the crucial responses of the formulation (disintegration time, hardness, viscosity and pH). The disintegration time and viscosity were controlled by the associative interaction between gelatin and carrageenan upon hydration, which forms a strong complex that increases the viscosity of the stock solution and forms tablets with higher resistance to disintegration in aqueous medium. Therefore, the levels of carrageenan, gelatin and their interaction in the formulation were the significant factors. In terms of hardness, increasing gelatin and alanine concentration was the most effective way to improve tablet hardness. Accordingly, optimum concentrations of these excipients were needed to find the best balance that fulfilled all formulation requirements. The revised model showed a high degree of predictability and optimisation reliability and therefore was successful in developing an ODT formulation with optimised properties that was able to deliver enteric-coated multiparticulates of omeprazole without compromising their functionality. Copyright © 2011 Elsevier B.V. All rights reserved.
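A face-centred central composite design of the kind applied above is easy to enumerate in coded units: two-level factorial corner points, face-centred axial points (alpha = 1) and centre replicates. The sketch below uses the three factor names from the abstract; the number of centre replicates is an illustrative choice, not the study's actual run plan.

```python
# Sketch of a three-factor face-centred central composite (CCF) design in coded
# units: 2^3 corner points, 6 face-centred axial points, plus centre replicates.
from itertools import product

factors = ["gelatin", "carrageenan", "alanine"]
k = len(factors)

factorial = list(product([-1, 1], repeat=k))                 # 2^3 corner points
axial = [tuple(level if j == i else 0 for j in range(k))     # face-centred axial points
         for i in range(k) for level in (-1, 1)]
centre = [(0, 0, 0)] * 3                                     # centre replicates (illustrative)

design = factorial + axial + centre
print(f"{len(design)} runs")
for run in design:
    print(dict(zip(factors, run)))
```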
NASA Astrophysics Data System (ADS)
Uyttenhove, W.; Sobolev, V.; Maschek, W.
2011-09-01
A potential option for neutralization of minor actinides (MA) accumulated in spent nuclear fuel of light water reactors (LWRs) is their transmutation in dedicated accelerator-driven systems (ADS). A promising fuel candidate dedicated to MA transmutation is a CERMET composite with a Mo metal matrix and (Pu, Np, Am, Cm)O2-x fuel particles. Results of optimisation studies of the CERMET fuel aimed at increasing the MA transmutation efficiency of the EFIT (European Facility for Industrial Transmutation) core are presented. In the adopted strategy of MA burning the plutonium (Pu) balance of the core is minimized, allowing a reduction in the reactivity swing and the peak power form-factor deviation and an extension of the cycle duration. The MA/Pu ratio is used as a variable for the fuel optimisation studies. The efficiency of MA transmutation is close to the foreseen theoretical value of 42 kg TW⁻¹ h⁻¹ when the level of Pu in the actinide mixture is about 40 wt.%. The obtained results are compared with the reference case of the EFIT core loaded with the composite CERCER fuel, where fuel particles are incorporated in a ceramic magnesia matrix. The results of this study offer additional information for the EFIT fuel selection.
Soh, Josephine Lay Peng; Grachet, Maud; Whitlock, Mark; Lukas, Timothy
2013-02-01
This is a study to fully assess a commercially available co-processed mannitol for its usefulness as an off-the-shelf excipient for developing orally disintegrating tablets (ODTs) by direct compression on a pilot scale (up to 4 kg). This work encompassed material characterization, formulation optimisation and process robustness. Overall, this co-processed mannitol possessed favourable physical attributes including low hygroscopicity and compactibility. Two design-of-experiments (DoEs) were used to screen and optimise the placebo formulation. Xylitol and crospovidone concentrations were found to have the most significant impact on disintegration time (p < 0.05). Higher xylitol concentrations retarded disintegration. Avicel PH102 promoted faster disintegration than PH101, at higher levels of xylitol. Without xylitol, higher crospovidone concentrations yielded faster disintegration and reduced tablet friability. Lubrication sensitivity studies were later conducted at two fill loads, three levels of lubricant concentration and number of blend revolutions. Even at 75% fill load, the design space plot showed that 1.5% lubricant and 300 blend revolutions were sufficient to manufacture ODTs with ≤ 0.1% friability that disintegrated within 15 s. This study also describes results using a modified disintegration method based on the texture analyzer as an alternative to the USP method.
-Omic and Electronic Health Record Big Data Analytics for Precision Medicine.
Wu, Po-Yen; Cheng, Chih-Wen; Kaddi, Chanchala D; Venugopalan, Janani; Hoffman, Ryan; Wang, May D
2017-02-01
Rapid advances of high-throughput technologies and wide adoption of electronic health records (EHRs) have led to fast accumulation of -omic and EHR data. These voluminous complex data contain abundant information for precision medicine, and big data analytics can extract such knowledge to improve the quality of healthcare. In this paper, we present -omic and EHR data characteristics, associated challenges, and data analytics including data preprocessing, mining, and modeling. To demonstrate how big data analytics enables precision medicine, we provide two case studies, including identifying disease biomarkers from multi-omic data and incorporating -omic information into EHR. Big data analytics is able to address -omic and EHR data challenges for paradigm shift toward precision medicine. Big data analytics makes sense of -omic and EHR data to improve healthcare outcome. It has long lasting societal impact.
Anger and hostility from the perspective of the Big Five personality model.
Sanz, Jesús; García-Vera, María Paz; Magán, Inés
2010-06-01
This study was aimed at examining the relationships of the personality dimensions of the five-factor model or Big Five with trait anger and with two specific traits of hostility (mistrust and confrontational attitude), and identifying the similarities and differences between trait anger and hostility in the framework of the Big Five. In a sample of 353 male and female adults, the Big Five explained a significant percentage of individual differences in trait anger and hostility after controlling the effects due to the relationship between both constructs and content overlapping across scales. In addition, trait anger was primarily associated with neuroticism, whereas mistrust and confrontational attitude were principally related to low agreeableness. These findings are discussed in the context of the anger-hostility-aggression syndrome and the capability of the Big Five for organizing and clarifying related personality constructs.
Recent Development in Big Data Analytics for Business Operations and Risk Management.
Choi, Tsan-Ming; Chan, Hing Kai; Yue, Xiaohang
2017-01-01
"Big data" is an emerging topic and has attracted the attention of many researchers and practitioners in industrial systems engineering and cybernetics. Big data analytics would definitely lead to valuable knowledge for many organizations. Business operations and risk management can be a beneficiary as there are many data collection channels in the related industrial systems (e.g., wireless sensor networks, Internet-based systems, etc.). Big data research, however, is still in its infancy. Its focus is rather unclear and related studies are not well amalgamated. This paper aims to present the challenges and opportunities of big data analytics in this unique application domain. Technological development and advances for industrial-based business systems, reliability and security of industrial systems, and their operational risk management are examined. Important areas for future research are also discussed and revealed.
Research on Durability of Big Recycled Aggregate Self-Compacting Concrete Beam
NASA Astrophysics Data System (ADS)
Gao, Shuai; Liu, Xuliang; Li, Jing; Li, Juan; Wang, Chang; Zheng, Jinkai
2018-03-01
Deflection and crack width are the most important durability indexes, which play a pivotal role in the popularization and application of the Big Recycled Aggregate Self-Compacting Concrete technology. In this research, a comparative study of the Big Recycled Aggregate Self-Compacting Concrete Beam and an ordinary concrete beam was conducted by measuring the deflection and crack width indexes. The results show that both kinds of concrete beams have almost equal mid-span deflection values and differ only slightly in the maximum crack width. This indicates that the Big Recycled Aggregate Self-Compacting Concrete Beam will be a good substitute for the ordinary concrete beam in some less critical structural projects.
Big five personality factors and suicide rates in the United States: a state-level analysis.
Voracek, Martin
2009-08-01
Partly replicating findings from several cross-national studies (of Lester and of Voracek) on possible aggregate-level associations between personality and suicide prevalence, state-level analysis within the United States yielded significantly negative associations between the Big Five factor of Neuroticism and suicide rates. This effect was observed for historical as well as contemporary suicide rates of the total or the elderly population and was preserved with controls for the four other Big Five factors and measures of state wealth. Also conforming to cross-national findings, the Big Five factors of Agreeableness and Extraversion were negatively, albeit not reliably, associated with suicide rates.
Optimisation of a Generic Ionic Model of Cardiac Myocyte Electrical Activity
Guo, Tianruo; Al Abed, Amr; Lovell, Nigel H.; Dokos, Socrates
2013-01-01
A generic cardiomyocyte ionic model, whose complexity lies between a simple phenomenological formulation and a biophysically detailed ionic membrane current description, is presented. The model provides a user-defined number of ionic currents, employing two-gate Hodgkin-Huxley type kinetics. Its generic nature allows accurate reconstruction of action potential waveforms recorded experimentally from a range of cardiac myocytes. Using a multiobjective optimisation approach, the generic ionic model was optimised to accurately reproduce multiple action potential waveforms recorded from central and peripheral sinoatrial nodes and right atrial and left atrial myocytes from rabbit cardiac tissue preparations, under different electrical stimulus protocols and pharmacological conditions. When fitted simultaneously to multiple datasets, the time course of several physiologically realistic ionic currents could be reconstructed. Model behaviours tend to be well identified when extra experimental information is incorporated into the optimisation. PMID:23710254
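Each current in a generic model of the kind described above follows the familiar two-gate Hodgkin-Huxley form I = g·m^p·h^q·(V − E), with each gate relaxing towards a voltage-dependent steady state. The sketch below illustrates that structure only; the rate functions, time constants and parameter values are arbitrary placeholders, not the model's optimised values.

```python
# Generic two-gate Hodgkin-Huxley-type ionic current: I = g * m^p * h^q * (V - E),
# with activation gate m and inactivation gate h relaxing towards voltage-dependent
# steady states. All parameters are illustrative placeholders.
import numpy as np

def gate_steady_state(v, v_half, slope):
    return 1.0 / (1.0 + np.exp(-(v - v_half) / slope))

def simulate_current(v=-30.0, g=1.0, E=-85.0, p=3, q=1, dt=0.01, t_end=50.0):
    m, h = 0.0, 1.0                      # initial gate states
    t = np.arange(0.0, t_end, dt)        # time in ms
    current = np.empty_like(t)
    for i, _ in enumerate(t):
        m_inf = gate_steady_state(v, v_half=-40.0, slope=5.0)    # activation
        h_inf = gate_steady_state(v, v_half=-60.0, slope=-5.0)   # inactivation
        m += dt * (m_inf - m) / 1.0      # tau_m = 1 ms (placeholder)
        h += dt * (h_inf - h) / 10.0     # tau_h = 10 ms (placeholder)
        current[i] = g * m**p * h**q * (v - E)
    return t, current

t, I = simulate_current()
print(f"peak current during a step to -30 mV: {I.max():.3f} (arbitrary units)")
```

Fitting such a model to multiple recorded action potentials then amounts to adjusting g, E, the exponents and the gate kinetics of every current simultaneously, which is what the multiobjective optimisation in the study does.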
Load-sensitive dynamic workflow re-orchestration and optimisation for faster patient healthcare.
Meli, Christopher L; Khalil, Ibrahim; Tari, Zahir
2014-01-01
Hospital waiting times are considerably long, with no signs of reducing any time soon. A number of factors including population growth, the ageing population and a lack of new infrastructure are expected to further exacerbate waiting times in the near future. In this work, we show how healthcare services can be modelled as queueing nodes, together with healthcare service workflows, such that these workflows can be optimised during execution in order to reduce patient waiting times. Services such as X-ray, computed tomography, and magnetic resonance imaging often form queues; thus, by taking into account the waiting times of each service, the workflow can be re-orchestrated and optimised. Experimental results indicate that average waiting time reductions are achievable by optimising workflows using dynamic re-orchestration. Crown Copyright © 2013. Published by Elsevier Ireland Ltd. All rights reserved.
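A crude illustration of the idea: if several order-independent diagnostic services must all be visited, the workflow can be re-orchestrated so the patient is routed to the least-congested service first. The sketch below assumes M/M/1 queueing behaviour for each node (an assumption not stated in the abstract) and uses made-up arrival and service rates.

```python
# Sketch of load-sensitive re-orchestration: visit whichever service currently has
# the shortest expected queueing delay first. M/M/1 waiting times and the rates
# below are illustrative assumptions, not the paper's model.
def mm1_wait(arrival_rate, service_rate):
    """Expected time in queue (excluding service) for an M/M/1 node."""
    if arrival_rate >= service_rate:
        return float("inf")  # unstable queue
    return arrival_rate / (service_rate * (service_rate - arrival_rate))

services = {               # per-hour arrival and service rates (made up)
    "x-ray": (8.0, 10.0),
    "ct":    (4.0, 5.0),
    "mri":   (2.0, 2.5),
}

# Re-orchestrated order: least-congested service first.
order = sorted(services, key=lambda s: mm1_wait(*services[s]))
for name in order:
    print(f"{name}: expected wait {mm1_wait(*services[name]):.2f} h")
```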
NASA Astrophysics Data System (ADS)
Grundmann, J.; Schütze, N.; Heck, V.
2014-09-01
Groundwater systems in arid coastal regions are particularly at risk due to limited potential for groundwater replenishment and increasing water demand, caused by a continuously growing population. For ensuring a sustainable management of those regions, we developed a new simulation-based integrated water management system. The management system unites process modelling with artificial intelligence tools and evolutionary optimisation techniques for managing both water quality and water quantity of a strongly coupled groundwater-agriculture system. Due to the large number of decision variables, a decomposition approach is applied to separate the original large optimisation problem into smaller, independent optimisation problems which finally allow for faster and more reliable solutions. It consists of an analytical inner optimisation loop to achieve the most profitable agricultural production for a given amount of water and an outer simulation-based optimisation loop to find the optimal groundwater abstraction pattern. Thereby, the behaviour of farms is described by crop-water-production functions and the aquifer response, including the seawater interface, is simulated by an artificial neural network. The methodology is applied exemplarily for the south Batinah region/Oman, which is affected by saltwater intrusion into a coastal aquifer system due to excessive groundwater withdrawal for irrigated agriculture. Due to conflicting objectives such as profit-oriented agriculture vs aquifer sustainability, a multi-objective optimisation is performed which can provide sustainable solutions for water and agricultural management over long-term periods at farm and regional scales with respect to water resources, environment, and socio-economic development.
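The decomposition described above can be caricatured in a few lines: an inner step maps an allocated water volume to farm profit through a crop-water production function, and an outer search varies groundwater abstraction under a sustainability constraint. The functional form, the recharge cap and all numbers below are invented for illustration and stand in for the study's neural-network aquifer surrogate and calibrated production functions.

```python
# Toy illustration of the inner/outer decomposition: inner analytic profit for a
# given water volume (invented concave production function), outer search over
# annual abstraction capped by recharge as a crude sustainability proxy.
import numpy as np

def farm_profit(water_mcm):
    """Inner loop: profit for a given water allocation (illustrative function)."""
    yield_t = 40.0 * (1.0 - np.exp(-water_mcm / 25.0))   # diminishing returns
    return 300.0 * yield_t - 8.0 * water_mcm             # revenue minus pumping cost

recharge_mcm = 60.0                                      # assumed annual recharge
abstractions = np.linspace(0.0, 120.0, 241)

feasible = abstractions[abstractions <= recharge_mcm]    # outer-loop constraint
profits = np.array([farm_profit(a) for a in feasible])
best = feasible[profits.argmax()]

print(f"best sustainable abstraction about {best:.1f} Mm3/yr, profit about {profits.max():.0f}")
```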
Optimisation of Fabric Reinforced Polymer Composites Using a Variant of Genetic Algorithm
NASA Astrophysics Data System (ADS)
Axinte, Andrei; Taranu, Nicolae; Bejan, Liliana; Hudisteanu, Iuliana
2017-12-01
Fabric reinforced polymeric composites are high performance materials with a rather complex fabric geometry. Therefore, modelling this type of material is a cumbersome task, especially when an efficient use is targeted. One of the most important issues of its design process is the optimisation of the individual laminae and of the laminated structure as a whole. In order to do that, a parametric model of the material has been defined, emphasising the many geometric variables needed to be correlated in the complex process of optimisation. The input parameters involved in this work include: widths or heights of the tows and the laminate stacking sequence, which are discrete variables, while the gaps between adjacent tows and the height of the neat matrix are continuous variables. This work is one of the first attempts at using a Genetic Algorithm (GA) to optimise the geometrical parameters of satin reinforced multi-layer composites. Given the mixed type of the input parameters involved, an original software called SOMGA (Satin Optimisation with a Modified Genetic Algorithm) has been conceived and utilised in this work. The main goal is to find the best possible solution to the problem of designing a composite material which is able to withstand a given set of external, in-plane loads. The optimisation process has been performed using a fitness function which can analyse and compare the mechanical behaviour of different fabric reinforced composites, the results being correlated with the ultimate strains, which demonstrate the efficiency of the composite structure.
NASA Astrophysics Data System (ADS)
Vicari, Rosa; Schertzer, Daniel; Deutsch, Jean-Claude; Moilleron, Regis
2015-04-01
Since the 1990s, climate and environmental science communication has gradually become a priority of policy programmes, a consolidated subject of training and education, and a developed and greatly expanded field of professional practices. However, in contrast to this very fast evolution there is presumably a deficit in terms of research and reflection on objective tools to assess the quality and impact of communication activities. The quality of communication in the field of science has become more and more challenging due to the fact that the role of traditional mediators (e.g. well reputed newspapers or broadcasters, science museums), which used to be considered quality guarantors, has now become marginal. Today, a new generation of communication professionals tends to be employed by research institutes to respond to a stronger request to develop accountable research projects, to increase transparency and trust, and to support the dissemination and implementation of research findings. This research aims to understand how communication strategies, addressed to the general public, can optimise the impact of research findings in hydrology for resilient cities. The research will greatly benefit from the development of automated analysis of unstructured Big Data that allows the exploration of huge amounts of digital communication data: blogs, social networks postings, public speeches, press releases, publications, articles... Furthermore, these techniques facilitate the crossing of socio-economic and physical-environmental data and possibly lead to the identification of existing correlations. Case studies correspond to those of several research projects under the umbrella of the Chair "Hydrology for resilient cities" aimed at developing and testing new solutions in urban hydrology that will contribute to the resilience of our cities to extreme weather. This research was initiated in the framework of the Interreg IVB project RAINGAIN and pursued in the project Blue Green Dream of the EU KIC Climate and in worldwide collaborations (e.g. TOMACS). These projects involve awareness raising and capacity building activities aimed at stimulating cooperation between scientists, professionals (e.g. water managers, urban planners) and beneficiaries (e.g. concerned citizens, policy makers). They give credence to the fact that the key question is not if geoscientists can act as communicators, but how to develop synergies with various actors of geoscience communication with the help of an enlargement of their scientific practices, rather than a detrimental reduction of them.
Effects of a Preschool and Kindergarten Mathematics Curriculum: Big Math for Little Kids
ERIC Educational Resources Information Center
Presser, Ashley Lewis; Clements, Margaret; Ginsburg, Herbert; Ertle, Barbrina
2012-01-01
"Research Findings: Big Math for Little Kids" ("BMLK") is a mathematics curriculum designed for 4- and 5-year-old children. In this study, the curriculum was evaluated for effectiveness over two years, using a cluster-randomized controlled study. Over 750 children participated in the study and experienced either the…
Smaggus, Andrew; Mrkobrada, Marko; Marson, Alanna; Appleton, Andrew
2018-01-01
The quality and safety movement has reinvigorated interest in optimising morbidity and mortality (M&M) rounds. We performed a systematic review to identify effective means of updating M&M rounds to (1) identify and address quality and safety issues, and (2) address contemporary educational goals. Relevant databases (Medline, Embase, PubMed, Education Resource Information Centre, Cumulative Index to Nursing and Allied Health Literature, Healthstar, and Global Health) were searched to identify primary sources. Studies were included if they (1) investigated an intervention applied to M&M rounds, (2) reported outcomes relevant to the identification of quality and safety issues, or educational outcomes relevant to quality improvement (QI), patient safety or general medical education and (3) included a control group. Study quality was assessed using the Medical Education Research Study Quality Instrument and Newcastle-Ottawa Scale-Education instruments. Given the heterogeneity of interventions and outcome measures, results were analysed thematically. The final analysis included 19 studies. We identified multiple effective strategies (updating objectives, standardising elements of rounds and attaching rounds to a formal quality committee) to optimise M&M rounds for a QI/safety purpose. These efforts were associated with successful integration of quality and safety content into rounds, and increased implementation of QI interventions. Consistent effects on educational outcomes were difficult to identify, likely due to the use of methodologies ill-fitted for educational research. These results are encouraging for those seeking to optimise the quality and safety mission of M&M rounds. However, the inability to identify consistent educational effects suggests the investigation of M&M rounds could benefit from additional methodologies (qualitative, mixed methods) in order to understand the complex mechanisms driving learning at M&M rounds. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Baay, Pieter E; van Aken, Marcel A G; de Ridder, Denise T D; van der Lippe, Tanja
2014-07-01
The school-to-work transition constitutes a central developmental task for adolescents. The role of Big Five personality traits in this has received some scientific attention, but prior research has been inconsistent and paid little attention to mechanisms through which personality traits influence job-search outcomes. The current study proposed that the joint effects of Big Five personality traits and social capital (i.e., available resources through social relations) would shed more light on adolescents' job-search outcomes. Analyses on 685 Dutch vocational training graduates showed that extraversion and emotional stability were related to better job-search outcomes after graduation. Some relations between Big Five personality traits and job-search outcomes were explained by social capital, but no relations were dependent on social capital. Social capital had a direct relation with the number of job offers. Contrary to popular belief, this study shows that Big Five personality traits and social capital relate to job-search outcomes largely independently. Copyright © 2014 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.
Thielmann, Isabel; Hilbig, Benjamin E; Zettler, Ingo; Moshagen, Morten
2017-12-01
Recent developments in personality research led to the proposition of two alternative six-factor trait models, the HEXACO model and the Big Six model. However, given the lack of direct comparisons, it is unclear whether the HEXACO and Big Six factors are distinct or essentially equivalent, that is, whether corresponding inventories measure similar or distinct personality traits. Using Structural Equation Modeling (Study 1), we found substantial differences between the traits as measured via the HEXACO-60 and the 30-item Questionnaire Big Six (30QB6), particularly for Honesty-Humility and Honesty-Propriety (both model's critical difference to the Big Five approach). This distinction was further supported by Study 2, showing differential capabilities of the HEXACO-60 and the 30QB6 to account for several criteria representing the theoretical core of Honesty-Humility and/or Honesty-Propriety. Specifically, unlike the indicator of Honesty-Humility, the indicator of Honesty-Propriety showed low predictive power for some conceptually relevant criteria, suggesting a limited validity of the 30QB6.
Big Five Personality Factors and Facets as Predictors of Openness to Diversity.
Han, Suejung; Pistole, M Carole
2017-11-17
Openness to diversity is a crucial component of the cultural competence needed in an increasingly diversified modern society and a necessary condition for benefitting from diversity contacts and interventions (e.g., diversity training, cultural courses). Responding to the recent call for more research on personality and its relation to diversity outcomes, we examined the associations between Big Five personality (i.e., Openness to Experience, Agreeableness, Extraversion, Neuroticism, and Conscientiousness) higher order factors and lower order facets and universal-diverse orientation (i.e., an open attitude of appreciating human universality and diversity; Miville et al., 1999). In Study 1 (N = 338), a web survey on Big Five factors, Openness to Experience and Agreeableness were significantly associated with universal-diverse orientation. In Study 2 (N = 176), a paper survey on both Big Five factors and facets, Openness to Experience, low Neuroticism, Conscientiousness, and various lower-order facets of all the Big Five traits were significantly associated with universal-diverse orientation. Practical implications were suggested on how personality facets could be incorporated into current diversity interventions to enhance their effectiveness in promoting openness to diversity.
Big-Data Based Decision-Support Systems to Improve Clinicians' Cognition.
Roosan, Don; Samore, Matthew; Jones, Makoto; Livnat, Yarden; Clutter, Justin
2016-01-01
Complex clinical decision-making could be facilitated by using population health data to inform clinicians. In two previous studies, we interviewed 16 infectious disease experts to understand complex clinical reasoning. For this study, we focused on the experts' answers about how clinical reasoning can be supported by population-based Big-Data. We found that cognitive strategies such as trajectory tracking, perspective taking, and metacognition have the potential to improve clinicians' cognition when dealing with complex problems. These cognitive strategies could be supported by population health data, and all have important implications for the design of Big-Data based decision-support tools that could be embedded in electronic health records. Our findings provide directions for task allocation and for the design of Big-Data based decision-support applications in the health care industry.
Banerjee, Sanjib; Li, He J; Tsaousis, Konstantinos T; Tabin, Geoffrey C
2016-11-04
To report the achievement rate of bare Descemet membrane (DM) dissection with the help of the microbubble incision technique in eyes with failed big bubble formation, and to investigate the mechanism of the microbubble rescue technique through ex vivo imaging of human cadaver corneas. This retrospective clinical study included 80 eyes of 80 patients that underwent deep anterior lamellar keratoplasty (DALK). In 22/80 (27.5%) cases, big bubble dissection failed. After puncturing the microbubbles, viscodissection helped to achieve separation of DM from the remaining stroma. In addition, an ex vivo study with human cadaver cornea specimens, gross photography, and anterior segment optical coherence tomography imaging was carried out to explore the mechanism of this method. The microbubble dissection technique led to successful DALK in 19 of 22 cases of failed big bubble. Microperforation occurred in 3 eyes. Deep anterior lamellar keratoplasty was completed without any complications in 2 of the 3 eyes with microperforation. In 1 eye, conversion to penetrating keratoplasty was required. Microbubble-guided viscodissection achieved 95.4% (21/22) success in exposing bare DM in failed big-bubble cases of DALK. Anterior segment optical coherence tomography imaging of cadaver eyes showed where these microbubbles were concentrated and their related size. Microbubble-guided DALK should be considered an effective rescue technique for achieving bare DM in eyes with failed big bubble. Our ex vivo experiment illustrated the possible alterations in cornea anatomy during this technique.
Heavy liquid metals: Research programs at PSI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takeda, Y.
1996-06-01
The author describes work at PSI on thermohydraulics, thermal shock, and material tests for mechanical properties. In the presentation, the focus is on two main programs. (1) SINQ LBE target: The phase II study program for SINQ is planned. A new LBE loop is being constructed. The study has the following three objectives: (a) Pump study - design work on an electromagnetic pump to be integrated into the target. (b) Heat pipe performance test - the use of heat pipes as an additional component of the target cooling system is being considered, and it may be a way to further decouple the liquid metal and water coolant loops. (c) Mixed convection experiment - in order to find an optimal configuration of the additional flow guide for window cooling, mixed convection around the window is to be studied. The experiment will be started using water and then with LBE. (2) ESS Mercury target: For the ESS target study, the following experimental studies are planned, some of which are exemplified by trial experiments. (a) Flow around the window: Flow mapping around the hemi-cylindrical window will be made for optimising the flow channels and structures, (b) Geometry optimisation for minimizing a recirculation zone behind the edge of the flow separator, (c) Flow induced vibration and buckling problem for an optimised structure of the flow separator and (d) Gas-liquid two-phase flow will be studied by starting to establish the new experimental method of measuring various kinds of two-phase flow characteristics.
Liyanage, H; de Lusignan, S; Liaw, S-T; Kuziemsky, C E; Mold, F; Krause, P; Fleming, D; Jones, S
2014-08-15
Generally, benefits and risks of vaccines can be determined from studies carried out as part of regulatory compliance, followed by surveillance of routine data; however, there are some rarer and longer term events that require new methods. Big data generated by increasingly affordable personalised computing and pervasive computing devices is growing rapidly, and low-cost, high-volume cloud computing makes the processing of these data inexpensive. To describe how big data and related analytical methods might be applied to assess the benefits and risks of vaccines. We reviewed the literature on the use of big data to improve health, applied to generic vaccine use cases that illustrate the benefits and risks of vaccination. We defined a use case as the interaction between a user and an information system to achieve a goal. We used flu vaccination and pre-school childhood immunisation as exemplars. We reviewed three big data use cases relevant to assessing vaccine benefits and risks: (i) Big data processing using crowdsourcing, distributed big data processing, and predictive analytics, (ii) Data integration from heterogeneous big data sources, e.g. the increasing range of devices in the "internet of things", and (iii) Real-time monitoring for the direct monitoring of epidemics as well as vaccine effects via social media and other data sources. Big data raises new ethical dilemmas, though its analysis methods can bring complementary real-time capabilities for monitoring epidemics and assessing vaccine benefit-risk balance.
Statistical Challenges in "Big Data" Human Neuroimaging.
Smith, Stephen M; Nichols, Thomas E
2018-01-17
Smith and Nichols discuss "big data" human neuroimaging studies, with very large subject numbers and amounts of data. These studies provide great opportunities for making new discoveries about the brain but raise many new analytical challenges and interpretational risks. Copyright © 2017 Elsevier Inc. All rights reserved.
On the dynamic rounding-off in analogue and RF optimal circuit sizing
NASA Astrophysics Data System (ADS)
Kotti, Mouna; Fakhfakh, Mourad; Fino, Maria Helena
2014-04-01
Frequently used approaches to solve discrete multivariable optimisation problems consist of computing solutions using a continuous optimisation technique. Then, using heuristics, the variables are rounded off to their nearest available discrete values to obtain a discrete solution. Indeed, in many engineering problems, and particularly in analogue circuit design, component values, such as the geometric dimensions of the transistors, the number of fingers in an integrated capacitor or the number of turns in an integrated inductor, cannot be chosen arbitrarily since they have to obey some technology sizing constraints. However, rounding off the variable values a posteriori can lead to infeasible solutions (solutions that are located too close to the feasible solution frontier) or degradation of the obtained results (expulsion from the neighbourhood of a 'sharp' optimum), depending on how the added perturbation affects the solution. Discrete optimisation techniques, such as the dynamic rounding-off (DRO) technique, are therefore needed to overcome the previously mentioned situation. In this paper, we deal with an improvement of the DRO technique. We propose a particle swarm optimisation (PSO)-based DRO technique, and we show, via some analogue and RF examples, the necessity of implementing such a routine within continuous optimisation algorithms.
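To make the dynamic rounding-off idea concrete, the sketch below embeds a snapping step inside a plain particle swarm loop, so that candidate solutions are evaluated on their nearest allowed discrete values during the search rather than only after convergence. The discrete grids, the two-variable objective and all parameter values are illustrative placeholders, not taken from the paper.

```python
import numpy as np

# Hypothetical discrete grids for two design variables (e.g. transistor widths):
# these values are illustrative, not taken from the paper.
GRIDS = [np.arange(0.5, 10.5, 0.5), np.arange(1, 33)]   # allowed values per variable

def snap(x):
    """Round each continuous coordinate to its nearest allowed discrete value."""
    return np.array([g[np.argmin(np.abs(g - xi))] for g, xi in zip(GRIDS, x)])

def objective(x):
    # Placeholder cost standing in for a circuit performance metric.
    return (x[0] - 3.7) ** 2 + (x[1] - 17.2) ** 2

def pso_dro(n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo = np.array([g.min() for g in GRIDS])
    hi = np.array([g.max() for g in GRIDS])
    x = rng.uniform(lo, hi, size=(n_particles, 2))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([objective(snap(p)) for p in x])  # evaluate on the snapped point
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(snap(p)) for p in x])    # dynamic rounding during the search
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return snap(gbest), objective(snap(gbest))

print(pso_dro())
```

In a real sizing flow the objective would call a circuit simulator, and the grids would come from the technology's allowed component values.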
NASA Astrophysics Data System (ADS)
Luo, Bin; Lin, Lin; Zhong, ShiSheng
2018-02-01
In this research, we propose a preference-guided optimisation algorithm for multi-criteria decision-making (MCDM) problems with interval-valued fuzzy preferences. First, the interval-valued fuzzy preferences are decomposed into a series of precise and evenly distributed preference-vectors (reference directions) over the objectives to be optimised, on the basis of a uniform design strategy. Then the preference information is further incorporated into the preference-vectors using the boundary intersection approach, and the MCDM problem with interval-valued fuzzy preferences is reformulated into a series of single-objective optimisation sub-problems (each sub-problem corresponding to a decomposed preference-vector). Finally, a preference-guided optimisation algorithm based on MOEA/D (multi-objective evolutionary algorithm based on decomposition) is proposed to solve the sub-problems in a single run. The proposed algorithm incorporates the preference-vectors within the optimisation process to guide the search towards a more promising subset of the efficient solutions matching the interval-valued fuzzy preferences. Numerous test instances and an engineering application are employed to validate the performance of the proposed algorithm, and the results demonstrate its effectiveness and feasibility.
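The decomposition step can be pictured with a small stand-in: evenly spaced weight vectors are generated inside an assumed preference interval, and each one defines a scalar sub-problem. A Tchebycheff scalarisation is used here purely for brevity in place of the boundary intersection approach, and the bi-objective test function is invented for illustration.

```python
import numpy as np

def preference_vectors(lower, upper, n):
    """Evenly spaced 2-objective weight vectors whose first component lies inside
    an assumed preference interval [lower, upper] (illustrative only)."""
    w1 = np.linspace(lower, upper, n)
    return np.stack([w1, 1.0 - w1], axis=1)

def tchebycheff(f, w, z_star):
    """Scalarise objective vector f for weight w; a simple stand-in for the
    boundary intersection scalarisation used in the paper."""
    return np.max(w * np.abs(f - z_star))

# Toy bi-objective problem: f1 = x^2, f2 = (x - 2)^2 on x in [0, 2].
def evaluate(x):
    return np.array([x ** 2, (x - 2.0) ** 2])

weights = preference_vectors(0.3, 0.7, 5)     # decision-maker prefers balanced trade-offs
z_star = np.array([0.0, 0.0])                 # ideal point
xs = np.linspace(0.0, 2.0, 401)
for w in weights:
    best = min(xs, key=lambda x: tchebycheff(evaluate(x), w, z_star))
    print(w, round(best, 3), evaluate(best).round(3))
```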
Optimisation of the Management of Higher Activity Waste in the UK - 13537
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walsh, Ciara; Buckley, Matthew
2013-07-01
The Upstream Optioneering project was created in the Nuclear Decommissioning Authority (UK) to support the development and implementation of significant opportunities to optimise activities across all the phases of the Higher Activity Waste management life cycle (i.e. retrieval, characterisation, conditioning, packaging, storage, transport and disposal). The objective of the Upstream Optioneering project is to work in conjunction with other functions within NDA and the waste producers to identify and deliver solutions to optimise the management of higher activity waste. Historically, optimisation may have occurred on aspects of the waste life cycle (considered here to include retrieval, conditioning, treatment, packaging, interim storage, and transport to the final end state, which may be geological disposal). By considering the waste life cycle as a whole, critical analysis of assumed constraints may lead to cost savings for the UK taxpayer. For example, it may be possible to challenge the requirements for packaging wastes for disposal to deliver an optimised waste life cycle. It is likely that the challenges faced in the UK are shared in other countries. It is therefore likely that the opportunities identified may also apply elsewhere, with the potential for sharing information to enable value to be shared. (authors)
Optimisation of active suspension control inputs for improved performance of active safety systems
NASA Astrophysics Data System (ADS)
Čorić, Mirko; Deur, Joško; Xu, Li; Tseng, H. Eric; Hrovat, Davor
2018-01-01
A collocation-type control variable optimisation method is used to investigate the extent to which the fully active suspension (FAS) can be applied to improve the vehicle electronic stability control (ESC) performance and reduce the braking distance. First, the optimisation approach is applied to the scenario of vehicle stabilisation during the sine-with-dwell manoeuvre. The results are used to provide insights into different FAS control mechanisms for vehicle performance improvements related to responsiveness and yaw rate error reduction indices. The FAS control performance is compared to performances of the standard ESC system, optimal active brake system and combined FAS and ESC configuration. Second, the optimisation approach is employed to the task of FAS-based braking distance reduction for straight-line vehicle motion. Here, the scenarios of uniform and longitudinally or laterally non-uniform tyre-road friction coefficient are considered. The influences of limited anti-lock braking system (ABS) actuator bandwidth and limit-cycle ABS behaviour are also analysed. The optimisation results indicate that the FAS can provide competitive stabilisation performance and improved agility when compared to the ESC system, and that it can reduce the braking distance by up to 5% for distinctively non-uniform friction conditions.
Design optimisation of powers-of-two FIR filter using self-organising random immigrants GA
NASA Astrophysics Data System (ADS)
Chandra, Abhijit; Chattopadhyay, Sudipta
2015-01-01
In this communication, we propose a novel design strategy for a multiplier-less low-pass finite impulse response (FIR) filter with the aid of a recent evolutionary optimisation technique, known as the self-organising random immigrants genetic algorithm. Individual impulse response coefficients of the proposed filter have been encoded as sums of signed powers-of-two. During the formulation of the cost function for the optimisation algorithm, both the frequency response characteristic and the hardware cost of the discrete coefficient FIR filter have been considered. The role of the crossover probability of the optimisation technique has been evaluated on the overall performance of the proposed strategy. For this purpose, the convergence characteristic of the optimisation technique has been included in the simulation results. In our analysis, two design examples with different specifications have been taken into account. In order to substantiate the efficiency of our proposed structure, a number of state-of-the-art design strategies for multiplier-less FIR filters have also been included in this article for the purpose of comparison. Critical analysis of the results unambiguously establishes the usefulness of our proposed approach for the hardware-efficient design of digital filters.
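A minimal sketch of the sum-of-signed-powers-of-two encoding that makes a filter multiplier-less (each coefficient becomes a few shift-and-add terms) is shown below. The greedy term allocation and the example coefficients are illustrative assumptions; the paper instead searches the encoding with a genetic algorithm and a cost function that also weighs the frequency response.

```python
import numpy as np

def spt_encode(c, n_terms=3, min_exp=-8):
    """Greedy sum-of-signed-powers-of-two approximation of coefficient c.
    Returns a list of (sign, exponent) terms; n_terms bounds the adder cost."""
    terms, residual = [], c
    for _ in range(n_terms):
        if residual == 0:
            break
        exp = int(np.clip(np.round(np.log2(abs(residual))), min_exp, 0))
        sign = 1 if residual > 0 else -1
        terms.append((sign, exp))
        residual -= sign * 2.0 ** exp
    return terms

def spt_value(terms):
    """Reconstruct the coefficient value realised by the shift-and-add terms."""
    return sum(s * 2.0 ** e for s, e in terms)

coeffs = [0.3012, -0.1179, 0.0417]            # illustrative impulse-response values
for c in coeffs:
    t = spt_encode(c)
    print(c, t, round(spt_value(t), 4))
```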
McEvoy, Eamon; Donegan, Sheila; Power, Joe; Altria, Kevin
2007-05-09
A rapid and efficient oil-in-water microemulsion liquid chromatographic (MELC) method has been optimised and validated for the analysis of paracetamol in a suppository formulation. Excellent linearity, accuracy, precision and assay results were obtained. Lengthy sample pre-treatment/extraction procedures were eliminated due to the solubilising power of the microemulsion, and rapid analysis times were achieved. The method was optimised to achieve rapid analysis times and relatively high peak efficiencies. A standard microemulsion composition of 33 g SDS, 66 g butan-1-ol and 8 g n-octane in 1 l of 0.05% TFA, modified with acetonitrile, has been shown to be suitable for the rapid analysis of paracetamol in highly hydrophobic preparations under isocratic conditions. Validated assay results and the overall analysis time of the optimised method were compared to British Pharmacopoeia reference methods. Sample preparation and analysis times for the MELC analysis of paracetamol in a suppository were extremely rapid compared to the reference method, and similar assay results were achieved. A gradient MELC method using the same microemulsion has been optimised for the resolution of paracetamol and five of its related substances in approximately 7 min.
NASA Astrophysics Data System (ADS)
Yadav, Naresh Kumar; Kumar, Mukesh; Gupta, S. K.
2017-03-01
The general strategic bidding procedure has been formulated in the literature as a bi-level search problem, in which the offer curve tends to minimise the market clearing function and to maximise the profit. Computationally, this is complex, and hence researchers have adopted Karush-Kuhn-Tucker (KKT) optimality conditions to transform the model into a single-level maximisation problem. However, the profit maximisation problem with KKT optimality conditions poses a great challenge to classical optimisation algorithms. The problem has become more complex after the inclusion of transmission constraints. This paper simplifies the profit maximisation problem as a minimisation function, in which the transmission constraints, the operating limits and the ISO market clearing functions are considered with no KKT optimality conditions. The derived function is solved using the group search optimiser (GSO), a robust population-based optimisation algorithm. Experimental investigation is carried out on IEEE 14 as well as IEEE 30 bus systems, and the performance is compared against differential evolution-based, genetic algorithm-based and particle swarm optimisation-based strategic bidding methods. The simulation results demonstrate that the profit obtained through GSO-based bidding strategies is higher than with the other three methods.
Scientific Approach for Optimising Performance, Health and Safety in High-Altitude Observatories
NASA Astrophysics Data System (ADS)
Böcker, Michael; Vogy, Joachim; Nolle-Gösser, Tanja
2008-09-01
The ESO coordinated study “Optimising Performance, Health and Safety in High-Altitude Observatories” is based on a psychological approach using a questionnaire for data collection and assessment of high-altitude effects. During 2007 and 2008, data from 28 staff and visitors involved in APEX and ALMA were collected and analysed, and the first results of the study are summarised. While there is a lot of information about biomedical changes at high altitude, relatively few studies have focussed on psychological changes, for example with respect to performance of mental tasks, safety consciousness and emotions. Both biomedical and psychological changes are relevant factors in occupational safety and health. The results of the questionnaire on safety, health and performance issues demonstrate that the working conditions at high altitude are less detrimental than expected.
Sheikh Rashid, Marya; Leensen, Monique C J; de Laat, Jan A P M; Dreschler, Wouter A
2017-11-01
The "Occupational Earcheck" (OEC) is a Dutch online self-screening speech-in-noise test developed for the detection of occupational high-frequency hearing loss (HFHL). This study evaluates an optimised version of the test and determines the most appropriate masking noise. The original OEC was improved by homogenisation of the speech material, and shortening the test. A laboratory-based cross-sectional study was performed in which the optimised OEC in five alternative masking noise conditions was evaluated. The study was conducted on 18 normal-hearing (NH) adults, and 15 middle-aged listeners with HFHL. The OEC in a low-pass (LP) filtered stationary background noise (test version LP 3: with a cut-off frequency of 1.6 kHz, and a noise floor of -12 dB) was the most accurate version tested. The test showed a reasonable sensitivity (93%), and specificity (94%) and test reliability (intra-class correlation coefficient: 0.84, mean within-subject standard deviation: 1.5 dB SNR, slope of psychometric function: 13.1%/dB SNR). The improved OEC, with homogenous word material in a LP filtered noise, appears to be suitable for the discrimination between younger NH listeners and older listeners with HFHL. The appropriateness of the OEC for screening purposes in an occupational setting will be studied further.
Big Opportunities and Big Concerns of Big Data in Education
ERIC Educational Resources Information Center
Wang, Yinying
2016-01-01
Against the backdrop of the ever-increasing influx of big data, this article examines the opportunities and concerns over big data in education. Specifically, this article first introduces big data, followed by delineating the potential opportunities of using big data in education in two areas: learning analytics and educational policy. Then, the…
Evolving aerodynamic airfoils for wind turbines through a genetic algorithm
NASA Astrophysics Data System (ADS)
Hernández, J. J.; Gómez, E.; Grageda, J. I.; Couder, C.; Solís, A.; Hanotel, C. L.; Ledesma, JI
2017-01-01
Nowadays, genetic algorithms stand out for airfoil optimisation, due to the virtues of mutation and crossing-over techniques. In this work we propose a genetic algorithm with arithmetic crossover rules. The optimisation criteria are the maximisation of both aerodynamic efficiency and lift coefficient, while minimising the drag coefficient. Such an algorithm shows great improvements in computational cost, as well as high performance, by obtaining airfoils optimised for Mexico City's specific wind conditions from generic wind turbines designed for higher Reynolds numbers, in a few iterations.
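The arithmetic crossover rule itself is simple: a child is a convex combination of its parents. The sketch below applies it inside a bare-bones genetic algorithm; the quadratic fitness function stands in for the aerodynamic objectives, and the gene count, selection scheme and parameter values are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(x):
    """Stand-in for the aerodynamic objective (efficiency up, drag down);
    a simple quadratic with a known optimum is used for illustration."""
    return -np.sum((x - 0.3) ** 2)

def arithmetic_crossover(p1, p2):
    """Child is a random convex combination of the two parents."""
    a = rng.random()
    return a * p1 + (1.0 - a) * p2

def mutate(x, sigma=0.05, lo=0.0, hi=1.0):
    return np.clip(x + rng.normal(0.0, sigma, size=x.shape), lo, hi)

def ga(pop_size=30, n_genes=6, generations=50):
    pop = rng.random((pop_size, n_genes))               # genes ~ normalised shape parameters
    for _ in range(generations):
        fit = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(fit)[-pop_size // 2:]]  # truncation selection
        children = [mutate(arithmetic_crossover(*parents[rng.integers(len(parents), size=2)]))
                    for _ in range(pop_size - len(parents))]
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(ind) for ind in pop])]

print(ga().round(3))
```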
Examples of the use of optimisation techniques in the structural analysis of reactors
2003-03-01
Geometric optimisation (fixed architecture). Unlike the automotive sector and aircraft manufacturers, most reactor components ... uses non-linear material behaviour laws as well as large-displacement assumptions. The optimisation study consists of minimising ... a simple disc, and it was decided to select three parameters that influence rupture: the thickness of the disc web E1, the height L3 and the ...
Echtermeyer, Alexander; Amar, Yehia; Zakrzewski, Jacek; Lapkin, Alexei
2017-01-01
A recently described C(sp3)-H activation reaction to synthesise aziridines was used as a model reaction to demonstrate the methodology of developing a process model using model-based design of experiments (MBDoE) and self-optimisation approaches in flow. The two approaches are compared in terms of experimental efficiency. The self-optimisation approach required the fewest experiments to reach the specified objectives of cost and product yield, whereas the MBDoE approach enabled a rapid generation of a process model.
Satellite Vibration Testing: Angle optimisation method to Reduce Overtesting
NASA Astrophysics Data System (ADS)
Knight, Charly; Remedia, Marcello; Aglietti, Guglielmo S.; Richardson, Guy
2018-06-01
Spacecraft overtesting is a long running problem, and the main focus of most attempts to reduce it has been to adjust the base vibration input (i.e. notching). Instead this paper examines testing alternatives for secondary structures (equipment) coupled to the main structure (satellite) when they are tested separately. Even if the vibration source is applied along one of the orthogonal axes at the base of the coupled system (satellite plus equipment), the dynamics of the system and potentially the interface configuration mean that the vibration at the interface may not lie along a single axis, much less along the corresponding orthogonal axis of the base excitation. This paper proposes an alternative testing methodology in which a piece of equipment is tested at an offset angle. This Angle Optimisation method may involve multiple tests, each with an altered input direction, allowing the best match between all specified equipment system responses and those of the coupled system tests. An optimisation process compares the calculated equipment RMS values for a range of inputs with the maximum coupled system RMS values and is used to find the optimal testing configuration for the given parameters. A case study was performed to find the best testing angles to match the acceleration responses of the centre of mass and the sum of interface forces for all three axes, as well as the von Mises stress for an element at a fastening point. The angle optimisation method resulted in RMS values and PSD responses that were much closer to those of the coupled system when compared with traditional testing. The optimum testing configuration resulted in an overall average error significantly smaller than the traditional method. Crucially, this case study shows that the optimum test campaign could be a single equipment level test as opposed to the traditional three orthogonal direction tests.
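A toy version of the underlying optimisation can be written as a search over the shaker orientation that best reproduces a set of target RMS responses. The per-axis gain model, the target values and the brute-force angle sweep below are simplifications assumed for illustration; the paper works with full coupled frequency-domain responses.

```python
import numpy as np

# Hypothetical target RMS responses (x, y, z) of the equipment taken from a
# coupled satellite analysis; the values are illustrative only.
target_rms = np.array([4.2, 2.6, 1.1])

# Simplified equipment model: the RMS response on each axis is the base input
# magnitude projected onto that axis times a per-axis gain (a stand-in for the
# full transmissibility used in the paper).
gains = np.array([1.0, 0.8, 0.5])
input_level = 5.0

def rms_error(angles):
    theta, phi = angles                      # spherical angles of the shaker axis
    direction = np.array([np.sin(theta) * np.cos(phi),
                          np.sin(theta) * np.sin(phi),
                          np.cos(theta)])
    achieved = gains * input_level * np.abs(direction)
    return np.sum((achieved - target_rms) ** 2)

# Coarse brute-force sweep over test orientations, standing in for the
# optimisation process described in the abstract.
thetas = np.linspace(0, np.pi, 91)
phis = np.linspace(0, 2 * np.pi, 181)
best = min(((t, p) for t in thetas for p in phis), key=rms_error)
print("best angles (rad):", np.round(best, 3), "error:", round(rms_error(best), 3))
```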
Nurse strategies for optimising patient participation in nursing care.
Sahlsten, Monika J M; Larsson, Inga E; Sjöström, Björn; Plos, Kaety A E
2009-09-01
THE STUDY'S RATIONALE: Patient participation is an essential factor in nursing care and medical treatment and a legal right in many countries. Despite this, patients have experienced insufficient participation, inattention and neglect regarding their problems and may respond with dependence, passivity or taciturnity. Accordingly, nurses' strategies for optimising patient participation in nursing care are an important question for the nursing profession. The aim was to explore Registered Nurses' strategies to stimulate and optimise patient participation in nursing care. The objective was to identify ward nurses' supporting practices. A qualitative research approach was applied. Three focus groups with experienced Registered Nurses providing inpatient somatic care (n = 16) were carried out. These nurses were recruited from three hospitals in West Sweden. The data were analysed using a content analysis technique. The ethics of scientific work was adhered to. According to national Swedish legislation, no formal permit from an ethics committee was required. The participants gave informed consent after verbal and written information. Nurse strategies for optimising patient participation in nursing care were identified as three categories: 'Building close co-operation', 'Getting to know the person' and 'Reinforcing self-care capacity', and their 10 subcategories. The strategies point to a process of emancipation of the patient's potential by finding his/her own inherent knowledge, values, motivation and goals and linking these to actions. Nurses need to strive to guide the patient towards attaining meaningful experiences, discoveries, learning and development. The strategies are important and useful for balancing the asymmetry in the nurse-patient relationship in daily nursing practice, and also in quality assurance to evaluate and improve patient participation and in education. However, further verification of the findings is recommended by means of replication or other studies in different clinical settings. © 2009 The Authors. Journal compilation © 2009 Nordic College of Caring Science.
76 FR 17341 - Idaho Roadless Rule
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-29
... comment and/or met with the Shoshone-Paiute Tribes of Duck Valley, the Shoshone-Bannock Tribes of Fort...) included an eligibility study for Big Creek. The Agency's Record of Decision found Big Creek in-eligible... suitability study for the Secesh River, including Lake Creek. The Record of Decision found the Secesh River...
Hardiness and the Big Five Personality Traits among Chinese University Students
ERIC Educational Resources Information Center
Zhang, Li-fang
2011-01-01
This study examines the construct of hardiness with the Big Five personality traits among 362 Chinese university students. Participants in the study responded to the Dispositional Hardiness Scale (Bartone, Ursano, Wright, & Ingraham, 1989) and the Revised NEO Personality Inventory (Costa & McCrae, 1992). Results indicate that personality…
Adolescent Psychopathy and the Big Five: Results from Two Samples
ERIC Educational Resources Information Center
Lynam, Donald R.; Caspi, Avshalom; Moffitt, Terrie E.; Raine, Adrian; Loeber, Rolf; Stouthamer-Loeber, Magda
2005-01-01
The present study examines the relation between psychopathy and the Big Five dimensions of personality in two samples of adolescents. Specifically, the study tests the hypothesis that the aspect of psychopathy representing selfishness, callousness, and interpersonal manipulation (Factor 1) is most strongly associated with low Agreeableness,…
Improving sustainable seed yield in Wyoming big sagebrush
Jeremiah C. Armstrong
2007-01-01
As part of the Great Basin Restoration Initiative, the effects of browsing, competition removal, pruning, fertilization and seed collection methods on increasing seed production in Wyoming big sagebrush (Artemisia tridentata Nutt. spp wyomingensis Beetle & Young) were studied. Study sites were located in Idaho, Nevada, and Utah. A split-plot...
The big data potential of epidemiological studies for criminology and forensics.
DeLisi, Matt
2018-07-01
Big data, the analysis of original datasets with large samples ranging from ∼30,000 to one million participants to mine unexplored data, has been under-utilized in criminology. However, there have been recent calls for greater synthesis between epidemiology and criminology, and a small number of scholars have utilized epidemiological studies that were designed to measure alcohol and substance use to harvest behavioral and psychiatric measures that relate to the study of crime. These studies have been helpful in producing knowledge about the most serious, violent, and chronic offenders, but applications to more pathological forensic populations are lagging. Unfortunately, big data relating to crime and justice are restricted and limited to criminal justice purposes and not easily available to the research community. Thus, the study of criminal and forensic populations is limited in terms of data volume, velocity, and variety. Additional forays into epidemiology, increased use of available online judicial and correctional data, and unknown new frontiers are needed to bring criminology up to speed in the big data arena. Copyright © 2016 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
Optimisation of logistics processes of energy grass collection
NASA Astrophysics Data System (ADS)
Bányai, Tamás.
2010-05-01
The collection of energy grass is a logistics-intensive process [1]. The optimal design and control of transportation and collection subprocesses is a critical point of the supply chain. To avoid irresponsible decisions made by right of experience and intuition alone, the optimisation and analysis of collection processes based on mathematical models and methods is the scientifically defensible way. Within the frame of this work, the author focuses on the optimisation possibilities of the collection processes, especially from the point of view of transportation and related warehousing operations. The optimisation methods developed in the literature [2] take into account the harvesting processes, county-specific yields, transportation distances, erosion constraints, machinery specifications, and other key variables, but the possibility of more collection points and of multi-level collection was not taken into consideration. The possible areas of using energy grass are very wide (energetic use, biogas and bio alcohol production, paper and textile industry, industrial fibre material, foddering purposes, biological soil protection [3], etc.), so not only a single-level but also a multi-level collection system with more collection and production facilities has to be taken into consideration. The input parameters of the optimisation problem are the following: total amount of energy grass to be harvested in each region; specific facility costs of collection, warehousing and production units; specific costs of transportation resources; pre-scheduling of the harvesting process; specific transportation and warehousing costs; pre-scheduling of the processing of energy grass at each facility (exclusive warehousing). The model takes into consideration the following assumptions: (1) cooperative relations among processing and production facilities, (2) capacity constraints are not ignored, (3) the cost function of transportation is non-linear, (4) the drivers' conditions are ignored. The objective function of the optimisation is the maximisation of the profit, which means the maximisation of the difference between revenue and cost. The objective function trades off the income of the assigned transportation demands against the logistic costs. The constraints are the following: (1) the free capacity of the assigned transportation resource is more than the requested capacity of the transportation demand; (2) the calculated arrival time of the transportation resource at the harvesting place is not later than the requested arrival time; (3) the calculated arrival time of the transportation demand at the processing and production facility is not later than the requested arrival time; (4) one transportation demand is assigned to one transportation resource and one resource is assigned to one transportation demand. The decision variables of the optimisation problem are the set of scheduling variables and the assignment of resources to transportation demands. The evaluation parameters of the optimised system are the following: total costs of the collection process; utilisation of transportation resources and warehouses; efficiency of production and/or processing facilities. The multidimensional heuristic optimisation method is based on a genetic algorithm, while the routing sequence of the optimisation is computed with an ant colony algorithm.
The optimal routes are calculated with the aid of the ant colony algorithm as a subroutine of the global optimisation method, and the optimal assignment is given by the genetic algorithm. One important part of the mathematical method is the sensitivity analysis of the objective function, which shows the influence of the different input parameters. Acknowledgements This research was implemented within the frame of the project entitled "Development and operation of the Technology and Knowledge Transfer Centre of the University of Miskolc", with the support of the European Union and co-funding from the European Social Fund. References [1] P. R. Daniel: The Economics of Harvesting and Transporting Corn Stover for Conversion to Fuel Ethanol: A Case Study for Minnesota. University of Minnesota, Department of Applied Economics. 2006. http://ideas.repec.org/p/ags/umaesp/14213.html [2] T. G. Douglas, J. Brendan, D. Erin & V.-D. Becca: Energy and Chemicals from Native Grasses: Production, Transportation and Processing Technologies Considered in the Northern Great Plains. University of Minnesota, Department of Applied Economics. 2006. http://ideas.repec.org/p/ags/umaesp/13838.html [3] Homepage of energygrass. www.energiafu.hu
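A compressed sketch of the two-level idea (an outer evolutionary assignment of sites to transport resources, with a routing subroutine evaluating each assignment) is given below. For brevity the ant colony routing is replaced by a nearest-neighbour tour, and the site coordinates, resource count and GA settings are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative data: 6 harvesting sites and 2 transport resources (all made up).
sites = rng.uniform(0, 100, size=(6, 2))          # site coordinates [km]
depot = np.array([50.0, 50.0])

def route_cost(assigned_sites):
    """Nearest-neighbour route length from the depot through the assigned sites.
    This greedy tour stands in for the ant colony routing subroutine."""
    if len(assigned_sites) == 0:
        return 0.0
    remaining = list(assigned_sites)
    pos, total = depot, 0.0
    while remaining:
        d = [np.linalg.norm(sites[i] - pos) for i in remaining]
        j = int(np.argmin(d))
        total += d[j]
        pos = sites[remaining.pop(j)]
    return total + np.linalg.norm(depot - pos)     # return to depot

def total_cost(assignment):
    """Assignment is an array mapping each site to resource 0 or 1."""
    return sum(route_cost(np.where(assignment == r)[0]) for r in (0, 1))

def ga(pop_size=30, generations=60, p_mut=0.2):
    pop = rng.integers(0, 2, size=(pop_size, len(sites)))
    for _ in range(generations):
        cost = np.array([total_cost(ind) for ind in pop])
        parents = pop[np.argsort(cost)[: pop_size // 2]]
        children = parents.copy()
        cut = rng.integers(1, len(sites), size=len(children))
        for c, k in zip(children, cut):             # one-point crossover with a random parent
            mate = parents[rng.integers(len(parents))]
            c[k:] = mate[k:]
            flip = rng.random(len(sites)) < p_mut   # bit-flip mutation
            c[flip] = 1 - c[flip]
        pop = np.vstack([parents, children])
    best = pop[np.argmin([total_cost(ind) for ind in pop])]
    return best, total_cost(best)

print(ga())
```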
-Omic and Electronic Health Records Big Data Analytics for Precision Medicine
Wu, Po-Yen; Cheng, Chih-Wen; Kaddi, Chanchala D.; Venugopalan, Janani; Hoffman, Ryan; Wang, May D.
2017-01-01
Objective Rapid advances of high-throughput technologies and wide adoption of electronic health records (EHRs) have led to fast accumulation of -omic and EHR data. These voluminous complex data contain abundant information for precision medicine, and big data analytics can extract such knowledge to improve the quality of health care. Methods In this article, we present -omic and EHR data characteristics, associated challenges, and data analytics including data pre-processing, mining, and modeling. Results To demonstrate how big data analytics enables precision medicine, we provide two case studies, including identifying disease biomarkers from multi-omic data and incorporating -omic information into EHR. Conclusion Big data analytics is able to address -omic and EHR data challenges for the paradigm shift towards precision medicine. Significance Big data analytics makes sense of -omic and EHR data to improve healthcare outcomes. It has a long-lasting societal impact. PMID:27740470
On Study of Application of Big Data and Cloud Computing Technology in Smart Campus
NASA Astrophysics Data System (ADS)
Tang, Zijiao
2017-12-01
We live in an era of networks and information, which means we produce and face a great deal of data every day; however, it is not easy for traditional databases to store, process and analyse such mass data, and big data technology was therefore born at the right moment. Meanwhile, the development and operation of big data rest on cloud computing, which provides sufficient space and resources to process and analyse data with big data technology. Nowadays, the proposal of smart campus construction aims at improving the process of building information systems in colleges and universities. It is therefore necessary to consider combining big data technology and cloud computing technology into the construction of a smart campus, so that the campus database system and campus management system are mutually combined rather than isolated, and so that they serve smart campus construction through integrating, storing, processing and analysing mass data.
HARNESSING BIG DATA FOR PRECISION MEDICINE: INFRASTRUCTURES AND APPLICATIONS.
Yu, Kun-Hsing; Hart, Steven N; Goldfeder, Rachel; Zhang, Qiangfeng Cliff; Parker, Stephen C J; Snyder, Michael
2017-01-01
Precision medicine is a health management approach that accounts for individual differences in genetic backgrounds and environmental exposures. With the recent advancements in high-throughput omics profiling technologies, the collection of large study cohorts, and the development of data mining algorithms, big data in biomedicine is expected to provide novel insights into health and disease states, which can be translated into personalized disease prevention and treatment plans. However, the petabytes of biomedical data generated by multiple measurement modalities pose significant challenges for data analysis, integration, storage, and result interpretation. In addition, patient privacy preservation, coordination between participating medical centers and data analysis working groups, as well as discrepancies in data sharing policies remain important topics of discussion. In this workshop, we invite experts in omics integration, biobank research, and data management to share their perspectives on leveraging big data to enable precision medicine. Workshop website: http://tinyurl.com/PSB17BigData; HashTag: #PSB17BigData.
NASA Astrophysics Data System (ADS)
Holmes, C. P.; Kinter, J. L.; Beebe, R. F.; Feigelson, E.; Hurlburt, N. E.; Mentzel, C.; Smith, G.; Tino, C.; Walker, R. J.
2017-12-01
Two years ago NASA established the Ad Hoc Big Data Task Force (BDTF - https://science.nasa.gov/science-committee/subcommittees/big-data-task-force), an advisory working group within the NASA Advisory Council system. The scope of the Task Force included all NASA Big Data programs, projects, missions, and activities. The Task Force focused on such topics as exploring the existing and planned evolution of NASA's science data cyber-infrastructure that supports broad access to data repositories for NASA Science Mission Directorate missions; best practices within NASA, other Federal agencies, private industry and research institutions; and Federal initiatives related to big data and data access. The BDTF has completed its two-year term and produced several recommendations plus four white papers for NASA's Science Mission Directorate. This presentation will discuss the activities and results of the Task Force, including summaries of key points from its focused study topics. The paper serves as an introduction to the papers following in this ESSI session.
The Berlin Inventory of Gambling behavior - Screening (BIG-S): Validation using a clinical sample.
Wejbera, Martin; Müller, Kai W; Becker, Jan; Beutel, Manfred E
2017-05-18
Published diagnostic questionnaires for gambling disorder in German are either based on DSM-III criteria or focus on aspects other than life time prevalence. This study was designed to assess the usability of the DSM-IV criteria based Berlin Inventory of Gambling Behavior Screening tool in a clinical sample and adapt it to DSM-5 criteria. In a sample of 432 patients presenting for behavioral addiction assessment at the University Medical Center Mainz, we checked the screening tool's results against clinical diagnosis and compared a subsample of n=300 clinically diagnosed gambling disorder patients with a comparison group of n=132. The BIG-S produced a sensitivity of 99.7% and a specificity of 96.2%. The instrument's unidimensionality and the diagnostic improvements of DSM-5 criteria were verified by exploratory and confirmatory factor analysis as well as receiver operating characteristic analysis. The BIG-S is a reliable and valid screening tool for gambling disorder and demonstrated its concise and comprehensible operationalization of current DSM-5 criteria in a clinical setting.
Huang, Ying; Zhang, Yi; Youtie, Jan; Porter, Alan L.; Wang, Xuefeng
2016-01-01
How do funding agencies ramp-up their capabilities to support research in a rapidly emerging area? This paper addresses this question through a comparison of research proposals awarded by the US National Science Foundation (NSF) and the National Natural Science Foundation of China (NSFC) in the field of Big Data. Big data is characterized by its size and difficulties in capturing, curating, managing and processing it in reasonable periods of time. Although Big Data has its legacy in longstanding information technology research, the field grew very rapidly over a short period. We find that the extent of interdisciplinarity is a key aspect in how these funding agencies address the rise of Big Data. Our results show that both agencies have been able to marshal funding to support Big Data research in multiple areas, but the NSF relies to a greater extent on multi-program funding from different fields. We discuss how these interdisciplinary approaches reflect the research hot-spots and innovation pathways in these two countries. PMID:27219466
Unifying the aspects of the Big Five, the interpersonal circumplex, and trait affiliation.
DeYoung, Colin G; Weisberg, Yanna J; Quilty, Lena C; Peterson, Jordan B
2013-10-01
Two dimensions of the Big Five, Extraversion and Agreeableness, are strongly related to interpersonal behavior. Factor analysis has indicated that each of the Big Five contains two separable but related aspects. The present study examined the manner in which the aspects of Extraversion (Assertiveness and Enthusiasm) and Agreeableness (Compassion and Politeness) relate to interpersonal behavior and trait affiliation, with the hypothesis that these four aspects have a structure corresponding to the octants of the interpersonal circumplex (IPC). A second hypothesis was that measures of trait affiliation would fall between Enthusiasm and Compassion in the IPC. These hypotheses were tested in three demographically different samples (N = 469; 294; 409) using both behavioral frequency and trait measures of the interpersonal circumplex, in conjunction with the Big Five Aspect Scales (BFAS) and measures of trait affiliation. Both hypotheses were strongly supported. These findings provide a more thorough and precise mapping of the interpersonal traits within the Big Five and support the integration of the Big Five with models of interpersonal behavior and trait affiliation. © 2012 Wiley Periodicals, Inc.
Multi-objective optimisation and decision-making of space station logistics strategies
NASA Astrophysics Data System (ADS)
Zhu, Yue-he; Luo, Ya-zhong
2016-10-01
Space station logistics strategy optimisation is a complex engineering problem with multiple objectives. Finding a decision-maker-preferred compromise solution becomes more significant when solving such a problem. However, the designer-preferred solution is not easy to determine using traditional methods. Thus, a hybrid approach that combines a multi-objective evolutionary algorithm, physical programming, and a differential evolution (DE) algorithm is proposed to deal with the optimisation and decision-making of space station logistics strategies. The multi-objective evolutionary algorithm is used to acquire a Pareto frontier and help determine the range parameters of the physical programming. Physical programming is employed to convert the four-objective problem into a single-objective problem, and a DE algorithm is applied to solve the resulting physical programming-based optimisation problem. Five kinds of objective preference are simulated and compared. The simulation results indicate that the proposed approach can produce good compromise solutions corresponding to different decision-makers' preferences.
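The aggregation-then-DE step can be illustrated with a toy four-objective problem. The power-law penalty below only mimics the shape of physical-programming class functions, and the objective functions, preference ranges and bounds are assumptions chosen so the script runs stand-alone (it uses scipy's differential_evolution).

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy stand-in for a logistics strategy with two decision variables and four
# objectives (e.g. cost, risk, mass, crew time); the functions are illustrative.
def objectives(x):
    f1 = (x[0] - 1.0) ** 2 + x[1] ** 2
    f2 = x[0] ** 2 + (x[1] - 1.0) ** 2
    f3 = (x[0] + x[1] - 1.0) ** 2
    f4 = (x[0] - x[1]) ** 2
    return np.array([f1, f2, f3, f4])

# Rough preference-based aggregation: each objective is scaled by an assumed
# acceptable magnitude and pushed through a convex penalty. Real physical
# programming fits class functions to preference bands; this only mimics that.
ranges = np.array([1.0, 1.0, 0.5, 0.5])

def aggregated(x):
    return np.sum((objectives(x) / ranges) ** 4)

result = differential_evolution(aggregated, bounds=[(-2, 2), (-2, 2)], seed=3, tol=1e-8)
print(result.x.round(3), objectives(result.x).round(3))
```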
A shrinking hypersphere PSO for engineering optimisation problems
NASA Astrophysics Data System (ADS)
Yadav, Anupam; Deep, Kusum
2016-03-01
Many real-world and engineering design problems can be formulated as constrained optimisation problems (COPs). Swarm intelligence techniques are a good approach to solving COPs. In this paper an efficient shrinking hypersphere-based particle swarm optimisation (SHPSO) algorithm is proposed for constrained optimisation. The proposed SHPSO is designed in such a way that the movement of each particle is set to occur under the influence of shrinking hyperspheres. A parameter-free approach is used to handle the constraints. The performance of the SHPSO is compared against state-of-the-art algorithms on a set of 24 benchmark problems. An exhaustive comparison of the results is provided statistically as well as graphically. Moreover, three engineering design problems, namely welded beam design, compression spring design and pressure vessel design, are solved using SHPSO and the results are compared with those of state-of-the-art algorithms.
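One plausible reading of the shrinking-hypersphere idea is sketched below: particles that leave a hypersphere centred on the current global best are projected back onto its surface, and the radius decays over the iterations. The projection rule, the unconstrained sphere objective and all parameter values are assumptions made for illustration; the published SHPSO additionally includes a parameter-free constraint-handling scheme.

```python
import numpy as np

rng = np.random.default_rng(4)

def sphere(x):                      # simple unconstrained test objective
    return np.sum(x ** 2)

def shpso(dim=5, n_particles=25, iters=200, r0=5.0, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([sphere(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for t in range(iters):
        radius = r0 * (1.0 - t / iters)          # hypersphere shrinks over time
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        # Project particles that left the shrinking hypersphere centred on gbest
        # back onto its surface (one simple way to realise the idea).
        offset = x - gbest
        dist = np.linalg.norm(offset, axis=1, keepdims=True)
        outside = dist[:, 0] > radius
        x[outside] = gbest + offset[outside] / dist[outside] * radius
        f = np.array([sphere(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, sphere(gbest)

print(shpso())
```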
Topology Optimisation of Wideband Coaxial-to-Waveguide Transitions
NASA Astrophysics Data System (ADS)
Hassan, Emadeldeen; Noreland, Daniel; Wadbro, Eddie; Berggren, Martin
2017-03-01
To maximize the matching between a coaxial cable and rectangular waveguides, we present a computational topology optimisation approach that decides for each point in a given domain whether to hold a good conductor or a good dielectric. The conductivity is determined by a gradient-based optimisation method that relies on finite-difference time-domain solutions to the 3D Maxwell’s equations. Unlike previously reported results in the literature for this kind of problem, our design algorithm can efficiently handle tens of thousands of design variables, which allows novel conceptual waveguide designs. We demonstrate the effectiveness of the approach by presenting optimised transitions with reflection coefficients lower than -15 dB over more than a 60% bandwidth, both for right-angle and end-launcher configurations. The performance of the proposed transitions is cross-verified with commercial software, and one design case is validated experimentally.
NASA Astrophysics Data System (ADS)
Böing, F.; Murmann, A.; Pellinger, C.; Bruckmeier, A.; Kern, T.; Mongin, T.
2018-02-01
The expansion of capacities in the German transmission grid is a necessity for the further integration of renewable energy sources into the electricity sector. In this paper, the grid optimisation measures ‘Overhead Line Monitoring’, ‘Power-to-Heat’ and ‘Demand Response in the Industry’ are evaluated and compared against conventional grid expansion for the year 2030. Initially, the methodical approach of the simulation model is presented, and detailed descriptions of the grid model and the grid data used, which partly originate from open-source platforms, are provided. Further, this paper explains how ‘Curtailment’ and ‘Redispatch’ can be reduced by implementing grid optimisation measures and how the depreciation of economic costs can be determined considering construction costs. The developed simulations show that conventional grid expansion is more efficient and provides greater grid-relieving effects than the evaluated grid optimisation measures.
VLSI Technology for Cognitive Radio
NASA Astrophysics Data System (ADS)
VIJAYALAKSHMI, B.; SIDDAIAH, P.
2017-08-01
One of the most challenging tasks in cognitive radio is achieving an efficient spectrum sensing scheme to overcome the spectrum scarcity problem. The most popular and widely used spectrum sensing technique is energy detection, as it is very simple and does not require any prior information about the signal. We propose one such approach, an optimised spectrum sensing scheme with a reduced filter structure. The optimisation is done in terms of the area and power performance of the spectrum sensing circuit. The VLSI structure of the optimised flexible spectrum sensing scheme is simulated in Verilog using the XILINX ISE software. Our method achieves a 13% reduction in area and a 66% reduction in power consumption compared to the flexible spectrum sensing scheme. All results are tabulated and compared. Our model opens up a new scheme for optimised and effective spectrum sensing.
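Since the hardware work builds on the classic energy detector, a short software model of that detector may help: the average power of the received samples is compared with a threshold, with no prior knowledge of the primary-user waveform. The signal model, amplitude and threshold below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(5)

def energy_detect(samples, threshold):
    """Classic energy detector: average signal power compared to a threshold.
    No prior knowledge of the primary-user waveform is needed."""
    energy = np.mean(np.abs(samples) ** 2)
    return energy > threshold, energy

n = 1024
noise = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)   # unit-power noise
signal = 0.8 * np.exp(2j * np.pi * 0.1 * np.arange(n))                # weak primary user

threshold = 1.3   # illustrative; in practice set from the target false-alarm rate
print("noise only  :", energy_detect(noise, threshold))
print("signal+noise:", energy_detect(noise + signal, threshold))
```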
Achieving optimal SERS through enhanced experimental design.
Fisk, Heidi; Westley, Chloe; Turner, Nicholas J; Goodacre, Royston
2016-01-01
One of the current limitations surrounding surface-enhanced Raman scattering (SERS) is the perceived lack of reproducibility. SERS is indeed challenging, and for analyte detection, it is vital that the analyte interacts with the metal surface. However, as this is analyte dependent, there is no single set of SERS conditions that is universal. This means that experimental optimisation for the optimum SERS response is vital. Most researchers optimise one factor at a time, where a single parameter is altered first before going on to optimise the next. This is a very inefficient way of searching the experimental landscape. In this review, we explore the use of more powerful multivariate approaches to SERS experimental optimisation based on design of experiments and evolutionary computational methods. We particularly focus on colloidal-based SERS rather than thin film preparations as a result of their popularity. © 2015 The Authors. Journal of Raman Spectroscopy published by John Wiley & Sons, Ltd.
A management and optimisation model for water supply planning in water deficit areas
NASA Astrophysics Data System (ADS)
Molinos-Senante, María; Hernández-Sancho, Francesc; Mocholí-Arce, Manuel; Sala-Garrido, Ramón
2014-07-01
The integrated water resources management approach has proven to be a suitable option for efficient, equitable and sustainable water management. In water-poor regions experiencing acute and/or chronic shortages, optimisation techniques are a useful tool for supporting the decision process of water allocation. In order to maximise the value of water use, an optimisation model was developed which involves multiple supply sources (conventional and non-conventional) and multiple users. Penalties, representing monetary losses in the event of an unfulfilled water demand, have been incorporated into the objective function. This model represents a novel approach which considers water distribution efficiency and the physical connections between water supply and demand points. Subsequent empirical testing using data from a Spanish Mediterranean river basin demonstrated the usefulness of the global optimisation model to solve existing water imbalances at the river basin level.
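A reduced sketch of the kind of allocation model described, with penalties for unmet demand in the objective, is given below. It is our illustration under invented numbers: two sources, two users, and hypothetical values and penalties; the paper's full model additionally accounts for distribution efficiency and the physical source-demand connections.

```python
# Linear programme: maximise value of delivered water minus penalties for
# shortfalls, subject to source capacities and demand balances.
import numpy as np
from scipy.optimize import linprog

supply = np.array([40.0, 25.0])          # hm^3 available at two sources
demand = np.array([30.0, 28.0])          # hm^3 requested by two users
value = np.array([[3.0, 2.5],            # value of delivering source i to user j
                  [2.0, 2.2]])
penalty = np.array([5.0, 4.0])           # loss per hm^3 of unmet demand

# Decision variables: deliveries x11, x12, x21, x22 and shortfalls s1, s2.
c = np.concatenate([-value.ravel(), penalty])      # linprog minimises
A_ub = np.zeros((2, 6))
A_ub[0, 0:2] = 1                                   # source 1 capacity
A_ub[1, 2:4] = 1                                   # source 2 capacity
A_eq = np.zeros((2, 6))
A_eq[0, [0, 2, 4]] = 1                             # deliveries + shortfall = demand 1
A_eq[1, [1, 3, 5]] = 1                             # deliveries + shortfall = demand 2
res = linprog(c, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand,
              bounds=[(0, None)] * 6)
print(res.x[:4].reshape(2, 2), "shortfalls:", res.x[4:])
```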
Crystal structure optimisation using an auxiliary equation of state
NASA Astrophysics Data System (ADS)
Jackson, Adam J.; Skelton, Jonathan M.; Hendon, Christopher H.; Butler, Keith T.; Walsh, Aron
2015-11-01
Standard procedures for local crystal-structure optimisation involve numerous energy and force calculations. It is common to calculate an energy-volume curve, fitting an equation of state around the equilibrium cell volume. This is a computationally intensive process, in particular, for low-symmetry crystal structures where each isochoric optimisation involves energy minimisation over many degrees of freedom. Such procedures can be prohibitive for non-local exchange-correlation functionals or other "beyond" density functional theory electronic structure techniques, particularly where analytical gradients are not available. We present a simple approach for efficient optimisation of crystal structures based on a known equation of state. The equilibrium volume can be predicted from one single-point calculation and refined with successive calculations if required. The approach is validated for PbS, PbTe, ZnS, and ZnTe using nine density functionals and applied to the quaternary semiconductor Cu2ZnSnS4 and the magnetic metal-organic framework HKUST-1.
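The idea of fitting an energy-volume equation of state and reading off the equilibrium volume can be illustrated in a few lines. The sketch below is generic (third-order Birch-Murnaghan form with synthetic energies), not the authors' workflow or data.

```python
# Fit a Birch-Murnaghan E(V) curve to a handful of single-point energies and
# report the predicted equilibrium cell volume.
import numpy as np
from scipy.optimize import curve_fit

def birch_murnaghan(V, E0, V0, B0, Bp):
    eta = (V0 / V) ** (2.0 / 3.0)
    return E0 + 9.0 * V0 * B0 / 16.0 * ((eta - 1.0) ** 3 * Bp
                                        + (eta - 1.0) ** 2 * (6.0 - 4.0 * eta))

V = np.linspace(35.0, 55.0, 7)                           # cell volumes (A^3)
E = birch_murnaghan(V, -10.0, 44.0, 0.6, 4.5) + 1e-4     # mock DFT energies (eV)
popt, _ = curve_fit(birch_murnaghan, V, E,
                    p0=(E.min(), V[np.argmin(E)], 0.5, 4.0))
print(f"predicted equilibrium volume: {popt[1]:.2f} A^3")
```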
Gigliarelli, Giulia; Pagiotti, Rita; Persia, Diana; Marcotullio, Maria Carla
2017-01-01
Studies were carried out to increase the yield of piperine extracted from the fruits of Piper longum using the Naviglio Extractor® solid-liquid dynamic extractor (SLDE). The effects of the solvent-to-sample ratio (w/v) were investigated and optimised to identify the best method. The maximum yield of piperine (317.7 mg/g) from P. longum fruits was obtained in the SLDE 1:50 ethanol extract. Extraction yields of piperine obtained from Soxhlet extraction, decoction (International Organization for Standardization method) and conventional maceration extraction were found to be 233.7, 231.8 and 143.6 mg/g, respectively. The results of the present study indicate that the Naviglio Extractor® is an effective technique for the extraction of piperine from long pepper.
Solution structure of leptospiral LigA4 Big domain
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mei, Song; Zhang, Jiahai; Zhang, Xuecheng
Pathogenic Leptospira species express immunoglobulin-like proteins which serve as adhesins to bind to the extracellular matrices of host cells. Leptospiral immunoglobulin-like protein A (LigA), a surface-exposed protein containing tandem repeats of bacterial immunoglobulin-like (Big) domains, has been shown to be involved in the interaction of pathogenic Leptospira with the mammalian host. In this study, the solution structure of the fourth Big domain of LigA (LigA4 Big domain) from Leptospira interrogans was solved by nuclear magnetic resonance (NMR). The structure of the LigA4 Big domain displays a bacterial immunoglobulin-like fold similar to that of other Big domains, implying some common structural aspects of the Big domain family. On the other hand, it displays some structural characteristics significantly different from the classic Ig-like domain. Furthermore, a Stains-all assay and NMR chemical shift perturbation revealed the Ca²⁺-binding property of the LigA4 Big domain. Highlights: • Determining the solution structure of a bacterial immunoglobulin-like domain from a surface protein of Leptospira. • The solution structure shows some structural characteristics significantly different from the classic Ig-like domains. • A potential Ca²⁺-binding site was identified by the Stains-all assay and NMR chemical shift perturbation.
Pupils' Self-Perceptions: The Role of Teachers' Judgment Controlling for Big-Fish-Little-Pond Effect
ERIC Educational Resources Information Center
Bressoux, Pascal; Pansu, Pascal
2016-01-01
This article aims to study the relationship between teachers' judgment and pupils' self-perceptions controlling for the big-fish-little-pond effect (BFLPE). Three studies were conducted among third-grade pupils. Study 1 (n = 585) focused on pupils' perceptions of their scholastic competence. Teachers' judgment and BFLPE were found to have an…
Nutton, Jennifer; Fast, Elizabeth
2015-01-01
Indigenous peoples the world over have and continue to experience the devastating effects of colonialism including loss of life, land, language, culture, and identity. Indigenous peoples suffer disproportionately across many health risk factors including an increased risk of substance use. We use the term "Big Event" to describe the historical trauma attributed to colonial policies as a potential pathway to explain the disparity in rates of substance use among many Indigenous populations. We present "Big Solutions" that have the potential to buffer the negative effects of the Big Event, including: (1) decolonizing strategies, (2) identity development, and (3) culturally adapted interventions. Study limitations are noted and future needed research is suggested.
Identification of genetic loci shared between schizophrenia and the Big Five personality traits.
Smeland, Olav B; Wang, Yunpeng; Lo, Min-Tzu; Li, Wen; Frei, Oleksandr; Witoelar, Aree; Tesli, Martin; Hinds, David A; Tung, Joyce Y; Djurovic, Srdjan; Chen, Chi-Hua; Dale, Anders M; Andreassen, Ole A
2017-05-22
Schizophrenia is associated with differences in personality traits, and recent studies suggest that personality traits and schizophrenia share a genetic basis. Here we aimed to identify specific genetic loci shared between schizophrenia and the Big Five personality traits using a Bayesian statistical framework. Using summary statistics from genome-wide association studies (GWAS) on personality traits in the 23andMe cohort (n = 59,225) and schizophrenia in the Psychiatric Genomics Consortium cohort (n = 82,315), we evaluated overlap in common genetic variants. The Big Five personality traits neuroticism, extraversion, openness, agreeableness and conscientiousness were measured using a web implementation of the Big Five Inventory. Applying the conditional false discovery rate approach, we increased discovery of genetic loci and identified two loci shared between neuroticism and schizophrenia and six loci shared between openness and schizophrenia. The study provides new insights into the relationship between personality traits and schizophrenia by highlighting genetic loci involved in their common genetic etiology.
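As background to the method named here, the conditional false discovery rate is commonly estimated empirically from the two sets of GWAS p-values. The sketch below is our rough illustration of that idea under a conservative null proportion of one, with uniformly simulated p-values; it is not the authors' pipeline, and variable names are assumptions.

```python
# Empirical conditional FDR sketch: cFDR(p1 | p2) ~ p1 / F(p1 | P2 <= p2),
# where F is the empirical conditional CDF of the primary p-values.
import numpy as np

def conditional_fdr(p1, p2, P1, P2):
    """P1, P2: p-value arrays for all SNPs; p1, p2: values for the SNP of interest."""
    cond = P2 <= p2
    if not cond.any():
        return 1.0
    F = np.mean(P1[cond] <= p1)        # empirical conditional CDF
    return min(1.0, p1 / max(F, p1))   # conservative guard against F = 0

rng = np.random.default_rng(2)
P1, P2 = rng.uniform(size=100_000), rng.uniform(size=100_000)   # independent nulls
print(conditional_fdr(1e-5, 1e-3, P1, P2))   # ~1.0: no cross-trait enrichment here
```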
Borenstein, Meredith S.; Golet, Francis C.; Armstrong, David S.; Breault, Robert F.; McCobb, Timothy D.; Weiskel, Peter K.
2012-01-01
The Rhode Island Water Resources Board planned to develop public water-supply wells in the Big River Management Area in Kent County, Rhode Island. Research in the United States and abroad indicates that groundwater withdrawal has the potential to affect wetland hydrology and related processes. In May 2008, the Rhode Island Water Resources Board, the U.S. Geological Survey, and the University of Rhode Island formed a partnership to establish baseline conditions at selected Big River wetland study sites and to develop an approach for monitoring potential impacts once pumping begins. In 2008 and 2009, baseline data were collected on the hydrology, vegetation, and soil characteristics at five forested wetland study sites in the Big River Management Area. Four of the sites were located in areas of potential drawdown associated with the projected withdrawals. The fifth site was located outside the area of projected drawdown and served as a control site. The data collected during this study are presented in this report.
Pediatric trauma BIG score: Predicting mortality in polytraumatized pediatric patients.
El-Gamasy, Mohamed Abd El-Aziz; Elezz, Ahmed Abd El Basset Abo; Basuni, Ahmed Sobhy Mohamed; Elrazek, Mohamed El Sayed Ali Abd
2016-11-01
Trauma is a worldwide health problem and the major cause of death and disability, particularly affecting the young population. It is important to remember that pediatric trauma care has made a significant improvement in the outcomes of these injured children. This study aimed to evaluate the pediatric trauma BIG score in comparison with the New Injury Severity Score (NISS) and the Pediatric Trauma Score (PTS) in Tanta University Emergency Hospital. The study was conducted in Tanta University Emergency Hospital on all pediatric patients with multiple trauma who attended the Emergency Department over 1 year. The pediatric trauma BIG score, PTS and NISS were calculated, and the results were compared with each other and with observed mortality. A BIG score ≥12.7 had a sensitivity of 86.7% and a specificity of 71.4%, whereas the PTS at a value ≤3.5 had a sensitivity of 63.3% and a specificity of 68.6%, and the NISS at a value ≥39.5 had a sensitivity of 53.3% and a specificity of 54.3%. There was a significant positive correlation between the BIG score value and the mortality rate. The pediatric BIG score is a reliable mortality-prediction score for children with traumatic injuries; it uses international normalization ratio (INR), Base Excess (BE), and Glasgow Coma Scale (GCS) values that can be measured within a few minutes of sampling, so it can be readily applied in the Pediatric Emergency Department, but it cannot be applied to patients with chronic diseases that affect INR, BE or GCS.
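For orientation, the BIG score is commonly reported in the trauma literature as base deficit + (2.5 × INR) + (15 − GCS); we take that formula as an assumption here, and the tiny worked example below (invented patients, the abstract's 12.7 cut-off) only shows how a cut-off yields sensitivity and specificity.

```python
# Illustrative BIG-score evaluation at a mortality cut-off, on made-up patients.
def big_score(base_deficit, inr, gcs):
    return base_deficit + 2.5 * inr + (15 - gcs)

patients = [  # (base deficit, INR, GCS, died)
    (8.0, 1.9, 6, True), (2.0, 1.1, 14, False),
    (12.0, 2.4, 5, True), (4.0, 1.2, 12, False),
]
cutoff = 12.7
tp = sum(big_score(bd, inr, gcs) >= cutoff and died for bd, inr, gcs, died in patients)
fn = sum(big_score(bd, inr, gcs) < cutoff and died for bd, inr, gcs, died in patients)
tn = sum(big_score(bd, inr, gcs) < cutoff and not died for bd, inr, gcs, died in patients)
fp = sum(big_score(bd, inr, gcs) >= cutoff and not died for bd, inr, gcs, died in patients)
print("sensitivity", tp / (tp + fn), "specificity", tn / (tn + fp))
```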
Jolley, Rachel J; Jetté, Nathalie; Sawka, Keri Jo; Diep, Lucy; Goliath, Jade; Roberts, Derek J; Yipp, Bryan G; Doig, Christopher J
2015-01-01
Objective Administrative health data are important for health services and outcomes research. We optimised and validated in intensive care unit (ICU) patients an International Classification of Disease (ICD)-coded case definition for sepsis, and compared this with an existing definition. We also assessed the definition's performance in non-ICU (ward) patients. Setting and participants All adults (aged ≥18 years) admitted to a multisystem ICU with general medicosurgical ICU care from one of three tertiary care centres in the Calgary region in Alberta, Canada, between 1 January 2009 and 31 December 2012 were included. Research design Patient medical records were randomly selected and linked to the discharge abstract database. In ICU patients, we validated the Canadian Institute for Health Information (CIHI) ICD-10-CA (Canadian Revision)-coded definition for sepsis and severe sepsis against a reference standard medical chart review, and optimised this algorithm through examination of other conditions apparent in sepsis. Measures Sensitivity (Sn), specificity (Sp), positive predictive value (PPV) and negative predictive value (NPV) were calculated. Results Sepsis was present in 604 of 1001 ICU patients (60.4%). The CIHI ICD-10-CA-coded definition for sepsis had Sn (46.4%), Sp (98.7%), PPV (98.2%) and NPV (54.7%); and for severe sepsis had Sn (47.2%), Sp (97.5%), PPV (95.3%) and NPV (63.2%). The optimised ICD-coded algorithm for sepsis increased Sn by 25.5% and NPV by 11.9% with slightly lowered Sp (85.4%) and PPV (88.2%). For severe sepsis both Sn (65.1%) and NPV (70.1%) increased, while Sp (88.2%) and PPV (85.6%) decreased slightly. Conclusions This study demonstrates that sepsis is highly undercoded in administrative data, thus under-ascertaining the true incidence of sepsis. The optimised ICD-coded definition has a higher validity with higher Sn and should be preferentially considered if used for surveillance purposes. PMID:26700284
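The four validation metrics quoted above all come from a 2x2 confusion matrix of the ICD-coded definition against chart review. The helper below is generic; the counts are back-calculated here from the reported sepsis prevalence (604/1001) and the quoted sensitivity/specificity, so they are approximate rather than the study's tabulated data.

```python
# Sensitivity, specificity, PPV and NPV from a 2x2 confusion matrix.
def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Approximate counts implied by the abstract's sepsis figures.
print(diagnostic_metrics(tp=280, fp=5, fn=324, tn=392))
```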
Coil optimisation for transcranial magnetic stimulation in realistic head geometry.
Koponen, Lari M; Nieminen, Jaakko O; Mutanen, Tuomas P; Stenroos, Matti; Ilmoniemi, Risto J
Transcranial magnetic stimulation (TMS) allows focal, non-invasive stimulation of the cortex. A TMS pulse is inherently weakly coupled to the cortex; thus, magnetic stimulation requires both high current and high voltage to reach sufficient intensity. These requirements limit, for example, the maximum repetition rate and the maximum number of consecutive pulses with the same coil due to the rise of its temperature. Our objective was to develop methods to optimise, design, and manufacture energy-efficient TMS coils in realistic head geometry with an arbitrary overall coil shape. We derive a semi-analytical integration scheme for computing the magnetic field energy of an arbitrary surface current distribution, compute the electric field induced by this distribution with a boundary element method, and optimise a TMS coil for focal stimulation. Additionally, we introduce a method for manufacturing such a coil by using Litz wire and a coil former machined from polyvinyl chloride. We designed, manufactured, and validated an optimised TMS coil and applied it to brain stimulation. Our simulations indicate that this coil requires less than half the power of a commercial figure-of-eight coil, with a 41% reduction due to the optimised winding geometry and a partial contribution due to our thinner coil former and reduced conductor height. With the optimised coil, the resting motor threshold of abductor pollicis brevis was reached with the capacitor voltage below 600 V and peak current below 3000 A. The described method allows designing practical TMS coils that have considerably higher efficiency than conventional figure-of-eight coils. Copyright © 2017 Elsevier Inc. All rights reserved.
Automatic optimisation of gamma dose rate sensor networks: The DETECT Optimisation Tool
NASA Astrophysics Data System (ADS)
Helle, K. B.; Müller, T. O.; Astrup, P.; Dyve, J. E.
2014-05-01
Fast delivery of comprehensive information on the radiological situation is essential for decision-making in nuclear emergencies. Most national radiological agencies in Europe employ gamma dose rate sensor networks to monitor radioactive pollution of the atmosphere. Sensor locations were often chosen using regular grids or according to administrative constraints. Nowadays, however, the choice can be based on more realistic risk assessment, as it is possible to simulate potential radioactive plumes. To support sensor planning, we developed the DETECT Optimisation Tool (DOT) within the scope of the EU FP 7 project DETECT. It evaluates the gamma dose rates that a proposed set of sensors might measure in an emergency and uses this information to optimise the sensor locations. The gamma dose rates are taken from a comprehensive library of simulations of atmospheric radioactive plumes from 64 source locations. These simulations cover the whole European Union, so the DOT allows evaluation and optimisation of sensor networks for all EU countries, as well as evaluation of fencing sensors around possible sources. Users can choose from seven cost functions to evaluate the capability of a given monitoring network for early detection of radioactive plumes or for the creation of dose maps. The DOT is implemented as a stand-alone easy-to-use JAVA-based application with a graphical user interface and an R backend. Users can run evaluations and optimisations, and display, store and download the results. The DOT runs on a server and can be accessed via common web browsers; it can also be installed locally.
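The underlying optimisation task, choosing sensor locations so that as many simulated plumes as possible are detected, can be illustrated with a simple greedy coverage heuristic. This is our stand-in only: the DOT offers seven cost functions and its own optimiser, and the random detection matrix below is purely synthetic.

```python
# Greedy sensor placement over a (site x plume) detection matrix: at each step,
# pick the site that detects the most plumes not yet covered.
import numpy as np

rng = np.random.default_rng(3)
n_sites, n_plumes = 200, 64
detects = rng.random((n_sites, n_plumes)) > 0.9    # detects[i, j]: site i sees plume j

def greedy_placement(detects, n_sensors):
    chosen, covered = [], np.zeros(detects.shape[1], dtype=bool)
    for _ in range(n_sensors):
        gains = (detects & ~covered).sum(axis=1)   # new plumes each site would add
        best = int(np.argmax(gains))
        chosen.append(best)
        covered |= detects[best]
    return chosen, covered.mean()

sensors, coverage = greedy_placement(detects, n_sensors=10)
print(sensors, f"fraction of plumes detected: {coverage:.2f}")
```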
Effect of planecta and ROSE™ on the frequency characteristics of blood pressure-transducer kits.
Fujiwara, Shigeki; Kawakubo, Yoshifumi; Mori, Satoshi; Tachihara, Keiichi; Toyoguchi, Izumi; Yokoyama, Takeshi
2015-12-01
Pressure-transducer kits have frequency characteristics such as natural frequency and damping coefficient, which affect the monitoring accuracy. The aim of the present study was to investigate the effect of planecta ports and a damping device (ROSE™, Argon Medical Devices, TX, USA) on the frequency characteristics of pressure-transducer kits. The FloTrac sensor kit (Edwards Lifesciences, CA, USA) and the DTXplus transducer kit (Argon Medical Devices) were prepared with planecta ports, and their frequency characteristics were tested with or without ROSE™. The natural frequency and damping coefficient of each kit were obtained using frequency characteristics analysis software and evaluated by plotting them on the Gardner's chart. By inserting a planecta port, the natural frequency markedly decreased in both the FloTrac sensor kit (from 40 to 22 Hz) and the DTXplus transducer kit (from 35 to 22 Hz). In both kits with one planecta port, the damping coefficient markedly increased by insertion of ROSE™ from 0.2 to 0.5, optimising frequency characteristics. In both kits with two planecta ports, however, the natural frequency decreased from 22 to 12 Hz. The damping coefficient increased from 0.2 to 0.8 by insertion of ROSE™; however, optimisation was not achieved even by ROSE™ insertion. Planecta ports decrease the natural frequency of the kit. ROSE™ is useful to optimise the frequency characteristics in the kits without or with one planecta port. However, optimisation is difficult with two or more planecta ports, even with the ROSE™ device.
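To see why the natural frequency and damping coefficient matter, the catheter-transducer chain can be treated as a standard second-order system whose amplitude response distorts the higher harmonics of the pressure waveform. The sketch below is our illustration: the fn/zeta pairs loosely follow the figures quoted in the abstract, and probing the 10th harmonic of a 120 beats/min pulse (20 Hz) is our own choice.

```python
# Second-order amplitude response |H(f)| for a few natural-frequency/damping pairs.
import numpy as np

def amplitude_ratio(f, fn, zeta):
    r = f / fn
    return 1.0 / np.sqrt((1 - r ** 2) ** 2 + (2 * zeta * r) ** 2)

cases = [(40, 0.2, "no planecta"),
         (22, 0.5, "one planecta + ROSE"),
         (12, 0.8, "two planectas + ROSE")]
for fn, zeta, label in cases:
    print(label, round(amplitude_ratio(20.0, fn, zeta), 2))   # response at 20 Hz
```

Values near 1.0 indicate faithful reproduction of that harmonic; large over- or undershoot signals the distortion the Gardner chart is designed to flag.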
NASA Astrophysics Data System (ADS)
Eriksen, Janus J.
2017-09-01
It is demonstrated how the non-proprietary OpenACC standard of compiler directives may be used to compactly and efficiently accelerate the rate-determining steps of two of the most routinely applied many-body methods of electronic structure theory, namely the second-order Møller-Plesset (MP2) model in its resolution-of-the-identity approximated form and the (T) triples correction to the coupled cluster singles and doubles model (CCSD(T)). By means of compute directives as well as the use of optimised device math libraries, the operations involved in the energy kernels have been ported to graphics processing unit (GPU) accelerators, and the associated data transfers correspondingly optimised to such a degree that the final implementations (using either double and/or single precision arithmetic) are capable of scaling to systems as large as allowed for by the capacity of the host central processing unit (CPU) main memory. The performance of the hybrid CPU/GPU implementations is assessed through calculations on test systems of alanine amino acid chains using one-electron basis sets of increasing size (ranging from double- to pentuple-ζ quality). For all but the smallest problem sizes of the present study, the optimised accelerated codes (using a single multi-core CPU host node in conjunction with six GPUs) are found to be capable of reducing the total time-to-solution by at least an order of magnitude over optimised, OpenMP-threaded CPU-only reference implementations.
Statistical optimisation techniques in fatigue signal editing problem
NASA Astrophysics Data System (ADS)
Nopiah, Z. M.; Osman, M. H.; Baharin, N.; Abdullah, S.
2015-02-01
Success in fatigue signal editing is determined by the level of length reduction achieved without compromising statistical constraints. A great reduction rate can be achieved by removing small amplitude cycles from the recorded signal. The long recorded signal sometimes renders the cycle-to-cycle editing process daunting. This has encouraged researchers to focus on the segment-based approach. This paper discusses the joint application of the Running Damage Extraction (RDE) technique and a single-constrained Genetic Algorithm (GA) in fatigue signal editing optimisation. In the first section, the RDE technique is used to restructure and summarise the fatigue strain. This technique combines the overlapping window and fatigue strain-life models. It is designed to identify the fatigue events that exist in the variable amplitude strain data and isolate them into different segments, whereby the retention of statistical parameters and of the vibration energy is considered. In the second section, the fatigue data editing problem is formulated as a constrained single-objective optimisation problem that can be solved using the GA method. The GA produces the shortest edited fatigue signal by selecting appropriate segments from a pool of labelled segments. Challenges arise due to constraints on the segment selection by deviation level over three signal properties, namely cumulative fatigue damage, root mean square and kurtosis values. Experimental results over several case studies show that the idea of solving fatigue signal editing within a framework of optimisation is effective and automatic, and that the GA is robust for constrained segment selection.
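A compact, illustrative version of the constrained segment-selection idea is sketched below. It is not the authors' implementation: the segments and their "damage" are random mock data, only the cumulative-damage constraint is enforced (the RMS and kurtosis constraints are omitted), and the GA operators are deliberately simple.

```python
# Toy GA: a binary chromosome marks which labelled segments are kept; fitness
# rewards a shorter edited signal and penalises losing more than 5% of the
# total fatigue damage.
import numpy as np

rng = np.random.default_rng(4)
n_seg = 60
seg_len = rng.integers(50, 500, n_seg)        # samples per labelled segment
seg_damage = rng.random(n_seg) * seg_len      # mock fatigue damage per segment
total_damage = seg_damage.sum()

def fitness(keep):
    keep = keep.astype(bool)
    lost = 1.0 - seg_damage[keep].sum() / total_damage
    return seg_len[keep].sum() + 1e7 * max(0.0, lost - 0.05)   # minimise

pop = rng.integers(0, 2, (100, n_seg))
for _ in range(300):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[:50]]                      # truncation selection
    idx = rng.integers(0, 50, (100, 2))                         # parent pairs
    cut = rng.integers(1, n_seg, 100)[:, None]                  # single-point crossover
    children = np.where(np.arange(n_seg) < cut, parents[idx[:, 0]], parents[idx[:, 1]])
    mutate = rng.random(children.shape) < 0.02
    children[mutate] ^= 1                                       # bit-flip mutation
    pop = children

best = min(pop, key=fitness)
print(f"kept {int(best.sum())} of {n_seg} segments; edited length "
      f"{seg_len[best.astype(bool)].sum()} of {seg_len.sum()} samples")
```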
O'Hagan, Steve; Knowles, Joshua; Kell, Douglas B.
2012-01-01
Comparatively few studies have addressed directly the question of quantifying the benefits to be had from using molecular genetic markers in experimental breeding programmes (e.g. for improved crops and livestock), nor the question of which organisms should be mated with each other to best effect. We argue that this requires in silico modelling, an approach for which there is a large literature in the field of evolutionary computation (EC), but which has not really been applied in this way to experimental breeding programmes. EC seeks to optimise measurable outcomes (phenotypic fitnesses) by optimising in silico the mutation, recombination and selection regimes that are used. We review some of the approaches from EC, and compare experimentally, using a biologically relevant in silico landscape, some algorithms that have knowledge of where they are in the (genotypic) search space (G-algorithms) with some (albeit well-tuned ones) that do not (F-algorithms). For the present kinds of landscapes, F- and G-algorithms were broadly comparable in quality and effectiveness, although we recognise that the G-algorithms were not equipped with any ‘prior knowledge’ of epistatic pathway interactions. This use of algorithms based on machine learning has important implications for the optimisation of experimental breeding programmes in the post-genomic era when we shall potentially have access to the full genome sequence of every organism in a breeding population. The non-proprietary code that we have used is made freely available (via Supplementary information). PMID:23185279
Statistical optimisation techniques in fatigue signal editing problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nopiah, Z. M.; Osman, M. H.; Baharin, N.
Success in fatigue signal editing is determined by the level of length reduction achieved without compromising statistical constraints. A great reduction rate can be achieved by removing small amplitude cycles from the recorded signal. The long recorded signal sometimes renders the cycle-to-cycle editing process daunting. This has encouraged researchers to focus on the segment-based approach. This paper discusses the joint application of the Running Damage Extraction (RDE) technique and a single-constrained Genetic Algorithm (GA) in fatigue signal editing optimisation. In the first section, the RDE technique is used to restructure and summarise the fatigue strain. This technique combines the overlapping window and fatigue strain-life models. It is designed to identify the fatigue events that exist in the variable amplitude strain data and isolate them into different segments, whereby the retention of statistical parameters and of the vibration energy is considered. In the second section, the fatigue data editing problem is formulated as a constrained single-objective optimisation problem that can be solved using the GA method. The GA produces the shortest edited fatigue signal by selecting appropriate segments from a pool of labelled segments. Challenges arise due to constraints on the segment selection by deviation level over three signal properties, namely cumulative fatigue damage, root mean square and kurtosis values. Experimental results over several case studies show that the idea of solving fatigue signal editing within a framework of optimisation is effective and automatic, and that the GA is robust for constrained segment selection.
Furnham, Adrian; Crump, John
2014-08-01
This study aimed to examine a Big Five 'bright-side' analysis of a sub-clinical personality disorder, i.e. narcissism. A total of 6957 British adults completed the NEO-PI-R, which measures the Big Five Personality factors at the domain and the facet level, as well as the Hogan Development Survey (HDS), which has a measure of Narcissism called Bold as one of its dysfunctional interpersonal tendencies. Correlation and regression results confirmed many of the associations between the Big Five domains and facets (NEO-PI-R) and sub-clinical narcissism. The Bold (Narcissism) scale from the HDS was the criterion variable in all analyses. Bold individuals are disagreeable extraverts with very low scores on facet Modesty but moderately high scores on Assertiveness, Competence and Achievement Striving. The study confirmed work using different population groups and different measures. Copyright © 2014 John Wiley & Sons, Ltd.
Xu, Le; Liu, Ru-De; Ding, Yi; Mou, Xiaohong; Wang, Jia; Liu, Ying
2017-01-01
Previous findings showed the associations between each of the Big Five personality trait and adolescents’ life satisfaction were different. Some traits (extraversion and neuroticism) correlated with adolescents’ life satisfaction, while other traits did not have the same associations with adolescents’ life satisfaction. In order to explain why the Big Five traits differed in their associations with adolescents’ life satisfaction, the present study verified the relations between each of the Big Five personality traits and life satisfaction, and demonstrated the mediating effects of coping style on the relations between these personality traits and life satisfaction in a sample of 2,357 Chinese adolescents. The results demonstrated that four of the Big Five personality traits (extraversion, agreeableness, conscientiousness, and neuroticism) had significant associations with life satisfaction. Further, coping style partially mediated the relations between these four traits and life satisfaction, whereas coping style fully mediated the relation between openness to new experience and life satisfaction. The results implied a plausible explanation for why the Big Five traits differed in their associations with life satisfaction found among the previous literature: that there might be some partial or full mediation variables (such as coping style in this study) left unexamined. Theoretical and practical implications of this study on further research and educational practice are discussed. PMID:28706496
Big Data: Implications for Health System Pharmacy
Stokes, Laura B.; Rogers, Joseph W.; Hertig, John B.; Weber, Robert J.
2016-01-01
Big Data refers to datasets that are so large and complex that traditional methods and hardware for collecting, sharing, and analyzing them are not possible. Big Data that is accurate leads to more confident decision making, improved operational efficiency, and reduced costs. The rapid growth of health care information results in Big Data around health services, treatments, and outcomes, and Big Data can be used to analyze the benefit of health system pharmacy services. The goal of this article is to provide a perspective on how Big Data can be applied to health system pharmacy. It will define Big Data, describe the impact of Big Data on population health, review specific implications of Big Data in health system pharmacy, and describe an approach for pharmacy leaders to effectively use Big Data. A few strategies involved in managing Big Data in health system pharmacy include identifying potential opportunities for Big Data, prioritizing those opportunities, protecting privacy concerns, promoting data transparency, and communicating outcomes. As health care information expands in its content and becomes more integrated, Big Data can enhance the development of patient-centered pharmacy services. PMID:27559194
Big Data: Implications for Health System Pharmacy.
Stokes, Laura B; Rogers, Joseph W; Hertig, John B; Weber, Robert J
2016-07-01
Big Data refers to datasets that are so large and complex that traditional methods and hardware for collecting, sharing, and analyzing them are not possible. Big Data that is accurate leads to more confident decision making, improved operational efficiency, and reduced costs. The rapid growth of health care information results in Big Data around health services, treatments, and outcomes, and Big Data can be used to analyze the benefit of health system pharmacy services. The goal of this article is to provide a perspective on how Big Data can be applied to health system pharmacy. It will define Big Data, describe the impact of Big Data on population health, review specific implications of Big Data in health system pharmacy, and describe an approach for pharmacy leaders to effectively use Big Data. A few strategies involved in managing Big Data in health system pharmacy include identifying potential opportunities for Big Data, prioritizing those opportunities, protecting privacy concerns, promoting data transparency, and communicating outcomes. As health care information expands in its content and becomes more integrated, Big Data can enhance the development of patient-centered pharmacy services.
Path integrals with higher order actions: Application to realistic chemical systems
NASA Astrophysics Data System (ADS)
Lindoy, Lachlan P.; Huang, Gavin S.; Jordan, Meredith J. T.
2018-02-01
Quantum thermodynamic parameters can be determined using path integral Monte Carlo (PIMC) simulations. These simulations, however, become computationally demanding as the quantum nature of the system increases, although their efficiency can be improved by using higher order approximations to the thermal density matrix, specifically the action. Here we compare the standard, primitive approximation to the action (PA) and three higher order approximations, the Takahashi-Imada action (TIA), the Suzuki-Chin action (SCA) and the Chin action (CA). The resulting PIMC methods are applied to two realistic potential energy surfaces, for H2O and HCN-HNC, both of which are spectroscopically accurate and contain three-body interactions. We further numerically optimise, for each potential, the SCA parameter and the two free parameters in the CA, obtaining more significant improvements in efficiency than seen previously in the literature. For both H2O and HCN-HNC, accounting for all required potential and force evaluations, the optimised CA formalism is approximately twice as efficient as the TIA formalism and approximately an order of magnitude more efficient than the PA. The optimised SCA formalism shows similar efficiency gains to the CA for HCN-HNC but has similar efficiency to the TIA for H2O at low temperature. In H2O and HCN-HNC systems, the optimal value of the a1 CA parameter is approximately 1/3, corresponding to an equal weighting of all force terms in the thermal density matrix, and similar to previous studies, the optimal α parameter in the SCA was ~0.31. Importantly, poor choice of parameter significantly degrades the performance of the SCA and CA methods. In particular, for the CA, setting a1 = 0 is not efficient: the reduction in convergence efficiency is not offset by the lower number of force evaluations. We also find that the harmonic approximation to the CA parameters, whilst providing a fourth order approximation to the action, is not optimal for these realistic potentials: numerical optimisation leads to better approximate cancellation of the fifth order terms, with deviation between the harmonic and numerically optimised parameters more marked in the more quantum H2O system. This suggests that numerically optimising the CA or SCA parameters, which can be done at high temperature, will be important in fully realising the efficiency gains of these formalisms for realistic potentials.
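For readers who want the hierarchy of actions made concrete, the standard textbook forms of the primitive approximation and the Takahashi-Imada correction (written here for a single particle of mass m; these expressions are not taken from the paper) are:

```latex
% Primitive approximation (PA): symmetric Trotter splitting of the Boltzmann operator
e^{-\tau(\hat{T}+\hat{V})} \;\approx\; e^{-\tau\hat{V}/2}\, e^{-\tau\hat{T}}\, e^{-\tau\hat{V}/2}
\qquad \text{(local error } \mathcal{O}(\tau^{3})\text{)}

% Takahashi-Imada action (TIA): the same splitting with an effective potential
\hat{V}_{\mathrm{TI}} \;=\; \hat{V} \;+\; \frac{\tau^{2}\hbar^{2}}{24\,m}\,\bigl|\nabla \hat{V}\bigr|^{2}
```

The Suzuki-Chin and Chin actions introduce additional force-dependent terms with free weighting parameters, which is what the numerical optimisation described in the abstract tunes.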
NASA Astrophysics Data System (ADS)
du Feu, R. J.; Funke, S. W.; Kramer, S. C.; Hill, J.; Piggott, M. D.
2016-12-01
The installation of tidal turbines into the ocean will inevitably affect the environment around them. However, due to the relative infancy of this sector, the extent and severity of such effects is unknown. The layout of an array of turbines is an important factor in determining not only the array's final yield but also how it will influence regional hydrodynamics. This in turn could affect, for example, sediment transportation or habitat suitability. The two potentially competing objectives of extracting energy from the tidal current, and of limiting any environmental impact consequent to influencing that current, are investigated here. This relationship is posed as a multi-objective optimisation problem. OpenTidalFarm, an array layout optimisation tool, and MaxEnt, habitat suitability modelling software, are used to evaluate scenarios off the coast of the UK. MaxEnt is used to estimate the likelihood of finding a species in a given location based upon environmental input data and presence data for the species. Environmental features which are known to impact habitat, specifically those affected by the presence of an array, such as bed shear stress, are chosen as inputs. MaxEnt then uses a maximum-entropy modelling approach to estimate the population distribution across the modelled area. OpenTidalFarm is used to maximise the power generated by an array, or multiple arrays, by adjusting the position and number of turbines within them. It uses a 2D shallow water model with turbine arrays represented as adjustable friction fields. It also has the capability to optimise for user-created functionals that can be expressed mathematically. This work uses two functionals: the power extracted by the array, and the suitability of habitat as predicted by MaxEnt. A gradient-based local optimisation is used to adjust the array layout at each iteration. This work presents arrays that are optimised for both yield and the viability of habitat for chosen species. In each scenario studied, a range of array formations is found expressing varying preferences for either functional. Further analyses then allow for the identification of trade-offs between the two key societal objectives of energy production and conservation. This in turn produces information valuable to stakeholders and policymakers when making decisions on array design.
Electron screening and its effects on big-bang nucleosynthesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Biao; Bertulani, C. A.; Balantekin, A. B.
We study the effects of electron screening on nuclear reaction rates occurring during the big-bang nucleosynthesis epoch. The sensitivity of the predicted elemental abundances to electron screening is studied in detail. It is shown that electron screening does not produce noticeable changes in the abundances unless the traditional Debye-Hückel model for the treatment of electron screening in stellar environments is enhanced by several orders of magnitude. This work rules out electron screening as a relevant ingredient in big-bang nucleosynthesis, confirming a previous study [see Itoh et al., Astrophys. J. 488, 507 (1997)] and ruling out exotic possibilities for the treatment of screening beyond the mean-field theoretical approach.
Kuzmin, Alexander; Madjid, Nather; Terenius, Lars; Ogren, Sven Ove; Bakalkin, Georgy
2006-09-01
The effects of big dynorphin (Big Dyn), a prodynorphin-derived peptide consisting of dynorphin A (Dyn A) and dynorphin B (Dyn B), on memory function, anxiety, and locomotor activity were studied in mice and compared to those of Dyn A and Dyn B. All peptides administered i.c.v. increased step-through latency in the passive avoidance test, with maximum effective doses of 2.5, 0.005, and 0.7 nmol/animal, respectively. The effects of Big Dyn were inhibited by MK 801 (0.1 mg/kg), an NMDA ion-channel blocker, whereas those of dynorphins A and B were blocked by the kappa-opioid antagonist nor-binaltorphimine (6 mg/kg). Big Dyn (2.5 nmol) enhanced locomotor activity in the open field test and induced anxiolytic-like behavior, both effects being blocked by MK 801. No changes in locomotor activity and no signs of anxiolytic-like behavior were produced by dynorphins A and B. Big Dyn (2.5 nmol) increased the time spent in the open branches of the elevated plus maze apparatus with no changes in general locomotion. Whereas dynorphins A and B (i.c.v., 0.05 and 7 nmol/animal, respectively) produced analgesia in the hot-plate test, Big Dyn did not. Thus, Big Dyn differs from its fragments dynorphins A and B in its unique pattern of memory-enhancing, locomotor and anxiolytic-like effects that are sensitive to NMDA receptor blockade. The findings suggest that Big Dyn has its own function in the brain, different from those of the prodynorphin-derived peptides acting through kappa-opioid receptors.
Lança, L; Silva, A; Alves, E; Serranheira, F; Correia, M
2008-01-01
The typical distribution of exposure parameters in plain radiography is unknown in Portugal. This study aims to identify the exposure parameters being used in plain radiography in the Lisbon area and to compare the collected data with European references [Commission of European Communities (CEC) guidelines]. The results show that in four examinations (skull, chest, lumbar spine and pelvis), there is a strong tendency to use exposure times above the European recommendation. The X-ray tube potential values (in kV) are below the values recommended in the CEC guidelines. This study shows that at a local level (Lisbon region), radiographic practice does not comply with the CEC guidelines concerning exposure techniques. Further national/local studies are recommended with the objective of improving exposure optimisation and technical procedures in plain radiography. This study also suggests the need to establish national/local diagnostic reference levels and to proceed to effective measurements for exposure optimisation.
Song, Yang; Shi, Meng
2017-01-01
Empathy promotes positive physician-patient communication and is associated with improved patient satisfaction, treatment adherence and clinical outcomes. It has been suggested that personality traits should be taken into consideration in programs designed to enhance empathy in medical education due to the association found between personality and empathy among medical students. However, the associations between empathy and big five personality traits in medical education are still underrepresented in the existing literature and relevant studies have not been conducted among medical students in China, where tensions in the physician-patient relationship have been reported as outstanding problems in the context of China's current medical reform. Thus, the main objective of this study was to examine the associations between empathy and big five personality traits among Chinese medical students. A cross-sectional study was conducted in a medical university in Northeast China in June 2016. Self-reported questionnaires including the Interpersonal Reactivity Index (IRI) and Big Five Inventory (BFI) and demographic characteristics were distributed. A total of 530 clinical medical students became our final subjects. Hierarchical regression analysis was performed to explore the effects of big five personality traits on empathy. Results of this study showed that big five personality traits accounted for 19.4%, 18.1%, 30.2% of the variance in three dimensions of empathy, namely, perspective taking, empathic concern and personal distress, respectively. Specifically, agreeableness had a strong positive association with empathic concern (β = 0.477, P<0.01), and a moderate association with perspective taking (β = 0.349, P<0.01). Neuroticism was strongly associated with personal distress (β = 0.526, P<0.01) and modestly associated with perspective taking (β = 0.149, P<0.01). Openness to experience had modest associations with perspective taking (β = 0.150, P<0.01) and personal distress (β = -0.160, P<0.01). Conscientiousness had a modest association with perspective taking (β = 0.173, P<0.01). This study revealed that big five personality traits were important predictors of self-reported measures of both cognitive and affective empathy among Chinese medical students. Therefore, individualized intervention strategies based on personality traits could be integrated into programs to enhance empathy in medical education.
Quantum chemical calculations of Cr2O3/SnO2 using density functional theory method
NASA Astrophysics Data System (ADS)
Jawaher, K. Rackesh; Indirajith, R.; Krishnan, S.; Robert, R.; Das, S. Jerome
2018-03-01
Quantum chemical calculations have been employed to study the molecular effects produced by the optimised Cr2O3/SnO2 structure. The theoretical parameters of the transparent conducting metal oxides were calculated using the DFT/B3LYP/LANL2DZ method. The optimised bond parameters, such as bond lengths, bond angles and dihedral angles, were calculated at the same level of theory. The non-linear optical property of the title compound was assessed through a first-order hyperpolarisability calculation. The calculated HOMO-LUMO analysis explains the charge transfer interaction within the molecule. In addition, the MEP and Mulliken atomic charges were also calculated and analysed.
Zhao, Yihong; Castellanos, F Xavier
2016-03-01
Psychiatric science remains descriptive, with a categorical nosology intended to enhance interobserver reliability. Increased awareness of the mismatch between categorical classifications and the complexity of biological systems drives the search for novel frameworks including discovery science in Big Data. In this review, we provide an overview of incipient approaches, primarily focused on classically categorical diagnoses such as schizophrenia (SZ), autism spectrum disorder (ASD), and attention-deficit/hyperactivity disorder (ADHD), but also reference convincing, if focal, advances in cancer biology, to describe the challenges of Big Data and discovery science, and outline approaches being formulated to overcome existing obstacles. A paradigm shift from categorical diagnoses to a domain/structure-based nosology and from linear causal chains to complex causal network models of brain-behavior relationship is ongoing. This (r)evolution involves appreciating the complexity, dimensionality, and heterogeneity of neuropsychiatric data collected from multiple sources ('broad' data) along with data obtained at multiple levels of analysis, ranging from genes to molecules, cells, circuits, and behaviors ('deep' data). Both of these types of Big Data landscapes require the use and development of robust and powerful informatics and statistical approaches. Thus, we describe Big Data analysis pipelines and the promise and potential limitations in using Big Data approaches to study psychiatric disorders. We highlight key resources available for psychopathological studies and call for the application and development of Big Data approaches to dissect the causes and mechanisms of neuropsychiatric disorders and identify corresponding biomarkers for early diagnosis. © 2016 Association for Child and Adolescent Mental Health.
Zhao, Yihong; Castellanos, F. Xavier
2015-01-01
Background and Scope Psychiatric science remains descriptive, with a categorical nosology intended to enhance inter-observer reliability. Increased awareness of the mismatch between categorical classifications and the complexity of biological systems drives the search for novel frameworks including discovery science in Big Data. In this review, we provide an overview of incipient approaches, primarily focused on classically categorical diagnoses such as schizophrenia (SZ), autism spectrum disorder (ASD) and attention-deficit/hyperactivity disorder (ADHD), but also reference convincing, if focal, advances in cancer biology, to describe the challenges of Big Data and discovery science, and outline approaches being formulated to overcome existing obstacles. Findings A paradigm shift from categorical diagnoses to a domain/structure-based nosology and from linear causal chains to complex causal network models of brain-behavior relationship is ongoing. This (r)evolution involves appreciating the complexity, dimensionality and heterogeneity of neuropsychiatric data collected from multiple sources (“broad” data) along with data obtained at multiple levels of analysis, ranging from genes to molecules, cells, circuits and behaviors (“deep” data). Both of these types of Big Data landscapes require the use and development of robust and powerful informatics and statistical approaches. Thus, we describe Big Data analysis pipelines and the promise and potential limitations in using Big Data approaches to study psychiatric disorders. Conclusion We highlight key resources available for psychopathological studies and call for the application and development of Big Data approaches to dissect the causes and mechanisms of neuropsychiatric disorders and identify corresponding biomarkers for early diagnosis. PMID:26732133
NASA Astrophysics Data System (ADS)
Coughlin, J.; Mital, R.; Nittur, S.; SanNicolas, B.; Wolf, C.; Jusufi, R.
2016-09-01
Operational analytics, when combined with Big Data technologies and predictive techniques, has been shown to be valuable in detecting mission-critical sensor anomalies that might be missed by conventional analytical techniques. Our approach helps analysts and leaders make informed and rapid decisions by analyzing large volumes of complex data in near real-time and presenting it in a manner that facilitates decision making. It provides cost savings by being able to alert and predict when sensor degradations pass a critical threshold and impact mission operations. Operational analytics, which uses Big Data tools and technologies, can process very large data sets containing a variety of data types to uncover hidden patterns, unknown correlations, and other relevant information. When combined with predictive techniques, it provides a mechanism to monitor and visualize these data sets and provide insight into degradations encountered in large sensor systems such as the space surveillance network. In this study, data from a notional sensor are simulated, and we use big data technologies, predictive algorithms and operational analytics to process the data and predict sensor degradations. This study uses data products that would commonly be analyzed at a site. This study builds on a big data architecture that has previously been proven valuable in detecting anomalies. This paper outlines our methodology of implementing an operational analytic solution through data discovery, learning and training of data modeling and predictive techniques, and deployment. Through this methodology, we implement a functional architecture focused on exploring available big data sets and determine practical analytic, visualization, and predictive technologies.
Adverse Drug Event Discovery Using Biomedical Literature: A Big Data Neural Network Adventure
Badger, Jonathan; LaRose, Eric; Shirzadi, Ehsan; Mahnke, Andrea; Mayer, John; Ye, Zhan; Page, David; Peissig, Peggy
2017-01-01
Background The study of adverse drug events (ADEs) is a tenured topic in medical literature. In recent years, increasing numbers of scientific articles and health-related social media posts have been generated and shared daily, albeit with very limited use for ADE study and with little known about the content with respect to ADEs. Objective The aim of this study was to develop a big data analytics strategy that mines the content of scientific articles and health-related Web-based social media to detect and identify ADEs. Methods We analyzed the following two data sources: (1) biomedical articles and (2) health-related social media blog posts. We developed an intelligent and scalable text mining solution on big data infrastructures composed of Apache Spark, natural language processing, and machine learning. This was combined with an Elasticsearch No-SQL distributed database to explore and visualize ADEs. Results The accuracy, precision, recall, and area under receiver operating characteristic of the system were 92.7%, 93.6%, 93.0%, and 0.905, respectively, and showed better results in comparison with traditional approaches in the literature. This work not only detected and classified ADE sentences from big data biomedical literature but also scientifically visualized ADE interactions. Conclusions To the best of our knowledge, this work is the first to investigate a big data machine learning strategy for ADE discovery on massive datasets downloaded from PubMed Central and social media. This contribution illustrates possible capacities in big data biomedical text analysis using advanced computational methods with real-time update from new data published on a daily basis. PMID:29222076
Optimisation of flight dynamic control based on many-objectives meta-heuristic: a comparative study
NASA Astrophysics Data System (ADS)
Bureerat, Sujin; Pholdee, Nantiwat; Radpukdee, Thana
2018-05-01
Development of many-objective meta-heuristics (MnMHs) is currently a topic of interest, as they are suitable for real-world optimisation problems, which usually involve many objectives. However, most MnMHs have been developed and tested on standard test functions, while their use in real applications is rare. Therefore, in this work, MnMHs are applied to the optimisation design of flight dynamic control. The design problem is posed as finding control gains that minimise the control effort, the spiral root, the damping-in-roll root and the sideslip angle deviation, and maximise the damping ratio of the dutch-roll complex pair, the dutch-roll frequency, and the bank angle at pre-specified times of 1 second and 2.8 seconds, subject to several constraints based on Military Specifications (1969) requirements. Several established many-objective meta-heuristics (MnMHs) are used to solve the problem and their performances are compared. With this research work, the performance of several MnMHs for flight control is investigated. The results obtained will serve as a baseline for future development of flight dynamics and control.
Structure zone diagram and particle incorporation of nickel brush plated composite coatings
Isern, L.; Impey, S.; Almond, H.; Clouser, S. J.; Endrino, J. L.
2017-01-01
This work studies the deposition of aluminium-incorporated nickel coatings by brush electroplating, focusing on the electroplating setup and processing parameters. The setup was optimised in order to increase the volume of particle incorporation. The optimised design focused on increasing the plating solution flow to avoid sedimentation, and as a result the particle transport experienced a three-fold increase when compared with the traditional setup. The influence of bath load, current density and the brush material used was investigated. Both current density and brush material have a significant impact on the morphology and composition of the coatings. Higher current densities and non-abrasive brushes produce rough, particle-rich samples. Different combinations of these two parameters influence the surface characteristics differently, as illustrated in a Structure Zone Diagram. Finally, surfaces featuring crevices and peaks incorporate between 3.5 and 20 times more particles than smoother coatings. The presence of such features has been quantified using average surface roughness Ra and Abbott-Firestone curves. The combination of optimised setup and rough surface increased the particle content of the composite to 28 at.%. PMID:28300159
Ali, Sikander; Nawaz, Wajeeha
2017-02-01
The optimisation of nutritional requirements for dopamine (DA) synthesis by a calcium alginate-entrapped mutant variant of Aspergillus oryzae EMS-6, using a submerged fermentation technique, was investigated. A total of 13 strains were isolated from soil. Isolate I-2 was selected as the better producer of DA and improved by exposure to ethyl methylsulphonate (EMS). EMS-6 was selected as it exhibited 43 μg/mL DA activity. The mutant variant was further treated with low levels of l-cysteine HCl to make it resistant against diversion and environmental stress. The conidiospores of the mutant variant were entrapped in calcium alginate beads for stable product formation. EMS-6 gave maximum DA activity (124 μg/mL) when supplemented with 0.1% peptone and 0.2% sucrose, under optimised parameters viz. pH 3, a temperature of 55 °C and an incubation time of 70 min. The study demonstrates a high level of DA activity and is needed, as DA is capable of controlling numerous neurogenic disorders.
Setyaningsih, W; Saputro, I E; Palma, M; Barroso, C G
2015-02-15
A new microwave-assisted extraction (MAE) method has been investigated for the extraction of phenolic compounds from rice grains. The experimental conditions studied included temperature (125-175°C), microwave power (500-1000W), time (5-15min), solvent (10-90% EtOAc in MeOH) and solvent-to-sample ratio (10:1 to 20:1). The extraction variables were optimised by the response surface methodology. Extraction temperature and solvent were found to have a highly significant effect on the response value (p<0.0005) and the extraction time also had a significant effect (p<0.05). The optimised MAE conditions were as follows: extraction temperature 185°C, microwave power 1000W, extraction time 20min, solvent 100% MeOH, and solvent-to-sample ratio 10:1. The developed method had a high precision (in terms of CV: 5.3% for repeatability and 5.5% for intermediate precision). Finally, the new method was applied to real samples in order to investigate the presence of phenolic compounds in a wide variety of rice grains. Copyright © 2014 Elsevier Ltd. All rights reserved.
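The response-surface step used in such optimisations amounts to fitting a quadratic model in the coded factors and locating its stationary point. The sketch below is a generic two-factor illustration with synthetic data, not the study's five-factor design or measurements.

```python
# Fit a quadratic response surface y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2
# + b12*x1*x2 and solve for the stationary point (here a maximum).
import numpy as np

rng = np.random.default_rng(5)
X = rng.uniform(-1, 1, (20, 2))                    # coded factors, e.g. temperature, time
y = (5 + 1.2 * X[:, 0] + 0.8 * X[:, 1] - 1.5 * X[:, 0] ** 2 - 1.0 * X[:, 1] ** 2
     + 0.3 * X[:, 0] * X[:, 1] + rng.normal(0, 0.05, 20))

A = np.column_stack([np.ones(20), X[:, 0], X[:, 1],
                     X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
b, *_ = np.linalg.lstsq(A, y, rcond=None)
H = np.array([[2 * b[3], b[5]],                    # Hessian of the fitted surface
              [b[5], 2 * b[4]]])
x_opt = np.linalg.solve(H, -b[1:3])                # stationary point in coded units
print("optimum (coded units):", x_opt)
```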
Structure zone diagram and particle incorporation of nickel brush plated composite coatings
NASA Astrophysics Data System (ADS)
Isern, L.; Impey, S.; Almond, H.; Clouser, S. J.; Endrino, J. L.
2017-03-01
This work studies the deposition of aluminium-incorporated nickel coatings by brush electroplating, focusing on the electroplating setup and processing parameters. The setup was optimised in order to increase the volume of particle incorporation. The optimised design focused on increasing the plating solution flow to avoid sedimentation, and as a result the particle transport experienced a three-fold increase when compared with the traditional setup. The influence of bath load, current density and the brush material used was investigated. Both current density and brush material have a significant impact on the morphology and composition of the coatings. Higher current densities and non-abrasive brushes produce rough, particle-rich samples. Different combinations of these two parameters influence the surface characteristics differently, as illustrated in a Structure Zone Diagram. Finally, surfaces featuring crevices and peaks incorporate between 3.5 and 20 times more particles than smoother coatings. The presence of such features has been quantified using average surface roughness Ra and Abbott-Firestone curves. The combination of optimised setup and rough surface increased the particle content of the composite to 28 at.%.
A review on simple assembly line balancing type-e problem
NASA Astrophysics Data System (ADS)
Jusop, M.; Rashid, M. F. F. Ab
2015-12-01
Simple assembly line balancing (SALB) is the problem of assigning tasks to workstations along the line so that the precedence relations are satisfied and some performance measure is optimised. Advanced algorithms are necessary to solve large-scale problems because SALB is NP-hard. Only a few studies focus on the simple assembly line balancing problem of Type E (SALB-E), since it is a general and complex problem. SALB-E is the variant of SALB that considers the number of workstations and the cycle time simultaneously in order to maximise line efficiency. This paper reviews previous work on optimising the SALB-E problem. It also reviews the Genetic Algorithm approaches that have been used to optimise SALB-E. From the review, it was found that none of the existing works consider resource constraints in the SALB-E problem, especially machine and tool constraints. Research on SALB-E will contribute to improving productivity in real industrial applications.
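For readers unfamiliar with the SALB-E objective mentioned above, the following tiny Python sketch computes the usual line-efficiency measure (sum of task times divided by the number of workstations times the cycle time) for an invented task set; the task times, station count and cycle time are arbitrary.

# Minimal sketch of the line-efficiency measure that SALB-E maximises:
# E = sum(task times) / (number of workstations * cycle time)
def line_efficiency(task_times, n_stations, cycle_time):
    return sum(task_times) / (n_stations * cycle_time)

# Hypothetical 8-task problem balanced onto 3 stations with cycle time 12
print(line_efficiency([5, 3, 6, 4, 2, 7, 3, 4], n_stations=3, cycle_time=12))  # 34/36 ≈ 0.94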
Kanwal, Jasmeen; Smith, Kenny; Culbertson, Jennifer; Kirby, Simon
2017-08-01
The linguist George Kingsley Zipf made a now classic observation about the relationship between a word's length and its frequency; the more frequent a word is, the shorter it tends to be. He claimed that this "Law of Abbreviation" is a universal structural property of language. The Law of Abbreviation has since been documented in a wide range of human languages, and extended to animal communication systems and even computer programming languages. Zipf hypothesised that this universal design feature arises as a result of individuals optimising form-meaning mappings under competing pressures to communicate accurately but also efficiently-his famous Principle of Least Effort. In this study, we use a miniature artificial language learning paradigm to provide direct experimental evidence for this explanatory hypothesis. We show that language users optimise form-meaning mappings only when pressures for accuracy and efficiency both operate during a communicative task, supporting Zipf's conjecture that the Principle of Least Effort can explain this universal feature of word length distributions. Copyright © 2017 Elsevier B.V. All rights reserved.
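A minimal sketch of how the Law of Abbreviation can be checked on any corpus is given below: it computes the Spearman correlation between word frequency and word length with SciPy. The sample text is an invented stand-in, not the study's artificial-language data.

# Hedged sketch: frequent words tending to be shorter shows up as a negative
# correlation between frequency and length.
from collections import Counter
from scipy.stats import spearmanr

text = ("the cat sat on the mat and the very big cat saw the small mat "
        "while the dog ran across the extraordinarily big field").split()
freq = Counter(text)
words = list(freq)
rho, p = spearmanr([freq[w] for w in words], [len(w) for w in words])
print(f"Spearman rho={rho:.2f}, p={p:.3f}")  # negative rho is consistent with the law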
Structure zone diagram and particle incorporation of nickel brush plated composite coatings.
Isern, L; Impey, S; Almond, H; Clouser, S J; Endrino, J L
2017-03-16
This work studies the deposition of aluminium-incorporated nickel coatings by brush electroplating, focusing on the electroplating setup and processing parameters. The setup was optimised in order to increase the volume of particle incorporation. The optimised design focused on increasing the plating solution flow to avoid sedimentation, and as a result the particle transport experienced a three-fold increase when compared with the traditional setup. The influence of bath load, current density and the brush material used was investigated. Both current density and brush material have a significant impact on the morphology and composition of the coatings. Higher current densities and non-abrasive brushes produce rough, particle-rich samples. Different combinations of these two parameters influence the surface characteristics differently, as illustrated in a Structure Zone Diagram. Finally, surfaces featuring crevices and peaks incorporate between 3.5 and 20 times more particles than smoother coatings. The presence of such features has been quantified using average surface roughness Ra and Abbott-Firestone curves. The combination of optimised setup and rough surface increased the particle content of the composite to 28 at.%.
Dellson, P; Nilbert, M; Bendahl, P-O; Malmström, P; Carlsson, C
2011-07-01
Clinical trials are crucial to improve cancer treatment but recruitment is difficult. Optimised patient information has been recognised as a key issue. In line with the increasing focus on patients' perspectives in health care, we aimed to study patients' opinions about the written information used in three clinical trials for breast cancer. Primary data collection was done in focus group interviews with breast cancer patient advocates. Content analysis identified three major themes: comprehensibility, emotions and associations, and decision making. Based on the advocates' suggestions for improvements, 21 key issues were defined and validated through a questionnaire in an independent group of breast cancer patient advocates. Clear messages, emotionally neutral expressions, careful descriptions of side effects, clear comparisons between different treatment alternatives and information about the possibility to discontinue treatment were perceived as the most important issues. Patients' views of the information in clinical trials provide new insights and identify key issues to consider in optimising future written information and may improve recruitment to clinical cancer trials. © 2010 Blackwell Publishing Ltd.
Guillaume, Y C; Peyrin, E
2000-03-06
A chemometric methodology is proposed to study the separation of seven p-hydroxybenzoic esters in reversed phase liquid chromatography (RPLC). Fifteen experiments were found to be necessary to find a mathematical model which linked a novel chromatographic response function (CRF) with the column temperature, the water fraction in the mobile phase and its flow rate. The CRF optimum was determined using a new algorithm based on Glover's taboo search (TS). A flow-rate of 0.9 ml min(-1) with a water fraction of 0.64 in the ACN-water mixture and a column temperature of 10 degrees C gave the most efficient separation conditions. The usefulness of TS was compared with the pure random search (PRS) and simplex search (SS). As demonstrated by calculations, the algorithm avoids entrapment in local minima and continues the search to give a near-optimal final solution. Unlike other methods of global optimisation, this procedure is generally applicable, easy to implement, derivative free, conceptually simple and could be used in the future for much more complex optimisation problems.
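The following Python sketch illustrates the kind of taboo (tabu) search described above on a discretised grid of flow rate, water fraction and temperature; the response function is a made-up placeholder centred on the optimum reported in the abstract, not the actual chromatographic response function from the paper.

# Illustrative tabu search over a discretised (flow rate, water fraction,
# temperature) grid, maximising a placeholder response function.
import itertools, random

flows = [round(0.5 + 0.1 * i, 1) for i in range(11)]        # 0.5-1.5 ml/min
waters = [round(0.50 + 0.02 * i, 2) for i in range(16)]      # 0.50-0.80 water fraction
temps = list(range(10, 41, 5))                               # 10-40 degrees C

def crf(f, w, t):  # placeholder surface with its optimum at the reported conditions
    return -(f - 0.9) ** 2 - 20 * (w - 0.64) ** 2 - 0.001 * (t - 10) ** 2

def neighbours(state):
    fi, wi, ti = state
    for dfi, dwi, dti in itertools.product((-1, 0, 1), repeat=3):
        nfi, nwi, nti = fi + dfi, wi + dwi, ti + dti
        if (dfi, dwi, dti) != (0, 0, 0) and 0 <= nfi < len(flows) \
                and 0 <= nwi < len(waters) and 0 <= nti < len(temps):
            yield nfi, nwi, nti

random.seed(0)
current = best = (random.randrange(len(flows)), random.randrange(len(waters)),
                  random.randrange(len(temps)))
tabu, tenure = [], 10
for _ in range(200):
    cands = [s for s in neighbours(current) if s not in tabu]
    if not cands:
        break
    current = max(cands, key=lambda s: crf(flows[s[0]], waters[s[1]], temps[s[2]]))
    tabu.append(current)
    tabu = tabu[-tenure:]                                     # short-term memory
    if crf(flows[current[0]], waters[current[1]], temps[current[2]]) > \
       crf(flows[best[0]], waters[best[1]], temps[best[2]]):
        best = current
print("best:", flows[best[0]], "ml/min,", waters[best[1]], "water,", temps[best[2]], "degrees C")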
Big Bubbles in Boiling Liquids: Students' Views
ERIC Educational Resources Information Center
Costu, Bayram
2008-01-01
The aim of this study was to elicit students' conceptions about big bubbles in boiling liquids (water, ethanol and aqueous CuSO[subscript 4] solution). The study is based on twenty-four students at different ages and grades. The clinical interview technique was used to elicit students' conceptions and the interviews were analyzed to…
Psychosocial Development and the Big Five Personality Traits among Chinese University Students
ERIC Educational Resources Information Center
Zhang, Li-fang
2013-01-01
This study explores how psychosocial development and personality traits are related. In particular, the study investigates the predictive power of the successful resolution of the Eriksonian psychosocial crises for the Big Five personality traits beyond age and gender. Four hundred university students in mainland China responded to the Measures of…
Moving Every Child Ahead: The Big6 Success Strategy.
ERIC Educational Resources Information Center
Berkowitz, Bob; Serim, Ferdi
2002-01-01
Explains the Big6 approach to teaching information skills and describes its use in a high school social studies class to improve student test scores, teach them how to learn, and improve the teachers' skills. Highlights include the balance between content and process, formative and summative evaluation, assignment organizers, and study tips. (LRW)
Tunable and Reconfigurable Optical Negative-Index Materials with Low Losses
2012-01-21
• … (h-MMs) can be used to study metric signature transitions and the cosmological “Big Bang”. • A theory for basic nonlinear optical processes in NIMs and in double…
Ecology of Virginia big-eared bats in North Carolina and Tennessee.
DOT National Transportation Integrated Search
2016-08-24
The researchers conducted a study of the springtime ecology of an isolated North Carolina-Tennessee population of the Virginia big-eared bat (Corynorhinus townsendii virginianus), a federally endangered species. With limited data on the whereabouts o...
Higton, D M
2001-01-01
An improvement to the procedure for the rapid optimisation of mass spectrometry (PROMS), for the development of multiple reaction monitoring (MRM) methods for quantitative bioanalytical liquid chromatography/tandem mass spectrometry (LC/MS/MS), is presented. PROMS is an automated protocol that uses flow-injection analysis (FIA) and AppleScripts to create methods and acquire the data for optimisation. The protocol determines the optimum orifice potential, the MRM conditions for each compound, and finally creates the MRM methods needed for sample analysis. The sensitivities of the MRM methods created by PROMS approach those created manually. MRM method development using PROMS currently takes less than three minutes per compound compared to at least fifteen minutes manually. To further enhance throughput, approaches to MRM optimisation using one injection per compound, two injections per pool of five compounds and one injection per pool of five compounds have been investigated. No significant difference in the optimised instrumental parameters for MRM methods was found between the original PROMS approach and these new methods, which are up to ten times faster. The time taken for an AppleScript to determine the optimum conditions and build the MRM methods is the same with all approaches. Copyright 2001 John Wiley & Sons, Ltd.
Optimised analytical models of the dielectric properties of biological tissue.
Salahuddin, Saqib; Porter, Emily; Krewer, Finn; O' Halloran, Martin
2017-05-01
The interaction of electromagnetic fields with the human body is quantified by the dielectric properties of biological tissues. These properties are incorporated into complex numerical simulations using parametric models such as Debye and Cole-Cole, for the computational investigation of electromagnetic wave propagation within the body. These parameters can be acquired through a variety of optimisation algorithms to achieve an accurate fit to measured data sets. A number of different optimisation techniques have been proposed, but these are often limited by the requirement for initial value estimations or by the large overall error (often up to several percentage points). In this work, a novel two-stage genetic algorithm proposed by the authors is applied to optimise the multi-pole Debye parameters for 54 types of human tissues. The performance of the two-stage genetic algorithm has been examined through a comparison with five other existing algorithms. The experimental results demonstrate that the two-stage genetic algorithm produces an accurate fit to a range of experimental data and efficiently out-performs all other optimisation algorithms under consideration. Accurate values of the three-pole Debye models for 54 types of human tissues, over 500 MHz to 20 GHz, are also presented for reference. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
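As a rough illustration of fitting multi-pole Debye parameters to complex permittivity data, the sketch below uses SciPy's differential evolution as a generic stochastic optimiser in place of the authors' two-stage genetic algorithm; the two-pole-plus-conductivity model, frequency grid and synthetic "measured" data are assumptions for demonstration only.

# Hedged sketch: fitting a Debye-type dispersion model to synthetic data.
import numpy as np
from scipy.optimize import differential_evolution

eps0 = 8.854e-12
f = np.logspace(8.7, 10.3, 60)           # roughly 0.5-20 GHz
w = 2 * np.pi * f

def debye(params, w):
    eps_inf, d1, t1, d2, t2, sigma = params
    return (eps_inf + d1 / (1 + 1j * w * t1) + d2 / (1 + 1j * w * t2)
            + sigma / (1j * w * eps0))

true = [4.0, 30.0, 8e-12, 15.0, 80e-12, 0.7]      # invented "tissue" parameters
measured = debye(true, w)

def cost(params):
    model = debye(params, w)
    return np.mean(np.abs((model - measured) / measured) ** 2)

bounds = [(1, 10), (1, 60), (1e-12, 5e-11), (1, 60), (1e-11, 5e-10), (0, 2)]
result = differential_evolution(cost, bounds, seed=1, tol=1e-8)
print(np.round(result.x, 3))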
Saikia, Sangeeta; Mahnot, Nikhil Kumar; Mahanta, Charu Lata
2015-03-15
Optimisation of the extraction of polyphenols from star fruit (Averrhoa carambola) pomace using response surface methodology was carried out. Two variables, viz. temperature (°C) and ethanol concentration (%), with 5 levels (-1.414, -1, 0, +1 and +1.414) were used to build the optimisation model using a central composite rotatable design, where -1.414 and +1.414 are the axial points, -1 and +1 the factorial points and 0 the centre point of the design. A temperature of 40°C and an ethanol concentration of 65% were the optimised conditions for the response variables of total phenolic content, ferric reducing antioxidant capacity and 2,2-diphenyl-1-picrylhydrazyl scavenging activity. The reverse phase-high pressure liquid chromatography chromatogram of the polyphenol extract showed eight phenolic acids and ascorbic acid. The extract was then encapsulated with maltodextrin (⩽ DE 20) by spray and freeze drying methods at three different concentrations. The highest encapsulation efficiency was obtained for the freeze-dried encapsulates (78-97%). The optimised model could be used for polyphenol extraction from star fruit pomace, and the microencapsulates can be incorporated in different food systems to enhance their antioxidant properties. Copyright © 2014 Elsevier Ltd. All rights reserved.
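The central composite rotatable design referred to above can be written out explicitly; the short Python sketch below generates the coded factorial, axial (±1.414) and centre runs for two factors and decodes them to physical units, where the centre values, step sizes and number of centre replicates are assumptions, not values taken from the study.

# Sketch of a two-factor central composite rotatable design in coded units.
import itertools

alpha = 1.414
factorial = list(itertools.product((-1, 1), repeat=2))
axial = [(-alpha, 0), (alpha, 0), (0, -alpha), (0, alpha)]
centre = [(0, 0)] * 5                      # number of centre replicates assumed

def decode(coded, centre_val, step):
    return centre_val + coded * step

design = factorial + axial + centre
for x1, x2 in design:
    temp = decode(x1, 40, 10)              # assumed centre 40 degrees C, step 10
    etoh = decode(x2, 65, 15)              # assumed centre 65 %, step 15
    print(f"coded=({x1:+.3f},{x2:+.3f})  T={temp:5.1f} C  EtOH={etoh:5.1f} %")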
Analysis of power gating in different hierarchical levels of 2MB cache, considering variation
NASA Astrophysics Data System (ADS)
Jafari, Mohsen; Imani, Mohsen; Fathipour, Morteza
2015-09-01
This article reintroduces the power gating technique at different hierarchical levels of static random-access memory (SRAM) design, including cell, row, bank and entire cache memory, in 16 nm FinFET technology. Different SRAM cell structures such as 6T, 8T, 9T and 10T are used in the design of a 2MB cache memory. The power reduction of the entire cache memory employing cell-level optimisation is 99.7%, at the expense of area and other stability overheads. The power saving of the cell-level optimisation is 3× (1.2×) higher than power gating at the cache (bank) level due to its superior selectivity. The access delay times are allowed to increase by 4% at the same energy delay product to achieve the best power reduction for each supply voltage and optimisation level. The results show that row-level power gating is the best for optimising the power of the entire cache with the lowest drawbacks. Comparison of the cells shows that cells whose bodies have higher power consumption are the best candidates for the power gating technique in row-level optimisation. The technique has the lowest percentage of saving at the minimum energy point (MEP) of the design. Power gating also improves the power variation in all structures by at least 70%.
Big Data Science Education: A Case Study of a Project-Focused Introductory Course
ERIC Educational Resources Information Center
Saltz, Jeffrey; Heckman, Robert
2015-01-01
This paper reports on a case study of a project-focused introduction to big data science course. The pedagogy of the course leveraged boundary theory, where students were positioned to be at the boundary between a client's desire to understand their data and the academic class. The results of the case study demonstrate that using live clients…
ERIC Educational Resources Information Center
Wang, Ze
2015-01-01
Using data from the Trends in International Mathematics and Science Study (TIMSS) 2007, this study examined the big-fish-little-pond-effects (BFLPEs) in 49 countries. In this study, the effect of math ability on math self-concept was decomposed into a within- and a between-level components using implicit mean centring and the complex data…
Serum big endothelin-1 as a clinical marker for cardiopulmonary and neoplastic diseases in dogs.
Fukumoto, Shinya; Hanazono, Kiwamu; Miyasho, Taku; Endo, Yoshifumi; Kadosawa, Tsuyoshi; Iwano, Hidetomo; Uchide, Tsuyoshi
2014-11-24
Many studies of human subjects have demonstrated the utility of assessing serum levels of endothelin-1 (ET-1) and big ET-1 as clinical biomarkers in cardiopulmonary and neoplastic diseases. In this study we explored the feasibility of using serum big ET-1 as a reliable veterinary marker in dogs with various cardiopulmonary and neoplastic diseases. Serum big ET-1 levels were measured by ELISA in dogs with cardiopulmonary (n=21) and neoplastic diseases (n=57). Dogs exhibiting cardiopulmonary disease were divided into two groups based on the velocity of tricuspid valve regurgitation (>3.0 m/s) measured by ultrasound: without and with pulmonary hypertension. Big ET-1 levels for the dogs with the diseases were compared with levels in normal healthy dogs (n=17). Dogs with cardiopulmonary disease (4.6±4.6 pmol/l) showed a significantly (P<0.01) higher level of big ET-1 than healthy control dogs (1.1±0.53 pmol/l). Serum levels in the dogs with pulmonary hypertension (6.2±5.3 pmol/l) were significantly (P<0.01) higher than those without pulmonary hypertension (2.0±0.6 pmol/l). Dogs with hemangiosarcoma (5.6±2.2 pmol/l), adenocarcinoma (2.0±1.8 pmol/l), histiocytic sarcoma (3.3±1.9 pmol/l), chondrosarcoma or osteosarcoma (3.0±1.6 pmol/l) and hepatocellular carcinoma (2.7±1.8 pmol/l) showed significantly (P<0.05) higher levels than healthy control dogs. These findings point to the potential of serum big ET-1 as a clinical marker for cardiopulmonary and neoplastic diseases in dogs. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
Ebersbach, Georg; Ebersbach, Almut; Gandor, Florin; Wegner, Brigitte; Wissel, Jörg; Kupsch, Andreas
2014-05-01
To determine whether physical activity may affect cognitive performance in patients with Parkinson's disease by measuring reaction times in patients participating in the Berlin BIG study. Randomized controlled trial, rater-blinded. Ambulatory care. Patients with mild to moderate Parkinson's disease (N=60) were randomly allocated to 3 treatment arms. Outcome was measured at the termination of training and at follow-up 16 weeks after baseline in 58 patients (completers). Patients received 16 hours of individual Lee Silverman Voice Treatment-BIG training (BIG; duration of treatment, 4wk), 16 hours of group training with Nordic Walking (WALK; duration of treatment, 8wk), or nonsupervised domestic exercise (HOME; duration of instruction, 1hr). Cued reaction time (cRT) and noncued reaction time (nRT). Differences between treatment groups in improvement in reaction times from baseline to intermediate and baseline to follow-up assessments were observed for cRT but not for nRT. Pairwise t test comparisons revealed differences in change in cRT at both measurements between BIG and HOME groups (intermediate: -52ms; 95% confidence interval [CI], -84/-20; P=.002; follow-up: -55ms; CI, -105/-6; P=.030) and between WALK and HOME groups (intermediate: -61ms; CI, -120/-2; P=.042; follow-up: -78ms; CI, -136/-20; P=.010). There was no difference between BIG and WALK groups (intermediate: 9ms; CI, -49/67; P=.742; follow-up: 23ms; CI, -27/72; P=.361). Supervised physical exercise with Lee Silverman Voice Treatment-BIG or Nordic Walking is associated with improvement in cognitive aspects of movement preparation. Copyright © 2014 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Yoon, Seung-Chul; Park, Bosoon; Lawrence, Kurt C.
2017-05-01
Various types of optical imaging techniques measuring light reflectivity and scattering can detect microbial colonies of foodborne pathogens on agar plates. Until recently, these techniques were developed to provide solutions for hypothesis-driven studies, which focused on developing tools and batch/offline machine learning methods with well defined sets of data. These have relatively high accuracy and rapid response time because the tools and methods are often optimized for the collected data. However, they often need to be retrained or recalibrated when new untrained data and/or features are added. A big-data driven technique is more suitable for online learning of new/ambiguous samples and for mining unknown or hidden features. Although big data research in hyperspectral imaging is emerging in remote sensing and many tools and methods have been developed so far in many other applications such as bioinformatics, the tools and methods still need to be evaluated and adjusted in applications where the conventional batch machine learning algorithms were dominant. The primary objective of this study is to evaluate appropriate big data analytic tools and methods for online learning and mining of foodborne pathogens on agar plates. After the tools and methods are successfully identified, they will be applied to rapidly search big color and hyperspectral image data of microbial colonies collected over the past 5 years in house and find the most probable colony or a group of colonies in the collected big data. The meta-data, such as collection time and any unstructured data (e.g. comments), will also be analyzed and presented with output results. The expected results will be novel, big data-driven technology to correctly detect and recognize microbial colonies of various foodborne pathogens on agar plates.
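The batch versus online learning distinction discussed above can be illustrated with scikit-learn's incremental API, as in the hedged sketch below; the classifier choice, feature dimensions and streaming batches are invented and merely stand in for hyperspectral colony data.

# Hedged sketch: an incrementally updated classifier, updated batch by batch
# with partial_fit instead of being retrained from scratch.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
clf = SGDClassifier()
classes = np.array([0, 1])                     # e.g. non-target vs pathogen colony

for batch in range(5):                         # streaming batches of new samples
    X = rng.normal(size=(20, 10))              # 10 made-up spectral features
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    clf.partial_fit(X, y, classes=classes)     # model updated without full retraining

print(clf.predict(rng.normal(size=(3, 10))))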
Cheong, Ai M; Tan, Chin P; Nyam, Kar L
2018-01-01
Kenaf ( Hibiscus cannabinus L.) seed oil has been proven for its multi-pharmacological benefits; however, its poor water solubility and stability have limited its industrial applications. This study was aimed to further improve the stability of pre-developed kenaf seed oil-in-water nanoemulsions by using food-grade ternary emulsifiers. The effects of emulsifier concentration (1, 5, 10, 15% w/w), homogenisation pressure (16,000, 22,000, 28,000 psi), and homogenisation cycles (three, four, five cycles) were studied to produce high stability of kenaf seed oil-in-water nanoemulsions using high pressure homogeniser. Generally, results showed that the emulsifier concentration and homogenisation conditions had great effect ( p < 0.05) on the particle sizes, polydispersity index and hence the physical stability of nanoemulsions. Homogenisation parameters at 28,000 psi for three cycles produced the most stable homogeneous nanoemulsions that were below 130 nm, below 0.16, and above -40 mV of particle size, polydispersity index, and zeta potential, respectively. Field emission scanning electron microscopy micrograph showed that the optimised nanoemulsions had a good distribution within nano-range. The optimised nanoemulsions were proved to be physically stable for up to six weeks of storage at room temperature. The results from this study also provided valuable information in producing stable kenaf seed oil nanoemulsions for the future application in food and nutraceutical industries.
Santos Souza, Higo Fernando; Real, Daniel; Leonardi, Darío; Rocha, Sandra Carla; Alonso, Victoria; Serra, Esteban; Silber, Ariel Mariano; Salomon, Claudio Javier
2017-12-01
To develop an alcohol-free solution of benznidazole, the drug of choice for the treatment of Chagas disease, that is suitable for children. In a quality-by-design approach, a systematic optimisation procedure was carried out to estimate the values of the factors leading to the maximum drug concentration. The formulations were analysed in terms of chemical and physical stability and drug content. The final preparation was subjected to an in vivo palatability assay. Mice were infected and treated orally in a murine model. The results showed that benznidazole solubility increased up to 18.38 mg/ml in the optimised co-solvent system. The final formulation remained stable at all three temperatures tested, with suitable drug content and no significant variability. Palatability of the preparation was improved by taste masking of benznidazole. In vivo studies showed that both parasitaemia and mortality diminished, particularly at a dose of 40 mg/kg/day. Quality by design was a suitable approach to formulate a co-solvent system of benznidazole. The in vivo studies confirmed the suitability of such optimised solutions for diminishing both parasitaemia and mortality. Thus, this novel alternative should be taken into account for further clinical evaluation in all age ranges. © 2017 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Azadeh, A.; Foroozan, H.; Ashjari, B.; Motevali Haghighi, S.; Yazdanparast, R.; Saberi, M.; Torki Nejad, M.
2017-10-01
Information systems (ISs) and information technologies (ITs) play a critical role in large, complex gas corporations. Many factors, such as human, organisational and environmental factors, affect IS performance in an organisation; investigating IS success is therefore a complex problem. Moreover, because of the competitive business environment and the high volume of information flow in organisations, new issues such as resilient ISs and successful customer relationship management (CRM) have emerged. A resilient IS provides sustainable delivery of information to internal and external customers. This paper presents an integrated approach to enhance and optimise the performance of each component of a large IS based on CRM and resilience engineering (RE) in a gas company. Enhancing this performance helps the IS to perform business tasks efficiently. The data are collected from standard questionnaires and then analysed by data envelopment analysis (DEA), selecting the optimal mathematical programming approach. The selected model is validated and verified by the principal component analysis method. Finally, CRM and RE factors are identified as influential factors through sensitivity analysis for this particular case study. To the best of our knowledge, this is the first study on performance assessment and optimisation of a large IS by combining RE and CRM.
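As a generic illustration of the data envelopment analysis step mentioned above, the following Python sketch solves the input-oriented CCR model in multiplier form with SciPy's linprog for a few invented decision-making units; it is not the specific DEA formulation or data of the study.

# Input-oriented CCR DEA (multiplier form): maximise u.y_o subject to
# v.x_o = 1 and u.y_j - v.x_j <= 0 for every DMU j, with u, v >= 0.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0], [4.0, 2.0], [3.0, 5.0], [5.0, 4.0]])   # inputs (4 DMUs x 2)
Y = np.array([[10.0], [12.0], [11.0], [14.0]])                   # outputs (4 DMUs x 1)
n, m = X.shape
s = Y.shape[1]

for o in range(n):
    # variable vector is [u (s outputs), v (m inputs)]
    c = np.concatenate([-Y[o], np.zeros(m)])                     # minimise -u.y_o
    A_ub = np.hstack([Y, -X])                                    # u.y_j - v.x_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[o]]).reshape(1, -1)    # v.x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m), method="highs")
    print(f"DMU {o}: efficiency = {-res.fun:.3f}")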
Optimising μCT imaging of the middle and inner cat ear.
Seifert, H; Röher, U; Staszyk, C; Angrisani, N; Dziuba, D; Meyer-Lindenberg, A
2012-04-01
This study's aim was to determine the optimal scan parameters for imaging the middle and inner ear of the cat with micro-computed tomography (μCT). In addition, the study set out to assess whether adequate image quality can be obtained to use μCT in diagnostics and research on cat ears. For the optimisation, μCT imaging of two cat skull preparations was performed using 36 different scanning protocols. The μCT scans were evaluated by four experienced experts with regard to image quality and detail detectability. By compiling a ranking of the results, the best possible scan parameters could be determined. From a third cat's skull, a μCT scan using these optimised scan parameters and a comparative clinical CT scan were acquired. Afterwards, histological specimens of the ears were produced and compared with the μCT images. The comparison shows that the osseous structures are depicted in detail. Although soft tissues cannot be differentiated, the osseous structures serve as a valuable spatial orientation for the relevant nerves and muscles. Clinical CT can depict many anatomical structures that can also be seen on μCT images, but these appear far less sharp and less detailed than with μCT. © 2011 Blackwell Verlag GmbH.
NASA Astrophysics Data System (ADS)
Griggs, Adam J.; Davies, Siwan M.; Abbott, Peter M.; Rasmussen, Tine L.; Palmer, Adrian P.
2014-12-01
Tephrochronology is central to the INTIMATE goals for testing the degree of climatic synchroneity during abrupt climatic events that punctuated the last glacial period. Since their identification in North Atlantic marine sequences, the Faroe Marine Ash Zone II (FMAZ II), FMAZ III and FMAZ IV have received considerable attention due to their potential for high-precision synchronisation with the Greenland ice-cores. In order to optimise the use of these horizons as isochronous markers, a detailed re-investigation of their geochemical composition, sedimentology and the processes that deposited each ash zone is presented. Shard concentration profiles, geochemical homogeneity and micro-sedimentological structures are investigated for each ash zone preserved within core JM11-19PC, retrieved from the southeastern Norwegian Sea on the central North Faroe Slope. This approach allows a thorough assessment of primary ash-fall preservation and secondary depositional features and demonstrates its value for assessing depositional integrity in the marine environment. Results indicate that the FMAZ II and IV are well-resolved primary deposits that can be used as isochrons for high-precision correlation studies. We outline key recommendations for future marine tephra studies and provide a protocol for optimising the application of tephrochronology to meet the INTIMATE synchronisation goals.
Sabater, Carlos; Corzo, Nieves; Olano, Agustín; Montilla, Antonia
2018-06-15
The aim of this study was to optimise pectin extraction from artichoke by-products with Celluclast® 1.5L using an experimental design analysed by response-surface methodology (RSM). The variables optimised were artichoke by-product powder concentration (2-7%, X1), enzyme dose (2.2-13.3 U g⁻¹, X2) and extraction time (6-24 h, X3). The responses studied were galacturonic acid (GalA) content (R² 93.9), pectic neutral sugars content (R² 92.8) and pectin yield (R² 88.6). In the optimum extraction conditions (X1 = 6.5%; X2 = 10.1 U g⁻¹; X3 = 27.2 h), the pectin yield was 176 mg g⁻¹ dry matter (DM). Considering 27.2 h of treatment as the +α value given by the design, the extraction time was increased up to 48 h, obtaining a yield of 221 mg g⁻¹ DM. The optimised enzymatic method allows artichoke pectin to be obtained with good yield, high GalA (720 mg g⁻¹ DM) and arabinose (127.6 mg g⁻¹ DM) contents and a degree of methylation of 19.5%. Copyright © 2018 Elsevier Ltd. All rights reserved.
Using modified fruit fly optimisation algorithm to perform the function test and case studies
NASA Astrophysics Data System (ADS)
Pan, Wen-Tsao
2013-06-01
Evolutionary computation is a computing paradigm established by simulating natural evolutionary processes based on Darwinian theory, and it is a common research method. The main contribution of this paper is to strengthen the ability of the fruit fly optimization algorithm (FOA) to search for the optimal solution and to avoid becoming trapped in local extrema. Evolutionary computation has grown to include concepts from animal foraging behaviour and group behaviour. This study discusses three common evolutionary computation methods and compares them with the modified fruit fly optimization algorithm (MFOA). It further investigates the ability of the algorithms to compute the extreme values of three mathematical functions, their execution speed, and the forecasting ability of models built using the optimised general regression neural network (GRNN) parameters. The findings indicate that there is no obvious difference between particle swarm optimization and the MFOA with regard to the ability to compute extreme values; however, both are better than the artificial fish swarm algorithm and the FOA. In addition, the MFOA performs better than particle swarm optimization with regard to execution speed, and the forecasting ability of the model built using the MFOA's GRNN parameters is better than that of the other three forecasting models.
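A minimal sketch of the basic FOA loop (random fly placement around the swarm location, a smell-concentration judgement value of 1/distance, and movement of the swarm towards the best fly) is given below for a toy one-dimensional problem; the MFOA modifications evaluated in the paper are not reproduced.

# Minimal basic fruit fly optimisation algorithm for a toy fitness function.
import math, random

def smell(s):                       # toy fitness: minimise (s - 3)^2
    return (s - 3.0) ** 2

random.seed(1)
x_axis, y_axis = random.uniform(0, 1), random.uniform(0, 1)   # swarm location
best_smell = float("inf")
for _ in range(200):                # iterations
    candidates = []
    for _ in range(25):             # population of fruit flies
        x = x_axis + random.uniform(-1, 1)
        y = y_axis + random.uniform(-1, 1)
        dist = math.hypot(x, y)
        s = 1.0 / dist              # smell concentration judgement value
        candidates.append((smell(s), x, y))
    fit, bx, by = min(candidates)
    if fit < best_smell:            # swarm flies towards the best individual
        best_smell, x_axis, y_axis = fit, bx, by
print("best value:", best_smell)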
How Big Are "Martin's Big Words"? Thinking Big about the Future.
ERIC Educational Resources Information Center
Gardner, Traci
"Martin's Big Words: The Life of Dr. Martin Luther King, Jr." tells of King's childhood determination to use "big words" through biographical information and quotations. In this lesson, students in grades 3 to 5 explore information on Dr. King to think about his "big" words, then they write about their own…
[Inappropriate ICD therapies: All problems solved with MADIT-RIT?].
Kolb, Christof
2015-06-01
The MADIT-RIT study represents a major, recently published trial in implantable cardioverter-defibrillator (ICD) therapy. It highlights that different programming strategies (high rate cut-off or delayed therapy versus conventional programming) reduce inappropriate ICD therapies, leave syncope rates unaltered and can improve patient survival. The study should motivate cardiologists and electrophysiologists to reconsider their individual programming strategies. However, as the study largely represents patients with ischemic or dilated cardiomyopathy receiving a dual chamber or cardiac resynchronisation therapy ICD for primary prevention of sudden cardiac death, the results may not easily be transferable to other entities or other device types. Despite the success of the MADIT-RIT study, efforts still need to be made to further optimise device algorithms to avert inappropriate therapies. Optimised ICD therapy also includes the avoidance of unnecessary ICD shocks as well as the treatment of all aspects of the underlying cardiac disease.
Stratigraphic comparison of six oil fields (WV) producing from Big Injun sandstones
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zou, X.; Donaldson, A.C.
1993-08-01
Clustered within western West Virginia, six oil fields produce from the lower Mississippian Big Injun sandstones, and three more oil fields also supplement this production either from underlying Squaw or Weir sandstones. Shales separate these sandstones that occur stratigraphically between the Sunbury Shale (maximum flooding surface) and pre-Greenbrier unconformity (maximum regressive erosional surface), and represent highstand regressive deposits associated with the postorogenic phase of foreland basin accumulation. Stratigraphic studies show two Big Injun sandstones. The upper sandstone, called the Maccrady Big Injun, is separated from the lower Price/Pocono Big Injun sandstone by red shales. Both Big Injun sandstones consist of fine-grained river-mouth bars capped by coarse-grained river-channel deposits. Although the six fields are within three adjacent counties, Maccrady Big Injun sandstones of Blue Creek (Kanawha) and Rock Creek (Roane) fields are younger and were deposited by a different fluvial-deltaic system than the Price/Pocono Big Injun sandstones of Granny Creek (Clay), Tariff (Roane), Clendenin (Clay), and Pond Fork (Kanawha) fields. Upper Weir sandstones are thick, narrow north-trending belts underlying Pond Fork and Blue Creek fields, with properties suggesting wave-dominated shoreline deposits. Allocycles spanning separate drainage systems indicate eustasy. Postorogenic flexural adjustments probably explain stacked sandstone belts with superposed paleovalleys of overlying unconformities (pre-Greenbrier, Pottsville), particularly where aligned along or parallel to basement structures of the Rome trough or West Virginia dome. Initially, differential subsidence or uplift during sedimentation influenced the position, geometry, trend, and distribution patterns of these reservoir sandstones, then influenced their preserved condition during erosion of the pre-Greenbrier unconformity.
Bakken, Suzanne; Reame, Nancy
2016-01-01
Symptom management research is a core area of nursing science and one of the priorities for the National Institute of Nursing Research, which specifically focuses on understanding the biological and behavioral aspects of symptoms such as pain and fatigue, with the goal of developing new knowledge and new strategies for improving patient health and quality of life. The types and volume of data related to the symptom experience, symptom management strategies, and outcomes are increasingly accessible for research. Traditional data streams are now complemented by consumer-generated (i.e., quantified self) and "omic" data streams. Thus, the data available for symptom science can be considered big data. The purposes of this chapter are to (a) briefly summarize the current drivers for the use of big data in research; (b) describe the promise of big data and associated data science methods for advancing symptom management research; (c) explicate the potential perils of big data and data science from the perspective of the ethical principles of autonomy, beneficence, and justice; and (d) illustrate strategies for balancing the promise and the perils of big data through a case study of a community at high risk for health disparities. Big data and associated data science methods offer the promise of multidimensional data sources and new methods to address significant research gaps in symptom management. If nurse scientists wish to apply big data and data science methods to advance symptom management research and promote health equity, they must carefully consider both the promise and perils.
Design and development of a medical big data processing system based on Hadoop.
Yao, Qin; Tian, Yu; Li, Peng-Fei; Tian, Li-Li; Qian, Yang-Ming; Li, Jing-Song
2015-03-01
Secondary use of medical big data is increasingly popular in healthcare services and clinical research. Understanding the logic behind medical big data demonstrates tendencies in hospital information technology and shows great significance for hospital information systems that are designing and expanding services. Big data has four characteristics--Volume, Variety, Velocity and Value (the 4 Vs)--that make traditional systems incapable of processing these data on standalone machines. Apache Hadoop MapReduce is a promising software framework for developing applications that process vast amounts of data in parallel on large clusters of commodity hardware in a reliable, fault-tolerant manner. With the Hadoop framework and the MapReduce application program interface (API), we can more easily develop our own MapReduce applications to run on a Hadoop framework that can scale up from a single node to thousands of machines. This paper investigates a practical case of a Hadoop-based medical big data processing system. We developed this system to intelligently process medical big data and uncover some features of hospital information system user behaviors. This paper studies user behaviors regarding various data produced by different hospital information systems for daily work. In this paper, we also built a five-node Hadoop cluster to execute distributed MapReduce algorithms. Our distributed algorithms show promise in facilitating efficient data processing with medical big data in healthcare services and clinical research compared with single nodes. Additionally, with medical big data analytics, we can design our hospital information systems to be much more intelligent and easier to use by making personalized recommendations.
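The MapReduce pattern described above can be sketched in Python in the style of a Hadoop Streaming mapper/reducer pair, as below; the log format, module names and the small local driver that replaces the stdin/stdout plumbing of a real streaming job are assumptions for illustration, not the system's actual implementation.

# Hadoop Streaming style mapper/reducer pair counting events per hospital
# information system module from hypothetical log lines "timestamp,module,user".
from itertools import groupby

def mapper(lines):
    for line in lines:
        parts = line.strip().split(",")
        if len(parts) == 3:
            _, module, _ = parts
            yield module, 1

def reducer(pairs):
    for module, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield module, sum(count for _, count in group)

if __name__ == "__main__":
    logs = ["2015-01-01T08:00,EMR,u1", "2015-01-01T08:01,LIS,u2",
            "2015-01-01T08:02,EMR,u3"]          # stand-in for stdin in a real job
    for module, total in reducer(mapper(logs)):
        print(f"{module}\t{total}")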
ClimateSpark: An in-memory distributed computing framework for big climate data analytics
NASA Astrophysics Data System (ADS)
Hu, Fei; Yang, Chaowei; Schnase, John L.; Duffy, Daniel Q.; Xu, Mengchao; Bowen, Michael K.; Lee, Tsengdar; Song, Weiwei
2018-06-01
The unprecedented growth of climate data creates new opportunities for climate studies, and yet big climate data pose a grand challenge to climatologists to efficiently manage and analyze big data. The complexity of climate data content and analytical algorithms increases the difficulty of implementing algorithms on high performance computing systems. This paper proposes an in-memory, distributed computing framework, ClimateSpark, to facilitate complex big data analytics and time-consuming computational tasks. Chunking data structure improves parallel I/O efficiency, while a spatiotemporal index is built for the chunks to avoid unnecessary data reading and preprocessing. An integrated, multi-dimensional, array-based data model (ClimateRDD) and ETL operations are developed to address big climate data variety by integrating the processing components of the climate data lifecycle. ClimateSpark utilizes Spark SQL and Apache Zeppelin to develop a web portal to facilitate the interaction among climatologists, climate data, analytic operations and computing resources (e.g., using SQL query and Scala/Python notebook). Experimental results show that ClimateSpark conducts different spatiotemporal data queries/analytics with high efficiency and data locality. ClimateSpark is easily adaptable to other big multiple-dimensional, array-based datasets in various geoscience domains.
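The kind of Spark SQL spatiotemporal query exposed by such a portal can be sketched generically with PySpark, as below; ClimateRDD itself is not reproduced here, so a tiny in-memory DataFrame with assumed column names stands in for the chunked climate arrays.

# Generic PySpark sketch of a spatiotemporal aggregation query.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("climate-query-sketch").getOrCreate()
rows = [("2016-07-01", 38.9, -77.0, 301.2),
        ("2016-07-01", 39.1, -76.8, 299.8),
        ("2016-07-02", 38.9, -77.0, 302.5)]
df = spark.createDataFrame(rows, ["date", "lat", "lon", "temp_k"])
df.createOrReplaceTempView("temperature")

result = spark.sql("""
    SELECT date, AVG(temp_k) AS mean_temp_k
    FROM temperature
    WHERE lat BETWEEN 38.5 AND 39.5 AND lon BETWEEN -77.5 AND -76.5
    GROUP BY date ORDER BY date
""")
result.show()
spark.stop()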
DOE Office of Scientific and Technical Information (OSTI.GOV)
F.M. Bennett; S.C. Loeb; W.W. Bowerman
Rafinesque's big-eared bat (Corynorhinus rafinesquii), an insectivorous mammal indigenous to the southern United States, has long been referred to as one of the least known bats in North America. Although there has been a moderate increase in the number of peer-reviewed articles published on this species in the past 6 years, the basic ecology and status of Rafinesque's big-eared bat remain largely obscure. Prior to 1996, when the United States Fish and Wildlife Service (USFWS) discontinued the list of Candidate Species, Rafinesque's big-eared bat was listed as a Federal Category 2 Candidate species. Currently, Rafinesque's big-eared bat is recognized as a ''species of special concern'' across most of its range but receives no legal protection. Nonetheless, the USFWS and numerous state agencies remain concerned about this species. Further biological research and field study are needed to resolve the conservation status of this taxon. In response to the paucity of information regarding the status and distribution of Rafinesque's big-eared bat, a statewide survey of highway bridges used as roost sites was conducted.
History of gas production from Devonian shale in eastern Kentucky
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kemper, J.R.; Frankie, W.T.; Smath, R.A.
More than 10,500 wells that penetrate the Devonian shale have been compiled into a data base covering a 25-county area of eastern Kentucky. This area includes the Big Sandy gas field, the largest in the Appalachian basin, and marginal areas to the southwest, west, and northwest. The development of the Big Sandy gas field began in the 1920s in western Floyd County, Kentucky, and moved concentrically outward through 1970. Since 1971, the trend has been for infill and marginal drilling, and fewer companies have been involved. The resulting outline of the Big Sandy gas field covers most of Letcher, Knott, Floyd, Martin, and Pike Counties in Kentucky; it also extends into West Virginia. Outside the Big Sandy gas field, exploration for gas has been inconsistent, with a much higher ratio of dry holes. The results of this study, which was partially supported by the Gas Research Institute (GRI), indicate that certain geologic factors, such as fracture size and spacing, probably determine the distribution of commercial gas reserves as well as the outline of the Big Sandy gas field. Future Big Sandy infill and extension drilling will need to be based on an understanding of these factors.
The frequency of channel-forming discharges in a tributary of Upper Big Walnut Creek, Ohio
USDA-ARS?s Scientific Manuscript database
The goal of this study was to determine the frequency and magnitude of annual out-of-bank discharges in Sugar Creek, a tributary of the Upper Big Walnut Creek, in Ohio. To address this goal: a stream geomorphology study was conducted; measured discharge data at a downstream location were used to dev...
Communicating the Nature of Science through "The Big Bang Theory": Evidence from a Focus Group Study
ERIC Educational Resources Information Center
Li, Rashel; Orthia, Lindy A.
2016-01-01
In this paper, we discuss a little-studied means of communicating about or teaching the nature of science (NOS)--through fiction television. We report some results of focus group research which suggest that the American sitcom "The Big Bang Theory" (2007-present), whose main characters are mostly working scientists, has influenced…
Using Big Data to Predict Student Dropouts: Technology Affordances for Research
ERIC Educational Resources Information Center
Niemi, David; Gitin, Elena
2012-01-01
An underlying theme of this paper is that it can be easier and more efficient to conduct valid and effective research studies in online environments than in traditional classrooms. Taking advantage of the "big data" available in an online university, we conducted a study in which a massive online database was used to predict student…
ERIC Educational Resources Information Center
Chan, Christian S.; Rhodes, Jean E.; Howard, Waylon J.; Lowe, Sarah R.; Schwartz, Sarah E. O.; Herrera, Carla
2013-01-01
This study explores the pathways through which school-based mentoring relationships are associated with improvements in elementary and high school students' socio-emotional, academic, and behavioral outcomes. Participants in the study (N = 526) were part of a national evaluation of the Big Brothers Big Sisters school-based mentoring programs, all…
Regional analysis of big five personality factors and suicide rates in Russia.
Voracek, Martin
2013-08-01
Extending cross-national and intranational studies on possible aggregate-level associations between personality dimensions and suicide prevalence, this study examined the associations of the Big Five personality factors and suicide rates across 32 regions of the Russian Federation. Failing to replicate one key finding of similar geographic studies, namely, a correspondence of higher suicide rates with lower Agreeableness and Conscientiousness (i.e., higher Psychoticism) scores, higher suicide rates corresponded to higher Agreeableness scores. This effect was obtained with one available data source (regional-level Big Five ratings based on the National Character Survey), but not with another (based on the NEO-PI-R measure). All in all, regional suicide rates across Russia were dissociated from regional variation in personality dimensions.
[Chapter 1. From the study of risks to the translation of the ethical issues of Big Data in Health].
Béranger, J
2017-10-27
Big Data substantially disrupts the medical microcosm, to the point of challenging the paradigms of Hippocratic medicine as we have known them. A reflection on the risks associated with the ethical issues around personal health data is therefore required. Our study is based on numerous field surveys, interviews targeted at different actors, and a literature search on the subject. This work led to an innovative method for aligning the concepts of an ontology of risks with those of an ontology of the ethical requirements of Big Data in health. The aim is to provide meaning and recommendations for the production, implementation and use of personal data in order to better control them.
Big Data in Plant Science: Resources and Data Mining Tools for Plant Genomics and Proteomics.
Popescu, George V; Noutsos, Christos; Popescu, Sorina C
2016-01-01
In modern plant biology, progress is increasingly defined by the scientists' ability to gather and analyze data sets of high volume and complexity, otherwise known as "big data". Arguably, the largest increase in the volume of plant data sets over the last decade is a consequence of the application of the next-generation sequencing and mass-spectrometry technologies to the study of experimental model and crop plants. The increase in quantity and complexity of biological data brings challenges, mostly associated with data acquisition, processing, and sharing within the scientific community. Nonetheless, big data in plant science create unique opportunities in advancing our understanding of complex biological processes at a level of accuracy without precedence, and establish a base for the plant systems biology. In this chapter, we summarize the major drivers of big data in plant science and big data initiatives in life sciences with a focus on the scope and impact of iPlant, a representative cyberinfrastructure platform for plant science.
Scale factor duality for conformal cyclic cosmologies
NASA Astrophysics Data System (ADS)
Camara da Silva, U.; Alves Lima, A. L.; Sotkov, G. M.
2016-11-01
The scale factor duality is a symmetry of dilaton gravity which is known to lead to pre-big-bang cosmologies. A conformal time version of the scale factor duality (SFD) was recently implemented as a UV/IR symmetry between decelerated and accelerated phases of the post-big-bang evolution within Einstein gravity coupled to a scalar field. The problem investigated in the present paper concerns the employment of the conformal time SFD methods to the construction of pre-big-bang and cyclic extensions of these models. We demonstrate that each big-bang model gives rise to two qualitatively different pre-big-bang evolutions: a contraction/expansion SFD model and Penrose's Conformal Cyclic Cosmology (CCC). A few examples of SFD symmetric cyclic universes involving certain gauged Kähler sigma models minimally coupled to Einstein gravity are studied. We also describe the specific SFD features of the thermodynamics and the conditions for validity of the generalized second law in the case of Gauss-Bonnet (GB) extension of these selected CCC models.
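For orientation, the standard cosmic-time form of the scale factor duality in dilaton gravity (the pre-big-bang version) can be written as below; the conformal-time SFD used in the paper maps decelerated to accelerated post-big-bang phases within Einstein gravity coupled to a scalar and differs in detail, so this is only an illustrative reminder of the transformation.

% Standard dilaton-gravity scale factor duality (illustrative, not the paper's
% conformal-time version):
\[
  a(t) \;\longrightarrow\; \tilde{a}(t) = \frac{1}{a(t)}, \qquad
  \phi \;\longrightarrow\; \tilde{\phi} = \phi - 6\ln a ,
\]
\[
  \bar{\phi} \equiv \phi - 3\ln a \quad \text{is left invariant by the duality.}
\]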
Evolving optimised decision rules for intrusion detection using particle swarm paradigm
NASA Astrophysics Data System (ADS)
Sivatha Sindhu, Siva S.; Geetha, S.; Kannan, A.
2012-12-01
The aim of this article is to construct a practical intrusion detection system (IDS) that properly analyses the statistics of network traffic patterns and classifies them as normal or anomalous. The article sets out to show that the choice of effective network traffic features and a proficient machine-learning paradigm enhances the detection accuracy of an IDS. A rule-based approach with a family of six decision tree classifiers, namely Decision Stump, C4.5, Naive Bayes Tree, Random Forest, Random Tree and Representative Tree models, is introduced to detect anomalous network patterns. In particular, the proposed swarm optimisation-based approach selects the instances that compose the training set, and an optimised decision tree operates over this training set, producing classification rules with improved coverage, classification capability and generalisation ability. Experiments with the Knowledge Discovery and Data mining (KDD) data set, which contains information on traffic patterns during normal and intrusive behaviour, show that the proposed algorithm produces optimised decision rules and outperforms other machine-learning algorithms.
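A hedged sketch of the rule-induction step is given below: a scikit-learn decision tree is trained on a few invented traffic features and exported as if-then rules, with a plain random subsample standing in for the swarm-based instance selection proposed in the article.

# Decision-tree rule induction on synthetic traffic data (illustrative only).
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.integers(0, 1000, n),        # duration
    rng.integers(0, 100000, n),      # src_bytes
    rng.integers(0, 20, n),          # failed logins
])
y = ((X[:, 2] > 5) | (X[:, 1] > 80000)).astype(int)   # synthetic "intrusion" label

subset = rng.choice(n, size=200, replace=False)       # stand-in for swarm instance selection
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X[subset], y[subset])
print(export_text(tree, feature_names=["duration", "src_bytes", "failed_logins"]))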
NASA Astrophysics Data System (ADS)
Vasquez Padilla, Ricardo; Soo Too, Yen Chean; Benito, Regano; McNaughton, Robbie; Stein, Wes
2018-01-01
In this paper, optimisation of supercritical CO2 (S-CO2) Brayton cycles integrated with a solar receiver, which provides the heat input to the cycle, was performed. Four S-CO2 Brayton cycle configurations were analysed and optimum operating conditions were obtained by using a multi-objective thermodynamic optimisation. Four different sets, each including two objective parameters, were considered individually. The individual multi-objective optimisation was performed by using the Non-dominated Sorting Genetic Algorithm. The effect of reheating, solar receiver pressure drop and cycle parameters on the overall exergy and cycle thermal efficiency was analysed. The results showed that, for all configurations, the overall exergy efficiency of the solarised systems reached its maximum value between 700°C and 750°C, and the optimum value is adversely affected by the solar receiver pressure drop. In addition, the optimum cycle high pressure was in the range of 24.2-25.9 MPa, depending on the configuration and reheat condition.
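The non-dominated (Pareto) filtering at the heart of NSGA-type multi-objective optimisation can be illustrated with the short Python sketch below; the design points and the two objectives (cycle thermal efficiency and overall exergy efficiency) carry invented values and are not results from the paper.

# Pareto filtering of candidate designs with two objectives to maximise.
designs = [
    {"T_turbine_in": 650, "p_high": 24.0, "eta_th": 0.46, "eta_ex": 0.21},
    {"T_turbine_in": 700, "p_high": 25.0, "eta_th": 0.48, "eta_ex": 0.24},
    {"T_turbine_in": 750, "p_high": 25.5, "eta_th": 0.50, "eta_ex": 0.23},
    {"T_turbine_in": 800, "p_high": 26.0, "eta_th": 0.51, "eta_ex": 0.20},
]

def dominates(a, b):   # a dominates b if it is no worse in both objectives and better in one
    return (a["eta_th"] >= b["eta_th"] and a["eta_ex"] >= b["eta_ex"]
            and (a["eta_th"] > b["eta_th"] or a["eta_ex"] > b["eta_ex"]))

pareto = [d for d in designs if not any(dominates(o, d) for o in designs if o is not d)]
for d in pareto:
    print(d)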
NASA Astrophysics Data System (ADS)
Liu, Ming; Zhao, Lindu
2012-08-01
Demand for emergency resources is usually uncertain and varies quickly in an anti-bioterrorism system. Moreover, emergency resources allocated to the epidemic areas in the early rescue cycle affect the demand later on. In this article, an integrated and dynamic optimisation model with time-varying demand based on the epidemic diffusion rule is constructed. A heuristic algorithm coupled with the MATLAB mathematical programming solver is adopted to solve the optimisation model. The application of the optimisation model, together with a short sensitivity analysis of the key parameters in the time-varying demand forecast model, is then presented. The results show that both the model and the solution algorithm are useful in practice, and that both objectives, inventory level and emergency rescue cost, can be controlled effectively. The approach can thus provide guidelines for decision makers coping with emergency rescue problems under uncertain demand, and offers a useful reference for issues pertaining to bioterrorism.
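One way a time-varying demand can be tied to an epidemic diffusion rule is sketched below with a simple SIR model stepped by forward Euler; the transmission and recovery rates and the demand-per-case coupling are assumptions for illustration and do not reproduce the paper's model.

# Time-varying resource demand driven by a toy SIR epidemic model.
beta, gamma = 0.4, 0.1          # transmission and recovery rates (assumed)
kits_per_case = 2.0             # emergency resource demand per infected case (assumed)
S, I, R, N = 9990.0, 10.0, 0.0, 10000.0

demand = []
for day in range(30):
    new_inf = beta * S * I / N
    new_rec = gamma * I
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
    demand.append(kits_per_case * I)

for day in (0, 9, 19, 29):
    print(f"day {day + 1}: demand ≈ {demand[day]:.0f} kits")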
Galán, María Gimena; Llopart, Emilce Elina; Drago, Silvina Rosa
2018-05-01
The aims were to optimise the pearling process of red and white sorghum by assessing the effects of pearling time and grain moisture on endosperm yield and flour ash content, and to assess the nutrient and anti-nutrient losses produced by pearling different cultivars under the optimised conditions. Both variables significantly affected both responses. Losses of ashes (58%), proteins (9.5%), lipids (54.5%), Na (37%), Mg (48.5%) and phenolic compounds (43%) were similar between red and white hybrids. However, losses of P (30% vs. 51%), phytic acid (47% vs. 66%), Fe (22% vs. 55%), Zn (32% vs. 62%), Ca (60% vs. 66%), K (46% vs. 61%) and Cu (51% vs. 71%) were lower for red than for white sorghum due to different degrees of extraction and distribution of components in the grain. The optimised pearling conditions were extrapolated to other hybrids, indicating that these criteria could be applied at an industrial level to obtain refined flours with proper quality and good endosperm yields.