ATLAS software configuration and build tool optimisation
NASA Astrophysics Data System (ADS)
Rybkin, Grigory; Atlas Collaboration
2014-06-01
The ATLAS software code base comprises over 6 million lines of code organised in about 2000 packages. It makes use of some 100 external software packages, is developed by more than 400 developers and used by more than 2500 physicists from over 200 universities and laboratories on 6 continents. To meet the challenge of configuring and building this software, the Configuration Management Tool (CMT) is used. CMT expects each package to describe its build targets, build and environment setup parameters, and dependencies on other packages in a text file called requirements, and each project (group of packages) to describe its policies and dependencies on other projects in a text project file. Based on the effective set of configuration parameters read from the requirements files of dependent packages and from project files, CMT commands build the packages, generate the environment for their use, or query the packages. The main focus was on build time performance, which was optimised along several lines: reduction of the number of reads of requirements files, which are now read once per package by a CMT build command that generates cached requirements files for subsequent CMT build commands; introduction of finer-grained build parallelism at the package task level, i.e., dependent applications and libraries are compiled in parallel; code optimisation of the CMT commands used for builds; and introduction of package-level build parallelism, i.e., parallelising the build of independent packages. By default, CMT launches as many build commands in parallel as there are processors. The other focus was on optimisation of CMT commands in general, which made them approximately 2 times faster. CMT can also generate a cached requirements file for the environment setup command, which is especially useful for deployment on distributed file systems like AFS or CernVM-FS. The use of parallelism, caching and code optimisation reduced software build time and environment setup time significantly (by several times), increased the efficiency of multi-core computing resource utilisation, and considerably improved the software developer and user experience.
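As a rough illustration of the package-level parallelism described above, the sketch below builds mutually independent packages concurrently once all of their prerequisites have finished, launching one worker per processor by default. The package names, the `deps` graph and the `build` stub are hypothetical stand-ins for real CMT build commands.

```python
# Illustrative sketch only (not CMT): topologically ordered, package-level
# parallel builds with one worker per processor by default.
from concurrent.futures import ThreadPoolExecutor
import os

# Hypothetical dependency graph: package -> set of prerequisite packages.
deps = {
    "Core": set(),
    "Event": {"Core"},
    "Tracking": {"Core"},
    "Reco": {"Event", "Tracking"},
}

def build(pkg):
    print(f"building {pkg}")  # stand-in for invoking a CMT build command

def parallel_build(deps, workers=os.cpu_count()):
    done = set()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while len(done) < len(deps):
            # All packages whose prerequisites are built can run in parallel.
            ready = [p for p in deps if p not in done and deps[p] <= done]
            list(pool.map(build, ready))
            done.update(ready)

parallel_build(deps)
```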
Hybrid real-code ant colony optimisation for constrained mechanical design
NASA Astrophysics Data System (ADS)
Pholdee, Nantiwat; Bureerat, Sujin
2016-01-01
This paper proposes a hybrid meta-heuristic based on integrating a local search simplex downhill (SDH) method into the search procedure of real-code ant colony optimisation (ACOR). This hybridisation leads to five hybrid algorithms where a Monte Carlo technique, a Latin hypercube sampling technique (LHS) and a translational propagation Latin hypercube design (TPLHD) algorithm are used to generate an initial population. Also, two numerical schemes for selecting an initial simplex are investigated. The original ACOR and its hybrid versions along with a variety of established meta-heuristics are implemented to solve 17 constrained test problems where a fuzzy set theory penalty function technique is used to handle design constraints. The comparative results show that the hybrid algorithms are the top performers. Using the TPLHD technique gives better results than the other sampling techniques. The hybrid optimisers are a powerful design tool for constrained mechanical design problems.
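A minimal sketch of the hybridisation idea follows, assuming a simplified ACOR-style Gaussian sampling step (the published ACOR uses a ranked archive with weighted kernels) and SciPy's Nelder-Mead standing in for the paper's SDH local search; the sphere function is a placeholder for the penalised design objectives.

```python
# Minimal sketch: ACOR-style sampling around an archive of good solutions,
# periodically refined by a local Nelder-Mead (downhill simplex) search.
import numpy as np
from scipy.optimize import minimize

def sphere(x):  # stand-in for a penalised mechanical design objective
    return float(np.sum(x**2))

rng = np.random.default_rng(0)
dim, archive_size, iters = 5, 10, 50
archive = rng.uniform(-5, 5, (archive_size, dim))   # initial population
fitness = np.array([sphere(x) for x in archive])

for it in range(iters):
    order = np.argsort(fitness)
    archive, fitness = archive[order], fitness[order]
    # ACOR-like move: sample around a good archive member with a spread set
    # by the average distance to the rest of the archive.
    guide = archive[rng.integers(0, archive_size // 2)]
    sigma = np.mean(np.abs(archive - guide), axis=0) + 1e-12
    trial = guide + sigma * rng.standard_normal(dim)
    f = sphere(trial)
    if f < fitness[-1]:                 # replace the worst archive member
        archive[-1], fitness[-1] = trial, f
    if it % 10 == 0:  # hybridisation: local simplex refinement of the best
        res = minimize(sphere, archive[0], method="Nelder-Mead")
        if res.fun < fitness[0]:
            archive[0], fitness[0] = res.x, res.fun

print("best found:", fitness.min())
```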
A domain specific language for performance portable molecular dynamics algorithms
NASA Astrophysics Data System (ADS)
Saunders, William Robert; Grant, James; Müller, Eike Hermann
2018-03-01
Developers of Molecular Dynamics (MD) codes face significant challenges when adapting existing simulation packages to new hardware. In a continuously diversifying hardware landscape it becomes increasingly difficult for scientists to be experts both in their own domain (physics/chemistry/biology) and specialists in the low level parallelisation and optimisation of their codes. To address this challenge, we describe a "Separation of Concerns" approach for the development of parallel and optimised MD codes: the science specialist writes code at a high abstraction level in a domain specific language (DSL), which is then translated into efficient computer code by a scientific programmer. In a related context, an abstraction for the solution of partial differential equations with grid based methods has recently been implemented in the (Py)OP2 library. Inspired by this approach, we develop a Python code generation system for molecular dynamics simulations on different parallel architectures, including massively parallel distributed memory systems and GPUs. We demonstrate the efficiency of the auto-generated code by studying its performance and scalability on different hardware and compare it to other state-of-the-art simulation packages. With growing data volumes the extraction of physically meaningful information from the simulation becomes increasingly challenging and requires equally efficient implementations. A particular advantage of our approach is the easy expression of such analysis algorithms. We consider two popular methods for deducing the crystalline structure of a material from the local environment of each atom, show how they can be expressed in our abstraction and implement them in the code generation framework.
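The toy sketch below illustrates the separation-of-concerns idea only, not the actual framework: the science code is reduced to a per-pair kernel, while the execution strategy lives entirely in the framework loop (a naive O(N^2) loop here; generated parallel code for CPUs, GPUs or distributed memory in the real system). All names and parameters are invented for the example.

```python
# Toy "separation of concerns": the scientist supplies only a per-pair
# kernel; the framework decides how to execute it over all pairs.
import numpy as np

def pairwise_loop(positions, kernel, cutoff):
    n = len(positions)
    acc = np.zeros_like(positions)
    for i in range(n):
        for j in range(i + 1, n):
            r = positions[j] - positions[i]
            if np.dot(r, r) < cutoff**2:
                f = kernel(r)      # science code: force on j due to i
                acc[j] += f
                acc[i] -= f
    return acc

def lj_force(r):
    """Lennard-Jones force on particle j from i (epsilon = sigma = 1)."""
    r2 = np.dot(r, r)
    inv6 = 1.0 / r2**3
    return 24.0 * (2.0 * inv6**2 - inv6) / r2 * r

pos = np.random.default_rng(1).uniform(0, 5, (20, 3))
forces = pairwise_loop(pos, lj_force, cutoff=2.5)
print(forces.shape)
```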
Floating-to-Fixed-Point Conversion for Digital Signal Processors
NASA Astrophysics Data System (ADS)
Menard, Daniel; Chillet, Daniel; Sentieys, Olivier
2006-12-01
Digital signal processing applications are specified with floating-point data types but are usually implemented in embedded systems with fixed-point arithmetic to minimise cost and power consumption. Thus, methodologies which automatically establish the fixed-point specification are required to reduce the application time-to-market. In this paper, a new methodology for floating-to-fixed-point conversion is proposed for software implementations. The aim of our approach is to determine the fixed-point specification which minimises the code execution time for a given accuracy constraint. Compared to previous methodologies, our approach takes the DSP architecture into account to optimise the fixed-point formats, and the floating-to-fixed-point conversion process is coupled with the code generation process. The fixed-point data types and the positions of the scaling operations are optimised to reduce the code execution time. To evaluate the fixed-point computation accuracy, an analytical approach is used to reduce the optimisation time compared to existing methods based on simulation. The methodology stages are described and several experimental results are presented to underline the efficiency of this approach.
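To make the basic trade-off concrete, here is a small sketch, not taken from the paper, that quantises data to a signed 16-bit fixed-point grid and reports the signal-to-quantisation-noise ratio for a few fractional word lengths.

```python
# Hedged sketch of the basic float-to-fixed-point step: quantise to a Qm.n
# format and estimate the accuracy loss that a conversion methodology
# trades off against execution time.
import numpy as np

def to_fixed(x, n_frac, n_bits=16):
    """Round x to a signed fixed-point grid with n_frac fractional bits."""
    scale = 1 << n_frac
    lo, hi = -(1 << (n_bits - 1)), (1 << (n_bits - 1)) - 1
    return np.clip(np.round(x * scale), lo, hi).astype(np.int64) / scale

rng = np.random.default_rng(0)
signal = rng.normal(0, 0.3, 10_000)

for n_frac in (7, 11, 15):  # more fractional bits -> higher accuracy
    err = signal - to_fixed(signal, n_frac)
    sqnr_db = 10 * np.log10(np.mean(signal**2) / np.mean(err**2))
    print(f"Q{15 - n_frac}.{n_frac}: SQNR = {sqnr_db:.1f} dB")
```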
A review of predictive coding algorithms.
Spratling, M W
2017-03-01
Predictive coding is a leading theory of how the brain performs probabilistic inference. However, there are a number of distinct algorithms which are described by the term "predictive coding". This article provides a concise review of these different predictive coding algorithms, highlighting their similarities and differences. Five algorithms are covered: linear predictive coding which has a long and influential history in the signal processing literature; the first neuroscience-related application of predictive coding to explaining the function of the retina; and three versions of predictive coding that have been proposed to model cortical function. While all these algorithms aim to fit a generative model to sensory data, they differ in the type of generative model they employ, in the process used to optimise the fit between the model and sensory data, and in the way that they are related to neurobiology.
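As a concrete instance of the first algorithm in the list, the sketch below fits an order-p linear predictor by least squares and encodes the signal as a prediction residual; it is a generic LPC illustration rather than an algorithm taken from the review.

```python
# Linear predictive coding in miniature: predict each sample from the p
# previous ones and keep only the (much smaller) prediction residual.
import numpy as np

def lpc_coefficients(x, p):
    """Least-squares fit of an order-p linear predictor."""
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    y = x[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

rng = np.random.default_rng(0)
t = np.arange(2000)
x = np.sin(0.05 * t) + 0.01 * rng.standard_normal(t.size)

p = 4
a = lpc_coefficients(x, p)
pred = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)]) @ a
residual = x[p:] - pred
print("residual power / signal power:",
      float(np.mean(residual**2) / np.mean(x**2)))
```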
Huffman coding in advanced audio coding standard
NASA Astrophysics Data System (ADS)
Brzuchalski, Grzegorz
2012-05-01
This article presents several hardware architectures for the Advanced Audio Coding (AAC) Huffman noiseless encoder, their optimisations and a working implementation. Much attention has been paid to optimising the demand for hardware resources, especially memory size. The aim of the design was to produce as short a binary stream as possible in this standard. The Huffman encoder, together with the whole audio-video system, has been implemented in FPGA devices.
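For readers unfamiliar with the underlying step, this is a generic Huffman code construction using a heap. AAC itself selects among predefined Huffman codebooks rather than building trees on the fly, so the sketch is illustrative only.

```python
# Generic Huffman code construction: repeatedly merge the two lightest
# subtrees, prefixing "0"/"1" to the codes of their symbols.
import heapq
from collections import Counter

def huffman_code(symbols):
    """Return {symbol: bitstring} for an iterable of symbols."""
    freq = Counter(symbols)
    # Heap entries: (weight, tie-breaker, {symbol: code-so-far}).
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    ties = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (w1 + w2, ties, merged))
        ties += 1
    return heap[0][2]

data = "aaaabbbccd"
code = huffman_code(data)
bits = "".join(code[s] for s in data)
print(code, "->", len(bits), "bits vs", 2 * len(data), "for fixed 2-bit codes")
```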
Optimisation of 12 MeV electron beam simulation using variance reduction technique
NASA Astrophysics Data System (ADS)
Jayamani, J.; Termizi, N. A. S. Mohd; Kamarulzaman, F. N. Mohd; Aziz, M. Z. Abdul
2017-05-01
Monte Carlo (MC) simulation for electron beam radiotherapy requires long computation times. An algorithm class called variance reduction techniques (VRT) was implemented in MC to speed up this process. This work focused on optimisation of VRT parameters, namely electron range rejection and particle history count. The EGSnrc MC source code was used to simulate (BEAMnrc code) and validate (DOSXYZnrc code) the Siemens Primus linear accelerator model with non-VRT parameters. The validated MC model simulation was repeated applying the VRT parameter (electron range rejection), controlled by a global electron cut-off energy of 1, 2 and 5 MeV, using 20 × 10^7 particle histories. The 5 MeV range rejection generated the fastest MC simulation, with a 50% reduction in computation time compared to the non-VRT simulation. Thus, 5 MeV electron range rejection was utilised in the particle history analysis, which ranged from 7.5 × 10^7 to 20 × 10^7. In this study, with a 5 MeV electron cut-off and 10 × 10^7 particle histories, the simulation was four times faster than the non-VRT calculation with 1% deviation. Proper understanding and use of VRT can significantly reduce MC electron beam calculation time while preserving accuracy.
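A toy transport loop (in no way EGSnrc) can show why range rejection pays off: histories whose energy falls below the cut-off are absorbed on the spot, so the deposited energy is unchanged while the number of tracked steps drops. All physics and numbers below are placeholders.

```python
# Toy illustration of electron range rejection: terminate histories below a
# global cut-off and deposit their remaining energy locally, saving the time
# spent tracking particles that could not leave the local region anyway.
import numpy as np

rng = np.random.default_rng(0)

def transport(n_histories, e0=12.0, cutoff=0.0, step_loss=0.5):
    deposited, steps = 0.0, 0
    for _ in range(n_histories):
        energy = e0
        while energy > 0:
            steps += 1
            loss = min(energy, step_loss * rng.uniform(0.5, 1.5))
            deposited += loss
            energy -= loss
            if energy < cutoff:        # range rejection: absorb on the spot
                deposited += energy
                energy = 0.0
    return deposited, steps

for cutoff in (0.0, 1.0, 2.0, 5.0):
    dose, steps = transport(2000, cutoff=cutoff)
    print(f"cut-off {cutoff} MeV: {steps} steps, {dose:.0f} MeV deposited")
```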
Jolley, Rachel J; Jetté, Nathalie; Sawka, Keri Jo; Diep, Lucy; Goliath, Jade; Roberts, Derek J; Yipp, Bryan G; Doig, Christopher J
2015-01-01
Objective: Administrative health data are important for health services and outcomes research. We optimised and validated in intensive care unit (ICU) patients an International Classification of Disease (ICD)-coded case definition for sepsis, and compared this with an existing definition. We also assessed the definition's performance in non-ICU (ward) patients. Setting and participants: All adults (aged ≥18 years) admitted to a multisystem ICU with general medicosurgical ICU care from one of three tertiary care centres in the Calgary region in Alberta, Canada, between 1 January 2009 and 31 December 2012 were included. Research design: Patient medical records were randomly selected and linked to the discharge abstract database. In ICU patients, we validated the Canadian Institute for Health Information (CIHI) ICD-10-CA (Canadian Revision)-coded definition for sepsis and severe sepsis against a reference standard medical chart review, and optimised this algorithm through examination of other conditions apparent in sepsis. Measures: Sensitivity (Sn), specificity (Sp), positive predictive value (PPV) and negative predictive value (NPV) were calculated. Results: Sepsis was present in 604 of 1001 ICU patients (60.4%). The CIHI ICD-10-CA-coded definition for sepsis had Sn (46.4%), Sp (98.7%), PPV (98.2%) and NPV (54.7%); and for severe sepsis had Sn (47.2%), Sp (97.5%), PPV (95.3%) and NPV (63.2%). The optimised ICD-coded algorithm for sepsis increased Sn by 25.5% and NPV by 11.9% with slightly lowered Sp (85.4%) and PPV (88.2%). For severe sepsis both Sn (65.1%) and NPV (70.1%) increased, while Sp (88.2%) and PPV (85.6%) decreased slightly. Conclusions: This study demonstrates that sepsis is highly undercoded in administrative data, thus under-ascertaining the true incidence of sepsis. The optimised ICD-coded definition has a higher validity with higher Sn and should be preferentially considered if used for surveillance purposes. PMID:26700284
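The four validity measures follow directly from a 2x2 table against the chart-review reference standard. The sketch below computes them; the cell counts are not reported in the abstract but were back-calculated to be roughly consistent with the ICU sepsis figures above, so treat them as illustrative.

```python
# Validity measures from a 2x2 table of ICD-coded cases vs chart review.
def validity(tp, fp, fn, tn):
    return {
        "Sn":  tp / (tp + fn),   # sensitivity: coded among true sepsis
        "Sp":  tn / (tn + fp),   # specificity: uncoded among non-sepsis
        "PPV": tp / (tp + fp),   # precision of a positive code
        "NPV": tn / (tn + fn),   # reliability of an absent code
    }

# Illustrative counts for 1001 ICU patients, 604 with sepsis on chart review.
print({k: round(v, 3) for k, v in validity(tp=280, fp=5, fn=324, tn=392).items()})
```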
Synthesis of Arbitrary Quantum Circuits to Topological Assembly: Systematic, Online and Compact.
Paler, Alexandru; Fowler, Austin G; Wille, Robert
2017-09-05
It is challenging to transform an arbitrary quantum circuit into a form protected by surface code quantum error correcting codes (a variant of topological quantum error correction), especially if the goal is to minimise overhead. One of the issues is the efficient placement of magic state distillation sub-circuits, so-called distillation boxes, in the space-time volume that abstracts the computation's required resources. This work presents a general, systematic, online method for the synthesis of such circuits. Distillation box placement is controlled by so-called schedulers. The work introduces a greedy scheduler generating compact box placements. The implemented software, whose source code is available at www.github.com/alexandrupaler/tqec, is used to illustrate and discuss synthesis examples. Synthesis and optimisation improvements are proposed.
Fabrication of Organic Radar Absorbing Materials: A Report on the TIF Project
2005-05-01
thickness, permittivity and permeability. The ability to measure the permittivity and permeability is an essential requirement for designing an optimised... absorber. And good optimisation codes are required in order to achieve the best possible absorber designs. In this report, the results from a... through measurement of their conductivity and permittivity at microwave frequencies. Methods were then developed for optimising the design of
NASA Astrophysics Data System (ADS)
Dittmar, N.; Haberstroh, Ch.; Hesse, U.; Krzyzowski, M.
2016-10-01
In part one of this publication, experimental results for a single-channel transfer line used at liquid helium (LHe) decant stations are presented. The transfer of LHe into mobile dewars is an unavoidable process since the places of storage and usage are generally located apart from each other. The experimental results have shown that considerable amounts of LHe evaporate due to heat leak and pressure drop. The cold helium gas thus generated has to be collected and reliquefied, demanding a large amount of electrical energy. Although this transfer process is common in cryogenic laboratories, no existing code could be found to model it. Therefore, a thermohydraulic model has been developed to describe the LHe flow at operating conditions using published heat transfer and pressure drop correlations. This paper covers the basic equations used to calculate heat transfer and pressure drop, as well as the validation of the thermohydraulic code and its application within the optimisation process. The final transfer line design features reduced heat leak and pressure drop values based on a combined measurement and modelling campaign in the range of 0.112 < p_in < 0.148 MPa, 190 < G < 450 kg/(m^2 s), and 0.04 < x_out < 0.12.
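For flavour, here is a heavily simplified sketch of the stepwise pressure-drop integration such a one-dimensional thermohydraulic code performs. The Blasius friction factor, the constant LHe-like properties and the geometry are placeholders, not the paper's validated correlations.

```python
# Rough sketch of a 1D pressure-drop integration along a transfer line.
# In a real model the fluid properties would be updated from the local
# state (pressure, vapour quality) at every step.
import math

def pressure_drop(G=300.0, d=6e-3, L=5.0, rho=125.0, mu=3.3e-6, n_steps=100):
    """G mass flux [kg/(m^2 s)], d diameter [m], L length [m]."""
    dz = L / n_steps
    p_drop = 0.0
    for _ in range(n_steps):
        re = G * d / mu                          # Reynolds number
        f = 0.3164 * re**-0.25                   # Blasius friction factor
        p_drop += f * dz / d * G**2 / (2 * rho)  # Darcy-Weisbach step
    return p_drop

print(f"{pressure_drop():.0f} Pa over 5 m")
```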
Weight optimization of plane truss using genetic algorithm
NASA Astrophysics Data System (ADS)
Neeraja, D.; Kamireddy, Thejesh; Santosh Kumar, Potnuru; Simha Reddy, Vijay
2017-11-01
Optimisation of structures on the basis of weight has many practical benefits in every engineering field. A structure's efficiency is closely related to its weight, and hence weight optimisation gains prime importance. In the field of civil engineering, weight-optimised structural elements are economical and easier to transport to the site. In this study, a genetic optimisation algorithm for weight optimisation of steel trusses, considering shape, size and topology aspects, has been developed in MATLAB. Material strength and buckling stability have been adopted from the IS 800-2007 code for construction steel. The constraints considered in the present study are fabrication, basic nodes, displacements, and compatibility. A genetic algorithm is a natural-selection search technique intended to combine good solutions to a problem over many generations to improve the results. All solutions are generated randomly and represented individually by a binary string analogous to natural chromosomes. The outcome of the study is a MATLAB program which can optimise a steel truss and display the optimised topology along with element shapes, deflections, and stress results.
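The sketch below shows the size-optimisation ingredient in miniature: a binary-coded GA selects member areas from a discrete catalogue to minimise weight, with a simple stress penalty standing in for the IS 800-2007 checks. Member forces are frozen for brevity (a real code would re-analyse the truss for each candidate) and all numbers are hypothetical.

```python
# Binary-coded GA for truss member sizing with a stress penalty.
import random

random.seed(0)
AREAS = [200, 400, 600, 800, 1000, 1200, 1400, 1600]  # mm^2 catalogue
FORCES = [50e3, 30e3, 80e3, 20e3, 60e3]               # N, per member (fixed)
LENGTHS = [2.0, 2.0, 2.8, 2.0, 2.8]                   # m
F_Y, DENSITY = 250.0, 7850e-9                         # MPa, kg/mm^3 steel

def decode(bits):  # 3 bits per member -> catalogue index
    return [AREAS[int(bits[3*i:3*i+3], 2)] for i in range(len(FORCES))]

def fitness(bits):
    areas = decode(bits)
    weight = sum(DENSITY * a * l * 1e3 for a, l in zip(areas, LENGTHS))
    # Penalise stress above yield (code checks would replace this).
    penalty = sum(max(0.0, f / a - F_Y) for f, a in zip(FORCES, areas))
    return weight + 10.0 * penalty

def ga(pop_size=30, generations=60, n_bits=15):
    pop = ["".join(random.choice("01") for _ in range(n_bits))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_bits)          # one-point crossover
            child = list(a[:cut] + b[cut:])
            i = random.randrange(n_bits)               # bit-flip mutation
            child[i] = "1" if child[i] == "0" else "0"
            children.append("".join(child))
        pop = parents + children
    return min(pop, key=fitness)

best = ga()
print(decode(best), "fitness (weight + penalty):", round(fitness(best), 1))
```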
Optimised to Fail: Card Readers for Online Banking
NASA Astrophysics Data System (ADS)
Drimer, Saar; Murdoch, Steven J.; Anderson, Ross
The Chip Authentication Programme (CAP) has been introduced by banks in Europe to deal with the soaring losses due to online banking fraud. A handheld reader is used together with the customer’s debit card to generate one-time codes for both login and transaction authentication. The CAP protocol is not public, and was rolled out without any public scrutiny. We reverse engineered the UK variant of card readers and smart cards and here provide the first public description of the protocol. We found numerous weaknesses that are due to design errors such as reusing authentication tokens, overloading data semantics, and failing to ensure freshness of responses. The overall strategic error was excessive optimisation. There are also policy implications. The move from signature to PIN for authorising point-of-sale transactions shifted liability from banks to customers; CAP introduces the same problem for online banking. It may also expose customers to physical harm.
Genetically improved BarraCUDA.
Langdon, W B; Lam, Brian Yee Hong
2017-01-01
BarraCUDA is an open source C program which uses the BWA algorithm in parallel with nVidia CUDA to align short next generation DNA sequences against a reference genome. Recently its source code was optimised using "Genetic Improvement". The genetically improved (GI) code is up to three times faster on short paired end reads from The 1000 Genomes Project and 60% more accurate on a short BioPlanet.com GCAT alignment benchmark. GPGPU BarraCUDA running on a single K80 Tesla GPU can align short paired end nextGen sequences up to ten times faster than bwa on a 12-core server. The speed-up was such that the GI version was adopted and has been regularly downloaded from SourceForge for more than 12 months.
Improving Vector Evaluated Particle Swarm Optimisation by Incorporating Nondominated Solutions
Lim, Kian Sheng; Ibrahim, Zuwairie; Buyamin, Salinda; Ahmad, Anita; Naim, Faradila; Ghazali, Kamarul Hawari; Mokhtar, Norrima
2013-01-01
The Vector Evaluated Particle Swarm Optimisation algorithm is widely used to solve multiobjective optimisation problems. This algorithm optimises one objective using a swarm of particles whose movements are guided by the best solution found by another swarm. However, the best solution of a swarm is only updated when a newly generated solution has better fitness than the best solution at the objective function optimised by that swarm, yielding poor solutions for multiobjective optimisation problems. Thus, an improved Vector Evaluated Particle Swarm Optimisation algorithm is introduced by incorporating nondominated solutions as the guidance for a swarm rather than using the best solution from another swarm. In this paper, the performance of the improved Vector Evaluated Particle Swarm Optimisation algorithm is investigated using performance measures such as the number of nondominated solutions found, the generational distance, the spread, and the hypervolume. The results suggest that the improved Vector Evaluated Particle Swarm Optimisation algorithm has impressive performance compared with the conventional Vector Evaluated Particle Swarm Optimisation algorithm. PMID:23737718
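A schematic sketch of the described modification follows, assuming a simple external archive and the classic two-objective Schaffer test problem; the coefficients and archive handling are generic PSO choices rather than the paper's exact settings.

```python
# Two swarms, each optimising one objective; the social guide is drawn from
# a shared nondominated archive instead of the other swarm's best particle.
import numpy as np

rng = np.random.default_rng(0)
f = [lambda x: float(x[0] ** 2),           # objective optimised by swarm 0
     lambda x: float((x[0] - 2) ** 2)]     # objective optimised by swarm 1

def dominates(a, b):
    return all(ai <= bi for ai, bi in zip(a, b)) and a != b

n, iters = 20, 100
pos = [rng.uniform(-4, 6, (n, 1)) for _ in range(2)]
vel = [np.zeros((n, 1)) for _ in range(2)]
pbest = [p.copy() for p in pos]
archive = []  # list of (objective tuple, position)

for _ in range(iters):
    for s in range(2):
        for i in range(n):
            objs = (f[0](pos[s][i]), f[1](pos[s][i]))
            if f[s](pos[s][i]) < f[s](pbest[s][i]):
                pbest[s][i] = pos[s][i].copy()
            if not any(dominates(a, objs) for a, _ in archive):
                archive = [(a, x) for a, x in archive if not dominates(objs, a)]
                archive.append((objs, pos[s][i].copy()))
        guide = archive[rng.integers(len(archive))][1]  # nondominated guide
        r1, r2 = rng.random((n, 1)), rng.random((n, 1))
        vel[s] = (0.7 * vel[s] + 1.5 * r1 * (pbest[s] - pos[s])
                  + 1.5 * r2 * (guide - pos[s]))
        pos[s] = pos[s] + vel[s]

print(f"{len(archive)} nondominated solutions found")
```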
Simulation studies promote technological development of radiofrequency phased array hyperthermia.
Wust, P; Seebass, M; Nadobny, J; Deuflhard, P; Mönich, G; Felix, R
1996-01-01
A treatment planning program package for radiofrequency hyperthermia has been developed. It consists of software modules for processing three-dimensional computerised tomography (CT) data sets, manual segmentation, generation of tetrahedral grids, numerical calculation and optimisation of three-dimensional E-field distributions using a volume surface integral equation algorithm as well as temperature distributions using an adaptive multilevel finite-element code, and graphical tools for simultaneous representation of CT data and simulation results. Heat treatments are limited by hot spots in healthy tissues caused by E-field maxima at electrical interfaces (bone/muscle). In order to reduce or avoid hot spots, suitable objective functions are derived from power deposition patterns and temperature distributions, and are utilised to optimise antenna parameters (phases, amplitudes). The simulation and optimisation tools have been applied to estimate the improvements that could be reached by upgrades of the clinically used SIGMA-60 applicator (consisting of a single ring of four antenna pairs). The investigated upgrades are an increased number of antennas and channels (a triple-ring of 3 x 8 antennas) and variation of antenna inclination. A significant improvement of index temperatures (1-2 degrees C) is achieved by upgrading the single ring to a triple ring with free phase selection for every antenna or antenna pair. Antenna amplitudes and inclinations proved to be less important parameters.
Canbay, Ferhat; Levent, Vecdi Emre; Serbes, Gorkem; Ugurdag, H. Fatih; Goren, Sezer
2016-01-01
The authors aimed to develop an application for producing different architectures to implement the dual tree complex wavelet transform (DTCWT), which has a near shift-invariance property. To obtain a low-cost and portable solution for implementing the DTCWT in multi-channel real-time applications, various embedded-system approaches were realised. For comparison, the DTCWT was implemented in the C language on a personal computer and on a PIC microcontroller. However, the former approach cannot achieve portability and the latter cannot achieve the desired speed. Hence, implementation of the DTCWT on a reconfigurable platform such as a field programmable gate array, which provides portable, low-cost, low-power, and high-performance computing, is considered the most feasible solution. At first, they used the System Generator DSP design tool of Xilinx for algorithm design. However, designs implemented using such tools are not optimised in terms of area and power. To overcome the drawbacks mentioned above, they implemented the DTCWT algorithm in the Verilog Hardware Description Language, which has its own difficulties. To overcome these difficulties and simplify the usage of the proposed algorithms and the adaptation procedures, a code generator program that can produce different architectures is proposed. PMID:27733925
Employing multi-GPU power for molecular dynamics simulation: an extension of GALAMOST
NASA Astrophysics Data System (ADS)
Zhu, You-Liang; Pan, Deng; Li, Zhan-Wei; Liu, Hong; Qian, Hu-Jun; Zhao, Yang; Lu, Zhong-Yuan; Sun, Zhao-Yan
2018-04-01
We describe the algorithm for employing multi-GPU power on the basis of Message Passing Interface (MPI) domain decomposition in a molecular dynamics code, GALAMOST, which is designed for the coarse-grained simulation of soft matter. The multi-GPU version is developed on the basis of our previous single-GPU version. In multi-GPU runs, each GPU takes charge of one domain and runs the single-GPU code path. The communication between neighbouring domains follows an algorithm similar to that of the CPU-based code LAMMPS, but is optimised specifically for GPUs. We employ a memory-saving design which can enlarge the maximum system size on the same device. An optimisation algorithm is employed to prolong the update period of the neighbour list. We demonstrate good performance of multi-GPU runs on a workstation for simulations of a Lennard-Jones liquid, a dissipative particle dynamics liquid, a polymer-nanoparticle composite, and two-patch particles. Good scaling across many cluster nodes is presented for the two-patch particles.
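The neighbour-domain communication can be pictured with a bare-bones mpi4py sketch of a one-dimensional decomposition with periodic halo exchange. This shows only the MPI pattern, not GALAMOST's implementation, which additionally moves particle data between GPU and host.

```python
# 1D domain decomposition: every rank owns one slab and exchanges a halo
# layer with its two periodic neighbours each step.
# Run with e.g.: mpirun -n 4 python halo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size   # periodic neighbours

local = np.full(10, float(rank))        # interior data owned by this rank
halo_l = np.empty(1)                    # ghost cell from the left slab
halo_r = np.empty(1)                    # ghost cell from the right slab

# Post non-blocking receives, send the boundary layers, then wait.
reqs = [comm.Irecv(halo_l, source=left, tag=1),
        comm.Irecv(halo_r, source=right, tag=0),
        comm.Isend(local[:1], dest=left, tag=0),
        comm.Isend(local[-1:], dest=right, tag=1)]
MPI.Request.Waitall(reqs)

print(f"rank {rank}: left halo {halo_l[0]:.0f}, right halo {halo_r[0]:.0f}")
```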
Optimisation of the hybrid renewable energy system by HOMER, PSO and CPSO for the study area
NASA Astrophysics Data System (ADS)
Khare, Vikas; Nema, Savita; Baredar, Prashant
2017-04-01
This study is based on simulation and optimisation of the renewable energy system of the police control room at Sagar in central India. To analyse this hybrid system, the meteorological data of solar insolation and hourly wind speeds at Sagar (longitude 78°45′ and latitude 23°50′) have been considered. The pattern of load consumption is studied and suitably modelled for optimisation of the hybrid energy system using the HOMER software. The results are compared with those of the particle swarm optimisation and chaotic particle swarm optimisation algorithms. The use of these two algorithms to optimise the hybrid system leads to higher quality results with faster convergence. Based on the optimisation results, it has been found that replacing conventional energy sources by the solar-wind hybrid renewable energy system is a feasible solution for the distribution of electric power as a stand-alone application at the police control room. This system is more environmentally friendly than a conventional diesel generator, and fuel costs are reduced by approximately 70-80% compared with the conventional diesel generator.
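As a small illustration of the CPSO ingredient, the sketch below drives a standard PSO's inertia weight with a chaotic logistic map, one common way such variants are constructed; the quadratic cost is a stand-in for the hybrid system's cost-of-energy model and all coefficients are illustrative.

```python
# PSO whose inertia weight follows a chaotic logistic map ("chaotic PSO").
import numpy as np

def cost(x):  # placeholder for a levelised cost-of-energy model
    return float(np.sum((x - 3.0) ** 2))

rng = np.random.default_rng(1)
n, dim, iters = 25, 4, 200
x = rng.uniform(0, 10, (n, dim))
v = np.zeros((n, dim))
pbest, pcost = x.copy(), np.array([cost(p) for p in x])
z = 0.7  # chaotic logistic-map state

for _ in range(iters):
    z = 4.0 * z * (1.0 - z)                  # logistic map in (0, 1)
    w = 0.4 + 0.5 * z                        # chaotic inertia weight
    g = pbest[np.argmin(pcost)]
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (g - x)
    x = np.clip(x + v, 0, 10)
    c = np.array([cost(p) for p in x])
    better = c < pcost
    pbest[better], pcost[better] = x[better], c[better]

print("best cost:", pcost.min())
```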
The path toward HEP High Performance Computing
NASA Astrophysics Data System (ADS)
Apostolakis, John; Brun, René; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro
2014-06-01
High Energy Physics (HEP) code has been known for making poor use of high performance computing architectures. Efforts in optimising HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves between 10% and 50% of peak performance. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a "High Performance" implementation. With the LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best usage of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the ROOT and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a high-performance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelising at the event level, or with a much larger effort at the track level. Apart from the shareable data structures, this typically implies a multiplication factor in terms of memory consumption compared to the single-threaded version, together with sub-optimal handling of event processing tails. Besides this, the low-level instruction pipelining of modern processors cannot be used efficiently to speed up the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine-grained parallel approach. The talk will review the current optimisation activities within the SFT group with a particular emphasis on the development perspectives towards a simulation framework able to profit best from the recent technology evolution in computing.
VLSI Technology for Cognitive Radio
NASA Astrophysics Data System (ADS)
VIJAYALAKSHMI, B.; SIDDAIAH, P.
2017-08-01
One of the most challenging tasks in cognitive radio is achieving an efficient spectrum sensing scheme to overcome the spectrum scarcity problem. The most popular and widely used spectrum sensing technique is the energy detection scheme, as it is very simple and does not require any prior information about the signal. We propose one such approach: an optimised spectrum sensing scheme with a reduced filter structure. The optimisation is done in terms of the area and power performance of the design. The VLSI structure of the optimised flexible spectrum sensing scheme is simulated in Verilog using the Xilinx ISE software. Our method achieves a 13% reduction in area and a 66% reduction in power consumption in comparison to the flexible spectrum sensing scheme. All the results are tabulated and comparisons are made. Our model thus opens up a new scheme for optimised and effective spectrum sensing.
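The energy detection principle underlying the scheme fits in a few lines: average the squared magnitude of N samples and compare with a threshold set from the noise floor, with no knowledge of the signal required. The threshold rule below is illustrative; in practice it is derived from a target false-alarm probability.

```python
# Energy detection for spectrum sensing: decide "occupied" when the mean
# sample energy exceeds a threshold set just above the noise floor.
import numpy as np

rng = np.random.default_rng(0)
N, noise_var = 1024, 1.0

def detect(samples, threshold):
    energy = np.mean(np.abs(samples) ** 2)
    return energy > threshold

# Illustrative threshold: a few standard deviations above the expected
# noise-only energy.
threshold = noise_var * (1 + 3 / np.sqrt(N))

noise = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) * np.sqrt(noise_var / 2)
signal = 0.5 * np.exp(2j * np.pi * 0.1 * np.arange(N))   # primary user tone

print("noise only   ->", detect(noise, threshold))          # expected False
print("signal+noise ->", detect(noise + signal, threshold)) # expected True
```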
Le, Van So; Do, Zoe Phuc-Hien; Le, Minh Khoi; Le, Vicki; Le, Natalie Nha-Truc
2014-06-10
Methods of increasing the performance of radionuclide generators used in nuclear medicine radiotherapy and SPECT/PET imaging were developed and detailed, with the 99Mo/99mTc and 68Ge/68Ga radionuclide generators as cases. Optimisation methods for the daughter nuclide build-up versus stand-by time and/or specific activity, using mean progress functions, were developed to increase the performance of radionuclide generators. As a result of this optimisation, the separation of the daughter nuclide from its parent should be performed at a defined optimal time to avoid deterioration in the specific activity of the daughter nuclide and wasted stand-by time of the generator, while the daughter nuclide yield is maintained at a reasonably high level. A new characteristic parameter of the formation-decay kinetics of the parent/daughter nuclide system was found and effectively used in the practice of generator production and utilisation. A method of "early elution schedule" was also developed for increasing the daughter nuclide production yield and specific radioactivity, thus saving the cost of the generator and improving the quality of the daughter radionuclide solution. These newly developed optimisation methods, in combination with a recently developed integrated elution-purification-concentration system for radionuclide generators, are the most suitable way to operate the generator effectively on the basis of economical use and improvement of the purposely suitable quality and specific activity of the produced daughter radionuclides. All these features benefit the economical use of the generator, the improved quality of labelling/scans, and the lowered cost of nuclear medicine procedures. Besides, a new method of quality control (QC) protocol set-up for post-delivery testing of radionuclidic purity has been developed, based on the relationship between the gamma-ray spectrometric detection limit, the required limit of impure radionuclide activity, and its measurement certainty, with respect to optimising the decay/measurement time and the product sample activity used for QC. The optimisation ensures certainty of measurement of the specific impure radionuclide and avoids wasting useful amounts of the valuable purified/concentrated daughter nuclide product. This process is important for the spectrometric measurement of very low activities of impure radionuclide contamination in radioisotope products of much higher activity used in medical imaging and targeted radiotherapy.
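A worked sketch of the build-up trade-off for the 99Mo/99mTc case: the daughter activity follows the Bateman equation, whose maximum fixes the latest sensible elution time, while earlier elution trades yield for specific activity and stand-by time. The half-lives and the approximately 87.5% branching fraction are commonly quoted values; activities are normalised.

```python
# 99mTc build-up after a complete elution, from the Bateman equation.
import numpy as np

T12_MO, T12_TC, BRANCH = 65.94, 6.01, 0.875   # hours; 99Mo -> 99mTc fraction
l1, l2 = np.log(2) / T12_MO, np.log(2) / T12_TC

def tc99m_activity(t, a_mo0=1.0):
    """99mTc activity at t hours after elution, per unit parent activity."""
    return BRANCH * a_mo0 * l2 / (l2 - l1) * (np.exp(-l1 * t) - np.exp(-l2 * t))

t_max = np.log(l2 / l1) / (l2 - l1)           # time of maximum build-up
print(f"build-up maximum at {t_max:.1f} h, "
      f"yield {tc99m_activity(t_max):.2f} of parent activity")
for t in (6, 12, 17, 23):                     # earlier elution: lower yield
    print(f"eluting at {t:2d} h recovers {tc99m_activity(t):.2f}")
```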
Automation of route identification and optimisation based on data-mining and chemical intuition.
Lapkin, A A; Heer, P K; Jacob, P-M; Hutchby, M; Cunningham, W; Bull, S D; Davidson, M G
2017-09-21
Data-mining of Reaxys and network analysis of the combined literature and in-house reactions set were used to generate multiple possible reaction routes to convert a bio-waste feedstock, limonene, into a pharmaceutical API, paracetamol. The network analysis of data provides a rich knowledge-base for generation of the initial reaction screening and development programme. Based on the literature and the in-house data, an overall flowsheet for the conversion of limonene to paracetamol was proposed. Each individual reaction-separation step in the sequence was simulated as a combination of the continuous flow and batch steps. The linear model generation methodology allowed us to identify the reaction steps requiring further chemical optimisation. The generated model can be used for global optimisation and generation of environmental and other performance indicators, such as cost indicators. However, the identified further challenge is to automate model generation to evolve optimal multi-step chemical routes and optimal process configurations.
NASA Astrophysics Data System (ADS)
van Haveren, Rens; Ogryczak, Włodzimierz; Verduijn, Gerda M.; Keijzer, Marleen; Heijmen, Ben J. M.; Breedveld, Sebastiaan
2017-06-01
Previously, we have proposed Erasmus-iCycle, an algorithm for fully automated IMRT plan generation based on prioritised (lexicographic) multi-objective optimisation with the 2-phase ɛ-constraint (2pɛc) method. For each patient, the output of Erasmus-iCycle is a clinically favourable, Pareto optimal plan. The 2pɛc method uses a list of objective functions that are consecutively optimised, following a strict, user-defined prioritisation. The novel lexicographic reference point method (LRPM) is capable of solving multi-objective problems in a single optimisation, using a fuzzy prioritisation of the objectives. Trade-offs are made globally, aiming for large favourable gains in lower prioritised objectives at the cost of only slight degradations in higher prioritised objectives, or vice versa. In this study, the LRPM is validated for 15 head and neck cancer patients receiving bilateral neck irradiation. The plans generated using the LRPM are compared with the plans resulting from the 2pɛc method. Both methods were capable of automatically generating clinically relevant treatment plans for all patients. For some patients, the LRPM allowed large favourable gains in some treatment plan objectives at the cost of only small degradations in the others. Moreover, because a single optimisation is applied instead of multiple optimisations, the LRPM reduced the average computation time from 209.2 to 9.5 min, a speed-up factor of 22 relative to the 2pɛc method.
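For orientation, here is a toy sketch of the sequential ε-constraint idea on which the 2pɛc method is built: objectives are minimised in priority order, and each solved objective is frozen as a constraint with a small slack before the next is addressed. The LRPM's contribution is to replace this whole sequence with a single optimisation. The objectives and slack below are invented for the example.

```python
# Lexicographic optimisation in the epsilon-constraint style.
import numpy as np
from scipy.optimize import minimize

objectives = [lambda x: (x[0] - 1) ** 2 + x[1] ** 2,   # highest priority
              lambda x: (x[1] - 1) ** 2,               # second priority
              lambda x: (x[0] + x[1] - 1) ** 2]        # lowest priority

x0, constraints, slack = np.zeros(2), [], 1e-3
for f in objectives:
    res = minimize(f, x0, constraints=constraints)
    bound = res.fun + slack            # allow only a slight degradation
    constraints = constraints + [
        {"type": "ineq", "fun": (lambda x, f=f, b=bound: b - f(x))}]
    x0 = res.x

print("lexicographic solution:", np.round(x0, 3))
```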
NASA Astrophysics Data System (ADS)
Hadade, Ioan; di Mare, Luca
2016-08-01
Modern multicore and manycore processors exhibit multiple levels of parallelism through a wide range of architectural features such as SIMD for data-parallel execution or threads for core parallelism. The exploitation of multi-level parallelism is therefore crucial for achieving superior performance on current and future processors. This paper presents the performance tuning of a multiblock CFD solver on Intel Sandy Bridge and Haswell multicore CPUs and the Intel Xeon Phi Knights Corner coprocessor. Code optimisations have been applied to two computational kernels exhibiting different computational patterns: the update of flow variables and the evaluation of the Roe numerical fluxes. We discuss at length the code transformations required for achieving efficient SIMD computations for both kernels across the selected devices, including SIMD shuffles and transpositions for flux stencil computations and global memory transformations. Core parallelism is expressed through threading based on a number of domain decomposition techniques, together with optimisations alleviating the NUMA effects found in multi-socket compute nodes. Results are correlated with the Roofline performance model in order to assess their efficiency for each distinct architecture. We report significant speedups for single thread execution across both kernels: 2-5X on the multicore CPUs and 14-23X on the Xeon Phi coprocessor. Computations at full node and chip concurrency deliver a factor of three speedup on the multicore processors and up to 24X on the Xeon Phi manycore coprocessor.
Casemix Funding Optimisation: Working Together to Make the Most of Every Episode.
Uzkuraitis, Carly; Hastings, Karen; Torney, Belinda
2010-10-01
Eastern Health, a large public Victorian Healthcare network, conducted a WIES optimisation audit across the casemix-funded sites for separations in the 2009/2010 financial year. The audit was conducted using existing staff resources and resulted in a significant increase in casemix funding at a minimal cost. The audit showcased the skill set of existing staff and resulted in enormous benefits to the coding and casemix team by demonstrating the value of the combination of skills that makes clinical coders unique. The development of an internal web-based application allowed accurate and timely reporting of the audit results, providing the basis for a restructure of the coding and casemix service, along with approval for additional staffing resources and inclusion of a regular auditing program to focus on the creation of high quality data for research, health services management and financial reimbursement.
Metaheuristic optimisation methods for approximate solving of singular boundary value problems
NASA Astrophysics Data System (ADS)
Sadollah, Ali; Yadav, Neha; Gao, Kaizhou; Su, Rong
2017-07-01
This paper presents a novel approximation technique based on metaheuristics and a weighted residual function (WRF) for tackling singular boundary value problems (BVPs) arising in engineering and science. With the aid of certain fundamental concepts of mathematics, Fourier series expansion, and metaheuristic optimisation algorithms, singular BVPs can be approximated as an optimisation problem with boundary conditions as constraints. The target is to minimise the WRF (i.e. the error function) constructed in the approximation of BVPs. The scheme employs the generational distance metric for quality evaluation of the approximate solutions against exact solutions (i.e. as an error evaluator metric). Four test problems, including two linear and two non-linear singular BVPs, are considered in this paper to check the efficiency and accuracy of the proposed algorithm. The optimisation task is performed using three different optimisers: particle swarm optimisation, the water cycle algorithm, and the harmony search algorithm. The optimisation results obtained show that the suggested technique can be successfully applied for approximate solving of singular BVPs.
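A hedged sketch of the technique: approximate the solution of a singular BVP by a truncated Fourier-type series, build the weighted residual (error) function on collocation points, and minimise it with a metaheuristic; SciPy's differential evolution stands in here for the PSO, water cycle and harmony search optimisers. The test problem y'' + y'/x + 1 = 0 with y'(0) = 0, y(1) = 0 has the exact solution y = (1 - x^2)/4, used only to check the answer.

```python
# Weighted-residual solution of a singular BVP with a metaheuristic.
import numpy as np
from scipy.optimize import differential_evolution

K = 4
x = np.linspace(0.01, 0.99, 40)                 # collocation points
# Basis cos((2k-1)*pi*x/2) satisfies both boundary conditions term by term.
freqs = (2 * np.arange(1, K + 1) - 1) * np.pi / 2
B   = np.cos(np.outer(x, freqs))
Bp  = -freqs * np.sin(np.outer(x, freqs))
Bpp = -freqs**2 * np.cos(np.outer(x, freqs))

def wrf(c):  # weighted residual of the ODE over the collocation points
    r = Bpp @ c + (Bp @ c) / x + 1.0
    return float(np.mean(r ** 2))

res = differential_evolution(wrf, [(-1, 1)] * K, seed=0, tol=1e-10)
y = B @ res.x
print("max error vs exact:", float(np.max(np.abs(y - (1 - x**2) / 4))))
```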
Multi-Optimisation Consensus Clustering
NASA Astrophysics Data System (ADS)
Li, Jian; Swift, Stephen; Liu, Xiaohui
Ensemble Clustering has been developed to provide an alternative way of obtaining more stable and accurate clustering results. It aims to avoid the biases of individual clustering algorithms. However, it is still a challenge to develop an efficient and robust method for Ensemble Clustering. Based on an existing ensemble clustering method, Consensus Clustering (CC), this paper introduces an advanced Consensus Clustering algorithm called Multi-Optimisation Consensus Clustering (MOCC), which utilises an optimised Agreement Separation criterion and a Multi-Optimisation framework to improve the performance of CC. Fifteen different data sets are used for evaluating the performance of MOCC. The results reveal that MOCC can generate more accurate clustering results than the original CC algorithm.
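As a minimal illustration of the consensus step that CC-style methods start from (not MOCC's multi-optimisation framework itself), the sketch below accumulates a co-association matrix over diverse k-means runs and extracts a consensus partition from it.

```python
# Consensus clustering via a co-association matrix: the fraction of base
# clusterings in which each pair of points shares a cluster.
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=150, centers=3, random_state=0)
n, runs = len(X), 25

coassoc = np.zeros((n, n))
for seed in range(runs):  # diverse base clusterings (random inits, random k)
    k = int(np.random.default_rng(seed).integers(2, 6))
    labels = KMeans(n_clusters=k, n_init=1, random_state=seed).fit_predict(X)
    coassoc += (labels[:, None] == labels[None, :])
coassoc /= runs

# Consensus partition: cluster with 1 - coassociation as the distance.
consensus = AgglomerativeClustering(
    n_clusters=3, metric="precomputed", linkage="average"
).fit_predict(1.0 - coassoc)
print(np.bincount(consensus))
```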
NASA Astrophysics Data System (ADS)
Chu, Xiaoyu; Zhang, Jingrui; Lu, Shan; Zhang, Yao; Sun, Yue
2016-11-01
This paper presents a trajectory planning algorithm to optimise the collision avoidance of a chasing spacecraft operating in ultra-close proximity to a failed satellite. The complex configuration and the tumbling motion of the failed satellite are considered. The two-spacecraft rendezvous dynamics are formulated in the target body frame, and the collision avoidance constraints are detailed, particularly concerning the uncertainties. An optimised solution of the approach problem is generated using the Gauss pseudospectral method. A closed-loop control is used to track the optimised trajectory. Numerical results are provided to demonstrate the effectiveness of the proposed algorithms.
Tengku Hashim, Tengku Juhana; Mohamed, Azah
2017-01-01
The growing interest in distributed generation (DG) in recent years has led to a number of generators being connected to distribution systems. The integration of DGs in a distribution system results in a network known as an active distribution network, due to the existence of bidirectional power flow in the system. The voltage rise issue is one of the most important technical issues to be addressed when DGs exist in an active distribution network. This paper presents the application of the backtracking search algorithm (BSA), a relatively new optimisation technique, to determine the optimal settings of coordinated voltage control in a distribution system. The coordinated voltage control considers power factor, on-load tap-changer and generation curtailment control to manage the voltage rise issue. A multi-objective function is formulated to minimise total losses and voltage deviation in a distribution system. The proposed BSA is compared with particle swarm optimisation (PSO) so as to evaluate its effectiveness in determining the optimal settings of power factor, tap-changer and the percentage of active power generation to be curtailed. The load flow algorithm from MATPOWER is integrated in the MATLAB environment to solve the multi-objective optimisation problem. Both the BSA and PSO optimisation techniques have been tested on a radial 13-bus distribution system and the results show that the BSA performs better than PSO by providing a better fitness value and convergence rate. PMID:28991919
NASA Astrophysics Data System (ADS)
Briggs, J. P.; Pennycook, S. J.; Fergusson, J. R.; Jäykkä, J.; Shellard, E. P. S.
2016-04-01
We present a case study describing efforts to optimise and modernise "Modal", the simulation and analysis pipeline used by the Planck satellite experiment for constraining general non-Gaussian models of the early universe via the bispectrum (or three-point correlator) of the cosmic microwave background radiation. We focus on one particular element of the code: the projection of bispectra from the end of inflation to the spherical shell at decoupling, which defines the CMB we observe today. This code involves a three-dimensional inner product between two functions, one of which requires an integral, on a non-rectangular domain containing a sparse grid. We show that by employing separable methods this calculation can be reduced to a one-dimensional summation plus two integrations, reducing the overall dimensionality from four to three. The introduction of separable functions also solves the issue of the non-rectangular sparse grid. This separable method can become unstable in certain scenarios and so the slower non-separable integral must be calculated instead. We present a discussion of the optimisation of both approaches. We demonstrate significant speed-ups of ≈100×, arising from a combination of algorithmic improvements and architecture-aware optimisations targeted at improving thread and vectorisation behaviour. The resulting MPI/OpenMP hybrid code is capable of executing on clusters containing processors and/or coprocessors, with strong-scaling efficiency of 98.6% on up to 16 nodes. We find that a single coprocessor outperforms two processor sockets by a factor of 1.3× and that running the same code across a combination of both microarchitectures improves performance-per-node by a factor of 3.38×. By making bispectrum calculations competitive with those for the power spectrum (or two-point correlator) we are now able to consider joint analysis for cosmological science exploitation of new data.
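The dimensionality reduction at the heart of the separable method can be demonstrated in a few lines: when the functions are sums of separable terms, a three-dimensional inner product factorises into products of one-dimensional integrals. The grids and functions below are arbitrary stand-ins, not Modal's basis.

```python
# Separable reduction: <A,A> over a 3D grid computed two ways.
import numpy as np

n = 64
x = np.linspace(0, 1, n)
w = np.full(n, x[1] - x[0])                  # simple quadrature weights

# A "bispectrum" expressed as a sum of separable terms sum_r f_r g_r h_r.
f = [np.sin(np.pi * x), np.cos(np.pi * x)]
g = [np.exp(-x), x**2]
h = [x, 1.0 - x]

# Direct approach: build the full 3D array and integrate (O(n^3) work).
A = sum(np.einsum("i,j,k->ijk", fr, gr, hr) for fr, gr, hr in zip(f, g, h))
direct = np.einsum("ijk,i,j,k->", A**2, w, w, w)

# Separable approach: <A,A> = sum_{r,s} (f_r.f_s)(g_r.g_s)(h_r.h_s), i.e.
# only 1D integrals (O(R^2 n) work).
sep = sum((f[r] * f[s] * w).sum()
          * (g[r] * g[s] * w).sum()
          * (h[r] * h[s] * w).sum()
          for r in range(2) for s in range(2))

print(np.isclose(direct, sep))  # True: same inner product, far less work
```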
A supportive architecture for CFD-based design optimisation
NASA Astrophysics Data System (ADS)
Li, Ni; Su, Zeya; Bi, Zhuming; Tian, Chao; Ren, Zhiming; Gong, Guanghong
2014-03-01
Multi-disciplinary design optimisation (MDO) is one of the critical methodologies for the implementation of enterprise systems (ES). MDO requiring the analysis of fluid dynamics raises a special challenge due to its extremely intensive computation. The rapid development of computational fluid dynamics (CFD) techniques has led to a rise in their application across various fields. Especially for the exterior design of vehicles, CFD has become one of the three main design tools, comparable to analytical approaches and wind tunnel experiments. CFD-based design optimisation is an effective way to achieve the desired performance under given constraints. However, due to the complexity of CFD, integrating CFD analysis into an intelligent optimisation algorithm is not straightforward. It is a challenge to solve a CFD-based design problem, which usually has high dimensionality and multiple objectives and constraints. It is desirable to have an integrated architecture for CFD-based design optimisation. However, our review of existing works has found that very few researchers have studied assistive tools to facilitate CFD-based design optimisation. In this paper, a multi-layer architecture and a general procedure are proposed to integrate different CFD toolsets with intelligent optimisation algorithms, parallel computing techniques and other techniques for efficient computation. In the proposed architecture, the integration is performed either at the code level or at the data level to fully utilise the capabilities of different assistive tools. Two intelligent algorithms are developed and embedded with parallel computing. These algorithms, together with the supportive architecture, lay a solid foundation for various applications of CFD-based design optimisation. To illustrate the effectiveness of the proposed architecture and algorithms, case studies on the aerodynamic shape design of a hypersonic cruising vehicle are provided; the results show that the proposed architecture and developed algorithms perform successfully and efficiently in dealing with a design optimisation involving over 200 design variables.
NASA Astrophysics Data System (ADS)
Jin, Chenxia; Li, Fachao; Tsang, Eric C. C.; Bulysheva, Larissa; Kataev, Mikhail Yu
2017-01-01
In many real industrial applications, the integration of raw data with a methodology can support economically sound decision-making. Furthermore, most of these tasks involve complex optimisation problems, so seeking better solutions is critical. As an intelligent search optimisation technique, the genetic algorithm (GA) is an important tool for complex system optimisation, but it has internal drawbacks such as low computational efficiency and premature convergence. Improving the performance of GAs is a vital topic in academic and applied research. In this paper, a new real-coded crossover operator, called the compound arithmetic crossover operator (CAC), is proposed. CAC is used in conjunction with a uniform mutation operator to define a new genetic algorithm, CAC10-GA. This GA is compared with an existing genetic algorithm (AC10-GA) that comprises an arithmetic crossover operator and a uniform mutation operator. To judge the performance of CAC10-GA, two kinds of analysis are performed: first, the convergence of CAC10-GA is analysed via Markov chain theory; second, a pair-wise comparison is carried out between CAC10-GA and AC10-GA on two test problems from the global optimisation literature. The overall comparative study shows that the CAC performs quite well and that CAC10-GA outperforms AC10-GA.
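For context, the sketch below shows the standard arithmetic crossover used by the baseline AC10-GA, plus a hypothetical "compound" variant that simply composes two blends; the latter is only an illustration of the idea, not the authors' definition of CAC.

```python
# Arithmetic crossover for real-coded GAs, and a hypothetical compound
# variant (NOT the paper's CAC definition) composing two random blends.
import numpy as np

rng = np.random.default_rng(0)

def arithmetic_crossover(p1, p2):
    a = rng.random()                       # random blend weight in [0, 1)
    return a * p1 + (1 - a) * p2, (1 - a) * p1 + a * p2

def compound_arithmetic_crossover(p1, p2):
    c1, c2 = arithmetic_crossover(p1, p2)  # first blend
    return arithmetic_crossover(c1, c2)    # blend the offspring again

p1, p2 = np.array([1.0, 4.0]), np.array([3.0, 0.0])
print(arithmetic_crossover(p1, p2))
print(compound_arithmetic_crossover(p1, p2))
```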
Modelling of auctioning mechanism for solar photovoltaic capacity
NASA Astrophysics Data System (ADS)
Poullikkas, Andreas
2016-10-01
In this work, a modified optimisation model for the integration of renewable energy sources for power generation (RES-E) technologies in power-generation systems on a unit commitment basis is developed. The purpose of the modified optimisation procedure is to account for RES-E capacity auctions at different solar photovoltaic (PV) capacity electricity prices. The optimisation model developed uses a genetic algorithm (GA) technique for the calculation of the required RES-E levy (or green tax) in electricity bills. The procedure also enables the estimation of the adequate (or eligible) feed-in tariff to be offered to future RES-E systems which do not participate in the capacity auctioning procedure. In order to demonstrate the applicability of the optimisation procedure developed, the case of PV capacity auctioning for commercial systems is examined. The results indicate that the required green tax, which is charged to electricity customers through their electricity bills in order to promote the use of RES-E technologies, is reduced with the reduction in the final auctioning price. This has a significant effect on the reduction of electricity bills.
Echtermeyer, Alexander; Amar, Yehia; Zakrzewski, Jacek; Lapkin, Alexei
2017-01-01
A recently described C(sp3)-H activation reaction to synthesise aziridines was used as a model reaction to demonstrate the methodology of developing a process model using model-based design of experiments (MBDoE) and self-optimisation approaches in flow. The two approaches are compared in terms of experimental efficiency. The self-optimisation approach required the fewest experiments to reach the specified objectives of cost and product yield, whereas the MBDoE approach enabled rapid generation of a process model.
Colour coding scrubs as a means of improving perioperative communication.
Litak, Dominika
2011-05-01
Effective communication within the operating department is essential for achieving patient safety. A large part of perioperative communication is non-verbal. One type of non-verbal communication is 'object communication', the most common form of which is clothing. The colour coding of clothing such as scrubs has the potential to optimise perioperative communication with patients and between staff. A colour contains a coded message and is a visual cue for immediate identification of personnel. This is of key importance in the perioperative environment. The idea of colour-coded scrubs in the perioperative setting has not been much explored to date and, given the potential contribution towards improvement of patient outcomes, deserves consideration.
Optimisation of a parallel ocean general circulation model
NASA Astrophysics Data System (ADS)
Beare, M. I.; Stevens, D. P.
1997-10-01
This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.
1992-02-01
Vector of thickness variables: V = [t1, t2, ..., tN]. Vector of thickness changes: ΔV = [δt1, δt2, ..., δtN]. Vector of strain derivatives: F' = [dF/dt1, dF/dt2, ..., dF/dtN]. Vector of buckling derivatives: λ' = [dλ/dt1, dλ/dt2, ..., dλ/dtN]. Then δF = F'·ΔV and δλ = λ'·ΔV give the linearised changes in the strain and buckling responses.
Optimisation of lateral car dynamics taking into account parameter uncertainties
NASA Astrophysics Data System (ADS)
Busch, Jochen; Bestle, Dieter
2014-02-01
Simulation studies on an active all-wheel-steering car show that disturbances of vehicle parameters have a strong influence on lateral car dynamics. This motivates the need for robust design against such parameter uncertainties. A specific parametrisation is established, combining deterministic, velocity-dependent steering control parameters with partly uncertain, velocity-independent vehicle parameters for simultaneous use in a numerical optimisation process. Model-based objectives are formulated and summarised in a multi-objective optimisation problem, where especially the lateral steady-state behaviour is improved by an adaptation strategy based on measurable uncertainties. The normally distributed uncertainties are generated by optimal Latin hypercube sampling, and a response-surface-based strategy helps to cut down time-consuming model evaluations, which makes it possible to use a genetic optimisation algorithm. Optimisation results are discussed in different criterion spaces, and the achieved improvements confirm the validity of the proposed procedure.
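As an illustration of the sampling step described above, the sketch below draws an optimised Latin hypercube design and maps it to normally distributed parameters. It assumes a recent SciPy (scipy.stats.qmc), and the parameter means and standard deviations are invented for illustration.

```python
import numpy as np
from scipy.stats import qmc, norm

# Latin hypercube design in [0,1]^d; "random-cd" iteratively improves the
# centred discrepancy of the design (a simple form of optimal LHS).
sampler = qmc.LatinHypercube(d=3, optimization="random-cd", seed=1)
u = sampler.random(n=100)

# Map the stratified uniforms to normally distributed vehicle parameters
# (means/standard deviations below are illustrative, not the paper's values).
means = np.array([1500.0, 2.7, 0.35])   # e.g. mass, wheelbase, cornering factor
stds = np.array([75.0, 0.05, 0.03])
params = norm.ppf(u, loc=means, scale=stds)   # shape (100, 3) sample matrix
```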
NASA Astrophysics Data System (ADS)
Lehtola, Susi; Parkhill, John; Head-Gordon, Martin
2018-03-01
We describe the implementation of orbital optimisation for the models in the perfect pairing hierarchy. Orbital optimisation, which is generally necessary to obtain reliable results, is pursued at perfect pairing (PP) and perfect quadruples (PQ) levels of theory for applications on linear polyacenes, which are believed to exhibit strong correlation in the π space. While local minima and σ-π symmetry breaking solutions were found for PP orbitals, no such problems were encountered for PQ orbitals. The PQ orbitals are used for single-point calculations at PP, PQ and perfect hextuples (PH) levels of theory, both in the π subspace alone and in the full σπ valence space. It is numerically demonstrated that the inclusion of single excitations is necessary even when optimised orbitals are used. PH is found to yield good agreement with previously published density matrix renormalisation group data in the π space, capturing over 95% of the correlation energy. Full-valence calculations made possible by our novel, efficient code reveal that strong correlations are weaker when larger basis sets or active spaces are employed than in previous calculations. The largest full-valence PH calculations presented correspond to a (192e,192o) problem.
USDA-ARS's Scientific Manuscript database
We have previously identified the mycobacterial high G+C codon usage bias as a limiting factor in heterologous expression of MAP proteins from Lb.salivarius, and demonstrated that codon optimisation of a synthetic coding gene greatly enhances MAP protein production. Here, we effectively demonstrate ...
Quick, Josh; Grubaugh, Nathan D; Pullan, Steven T; Claro, Ingra M; Smith, Andrew D; Gangavarapu, Karthik; Oliveira, Glenn; Robles-Sikisaka, Refugio; Rogers, Thomas F; Beutler, Nathan A; Burton, Dennis R; Lewis-Ximenez, Lia Laura; de Jesus, Jaqueline Goes; Giovanetti, Marta; Hill, Sarah; Black, Allison; Bedford, Trevor; Carroll, Miles W; Nunes, Marcio; Alcantara, Luiz Carlos; Sabino, Ester C; Baylis, Sally A; Faria, Nuno; Loose, Matthew; Simpson, Jared T; Pybus, Oliver G; Andersen, Kristian G; Loman, Nicholas J
2018-01-01
Genome sequencing has become a powerful tool for studying emerging infectious diseases; however, genome sequencing directly from clinical samples without isolation remains challenging for viruses such as Zika, where metagenomic sequencing methods may generate insufficient numbers of viral reads. Here we present a protocol for generating coding-sequence complete genomes comprising an online primer design tool, a novel multiplex PCR enrichment protocol, optimised library preparation methods for the portable MinION sequencer (Oxford Nanopore Technologies) and the Illumina range of instruments, and a bioinformatics pipeline for generating consensus sequences. The MinION protocol does not require an internet connection for analysis, making it suitable for field applications with limited connectivity. Our method relies on multiplex PCR for targeted enrichment of viral genomes from samples containing as few as 50 genome copies per reaction. Viral consensus sequences can be achieved starting with clinical samples in 1-2 days following a simple laboratory workflow. This method has been successfully used by several groups studying Zika virus evolution and is facilitating an understanding of the spread of the virus in the Americas. PMID:28538739
A New Computational Technique for the Generation of Optimised Aircraft Trajectories
NASA Astrophysics Data System (ADS)
Chircop, Kenneth; Gardi, Alessandro; Zammit-Mangion, David; Sabatini, Roberto
2017-12-01
A new computational technique based on Pseudospectral Discretisation (PSD) and adaptive bisection ɛ-constraint methods is proposed to solve multi-objective aircraft trajectory optimisation problems formulated as nonlinear optimal control problems. This technique is applicable to a variety of next-generation avionics and Air Traffic Management (ATM) Decision Support Systems (DSS) for strategic and tactical replanning operations. These include the future Flight Management Systems (FMS) and the 4-Dimensional Trajectory (4DT) planning and intent negotiation/validation tools envisaged by SESAR and NextGen for a global implementation. In particular, after describing the PSD method, the adaptive bisection ɛ-constraint method is presented to allow an efficient solution of problems in which two or more performance indices are to be minimised simultaneously. Initial simulation case studies were performed adopting suitable aircraft dynamics models and addressing a classical vertical trajectory optimisation problem with two simultaneous objectives. Subsequently, a more advanced 4DT simulation case study is presented with a focus on representative ATM optimisation objectives in the Terminal Manoeuvring Area (TMA). The simulation results are analysed in depth and corroborated by flight performance analysis, supporting the validity of the proposed computational techniques.
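A minimal sketch of a bisection-style ɛ-constraint method for two objectives is given below; it uses SciPy's SLSQP solver and toy objectives rather than the paper's PSD-discretised optimal control problem, and the adaptivity rule (refine only where the optimal first objective still changes) is a simplified stand-in.

```python
from scipy.optimize import NonlinearConstraint, minimize

def epsilon_constraint_front(f1, f2, x0, eps_lo, eps_hi, tol=1e-2):
    """Trace a bi-objective front: minimise f1 subject to f2(x) <= eps,
    bisecting the eps interval only where the optimum still changes."""
    cache = {}

    def solve(eps):
        if eps not in cache:
            con = NonlinearConstraint(f2, -float("inf"), eps)
            cache[eps] = minimize(f1, x0, method="SLSQP", constraints=[con]).x
        return cache[eps]

    stack = [(eps_lo, eps_hi)]
    while stack:
        lo, hi = stack.pop()
        # refine only if the optimal f1 differs appreciably across the interval
        if abs(f1(solve(lo)) - f1(solve(hi))) > tol and hi - lo > 1e-4:
            mid = 0.5 * (lo + hi)
            stack += [(lo, mid), (mid, hi)]
    return sorted(cache.items())

# Toy objectives standing in for, e.g., fuel-burn vs. flight-time surrogates.
front = epsilon_constraint_front(lambda x: (x[0] - 1.0) ** 2,
                                 lambda x: (x[0] + 1.0) ** 2,
                                 x0=[0.0], eps_lo=0.5, eps_hi=4.0)
```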
NASA Astrophysics Data System (ADS)
Ferreira, Ana C. M.; Teixeira, Senhorinha F. C. F.; Silva, Rui G.; Silva, Ângela M.
2018-04-01
Cogeneration allows the optimal use of primary energy sources and significant reductions in carbon emissions. Its use has great potential for applications in the residential sector. This study aims to develop a methodology for the thermal-economic optimisation of a small-scale micro-gas turbine for cogeneration purposes, able to fulfil domestic energy needs with a thermal power output of 125 kW. A constrained non-linear optimisation model was built. The objective function is the maximisation of the annual worth of the combined heat and power, representing the balance between annual incomes and expenditures, subject to physical and economic constraints. A genetic algorithm coded in the Java programming language was developed. An optimal micro-gas turbine able to produce 103.5 kW of electrical power with a positive annual profit (i.e. 11,925 €/year) was identified. The investment can be recovered in 4 years and 9 months, which is less than half of the system's expected lifetime.
Tobacco outlet density and converted versus native non-daily cigarette use in a national US sample
Kirchner, Thomas R; Anesetti-Rothermel, Andrew; Bennett, Morgane; Gao, Hong; Carlos, Heather; Scheuermann, Taneisha S; Reitzel, Lorraine R; Ahluwalia, Jasjit S
2017-01-01
Objective Investigate whether non-daily smokers' (NDS) cigarette price and purchase preferences, recent cessation attempts, and current intentions to quit are associated with the density of the retail cigarette product landscape surrounding their residential address. Participants Cross-sectional assessment of N=904 converted NDS (CNDS), who previously smoked every day, and N=297 native NDS (NNDS), who only smoked non-daily, drawn from a national panel. Outcome measures Kernel density estimation was used to generate a nationwide probability surface of tobacco outlets linked to participants' residential ZIP code. Hierarchically nested log-linear models were compared to evaluate associations between outlet density, non-daily use patterns, price sensitivity and quit intentions. Results Overall, NDS in ZIP codes with greater outlet density were less likely than NDS in ZIP codes with lower outlet density to hold 6-month quit intentions when they also reported that price affected use patterns (G2=66.1, p<0.001) and purchase locations (G2=85.2, p<0.001). CNDS were more likely than NNDS to reside in ZIP codes with higher outlet density (G2=322.0, p<0.001). Compared with CNDS in ZIP codes with lower outlet density, CNDS in high-density ZIP codes were more likely to report that price influenced the amount they smoke (G2=43.9, p<0.001), and were more likely to look for better prices (G2=59.3, p<0.001). NDS residing in high-density ZIP codes were not more likely to report that price affected their cigarette brand choice compared with those in ZIP codes with lower density. Conclusions This paper provides initial evidence that the point-of-sale cigarette environment may be differentially associated with the maintenance of CNDS versus NNDS patterns. Future research should investigate how tobacco control efforts can be optimised to both promote cessation and curb the rising tide of non-daily smoking in the USA. PMID:26969172
Optimising operational amplifiers by evolutionary algorithms and gm/Id method
NASA Astrophysics Data System (ADS)
Tlelo-Cuautle, E.; Sanabria-Borbon, A. C.
2016-10-01
The evolutionary algorithm called non-dominated sorting genetic algorithm (NSGA-II) is applied herein to the optimisation of operational transconductance amplifiers. NSGA-II is accelerated by applying the gm/Id method to estimate reduced search spaces associated with the widths (W) and lengths (L) of the metal-oxide-semiconductor field-effect transistors (MOSFETs), and to guarantee their appropriate bias conditions. In addition, we introduce an integer encoding for the W/L sizes of the MOSFETs to avoid a post-processing step of rounding off their values to multiples of the integrated-circuit fabrication technology. Finally, from the feasible solutions generated by NSGA-II, we introduce a second optimisation stage to guarantee that the final feasible W/L sizes support process, voltage and temperature (PVT) variations. The optimisation results lead us to conclude that the gm/Id method and integer encoding are quite useful for accelerating the convergence of the evolutionary algorithm NSGA-II, while the second optimisation stage guarantees robustness of the feasible solutions to PVT variations.
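The integer-encoding idea can be sketched as follows: genes are integer multiples of the fabrication grid, so decoded W/L values never need rounding. The grid value and transistor counts are assumptions for illustration only.

```python
LAMBDA = 0.09e-6  # assumed fabrication grid (90 nm); not from the paper

def decode(genome):
    """Map integer genes to physical MOSFET sizes as grid multiples,
    so every candidate is manufacturable without a rounding step."""
    return [(w_int * LAMBDA, l_int * LAMBDA)
            for w_int, l_int in zip(genome[::2], genome[1::2])]

# One genome = integer (W, L) pairs for, e.g., a four-transistor OTA stage.
genome = [120, 2, 80, 2, 200, 4, 200, 4]
sizes = decode(genome)   # [(1.08e-05, 1.8e-07), (7.2e-06, 1.8e-07), ...]
```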
NASA Astrophysics Data System (ADS)
Wang, Congsi; Wang, Yan; Wang, Zhihai; Wang, Meng; Yuan, Shuai; Wang, Weifeng
2018-04-01
It is well known that calculating and reducing the radar cross section (RCS) of an active phased array antenna (APAA) is both difficult and complicated. Balancing radiating and scattering performance while reducing the RCS remains an unresolved problem. Therefore, this paper develops a coupled structure and scattering array factor model of the APAA based on the phase errors of the radiating elements generated by structural distortion and installation errors of the array. To obtain optimal radiating and scattering performance, an integrated optimisation model is built to optimise the installation height of all radiating elements in the normal direction of the array, in which the particle swarm optimisation method is adopted and the gain loss and scattering array factor are selected as the fitness function. The simulation indicates that the proposed coupling model and integrated optimisation method can effectively decrease the RCS while simultaneously guaranteeing the necessary radiating performance, which demonstrates important application value in the engineering design and structural evaluation of APAAs.
NASA Astrophysics Data System (ADS)
Nightingale, James; Wang, Qi; Grecos, Christos
2011-03-01
Users of the next-generation wireless paradigm known as multihomed mobile networks expect satisfactory quality of service (QoS) when accessing streamed multimedia content. The recent H.264 Scalable Video Coding (SVC) extension to the Advanced Video Coding standard (AVC) offers the facility to adapt real-time video streams in response to the dynamic conditions of multiple network paths encountered in multihomed wireless mobile networks. Nevertheless, pre-existing streaming algorithms were mainly proposed for AVC delivery over multipath wired networks and were evaluated by software simulation. This paper introduces a practical, hardware-based testbed upon which we implement and evaluate real-time H.264 SVC streaming algorithms in a realistic multihomed wireless mobile network environment. We propose an optimised streaming algorithm with several technical contributions. Firstly, we extended the AVC packet prioritisation schemes to reflect the three-dimensional granularity of SVC. Secondly, we designed a mechanism for evaluating the effects of different streamer 'read ahead window' sizes on real-time performance. Thirdly, we took account of the previously unconsidered path switching and mobile network tunnelling overheads encountered in real-world deployments. Finally, we implemented a path condition monitoring and reporting scheme to facilitate intelligent path switching. The proposed system has been experimentally shown to offer a significant improvement in PSNR of the received stream compared with representative existing algorithms.
Optimisation of Combined Cycle Gas Turbine Power Plant in Intraday Market: Riga CHP-2 Example
NASA Astrophysics Data System (ADS)
Ivanova, P.; Grebesh, E.; Linkevics, O.
2018-02-01
In this research, the influence on the generation portfolio of a combined cycle gas turbine unit optimised according to the previously developed EM & OM approach, applied in the intraday market, is evaluated. The portfolio consists of two combined cycle gas turbine units. The introduced evaluation algorithm preserves the power and heat balance before and after the application of the EM & OM approach by making changes in the generation profile of the units. The aim of this algorithm is profit maximisation of the generation portfolio. The evaluation algorithm is implemented in the multi-paradigm numerical computing environment MATLAB on the example of Riga CHP-2. The results show that the use of the EM & OM approach in the intraday market can be profitable or unprofitable, depending on the initial state of the generation units in the intraday market and on the content of the generation portfolio.
NASA Astrophysics Data System (ADS)
Sur, Chiranjib; Shukla, Anupam
2018-03-01
The Bacteria Foraging Optimisation Algorithm is a collective-behaviour-based meta-heuristic search method driven by the social influence of the bacteria co-agents in the search space of the problem. The algorithm faces considerable hindrance in its application to discrete and graph-based problems because its mathematical modelling and dynamic structure are biased towards continuous domains. This motivated the introduction of a discrete form, the Discrete Bacteria Foraging Optimisation (DBFO) Algorithm, for discrete problems, which in real life outnumber the continuous-domain problems represented by mathematical and numerical equations. In this work, we simulate a graph-based multi-objective road optimisation problem and discuss the prospect of utilising DBFO in other similar optimisation and graph-based problems. The various solution representations that DBFO can handle are also discussed. The implications and dynamics of the various parameters used in DBFO are illustrated from the point of view of the problems, combining both exploration and exploitation. The results of DBFO are compared with the Ant Colony Optimisation and Intelligent Water Drops algorithms. An important feature of DBFO is that the bacteria agents do not depend on local heuristic information, but instead derive new exploration schemes from previous experience and analysis of the paths covered. This makes the algorithm better at generating combinations for graph-based and NP-hard problems.
NASA Astrophysics Data System (ADS)
Sundaramoorthy, Kumaravel
2017-02-01
Electricity generation based on hybrid energy systems (HESs) has become an attractive solution for rural electrification. Economically feasible and technically reliable HESs rest on a sound optimisation stage. This article discusses an optimal unit-sizing model whose objective function minimises the total cost of the HES. Three typical rural sites from the southern part of India have been selected for the application of the developed optimisation methodology. Feasibility studies and sensitivity analyses of the optimal HES are discussed in detail in this article. A comparison has been carried out with the Hybrid Optimization Model for Electric Renewables (HOMER) for the three sites. The optimal HES is found to have a lower total net present cost and cost of energy compared with the existing method.
Multi-phase SPH modelling of violent hydrodynamics on GPUs
NASA Astrophysics Data System (ADS)
Mokos, Athanasios; Rogers, Benedict D.; Stansby, Peter K.; Domínguez, José M.
2015-11-01
This paper presents the acceleration of multi-phase smoothed particle hydrodynamics (SPH) using a graphics processing unit (GPU), enabling large numbers of particles (10-20 million) to be simulated on just a single GPU card. With novel hardware architectures such as a GPU, the optimum approach to implement a multi-phase scheme presents some new challenges. Many more particles must be included in the calculation, and there are very different speeds of sound in each phase, with the largest speed of sound determining the time step. This requires efficient computation. To take full advantage of the hardware acceleration provided by a single GPU for a multi-phase simulation, four different algorithms are investigated: conditional statements, binary operators, separate particle lists and an intermediate global function. Runtime results show that the optimum approach needs to employ separate cell and neighbour lists for each phase. The profiler shows that this approach leads to a reduction in both memory transactions and arithmetic operations, giving significant runtime gains. The four different algorithms are compared with the efficiency of the optimised single-phase GPU code, DualSPHysics, for 2-D and 3-D simulations, which indicates that the multi-phase functionality has a significant computational overhead. A comparison with an optimised CPU code shows a speed-up of an order of magnitude over an OpenMP simulation with 8 threads and two orders of magnitude over a single-thread simulation. A demonstration of the multi-phase SPH GPU code is provided by a 3-D dam break case impacting an obstacle. This shows better agreement with experimental results than an equivalent single-phase code. The multi-phase GPU code enables a convergence study to be undertaken on a single GPU with a large number of particles that would otherwise have required large high-performance computing resources.
hydroPSO: A Versatile Particle Swarm Optimisation R Package for Calibration of Environmental Models
NASA Astrophysics Data System (ADS)
Zambrano-Bigiarini, M.; Rojas, R.
2012-04-01
Particle Swarm Optimisation (PSO) is a recent and powerful population-based stochastic optimisation technique inspired by the social behaviour of bird flocking, which shares similarities with other evolutionary techniques such as Genetic Algorithms (GA). In PSO, however, each individual of the population, known as a particle in PSO terminology, adjusts its flying trajectory on the multi-dimensional search space according to its own experience (best-known personal position) and that of its neighbours in the swarm (best-known local position). PSO has recently received a surge of attention given its flexibility, ease of programming, low memory and CPU requirements, and efficiency. Despite these advantages, PSO may still get trapped in sub-optimal solutions, suffer from swarm explosion or premature convergence. Thus, the development of enhancements to the "canonical" PSO is an active area of research. To date, several modifications to the canonical PSO have been proposed in the literature, resulting in a large and dispersed collection of codes and algorithms which might well be used for similar if not identical purposes. In this work we present hydroPSO, a platform-independent R package implementing several enhancements to the canonical PSO that we consider of utmost importance to bring this technique to the attention of a broader community of scientists and practitioners. hydroPSO is model-independent, allowing the user to interface any model code with the calibration engine without having to invest considerable effort in customising PSO to a new calibration problem. Some of the controlling options to fine-tune hydroPSO are: four alternative topologies, several types of inertia weight, time-variant acceleration coefficients, time-variant maximum velocity, regrouping of particles when premature convergence is detected, different types of boundary conditions and many others. Additionally, hydroPSO implements recent PSO variants such as Improved Particle Swarm Optimisation (IPSO), Fully Informed Particle Swarm (FIPS), and weighted FIPS (wFIPS). Finally, an advanced sensitivity analysis using the Latin Hypercube One-At-a-Time (LH-OAT) method and user-friendly plotting summaries facilitate the interpretation and assessment of the calibration/optimisation results. We validate hydroPSO against the standard PSO algorithm (SPSO-2007) employing five test functions commonly used to assess the performance of optimisation algorithms. Additionally, we illustrate how the performance of the optimisation/calibration engine is boosted by using several of the fine-tuning options included in hydroPSO. Finally, we show how to interface SWAT-2005 with hydroPSO to calibrate a semi-distributed hydrological model for the Ega River basin in Spain, and how to interface MODFLOW-2000 and hydroPSO to calibrate a groundwater flow model for the regional aquifer of the Pampa del Tamarugal in Chile. We limit the applications of hydroPSO to case studies dealing with surface water and groundwater models, as these two are the authors' areas of expertise. However, based on the flexibility of hydroPSO, we believe this package can be applied to any model code requiring some form of parameter estimation.
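hydroPSO itself is an R package; purely to illustrate the canonical PSO update that such packages build on, here is a compact NumPy sketch using commonly quoted inertia and acceleration constants (w = 0.72, c1 = c2 = 1.49). The objective is a placeholder.

```python
import numpy as np

def pso_minimise(f, bounds, n_particles=30, iters=200,
                 w=0.72, c1=1.49, c2=1.49, seed=0):
    """Canonical PSO: inertia w plus cognitive (c1) and social (c2) pulls."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)             # simple absorbing boundaries
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

best_x, best_f = pso_minimise(lambda p: np.sum(p ** 2), [(-5, 5)] * 4)
```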
Rushton, A; White, L; Heap, A; Heneghan, N; Goodwin, P
2016-01-01
Objectives To develop an optimised 1:1 physiotherapy intervention that reflects best practice, with flexibility to tailor management to individual patients, thereby ensuring patient-centred practice. Design Mixed-methods combining evidence synthesis, expert review and focus groups. Setting Secondary care involving 5 UK specialist spinal centres. Participants A purposive panel of clinical experts from the 5 spinal centres, comprising spinal surgeons, inpatient and outpatient physiotherapists, provided expert review of the draft intervention. Purposive samples of patients (n=10) and physiotherapists (n=10) (inpatient/outpatient physiotherapists managing patients with lumbar discectomy) were invited to participate in the focus groups at 1 spinal centre. Methods A draft intervention developed from 2 systematic reviews; a survey of current practice and research related to stratified care was circulated to the panel of clinical experts. Lead physiotherapists collaborated with physiotherapy and surgeon colleagues to provide feedback that informed the intervention presented at 2 focus groups investigating acceptability to patients and physiotherapists. The focus groups were facilitated by an experienced facilitator, recorded in written and tape-recorded forms by an observer. Tape recordings were transcribed verbatim. Data analysis, conducted by 2 independent researchers, employed an iterative and constant comparative process of (1) initial descriptive coding to identify categories and subsequent themes, and (2) deeper, interpretive coding and thematic analysis enabling concepts to emerge and overarching pattern codes to be identified. Results The intervention reflected best available evidence and provided flexibility to ensure patient-centred care. The intervention comprised up to 8 sessions of 1:1 physiotherapy over 8 weeks, starting 4 weeks postsurgery. The intervention was acceptable to patients and physiotherapists. Conclusions A rigorous process informed an optimised 1:1 physiotherapy intervention post-lumbar discectomy that reflects best practice. The developed intervention was agreed on by the 5 spinal centres for implementation in a randomised controlled trial to evaluate its effectiveness. PMID:26916690
Ceberio, Josu; Calvo, Borja; Mendiburu, Alexander; Lozano, Jose A
2018-02-15
In the last decade, many works in combinatorial optimisation have shown that, due to the advances in multi-objective optimisation, the algorithms from this field could be used for solving single-objective problems as well. In this sense, a number of papers have proposed multi-objectivising single-objective problems in order to use multi-objective algorithms in their optimisation. In this article, we follow up this idea by presenting a methodology for multi-objectivising combinatorial optimisation problems based on elementary landscape decompositions of their objective function. Under this framework, each of the elementary landscapes obtained from the decomposition is considered as an independent objective function to optimise. In order to illustrate this general methodology, we consider four problems from different domains: the quadratic assignment problem and the linear ordering problem (permutation domain), the 0-1 unconstrained quadratic optimisation problem (binary domain), and the frequency assignment problem (integer domain). We implemented two widely known multi-objective algorithms, NSGA-II and SPEA2, and compared their performance with that of a single-objective GA. The experiments conducted on a large benchmark of instances of the four problems show that the multi-objective algorithms clearly outperform the single-objective approaches. Furthermore, a discussion on the results suggests that the multi-objective space generated by this decomposition enhances the exploration ability, thus permitting NSGA-II and SPEA2 to obtain better results in the majority of the tested instances.
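The following toy sketch illustrates the core idea under stated assumptions: a single objective is split into two components that are treated as separate objectives, and a simple nondominated filter plays the role of the Pareto machinery inside NSGA-II or SPEA2. The even/odd split below is only an illustrative stand-in for a true elementary landscape decomposition.

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(population, objectives):
    """Keep the solutions whose objective vectors are not dominated."""
    scored = [(s, tuple(obj(s) for obj in objectives)) for s in population]
    return [s for s, fs in scored
            if not any(dominates(gs, fs) for _, gs in scored if gs != fs)]

# Toy multi-objectivisation: split f(x) = sum(x_i^2) into two components,
# each treated as an independent objective to optimise.
f1 = lambda x: sum(v * v for v in x[::2])    # even-indexed terms
f2 = lambda x: sum(v * v for v in x[1::2])   # odd-indexed terms
pop = [[random.uniform(-1, 1) for _ in range(6)] for _ in range(50)]
front = nondominated(pop, [f1, f2])          # candidates for the next generation
```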
Keogh, Pauraic; Ray, Noel J; Lynch, Christopher D; Burke, Francis M; Hannigan, Ailish
2004-12-01
This investigation determined the minimum exposure times consistent with optimised surface microhardness parameters for a commercial resin composite cured using a "first-generation" light-emitting diode activation lamp. Disk specimens were exposed and surface microhardness numbers measured at the top and bottom surfaces for elapsed times of 1 hour and 24 hours. Bottom/top microhardness number ratios were also calculated. Most microhardness data increased significantly over the elapsed time interval but microhardness ratios (bottom/top) were dependent on exposure time only. A minimum exposure of 40 secs is appropriate to optimise microhardness parameters for the combination of resin composite and lamp investigated.
NASA Astrophysics Data System (ADS)
Han, Ke-Zhen; Feng, Jian; Cui, Xiaohong
2017-10-01
This paper considers the fault-tolerant optimised tracking control (FTOTC) problem for unknown discrete-time linear systems. A research scheme is proposed on the basis of data-based parity space identification, reinforcement learning and residual compensation techniques. The main characteristic of this scheme lies in simultaneous tracking control and residual compensation based on parity space identification. The scheme consists of four main components: a subspace-aided method is applied to design an observer-based residual generator; a reinforcement Q-learning approach is used to solve the optimised tracking control policy; robust H∞ theory is relied on to achieve noise attenuation; and fault estimation triggered by the residual generator is adopted to perform fault compensation. To clarify the design and implementation procedures, an integrated algorithm is further constructed to link these four functional units. Detailed analysis and proof are subsequently given to explain the guaranteed FTOTC performance of the proposed scheme. Finally, a simulation case is provided to verify its effectiveness.
Molony, D; Beame, C; Behan, W; Crowley, J; Dennehy, T; Quinlan, M; Cullen, W
2016-11-01
While considerable changes are happening in primary care in Ireland, and considerable potential exists in intelligence derived from practice-based data to inform these changes, relatively few large-scale general morbidity surveys have been published. The aim was to examine the most common reasons why people attend primary care, specifically 'reasons for encounter' (RFEs), among the general practice population and among specific demographic groups (i.e., young children and older adults). We retrospectively examined clinical encounters (which had a diagnostic code) over a 4-year period. Descriptive analyses were conducted on anonymised data. 70,489 RFEs were recorded (mean 13.53 recorded RFEs per person per annum), and consultations involving multiple RFEs were common. The RFE categories for which codes were most commonly recorded were 'general/unspecified' (31.6%), 'respiratory' (15.4%) and 'musculoskeletal' (12.6%). The most commonly recorded codes were 'medication renewal' (6.8%), 'cough' (6.6%) and 'health maintenance/prevention' (5.8%). There was considerable variation in the number of RFEs recorded per age group: 6,239 RFEs (8.9%) were recorded for children under 6 years and 15,295 RFEs (21.7%) for adults aged over 70. RFEs recorded per calendar month increased consistently through the study period, and there was marked seasonal and temporal variation in the number of RFEs recorded. Practice databases can generate intelligence on morbidity and health service utilisation in the community. Future research to optimise diagnostic coding at practice level and to promote this activity in a more representative sample of practices is a priority.
Optimal design and operation of a photovoltaic-electrolyser system using particle swarm optimisation
NASA Astrophysics Data System (ADS)
Sayedin, Farid; Maroufmashat, Azadeh; Roshandel, Ramin; Khavas, Sourena Sattari
2016-07-01
In this study, hydrogen generation is maximised by optimising the size and the operating conditions of an electrolyser (EL) directly connected to a photovoltaic (PV) module at different irradiance levels. Due to the variations of the maximum power points of the PV module during a year and the complexity of the system, a nonlinear approach is considered. A mathematical model has been developed to determine the performance of the PV/EL system. The optimisation methodology presented here is based on the particle swarm optimisation algorithm. By this method, for a given number of PV modules, the optimal size and operating conditions of a PV/EL system are achieved. The approach can be applied to different sizes of PV systems, various ambient temperatures and different locations with various climatic conditions. The results show that, for the given location and PV system, the energy transfer efficiency of the PV/EL system can reach up to 97.83%.
Improving Vector Evaluated Particle Swarm Optimisation Using Multiple Nondominated Leaders
Lim, Kian Sheng; Buyamin, Salinda; Ahmad, Anita; Shapiai, Mohd Ibrahim; Naim, Faradila; Mubin, Marizan; Kim, Dong Hwa
2014-01-01
The vector evaluated particle swarm optimisation (VEPSO) algorithm was previously improved by incorporating nondominated solutions for solving multiobjective optimisation problems. However, the obtained solutions neither converged close to the Pareto front nor distributed evenly over it. Therefore, in this study, the concept of multiple nondominated leaders is incorporated to further improve the VEPSO algorithm. Multiple nondominated solutions that are best at their respective objective functions are used to guide particles in finding optimal solutions. The improved VEPSO is evaluated by the number of nondominated solutions found, generational distance, spread, and hypervolume. The results from the conducted experiments show that the proposed VEPSO significantly improves on the existing VEPSO algorithms. PMID:24883386
NASA Astrophysics Data System (ADS)
Wang, Hui; Chen, Huansheng; Wu, Qizhong; Lin, Junmin; Chen, Xueshun; Xie, Xinwei; Wang, Rongrong; Tang, Xiao; Wang, Zifa
2017-08-01
The Global Nested Air Quality Prediction Modeling System (GNAQPMS) is the global version of the Nested Air Quality Prediction Modeling System (NAQPMS), a multi-scale chemical transport model used for air quality forecasting and atmospheric environmental research. In this study, we present the porting and optimisation of GNAQPMS on a second-generation Intel Xeon Phi processor, codenamed Knights Landing (KNL).
Compared with the first-generation Xeon Phi coprocessor (codenamed Knights Corner, KNC), KNL has many new hardware features such as a bootable processor, high-performance in-package memory and ISA compatibility with Intel Xeon processors. In particular, we describe the five optimisations we applied to the key modules of GNAQPMS, including the CBM-Z gas-phase chemistry, advection, convection and wet deposition modules. These optimisations work well on both the KNL 7250 processor and the Intel Xeon E5-2697 v4 processor. They include (1) updating the pure Message Passing Interface (MPI) parallel mode to a hybrid MPI/OpenMP parallel mode in the emission, advection, convection and gas-phase chemistry modules; (2) fully employing the 512-bit-wide vector processing units (VPUs) on the KNL platform; (3) reducing unnecessary memory access to improve cache efficiency; (4) reducing the thread local storage (TLS) in the CBM-Z gas-phase chemistry module to improve its OpenMP performance; and (5) changing the global communication from writing/reading interface files to MPI functions to improve performance and parallel scalability. These optimisations greatly improved GNAQPMS performance, and they also work well on the Intel Xeon Broadwell processor, specifically the E5-2697 v4. Compared with the baseline version of GNAQPMS, the optimised version was 3.51× faster on KNL and 2.77× faster on the CPU. Moreover, the optimised version ran at 26% lower average power on KNL than on the CPU. With the combined performance and energy improvement, the KNL platform was 37.5% more efficient in power consumption than the CPU platform. The optimisations also enabled much further parallel scalability on both the CPU cluster and the KNL cluster, scaling to 40 CPU nodes and 30 KNL nodes with parallel efficiencies of 70.4% and 42.2%, respectively.
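Optimisation (5), replacing file-based interface exchange with direct MPI messages, can be sketched in miniature as follows. GNAQPMS itself is not written in Python, so this mpi4py example is only an analogy for the pattern.

```python
# Run with: mpirun -n 2 python exchange_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Boundary data that a nested model might previously have exchanged through
# interface files on disk; here a direct in-memory MPI exchange instead.
field = np.full(1_000_000, float(rank))

if rank == 0:
    comm.Send(field, dest=1, tag=11)       # replaces writing the interface file
elif rank == 1:
    halo = np.empty_like(field)
    comm.Recv(halo, source=0, tag=11)      # replaces re-reading it from disk
```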
Reservoir optimisation using El Niño information. Case study of Daule Peripa (Ecuador)
NASA Astrophysics Data System (ADS)
Gelati, Emiliano; Madsen, Henrik; Rosbjerg, Dan
2010-05-01
The optimisation of water resources systems requires the ability to produce runoff scenarios that are consistent with available climatic information. We approach stochastic runoff modelling with a Markov-modulated autoregressive model with exogenous input, which belongs to the class of Markov-switching models. The model assumes runoff parameterisation to be conditioned on a hidden climatic state following a Markov chain, whose state transition probabilities depend on climatic information. This approach allows stochastic modeling of non-stationary runoff, as runoff anomalies are described by a mixture of autoregressive models with exogenous input, each one corresponding to a climate state. We calibrate the model on the inflows of the Daule Peripa reservoir located in western Ecuador, where the occurrence of El Niño leads to anomalously heavy rainfall caused by positive sea surface temperature anomalies along the coast. El Niño - Southern Oscillation (ENSO) information is used to condition the runoff parameterisation. Inflow predictions are realistic, especially at the occurrence of El Niño events. The Daule Peripa reservoir serves a hydropower plant and a downstream water supply facility. Using historical ENSO records, synthetic monthly inflow scenarios are generated for the period 1950-2007. These scenarios are used as input to perform stochastic optimisation of the reservoir rule curves with a multi-objective Genetic Algorithm (MOGA). The optimised rule curves are assumed to be the reservoir base policy. ENSO standard indices are currently forecasted at monthly time scale with nine-month lead time. These forecasts are used to perform stochastic optimisation of reservoir releases at each monthly time step according to the following procedure: (i) nine-month inflow forecast scenarios are generated using ENSO forecasts; (ii) a MOGA is set up to optimise the upcoming nine monthly releases; (iii) the optimisation is carried out by simulating the releases on the inflow forecasts, and by applying the base policy on a subsequent synthetic inflow scenario in order to account for long-term costs; (iv) the optimised release for the first month is implemented; (v) the state of the system is updated and (i), (ii), (iii), and (iv) are iterated for the following time step. The results highlight the advantages of using a climate-driven stochastic model to produce inflow scenarios and forecasts for reservoir optimisation, showing potential improvements with respect to the current management. Dynamic programming was used to find the best possible release time series given the inflow observations, in order to benchmark any possible operational improvement.
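The receding-horizon procedure (i)-(v) can be summarised in a short sketch; every function below is a deliberately naive placeholder for the paper's components (the ENSO forecasts, the Markov-switching inflow model, and the MOGA over the nine upcoming releases), with invented numbers.

```python
import random

def forecast_enso(t):
    """Placeholder: nine-month ENSO index forecast."""
    return [random.gauss(0, 1) for _ in range(9)]

def generate_inflows(enso, n=50):
    """Placeholder: ENSO-conditioned monthly inflow scenarios."""
    return [[max(0.0, 100 + 40 * e + random.gauss(0, 20)) for e in enso]
            for _ in range(n)]

def optimise_releases(storage, scenarios):
    """Placeholder for the MOGA: a naive plan over the nine-month horizon."""
    mean_inflow = [sum(s[m] for s in scenarios) / len(scenarios) for m in range(9)]
    return [min(storage + q, 120.0) for q in mean_inflow]

storage, released = 500.0, []
for t in range(12):                        # one simulated year of operation
    plan = optimise_releases(storage, generate_inflows(forecast_enso(t)))
    storage += 100.0 - plan[0]             # (iv) implement the first month only
    released.append(plan[0])               # (v) then roll the horizon forward
```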
Performance benchmark of LHCb code on state-of-the-art x86 architectures
NASA Astrophysics Data System (ADS)
Campora Perez, D. H.; Neufeld, N.; Schwemmer, R.
2015-12-01
For Run 2 of the LHC, LHCb is replacing a significant part of its event filter farm with new compute nodes. For the evaluation of the best performing solution, we have developed a method to convert our high level trigger application into a stand-alone, bootable benchmark image. With additional instrumentation we turned it into a self-optimising benchmark which explores techniques such as late forking, NUMA balancing and optimal number of threads, i.e. it automatically optimises box-level performance. We have run this procedure on a wide range of Haswell-E CPUs and numerous other architectures from both Intel and AMD, including also the latest Intel micro-blade servers. We present results in terms of performance, power consumption, overheads and relative cost.
Automated model optimisation using the Cylc workflow engine (Cyclops v1.0)
NASA Astrophysics Data System (ADS)
Gorman, Richard M.; Oliver, Hilary J.
2018-06-01
Most geophysical models include many parameters that are not fully determined by theory, and can be tuned to improve the model's agreement with available data. We might attempt to automate this tuning process in an objective way by employing an optimisation algorithm to find the set of parameters that minimises a cost function derived from comparing model outputs with measurements. A number of algorithms are available for solving optimisation problems, in various programming languages, but interfacing such software to a complex geophysical model simulation presents certain challenges. To tackle this problem, we have developed an optimisation suite (Cyclops) based on the Cylc workflow engine that implements a wide selection of optimisation algorithms from the NLopt Python toolbox (Johnson, 2014). The Cyclops optimisation suite can be used to calibrate any modelling system that has itself been implemented as a (separate) Cylc model suite, provided it includes computation and output of the desired scalar cost function. A growing number of institutions are using Cylc to orchestrate complex distributed suites of interdependent cycling tasks within their operational forecast systems, and in such cases application of the optimisation suite is particularly straightforward. As a test case, we applied Cyclops to calibrate a global implementation of the WAVEWATCH III (v4.18) third-generation spectral wave model, forced by ERA-Interim input fields. This was calibrated over a 1-year period (1997), before applying the calibrated model to a full (1979-2016) wave hindcast. The chosen error metric was the spatial average of the root mean square error of hindcast significant wave height compared with collocated altimeter records. We describe the results of a calibration in which up to 19 parameters were optimised.
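Cyclops drives the model and cost-function computation through Cylc tasks; as a stand-alone illustration of the NLopt Python toolbox it wraps, the following minimal example minimises a toy cost with a derivative-free algorithm. The objective here merely stands in for a full model run returning, say, a wave-height RMSE.

```python
import nlopt

def cost(params, grad):
    """Stand-in for the suite's cost task: in Cyclops this would run the
    wave-model suite and return the scalar error metric."""
    a, b = params
    return (a - 1.2) ** 2 + (b + 0.4) ** 2

opt = nlopt.opt(nlopt.LN_SBPLX, 2)      # derivative-free local algorithm
opt.set_lower_bounds([-5.0, -5.0])
opt.set_upper_bounds([5.0, 5.0])
opt.set_min_objective(cost)
opt.set_maxeval(100)                    # each evaluation = one model run
best = opt.optimize([0.0, 0.0])         # returns the optimised parameter pair
```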
NASA Astrophysics Data System (ADS)
Fourtakas, G.; Rogers, B. D.
2016-06-01
A two-phase numerical model using Smoothed Particle Hydrodynamics (SPH) is applied to two-phase liquid-sediment flows. The absence of a mesh in SPH is ideal for interfacial and highly non-linear flows with changing fragmentation of the interface, mixing and resuspension. The rheology of sediment under rapid flows passes through several states that are only partially described by previous SPH research. This paper attempts to bridge the gap between geotechnics, non-Newtonian and Newtonian flows by proposing a model that combines the yielding, shear and suspension layers, which are needed to predict accurately the global erosion phenomena from a hydrodynamics perspective. The numerical SPH scheme is based on the explicit treatment of both phases using Newtonian and non-Newtonian Bingham-type Herschel-Bulkley-Papanastasiou constitutive models. This is supplemented by the Drucker-Prager yield criterion to predict the onset of yielding of the sediment surface and a concentration suspension model. The multi-phase model has been compared with experimental and 2-D reference numerical models for scour following a dry-bed dam break, yielding satisfactory results and improvements over well-known SPH multi-phase models. With 3-D simulations requiring a large number of particles, the code is accelerated with a graphics processing unit (GPU) in the open-source DualSPHysics code. The implementation and optimisation of the code achieved a speed-up of 58× over an optimised single-thread serial code. A 3-D dam break over a non-cohesive erodible bed simulation with over 4 million particles yields close agreement with experimental scour and water surface profiles.
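For reference, one common form of the Herschel-Bulkley-Papanastasiou effective viscosity (a power-law shear response plus a regularised yield stress) can be coded as below; the rheological parameter values are illustrative, not those used in the paper.

```python
import numpy as np

def hbp_effective_viscosity(gamma_dot, k=1.0, n=0.6, tau_y=50.0, m=100.0):
    """Effective viscosity of a Herschel-Bulkley-Papanastasiou fluid:
    eta = k * gamma^(n-1) + tau_y * (1 - exp(-m * gamma)) / gamma,
    where the Papanastasiou exponential regularises the yield stress
    so the model stays well defined at vanishing shear rates."""
    g = np.maximum(gamma_dot, 1e-9)     # guard against division by zero
    return k * g ** (n - 1.0) + tau_y * (1.0 - np.exp(-m * g)) / g

shear_rates = np.logspace(-3, 2, 6)     # s^-1, spanning creeping to rapid flow
print(hbp_effective_viscosity(shear_rates))
```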
NASA Astrophysics Data System (ADS)
Hauth, T.; Innocente, V.; Piparo, D.
2012-12-01
The processing of data acquired by the CMS detector at LHC is carried out with an object-oriented C++ software framework: CMSSW. With the increasing luminosity delivered by the LHC, the treatment of recorded data requires extraordinarily large computing resources, also in terms of CPU usage. A possible solution to cope with this task is the exploitation of the features offered by the latest microprocessor architectures. Modern CPUs present several vector units, the capacity of which is growing steadily with the introduction of new processor generations. Moreover, an increasing number of cores per die is offered by the main vendors, even on consumer hardware. Most recent C++ compilers provide facilities to take advantage of such innovations, either by explicit statements in the program sources or by automatically adapting the generated machine instructions to the available hardware, without the need of modifying the existing code base. Programming techniques to implement reconstruction algorithms and optimised data structures are presented that aim at scalable vectorization and parallelization of the calculations. One of their features is the usage of new language features of the C++11 standard. Portions of the CMSSW framework are illustrated which have been found to be especially profitable for the application of vectorization and multi-threading techniques. Specific utility components have been developed to help vectorization and parallelization. They can easily become part of a larger common library. To conclude, careful measurements are described, which show the execution speedups achieved via vectorised and multi-threaded code in the context of CMSSW.
Comparison of H.265/HEVC encoders
NASA Astrophysics Data System (ADS)
Trochimiuk, Maciej
2016-09-01
H.265/HEVC is the state-of-the-art video compression standard, which allows bitrate reductions of up to 50% compared with its predecessor, H.264/AVC, while maintaining equal perceptual video quality. The growth in coding efficiency was achieved by increasing the number of available intra- and inter-frame prediction features and by improving existing ones, such as entropy encoding and filtering. Nevertheless, to achieve real-time performance of the encoder, simplifications of the algorithm are inevitable. Some features and coding modes must be skipped to reduce the time needed to evaluate the modes forwarded to rate-distortion optimisation. Thus, the potential acceleration of the encoding process comes at the expense of coding efficiency. In this paper, the trade-off between video quality and encoding speed of various H.265/HEVC encoders is discussed.
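The rate-distortion optimisation these simplifications target can be reduced to a one-line decision rule, J = D + λR; the sketch below shows mode selection on invented candidate numbers. Fast encoders accelerate precisely by pruning this candidate list before the cost comparison.

```python
def choose_mode(candidates, lam):
    """Pick the coding mode minimising the Lagrangian cost J = D + lambda * R,
    where D is the distortion (e.g. SSE) and R the bitrate of the mode."""
    return min(candidates, key=lambda c: c["distortion"] + lam * c["rate"])

# Illustrative candidate modes for one coding unit (numbers are made up).
modes = [
    {"name": "intra_planar", "distortion": 910.0,  "rate": 36.0},
    {"name": "intra_dc",     "distortion": 980.0,  "rate": 30.0},
    {"name": "merge_skip",   "distortion": 1250.0, "rate": 4.0},
]
print(choose_mode(modes, lam=16.0)["name"])   # -> "merge_skip" at this lambda
```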
NASA Astrophysics Data System (ADS)
Eriksen, Janus J.
2017-09-01
It is demonstrated how the non-proprietary OpenACC standard of compiler directives may be used to compactly and efficiently accelerate the rate-determining steps of two of the most routinely applied many-body methods of electronic structure theory, namely the second-order Møller-Plesset (MP2) model in its resolution-of-the-identity approximated form and the (T) triples correction to the coupled cluster singles and doubles model (CCSD(T)). By means of compute directives as well as the use of optimised device math libraries, the operations involved in the energy kernels have been ported to graphics processing unit (GPU) accelerators, and the associated data transfers correspondingly optimised to such a degree that the final implementations (using double- and/or single-precision arithmetic) are capable of scaling to systems as large as allowed for by the capacity of the host central processing unit (CPU) main memory. The performance of the hybrid CPU/GPU implementations is assessed through calculations on test systems of alanine amino acid chains using one-electron basis sets of increasing size (ranging from double- to pentuple-ζ quality). For all but the smallest problem sizes of the present study, the optimised accelerated codes (using a single multi-core CPU host node in conjunction with six GPUs) are found to be capable of reducing the total time-to-solution by at least an order of magnitude over optimised, OpenMP-threaded CPU-only reference implementations.
O'Hagan, Steve; Knowles, Joshua; Kell, Douglas B.
2012-01-01
Comparatively few studies have addressed directly the question of quantifying the benefits to be had from using molecular genetic markers in experimental breeding programmes (e.g. for improved crops and livestock), nor the question of which organisms should be mated with each other to best effect. We argue that this requires in silico modelling, an approach for which there is a large literature in the field of evolutionary computation (EC), but which has not really been applied in this way to experimental breeding programmes. EC seeks to optimise measurable outcomes (phenotypic fitnesses) by optimising in silico the mutation, recombination and selection regimes that are used. We review some of the approaches from EC, and compare experimentally, using a biologically relevant in silico landscape, some algorithms that have knowledge of where they are in the (genotypic) search space (G-algorithms) with some (albeit well-tuned ones) that do not (F-algorithms). For the present kinds of landscapes, F- and G-algorithms were broadly comparable in quality and effectiveness, although we recognise that the G-algorithms were not equipped with any ‘prior knowledge’ of epistatic pathway interactions. This use of algorithms based on machine learning has important implications for the optimisation of experimental breeding programmes in the post-genomic era when we shall potentially have access to the full genome sequence of every organism in a breeding population. The non-proprietary code that we have used is made freely available (via Supplementary information). PMID:23185279
The use of surrogates for an optimal management of coupled groundwater-agriculture hydrosystems
NASA Astrophysics Data System (ADS)
Grundmann, J.; Schütze, N.; Brettschneider, M.; Schmitz, G. H.; Lennartz, F.
2012-04-01
To ensure optimal sustainable water resources management in arid coastal environments, we develop a new simulation-based integrated water management system. It aims at achieving the best possible solutions for groundwater withdrawals for agricultural and municipal water use, including saline water management, together with a substantial increase in water use efficiency in irrigated agriculture. To achieve robust and fast operation of the management system with regard to water quality and water quantity, we develop appropriate surrogate models by combining physically based process modelling with methods of artificial intelligence. We use an artificial neural network to model the aquifer response, including the seawater interface, trained on a scenario database generated by a numerical density-dependent groundwater flow model. To simulate the behaviour of highly productive agricultural farms, crop water production functions are generated by means of soil-vegetation-atmosphere-transport (SVAT) models, adapted to the regional climate conditions, and a novel evolutionary optimisation algorithm for optimal irrigation scheduling and control. We apply both surrogates exemplarily within a simulation-based optimisation environment using the characteristics of the south Batinah region in the Sultanate of Oman, which is affected by saltwater intrusion into the coastal aquifer due to excessive groundwater withdrawal for irrigated agriculture. We demonstrate the effectiveness of our methodology for the evaluation and optimisation of different irrigation practices, cropping patterns and resulting abstraction scenarios. Due to contradicting objectives, such as profit-oriented agriculture vs. aquifer sustainability, a multi-criteria optimisation is performed.
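The aquifer-response surrogate can be illustrated with a small neural-network regression; the sketch below uses scikit-learn and synthetic training data in place of runs of the density-dependent groundwater model, so all variables and values are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Illustrative surrogate of an aquifer response: map abstraction rates to a
# proxy output (e.g. seawater-interface position). The training data here is
# synthetic, standing in for a database of groundwater-model scenario runs.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 10.0, (500, 3))                  # pumping rates per well
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 - X[:, 2] + rng.normal(0, 0.1, 500)

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                         random_state=0).fit(X, y)
print(surrogate.predict([[5.0, 2.0, 1.0]]))   # fast stand-in for the PDE model
```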
NASA Astrophysics Data System (ADS)
Biermann, D.; Gausemeier, J.; Heim, H.-P.; Hess, S.; Petersen, M.; Ries, A.; Wagner, T.
2014-05-01
In this contribution, a framework for the computer-aided planning and optimisation of functionally graded components is presented. The framework is divided into three modules: the "Component Description", the "Expert System" for the synthesis of several process chains, and the "Modelling and Process Chain Optimisation". The Component Description module enhances a standard computer-aided design (CAD) model with a voxel-based representation of the graded properties. The Expert System synthesises process steps stored in the knowledge base to generate several alternative process chains. Each process chain is capable of producing components according to the enhanced CAD model and usually consists of a sequence of heating, cooling and forming processes. The dependencies between the component and the applied manufacturing processes, as well as between the processes themselves, need to be considered. The Expert System utilises an ontology for that purpose. The ontology represents all dependencies in a structured way and connects the information of the knowledge base via relations. The third module performs the evaluation of the generated process chains. To accomplish this, the parameters of each process are optimised with respect to the component specification, whereby the result of the best parameterisation is used as the representative value. Finally, the process chain that is capable of manufacturing a functionally graded component optimally with regard to the property distributions of the component description is presented by means of a dedicated specification technique.
Fadyl, Joanna K; Channon, Alexis; Theadom, Alice; McPherson, Kathryn M
2017-04-01
Knowledge about aspects that influence recovery and adaptation in the postacute phase of disabling health events is key to understanding how best to provide appropriate rehabilitation and health services. Qualitative longitudinal research makes it possible to look for patterns, key time points and critical moments that could be vital for interventions and supports. However, strategies that support robust data management and analysis for longitudinal qualitative research in healthcare are not well documented in the literature. This article reviews three challenges encountered in a large longitudinal qualitative descriptive study about experiences of recovery and adaptation after traumatic brain injury in New Zealand, and the strategies and technologies used to address them. These were (i) tracking coding and analysis decisions during an extended analysis period; (ii) navigating interpretations over time and in response to new data; and (iii) exploiting data volume and complexity. Concept mapping during coding review, a considered combination of information technologies, the use of both cross-sectional and narrative analysis, and an expectation that subanalyses would be required for key topics all helped us manage the study in a way that facilitated useful and novel insights. These strategies could be applied in other qualitative longitudinal studies in healthcare inquiry to optimise data analysis and stimulate important insights. © 2016 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Li, Guiqiang; Zhao, Xudong; Jin, Yi; Chen, Xiao; Ji, Jie; Shittu, Samson
2018-06-01
Geometrical optimisation is a valuable way to improve the efficiency of a thermoelectric element (TE). In a hybrid photovoltaic-thermoelectric (PV-TE) system, the photovoltaic (PV) and thermoelectric components have a relatively complex relationship; their coupled effects mean that geometrical optimisation of the TE element alone may not be sufficient to optimise the entire PV-TE hybrid system. In this paper, we introduce a parametric optimisation of the geometry of the thermoelectric element footprint for a PV-TE system. A uni-couple TE model was built for the PV-TE using the finite element method and temperature-dependent thermoelectric material properties. Two types of PV cells were investigated in this paper and the performance of the PV-TE with different lengths of TE elements and different footprint areas was analysed. The outcome showed that, regardless of the TE element length and footprint area, the maximum power output occurs when A_n/A_p = 1. This finding is useful, as it provides a reference whenever PV-TE optimisation is investigated.
NASA Astrophysics Data System (ADS)
Kies, Alexander
2018-02-01
To meet European decarbonisation targets by 2050, the electrification of the transport sector is mandatory. Most electric vehicles rely on lithium-ion batteries, because they have a higher energy/power density and longer life span compared with other practical batteries such as zinc-carbon batteries. Electric vehicles can thus provide energy storage to support the system integration of generation from highly variable renewable sources, such as wind and photovoltaics (PV). However, charging and discharging cause batteries to degrade progressively, reducing their capacity. In this study, we investigate the impact of the joint optimisation of arbitrage revenue and battery degradation of electric vehicle batteries in a simplified setting, where historical prices allow for market participation by battery electric vehicle owners. It is shown that the joint optimisation of both leads to greater gains than the sum of the two optimisation strategies applied separately, and that including battery degradation in the model avoids states of charge close to the maximum at times. It can be concluded that degradation is an important aspect to consider in power system models that incorporate any kind of lithium-ion battery storage.
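The joint objective lends itself to a small linear programme. The following is a minimal sketch, not the paper's model: a battery trades against a toy price series while a linearised per-kWh throughput cost stands in for degradation; capacity, efficiency, price data and the degradation cost are all illustrative assumptions.

```python
# Minimal sketch: joint arbitrage + linearised degradation as an LP.
# All parameters below are illustrative assumptions, not the paper's data.
import numpy as np
from scipy.optimize import linprog

T = 24                                                   # hourly steps
price = 30 + 20 * np.sin(np.arange(T) * 2 * np.pi / T)  # toy price series
cap, p_max, soc0 = 40.0, 10.0, 20.0   # capacity (kWh), power limit (kW), initial SOC
eta = 0.95                            # one-way charging efficiency
deg = 5.0                             # degradation cost per kWh of throughput

# Decision vector x = [charge_0..charge_{T-1}, discharge_0..discharge_{T-1}].
# Minimise purchase cost - sales revenue + degradation cost on throughput.
c = np.concatenate([price + deg, -price + deg])

# SOC_t = soc0 + eta*cumsum(charge) - cumsum(discharge)/eta stays in [0, cap].
L = np.tril(np.ones((T, T)))          # cumulative-sum operator
A_ub = np.block([[ eta * L, -L / eta],   # SOC_t <= cap
                 [-eta * L,  L / eta]])  # SOC_t >= 0
b_ub = np.concatenate([np.full(T, cap - soc0), np.full(T, soc0)])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, p_max)] * (2 * T),
              method="highs")
charge, discharge = res.x[:T], res.x[T:]
print("profit net of degradation:", -res.fun)
```

Dropping the `deg` term reproduces a pure arbitrage strategy, which is one way to see the gap the study quantifies.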
NASA Astrophysics Data System (ADS)
Dittmar, N.; Haberstroh, Ch.; Hesse, U.; Krzyzowski, M.
2016-04-01
The transfer of liquid helium (LHe) into mobile dewars or transport vessels is a common and unavoidable process at LHe decant stations. During this transfer appreciable amounts of LHe evaporate due to heat leak and pressure drop. The helium gas generated in this way needs to be collected and reliquefied, which requires a large amount of electrical energy. Therefore, the design of transfer lines used at LHe decant stations has been optimised to establish a LHe transfer with minimal evaporation losses, which increases the overall efficiency and capacity of LHe decant stations. This paper presents the experimental results achieved during the thermohydraulic optimisation of a flexible LHe transfer line. An extensive measurement campaign with a set of dedicated transfer lines equipped with pressure and temperature sensors yielded unique experimental data on this specific transfer process. The experimental results cover the heat leak, the pressure drop, the transfer rate, the outlet quality, and the cool-down and warm-up behaviour of the examined transfer lines. Based on the obtained results, the design of the considered flexible transfer line has been optimised, featuring reduced heat leak and pressure drop.
Evaluation of the efficiency and fault density of software generated by code generators
NASA Technical Reports Server (NTRS)
Schreur, Barbara
1993-01-01
Flight computers and flight software are used for GN&C (guidance, navigation, and control), engine controllers, and avionics during missions. The software development requires the generation of a considerable amount of code. The engineers who generate the code make mistakes, and the generation of a large body of code with high reliability requires considerable time. Computer-aided software engineering (CASE) tools are available which generate code automatically from inputs supplied through graphical interfaces. These tools are referred to as code generators. In theory, code generators could write highly reliable code quickly and inexpensively. The various code generators offer different levels of reliability checking. Some check only the finished product, while others allow checking of individual modules and combined sets of modules as well. Considering NASA's requirement for reliability, such claims must be verified before automatically generated code can replace manually generated code. Furthermore, automatically generated code is reputed to be as efficient as the best manually generated code when executed. In-house verification of these claims is warranted.
Koo, B K; O'Connell, P E
2006-04-01
The site-specific land use optimisation methodology, suggested by the authors in the first part of this two-part paper, has been applied to the River Kennet catchment at Marlborough, Wiltshire, UK, as a case study. The Marlborough catchment (143 km²) is an agriculture-dominated rural area over a deep chalk aquifer that is vulnerable to nitrate pollution from agricultural diffuse sources. For evaluation purposes, the catchment was discretised into a network of 1 km × 1 km grid cells. For each of the arable-land grid cells, seven land use alternatives (four arable-land alternatives and three grassland alternatives) were evaluated for their environmental and economic potential. For environmental evaluation, nitrate leaching rates of land use alternatives were estimated using SHETRAN simulations and groundwater pollution potential was evaluated using the DRASTIC index. For economic evaluation, economic gross margins were estimated using a simple agronomic model based on nitrogen response functions and agricultural land classification grades. To assess whether the site-specific optimisation is efficient at the catchment scale, land use optimisation was carried out for four optimisation schemes (i.e. using four sets of criterion weights). Consequently, four land use scenarios were generated, and the site-specifically optimised land use scenario was evaluated as the best compromise solution between long-term nitrate pollution and agronomy at the catchment scale.
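The per-cell selection step can be pictured as a weighted-sum score over criteria. The sketch below is only an illustration of that generic step: random normalised scores stand in for the SHETRAN, DRASTIC and agronomic evaluations, and the four weight sets stand in for the four optimisation schemes.

```python
# Minimal sketch of site-specific land use selection by weighted criteria.
# Scores are random placeholders for the paper's model-based evaluations.
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_alts, n_criteria = 143, 7, 3     # grid cells, alternatives, criteria
scores = rng.random((n_cells, n_alts, n_criteria))  # normalised, higher = better

def optimise_land_use(scores, weights):
    """Pick, for each cell, the alternative with the best weighted-sum score."""
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    weighted = scores @ weights             # shape (n_cells, n_alts)
    return weighted.argmax(axis=1)          # best alternative per cell

# Four optimisation schemes = four sets of criterion weights
schemes = [(1, 1, 1), (3, 1, 1), (1, 3, 1), (1, 1, 3)]
for w in schemes:
    scenario = optimise_land_use(scores, w)
    print(w, np.bincount(scenario, minlength=n_alts))   # land-use mix
```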
Photonic simulation of entanglement growth and engineering after a spin chain quench.
Pitsios, Ioannis; Banchi, Leonardo; Rab, Adil S; Bentivegna, Marco; Caprara, Debora; Crespi, Andrea; Spagnolo, Nicolò; Bose, Sougato; Mataloni, Paolo; Osellame, Roberto; Sciarrino, Fabio
2017-11-17
The time evolution of quantum many-body systems is one of the most important processes for benchmarking quantum simulators. The most striking feature of such dynamics is the growth of quantum entanglement to an amount proportional to the system size (volume law) even when interactions are local. This phenomenon has great ramifications for fundamental physics, while its optimisation clearly has an impact on technology (e.g., for on-chip quantum networking). Here we use an integrated photonic chip with a circuit-based approach to simulate the dynamics of a spin chain and maximise the entanglement generation. The resulting entanglement is certified by constructing a second chip, which measures the entanglement between multiple distant pairs of simulated spins, as well as the block entanglement entropy. This is the first photonic simulation and optimisation of the extensive growth of entanglement in a spin chain, and it opens up the use of photonic circuits for optimising quantum devices.
Use of a genetic algorithm to improve the rail profile on Stockholm underground
NASA Astrophysics Data System (ADS)
Persson, Ingemar; Nilsson, Rickard; Bik, Ulf; Lundgren, Magnus; Iwnicki, Simon
2010-12-01
In this paper, a genetic algorithm optimisation method has been used to develop an improved rail profile for the Stockholm underground. An inverted penalty index based on a number of key performance parameters was generated as a fitness function, and vehicle dynamics simulations were carried out with the multibody simulation package Gensys. Profiles produced by the genetic algorithm were assessed with this fitness function and selected for breeding using the roulette wheel method. The method has been applied to the rail profile on the Stockholm underground, where problems with rolling contact fatigue on wheels and rails are currently managed by grinding. From a starting point of the original BV50 and the UIC60 rail profiles, an optimised rail profile with some shoulder relief has been produced. The optimised profile is similar to measured rail profiles on the Stockholm underground network and, although initial grinding is required, maintenance of the profile will probably not require further grinding.
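Roulette-wheel (fitness-proportionate) selection is a standard genetic algorithm operator. The sketch below is a generic implementation; the toy fitness merely stands in for the paper's inverted penalty index computed from Gensys simulations.

```python
# Generic roulette-wheel selection; fitness values here are placeholders.
import random

def roulette_wheel_select(population, fitnesses):
    """Select one individual with probability proportional to its fitness."""
    pick = random.uniform(0.0, sum(fitnesses))
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit
        if running >= pick:
            return individual
    return population[-1]   # guard against floating-point round-off

# Toy usage: each "profile" is a list of control-point offsets
population = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(20)]
fitnesses = [1.0 / (1.0 + sum(abs(g) for g in p)) for p in population]
parent_a = roulette_wheel_select(population, fitnesses)
parent_b = roulette_wheel_select(population, fitnesses)
```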
OpenFOAM: Open source CFD in research and industry
NASA Astrophysics Data System (ADS)
Jasak, Hrvoje
2009-12-01
The current focus of development in industrial Computational Fluid Dynamics (CFD) is the integration of CFD into computer-aided product development, geometrical optimisation, robust design and similar tasks. On the other hand, CFD research aims to extend the boundaries of practical engineering use into "non-traditional" areas. The requirements of computational flexibility and code integration are contradictory: a change of coding paradigm, with object orientation, library components and equation mimicking, is proposed as a way forward. This paper describes OpenFOAM, a C++ object-oriented library for Computational Continuum Mechanics (CCM) developed by the author. Efficient and flexible implementation of complex physical models is achieved by mimicking the form of partial differential equations in software, with code functionality provided in library form. The open source deployment and development model allows the user to achieve the desired versatility in physical modelling without sacrificing complex geometry support and execution efficiency.
Jeffries, Mark; Phipps, Denham; Howard, Rachel L; Avery, Anthony; Rodgers, Sarah; Ashcroft, Darren
2017-05-10
Using strong structuration theory, we aimed to understand the adoption and implementation of an electronic clinical audit and feedback tool to support medicine optimisation for patients in primary care. This is a qualitative study informed by strong structuration theory. The analysis was thematic, using a template approach. An a priori set of thematic codes, based on strong structuration theory, was developed from the literature and applied to the transcripts. The coding template was then modified through successive readings of the data. The study was set in a clinical commissioning group in the south of England. Four focus groups and five semi-structured interviews were conducted with 18 participants purposively sampled from a range of stakeholder groups (general practitioners, pharmacists, patients and commissioners). Using the system could lead to improved medication safety, but use was determined by broad institutional contexts; by the perceptions, dispositions and skills of users; and by the structures embedded within the technology. These included perceptions of the system as new and requiring technical competence and skill; the adoption of the system for information gathering; and interactions and relationships that involved individual, shared or collective use. The dynamics between these external, internal and technological structures affected the adoption and implementation of the system. Successful implementation of information technology interventions for medicine optimisation will depend on a combination of the infrastructure within primary care, social structures embedded in the technology and the conventions, norms and dispositions of those utilising it. Future interventions, using electronic audit and feedback tools to improve medication safety, should consider the complexity of the social and organisational contexts and how internal and external structures can affect the use of the technology in order to support effective implementation. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
NASA Astrophysics Data System (ADS)
du Feu, R. J.; Funke, S. W.; Kramer, S. C.; Hill, J.; Piggott, M. D.
2016-12-01
The installation of tidal turbines into the ocean will inevitably affect the environment around them. However, due to the relative infancy of this sector, the extent and severity of such effects are unknown. The layout of an array of turbines is an important factor in determining not only the array's final yield but also how it will influence regional hydrodynamics. This in turn could affect, for example, sediment transportation or habitat suitability. The two potentially competing objectives of extracting energy from the tidal current, and of limiting any environmental impact consequent to influencing that current, are investigated here. This relationship is posed as a multi-objective optimisation problem. OpenTidalFarm, an array layout optimisation tool, and MaxEnt, habitat suitability modelling software, are used to evaluate scenarios off the coast of the UK. MaxEnt estimates the likelihood of finding a species in a given location based upon environmental input data and presence data for the species. Environmental features which are known to impact habitat, specifically those affected by the presence of an array, such as bed shear stress, are chosen as inputs. MaxEnt then uses a maximum-entropy modelling approach to estimate the population distribution across the modelled area. OpenTidalFarm is used to maximise the power generated by an array, or multiple arrays, by adjusting the position and number of turbines within them. It uses a 2D shallow water model with turbine arrays represented as adjustable friction fields. It also has the capability to optimise user-created functionals that can be expressed mathematically. This work uses two functionals: the power extracted by the array, and the suitability of habitat as predicted by MaxEnt. A gradient-based local optimisation is used to adjust the array layout at each iteration. This work presents arrays that are optimised for both yield and the viability of habitat for chosen species. In each scenario studied, a range of array formations is found expressing varying preferences for either functional. Further analyses then allow for the identification of trade-offs between the two key societal objectives of energy production and conservation. This in turn produces information valuable to stakeholders and policymakers when making decisions on array design.
Breuer, Christian; Lucas, Martin; Schütze, Frank-Walter; Claus, Peter
2007-01-01
A multi-criteria optimisation procedure based on genetic algorithms is carried out in search of advanced heterogeneous catalysts for total oxidation. Simple but flexible software routines have been created for application within a search space of more than 150,000 individuals. The general catalyst design includes mono-, bi- and trimetallic compositions assembled from 49 different metals and deposited on an Al2O3 support at up to nine loading levels. As an efficient tool for high-throughput screening, perfectly matched to the requirements of heterogeneous gas phase catalysis - especially for applications technically run in honeycomb structures - the multi-channel monolith reactor is used to evaluate catalyst performance. From a multi-component feed gas, the conversion rates of carbon monoxide (CO) and a model hydrocarbon (HC) are monitored in parallel. In combination with further restrictions on preparation and pre-treatment, a primary screening can be conducted that promises to provide results close to technically applied catalysts. The resulting performance of the optimisation process over the first catalyst generations is presented, along with the prospect of its auto-adaptation to specified optimisation goals.
Optimisation of SIW bandpass filter with wide and sharp stopband using space mapping
NASA Astrophysics Data System (ADS)
Xu, Juan; Bi, Jun Jian; Li, Zhao Long; Chen, Ru shan
2016-12-01
This work presents a substrate integrated waveguide (SIW) bandpass filter with a wide and sharp stopband, which differs from filters with a direct input/output coupling structure. Higher-order modes in the SIW cavities are used to generate finite transmission zeros for improved stopband performance. The design of SIW filters requires full wave electromagnetic simulation and extensive optimisation; if a full wave solver is used for the optimisation, the design process is very time consuming. The space mapping (SM) approach has been employed to alleviate this problem. The coarse model is optimised using an equivalent circuit representation of the structure for fast computations, while verification of the design is completed with an accurate fine-model full wave simulation. A fourth-order filter with a passband of 12.0-12.5 GHz was fabricated on a single layer Rogers RT/Duroid 5880 substrate. The return loss is better than 17.4 dB in the passband and the rejection is more than 40 dB in the stopband, which extends from 2 to 11 GHz and from 13.5 to 17.3 GHz, demonstrating wide stopband performance.
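The space-mapping loop itself is compact. Below is a minimal sketch of aggressive (input) space mapping under toy assumptions: two cheap one-dimensional functions stand in for the equivalent-circuit (coarse) and full-wave (fine) models, which are of course far richer in the actual filter design.

```python
# Minimal aggressive space-mapping loop with toy 1-D stand-in models.
from scipy.optimize import minimize_scalar

fine   = lambda x: 3.0 * x + 0.5   # "expensive" full-wave response (toy)
coarse = lambda z: 3.0 * z         # "cheap" equivalent-circuit response (toy)
target = 9.0                       # desired response level

# 1. Optimise the coarse model once; x_star is the reference design.
x_star = minimize_scalar(lambda z: (coarse(z) - target) ** 2).x

x = x_star
for it in range(20):
    # 2. Parameter extraction: coarse input reproducing the fine response.
    f_val = fine(x)
    p = minimize_scalar(lambda z: (coarse(z) - f_val) ** 2).x
    if abs(p - x_star) < 1e-6:     # fine design now matches the coarse optimum
        break
    # 3. First-order (aggressive) space-mapping update of the fine design.
    x += x_star - p

print(f"fine-model design after {it + 1} iterations: {x:.4f}")
```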
NASA Astrophysics Data System (ADS)
Ighravwe, D. E.; Oke, S. A.; Adebiyi, K. A.
2016-06-01
The growing interest in research on technicians' workloads is probably associated with the recent surge in competition, prompted by unprecedented technological developments that trigger changes in customer tastes and preferences for industrial goods. In a quest for business improvement, this intense worldwide competition has stimulated theories and practical frameworks that seek to optimise performance in workplaces. In line with this drive, the present paper proposes an optimisation model that considers technicians' reliability and complements the factory information obtained. The information used emerged from technicians' productivity and earned values within a multi-objective modelling approach. Since technicians are expected to carry out routine and stochastic maintenance work, we treat these workloads as constraints. The influence of training, fatigue and experiential knowledge of technicians on workload management was considered. These workloads were combined with maintenance policy in optimising reliability, productivity and earned values using the goal programming approach, as sketched below. Practical datasets were utilised in studying the applicability of the proposed model in practice. It was observed that our model was able to generate information that practising maintenance engineers can apply in making more informed decisions on technicians' management.
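As a minimal sketch of the goal programming idea (not the paper's formulation), the toy model below allocates hours to two maintenance activities and penalises under-achievement of three goals standing in for reliability, productivity and earned value; every coefficient, goal and weight is an illustrative assumption.

```python
# Toy weighted goal programme solved as an LP; all numbers are illustrative.
import numpy as np
from scipy.optimize import linprog

A = np.array([[0.02, 0.01],     # reliability gained per hour of each activity
              [1.50, 0.80],     # productivity units per hour
              [120., 90.0]])    # earned value per hour
goals = np.array([1.0, 60.0, 5000.0])
w = np.array([10.0, 1.0, 0.01])            # weights on under-achievement

# Decision vector: [x1, x2, d_minus(3), d_plus(3)];
# each goal row reads A @ x + d_minus - d_plus == goal.
A_eq = np.hstack([A, np.eye(3), -np.eye(3)])
c = np.concatenate([np.zeros(2), w, np.zeros(3)])  # penalise shortfalls only
bounds = [(0, 40), (0, 40)] + [(0, None)] * 6      # workload caps as constraints

res = linprog(c, A_eq=A_eq, b_eq=goals, bounds=bounds, method="highs")
hours, shortfall = res.x[:2], res.x[2:5]
print("hours per activity:", hours, "unmet goal amounts:", shortfall)
```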
McStas event logger: Definition and applications
NASA Astrophysics Data System (ADS)
Bergbäck Knudsen, Erik; Bryndt Klinkby, Esben; Kjær Willendrup, Peter
2014-02-01
Functionality is added to the McStas neutron ray-tracing code which allows individual neutron states before and after a scattering event to be temporarily stored and analysed. This logging mechanism has multiple uses, including studies of longitudinal intensity loss in neutron guides and guide coating design optimisation. Furthermore, the logging method enables the cold/thermal neutron induced gamma background along the guide to be calculated from the un-reflected neutrons, using a recently developed MCNPX-McStas interface.
Comparison of simulation and experimental results for a gas puff nozzle on Ambiorix
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnier, J-N.; Chevalier, J-M.; Dubroca, B.
One of the source terms of Z-pinch experiments is the gas puff density profile. In order to characterise the gas jet, an experiment based on interferometry has been performed. The first study was a point measurement (a section density profile), which led us to develop a global and instantaneous interferometry imaging method. In order to optimise the nozzle, we simulated the experiment with a flow calculation code (ARES). In this paper, the experimental results are compared with simulations. The different gas properties (He, Ne, Ar) and the flow duration led us to take care, on the one hand, of the gas viscosity, and on the other, of modifying the code for unsteady flow.
Design and analysis of magneto rheological fluid brake for an all terrain vehicle
NASA Astrophysics Data System (ADS)
George, Luckachan K.; Tamilarasan, N.; Thirumalini, S.
2018-02-01
This work presents an optimised design for a magnetorheological fluid brake for all-terrain vehicles. The actuator consists of a disk immersed in magnetorheological fluid surrounded by an electromagnet. The braking torque is controlled by varying the DC current applied to the electromagnet. In the presence of a magnetic field, the magnetorheological fluid particles align in chain-like structures, thus increasing the viscosity. The shear stress generated causes friction on the surfaces of the rotating disk. Electromagnetic analysis of the proposed system is carried out using the finite-element-based COMSOL Multiphysics software, with which the magnetic field generated is calculated. The geometry is optimised and the performance of the system is evaluated in terms of braking torque. The proposed design shows better braking torque performance than existing designs reported in the literature.
Generating Code Review Documentation for Auto-Generated Mission-Critical Software
NASA Technical Reports Server (NTRS)
Denney, Ewen; Fischer, Bernd
2009-01-01
Model-based design and automated code generation are increasingly used at NASA to produce actual flight code, particularly in the Guidance, Navigation, and Control domain. However, since code generators are typically not qualified, there is no guarantee that their output is correct, and consequently auto-generated code still needs to be fully tested and certified. We have thus developed AUTOCERT, a generator-independent plug-in that supports the certification of auto-generated code. AUTOCERT takes a set of mission safety requirements and formally verifies that the auto-generated code satisfies these requirements. It generates a natural language report that explains why and how the code complies with the specified requirements. The report is hyperlinked to both the program and the verification conditions, and thus provides a high-level structured argument containing tracing information for use in code reviews.
Certifying Auto-Generated Flight Code
NASA Technical Reports Server (NTRS)
Denney, Ewen
2008-01-01
Model-based design and automated code generation are being used increasingly at NASA. Many NASA projects now use MathWorks Simulink and Real-Time Workshop for at least some of their modeling and code development. However, there are substantial obstacles to more widespread adoption of code generators in safety-critical domains. Since code generators are typically not qualified, there is no guarantee that their output is correct, and consequently the generated code still needs to be fully tested and certified. Moreover, the regeneration of code can require complete recertification, which offsets many of the advantages of using a generator. Indeed, manual review of autocode can be more challenging than for hand-written code. Since the direct V&V of code generators is too laborious and complicated due to their complex (and often proprietary) nature, we have developed a generator plug-in to support the certification of the auto-generated code. Specifically, the AutoCert tool supports certification by formally verifying that the generated code is free of different safety violations, by constructing an independently verifiable certificate, and by explaining its analysis in a textual form suitable for code reviews. The generated documentation also contains substantial tracing information, allowing users to trace between model, code, documentation, and V&V artifacts. This enables missions to obtain assurance about the safety and reliability of the code without excessive manual V&V effort and, as a consequence, eases the acceptance of code generators in safety-critical contexts. The generation of explicit certificates and textual reports is particularly well-suited to supporting independent V&V. The primary contribution of this approach is the combination of human-friendly documentation with formal analysis. The key technical idea is to exploit the idiomatic nature of auto-generated code in order to automatically infer logical annotations. The annotation inference algorithm itself is generic, and parametrized with respect to a library of coding patterns that depend on the safety policies and the code generator. The patterns characterize the notions of definitions and uses that are specific to the given safety property. For example, for initialization safety, definitions correspond to variable initializations while uses are statements which read a variable, whereas for array bounds safety, definitions are the array declarations, while uses are statements which access an array variable. The inferred annotations are thus highly dependent on the actual program and the properties being proven. The annotations, themselves, need not be trusted, but are crucial to obtain the automatic formal verification of the safety properties without requiring access to the internals of the code generator. The approach has been applied to both in-house and commercial code generators, but is independent of the particular generator used. It is currently being adapted to flight code generated using MathWorks Real-Time Workshop, an automatic code generator that translates from Simulink/Stateflow models into embedded C code.
Rudall, Nicola; McKenzie, Catherine; Landa, June; Bourne, Richard S; Bates, Ian; Shulman, Rob
2017-08-01
Clinical pharmacist (CP) interventions from the PROTECTED-UK cohort, a multi-site critical care interventions study, were further analysed to assess the effects of time on critical care, number of interventions, CP expertise and day of the week on the impact of interventions and, ultimately, on the contribution to patient care. Intervention data were collected from 21 adult critical care units over 14 days. Interventions could be error, optimisation or consult types, and were blind-coded to ensure consistency prior to bivariate analysis. Pharmacy service demographics were further collated by investigator survey. Of the 20,758 prescriptions reviewed, 3375 interventions were made (intervention rate 16.1%). CPs spent 3.5 h per day (mean, ±SD 1.7) on direct patient care, reviewed 10.3 patients per day (±SD 4.2) and required 22.5 min (±SD 9.5) per review. Intervention rate had a moderate inverse correlation with the time the pharmacist spent on critical care (P = 0.05; r = 0.4). Optimisation rate had a strong inverse association with the total number of prescriptions reviewed per day (P = 0.001; r = 0.7). The presence of a consultant CP had a moderate inverse correlation with the number of errors identified (P = 0.008; r = 0.6). No correlation existed between the presence of electronic prescribing in critical care and any intervention rate. Few centres provided weekend services, although the intervention rate was significantly higher on weekends than on weekdays. A CP is essential for safe and optimised patient medication therapy; an extended and developed pharmacy service is expected to reduce errors. CP services should be adequately staffed to enable adequate time for prescription review and maximal therapy optimisation. © 2016 Royal Pharmaceutical Society.
Cheong, Vee San; Bull, Anthony M J
2015-12-16
The choice of coordinate system and the alignment of the bone will affect the quantification of mechanical properties obtained during in-vitro biomechanical testing. Where these are used in predictive models, such as finite element analysis, a faithful description of these properties is paramount. Currently, in bending and torsional tests, bones are aligned on a pre-defined fixed span based on the reference system marked out. However, large inter-specimen differences have been reported. This suggests a need for the development of a specimen-specific alignment system for use in experimental work. Eleven ovine tibiae were used in this study and three-dimensional surface meshes were constructed from micro-computed tomography scan images. A novel, semi-automated algorithm was developed and applied to the surface meshes to align each whole bone based on its calculated principal directions. Thereafter, the code isolates the optimised location and length of each bone for experimental testing. This resulted in a lowering of the second moment of area about the chosen bending axis in the central region. More importantly, the optimisation method decreases the irregularity of the shape of the cross-sectional slices, as the unbiased estimate of the population coefficient of variation of the second moment of area decreased from a range of (0.210-0.435) to (0.145-0.317) in the longitudinal direction, indicating a minimisation of the product moment, which causes eccentric loading. Thus, this methodology serves as an important pre-step to align the bone for mechanical tests or simulation work, is optimised for each specimen, ensures repeatability, and is general enough to be applied to any long bone. Copyright © 2015 Elsevier Ltd. All rights reserved.
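The principal-direction alignment at the heart of such an algorithm can be sketched with an eigendecomposition of the vertex covariance. This is a minimal, generic illustration on a synthetic point cloud; it is not the authors' code, which additionally optimises the test location and span.

```python
# Align a point cloud (e.g. bone mesh vertices) to its principal directions.
import numpy as np

def principal_align(vertices):
    """Centre a point cloud and rotate it onto its principal axes."""
    centred = vertices - vertices.mean(axis=0)
    cov = np.cov(centred.T)                    # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    axes = eigvecs[:, ::-1]                    # long axis of the bone first
    if np.linalg.det(axes) < 0:                # keep a right-handed frame
        axes[:, -1] *= -1.0
    return centred @ axes

# Toy "bone": an elongated noisy point cloud
rng = np.random.default_rng(1)
pts = rng.normal(size=(5000, 3)) * np.array([2.0, 8.0, 30.0])
aligned = principal_align(pts)
print("axis-aligned extents:", np.ptp(aligned, axis=0))  # longest axis first
```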
Derimay, François; Souteyrand, Geraud; Motreff, Pascal; Rioufol, Gilles; Finet, Gerard
2017-10-13
The rePOT (proximal optimisation technique) sequence proved significantly more effective than final kissing balloon (FKB) inflation with two drug-eluting stents (DES) in a bench test. We sought to validate its efficacy experimentally across a large range of latest-generation DES. On left main fractal coronary bifurcation bench models, five samples of each of the six main latest-generation DES (Coroflex ISAR, Orsiro, Promus PREMIER, Resolute Integrity, Ultimaster, XIENCE Xpedition) were implanted using rePOT (initial POT, side branch inflation, final POT). The proximal elliptical ratio, side branch obstruction (SBO), stent overstretch and strut malapposition were quantified on 2D and 3D OCT. Results were compared with FKB using Promus PREMIER. Whatever the design, rePOT maintained vessel circularity compared with FKB: elliptical ratio, 1.02±0.01 to 1.04±0.01 vs. 1.26±0.02 (p<0.05). Global strut malapposition was much lower: 2.6±1.4% to 0.1±0.2% vs. 40.4±8.4% for FKB (p<0.05). However, only Promus PREMIER and XIENCE Xpedition achieved significantly less SBO: respectively, 5.6±3.5% and 10.0±5.3% vs. 23.5±5.7% for FKB (p<0.05). Platform design differences had little influence on the excellent results of rePOT versus FKB. RePOT optimised strut apposition without proximal elliptical deformation in the six main latest-generation DES. Thickness and design characteristics seemed relevant for optimising SBO.
New technologies for advanced three-dimensional optimum shape design in aeronautics
NASA Astrophysics Data System (ADS)
Dervieux, Alain; Lanteri, Stéphane; Malé, Jean-Michel; Marco, Nathalie; Rostaing-Schmidt, Nicole; Stoufflet, Bruno
1999-05-01
The analysis of complex flows around realistic aircraft geometries is becoming more and more predictive. In order to obtain this result, the complexity of flow analysis codes has been constantly increasing, involving more refined fluid models and sophisticated numerical methods. These codes can only run on top computers, exhausting their memory and CPU capabilities. It is, therefore, difficult to introduce the best analysis codes into a shape optimisation loop: most previous works in the optimum shape design field used only simplified analysis codes. Moreover, as the most popular optimisation methods are gradient-based, the more complex the flow solver, the more difficult it is to compute the sensitivity code. However, emerging technologies are making such an ambitious project, of including a state-of-the-art flow analysis code in an optimisation loop, feasible. Among those technologies, there are three important issues that this paper addresses: shape parametrisation, automated differentiation and parallel computing. Shape parametrisation allows faster optimisation by reducing the number of design variables; in this work, it relies on a hierarchical multilevel approach. The sensitivity code can be obtained using automated differentiation. The automated approach is based on software manipulation tools, which allow the differentiation to be quick and the resulting differentiated code to be rather fast and reliable. In addition, the parallel algorithms implemented in this work allow the resulting optimisation software to run on increasingly large geometries.
Natural Language Interface for Safety Certification of Safety-Critical Software
NASA Technical Reports Server (NTRS)
Denney, Ewen; Fischer, Bernd
2011-01-01
Model-based design and automated code generation are being used increasingly at NASA. The trend is to move beyond simulation and prototyping to actual flight code, particularly in the guidance, navigation, and control domain. However, there are substantial obstacles to more widespread adoption of code generators in such safety-critical domains. Since code generators are typically not qualified, there is no guarantee that their output is correct, and consequently the generated code still needs to be fully tested and certified. The AutoCert generator plug-in supports the certification of automatically generated code by formally verifying that the generated code is free of different safety violations, by constructing an independently verifiable certificate, and by explaining its analysis in a textual form suitable for code reviews.
Keates, Tracy; Cooper, Christopher D O; Savitsky, Pavel; Allerston, Charles K; Phillips, Claire; Hammarström, Martin; Daga, Neha; Berridge, Georgina; Mahajan, Pravin; Burgess-Brown, Nicola A; Müller, Susanne; Gräslund, Susanne; Gileadi, Opher
2012-06-15
The generation of affinity reagents to large numbers of human proteins depends on the ability to express the target proteins as high-quality antigens. The Structural Genomics Consortium (SGC) focuses on the production and structure determination of human proteins. In a 7-year period, the SGC has deposited crystal structures of >800 human protein domains, and has additionally expressed and purified a similar number of protein domains that have not yet been crystallised. The targets include a diversity of protein domains, with an attempt to provide high coverage of protein families. The family approach provides an excellent basis for characterising the selectivity of affinity reagents. We present a summary of the approaches used to generate purified human proteins or protein domains, a test case demonstrating the ability to rapidly generate new proteins, and an optimisation study on the modification of >70 proteins by biotinylation in vivo. These results provide a unique synergy between large-scale structural projects and the recent efforts to produce a wide coverage of affinity reagents to the human proteome. Copyright © 2011 Elsevier B.V. All rights reserved.
A novel probabilistic approach to generating PTV with partial voxel contributions
NASA Astrophysics Data System (ADS)
Tsang, H. S.; Kamerling, C. P.; Ziegenhein, P.; Nill, S.; Oelfke, U.
2017-06-01
Radiotherapy treatment planning for use with high-energy photon beams currently employs a binary approach in defining the planning target volume (PTV). We propose a margin concept that takes the beam directions into account, generating beam-dependent PTVs (bdPTVs) on a beam-by-beam basis. The resulting degree of overlap between the bdPTVs is used within the optimisation process; the optimiser effectively considers the same voxel to be both target and organ at risk (OAR) with fractional contributions. We investigate the impact of this novel approach when applied to prostate radiotherapy treatments, and compare treatment plans generated using beam-dependent margins with plans using conventional margins. Five prostate patients were used in this planning study, and plans using beam-dependent margins improved the sparing of high doses to target-surrounding OARs, though a trade-off in delivering additional low dose to the OARs can be observed. Plans using beam-dependent margins are observed to have slightly reduced target coverage. Nevertheless, all plans are able to satisfy 90% population coverage with the target receiving at least 95% of the prescribed dose at D98%.
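The fractional target/OAR contribution can be illustrated with a small quadratic-penalty objective. This is a minimal sketch under stated assumptions: random per-voxel membership weights stand in for the bdPTV overlap degrees, and the dose values are synthetic; it is not the authors' optimiser.

```python
# Fractional-voxel quadratic penalty: each voxel counts partly as target,
# partly as OAR. Weights and doses below are synthetic placeholders.
import numpy as np

def penalty(dose, w_target, w_oar, presc, oar_limit):
    """Sum of weighted target-underdose and OAR-overdose quadratic terms."""
    under = w_target * np.maximum(presc - dose, 0.0) ** 2
    over = w_oar * np.maximum(dose - oar_limit, 0.0) ** 2
    return float(np.sum(under + over))

rng = np.random.default_rng(2)
n_vox = 1000
w_target = rng.random(n_vox)        # fractional target membership in [0, 1]
w_oar = 1.0 - w_target              # the remainder counts as organ at risk
dose = rng.uniform(40.0, 80.0, n_vox)   # synthetic dose values (Gy)
print("objective:", penalty(dose, w_target, w_oar, presc=74.0, oar_limit=50.0))
```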
Astroparticle and neutrino oscillation research with KM3NeT
NASA Astrophysics Data System (ADS)
Kulikovskiy, V.
2017-05-01
Two next-generation underwater neutrino telescopes are under construction in the Mediterranean Sea by the KM3NeT Collaboration. The first, ORCA, optimised for atmospheric neutrino detection, will be capable of determining the neutrino mass hierarchy with >3σ significance after three years of operation, i.e. as early as 2023. The second, ARCA, is optimised for high energy neutrino astronomy. Its location allows for surveying most of the Galactic Plane, including the Galactic Centre and the most promising source candidates. The diffuse neutrino emission flux measured by the IceCube Collaboration can be observed with 5σ significance in less than one year.
Quail, Michael A; Gu, Yong; Swerdlow, Harold; Mayho, Matthew
2012-12-01
Size selection can be a critical step in the preparation of next-generation sequencing libraries. Traditional methods employing gel electrophoresis lack reproducibility, are labour-intensive, do not scale well and employ hazardous intercalating dyes. In a high-throughput setting, solid-phase reversible immobilisation beads are commonly used for size selection, but result in quite a broad fragment size range. We have evaluated and optimised the use of two semi-automated preparative DNA electrophoresis systems, the Caliper Labchip XT and the Sage Science Pippin Prep, for size selection of Illumina sequencing libraries. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Chen, Yu-Ren; Dye, Chung-Yuan
2013-06-01
In most of the inventory models in the literature, the deterioration rate of goods is viewed as an exogenous variable that is not subject to control. In the real market, the retailer can reduce the deterioration rate of a product by making effective capital investment in storehouse equipment. In this study, we formulate a deteriorating inventory model with time-varying demand by allowing preservation technology cost as a decision variable in conjunction with the replenishment policy. The objective is to find the optimal replenishment and preservation technology investment strategies that minimise the total cost over the planning horizon. For any given feasible replenishment scheme, we first prove that the optimal preservation technology investment strategy not only exists but is also unique. Then, a particle swarm optimisation algorithm is coded and used to solve the nonlinear programming problem by employing the properties derived in this article. Some numerical examples are used to illustrate the features of the proposed model.
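For reference, a generic particle swarm optimiser has the following shape; the sphere-like cost function and the hyper-parameters are common illustrative defaults, not the inventory model or the authors' tuned implementation.

```python
# Generic particle swarm optimiser; the cost function is a stand-in.
import numpy as np

def pso(cost, dim, n_particles=30, iters=200, lo=-10.0, hi=10.0,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))       # positions
    v = np.zeros_like(x)                              # velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(cost, 1, x)       # personal bests
    g = pbest[pbest_val.argmin()].copy()              # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(cost, 1, x)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Toy usage: a shifted sphere stands in for the total-cost function.
best, val = pso(lambda z: float(np.sum((z - 3.0) ** 2)), dim=4)
print(best, val)
```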
The Sensitivity of Coded Mask Telescopes
NASA Technical Reports Server (NTRS)
Skinner, Gerald K.
2008-01-01
Simple formulae are often used to estimate the sensitivity of coded mask X-ray or gamma-ray telescopes, but these are strictly applicable only if a number of basic assumptions are met. Complications arise, for example, if a grid structure is used to support the mask elements, if the detector spatial resolution is not good enough to completely resolve all the detail in the shadow of the mask, or if any of a number of other simplifying conditions are not fulfilled. We derive more general expressions for the Poisson-noise-limited sensitivity of astronomical telescopes using the coded mask technique, noting explicitly in what circumstances they are applicable. The emphasis is on using nomenclature and techniques that result in simple and revealing results. Where no convenient expression is available, a procedure is given which allows the calculation of the sensitivity. We consider certain aspects of the optimisation of the design of a coded mask telescope and show that when the detector spatial resolution and the mask-to-detector separation are fixed, the best source location accuracy is obtained when the mask elements are equal in size to the detector pixels.
Optimised mounting conditions for poly (ether sulfone) in radiation detection.
Nakamura, Hidehito; Shirakawa, Yoshiyuki; Sato, Nobuhiro; Yamada, Tatsuya; Kitamura, Hisashi; Takahashi, Sentaro
2014-09-01
Poly (ether sulfone) (PES) is a candidate for use as a scintillation material in radiation detection. Its characteristics, such as its emission spectrum and its effective refractive index (based on the emission spectrum), directly affect the propagation of the generated light to external photodetectors. It is also important to examine the presence of background radiation sources in manufactured PES. Here, we optimise the optical coupling and surface treatment of the PES and characterise its background. Optical grease was used to enhance the optical coupling between the PES and the photodetector; absorption by the grease of the short-wavelength light emitted from PES was negligible. Diffuse reflection induced by surface roughening increased the light yield for PES, despite the high effective refractive index. Background radiation derived from the PES sample and its impurities was negligible above the ambient, natural level. Overall, these results serve to optimise the mounting conditions for PES in radiation detection. Copyright © 2014 Elsevier Ltd. All rights reserved.
Statistical optimisation of diclofenac sustained release pellets coated with polymethacrylic films.
Kramar, A; Turk, S; Vrecer, F
2003-04-30
The objective of the present study was to evaluate three formulation parameters for the application of polymethacrylic films from aqueous dispersions in order to obtain multiparticulate sustained release of diclofenac sodium. Film coating of pellet cores was performed in a laboratory fluid bed apparatus. The chosen independent variables, i.e. the concentration of plasticizer (triethyl citrate), the methacrylate polymer ratio (Eudragit RS:Eudragit RL) and the quantity of coating dispersion, were optimised with a three-factor, three-level Box-Behnken design. The chosen dependent variables were the cumulative percentages of diclofenac dissolved in 3, 4 and 6 h. Based on the experimental design, different diclofenac release profiles were obtained. Response surface plots were used to relate the dependent and independent variables. The optimisation procedure generated an optimum of 40% release in 3 h, with the plasticizer concentration, quantity of coating dispersion and polymer-to-polymer ratio (Eudragit RS:Eudragit RL) at 25% w/w, 400 g and 3/1, respectively. The optimised formulation prepared according to the computer-determined levels provided a release profile close to the predicted values. We also studied the thermal and surface characteristics of the polymethacrylic films to understand the influence of plasticizer concentration on drug release from the pellets.
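A three-factor Box-Behnken design in coded units is straightforward to construct: 12 edge-midpoint runs plus replicated centre points. In the sketch below the mapping from coded levels to physical settings is an illustrative assumption (only the reported optimum levels appear in the abstract, not the full ranges).

```python
# Build the 3-factor, 3-level Box-Behnken design in coded units (-1, 0, +1).
from itertools import combinations
import numpy as np

def box_behnken_3(center_points=3):
    """12 edge-midpoint runs (+/-1 on one factor pair, 0 on the third),
    plus replicated centre points."""
    runs = []
    for i, j in combinations(range(3), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0, 0, 0]
                row[i], row[j] = a, b
                runs.append(row)
    runs += [[0, 0, 0]] * center_points
    return np.array(runs, dtype=float)

design = box_behnken_3()
# Hypothetical physical levels: plasticizer % w/w, dispersion quantity (g),
# RS:RL ratio index -- the low levels here are assumptions for illustration.
low, high = np.array([15.0, 200.0, 1.0]), np.array([25.0, 400.0, 3.0])
physical = (low + high) / 2 + design * (high - low) / 2
print(physical)
```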
Hosseinkhani, Baharak; Hennebel, Tom; Boon, Nico
2014-09-25
Fermentative production of bio-hydrogen (bio-H2) from organic residues has emerged as a promising alternative for providing the electron source required for hydrogen-driven remediation strategies. In contrast to the widely studied production of H2 by bacteria in freshwater systems, few reports are available regarding the generation of biogenic H2 and its optimisation in marine systems. The present research aims to optimise the capability of an indigenous marine bacterium to produce bio-H2 in marine environments and subsequently develop this process for hydrogen-driven remediation strategies. The fermentative conversion of organics in marine media to H2 using a marine isolate, Pseudoalteromonas sp. BH11, was determined. A Taguchi design-of-experiments methodology was employed to evaluate the optimal nutritional composition in batch tests to improve bio-H2 yields. Further optimisation experiments showed that alginate-immobilised bacterial cells were able to produce bio-H2 at the same rate as suspended cells over a period of several weeks. Finally, bio-H2 was used as an electron donor to successfully dehalogenate trichloroethylene (TCE) using biogenic palladium nanoparticles as a catalyst. Fermentative production of bio-H2 can be a promising technique for the concomitant generation of an electron source for hydrogen-driven remediation strategies and the treatment of organic residues in marine ecosystems. Copyright © 2014 Elsevier B.V. All rights reserved.
From Verified Models to Verifiable Code
NASA Technical Reports Server (NTRS)
Lensink, Leonard; Munoz, Cesar A.; Goodloe, Alwyn E.
2009-01-01
Declarative specifications of digital systems often contain parts that can be automatically translated into executable code. Automated code generation may reduce or eliminate the kinds of errors typically introduced through manual code writing. For this approach to be effective, the generated code should be reasonably efficient and, more importantly, verifiable. This paper presents a prototype code generator for the Prototype Verification System (PVS) that translates a subset of PVS functional specifications into an intermediate language and subsequently to multiple target programming languages. Several case studies are presented to illustrate the tool's functionality. The generated code can be analyzed by software verification tools such as verification condition generators, static analyzers, and software model-checkers to increase the confidence that the generated code is correct.
Díaz-Dinamarca, Diego A; Jerias, José I; Soto, Daniel A; Soto, Jorge A; Díaz, Natalia V; Leyton, Yessica Y; Villegas, Rodrigo A; Kalergis, Alexis M; Vásquez, Abel E
2018-03-01
Group B Streptococcus (GBS) is the leading cause of neonatal meningitis and a common pathogen in the livestock and aquaculture industries around the world. Conjugate polysaccharide and protein-based vaccines are under development. The surface immunogenic protein (SIP) is conserved in all GBS serotypes and has been shown to be a good target for vaccine development. The expression of recombinant proteins in Escherichia coli cells has been shown to be useful in the development of vaccines, and protein purification is a factor affecting their immunogenicity. The response surface methodology (RSM) with a Box-Behnken design can optimise the expression of recombinant proteins. However, the biological effect in mice immunised with an immunogenic protein optimised by RSM and purified by low-affinity chromatography is unknown. In this study, we used RSM to optimise the expression of rSIP, and we evaluated the SIP-specific humoral response and the ability to decrease GBS colonisation of the vaginal tract in female mice. It was observed by Ni-NTA chromatography that the RSM increases the yield of rSIP expression, enabling a better purification process. This improvement in rSIP purification suggests better induction of the anti-SIP IgG immune response and a positive effect on decreasing GBS intravaginal colonisation. RSM applied to optimise the expression of recombinant proteins with immunogenic capacity is an interesting alternative for the preclinical evaluation of vaccines, and could improve their immune response.
Optimal control of Formula One car energy recovery systems
NASA Astrophysics Data System (ADS)
Limebeer, D. J. N.; Perantoni, G.; Rao, A. V.
2014-10-01
The utility of orthogonal collocation methods in the solution of optimal control problems relating to Formula One racing is demonstrated. These methods can be used to optimise driver controls such as steering, braking and throttle usage, and to optimise vehicle parameters such as the aerodynamic downforce and mass distributions. Of particular interest is the optimal usage of energy recovery systems (ERSs). Contemporary kinetic energy recovery systems are studied and compared with future hybrid kinetic and thermal/heat ERSs, known as ERS-K and ERS-H, respectively. It is demonstrated that these systems, when properly controlled, can produce contemporary lap times using approximately two-thirds of the fuel required by earlier-generation (2013 and prior) vehicles.
Di Paolo Emilio, M; Festuccia, R; Palladino, L
2015-09-01
In this work, the X-ray emission generated from a plasma produced by focusing a Nd:YAG laser beam on Mylar and yttrium targets is characterised. The goal is to reach the condition that optimises the X-ray conversion efficiency at 500 eV (pre-edge of the oxygen K-shell), which is strongly absorbed by carbon-based structures. The characteristics of the microbeam optical system, the software/hardware control and preliminary measurements of the X-ray fluence are presented. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Reactive power planning under high penetration of wind energy using Benders decomposition
Xu, Yan; Wei, Yanli; Fang, Xin; ...
2015-11-05
This study addresses the optimal allocation of reactive power volt-ampere reactive (VAR) sources under the paradigm of high penetration of wind energy. Reactive power planning (RPP) in this condition involves a high level of uncertainty because of the characteristics of wind power. To properly model wind generation uncertainty, a multi-scenario framework optimal power flow is developed that considers the voltage stability constraint under the worst wind scenario and transmission N-1 contingency. The objective of RPP in this study is to minimise the total cost, including the VAR investment cost and the expected generation cost. RPP under this condition is therefore modelled as a two-stage stochastic programming problem that optimises the VAR location and size in one stage, minimises the fuel cost in the other stage, and iteratively finds the globally optimal RPP results. Benders decomposition is used to solve this model with an upper-level problem (master problem) for VAR allocation optimisation and a lower-level problem (sub-problem) for generation cost minimisation. The impact of the potential reactive power support from doubly-fed induction generators (DFIGs) is also analysed. Lastly, case studies on the IEEE 14-bus and 118-bus systems are provided to verify the proposed method.
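The master/sub-problem interplay can be shown on a toy two-stage problem. The sketch below runs a minimal Benders loop under stated assumptions: a single investment variable stands in for VAR sizing and a one-constraint dispatch LP for the generation-cost stage; it is nothing like a full power-flow model, and the duals are read from SciPy's HiGHS solver.

```python
# Toy Benders decomposition: the master sizes an investment y, the
# sub-problem dispatches generation x against demand partially offset by y.
import numpy as np
from scipy.optimize import linprog

c_inv, c_gen, demand, a, y_max = 5.0, 12.0, 100.0, 0.8, 200.0
cuts = []   # each cut: theta >= lam*demand - (lam*a)*y

for it in range(20):
    # Master problem: min c_inv*y + theta subject to accumulated cuts.
    A_ub = np.array([[-la, -1.0] for la, _ in cuts]) if cuts else None
    b_ub = np.array([-const for _, const in cuts]) if cuts else None
    y, theta = linprog([c_inv, 1.0], A_ub=A_ub, b_ub=b_ub,
                       bounds=[(0, y_max), (0, None)], method="highs").x

    # Sub-problem: min c_gen*x subject to x >= demand - a*y.
    sub = linprog([c_gen], A_ub=[[-1.0]], b_ub=[-(demand - a * y)],
                  bounds=[(0, None)], method="highs")
    q = sub.fun
    lam = -sub.ineqlin.marginals[0]      # dual of the demand constraint (HiGHS)
    if q <= theta + 1e-6:                # bounds have met: optimal
        break
    cuts.append((lam * a, lam * demand)) # new Benders optimality cut

print(f"converged in {it + 1} iterations: y = {y:.2f}, "
      f"total cost = {c_inv * y + q:.2f}")
```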
Laser-driven x-ray and neutron source development for industrial applications of plasma accelerators
NASA Astrophysics Data System (ADS)
Brenner, C. M.; Mirfayzi, S. R.; Rusby, D. R.; Armstrong, C.; Alejo, A.; Wilson, L. A.; Clarke, R.; Ahmed, H.; Butler, N. M. H.; Haddock, D.; Higginson, A.; McClymont, A.; Murphy, C.; Notley, M.; Oliver, P.; Allott, R.; Hernandez-Gomez, C.; Kar, S.; McKenna, P.; Neely, D.
2016-01-01
Pulsed beams of energetic x-rays and neutrons from intense laser interactions with solid foils are promising for applications where bright, small-emission-area sources capable of multi-modal delivery are ideal. Possible end users of laser-driven multi-modal sources are those requiring advanced non-destructive inspection techniques in high-value industry sectors such as aerospace, nuclear and advanced manufacturing. We report on experimental work that demonstrates multi-modal operation of high power laser-solid interactions for neutron and x-ray beam generation. Measurements and Monte Carlo radiation transport simulations show that the neutron yield is increased by a factor of ~2 when a 1 mm copper foil is placed behind a 2 mm lithium foil, compared with using a 2 cm block of lithium only. We explore x-ray generation with a 10 picosecond drive pulse in order to tailor the spectral content for radiography with medium-density alloy metals. The impact of using >1 ps pulse duration on laser-accelerated electron beam generation and transport is discussed alongside the optimisation of subsequent bremsstrahlung emission in thin, high atomic number target foils. X-ray spectra are deconvolved from spectrometer measurements and simulation data generated using the GEANT4 Monte Carlo code. We also demonstrate the unique capability of laser-driven x-rays to deliver single-pulse, high spatial resolution projection imaging of thick metallic objects. Active detector radiographic imaging of industrially relevant sample objects with a 10 ps drive pulse is presented for the first time, demonstrating that features of 200 μm size are resolved when projected at high magnification.
NASA Astrophysics Data System (ADS)
von Bergmann, Hubertus; Morkel, Francois; Stehmann, Timo
2015-02-01
Laser ultrasonic testing (UT) is an important technique for the non-destructive inspection of composite parts in the aerospace industry. In laser UT, a high-power, short-pulse generation laser is scanned across the material surface, producing ultrasound waves which can be detected by a second, low-power laser system and are used to draw a defect map of the part. We report on the design and testing of a transversely excited atmospheric pressure (TEA) CO2 laser system specifically optimised for laser UT. The laser is excited by a novel solid-state switched pulsing system and utilises either spark or corona preionisation. It provides short output pulses of less than 100 ns at repetition rates of up to 1 kHz, optimised for efficient ultrasonic wave generation. The system has been designed for highly reliable operation under industrial conditions, and a long-term test with a total count in excess of 5 billion laser pulses is reported.
Methling, Torsten; Armbrust, Nina; Haitz, Thilo; Speidel, Michael; Poboss, Norman; Braun-Unkhoff, Marina; Dieter, Heiko; Kempter-Regel, Brigitte; Kraaij, Gerard; Schliessmann, Ursula; Sterr, Yasemin; Wörner, Antje; Hirth, Thomas; Riedel, Uwe; Scheffknecht, Günter
2014-10-01
A new concept is proposed for combined fermentation (two-stage high-load fermenter) and gasification (two-stage fluidised bed gasifier with CO2 separation) of sewage sludge and wood, and the subsequent utilisation of the biogenic gases in a hybrid power plant consisting of a solid oxide fuel cell and a gas turbine. The development and optimisation of the key processes of the new concept (fermentation, gasification, utilisation) are reported in detail. For gas production, process parameters were investigated experimentally and numerically to achieve high conversion rates of biomass. For product gas utilisation, important combustion properties (laminar flame speed, ignition delay time) were analysed numerically to evaluate machinery operation (reliability, emissions). Furthermore, the coupling of the processes was numerically analysed and optimised by means of integration of heat and mass flows. The high simulated electrical efficiency of 42%, including the conversion of raw biomass, is promising for future power generation from biomass. Copyright © 2014 Elsevier Ltd. All rights reserved.
IEEE 1982. Proceedings of the international conference on cybernetics and society
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1982-01-01
The following topics were dealt with: knowledge-based systems; risk analysis; man-machine interactions; human information processing; metaphor, analogy and problem-solving; manual control modelling; transportation systems; simulation; adaptive and learning systems; biocybernetics; cybernetics; mathematical programming; robotics; decision support systems; analysis, design and validation of models; computer vision; systems science; energy systems; environmental modelling and policy; pattern recognition; nuclear warfare; technological forecasting; artificial intelligence; the Turin shroud; optimisation; workloads. Abstracts of individual papers can be found under the relevant classification codes in this or future issues.
Flexible Generation of Kalman Filter Code
NASA Technical Reports Server (NTRS)
Richardson, Julian; Wilson, Edward
2006-01-01
Domain-specific program synthesis can automatically generate high quality code in complex domains from succinct specifications, but the range of programs which can be generated by a given synthesis system is typically narrow. Obtaining code which falls outside this narrow scope necessitates either 1) extension of the code generator, which is usually very expensive, or 2) manual modification of the generated code, which is often difficult and which must be redone whenever changes are made to the program specification. In this paper, we describe adaptations and extensions of the AUTOFILTER Kalman filter synthesis system which greatly extend the range of programs which can be generated. Users augment the input specification with a specification of code fragments and how those fragments should interleave with or replace parts of the synthesized filter. This allows users to generate a much wider range of programs without needing to modify the synthesis system or edit the generated code. We demonstrate the usefulness of the approach by applying it to the synthesis of a complex state estimator which combines code from several Kalman filters with user-specified code. The work described in this paper allows the complex design decisions necessary for real-world applications to be reflected in the synthesized code. When executed on simulated input data, the generated state estimator was found to produce estimates comparable to those produced by a hand-coded estimator.
Optimisation of phase ratio in the triple jump using computer simulation.
Allen, Sam J; King, Mark A; Yeadon, M R Fred
2016-04-01
The triple jump is an athletic event comprising three phases in which the optimal proportion of each phase to the total distance jumped, termed the phase ratio, is unknown. This study used a whole-body torque-driven computer simulation model of all three phases of the triple jump to investigate optimal technique. The technique of the simulation model was optimised by varying torque generator activation parameters using a Genetic Algorithm in order to maximise total jump distance, resulting in a hop-dominated technique (35.7%:30.8%:33.6%) and a distance of 14.05m. Optimisations were then run with penalties forcing the model to adopt hop and jump phases of 33%, 34%, 35%, 36%, and 37% of the optimised distance, resulting in total distances of: 13.79m, 13.87m, 13.95m, 14.05m, and 14.02m; and 14.01m, 14.02m, 13.97m, 13.84m, and 13.67m respectively. These results indicate that in this subject-specific case there is a plateau in optimum technique encompassing balanced and hop-dominated techniques, but that a jump-dominated technique is associated with a decrease in performance. Hop-dominated techniques are associated with higher forces than jump-dominated techniques; therefore optimal phase ratio may be related to a combination of strength and approach velocity. Copyright © 2016 Elsevier B.V. All rights reserved.
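The optimisation loop described (a genetic algorithm over activation parameters, with penalties pinning the hop or jump fraction) can be sketched generically. Below is a hedged toy in Python: jump_distance is an invented smooth surrogate standing in for the torque-driven simulation (its unconstrained optimum is deliberately hop-dominated, loosely echoing the abstract), and the penalty weight, bounds and GA settings are placeholders.

import random

def jump_distance(phases):
    # Invented surrogate for the full simulation model (illustrative numbers).
    hop, step, jump = phases
    dist = 14.6 - (hop - 5.2) ** 2 - (step - 4.5) ** 2 - (jump - 4.9) ** 2
    return dist, hop / (hop + step + jump)

def fitness(phases, hop_target=None, weight=100.0):
    dist, hop_frac = jump_distance(phases)
    if hop_target is not None:              # penalty forcing a fixed hop ratio
        dist -= weight * abs(hop_frac - hop_target)
    return dist

def optimise(hop_target=None, pop_size=40, gens=300):
    pop = [[random.uniform(3.0, 6.0) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: fitness(p, hop_target), reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            children.append([min(6.0, max(3.0, (x + y) / 2 + random.gauss(0, 0.05)))
                             for x, y in zip(a, b)])
        pop = parents + children
    return max(pop, key=lambda p: fitness(p, hop_target))

print(optimise())                  # free optimum: hop-dominated phases
print(optimise(hop_target=0.33))   # penalised towards a 33% hop phase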
Formulation and optimisation of raft-forming chewable tablets containing H2 antagonist
Prajapati, Shailesh T; Mehta, Anant P; Modhia, Ishan P; Patel, Chhagan N
2012-01-01
Purpose: The purpose of this research work was to formulate raft-forming chewable tablets of an H2 antagonist (famotidine) using a raft-forming agent along with antacid- and gas-generating agents. Materials and Methods: Tablets were prepared by wet granulation and evaluated for raft strength, acid neutralisation capacity, weight variation, % drug content, thickness, hardness, friability and in vitro drug release. Various raft-forming agents were used in preliminary screening. A 2³ full-factorial design was used in the present study for optimisation. The amounts of sodium alginate, calcium carbonate and sodium bicarbonate were selected as independent variables. Raft strength, acid neutralisation capacity and drug release at 30 min were selected as responses. Results: Tablets containing sodium alginate had the maximum raft strength compared with the other raft-forming agents. The acid neutralisation capacity and in vitro drug release of all factorial batches were found to be satisfactory. The F5 batch was optimised based on maximum raft strength and good acid neutralisation capacity. A drug–excipient compatibility study showed no interaction between the drug and excipients. A stability study of the optimised formulation showed that the tablets were stable under accelerated environmental conditions. Conclusion: It was concluded that raft-forming chewable tablets prepared using optimum amounts of sodium alginate, calcium carbonate and sodium bicarbonate could be an efficient dosage form in the treatment of gastro-oesophageal reflux disease. PMID:23580933
Formulation and optimisation of raft-forming chewable tablets containing H2 antagonist.
Prajapati, Shailesh T; Mehta, Anant P; Modhia, Ishan P; Patel, Chhagan N
2012-10-01
The purpose of this research work was to formulate raft-forming chewable tablets of an H2 antagonist (famotidine) using a raft-forming agent along with antacid- and gas-generating agents. Tablets were prepared by wet granulation and evaluated for raft strength, acid neutralisation capacity, weight variation, % drug content, thickness, hardness, friability and in vitro drug release. Various raft-forming agents were used in preliminary screening. A 2³ full-factorial design was used in the present study for optimisation. The amounts of sodium alginate, calcium carbonate and sodium bicarbonate were selected as independent variables. Raft strength, acid neutralisation capacity and drug release at 30 min were selected as responses. Tablets containing sodium alginate had the maximum raft strength compared with the other raft-forming agents. The acid neutralisation capacity and in vitro drug release of all factorial batches were found to be satisfactory. The F5 batch was optimised based on maximum raft strength and good acid neutralisation capacity. A drug-excipient compatibility study showed no interaction between the drug and excipients. A stability study of the optimised formulation showed that the tablets were stable under accelerated environmental conditions. It was concluded that raft-forming chewable tablets prepared using optimum amounts of sodium alginate, calcium carbonate and sodium bicarbonate could be an efficient dosage form in the treatment of gastro-oesophageal reflux disease.
NASA Technical Reports Server (NTRS)
Denney, Ewen W.; Fischer, Bernd
2009-01-01
Model-based development and automated code generation are increasingly used for production code in safety-critical applications, but since code generators are typically not qualified, the generated code must still be fully tested, reviewed, and certified. This is particularly arduous for mathematical and control engineering software which requires reviewers to trace subtle details of textbook formulas and algorithms to the code, and to match requirements (e.g., physical units or coordinate frames) not represented explicitly in models or code. Both tasks are complicated by the often opaque nature of auto-generated code. We address these problems by developing a verification-driven approach to traceability and documentation. We apply the AUTOCERT verification system to identify and then verify mathematical concepts in the code, based on a mathematical domain theory, and then use these verified traceability links between concepts, code, and verification conditions to construct a natural language report that provides a high-level structured argument explaining why and how the code uses the assumptions and complies with the requirements. We have applied our approach to generate review documents for several sub-systems of NASA's Project Constellation.
Optimizing ATLAS code with different profilers
NASA Astrophysics Data System (ADS)
Kama, S.; Seuster, R.; Stewart, G. A.; Vitillo, R. A.
2014-06-01
After the current maintenance period, the LHC will provide higher energy collisions with increased luminosity. In order to keep up with these higher rates, ATLAS software needs to speed up substantially. However, ATLAS code is composed of approximately 6M lines, written by many different programmers with different backgrounds, which makes code optimisation a challenge. To help with this effort different profiling tools and techniques are being used. These include well known tools, such as the Valgrind suite and Intel Amplifier; less common tools like Pin, PAPI, and GOoDA; as well as techniques such as library interposing. In this paper we will mainly focus on Pin tools and GOoDA. Pin is a dynamic binary instrumentation tool which can obtain statistics such as call counts and instruction counts and can interrogate functions' arguments. It has been used to obtain CLHEP Matrix profiles, operations and vector sizes for linear algebra calculations, which has provided the insight necessary to achieve significant performance improvements. Complementing this, GOoDA, an in-house performance tool built in collaboration with Google and based on hardware performance monitoring unit events, is used to identify hot-spots in the code for different types of hardware limitations, such as CPU resources, caches, or memory bandwidth. GOoDA has been used to improve the performance of the new magnetic field code and to identify potential vectorization targets in several places, such as the Runge-Kutta propagation code.
Underworld results as a triple (shopping list, posterior, priors)
NASA Astrophysics Data System (ADS)
Quenette, S. M.; Moresi, L. N.; Abramson, D.
2013-12-01
When studying long-term lithosphere deformation and other such large-scale, spatially distinct and behaviour-rich problems, there is a natural trade-off between the meaning of a model, the observations used to validate the model and the ability to compute over this space. For example, many models of varying lithologies, rheological properties and underlying physics may reasonably match (or not match) observables. To compound this problem, each realisation is computationally intensive, requiring high resolution, algorithm tuning and code tuning to contemporary computer hardware. It is often intractable to use sampling-based assimilation methods, but with better optimisation the window of tractability becomes wider. The ultimate goal is to find a sweet-spot where a formal assimilation method is used, and where a model conforms to observations. It is natural to think of this as an inverse problem, in which the underlying physics may be fixed and the rheological properties and possibly the lithologies themselves are unknown. What happens when we push this approach and treat some portion of the underlying physics as an unknown? At its extreme this is an intractable problem. However, there is an analogy here with how we develop software for these scientific problems. What happens when we treat the changing part of a largely complete code as an unknown, where the changes are working towards this sweet-spot? When posed as a Bayesian inverse problem, the result is a triple: the model changes, the real priors and the real posterior. Not only does this give meaning to the process by which a code changes, it forms a mathematical bridge from an inverse problem to compiler optimisations given such changes. As a stepping-stone example we show a regional-scale heat flow model with constraining observations, and the inverse process including increasing complexity in the software. The implementation uses Underworld-GT (Underworld plus research extras to import geology and export geothermic measures, etc.). Underworld uses StGermain, an early (partial) implementation of the theories described here.
NASA Astrophysics Data System (ADS)
Rüther, Heinz; Martine, Hagai M.; Mtalo, E. G.
This paper presents a novel approach to semiautomatic building extraction in informal settlement areas from aerial photographs. The proposed approach uses a strategy of delineating buildings by optimising their approximate building contour position. Approximate building contours are derived automatically by locating elevation blobs in digital surface models. Building extraction is then effected by means of the snakes algorithm and the dynamic programming optimisation technique. With dynamic programming, the building contour optimisation problem is realized as a discrete multistage process and solved by the "time-delayed" algorithm, as developed in this work. The proposed building extraction approach is a semiautomatic process, with user-controlled operations linking fully automated subprocesses. Inputs into the proposed building extraction system are ortho-images and digital surface models, the latter being generated through image matching techniques. Buildings are modeled as "lumps" or elevation blobs in digital surface models, which are derived by altimetric thresholding of digital surface models. Initial windows for building extraction are provided by projecting the elevation blobs' centre points onto an ortho-image. In the next step, approximate building contours are extracted from the ortho-image by region growing constrained by edges. The approximate building contours thus derived are inputs into the dynamic programming optimisation process in which the final building contours are established. The proposed system is tested on two study areas: Marconi Beam in Cape Town, South Africa, and Manzese in Dar es Salaam, Tanzania. Sixty percent of the buildings in the study areas were extracted and verified, and it is concluded that the proposed approach contributes meaningfully to the extraction of buildings in moderately complex and crowded informal settlement areas.
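Multistage contour optimisation by dynamic programming has a compact generic form: at each stage (contour vertex) one of several candidate positions is chosen to minimise an image energy plus a smoothness cost between consecutive choices. The Viterbi-style sketch below illustrates that structure; it is not the paper's "time-delayed" formulation, and the energy functions are left abstract for the caller to supply.

# Minimal dynamic-programming contour optimisation in the spirit of DP snakes.
def dp_contour(candidates, unary, smooth):
    """candidates[s] = candidate positions at stage s; unary(p) = image energy
    at position p; smooth(q, p) = deformation cost between consecutive stages."""
    n = len(candidates)
    cost = [[unary(p) for p in candidates[0]]]
    back = []
    for s in range(1, n):
        row, ptr = [], []
        for p in candidates[s]:
            best_j, best_c = min(
                ((j, cost[s - 1][j] + smooth(q, p)) for j, q in enumerate(candidates[s - 1])),
                key=lambda t: t[1],
            )
            row.append(best_c + unary(p))
            ptr.append(best_j)
        cost.append(row)
        back.append(ptr)
    # Backtrack the cheapest path of positions.
    j = min(range(len(cost[-1])), key=lambda k: cost[-1][k])
    path = [candidates[-1][j]]
    for s in range(n - 2, -1, -1):
        j = back[s][j]
        path.append(candidates[s][j])
    return path[::-1]

In a building-extraction setting the candidates would be positions sampled around the approximate contour, the unary term an edge-strength energy from the ortho-image, and the smoothness term a curvature or displacement cost.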
Speckle-based at-wavelength metrology of X-ray mirrors with super accuracy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kashyap, Yogesh; Wang, Hongchang; Sawhney, Kawal, E-mail: kawal.sawhney@diamond.ac.uk
2016-05-15
X-ray active mirrors, such as bimorph and mechanically bendable mirrors, are increasingly being used on beamlines at modern synchrotron source facilities to generate either focused or "top-hat" beams. As well as optical tests in the metrology lab, it is becoming increasingly important to optimise and characterise active optics under actual beamline operating conditions. The recently developed X-ray speckle-based at-wavelength metrology technique has shown great potential in this respect. The technique has been established and further developed at the Diamond Light Source and is increasingly being used to optimise active mirrors. Details of the X-ray speckle-based at-wavelength metrology technique and an example of its applicability in characterising and optimising a micro-focusing bimorph X-ray mirror are presented. Importantly, an unprecedented angular sensitivity in the range of two nanoradians for measuring the slope error of an optical surface has been demonstrated. Such a super-precision metrology technique will be beneficial to the manufacturers of polished mirrors and also in the optimisation of beam shaping during experiments.
Ancient DNA sequence revealed by error-correcting codes.
Brandão, Marcelo M; Spoladore, Larissa; Faria, Luzinete C B; Rocha, Andréa S L; Silva-Filho, Marcio C; Palazzo, Reginaldo
2015-07-10
A previously described DNA sequence generator algorithm (DNA-SGA) using error-correcting codes has been employed as a computational tool to address the evolutionary pathway of the genetic code. The code-generated sequence alignment demonstrated that a residue mutation revealed by the code can be found in the same position in sequences of distantly related taxa. Furthermore, the code-generated sequences do not promote amino acid changes in the deviant genomes through codon reassignment. A Bayesian evolutionary analysis of both code-generated and homologous sequences of the Arabidopsis thaliana malate dehydrogenase gene indicates an approximately 1 MYA divergence time from the MDH code-generated sequence node to its paralogous sequences. The DNA-SGA helps to determine the plesiomorphic state of DNA sequences because a single nucleotide alteration often occurs in distantly related taxa and can be found in the alternative codon patterns of noncanonical genetic codes. As a consequence, the algorithm may reveal an earlier stage of the evolution of the standard code.
Generating code adapted for interlinking legacy scalar code and extended vector code
Gschwind, Michael K
2013-06-04
Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.
A time-dependent search for high-energy neutrinos from bright GRBs with ANTARES
NASA Astrophysics Data System (ADS)
Celli, Silvia
2017-03-01
Astrophysical point-like neutrino sources, like Gamma-Ray Bursts (GRBs), are one of the main targets for neutrino telescopes, since they are among the best candidates for Ultra-High-Energy Cosmic Ray (UHECR) acceleration. From the interaction between the accelerated protons and the intense radiation fields of the source jet, charged mesons are produced, which then decay into neutrinos. The methods and results of a search for high-energy neutrinos in spatial and temporal correlation with the detected gamma-ray emission are presented for four bright GRBs observed between 2008 and 2013: a time-dependent analysis, optimised for each flare of the selected bursts, is performed to predict detailed neutrino spectra. The internal shock scenario of the fireball model is investigated, relying on the neutrino spectra computed through the numerical code NeuCosmA. The analysis is optimised on a per-burst basis, through maximisation of the signal discovery probability. Since no events in the ANTARES data passed the optimised cuts, 90% C.L. upper limits are derived on the expected neutrino fluences.
Optimised detection of mitochondrial DNA strand breaks.
Hanna, Rebecca; Crowther, Jonathan M; Bulsara, Pallav A; Wang, Xuying; Moore, David J; Birch-Machin, Mark A
2018-05-04
Intrinsic and extrinsic factors that induce cellular oxidative stress damage tissue integrity and promote ageing, resulting in accumulative strand breaks to the mitochondrial DNA (mtDNA) genome. Limited repair mechanisms and close proximity to superoxide generation make mtDNA a prominent biomarker of oxidative damage. Using human DNA we describe an optimised long-range qPCR methodology that sensitively detects mtDNA strand breaks relative to a suite of short mitochondrial and nuclear DNA housekeeping amplicons, which control for any variation in mtDNA copy number. An application is demonstrated by detecting 16-36-fold mtDNA damage in human skin cells induced by hydrogen peroxide and solar simulated radiation. Copyright © 2018 Elsevier B.V. and Mitochondria Research Society. All rights reserved.
Optimised design for a 1 kJ diode-pumped solid-state laser system
NASA Astrophysics Data System (ADS)
Mason, Paul D.; Ertel, Klaus; Banerjee, Saumyabrata; Phillips, P. Jonathan; Hernandez-Gomez, Cristina; Collier, John L.
2011-06-01
A conceptual design for a kJ-class diode-pumped solid-state laser (DPSSL) system based on cryogenic gas-cooled multislab ceramic Yb:YAG amplifier technology has been developed at the STFC as a building block towards a MJ-class source for inertial fusion energy (IFE) projects such as HiPER. In this paper, we present an overview of an amplifier design optimised for efficient generation of 1 kJ nanosecond pulses at 10 Hz repetition rate. In order to confirm the viability of this technology, a prototype version of this amplifier scaled to deliver 10 J at 10 Hz, DiPOLE, is under development at the Central Laser Facility. A progress update on the status of this system is also presented.
NASA Astrophysics Data System (ADS)
Jones, Adam; Utyuzhnikov, Sergey
2017-08-01
Turbulent flow in a ribbed channel is studied using an efficient near-wall domain decomposition (NDD) method. The NDD approach is formulated by splitting the computational domain into an inner and outer region, with an interface boundary between the two. The computational mesh covers the outer region, and the flow in this region is solved using the open-source CFD code Code_Saturne with special boundary conditions on the interface boundary, called interface boundary conditions (IBCs). The IBCs are of Robin type and incorporate the effect of the inner region on the flow in the outer region. IBCs are formulated in terms of the distance from the interface boundary to the wall in the inner region. It is demonstrated that up to 90% of the region between the ribs in the ribbed passage can be removed from the computational mesh with an error on the friction factor within 2.5%. In addition, computations with NDD are faster than computations based on low Reynolds number (LRN) models by a factor of five. Different rib heights can be studied with the same mesh in the outer region without affecting the accuracy of the friction factor. This is tested with six different rib heights in an example of a design optimisation study. It is found that the friction factors computed with NDD are almost identical to the fully-resolved results. When used for inverse problems, NDD is considerably more efficient than LRN computations because only one computation needs to be performed and only one mesh needs to be generated.
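For orientation, a Robin-type interface boundary condition on the interface boundary Γ ties the solution to its wall-normal derivative. The following is only a schematic form, assuming the simplest coefficient structure; the paper derives its coefficients from the inner-region equations, which are not reproduced here:

\[
  \left. u \right|_{\Gamma} \;-\; d^{*} \left. \frac{\partial u}{\partial n} \right|_{\Gamma} \;=\; u_{w},
\]

where u is the transported variable, n the wall-normal direction, d^{*} an effective coefficient playing the role of the distance from Γ to the wall, and u_{w} the wall value; all symbols here are generic placeholders.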
Olvera-García, Myrna; Sanchez-Flores, Alejandro; Quirasco Baruch, Maricarmen
2018-03-01
Enterococcus spp. are present in the native microbiota of many traditional fermented foods. Their ability to produce antibacterial compounds, mainly against Listeria monocytogenes, has raised interest recently. However, there is scarce information about their proteolytic and lipolytic potential, and their biotechnological application is currently limited because enterococcal strains have been related to nosocomial infections. In this work, next-generation sequencing and optimised bioinformatic pipelines were used to annotate the genomes of two Enterococcus strains (one E. faecium and one E. faecalis) isolated from the Mexican artisanal ripened Cotija cheese. A battery of genes involved in their proteolytic system was annotated. Genes coding for lipases, esterases and other enzymes whose final products contribute to cheese aroma and flavour were identified as well. As for the production of antibacterial compounds, several peptidoglycan hydrolase- and bacteriocin-coding genes were identified in both genomes experimentally and by bioinformatic analyses. E. faecalis showed resistance to aminoglycosides and E. faecium to aminoglycosides and macrolides, as predicted by the genome functional annotation. No pathogenicity islands were found in any of the strains, although traits such as the ability of biofilm formation and cell aggregation were observed. Finally, a comparative genomic analysis was able to discriminate between the isolated food strains and nosocomial strains. In summary, pathogenic strains are resistant to a wide range of antibiotics and contain virulence factors that cause host damage; in contrast, food strains display less antibiotic resistance, include genes that encode class II bacteriocins and express virulence factors associated with host colonisation rather than invasion.
Auto Code Generation for Simulink-Based Attitude Determination Control System
NASA Technical Reports Server (NTRS)
MolinaFraticelli, Jose Carlos
2012-01-01
This paper details the work done to auto-generate C code from a Simulink-based Attitude Determination Control System (ADCS) to be used on target platforms. NASA Marshall engineers have developed an ADCS Simulink simulation to be used as a component of the flight software of a satellite. The generated code can be used for hardware-in-the-loop testing of satellite components in a convenient manner with easily tunable parameters. Due to the nature of embedded hardware components such as microcontrollers, this simulation code cannot be used directly, as is, on the target platform and must first be converted into C code; this process is known as auto code generation. In order to generate C code from this simulation, it must be modified to follow specific standards set in place by the auto code generation process. Some of these modifications include changing certain simulation models into their atomic representations, which can bring new complications into the simulation. The execution order of these models can change based on these modifications. Great care must be taken in order to maintain a working simulation that can also be used for auto code generation. After modifying the ADCS simulation for the auto code generation process, it is shown that the difference between the output data of the former and that of the latter is within acceptable bounds. Thus, the process can be considered a success, since all the output requirements are met. Based on these results, it can be argued that the generated C code can be used effectively by any desired platform as long as it follows the specific memory requirements established in the Simulink model.
Advancements in OSeMOSYS - the Open Source energy MOdelling SYStem
NASA Astrophysics Data System (ADS)
Gardumi, Francesco; Almulla, Youssef; Shivakumar, Abhishek; Taliotis, Constantinos; Howells, Mark
2017-04-01
This work provides a review of the latest developments and applications of the OSeMOSYS energy systems model generator. OSeMOSYS was launched at Oxford University in 2011, including co-authors from UCL, UNIDO, UCT, Stanford, PSI and other institutions. It was designed to fill a gap in the energy modelling toolkit, where no open source optimising model generators were available at the time. OSeMOSYS is free, open source and accessible. Written in the GNU MathProg programming language, it can generate anything from small village energy models up to global multi-resource integrated (Climate, Land, Energy, Water) models. In its most widespread version it calculates what investments to make, when, at what capacity and how to operate them, to meet given final demands and policy targets at the lowest cost. OSeMOSYS is structured into blocks of functionality, each consisting of a stand-alone set of equations which can be plugged into the core code to add specific insights for the case-study of interest. Originally, seven blocks of functionality were provided, for the objective function, costs, storage, capacity adequacy, energy balance, constraints and emissions, documented by plain English descriptions and algebraic formulations. Recently, the block for storage was deeply revised and developed, while new blocks of functionality were designed for studying the short-term implications of energy planning on the electricity system. These include equations for computing 1) the reserve capacity dispatch, 2) the costs of flexible operation of power plants and 3) the reserve capacity demand as a function of the penetration of intermittent renewables. Additionally, a revision of the whole code was completed, as the result of a public call launched and led by UNite Ideas. This allowed the computational time to be greatly reduced and opened up the path to refinements of the scales of analysis. Finally, the code was made available in the Python and GAMS programming languages, thus engaging two of the widest existing communities of programmers. These developments allowed a number of applications to be produced at different scales. Regional and country models were generated for the whole of South America and Sub-Saharan Africa. A Pan-European model is under development. Models of Cyprus and Tunisia, detailed down to the individual power plant, are among the latest applications. Finally, integrated assessment water-energy models have been generated for regions in Central Asia and the Balkans, in the framework of the UNECE Water Convention. These look into trans-boundary issues related to water and energy management along river basins, including detailed representations of water storage and cascading power plants. This multiplicity of developments and applications of OSeMOSYS engages a wide community of users and decision-makers and fosters the use of modelling tools for energy planning. This fulfils a scientific and social mission to empower communities with the development of solutions for better access to energy.
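At its core, the "what to invest in, when, and how to operate it, at lowest cost" calculation described here is a linear program. The toy below sketches a single-period capacity-expansion LP in Python with scipy; the technologies, costs and availability factors are invented illustrations, not the OSeMOSYS MathProg formulation.

# Toy capacity-expansion LP: choose capacities and dispatch at minimum cost.
from scipy.optimize import linprog

# Variables: x = [cap_gas, cap_wind, gen_gas, gen_wind]
c = [100.0, 160.0, 30.0, 0.0]        # annualised investment + variable costs
A_ub = [
    [0.0, 0.0, -1.0, -1.0],          # gen_gas + gen_wind >= demand
    [-0.9, 0.0, 1.0, 0.0],           # gen_gas  <= 0.90 * cap_gas (availability)
    [0.0, -0.35, 0.0, 1.0],          # gen_wind <= 0.35 * cap_wind (availability)
]
b_ub = [-100.0, 0.0, 0.0]            # demand = 100 energy units
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
print(res.x, res.fun)                # built capacities, dispatch, minimum cost

The full model generator adds time slices, multiple years, storage, emissions and policy constraints, but each block ultimately contributes rows of this kind to one optimisation problem.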
One-way quantum repeaters with quantum Reed-Solomon codes
NASA Astrophysics Data System (ADS)
Muralidharan, Sreraman; Zou, Chang-Ling; Li, Linshu; Jiang, Liang
2018-05-01
We show that quantum Reed-Solomon codes constructed from classical Reed-Solomon codes can approach the capacity of the quantum erasure channel of d-level systems for large dimension d. We study the performance of one-way quantum repeaters with these codes and obtain a significant improvement in key generation rate compared to previously investigated encoding schemes with quantum parity codes and quantum polynomial codes. We also compare the three generations of quantum repeaters using quantum Reed-Solomon codes and identify parameter regimes where each generation performs the best.
1981-12-01
Map files produced by the code generator: Symbol Map: library-file.library-unit[.subunit].SYMAP; Statement Map: library-file.library-unit[.subunit].SMAP; Type Map: library-file.library-unit[.subunit].TMAP. Code generator components: SYMAP (Symbol Map code generator), SMAP (updated Statement Map code generator), TMAP (Type Map code generator). A.3.5 The PUNIT Command: the PUNIT [...] Smap (Core.Stmtmap), NAME Tmap (Core.Typemap), END. Example A-3: Compiler Command Stream for the Code Generator (Texas Instruments Ada Optimizing Compiler, p. A-5).
NASA Astrophysics Data System (ADS)
Filippone, Antonio
2014-07-01
This contribution addresses the state-of-the-art in the field of aircraft noise prediction, simulation and minimisation. The point of view taken in this context is that of comprehensive models that couple the various aircraft systems with the acoustic sources, the propagation and the flight trajectories. After an exhaustive review of the present predictive technologies in the relevant fields (airframe, propulsion, propagation, aircraft operations, trajectory optimisation), the paper addresses items for further research and development. Examples are shown for several airplanes, including the Airbus A319-100 (CFM engines), the Bombardier Dash8-Q400 (PW150 engines, Dowty R408 propellers) and the Boeing B737-800 (CFM engines). Predictions are done with the flight mechanics code FLIGHT. The transfer function between flight mechanics and the noise prediction is discussed in some detail, along with the numerical procedures for validation and verification. Some code-to-code comparisons are shown. It is contended that the field of aircraft noise prediction has not yet reached a sufficient level of maturity. In particular, some parametric effects cannot be investigated, issues of accuracy are not currently addressed, and validation standards are still lacking.
X-ray backscatter radiography with lower open fraction coded masks
NASA Astrophysics Data System (ADS)
Muñoz, André A. M.; Vella, Anna; Healy, Matthew J. F.; Lane, David W.; Jupp, Ian; Lockley, David
2017-09-01
Single-sided radiographic imaging would find great utility in medical, aerospace and security applications. While coded apertures can be used to form such an image from backscattered X-rays, they suffer from near-field limitations that introduce noise. Several theoretical studies have indicated that, for an extended source, the image's signal-to-noise ratio may be optimised by using a low open fraction (<0.5) mask. However, few experimental results have been published for such low open fraction patterns, and details of their formulation are often unavailable or ambiguous. In this paper we address this process for two types of low open fraction mask, the dilute URA and the Singer set array. For the dilute URA, the procedure for producing multiple 2D array patterns from given 1D binary sequences (Barker codes) is explained. Their point spread functions are calculated and their imaging properties are critically reviewed. These results are then compared to those from the Singer set, and experimental exposures are presented for both types of pattern; their prospects for near-field imaging are discussed.
NASA Technical Reports Server (NTRS)
Whalen, Michael; Schumann, Johann; Fischer, Bernd
2002-01-01
Code certification is a lightweight approach to demonstrate software quality on a formal level. Its basic idea is to require producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates which can be checked independently. Since code certification uses the same underlying technology as program verification, it also requires many detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding these annotations to the code is time-consuming and error-prone. We address this problem by combining code certification with automatic program synthesis. We propose an approach to generate simultaneously, from a high-level specification, code and all annotations required to certify the generated code. Here, we describe a certification extension of AUTOBAYES, a synthesis tool which automatically generates complex data analysis programs from compact specifications. AUTOBAYES contains sufficient high-level domain knowledge to generate detailed annotations. This allows us to use a general-purpose verification condition generator to produce a set of proof obligations in first-order logic. The obligations are then discharged using the automated theorem prover E-SETHEO. We demonstrate our approach by certifying operator safety for a generated iterative data classification program without manual annotation of the code.
Automated Concurrent Blackboard System Generation in C++
NASA Technical Reports Server (NTRS)
Kaplan, J. A.; McManus, J. W.; Bynum, W. L.
1999-01-01
In his 1992 Ph.D. thesis, "Design and Analysis Techniques for Concurrent Blackboard Systems", John McManus defined several performance metrics for concurrent blackboard systems and developed a suite of tools for creating and analyzing such systems. These tools allow a user to analyze a concurrent blackboard system design and predict the performance of the system before any code is written. The design can be modified until simulated performance is satisfactory. Then, the code generator can be invoked to generate automatically all of the code required for the concurrent blackboard system except for the code implementing the functionality of each knowledge source. We have completed the port of the source code generator and a simulator for a concurrent blackboard system. The source code generator generates the necessary C++ source code to implement the concurrent blackboard system using Parallel Virtual Machine (PVM) running on a heterogeneous network of UNIX™ workstations. The concurrent blackboard simulator uses the blackboard specification file to predict the performance of the concurrent blackboard design. The only part of the source code for the concurrent blackboard system that the user must supply is the code implementing the functionality of the knowledge sources.
Automatic Certification of Kalman Filters for Reliable Code Generation
NASA Technical Reports Server (NTRS)
Denney, Ewen; Fischer, Bernd; Schumann, Johann; Richardson, Julian
2005-01-01
AUTOFILTER is a tool for automatically deriving Kalman filter code from high-level declarative specifications of state estimation problems. It can generate code with a range of algorithmic characteristics and for several target platforms. The tool has been designed with reliability of the generated code in mind and is able to automatically certify that the code it generates is free from various error classes. Since documentation is an important part of software assurance, AUTOFILTER can also automatically generate various human-readable documents, containing both design and safety related information. We discuss how these features address software assurance standards such as DO-178B.
NASA Astrophysics Data System (ADS)
Poikselkä, Katja; Leinonen, Mikko; Palosaari, Jaakko; Vallivaara, Ilari; Röning, Juha; Juuti, Jari
2017-09-01
This paper introduces a new type of piezoelectric actuator, Mikbal. The Mikbal was developed from a Cymbal by adding steel structures around the steel cap to increase displacement and reduce the amount of piezoelectric material used. Here the parameters of the steel cap of Mikbal and Cymbal actuators were optimised by using genetic algorithms in combination with Comsol Multiphysics FEM modelling software. The blocking force of the actuator was maximised for different values of displacement by optimising the height and the top diameter of the end cap profile so that their effect on displacement, blocking force and stresses could be analysed. The optimisation process was done for five Mikbal- and two Cymbal-type actuators with different diameters varying between 15 and 40 mm. A Mikbal with a Ø 25 mm piezoceramic disc and a Ø 40 mm steel end cap was produced and the performances of unclamped measured and modelled cases were found to correspond within 2.8% accuracy. With a piezoelectric disc of Ø 25 mm, the Mikbal created 72% greater displacement while blocking force was decreased 57% compared with a Cymbal with the same size disc. Even with a Ø 20 mm piezoelectric disc, the Mikbal was able to generate ∼10% higher displacement than a Ø 25 mm Cymbal. Thus, the introduced Mikbal structure presents a way to extend the displacement capabilities of a conventional Cymbal actuator for low-to-moderate force applications.
Collaborative development for setup, execution, sharing and analytics of complex NMR experiments.
Irvine, Alistair G; Slynko, Vadim; Nikolaev, Yaroslav; Senthamarai, Russell R P; Pervushin, Konstantin
2014-02-01
Factory settings of NMR pulse sequences are rarely ideal for every scenario in which they are utilised. The optimisation of NMR experiments has for many years been performed locally, with implementations often specific to an individual spectrometer. Furthermore, these optimised experiments are normally retained solely for the use of an individual laboratory, spectrometer or even single user. Here we introduce a web-based service that provides a database for the deposition, annotation and optimisation of NMR experiments. The application uses a Wiki environment to enable the collaborative development of pulse sequences. It also provides a flexible mechanism to automatically generate NMR experiments from deposited sequences. Multidimensional NMR experiments of proteins and other macromolecules consume significant resources, in terms of both spectrometer time and the effort required to analyse the results. Systematic analysis of simulated experiments can enable optimal allocation of NMR resources for structural analysis of proteins. Our web-based application (http://nmrplus.org) provides all the necessary information, including the auxiliaries (waveforms, decoupling sequences, etc.), for analysis of experiments by accurate numerical simulation of multidimensional NMR experiments. The online database of NMR experiments, together with a systematic evaluation of their sensitivity, provides a framework for selection of the most efficient pulse sequences. The development of such a framework provides a basis for the collaborative optimisation of pulse sequences by the NMR community, with the benefits of this collective effort being available to the whole community. Copyright © 2013 Elsevier Inc. All rights reserved.
Audit of Clinical Coding of Major Head and Neck Operations
Mitra, Indu; Malik, Tass; Homer, Jarrod J; Loughran, Sean
2009-01-01
INTRODUCTION Within the NHS, operations are coded using the Office of Population Censuses and Surveys (OPCS) classification system. These codes, together with diagnostic codes, are used to generate Healthcare Resource Group (HRG) codes, which correlate to a payment bracket. The aim of this study was to determine whether the allocated procedure codes for major head and neck operations were correct and reflective of the work undertaken. The HRG codes generated were assessed to determine accuracy of remuneration. PATIENTS AND METHODS The coding of consecutive major head and neck operations undertaken in a tertiary referral centre over a retrospective 3-month period was assessed. Procedure codes were initially ascribed by professional hospital coders. Operations were then recoded by the surgical trainee in liaison with the head of clinical coding. The initial and revised procedure codes were compared and used to generate HRG codes, to determine whether the payment banding had altered. RESULTS A total of 34 cases were reviewed. The number of procedure codes generated initially by the clinical coders was 99, whereas the revised coding generated 146. Of the original codes, 47 of 99 (47.4%) were incorrect. In 19 of the 34 cases reviewed (55.9%), the HRG code remained unchanged, thus resulting in the correct payment. Six cases were never coded, equating to a £15,300 loss of payment. CONCLUSIONS These results highlight the inadequacy of this system to reward hospitals for the work carried out within the NHS in a fair and consistent manner. The current coding system was found to be complicated, ambiguous and inaccurate, resulting in loss of remuneration. PMID:19220944
Statistical methods for convergence detection of multi-objective evolutionary algorithms.
Trautmann, H; Wagner, T; Naujoks, B; Preuss, M; Mehnen, J
2009-01-01
In this paper, two approaches for estimating the generation in which a multi-objective evolutionary algorithm (MOEA) shows statistically significant signs of convergence are introduced. A set-based perspective is taken where convergence is measured by performance indicators. The proposed techniques fulfill the requirements of proper statistical assessment on the one hand and efficient optimisation for real-world problems on the other hand. The first approach accounts for the stochastic nature of the MOEA by repeating the optimisation runs for increasing generation numbers and analysing the performance indicators using statistical tools. This technique results in a very robust offline procedure. Moreover, an online convergence detection method is introduced as well. This method automatically stops the MOEA when either the variance of the performance indicators falls below a specified threshold or a stagnation of their overall trend is detected. Both methods are analysed and compared for two MOEAs and on different classes of benchmark functions. It is shown that the methods successfully operate on all stated problems, needing fewer function evaluations while preserving good approximation quality at the same time.
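The online stopping rule can be sketched compactly. The toy below halts when the indicator's variance over a window drops below a threshold or its linear trend stagnates; it is a hedged illustration only, since the paper uses formal statistical tests on indicators such as hypervolume, and the window and thresholds here are placeholders.

# Illustrative online convergence detection for an indicator time series.
from statistics import mean, variance

def should_stop(indicator_history, window=10, var_eps=1e-6, slope_eps=1e-4):
    if len(indicator_history) < window:
        return False
    recent = indicator_history[-window:]
    if variance(recent) < var_eps:           # indicator has stopped varying
        return True
    xs = range(window)                       # least-squares slope of the trend
    x_bar, y_bar = mean(xs), mean(recent)
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, recent)) / \
            sum((x - x_bar) ** 2 for x in xs)
    return abs(slope) < slope_eps            # overall trend has stagnated

Inside the MOEA loop one would append the generation's indicator value and break once should_stop returns True.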
PCC Framework for Program-Generators
NASA Technical Reports Server (NTRS)
Kong, Soonho; Choi, Wontae; Yi, Kwangkeun
2009-01-01
In this paper, we propose a proof-carrying code framework for program-generators. The enabling technique is abstract parsing, a static string analysis technique, which is used as a component for generating and validating certificates. Our framework provides an efficient solution for certifying program-generators whose safety properties are expressed in terms of the grammar representing the generated program. The fixed-point solution of the analysis is generated and attached with the program-generator on the code producer side. The consumer receives the code with a fixed-point solution and validates that the received fixed point is indeed a fixed point of the received code. This validation can be done in a single pass.
Incorporating Manual and Autonomous Code Generation
NASA Technical Reports Server (NTRS)
McComas, David
1998-01-01
Code can be generated manually or using code-generation software tools, but how do you integrate the two? This article looks at a design methodology that combines object-oriented design with autonomic code generation for attitude control flight software. Recent improvements in space flight computers are allowing software engineers to spend more time engineering the applications software. The application developed was the attitude control flight software for an astronomical satellite called the Microwave Anisotropy Probe (MAP). The MAP flight system is being designed, developed, and integrated at NASA's Goddard Space Flight Center. The MAP controls engineers are using Integrated Systems Inc.'s MATRIXx for their controls analysis. In addition to providing a graphical analysis environment, MATRIXx includes an autonomic code generation facility called AutoCode. This article examines the forces that shaped the final design and describes three highlights of the design process: (1) defining the manual to autonomic code interface; (2) applying object-oriented design to the manual flight code; (3) implementing the object-oriented design in C.
Gene-Auto: Automatic Software Code Generation for Real-Time Embedded Systems
NASA Astrophysics Data System (ADS)
Rugina, A.-E.; Thomas, D.; Olive, X.; Veran, G.
2008-08-01
This paper gives an overview of the Gene-Auto ITEA European project, which aims at building a qualified C code generator from mathematical models under Matlab-Simulink and Scilab-Scicos. The project is driven by major European industry partners, active in the real-time embedded systems domains. The Gene-Auto code generator will significantly improve the current development processes in such domains by shortening the time to market and by guaranteeing the quality of the generated code through the use of formal methods. The first version of the Gene-Auto code generator has already been released and has gone through a validation phase on real-life case studies defined by each project partner. The validation results are taken into account in the implementation of the second version of the code generator. The partners aim at introducing the Gene-Auto results into industrial development by 2010.
Zhang, Fangzheng; Ge, Xiaozhong; Gao, Bindong; Pan, Shilong
2015-08-24
A novel scheme for photonic generation of a phase-coded microwave signal is proposed, and its application in one-dimensional distance measurement is demonstrated. The proposed signal generator has a simple and compact structure based on a single dual-polarization modulator. Besides, the generated phase-coded signal is stable and free from DC and low-frequency backgrounds. An experiment is carried out. A 2 Gb/s phase-coded signal at 20 GHz is successfully generated, and the recovered phase information agrees well with the input 13-bit Barker code. To further investigate the performance of the proposed signal generator, its application in one-dimensional distance measurement is demonstrated. The measurement accuracy is better than 1.7 centimeters within a measurement range of ~2 meters. The experimental results verify the feasibility of the proposed phase-coded microwave signal generator and provide strong evidence to support its practical applications.
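The 13-bit Barker code mentioned here is exactly what makes phase coding attractive for ranging: its aperiodic autocorrelation has a peak of 13 with sidelobes of magnitude at most 1, giving strong pulse compression. The quick check below is a sketch independent of the photonic setup.

# Autocorrelation of the 13-bit Barker code (+1/-1 chips).
barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]

def autocorrelation(seq):
    n = len(seq)
    return [sum(seq[i] * seq[i + lag] for i in range(n - lag)) for lag in range(n)]

print(autocorrelation(barker13))  # [13, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1]

Correlating the received phase pattern against this sequence is what turns the coded waveform into a sharp ranging peak.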
NASA Astrophysics Data System (ADS)
Jiménez-Redondo, Noemi; Calle-Cordón, Alvaro; Kandler, Ute; Simroth, Axel; Morales, Francisco J.; Reyes, Antonio; Odelius, Johan; Thaduri, Aditya; Morgado, Joao; Duarte, Emmanuele
2017-09-01
The ongoing H2020 project INFRALERT aims to increase rail and road infrastructure capacity, in the current context of increasing transportation demand, by developing and deploying solutions to optimise the planning of maintenance interventions. It includes two real pilots, for road and railway infrastructure. INFRALERT develops an ICT platform (the expert-based Infrastructure Management System, eIMS) which follows a modular approach comprising several expert-based toolkits. This paper presents the methodologies and preliminary results of the toolkits for i) nowcasting and forecasting of asset condition, ii) alert generation, iii) RAMS & LCC analysis and iv) decision support. The results of these toolkits for a meshed road network in Portugal under the jurisdiction of Infraestruturas de Portugal (IP) are presented, showing the capabilities of the approaches.
A simple model clarifies the complicated relationships of complex networks
Zheng, Bojin; Wu, Hongrun; Kuang, Li; Qin, Jun; Du, Wenhua; Wang, Jianmin; Li, Deyi
2014-01-01
Real-world networks such as the Internet and WWW share many common traits. Until now, hundreds of models have been proposed to characterise these traits and thereby understand the networks. Because different models use very different mechanisms, it is widely believed that these traits originate from different causes. However, we find that a simple model based on optimisation can produce many traits, including scale-free, small-world, ultra-small-world, Delta-distribution, compact, fractal, regular and random networks. Moreover, by revising the proposed model, community-structure networks are generated. With this model and its revised versions, the complicated relationships of complex networks are illustrated. The model brings a new universal perspective to the understanding of complex networks and provides a universal method for modelling complex networks from the viewpoint of optimisation. PMID:25160506
Selecting a climate model subset to optimise key ensemble properties
NASA Astrophysics Data System (ADS)
Herger, Nadja; Abramowitz, Gab; Knutti, Reto; Angélil, Oliver; Lehmann, Karsten; Sanderson, Benjamin M.
2018-02-01
End users studying impacts and risks caused by human-induced climate change are often presented with large multi-model ensembles of climate projections whose composition and size are arbitrarily determined. An efficient and versatile method that finds a subset which maintains certain key properties from the full ensemble is needed, but very little work has been done in this area. Therefore, users typically make their own somewhat subjective subset choices and commonly use the equally weighted model mean as a best estimate. However, different climate model simulations cannot necessarily be regarded as independent estimates due to the presence of duplicated code and shared development history. Here, we present an efficient and flexible tool that makes better use of the ensemble as a whole by finding a subset with improved mean performance compared to the multi-model mean while at the same time maintaining the spread and addressing the problem of model interdependence. Out-of-sample skill and reliability are demonstrated using model-as-truth experiments. This approach is illustrated with one set of optimisation criteria but we also highlight the flexibility of cost functions, depending on the focus of different users. The technique is useful for a range of applications that, for example, minimise present-day bias to obtain an accurate ensemble mean, reduce dependence in ensemble spread, maximise future spread, ensure good performance of individual models in an ensemble, reduce the ensemble size while maintaining important ensemble characteristics, or optimise several of these at the same time. As in any calibration exercise, the final ensemble is sensitive to the metric, observational product, and pre-processing steps used.
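As a hedged illustration of the subset-selection idea, the brute-force toy below picks the k-member subset whose ensemble-mean error is smallest while keeping the subset spread close to the full ensemble's. The actual tool's cost functions, observational products and pre-processing are far richer; the data, weight and metric here are invented placeholders.

# Toy subset selection: trade off mean bias against loss of ensemble spread.
from itertools import combinations
from statistics import mean, pstdev

def select_subset(model_values, obs, k, alpha=1.0):
    full_spread = pstdev(model_values)
    best, best_cost = None, float("inf")
    for subset in combinations(model_values, k):
        bias = abs(mean(subset) - obs)                 # ensemble-mean error
        spread_gap = abs(pstdev(subset) - full_spread) # keep spread similar
        cost = bias + alpha * spread_gap
        if cost < best_cost:
            best, best_cost = subset, cost
    return best, best_cost

models = [14.2, 13.1, 15.0, 14.8, 12.7, 13.9, 15.4]    # e.g. global-mean T (°C)
print(select_subset(models, obs=14.0, k=3))

For realistic ensemble sizes the exhaustive search is replaced by the optimisation machinery the paper describes, and a dependence term would be added to the cost.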
New GOES satellite synchronized time code generation
NASA Technical Reports Server (NTRS)
Fossler, D. E.; Olson, R. K.
1984-01-01
The TRAK Systems' GOES Satellite Synchronized Time Code Generator is described. TRAK Systems has developed this timing instrument to supply improved accuracy over most existing GOES receiver clocks. A classical time code generator is integrated with a GOES receiver.
Nicholson, Amanda; Ford, Elizabeth; Davies, Kevin A.; Smith, Helen E.; Rait, Greta; Tate, A. Rosemary; Petersen, Irene; Cassell, Jackie
2013-01-01
Background Research using electronic health records (EHRs) relies heavily on coded clinical data. Due to variation in coding practices, it can be difficult to aggregate the codes for a condition in order to define cases. This paper describes a methodology to develop ‘indicator markers’ found in patients with early rheumatoid arthritis (RA); these are a broader range of codes which may allow a probabilistic case definition to use in cases where no diagnostic code is yet recorded. Methods We examined EHRs of 5,843 patients in the General Practice Research Database, aged ≥30y, with a first coded diagnosis of RA between 2005 and 2008. Lists of indicator markers for RA were developed initially by panels of clinicians drawing up code-lists and then modified based on scrutiny of available data. The prevalence of indicator markers, and their temporal relationship to RA codes, was examined in patients from 3y before to 14d after recorded RA diagnosis. Findings Indicator markers were common throughout EHRs of RA patients, with 83.5% having 2 or more markers. 34% of patients received a disease-specific prescription before RA was coded; 42% had a referral to rheumatology, and 63% had a test for rheumatoid factor. 65% had at least one joint symptom or sign recorded and in 44% this was at least 6-months before recorded RA diagnosis. Conclusion Indicator markers of RA may be valuable for case definition in cases which do not yet have a diagnostic code. The clinical diagnosis of RA is likely to occur some months before it is coded, shown by markers frequently occurring ≥6 months before recorded diagnosis. It is difficult to differentiate delay in diagnosis from delay in recording. Information concealed in free text may be required for the accurate identification of patients and to assess the quality of care in general practice. PMID:23451024
NASA Astrophysics Data System (ADS)
Behera, Kishore Kumar; Pal, Snehanshu
2018-03-01
This paper describes a new approach towards optimum utilisation of ferrochrome added during stainless steel making in an AOD converter. The objective of the optimisation is to enhance the end-blow chromium content of the steel and reduce the ferrochrome addition during refining. By developing a thermodynamics-based mathematical model, a study has been conducted to compute the optimum trade-off between ferrochrome addition and end-blow chromium content of stainless steel using a predator-prey genetic algorithm trained on a dataset of 100 records covering input and output variables such as oxygen, argon and nitrogen blowing rates, duration of blowing, initial bath temperature, chromium and carbon content, and weight of ferrochrome added during refining. Optimisation is performed subject to constraints that confine the input parameters to specified ranges. Analysis of the Pareto fronts yields a set of feasible optimal solutions trading off the two conflicting objectives, providing an effective guideline for better ferrochrome utilisation. It is found that beyond a certain critical range, further addition of ferrochrome does not affect the chromium percentage of the steel. Single-variable response analysis is performed to study the variation and interaction of all individual input parameters on the output variables.
Lévy flight artificial bee colony algorithm
NASA Astrophysics Data System (ADS)
Sharma, Harish; Bansal, Jagdish Chand; Arya, K. V.; Yang, Xin-She
2016-08-01
Artificial bee colony (ABC) optimisation is a relatively simple and recent population-based probabilistic approach for global optimisation. The solution search equation of ABC is significantly influenced by a random quantity which helps exploration at the cost of exploitation of the search space. In ABC, there is a high chance of skipping the true solution due to its large step sizes. In order to balance diversity and convergence in ABC, a Lévy flight inspired search strategy is proposed and integrated with ABC. The proposed strategy, named Lévy Flight ABC (LFABC), provides both local and global search capability simultaneously, achieved by tuning the Lévy flight parameters and thus automatically adapting the step sizes. In LFABC, new solutions are generated around the best solution, which enhances the exploitation capability of ABC. Furthermore, to improve the exploration capability, the number of scout bees is increased. The experiments on 20 test problems of different complexities and five real-world engineering optimisation problems show that the proposed strategy outperforms the basic ABC and recent variants of ABC, namely Gbest-guided ABC, best-so-far ABC and modified ABC, in most of the experiments.
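A step-size rule of this kind is commonly implemented with Mantegna's algorithm for drawing Lévy-stable samples. The sketch below is illustrative only; the update line at the end is a hypothetical ABC-style move around the best food source, not the authors' exact equation.

```python
import numpy as np
from scipy.special import gamma

def levy_step(beta=1.5, size=1):
    """Draw heavy-tailed step lengths via Mantegna's algorithm."""
    sigma = (gamma(1 + beta) * np.sin(np.pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma, size)
    v = np.random.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

# Hypothetical LFABC-style move: mostly short refinements near the best
# solution (exploitation) with occasional long jumps (exploration).
best = np.array([1.0, 2.0])
current = np.array([0.5, 1.5])
candidate = best + levy_step(beta=1.5, size=2) * (best - current)
```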
Punzalan, Florencio Rusty; Kunieda, Yoshitoshi; Amano, Akira
2015-01-01
Clinical and experimental studies involving human hearts can have certain limitations. Methods such as computer simulations can be an important alternative or supplemental tool. Physiological simulation at the tissue or organ level typically involves the handling of partial differential equations (PDEs). Boundary conditions and distributed parameters, such as those used in pharmacokinetics simulation, add to the complexity of the PDE solution. These factors can tailor PDE solutions and their corresponding program code to specific problems. Boundary condition and parameter changes in the customized code are usually error-prone and time-consuming. We propose a general approach for handling PDEs and boundary conditions in computational models using a replacement scheme for discretization. This study is an extension of a program generator that we introduced in a previous publication. The program generator can generate code for multi-cell simulations of cardiac electrophysiology. Improvements to the system allow it to handle simultaneous equations in the biological function model as well as implicit PDE numerical schemes. The replacement scheme involves substituting all partial differential terms with numerical solution equations. Once the model and boundary equations are discretized with the numerical solution scheme, instances of the equations are generated to undergo dependency analysis. The result of the dependency analysis is then used to generate the program code. The resulting program code is in the Java or C programming language. To validate the automatic handling of boundary conditions in the program code generator, we generated simulation code using the FHN, Luo-Rudy 1, and Hund-Rudy cell models and ran cell-to-cell coupling and action potential propagation simulations. One of the simulations is based on a published experiment and the simulation results are compared with the experimental data. We conclude that the proposed program code generator can be used to generate code for physiological simulations and provides a tool for studying cardiac electrophysiology. PMID:26356082
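To make the replacement idea concrete, here is a minimal hand-written sketch of what such generated code boils down to for a 1D cable-type equation: the spatial second-derivative term is replaced by a central finite difference and each cell is stepped explicitly. The model, parameter values and boundary treatment are illustrative assumptions, not the generator's actual output.

```python
import numpy as np

def step(V, I_ion, D, dx, dt):
    """One explicit update of dV/dt = D * d2V/dx2 - I_ion(V) per cell."""
    d2V = np.empty_like(V)
    d2V[1:-1] = (V[2:] - 2.0 * V[1:-1] + V[:-2]) / dx**2   # replaced PDE term
    d2V[0] = 2.0 * (V[1] - V[0]) / dx**2      # no-flux boundary (mirrored ghost node)
    d2V[-1] = 2.0 * (V[-2] - V[-1]) / dx**2
    return V + dt * (D * d2V - I_ion(V))

V = np.full(100, -80.0)                        # resting potential, mV
V[:5] = 20.0                                   # stimulated cells at one end
for _ in range(1000):                          # propagation along the fibre
    V = step(V, lambda v: 0.01 * (v + 80.0), D=1.0, dx=0.1, dt=0.001)
```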
Method and apparatus for determining position using global positioning satellites
NASA Technical Reports Server (NTRS)
Ward, John (Inventor); Ward, William S. (Inventor)
1998-01-01
A global positioning satellite receiver having an antenna for receiving an L1 signal from a satellite. The L1 signal is processed by a preamplifier stage including a band pass filter and a low noise amplifier and output as a radio frequency (RF) signal. A mixer receives and de-spreads the RF signal in response to a pseudo-random noise code, i.e., Gold code, generated by an internal pseudo-random noise code generator. A microprocessor enters a code tracking loop, such that during the code tracking loop, it addresses the pseudo-random code generator to cause the pseudo-random code generator to sequentially output pseudo-random codes corresponding to satellite codes used to spread the L1 signal, until correlation occurs. When an output of the mixer is indicative of the occurrence of correlation between the RF signal and the generated pseudo-random codes, the microprocessor enters an operational state which slows the receiver code sequence to stay locked with the satellite code sequence. The output of the mixer is provided to a detector which, in turn, controls certain routines of the microprocessor. The microprocessor will output pseudo range information according to an interrupt routine in response to detection of correlation. The pseudo range information is to be telemetered to a ground station which determines the position of the global positioning satellite receiver.
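For illustration, the sketch below generates one period of a GPS C/A Gold code from the standard pair of 10-stage shift registers and performs the circular correlation a code search loop relies on. This is a generic textbook construction rather than the patent's implementation; the tap pair (2, 6) is the one usually tabulated for PRN 1.

```python
import numpy as np

def ca_code(phase_taps=(2, 6)):
    """Generate one period (1023 chips) of a GPS C/A Gold code.
    phase_taps selects the satellite-specific G2 output taps."""
    g1 = [1] * 10                              # both registers start all-ones
    g2 = [1] * 10
    code = []
    for _ in range(1023):
        out = g1[9] ^ (g2[phase_taps[0] - 1] ^ g2[phase_taps[1] - 1])
        code.append(1 - 2 * out)               # map {0,1} -> {+1,-1}
        g1 = [g1[2] ^ g1[9]] + g1[:9]          # G1 feedback: stages 3, 10
        g2 = [g2[1] ^ g2[2] ^ g2[5] ^ g2[7] ^ g2[8] ^ g2[9]] + g2[:9]
    return np.array(code)

def correlate(rx, replica):
    """Circular correlation as used in a code search/tracking loop."""
    return np.array([np.dot(rx, np.roll(replica, k)) for k in range(len(replica))])

prn1 = ca_code((2, 6))
peaks = correlate(prn1, prn1)   # sharp maximum at zero lag indicates lock
```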
Wittevrongel, Benjamin; Van Wolputte, Elia; Van Hulle, Marc M
2017-11-08
When encoding visual targets using various lagged versions of a pseudorandom binary sequence of luminance changes, the EEG signal recorded over the viewer's occipital pole exhibits so-called code-modulated visual evoked potentials (cVEPs), the phase lags of which can be tied to these targets. The cVEP paradigm has enjoyed interest in the brain-computer interfacing (BCI) community for its reported high information transfer rates (ITR, in bits/min). In this study, we introduce a novel decoding algorithm based on spatiotemporal beamforming, and show that this algorithm is able to accurately identify the gazed target. Especially for a small number of repetitions of the coding sequence, our beamforming approach significantly outperforms an optimised support vector machine (SVM)-based classifier, which is considered state-of-the-art in cVEP-based BCI. In addition to the traditional 60 Hz stimulus presentation rate for the coding sequence, we also explore the 120 Hz rate, and show that the latter enables faster communication, with a maximal median ITR of 172.87 bits/min. Finally, we also report on a transition effect in the EEG signal following the onset of the stimulus sequence, and recommend excluding the first 150 ms of the trials from decoding when relying on a single presentation of the stimulus sequence.
Automated apparatus and method of generating native code for a stitching machine
NASA Technical Reports Server (NTRS)
Miller, Jeffrey L. (Inventor)
2000-01-01
A computer system automatically generates CNC code for a stitching machine. The computer determines the locations of a present stitching point and a next stitching point. If a constraint is not found between the present stitching point and the next stitching point, the computer generates code for making a stitch at the next stitching point. If a constraint is found, the computer generates code for changing a condition (e.g., direction) of the stitching machine's stitching head.
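A toy sketch of this generate-unless-constrained loop follows. The command mnemonics and the sharp-turn constraint are invented for illustration; real output would follow the stitching machine's native CNC dialect.

```python
import math

def generate_stitch_code(points, max_turn_deg=45.0):
    """Emit one stitch command per point; emit a head-direction change
    first whenever a constraint (here, a sharp turn) is detected."""
    lines, heading = [], None
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        new_heading = math.degrees(math.atan2(y1 - y0, x1 - x0))
        # naive angle difference (ignores wrap-around) for brevity
        if heading is not None and abs(new_heading - heading) > max_turn_deg:
            lines.append(f"ROTATE_HEAD A{new_heading:.1f}")   # handle the constraint
        heading = new_heading
        lines.append(f"STITCH X{x1:.3f} Y{y1:.3f}")
    return "\n".join(lines)

print(generate_stitch_code([(0, 0), (1, 0), (2, 0), (2, 1)]))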
ESAS Deliverable PS 1.1.2.3: Customer Survey on Code Generations in Safety-Critical Applications
NASA Technical Reports Server (NTRS)
Schumann, Johann; Denney, Ewen
2006-01-01
Automated code generators (ACG) are tools that convert a (higher-level) model of a software (sub-)system into executable code without the necessity for a developer to actually implement the code. Although both commercially supported and in-house tools have been used in many industrial applications, little data exists on how these tools are used in safety-critical domains (e.g., spacecraft, aircraft, automotive, nuclear). The aims of the survey, therefore, were threefold: 1) to determine if code generation is primarily used as a tool for prototyping, including design exploration and simulation, or for flight/production code; 2) to determine the verification issues with code generators relating, in particular, to qualification and certification in safety-critical domains; and 3) to determine perceived gaps in functionality of existing tools.
Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun
1996-01-01
In this paper, the bit error probability P_b for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P_b is considered. For randomly generated codes, it is shown that the conventional high-SNR approximation P_b ≈ (d_H/N)P_s, where P_s represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft decision decoding, equivalent schemes that reduce the bit error probability are discussed.
HOPE: A Python just-in-time compiler for astrophysical computations
NASA Astrophysics Data System (ADS)
Akeret, J.; Gamper, L.; Amara, A.; Refregier, A.
2015-04-01
The Python programming language is becoming increasingly popular for scientific applications due to its simplicity, versatility, and the broad range of its libraries. A drawback of this dynamic language, however, is its low runtime performance which limits its applicability for large simulations and for the analysis of large data sets, as is common in astrophysics and cosmology. While various frameworks have been developed to address this limitation, most focus on covering the complete language set, and either force the user to alter the code or are not able to reach the full speed of an optimised native compiled language. In order to combine the ease of Python and the speed of C++, we developed HOPE, a specialised Python just-in-time (JIT) compiler designed for numerical astrophysical applications. HOPE focuses on a subset of the language and is able to translate Python code into C++ while performing numerical optimisation on mathematical expressions at runtime. To enable the JIT compilation, the user only needs to add a decorator to the function definition. We assess the performance of HOPE by performing a series of benchmarks and compare its execution speed with that of plain Python, C++ and the other existing frameworks. We find that HOPE improves the performance compared to plain Python by a factor of 2 to 120, achieves speeds comparable to that of C++, and often exceeds the speed of the existing solutions. We discuss the differences between HOPE and the other frameworks, as well as future extensions of its capabilities. The fully documented HOPE package is available at http://hope.phys.ethz.ch and is published under the GPLv3 license on PyPI and GitHub.
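As the abstract notes, enabling the JIT only requires decorating a function. A minimal usage sketch follows, assuming the decorator is exposed as hope.jit as in the package documentation; the numerical kernel itself is an arbitrary example, and the hope package must be installed for this to run.

```python
import hope
import numpy as np

@hope.jit                      # first call triggers translation to C++ and compilation
def sersic(r, amplitude, r_eff, n):
    return amplitude * np.exp(-(r / r_eff) ** (1.0 / n))

r = np.linspace(0.1, 10.0, 1000)
profile = sersic(r, 1.0, 2.5, 4.0)   # subsequent calls reuse the compiled kernel
```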
NASA Astrophysics Data System (ADS)
Tsujimura, T., Ii; Kubo, S.; Takahashi, H.; Makino, R.; Seki, R.; Yoshimura, Y.; Igami, H.; Shimozuma, T.; Ida, K.; Suzuki, C.; Emoto, M.; Yokoyama, M.; Kobayashi, T.; Moon, C.; Nagaoka, K.; Osakabe, M.; Kobayashi, S.; Ito, S.; Mizuno, Y.; Okada, K.; Ejiri, A.; Mutoh, T.
2015-11-01
The central electron temperature has successfully reached up to 7.5 keV in large helical device (LHD) plasmas with a central high-ion temperature of 5 keV and a central electron density of 1.3 × 10^19 m^-3. This result was obtained by heating with a newly-installed 154 GHz gyrotron and also the optimisation of injection geometry in electron cyclotron heating (ECH). The optimisation was carried out by using the ray-tracing code ‘LHDGauss’, which was upgraded to include the rapid post-processing three-dimensional (3D) equilibrium mapping obtained from experiments. For ray-tracing calculations, LHDGauss can automatically read the relevant data registered in the LHD database after a discharge, such as ECH injection settings (e.g. Gaussian beam parameters, target positions, polarisation and ECH power) and Thomson scattering diagnostic data along with the 3D equilibrium mapping data. The equilibrium map of the electron density and temperature profiles are then extrapolated into the region outside the last closed flux surface. Mode purity, or the ratio between the ordinary mode and the extraordinary mode, is obtained by calculating the 1D full-wave equation along the direction of the rays from the antenna to the absorption target point. Using the virtual magnetic flux surfaces, the effects of the modelled density profiles and the magnetic shear at the peripheral region with a given polarisation are taken into account. Power deposition profiles calculated for each Thomson scattering measurement timing are registered in the LHD database. The adjustment of the injection settings for the desired deposition profile from the feedback provided on a shot-by-shot basis resulted in an effective experimental procedure.
Chin, N; Perera, P; Roberts, A; Nagappan, R
2013-07-01
Accurate and comprehensive clinical documentation is crucial for effective ongoing patient care and follow-up, and to optimise case mix-based funding. Each Diagnostic Related Group (DRG) is assigned a 'weight', leading to Weighted Inlier Equivalent Separation (WIES), a system to which many public and private hospitals in Australia subscribe. The aims were to identify the top DRGs in a general medical inpatient service, the completeness of medical discharge documentation, commonly missed comorbidities and system-related issues, and the subsequent impact on DRG and WIES allocation. One hundred and fifty completed discharge summaries were randomly selected from the top 10 medical DRGs in our health service. From a detailed review of the clinical documentation, principal diagnoses, associated comorbidities and complications, where appropriate, the DRG and WIES were modified. Seventy-two (48%) of the 150 reviewed admissions resulted in a revision of DRG and WIES equivalent to an increase of AUD 142,000. Respiratory-based DRGs generated the largest revision of DRG and WIES, while the 'Cellulitis' DRG had the largest relative change. Twenty-seven per cent of summaries reviewed necessitated a change in coding with no subsequent change in DRG allocation or WIES. Acute renal failure, anaemia and electrolyte disturbances were the most commonly underrepresented entities in clinical discharge documentation. Seven patients had their WIES downgraded. Comprehensive documentation of principal diagnosis/diagnoses, comorbidities and their complications is imperative for optimal DRG and WIES allocation. Regular meetings between clinical and coding staff improve the quality and timeliness of medical documentation, ensure adequate communication with general practitioners and lead to appropriate funding. © 2013 The Authors; Internal Medicine Journal © 2013 Royal Australasian College of Physicians.
NASA Astrophysics Data System (ADS)
Fouladi, Ehsan; Mojallali, Hamed
2018-01-01
In this paper, an adaptive backstepping controller has been tuned to synchronise two chaotic Colpitts oscillators in a master-slave configuration. The parameters of the controller are determined using the shark smell optimisation (SSO) algorithm. Numerical results are presented and compared with those of the particle swarm optimisation (PSO) algorithm. Simulation results show better performance in terms of accuracy and convergence for the proposed optimised method compared to the PSO-optimised controller or any non-optimised backstepping controller.
2013-01-01
Background Primary care databases are a major source of data for epidemiological and health services research. However, most studies are based on coded information, ignoring information stored in free text. Using the early presentation of rheumatoid arthritis (RA) as an exemplar, our objective was to estimate the extent of data hidden within free text, using a keyword search. Methods We examined the electronic health records (EHRs) of 6,387 patients from the UK, aged 30 years and older, with a first coded diagnosis of RA between 2005 and 2008. We listed indicators for RA which were present in coded format and ran keyword searches for similar information held in free text. The frequency of indicator code groups and keywords from one year before to 14 days after RA diagnosis were compared, and temporal relationships examined. Results One or more keyword for RA was found in the free text in 29% of patients prior to the RA diagnostic code. Keywords for inflammatory arthritis diagnoses were present for 14% of patients whereas only 11% had a diagnostic code. Codes for synovitis were found in 3% of patients, but keywords were identified in an additional 17%. In 13% of patients there was evidence of a positive rheumatoid factor test in text only, uncoded. No gender differences were found. Keywords generally occurred close in time to the coded diagnosis of rheumatoid arthritis. They were often found under codes indicating letters and communications. Conclusions Potential cases may be missed or wrongly dated when coded data alone are used to identify patients with RA, as diagnostic suspicions are frequently confined to text. The use of EHRs to create disease registers or assess quality of care will be misleading if free text information is not taken into account. Methods to facilitate the automated processing of text need to be developed and implemented. PMID:23964710
Multi-agent modelling framework for water, energy and other resource networks
NASA Astrophysics Data System (ADS)
Knox, S.; Selby, P. D.; Meier, P.; Harou, J. J.; Yoon, J.; Lachaut, T.; Klassert, C. J. A.; Avisse, N.; Mohamed, K.; Tomlinson, J.; Khadem, M.; Tilmant, A.; Gorelick, S.
2015-12-01
Bespoke modelling tools are often needed when planning future engineered interventions in the context of various climate, socio-economic and geopolitical futures. Such tools can help improve system operating policies or assess infrastructure upgrades and their risks. A frequently used approach is to simulate and/or optimise the impact of interventions in engineered systems. Modelling complex infrastructure systems can involve incorporating multiple aspects into a single model, for example physical, economic and political. This presents the challenge of effectively combining research from diverse areas into a single system. We present the Pynsim 'Python Network Simulator' framework, a library for building simulation models capable of representing the physical, institutional and economic aspects of an engineered resource system. Pynsim is an open-source, object-oriented code library aiming to promote the integration of different modelling processes through a single code base. We present two case studies that demonstrate important features of Pynsim's design. The first is a large interdisciplinary project on a national water system in the Middle East with modellers from fields including water resources, economics, hydrology and geography, each considering different facets of a multi-agent system. It includes: modelling water supply and demand for households and farms; a water tanker market with transfer of water between farms and households; and policy decisions made by government institutions at district, national and international level. This study demonstrates that a well-structured library of code can provide a hub for development and act as a catalyst for integrating models. The second focuses on optimising the location of new run-of-river hydropower plants. Using a multi-objective evolutionary algorithm, this study analyses different network configurations to identify the optimal placement of new power plants within a river network. This demonstrates that Pynsim can be used to evaluate a multitude of topologies for identifying the optimal location of infrastructure investments. Pynsim is available on GitHub or via standard Python installer packages such as pip. It comes with several examples and online documentation, making it attractive for those less experienced in software engineering.
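For readers unfamiliar with this style of framework, the skeleton below shows the general shape of an agent-based network simulation: nodes encapsulate their own state and update once per timestep. All class and method names here are hypothetical illustrations, not the actual Pynsim API; consult the Pynsim documentation for the real interface.

```python
class Node:
    def __init__(self, name):
        self.name = name
    def setup(self, timestep):
        pass  # each agent updates its own state once per timestep

class Household(Node):
    def setup(self, timestep):
        self.demand = 120.0          # litres/day, a fixed placeholder demand

class Reservoir(Node):
    def __init__(self, name, inflows):
        super().__init__(name)
        self.storage, self.inflows = 0.0, inflows
    def setup(self, timestep):
        self.storage += self.inflows[timestep]

class Simulator:
    def __init__(self, nodes):
        self.nodes = nodes
    def run(self, timesteps):
        for t in timesteps:
            for node in self.nodes:  # institutions/engines would hook in here
                node.setup(t)

sim = Simulator([Reservoir("dam", {0: 5.0, 1: 3.2}), Household("house")])
sim.run([0, 1])
```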
Conjugate gradient minimisation approach to generating holographic traps for ultracold atoms.
Harte, Tiffany; Bruce, Graham D; Keeling, Jonathan; Cassettari, Donatella
2014-11-03
Direct minimisation of a cost function can in principle provide a versatile and highly controllable route to computational hologram generation. Here we show that the careful design of cost functions, combined with numerically efficient conjugate gradient minimisation, establishes a practical method for the generation of holograms for a wide range of target light distributions. This results in a guided optimisation process, with a crucial advantage illustrated by the ability to circumvent optical vortex formation during hologram calculation. We demonstrate the implementation of the conjugate gradient method for both discrete and continuous intensity distributions and discuss its applicability to optical trapping of ultracold atoms.
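As a toy illustration of direct cost-function minimisation for hologram design, the sketch below optimises a 16×16 phase-only hologram so that its far-field intensity matches a square target, using SciPy's conjugate gradient routine with numerically estimated gradients. The cost function and sizes are assumptions; the paper's carefully designed cost functions and analytic gradients are considerably more sophisticated.

```python
import numpy as np
from scipy.optimize import minimize

N = 16
target = np.zeros((N, N))
target[6:10, 6:10] = 1.0
target /= target.sum()                     # normalised target intensity pattern

def cost(phi_flat):
    phi = phi_flat.reshape(N, N)
    field = np.fft.fft2(np.exp(1j * phi)) / N**2   # far field of phase-only hologram
    intensity = np.abs(field) ** 2
    intensity /= intensity.sum()
    return np.sum((intensity - target) ** 2)       # quadratic mismatch cost

res = minimize(cost, np.random.rand(N * N), method='CG',
               options={'maxiter': 200})           # conjugate gradient descent
print(res.fun)
```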
Preliminary Results from the Application of Automated Adjoint Code Generation to CFL3D
NASA Technical Reports Server (NTRS)
Carle, Alan; Fagan, Mike; Green, Lawrence L.
1998-01-01
This report describes preliminary results obtained using an automated adjoint code generator for Fortran to augment a widely-used computational fluid dynamics flow solver to compute derivatives. These preliminary results with this augmented code suggest that, even in its infancy, the automated adjoint code generator can accurately and efficiently deliver derivatives for use in transonic Euler-based aerodynamic shape optimization problems with hundreds to thousands of independent design variables.
Jamshidi, N; Rostami, M; Najarian, S; Menhaj, M B; Saadatnia, M; Firooz, S
2009-04-01
This paper deals with the dynamic modelling of human walking. The main focus of this research was to optimise the function of the orthosis in patients with neuropathic feet, based on kinematics data from different categories of neuropathic patients. The patient's body in the sagittal plane was modelled to calculate the torques generated in the joints. The kinematics data required for mathematical modelling of the patients were obtained from films of patients captured by a high-speed camera, which were then analysed with motion analysis software. An inverse dynamic model was used to estimate the spring coefficient. In our dynamic model, the role of muscles was substituted by adding a spring-damper between the shank and ankle, which could compensate for their weakness when designing ankle-foot orthoses based on the kinematics data obtained from the patients. The torque generated in the ankle was varied by changing the spring constant. It was therefore possible to decrease the torque generated in the muscles, which could lead to the design of more comfortable and efficient orthoses. In this research, unlike previous studies, instead of examining the abnormal gait or modelling the ankle-foot orthosis separately, the function of the ankle-foot orthosis during abnormal gait was quantitatively improved through a correction of the torque.
The tensor network theory library
NASA Astrophysics Data System (ADS)
Al-Assam, S.; Clark, S. R.; Jaksch, D.
2017-09-01
In this technical paper we introduce the tensor network theory (TNT) library—an open-source software project aimed at providing a platform for rapidly developing robust, easy to use and highly optimised code for TNT calculations. The objectives of this paper are (i) to give an overview of the structure of the TNT library, and (ii) to help scientists decide whether to use the TNT library in their research. We show how to employ the TNT routines by giving examples of ground-state and dynamical calculations of a one-dimensional bosonic lattice system. We also discuss different options for gaining access to the software available at www.tensornetworktheory.org.
Conversion of the agent-oriented domain-specific language ALAS into JavaScript
NASA Astrophysics Data System (ADS)
Sredojević, Dejan; Vidaković, Milan; Okanović, Dušan; Mitrović, Dejan; Ivanović, Mirjana
2016-06-01
This paper shows generation of JavaScript code from code written in agent-oriented domain-specific language ALAS. ALAS is an agent-oriented domain-specific language for writing software agents that are executed within XJAF middleware. Since the agents can be executed on various platforms, they must be converted into a language of the target platform. We also try to utilize existing tools and technologies to make the whole conversion process as simple as possible, as well as faster and more efficient. We use the Xtext framework that is compatible with Java to implement ALAS infrastructure - editor and code generator. Since Xtext supports Java, generation of Java code from ALAS code is straightforward. To generate a JavaScript code that will be executed within the target JavaScript XJAF implementation, Google Web Toolkit (GWT) is used.
A MATLAB based 3D modeling and inversion code for MT data
NASA Astrophysics Data System (ADS)
Singh, Arun; Dehiya, Rahul; Gupta, Pravin K.; Israil, M.
2017-07-01
The development of a MATLAB based computer code, AP3DMT, for modeling and inversion of 3D Magnetotelluric (MT) data is presented. The code comprises two independent components: grid generator code and modeling/inversion code. The grid generator code performs model discretization and acts as an interface by generating various I/O files. The inversion code performs core computations in modular form - forward modeling, data functionals, sensitivity computations and regularization. These modules can be readily extended to other similar inverse problems like Controlled-Source EM (CSEM). The modular structure of the code provides a framework useful for implementation of new applications and inversion algorithms. The use of MATLAB and its libraries makes it more compact and user friendly. The code has been validated on several published models. To demonstrate its versatility and capabilities the results of inversion for two complex models are presented.
Comparison of theoretical and flight-measured local flow aerodynamics for a low-aspect-ratio fin
NASA Technical Reports Server (NTRS)
Johnson, J. B.; Sandlin, D. R.
1984-01-01
Flight test and theoretical aerodynamic data were obtained for a flight test fixture mounted on the underside of an F-104G aircraft. The theoretical data were generated using two codes: a two-dimensional transonic code called Code H, and a three-dimensional subsonic and supersonic code called wing-body. Pressure distributions generated by the codes for the flight test fixture, as well as boundary layer displacement thicknesses generated by the two-dimensional code, were compared to the flight test data. The two-dimensional code pressure distributions compared well except at the minimum pressure point and trailing edge. Shock locations compared well except at high transonic speeds. The three-dimensional code pressure distributions compared well except at the trailing edge of the flight test fixture. The two-dimensional code does not predict the displacement thickness of the flight test fixture well.
Generating Customized Verifiers for Automatically Generated Code
NASA Technical Reports Server (NTRS)
Denney, Ewen; Fischer, Bernd
2008-01-01
Program verification using Hoare-style techniques requires many logical annotations. We have previously developed a generic annotation inference algorithm that weaves in all annotations required to certify safety properties for automatically generated code. It uses patterns to capture generator- and property-specific code idioms and property-specific meta-program fragments to construct the annotations. The algorithm is customized by specifying the code patterns and integrating them with the meta-program fragments for annotation construction. However, this is difficult since it involves tedious and error-prone low-level term manipulations. Here, we describe an annotation schema compiler that largely automates this customization task using generative techniques. It takes a collection of high-level declarative annotation schemas tailored towards a specific code generator and safety property, and generates all customized analysis functions and glue code required for interfacing with the generic algorithm core, thus effectively creating a customized annotation inference algorithm. The compiler raises the level of abstraction and simplifies schema development and maintenance. It also takes care of some more routine aspects of formulating patterns and schemas, in particular handling of irrelevant program fragments and irrelevant variance in the program structure, which reduces the size, complexity, and number of different patterns and annotation schemas that are required. The improvements described here make it easier and faster to customize the system to a new safety property or a new generator, and we demonstrate this by customizing it to certify frame safety of space flight navigation code that was automatically generated from Simulink models by MathWorks' Real-Time Workshop.
NASA Astrophysics Data System (ADS)
Ben-Romdhane, Hajer; Krichen, Saoussen; Alba, Enrique
2017-05-01
Optimisation in changing environments is a challenging research topic since many real-world problems are inherently dynamic. Inspired by the natural evolution process, evolutionary algorithms (EAs) are among the most successful and promising approaches that have addressed dynamic optimisation problems. However, managing the exploration/exploitation trade-off in EAs is still a prevalent issue, due to the difficulties associated with controlling and measuring such behaviour. The proposal of this paper is to achieve a balance between exploration and exploitation in an explicit manner. The idea is to use two equally sized populations: the first one performs exploration while the second one is responsible for exploitation. These tasks are alternated from one generation to the next in a regular pattern, so as to obtain a balanced search engine. In addition, we reinforce the ability of our algorithm to adapt quickly after changes by means of a memory of past solutions. Such a combination aims to restrain premature convergence, to broaden the search area, and to speed up the optimisation. We show through computational experiments, based on a series of dynamic problems and many performance measures, that our approach improves the performance of EAs and outperforms competing algorithms.
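A minimal sketch of the two-population idea for a simple continuous minimisation problem follows: the populations swap exploration and exploitation roles each generation, and a memory of past best solutions is kept for re-use after environment changes. The mutation scales and selection rule are illustrative choices, not the paper's operators.

```python
import numpy as np

def alternating_ea(f, dim, pop_size=20, generations=100, sigma=(1.0, 0.05)):
    """Two equal-sized populations alternate exploration/exploitation roles;
    a memory of past best solutions supports recovery after changes."""
    pop_a = np.random.uniform(-5, 5, (pop_size, dim))
    pop_b = np.random.uniform(-5, 5, (pop_size, dim))
    memory = []
    for g in range(generations):
        explorer, exploiter = (pop_a, pop_b) if g % 2 == 0 else (pop_b, pop_a)
        explorer += np.random.normal(0, sigma[0], explorer.shape)    # broad search
        best = exploiter[np.argmin([f(x) for x in exploiter])]
        exploiter[:] = best + np.random.normal(0, sigma[1], exploiter.shape)
        memory.append(best.copy())                                   # reuse after changes
    candidates = memory + list(np.vstack([pop_a, pop_b]))
    return min(candidates, key=f)

solution = alternating_ea(lambda x: float(np.sum(x**2)), dim=5)
```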
NASA Astrophysics Data System (ADS)
Suja Priyadharsini, S.; Edward Rajan, S.; Femilin Sheniha, S.
2016-03-01
Electroencephalogram (EEG) is the recording of the electrical activities of the brain. It is contaminated by other biological signals, such as the cardiac signal (electrocardiogram), signals generated by eye movement/eye blinks (electrooculogram) and muscular artefact signals (electromyogram), called artefacts. Optimisation is an important tool for solving many real-world problems. In the proposed work, artefact removal based on the adaptive neuro-fuzzy inference system (ANFIS) is employed, with the parameters of ANFIS optimised. The Artificial Immune System (AIS) algorithm is used to optimise the parameters of ANFIS (ANFIS-AIS). Implementation results show that ANFIS-AIS is more effective than ANFIS alone in removing artefacts from the EEG signal. Furthermore, in the proposed work, an improved AIS (IAIS) is developed by including suitable selection processes in the AIS algorithm. The performance of the proposed IAIS method is compared with AIS and with a genetic algorithm (GA). Measures such as signal-to-noise ratio, mean square error (MSE), correlation coefficient, power spectral density plots and convergence time are used for analysing the performance of the proposed method. From the results, it is found that the IAIS algorithm converges faster than AIS and performs better than both AIS and GA. Hence, IAIS-tuned ANFIS (ANFIS-IAIS) is effective in removing artefacts from EEG signals.
NASA Astrophysics Data System (ADS)
Hurford, Anthony; Harou, Julien
2015-04-01
Climate change has challenged conventional methods of planning water resources infrastructure investment, which rely on the stationarity of time-series data. It is not clear how best to use projections of future climatic conditions. Many-objective simulation-optimisation and trade-off analysis using evolutionary algorithms have been proposed as an approach to addressing complex planning problems with multiple conflicting objectives. The search for promising assets and policies can be carried out across a range of climate projections, to identify the configurations of infrastructure investment shown by model simulation to be robust under diverse future conditions. Climate projections can be used in different ways within a simulation model to represent the range of possible future conditions and to understand how optimal investments vary according to different hydrological conditions. We compare two approaches: optimising over an ensemble of different 20-year flow and PET time-series projections, and optimising separately for individual future scenarios built synthetically from the original ensemble. Comparing the trade-off curves and surfaces generated by the two approaches helps in understanding the limits and benefits of optimising under different sets of conditions. The comparison is made for the Tana Basin in Kenya, where climate change combined with multiple conflicting objectives of water management and infrastructure investment makes decision-making particularly challenging.
Alternative Constraint Handling Technique for Four-Bar Linkage Path Generation
NASA Astrophysics Data System (ADS)
Sleesongsom, S.; Bureerat, S.
2018-03-01
This paper proposes an extension of a new concept for path generation from our previous work by adding a new constraint-handling technique. The proposed approach was initially designed for problems without prescribed timing by avoiding the timing constraint, while the remaining constraints are handled with the new constraint-handling technique, which is a kind of penalty technique. In the comparative study, path generation optimisation problems are solved using self-adaptive population size teaching-learning based optimisation (SAP-TLBO) and the original TLBO. Two traditional path generation test problems are used to test the proposed technique. The results show that the new technique can be applied to path generation problems without prescribed timing and gives better results than the previous technique. Furthermore, SAP-TLBO outperforms the original algorithm.
Research on Automatic Programming
1975-12-31
Sequential processes, deadlocks, and semaphore primitives, Ph.D. Thesis, Harvard University, November 1974; Center for Research in Computing... verified. Code generated to effect the synchronization makes use of the ECL control extension facility (Prenner's CI, see [Prenner]). The... semaphore operations [Dijkstra] is being developed. Initial results for this code generator are very encouraging; in many cases generated code is
McCoy, Gary R; Touzet, Nicolas; Fleming, Gerard T A; Raine, Robin
2015-07-01
The toxic microalgal species Prymnesium parvum and Prymnesium polylepis are responsible for numerous fish kills causing economic stress on the aquaculture industry and, through the consumption of contaminated shellfish, can potentially impact on human health. Monitoring of toxic phytoplankton is traditionally carried out by light microscopy. However, molecular methods of identification and quantification are becoming more common place. This study documents the optimisation of the novel Microarrays for the Detection of Toxic Algae (MIDTAL) microarray from its initial stages to the final commercial version now available from Microbia Environnement (France). Existing oligonucleotide probes used in whole-cell fluorescent in situ hybridisation (FISH) for Prymnesium species from higher group probes to species-level probes were adapted and tested on the first-generation microarray. The combination and interaction of numerous other probes specific for a whole range of phytoplankton taxa also spotted on the chip surface caused high cross reactivity, resulting in false-positive results on the microarray. The probe sequences were extended for the subsequent second-generation microarray, and further adaptations of the hybridisation protocol and incubation temperatures significantly reduced false-positive readings from the first to the second-generation chip, thereby increasing the specificity of the MIDTAL microarray. Additional refinement of the subsequent third-generation microarray protocols with the addition of a poly-T amino linker to the 5' end of each probe further enhanced the microarray performance but also highlighted the importance of optimising RNA labelling efficiency when testing with natural seawater samples from Killary Harbour, Ireland.
Secure ADS-B authentication system and method
NASA Technical Reports Server (NTRS)
Viggiano, Marc J (Inventor); Valovage, Edward M (Inventor); Samuelson, Kenneth B (Inventor); Hall, Dana L (Inventor)
2010-01-01
A secure system for authenticating the identity of ADS-B systems, including: an authenticator, including a unique id generator and a transmitter transmitting the unique id to one or more ADS-B transmitters; one or more ADS-B transmitters, including a receiver receiving the unique id, one or more secure processing stages merging the unique id with the ADS-B transmitter's identification, data and secret key and generating a secure code identification and a transmitter transmitting a response containing the secure code and ADS-B transmitter's data to the authenticator; the authenticator including means for independently determining each ADS-B transmitter's secret key, a receiver receiving each ADS-B transmitter's response, one or more secure processing stages merging the unique id, ADS-B transmitter's identification and data and generating a secure code, and comparison processing comparing the authenticator-generated secure code and the ADS-B transmitter-generated secure code and providing an authentication signal based on the comparison result.
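The flavour of such a challenge-response scheme can be sketched with a generic keyed-hash construction. This is illustrative only: HMAC-SHA256 and the field names are assumptions, not the patented processing stages.

```python
import hmac, hashlib, os

def authenticator_challenge():
    return os.urandom(16)                       # the "unique id" sent to transmitters

def transmitter_response(unique_id, transmitter_id, adsb_payload, secret_key):
    """Merge unique id, identification and data under the secret key."""
    msg = unique_id + transmitter_id + adsb_payload
    return hmac.new(secret_key, msg, hashlib.sha256).digest()

def authenticator_verify(unique_id, transmitter_id, adsb_payload,
                         secret_key, received_code):
    """Independently recompute the secure code and compare."""
    expected = transmitter_response(unique_id, transmitter_id,
                                    adsb_payload, secret_key)
    return hmac.compare_digest(expected, received_code)

key = b"shared-secret"
uid = authenticator_challenge()
code = transmitter_response(uid, b"ABC123", b"pos=51.5N,0.1W", key)
assert authenticator_verify(uid, b"ABC123", b"pos=51.5N,0.1W", key, code)
```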
Object-oriented design and programming in medical decision support.
Heathfield, H; Armstrong, J; Kirkham, N
1991-12-01
The concept of object-oriented design and programming has recently received a great deal of attention from the software engineering community. This paper highlights the realisable benefits of using the object-oriented approach in the design and development of clinical decision support systems. These systems seek to build a computational model of some problem domain and therefore tend to be exploratory in nature. Conventional procedural design techniques do not support either the process of model building or rapid prototyping. The central concepts of the object-oriented paradigm are introduced, namely encapsulation, inheritance and polymorphism, and their use illustrated in a case study, taken from the domain of breast histopathology. In particular, the dual roles of inheritance in object-oriented programming are examined, i.e., inheritance as a conceptual modelling tool and inheritance as a code reuse mechanism. It is argued that the use of the former is not entirely intuitive and may be difficult to incorporate into the design process. However, inheritance as a means of optimising code reuse offers substantial technical benefits.
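The dual roles of inheritance the paper discusses can be made concrete in a short object-oriented sketch; the histopathology-flavoured class names below are invented for illustration only.

```python
class Finding:                         # encapsulation: state and behaviour together
    def __init__(self, name, weight):
        self.name, self.weight = name, weight
    def evidence(self):
        return self.weight

class GradedFinding(Finding):          # inheritance as conceptual specialisation...
    def __init__(self, name, weight, grade):
        super().__init__(name, weight)  # ...and as a code reuse mechanism
        self.grade = grade
    def evidence(self):                 # polymorphism: same call, refined behaviour
        return self.weight * self.grade

findings = [Finding("necrosis", 0.4), GradedFinding("mitoses", 0.3, grade=2)]
score = sum(f.evidence() for f in findings)   # callers need not know the subtype
```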
Astrophysical Supercomputing with GPUs: Critical Decisions for Early Adopters
NASA Astrophysics Data System (ADS)
Fluke, Christopher J.; Barnes, David G.; Barsdell, Benjamin R.; Hassan, Amr H.
2011-01-01
General-purpose computing on graphics processing units (GPGPU) is dramatically changing the landscape of high performance computing in astronomy. In this paper, we identify and investigate several key decision areas, with a goal of simplifying the early adoption of GPGPU in astronomy. We consider the merits of OpenCL as an open standard in order to reduce risks associated with coding in a native, vendor-specific programming environment, and present a GPU programming philosophy based on using brute force solutions. We assert that effective use of new GPU-based supercomputing facilities will require a change in approach from astronomers. This will likely include improved programming training, an increased need for software development best practice through the use of profiling and related optimisation tools, and a greater reliance on third-party code libraries. As with any new technology, those willing to take the risks and make the investment of time and effort to become early adopters of GPGPU in astronomy, stand to reap great benefits.
NASA Astrophysics Data System (ADS)
Hazwan, M. H. M.; Shayfull, Z.; Sharif, S.; Nasir, S. M.; Zainal, N.
2017-09-01
In the injection moulding process, quality and productivity are notably important and must be controlled for each product type produced. Quality is measured as the extent of warpage of moulded parts while productivity is measured as the duration of the moulding cycle time. To control quality, many researchers have introduced various optimisation approaches which have been shown to enhance the quality of the moulded parts produced. In order to improve the productivity of the injection moulding process, some researchers have proposed the application of conformal cooling channels, which have been shown to reduce the moulding cycle time. Therefore, this paper presents an application of an alternative optimisation approach, Response Surface Methodology (RSM) with Glowworm Swarm Optimisation (GSO), to moulded parts with straight-drilled and conformal cooling channel moulds. This study examined the warpage of the moulded parts before and after the optimisation work for both cooling channel types. A front panel housing was selected as the specimen and the performance of the proposed optimisation approach was analysed for conventional straight-drilled cooling channels compared to Milled Groove Square Shape (MGSS) conformal cooling channels by simulation analysis using Autodesk Moldflow Insight (AMI) 2013. Based on the results, melt temperature is the most significant factor contributing to warpage, which was reduced by 39.1% after optimisation for the straight-drilled cooling channels, while cooling time is the most significant factor contributing to warpage for the MGSS conformal cooling channels, where warpage was reduced by 38.7% after optimisation. In addition, the findings show that applying the optimisation work to the conformal cooling channels offers better quality and productivity of the moulded part produced.
Annunziata, Roberto; Trucco, Emanuele
2016-11-01
Deep learning has shown great potential for curvilinear structure (e.g., retinal blood vessels and neurites) segmentation as demonstrated by a recent auto-context regression architecture based on filter banks learned by convolutional sparse coding. However, learning such filter banks is very time-consuming, thus limiting the amount of filters employed and the adaptation to other data sets (i.e., slow re-training). We address this limitation by proposing a novel acceleration strategy to speed-up convolutional sparse coding filter learning for curvilinear structure segmentation. Our approach is based on a novel initialisation strategy (warm start), and therefore it is different from recent methods improving the optimisation itself. Our warm-start strategy is based on carefully designed hand-crafted filters (SCIRD-TS), modelling appearance properties of curvilinear structures which are then refined by convolutional sparse coding. Experiments on four diverse data sets, including retinal blood vessels and neurites, suggest that the proposed method reduces significantly the time taken to learn convolutional filter banks (i.e., up to -82%) compared to conventional initialisation strategies. Remarkably, this speed-up does not worsen performance; in fact, filters learned with the proposed strategy often achieve a much lower reconstruction error and match or exceed the segmentation performance of random and DCT-based initialisation, when used as input to a random forest classifier.
An Optimised System for Generating Multi-Resolution DTMs Using NASA MRO Datasets
NASA Astrophysics Data System (ADS)
Tao, Y.; Muller, J.-P.; Sidiropoulos, P.; Veitch-Michaelis, J.; Yershov, V.
2016-06-01
Within the EU FP-7 iMars project, a fully automated multi-resolution DTM processing chain, called Co-registration ASP-Gotcha Optimised (CASP-GO) has been developed, based on the open source NASA Ames Stereo Pipeline (ASP). CASP-GO includes tiepoint based multi-resolution image co-registration and an adaptive least squares correlation-based sub-pixel refinement method called Gotcha. The implemented system guarantees global geo-referencing compliance with respect to HRSC (and thence to MOLA), provides refined stereo matching completeness and accuracy based on the ASP normalised cross-correlation. We summarise issues discovered from experimenting with the use of the open-source ASP DTM processing chain and introduce our new working solutions. These issues include global co-registration accuracy, de-noising, dealing with failure in matching, matching confidence estimation, outlier definition and rejection scheme, various DTM artefacts, uncertainty estimation, and quality-efficiency trade-offs.
Computational optimisation of targeted DNA sequencing for cancer detection
NASA Astrophysics Data System (ADS)
Martinez, Pierre; McGranahan, Nicholas; Birkbak, Nicolai Juul; Gerlinger, Marco; Swanton, Charles
2013-12-01
Despite recent progress thanks to next-generation sequencing technologies, personalised cancer medicine is still hampered by intra-tumour heterogeneity and drug resistance. As most patients with advanced metastatic disease face poor survival, there is a need to improve early diagnosis. Analysing circulating tumour DNA (ctDNA) might represent a non-invasive method to detect mutations in patients, facilitating early detection. In this article, we define reduced gene panels from publicly available datasets as a first step to assess and optimise the potential of targeted ctDNA scans for early tumour detection. Dividing 4,467 samples into one discovery and two independent validation cohorts, we show that up to 76% of 10 cancer types harbour at least one mutation in a panel of only 25 genes, with high sensitivity across most tumour types. Our analyses demonstrate that targeting 'hotspot' regions would introduce biases towards in-frame mutations and would compromise the reproducibility of tumour detection.
Measuring the impact of long-term medicines use from the patient perspective.
Krska, Janet; Morecroft, Charles W; Rowe, Philip H; Poole, Helen
2014-08-01
Polypharmacy is increasing, seemingly inexorably, and inevitably the associated difficulties for individual patients of coping with multiple medicines rise with it. Using medicines is one aspect of the burden associated with living with a chronic condition. It is becoming increasingly important to measure this burden particularly that relating to multiple long-term medicines. Pharmacists and other health professionals provide a myriad of services designed to optimise medicines use, ostensibly aiming to help and support patients, but in reality many such services focus on the medicines, and seek to improve adherence rather than reducing the burden for the patient. We believe that the patient perspective and experience of medicines use is fundamental to medicines optimisation and have developed an instrument which begins to quantify these experiences. The instrument, the Living with Medicines Questionnaire, was generated using qualitative findings with patients, to reflect their perspective. Further development is ongoing, involving researchers in multiple countries.
Heterologous expression of Aspergillus terreus fructosyltransferase in Kluyveromyces lactis.
Spohner, Sebastian C; Czermak, Peter
2016-06-25
Fructo-oligosaccharides are prebiotic and hypocaloric sweeteners that are usually extracted from chicory. They can also be produced from sucrose using fructosyltransferases, but the only commercial enzyme suitable for this purpose is Pectinex Ultra, which is produced with Aspergillus aculeatus. Here we used the yeast Kluyveromyces lactis to express a secreted recombinant fructosyltransferase from the inulin-producing fungus Aspergillus terreus. A synthetic codon-optimised version of the putative β-fructofuranosidase ATEG_04996 (XP_001214174.1) from A. terreus NIH2624 was secreted as a functional protein into the extracellular medium. At 60°C, the purified A. terreus enzyme generated the same pattern of oligosaccharides as Pectinex Ultra, but at lower temperatures it also produced oligomers with up to seven units. We achieved activities of up to 986.4 U/mL in high-level expression experiments, which is better than previous reports of optimised Aspergillus spp. fermentations. Copyright © 2016 Elsevier B.V. All rights reserved.
Important considerations about nursing intelligence and information systems.
Ballard, E C
1997-01-01
This discussion focuses on the importance of nursing intelligence to the organisation, and the nurses' role in gathering and utilising such intelligence. Deliberations with professional colleagues suggest that intelligence can only be utilised fully when the information systems are developed in such a way as to meet the needs of the people who manage and provide nursing care at the consumer level; that is, the activity of nursing itself. If accommodation is made for the recycling of nursing intelligence, there would be a support and furtherance of 'professional' intelligence. Two main issues emerge: how can nurses support the needs of management to optimise intelligence input, and how can organisations optimise the contribution of nurses to their information processes and interpretation of intelligence? The expansion of this 'professional' intelligence would promote a generation of constantly reviewed data, offering a quality approach to nursing activities and an organisation's intelligence system.
Fluid Mechanics Optimising Organic Synthesis
NASA Astrophysics Data System (ADS)
Leivadarou, Evgenia; Dalziel, Stuart
2015-11-01
The Vortex Fluidic Device (VFD) is a new 'green' approach to the synthesis of organic chemicals with many industrial applications in biodiesel generation, cosmetics, protein folding and pharmaceutical production. The VFD is a rapidly rotating tube that can operate with a jet feeding drops of liquid reactants to the base of the tube. The aim of this project is to explain the fluid mechanics of the VFD that influence the rate of reactions. The reaction rate is intimately related to the intense shearing that promotes collision between reactant molecules. In the VFD, the highest shears are found at the bottom of the tube in the Rayleigh and the Ekman layer and at the walls in the Stewartson layers. As a step towards optimising the performance of the VFD, we present experiments conducted in order to establish the minimum drop volume and maximum rotation rate for maximum axisymmetric spreading without fingering instability.
Shape and energy consistent pseudopotentials for correlated electron systems
Needs, R. J.
2017-01-01
A method is developed for generating pseudopotentials for use in correlated-electron calculations. The paradigms of shape and energy consistency are combined and defined in terms of correlated-electron wavefunctions. The resulting energy consistent correlated electron pseudopotentials (eCEPPs) are constructed for H, Li–F, Sc–Fe, and Cu. Their accuracy is quantified by comparing the relaxed molecular geometries and dissociation energies which they provide with all-electron results, with all quantities evaluated using coupled cluster singles, doubles, and triples calculations. Errors inherent in the pseudopotentials are also compared with those arising from a number of approximations commonly used with pseudopotentials. The eCEPPs provide a significant improvement in optimised geometries and dissociation energies for small molecules, with errors for the latter being an order of magnitude smaller than for Hartree-Fock-based pseudopotentials available in the literature. Gaussian basis sets are optimised for use with these pseudopotentials. PMID:28571391
Using Optimisation Techniques to Granulise Rough Set Partitions
NASA Astrophysics Data System (ADS)
Crossingham, Bodie; Marwala, Tshilidzi
2007-11-01
This paper presents an approach to optimising rough set partition sizes using various optimisation techniques. Three optimisation techniques are implemented to perform the granularisation process, namely genetic algorithm (GA), hill climbing (HC) and simulated annealing (SA). These optimisation methods maximise the classification accuracy of the rough sets. The proposed rough set partition method is tested on a set of demographic properties of individuals obtained from the South African antenatal survey. The three techniques are compared in terms of their computational time, accuracy and number of rules produced when applied to the Human Immunodeficiency Virus (HIV) data set. The optimised methods' results are compared to a well-known non-optimised discretisation method, equal-width-bin partitioning (EWB). The accuracies achieved after optimising the partitions using GA, HC and SA are 66.89%, 65.84% and 65.48% respectively, compared to an accuracy for EWB of 59.86%. In addition to providing the plausibilities of the estimated HIV status, the rough sets also provide the linguistic rules describing how the demographic parameters drive the risk of HIV.
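The HC variant is the easiest to sketch: perturb the cut points of a discretisation and keep any change that does not hurt classification accuracy. Everything below (the majority-vote accuracy proxy, step size, quantile initialisation) is an illustrative assumption rather than the paper's exact procedure.

```python
import numpy as np

def majority_accuracy(bins, y):
    """Proxy score: fraction correct if each bin predicts its majority class.
    y must contain non-negative integer class labels."""
    correct = 0
    for b in np.unique(bins):
        correct += np.bincount(y[bins == b]).max()
    return correct / len(y)

def hill_climb_partitions(x, y, n_bins=4, iters=500, eval_acc=majority_accuracy):
    """Perturb discretisation cut points; keep changes that improve accuracy."""
    cuts = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])  # quantile start
    best = eval_acc(np.digitize(x, cuts), y)
    for _ in range(iters):
        trial = np.sort(cuts + np.random.normal(0, x.std() * 0.05, cuts.shape))
        acc = eval_acc(np.digitize(x, trial), y)
        if acc >= best:
            cuts, best = trial, acc
    return cuts, best

x = np.random.randn(500)
y = (x > 0.3).astype(int)
cuts, acc = hill_climb_partitions(x, y)
```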
Surface Modeling and Grid Generation of Orbital Sciences X34 Vehicle. Phase 1
NASA Technical Reports Server (NTRS)
Alter, Stephen J.
1997-01-01
The surface modeling and grid generation requirements, motivations, and methods used to develop Computational Fluid Dynamic volume grids for the X34-Phase 1 are presented. The requirements set forth by the Aerothermodynamics Branch at the NASA Langley Research Center serve as the basis for the final techniques used in the construction of all volume grids, including grids for parametric studies of the X34. The Integrated Computer Engineering and Manufacturing code for Computational Fluid Dynamics (ICEM/CFD), the Grid Generation code (GRIDGEN), the Three-Dimensional Multi-block Advanced Grid Generation System (3DMAGGS) code, and Volume Grid Manipulator (VGM) code are used to enable the necessary surface modeling, surface grid generation, volume grid generation, and grid alterations, respectively. All volume grids generated for the X34, as outlined in this paper, were used for CFD simulations within the Aerothermodynamics Branch.
Optimisation of reconstruction-reprojection-based motion correction for cardiac SPECT.
Kangasmaa, Tuija S; Sohlberg, Antti O
2014-07-01
Cardiac motion is a challenging cause of image artefacts in myocardial perfusion SPECT. A wide range of motion correction methods have been developed over the years, and so far automatic algorithms based on the reconstruction-reprojection principle have proved to be the most effective. However, these methods have not been fully optimised in terms of their free parameters and implementational details. Two slightly different implementations of reconstruction-reprojection-based motion correction techniques were optimised for effective, good-quality motion correction and then compared with each other. The first of these methods (Method 1) was the traditional reconstruction-reprojection motion correction algorithm, where the motion correction is done in projection space, whereas the second algorithm (Method 2) performed motion correction in reconstruction space. The parameters that were optimised include the type of cost function (squared difference, normalised cross-correlation and mutual information) used to compare measured and reprojected projections, and the number of iterations needed. The methods were tested with motion-corrupt projection datasets, which were generated by adding three different types of motion (lateral shift, vertical shift and vertical creep) to motion-free cardiac perfusion SPECT studies. Method 2 performed slightly better overall than Method 1, but the difference between the two implementations was small. The execution time for Method 2 was much longer than for Method 1, which limits its clinical usefulness. The mutual information cost function gave clearly the best results for all three motion sets for both correction methods. Three iterations were sufficient for a good-quality correction using Method 1. The traditional reconstruction-reprojection-based method with three update iterations and a mutual information cost function is a good option for motion correction in clinical myocardial perfusion SPECT.
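As a sketch of the winning cost function, the snippet below computes mutual information between a measured projection and its reprojection from a joint histogram; a motion search would retain the candidate shift that maximises this score. The image sizes, bin count and test data are assumptions, not the paper's implementation.

```python
# Illustrative sketch (not the paper's code): mutual-information cost used to
# compare a measured projection frame with its reprojection.
import numpy as np

def mutual_information(a, b, bins=32):
    """MI of two equally-shaped images, estimated from their joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                           # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# A motion search would shift the measured frame over candidate offsets and
# keep the shift that maximises MI against the reprojected frame.
measured = np.random.rand(64, 64)
reprojected = np.roll(measured, 3, axis=0) + 0.05 * np.random.rand(64, 64)
print(mutual_information(measured, reprojected))
```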
NASA Technical Reports Server (NTRS)
Clark, Kenneth; Watney, Garth; Murray, Alexander; Benowitz, Edward
2007-01-01
A computer program translates Unified Modeling Language (UML) representations of state charts into source code in the C, C++, and Python computing languages. ("State charts" signifies graphical descriptions of states and state transitions of a spacecraft or other complex system.) The UML representations constituting the input to this program are generated by using a UML-compliant graphical design program to draw the state charts. The generated source code is consistent with the "quantum programming" approach, which is so named because it involves discrete states and state transitions that have features in common with states and state transitions in quantum mechanics. Quantum programming enables efficient implementation of state charts, suitable for real-time embedded flight software. In addition to source code, the autocoder program generates a graphical-user-interface (GUI) program that, in turn, generates a display of state transitions in response to events triggered by the user. The GUI program is wrapped around the generated source code and can be used to exercise its state-chart behavior. Once the expected state-chart behavior is confirmed, the generated source code can be augmented with a software interface to the rest of the software with which it is required to interact.
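The flavour of the emitted code can be suggested by a hand-written miniature (an illustration only, not output of the actual autocoder, which follows the quantum programming framework): states are event handlers that return the next state.

```python
# Hand-written miniature of autocoded state-chart style (assumed example):
# each state is a handler function that returns the successor state.
def idle(event):
    return armed if event == "ARM" else idle

def armed(event):
    if event == "FIRE":
        print("thruster burn")            # entry action of the target state
        return firing
    return idle if event == "DISARM" else armed

def firing(event):
    return idle if event == "DONE" else firing

state = idle
for event in ["ARM", "FIRE", "DONE"]:
    state = state(event)                  # dispatch: one transition per event
print(state.__name__)                     # -> idle
```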
Towards a Framework for Generating Tests to Satisfy Complex Code Coverage in Java Pathfinder
NASA Technical Reports Server (NTRS)
Staats, Matt
2009-01-01
We present work on a prototype tool based on the JavaPathfinder (JPF) model checker for automatically generating tests satisfying the MC/DC code coverage criterion. Using the Eclipse IDE, developers and testers can quickly instrument Java source code with JPF annotations covering all MC/DC coverage obligations, and JPF can then be used to automatically generate tests that satisfy these obligations. The prototype extension to JPF enables various tasks useful in automatic test generation to be performed, such as test suite reduction and execution of generated tests.
NASA Astrophysics Data System (ADS)
Biscarros, D.; Cantenot, C.; Séronie-Vivien, J.; Schmidt, G.
AstroBus on-board software is customisable software for ERC32-based avionics implementing standard ESA Packet Utilization Standard (PUS) functions. Its architecture, based on generic design templates and relying on a library providing standard PUS TC, TM and event services, enhances its reusability across various programmes. Finally, the AstroBus on-board software development and validation environment is based on latest-generation tools providing an optimised customisation environment.
Adaptive EAGLE dynamic solution adaptation and grid quality enhancement
NASA Technical Reports Server (NTRS)
Luong, Phu Vinh; Thompson, J. F.; Gatlin, B.; Mastin, C. W.; Kim, H. J.
1992-01-01
In the effort described here, the elliptic grid generation procedure in the EAGLE grid code was separated from the main code into a subroutine, and a new subroutine which evaluates several grid quality measures at each grid point was added. The elliptic grid routine can now be called, either by a computational fluid dynamics (CFD) code to generate a new adaptive grid based on flow variables and quality measures through multiple adaptation, or by the EAGLE main code to generate a grid based on quality measure variables through static adaptation. Arrays of flow variables can be read into the EAGLE grid code for use in static adaptation as well. These major changes in the EAGLE adaptive grid system make it easier to convert any CFD code that operates on a block-structured grid (or single-block grid) into a multiple adaptive code.
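A minimal sketch of the elliptic idea follows: an algebraic starting grid is smoothed by repeatedly replacing each interior point with the average of its four neighbours (a Laplace-system sweep), with boundary points held fixed. EAGLE solves a more general Poisson system with control functions for adaptation; the grid, skew and iteration count here are illustrative assumptions.

```python
# Simplified elliptic grid smoothing (Laplace form; EAGLE's full system adds
# control functions for solution adaptation and quality measures).
import numpy as np

def laplace_smooth(x, y, iters=200):
    for _ in range(iters):
        # interior points move toward the average of their four neighbours
        x[1:-1, 1:-1] = 0.25 * (x[2:, 1:-1] + x[:-2, 1:-1] + x[1:-1, 2:] + x[1:-1, :-2])
        y[1:-1, 1:-1] = 0.25 * (y[2:, 1:-1] + y[:-2, 1:-1] + y[1:-1, 2:] + y[1:-1, :-2])
    return x, y

# Algebraic starting grid on a skewed quadrilateral (boundaries stay fixed).
i, j = np.meshgrid(np.linspace(0, 1, 21), np.linspace(0, 1, 21), indexing="ij")
x, y = i + 0.3 * j * j, j                 # deliberately non-orthogonal interior
x, y = laplace_smooth(x, y)
print(x[10, 10], y[10, 10])
```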
NASA Astrophysics Data System (ADS)
Kaliszewski, M.; Mazuro, P.
2016-09-01
A simulated annealing method of optimisation for the sealing piston ring geometry is tested. The aim of the optimisation is to develop a ring geometry which would exert the demanded pressure on a cylinder simply by being bent to fit the cylinder. A method of FEM analysis of an arbitrary piston ring geometry is applied in ANSYS software. The demanded pressure function (based on formulae presented by A. Iskra) as well as the objective function are introduced. A geometry definition constructed from polynomials in a radial coordinate system is presented and discussed. A possible application of the simulated annealing method to the piston ring optimisation task is proposed and visualised. Difficulties leading to possible lack of convergence of the optimisation are presented. An example of an unsuccessful optimisation performed in APDL is discussed. A possible line of further improvement of the optimisation is proposed.
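For illustration, a bare-bones simulated annealing loop of the kind described is sketched below, with the FEM pressure evaluation replaced by a cheap surrogate objective and the ring free shape reduced to three polynomial coefficients; all of these are assumptions, not the APDL setup.

```python
# Simulated annealing sketch under stated assumptions: the objective is a
# surrogate for "deviation of exerted from demanded pressure".
import math, random

def objective(coeffs):
    # Placeholder objective (assumption): distance to some target coefficients.
    return sum((c - t) ** 2 for c, t in zip(coeffs, [1.0, -0.5, 0.2]))

def anneal(x0, t0=1.0, t_end=1e-3, alpha=0.95, steps_per_t=50):
    cur, best, t = list(x0), list(x0), t0
    while t > t_end:
        for _ in range(steps_per_t):
            cand = [c + random.gauss(0, t) for c in cur]
            d = objective(cand) - objective(cur)
            if d < 0 or random.random() < math.exp(-d / t):   # Metropolis rule
                cur = cand
                if objective(cur) < objective(best):
                    best = list(cur)
        t *= alpha                          # geometric cooling schedule
    return best

print(objective(anneal([0.0, 0.0, 0.0])))
```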
Automated event generation for loop-induced processes
Hirschi, Valentin; Mattelaer, Olivier
2015-10-22
We present the first fully automated implementation of cross-section computation and event generation for loop-induced processes. This work is integrated in the MadGraph5_aMC@NLO framework. We describe the optimisations implemented at the level of the matrix element evaluation, phase space integration and event generation, allowing for the simulation of large-multiplicity loop-induced processes. Along with some selected differential observables, we illustrate our results with a table showing inclusive cross-sections for all loop-induced hadronic scattering processes with up to three final states in the SM as well as for some relevant 2 → 4 processes. Furthermore, many of these are computed here for the first time.
NASA Astrophysics Data System (ADS)
Kersevan, Borut Paul; Richter-Waş, Elzbieta
2013-03-01
The AcerMC Monte Carlo generator is dedicated to the generation of Standard Model background processes which were recognised as critical for the searches at the LHC, and whose generation was either unavailable or not straightforward so far. The program itself provides a library of the massive matrix elements (coded by MADGRAPH) and native phase space modules for generation of a set of selected processes. The hard process event can be completed by the initial- and final-state radiation, hadronisation and decays through the existing interface with either the PYTHIA, HERWIG or ARIADNE event generators and (optionally) TAUOLA and PHOTOS. Interfaces to all these packages are provided in the distribution version. The phase-space generation is based on the multi-channel self-optimising approach using the modified Kajantie-Byckling formalism for phase space construction, and further smoothing of the phase space was obtained by using a modified ac-VEGAS algorithm. An additional improvement in the recent versions is the inclusion of a consistent prescription for matching the matrix element calculations with parton showering for a select list of processes. Catalogue identifier: ADQQ_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADQQ_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 3853309 No. of bytes in distributed program, including test data, etc.: 68045728 Distribution format: tar.gz Programming language: FORTRAN 77 with popular extensions (g77, gfortran). Computer: All running Linux. Operating system: Linux. Classification: 11.2, 11.6. External routines: CERNLIB (http://cernlib.web.cern.ch/cernlib/), LHAPDF (http://lhapdf.hepforge.org/) Catalogue identifier of previous version: ADQQ_v1_0 Journal reference of previous version: Comput. Phys. Comm. 149 (2003) 142 Does the new version supersede the previous version?: Yes Nature of problem: Despite the large repertoire of processes implemented for generation in event generators like PYTHIA [1] or HERWIG [2], a number of background processes crucial for studying the expected physics of the LHC experiments are missing. For some of these processes the matrix element expressions are rather lengthy, and/or, to achieve a reasonable generation efficiency, it is necessary to tailor the phase space selection procedure to the dynamics of the process. That is why it is not practical to expect that any of the above general-purpose generators will contain every process, or even only the observable ones, occurring in LHC collisions. A more practical solution is a library of dedicated matrix-element-based generators, with standardised interfaces like that proposed in [3] to the more universal generator which is used to complete the event generation. Solution method: The AcerMC Event Generator provides a library of the matrix-element-based generators for several processes. The initial- and final-state showers, beam remnants and underlying events, fragmentation and remaining decays are supposed to be performed by the other universal generator to which this one is interfaced; we call it a supervising generator. The interfaces to PYTHIA 6.4, ARIADNE 4.1 and HERWIG 6.5, as such generators, are provided. 
Provided also is an interface to the TAUOLA [4] and PHOTOS [5] packages for τ-lepton decays (including spin correlation treatment) and QED radiation in decays of particles. At present, the following matrix-element-based processes have been implemented: gg,qq¯→tt¯bb¯; qq¯→W(→ℓν)bb¯; qq¯→W(→ℓν)tt¯; gg,qq¯→Z/γ∗(→ℓℓ)bb¯; gg,qq¯→Z/γ∗(→ℓℓ,νν,bb¯)tt¯; complete EW gg,qq¯→(Z/W/γ∗→)tt¯bb¯; gg,qq¯→tt¯tt¯; gg,qq¯→(tt¯→)ff¯bff¯b¯; gg,qq¯→(WWbb→)ff¯ff¯bb¯. Both interfaces allow the use of the LHAPDF/LHAGLUE library of parton density functions. Also provided is a set of control processes: qq¯→W→ℓν; qq¯→Z/γ∗→ℓℓ; gg,qq¯→tt¯; and gg→(tt¯→)WbWb¯. Reasons for new version: Implementation of several new processes and methods. Summary of revisions: Each version added new processes or functionalities; a detailed list is given in the section "Changes since AcerMC 1.0". Restrictions: The package is optimised for 14 TeV pp collisions simulated in the LHC environment and also works at the achieved LHC energies of 7 TeV and 8 TeV. The consistency between results of the complete generation using the PYTHIA 6.4 or HERWIG 6.5 interfaces is technically limited by the different approaches taken in these two generators for evaluating the αQCD and αQED couplings, and by their different fragmentation/hadronisation models. For consistency checks, the AcerMC library contains natively coded definitions of αQCD and αQED; using these native definitions leads to the same total cross-sections with both the PYTHIA 6.4 and HERWIG 6.5 interfaces.
Reproducible Research in the Geosciences at Scale: Achievable Goal or Elusive Dream?
NASA Astrophysics Data System (ADS)
Wyborn, L. A.; Evans, B. J. K.
2016-12-01
Reproducibility is a fundamental tenet of the scientific method: it implies that any researcher, or a third party working independently, can duplicate any experiment or investigation and produce the same results. Historically, computationally based research involved an individual using their own data and processing it in their own private area, often using software they wrote or inherited from close collaborators. Today, a researcher is likely to be part of a large team that will use a subset of data from an external repository and then process the data on a public or private cloud or on a large centralised supercomputer, using a mixture of their own code, third-party software and libraries, or global community codes. In 'Big Geoscience' research it is common for data inputs to be extracts from externally managed dynamic data collections, where new data is regularly appended, or existing data is revised when errors are detected and/or as processing methods are improved. New workflows increasingly use services to access data dynamically, creating subsets on-the-fly from distributed sources, each of which can have a complex history. At major computational facilities, underlying systems, libraries, software and services are constantly being tuned and optimised, or new or replacement infrastructure is being installed. Likewise, code used from a community repository is continually being refined, re-packaged and ported to the target platform. To achieve reproducibility, today's researcher increasingly needs to track their workflow, including querying information on the current or historical state of the facilities used. Versioning methods are standard practice for software repositories or packages, but it is not common for either data repositories or data services to provide information about their state, or for systems to provide query-able access to changes in the underlying software. While a researcher can achieve transparency and describe steps in their workflow so that others can repeat them and replicate the processes undertaken, they cannot achieve exact reproducibility, or even transparency of the results generated. In Big Geoscience, full reproducibility will be an elusive dream until data repositories and compute facilities can provide provenance information in a standards-compliant, machine-queryable way.
Radiological Protection and Nuclear Engineering Studies in Multi-MW Target Systems
NASA Astrophysics Data System (ADS)
Luis, Raul Fernandes
Several innovative projects involving nuclear technology have emerged around the world in recent years, for applications such as spallation neutron sources, accelerator-driven systems for the transmutation of nuclear waste and radioactive ion beam (RIB) production. While the available neutron fluxes from nuclear reactors did not increase substantially in intensity over the past three decades, the intensities of neutron sources produced in spallation targets have increased steadily, and should continue to do so during the 21st century. Innovative projects like ESS, MYRRHA and EURISOL lie at the forefront of the ongoing pursuit of increasingly bright neutron sources; driven by proton beams with energies up to 2 GeV and intensities up to several mA, the construction of the proposed facilities involves complex nuclear technology and radiological protection design studies executed by multidisciplinary teams of scientists and engineers from different branches of science. The intense neutron fluxes foreseen for those facilities can be used in several scientific research fields, such as nuclear physics and astrophysics, medicine and materials science. In this work, the target systems of two facilities for the production of RIBs using the Isotope Separation On-Line (ISOL) method were studied in detail: ISOLDE, operating at CERN since 1967, and EURISOL, the next-generation ISOL facility to be built in Europe. For the EURISOL multi-MW target station, a detailed study of radiological protection was carried out using the Monte Carlo code FLUKA. Simulations were done to assess neutron fluences, fission rates, ambient dose equivalent rates during operation and after shutdown, and the production of radioactive nuclei in the targets and surrounding materials. Different materials were discussed for different components of the target system, aiming at improving its neutronics performance while keeping the residual activities resulting from material activation as low as possible. The second goal of this work was to perform an optimisation study for the ISOLDE neutron converter and fission target system. The target system was simulated using FLUKA and the cross-section codes TALYS and ABRABLA, with the objective of maximising the performance of the system for the production of pure beams of neutron-rich isotopes, suppressing the contamination by undesired neutron-deficient isobars. Two alternative target systems were proposed in the optimisation studies; the simpler of the two, with some modifications, was built as a prototype and tested at ISOLDE. The experimental results clearly show that it is possible, with simple changes in the layouts of the target systems, to produce purer beams of neutron-rich isotopes around the doubly magic nuclei 78Ni and 132Sn. A study of radiological protection was also performed, comparing the performances of the prototype target system and the standard ISOLDE target system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Virtanen, E.; Haapalehto, T.; Kouhia, J.
1995-09-01
Three experiments were conducted to study the behaviour of the new horizontal steam generator construction of the PACTEL test facility. In the experiments the secondary-side coolant level was reduced stepwise. The experiments were calculated with two computer codes, RELAP5/MOD3.1 and APROS version 2.11. A similar nodalization scheme was used for both codes so that the results may be compared. Only the steam generator was modelled and the rest of the facility was given as a boundary condition. The results show that both codes calculate the behaviour of the primary side of the steam generator well. On the secondary side, both codes calculate lower steam temperatures in the upper part of the heat exchange tube bundle than were measured in the experiments.
Zarins-Tutt, Joseph S; Abraham, Emily R; Bailey, Christopher S; Goss, Rebecca J M
Nature provides a valuable resource of medicinally relevant compounds, with many antimicrobial and antitumor agents entering clinical trials being derived from natural products. The generation of analogues of these bioactive natural products is important in order to gain a greater understanding of structure-activity relationships, to probe the mechanism of action, and to optimise the natural product's bioactivity and bioavailability. This chapter critically examines different approaches to generating natural products and their analogues, exploring the way in which synthetic and biosynthetic approaches may be blended together to enable expeditious access to new designer natural products.
NASA Astrophysics Data System (ADS)
Fritzsche, Matthias; Kittel, Konstantin; Blankenburg, Alexander; Vajna, Sándor
2012-08-01
The focus of this paper is a method of multidisciplinary design optimisation based on the autogenetic design theory (ADT), whose methods are partially implemented in the optimisation software described here. The main thesis of the ADT is that biological evolution and the process of developing products are substantially similar, i.e. procedures from biological evolution can be transferred to product development. In order to fulfil requirements and boundary conditions of any kind (which may change at any time), both biological evolution and product development look for appropriate solution possibilities in a certain area, and try to optimise those that are actually promising by varying parameters and combinations of these solutions. As the time necessary for multidisciplinary design optimisations is a critical aspect of product development, distributing the optimisation process to make effective use of otherwise idle computing capacity can reduce the optimisation time drastically. Finally, a practical example shows how ADT methods and distributed optimising are applied to improve a product.
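The distribution idea can be sketched with a worker pool that farms out objective evaluations, standing in for the "unused calculating capacity"; the objective function and population below are placeholders, not the ADT software's interfaces.

```python
# Sketch of distributed fitness evaluation (assumptions: the objective is a
# stand-alone function; worker processes model idle computing capacity).
from multiprocessing import Pool

def evaluate(design):
    """Placeholder for an expensive simulation of one design variant."""
    return sum((p - 0.5) ** 2 for p in design)

if __name__ == "__main__":
    import random
    random.seed(1)
    population = [[random.random() for _ in range(4)] for _ in range(32)]
    with Pool() as pool:                   # one worker per free core
        fitness = pool.map(evaluate, population)
    best = population[fitness.index(min(fitness))]
    print(min(fitness), best)
```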
NASA Astrophysics Data System (ADS)
Magro, G.; Molinelli, S.; Mairani, A.; Mirandola, A.; Panizza, D.; Russo, S.; Ferrari, A.; Valvo, F.; Fossati, P.; Ciocca, M.
2015-09-01
This study was performed to evaluate the accuracy of a commercial treatment planning system (TPS) in optimising proton pencil beam dose distributions for small targets of different sizes (5-30 mm side) located at increasing depths in water. The TPS analytical algorithm was benchmarked against experimental data and the FLUKA Monte Carlo (MC) code, previously validated for the selected beam-line. We tested the Siemens syngo® TPS plan optimisation module for water cubes, fixing the configurable parameters at clinical standards, with homogeneous target coverage to a 2 Gy (RBE) dose prescription as the unique goal. Plans were delivered and the dose at each volume centre was measured in water with a calibrated PTW Advanced Markus® chamber. An EBT3® film was also positioned at the phantom entrance window for the acquisition of 2D dose maps. Discrepancies between TPS-calculated and MC-simulated values were mainly due to the different lateral-spread modelling and were found to be related to the field-to-spot size ratio. The accuracy of the TPS proved to be clinically acceptable in all cases except very small and shallow volumes. In this context, the use of MC to validate TPS results proved to be a reliable procedure for pre-treatment plan verification.
Unaligned instruction relocation
Bertolli, Carlo; O'Brien, John K.; Sallenave, Olivier H.; Sura, Zehra N.
2018-01-23
In one embodiment, a computer-implemented method includes receiving source code to be compiled into an executable file for an unaligned instruction set architecture (ISA). Aligned assembled code is generated, by a computer processor. The aligned assembled code complies with an aligned ISA and includes aligned processor code for a processor and aligned accelerator code for an accelerator. A first linking pass is performed on the aligned assembled code, including relocating a first relocation target in the aligned accelerator code that refers to a first object outside the aligned accelerator code. Unaligned assembled code is generated in accordance with the unaligned ISA and includes unaligned accelerator code for the accelerator and unaligned processor code for the processor. A second linking pass is performed on the unaligned assembled code, including relocating a second relocation target outside the unaligned accelerator code that refers to an object in the unaligned accelerator code.
IGB grid: User's manual (A turbomachinery grid generation code)
NASA Technical Reports Server (NTRS)
Beach, T. A.; Hoffman, G.
1992-01-01
A grid generation code called IGB is presented for use in computational investigations of turbomachinery flowfields. It contains a combination of algebraic and elliptic techniques coded for use on an interactive graphics workstation. The instructions for use and a test case are included.
TIGER: Turbomachinery interactive grid generation
NASA Technical Reports Server (NTRS)
Soni, Bharat K.; Shih, Ming-Hsin; Janus, J. Mark
1992-01-01
A three-dimensional, interactive grid generation code, TIGER, is being developed for analysis of flows around ducted or unducted propellers. TIGER is a customized grid generator that combines new technology with methods from general grid generation codes. The code generates multiple-block, structured grids around multiple blade rows with a hub and shroud for either C-grid or H-grid topologies. The code is intended for use with a Euler/Navier-Stokes solver also being developed, but is general enough for use with other flow solvers. TIGER features a Silicon Graphics interactive graphics environment that displays a pop-up window, graphics window, and text window. The geometry is read as a discrete set of points with options for several industrial standard formats and NASA standard formats. Various splines are available for defining the surface geometries. Grid generation is done either interactively or through a batch-mode operation using history files from a previously generated grid. The batch-mode operation can be done either with a graphical display of the interactive session or with no graphics so that the code can be run on another computer system. Run time can be significantly reduced by running on a Cray Y-MP.
A concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Lin, S.
1985-01-01
A concatenated coding scheme for error control in data communications is analyzed. The inner code is used for both error correction and detection; the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after the inner code decoding. The probability of undetected error of the above error control scheme is derived and upper bounded. Two specific examples are analyzed. In the first example, the inner code is a distance-4 shortened Hamming code with generator polynomial (X+1)(X^6+X+1) = X^7+X^6+X^2+1 and the outer code is a distance-4 shortened Hamming code with generator polynomial (X+1)(X^15+X^14+X^13+X^12+X^4+X^3+X^2+X+1) = X^16+X^12+X^5+1, which is the X.25 standard for packet-switched data networks. This example is proposed for error control on NASA telecommand links. In the second example, the inner code is the same as that in the first example but the outer code is a shortened Reed-Solomon code with symbols from GF(2^8) and generator polynomial (X+1)(X+alpha), where alpha is a primitive element in GF(2^8).
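The outer generator polynomial X^16+X^12+X^5+1 is the familiar CRC-16/CCITT polynomial (0x1021 in hexadecimal), so its detection-only role can be sketched as follows; the frame contents are made up, and the concatenation with the inner code is omitted.

```python
# Detection-only use of the outer code: a bitwise CRC-16/CCITT computation
# (polynomial X^16+X^12+X^5+1, i.e. 0x1021). Illustration, not the paper's
# analysis of undetected-error probability.
def crc16_ccitt(data: bytes, poly=0x1021, crc=0x0000):
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly if crc & 0x8000 else crc << 1) & 0xFFFF
    return crc

frame = b"NASA telecommand"
check = crc16_ccitt(frame)                 # appended by the sender
corrupted = b"NASA telecomm4nd"
print(crc16_ccitt(frame) == check)         # True: accept
print(crc16_ccitt(corrupted) == check)     # False: request retransmission
```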
Fractal-Based Image Compression
1989-09-01
[List-of-figures fragment recovered from the scanned report: a Mercedes-Benz symbol generated using an IFS code; U-A and A-0 ferns generated with RIFS codes; construction of the Mercedes-Benz symbol using RIFS; the regenerated perfect image of the Mercedes-Benz symbol using RIFS.] ...quite often, it cannot be done with a reasonable number of transforms. As an example, the Mercedes-Benz symbol generated using an IFS code is illustrated
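For context, IFS decoding of the kind the report illustrates can be sketched with the chaos game; the affine maps below are Barnsley's standard fern coefficients, a textbook example rather than the report's Mercedes-Benz maps.

```python
# Chaos-game rendering of an iterated function system (IFS). The maps are the
# standard Barnsley fern, used here only as a well-known stand-in.
import random

# Each map: (a, b, c, d, e, f, probability) applied as (x,y) -> (ax+by+e, cx+dy+f)
FERN = [
    (0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01),
    (0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85),
    (0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07),
    (-0.15, 0.28,  0.26, 0.24, 0.0, 0.44, 0.07),
]

def chaos_game(maps, n=10000):
    x = y = 0.0
    points = []
    for _ in range(n):
        a, b, c, d, e, f, _p = random.choices(maps, weights=[m[-1] for m in maps])[0]
        x, y = a * x + b * y + e, c * x + d * y + f
        points.append((x, y))
    return points

pts = chaos_game(FERN)
print(len(pts), pts[-1])
```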
Model-Driven Engineering: Automatic Code Generation and Beyond
2015-03-01
and Weblogic as well as cloud environments such as Microsoft Azure and Amazon Web Services®. Finally, while the generated code has dependencies on...code generation in the context of the full system lifecycle from development to sustainment. Acquisition programs in government or large commercial...Acquirers are concerned with the full system lifecycle, and they need confidence that the development methods will enable the system to meet the functional
Deductive Glue Code Synthesis for Embedded Software Systems Based on Code Patterns
NASA Technical Reports Server (NTRS)
Liu, Jian; Fu, Jicheng; Zhang, Yansheng; Bastani, Farokh; Yen, I-Ling; Tai, Ann; Chau, Savio N.
2006-01-01
Automated code synthesis is a constructive process that can be used to generate programs from specifications. It can, thus, greatly reduce the software development cost and time. The use of formal code synthesis approach for software generation further increases the dependability of the system. Though code synthesis has many potential benefits, the synthesis techniques are still limited. Meanwhile, components are widely used in embedded system development. Applying code synthesis to component based software development (CBSD) process can greatly enhance the capability of code synthesis while reducing the component composition efforts. In this paper, we discuss the issues and techniques for applying deductive code synthesis techniques to CBSD. For deductive synthesis in CBSD, a rule base is the key for inferring appropriate component composition. We use the code patterns to guide the development of rules. Code patterns have been proposed to capture the typical usages of the components. Several general composition operations have been identified to facilitate systematic composition. We present the technique for rule development and automated generation of new patterns from existing code patterns. A case study of using this method in building a real-time control system is also presented.
Optimisation in radiotherapy. III: Stochastic optimisation algorithms and conclusions.
Ebert, M
1997-12-01
This is the final article in a three part examination of optimisation in radiotherapy. Previous articles have established the bases and form of the radiotherapy optimisation problem, and examined certain types of optimisation algorithm, namely, those which perform some form of ordered search of the solution space (mathematical programming), and those which attempt to find the closest feasible solution to the inverse planning problem (deterministic inversion). The current paper examines algorithms which search the space of possible irradiation strategies by stochastic methods. The resulting iterative search methods move about the solution space by sampling random variates, which gradually become more constricted as the algorithm converges upon the optimal solution. This paper also discusses the implementation of optimisation in radiotherapy practice.
SIGACE Code for Generating High-Temperature ACE Files; Validation and Benchmarking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Amit R.; Ganesan, S.; Trkov, A.
2005-05-24
A code named SIGACE has been developed as a tool for MCNP users within the scope of a research contract awarded by the Nuclear Data Section of the International Atomic Energy Agency (IAEA) (Ref: 302-F4-IND-11566 B5-IND-29641). A new recipe has been evolved for generating high-temperature ACE files for use with the MCNP code. Under this scheme the low-temperature ACE file is first converted to an ENDF-formatted file using the ACELST code and then Doppler broadened, essentially limited to the data in the resolved resonance region, to any desired higher temperature using SIGMA1. The SIGACE code then generates a high-temperature ACE file for use with the MCNP code. A thinning routine has also been introduced in the SIGACE code for reducing the size of the ACE files. The SIGACE code and the recipe for generating ACE files at higher temperatures have been applied to the SEFOR fast reactor benchmark problem (sodium-cooled fast reactor benchmark described in the ENDF-202/BNL-19302, 1974 document). The calculated Doppler coefficient is in good agreement with the experimental value. A similar calculation using ACE files generated directly with the NJOY system also agrees with our SIGACE-computed results. The SIGACE code and the recipe were further applied to study the numerical benchmark configuration of selected idealized PWR pin cell configurations with five different fuel enrichments as reported by Mosteller and Eisenhart. The SIGACE code, which has been tested with several FENDL/MC files, will be available, free of cost, upon request, from the Nuclear Data Section of the IAEA.
Alkalizing Reactions Streamline Cellular Metabolism in Acidogenic Microorganisms
Arioli, Stefania; Ragg, Enzio; Scaglioni, Leonardo; Fessas, Dimitrios; Signorelli, Marco; Karp, Matti; Daffonchio, Daniele; De Noni, Ivano; Mulas, Laura; Oggioni, Marco; Guglielmetti, Simone; Mora, Diego
2010-01-01
An understanding of the integrated relationships among the principal cellular functions that govern the bioenergetic reactions of an organism is necessary to determine how cells remain viable and optimise their fitness in the environment. Urease is a complex enzyme that catalyzes the hydrolysis of urea to ammonia and carbonic acid. While the induction of urease activity by several microorganisms has been predominantly considered a stress-response that is initiated to generate a nitrogen source in response to a low environmental pH, here we demonstrate a new role of urease in the optimisation of cellular bioenergetics. We show that urea hydrolysis increases the catabolic efficiency of Streptococcus thermophilus, a lactic acid bacterium that is widely used in the industrial manufacture of dairy products. By modulating the intracellular pH and thereby increasing the activity of β-galactosidase, glycolytic enzymes and lactate dehydrogenase, urease increases the overall change in enthalpy generated by the bioenergetic reactions. A cooperative altruistic behaviour of urease-positive microorganisms on the urease-negative microorganisms within the same environment was also observed. The physiological role of a single enzymatic activity demonstrates a novel and unexpected view of the non-transcriptional regulatory mechanisms that govern the bioenergetics of a bacterial cell, highlighting a new role for cytosol-alkalizing biochemical pathways in acidogenic microorganisms. PMID:21152088
Exergy analysis and optimisation of waste heat recovery systems for cement plants
NASA Astrophysics Data System (ADS)
Mohammadi, Amin; Ashjari, Muhammad Ali; Sadreddini, Amirhassan
2018-02-01
In recent decades, heat recovery systems have received much attention due to rising fuel costs and growing environmental concerns. In this study, different heat recovery systems for a cement plant are compared in terms of electricity generation and exergy analysis. The heat sources are available at high temperature (HT) and low temperature (LT). For the HT section, a dual-pressure Rankine cycle, a simple dual-pressure organic Rankine cycle (ORC) and a regenerative dual-pressure ORC are compared. For the LT section, a simple ORC is compared with a transcritical carbon dioxide cycle. To find the best system, an optimisation algorithm is applied to all proposed cycles. The results show that for the HT section the regenerative ORC has the highest exergy efficiency and is capable of producing nearly 7 MW of electricity for a cement factory with a capacity of 3400 tons per day. The main reason for this is the introduction of the regenerative heat exchanger into the cycle. For the LT section, the ORC showed a better performance than the CO2 cycle. It is worth mentioning that the power generated in this section is far lower than that of the HT section, at nearly 300 kW.
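The exergy bookkeeping behind such comparisons reduces to the Carnot factor. The sketch below applies it with placeholder heat duties and temperatures; only the roughly 7 MW and 300 kW outputs echo the paper, everything else is an assumption.

```python
# Back-of-envelope exergy sketch (temperatures in kelvin; all heat duties are
# made-up placeholders, not the paper's plant data).
T0 = 298.15                                    # dead-state (ambient) temperature

def exergy_of_heat(q_kw, t_source):
    """Carnot-factor exergy rate of a heat stream: Q * (1 - T0/T)."""
    return q_kw * (1.0 - T0 / t_source)

ht_exergy = exergy_of_heat(q_kw=25000, t_source=620.0)   # HT gas stream (assumed)
lt_exergy = exergy_of_heat(q_kw=9000, t_source=380.0)    # LT air stream (assumed)

w_ht, w_lt = 7000.0, 300.0                     # net outputs echoing the paper, kW
print("HT exergy efficiency:", w_ht / ht_exergy)
print("LT exergy efficiency:", w_lt / lt_exergy)
```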
Creation and Delivery of New Superpixelized DIRBE Map Products
NASA Technical Reports Server (NTRS)
Weiland, J.
1998-01-01
Phase 1 called for the following tasks: (1) completion of code to generate intermediate files containing the individual DIRBE observations which would be used to make the superpixelized maps; (2) completion of code necessary to generate the maps themselves; and (3) quality control on test-case maps in the form of point-source extraction and photometry. Items 1 and 2 are well in hand and the tested code is nearly complete. A few test maps have been generated for the tests mentioned in item 3. Map generation is not in production mode yet.
Yoshida, Wako; Dolan, Ray J.; Friston, Karl J.
2008-01-01
This paper introduces a model of ‘theory of mind’, namely, how we represent the intentions and goals of others to optimise our mutual interactions. We draw on ideas from optimum control and game theory to provide a ‘game theory of mind’. First, we consider the representations of goals in terms of value functions that are prescribed by utility or rewards. Critically, the joint value functions and ensuing behaviour are optimised recursively, under the assumption that I represent your value function, your representation of mine, your representation of my representation of yours, and so on ad infinitum. However, if we assume that the degree of recursion is bounded, then players need to estimate the opponent's degree of recursion (i.e., sophistication) to respond optimally. This induces a problem of inferring the opponent's sophistication, given behavioural exchanges. We show it is possible to deduce whether players make inferences about each other and quantify their sophistication on the basis of choices in sequential games. This rests on comparing generative models of choices with, and without, inference. Model comparison is demonstrated using simulated and real data from a ‘stag-hunt’. Finally, we note that exactly the same sophisticated behaviour can be achieved by optimising the utility function itself (through prosocial utility), producing unsophisticated but apparently altruistic agents. This may be relevant ethologically in hierarchal game theory and coevolution. PMID:19112488
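A toy rendering of the bounded-recursion idea follows: a level-k player best-responds to a level-(k-1) model of the opponent in a simultaneous stag hunt. The payoff numbers and the simultaneous-move simplification are assumptions; the paper treats sequential games with recursively optimised value functions.

```python
# Level-k sketch of bounded recursion of beliefs in a stag hunt (toy version,
# not the paper's variational formulation).
STAG, HARE = 0, 1
PAYOFF = [[4, 0],   # my payoff hunting stag vs (stag, hare) opponent
          [3, 3]]   # hunting hare: safe payoff either way

def strategy(level):
    """Believed P(opponent plays STAG) for a level-`level` player, and reply."""
    if level == 0:
        return 0.5, None                      # level-0: unsophisticated/random
    p_stag, _ = strategy(level - 1)           # model the opponent one level down
    ev_stag = p_stag * PAYOFF[STAG][STAG] + (1 - p_stag) * PAYOFF[STAG][HARE]
    ev_hare = PAYOFF[HARE][STAG]              # hare pays 3 regardless
    choice = STAG if ev_stag > ev_hare else HARE
    return (1.0 if choice == STAG else 0.0), choice

# With these payoffs the safe hare choice propagates up the hierarchy.
for k in (1, 2, 3):
    print(k, strategy(k)[1])
```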
NASA Astrophysics Data System (ADS)
Rezrazi, Ahmed; Hanini, Salah; Laidi, Maamar
2016-02-01
The right design and high efficiency of solar energy systems require accurate information on the availability of solar radiation. Due to the cost of purchasing and maintaining radiometers, these data are not readily available, so there is a need to develop alternative ways of generating them. Artificial neural networks (ANNs) are excellent and effective tools for learning, pinpointing or generalising data regularities, as they have the ability to model nonlinear functions and can cope with complex 'noisy' data. The main objective of this paper is to show how to reach an optimal ANN model for use in predicting solar radiation. The data measured during the year 2007 in Ghardaïa city (Algeria) are used to demonstrate the optimisation methodology. The performance evaluation and the comparison of the ANN models' results with measured data are made on the basis of the mean absolute percentage error (MAPE). It is found that the MAPE of the optimal ANN model reaches 1.17%. The model also yields a root mean square error (RMSE) of 14.06% and an MBE of 0.12. The accuracy of the outputs exceeded 97% and reached up to 99.29%. The results obtained indicate that the optimisation strategy satisfies practical requirements and can be generalised for any location in the world and used in fields other than solar radiation estimation.
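The statistics used to rank candidate models are easily stated in code; the sketch below computes MAPE, RMSE and MBE on placeholder irradiance values (the study itself uses the 2007 Ghardaïa measurements).

```python
# Error measures for comparing predicted vs measured irradiance (illustrative
# arrays only; the paper's data are the Ghardaïa 2007 measurements).
import numpy as np

def mape(measured, predicted):
    return 100.0 * np.mean(np.abs((measured - predicted) / measured))

def rmse(measured, predicted):
    return np.sqrt(np.mean((measured - predicted) ** 2))

def mbe(measured, predicted):
    return np.mean(predicted - measured)

measured = np.array([450.0, 620.0, 710.0, 530.0])    # W/m^2, placeholder values
predicted = np.array([455.0, 612.0, 715.0, 528.0])
print(mape(measured, predicted), rmse(measured, predicted), mbe(measured, predicted))
```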
A Combinatorial Geometry Computer Description of the MEP-021A Generator Set
1979-02-01
[DD Form 1473 fragment recovered from the scanned report; keywords: Generator Computer Description, Gasoline Generator, GIFT, MEP-021A.] This... GIFT code is also stored on magnetic tape for future vulnerability analysis. ...the Geometric Information for Targets (GIFT) computer code. The GIFT code traces shotlines through a COM-GEOM description from any specified attack
NASA Astrophysics Data System (ADS)
Rayhana, N.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.; Sazli, M.; Yahya, Z. R.
2017-09-01
This study presents the application of an optimisation method to reduce the warpage of a side arm part. Autodesk Moldflow Insight software was integrated into this study to analyse the warpage. A design of experiments (DOE) for response surface methodology (RSM) was constructed, and particle swarm optimisation (PSO) was applied using the regression equation obtained from the RSM. The optimisation method yields optimised processing parameters with minimum warpage. Mould temperature, melt temperature, packing pressure, packing time and cooling time were selected as the variable parameters. Parameter selection was based on the most significant factors affecting warpage reported by previous researchers. The results show that warpage was improved by 28.16% with RSM and 28.17% with PSO; the improvement of PSO over RSM is only 0.01%. Thus, the optimisation using RSM is already sufficient to give the best combination of parameters and the optimum warpage value for the side arm part. The most significant parameter affecting warpage is packing pressure.
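A bare-bones PSO loop of the kind applied here is sketched below, with an assumed surrogate standing in for the RSM warpage regression and five decision variables mirroring the five process parameters; inertia and acceleration coefficients are generic defaults, not the study's settings.

```python
# Particle swarm optimisation sketch; warpage() is a placeholder surrogate for
# the RSM regression of warpage vs. the five process parameters.
import random

def warpage(p):
    """Stand-in objective (assumption): quadratic bowl with an offset optimum."""
    return sum((x - 0.3 * i) ** 2 for i, x in enumerate(p))

def pso(dim=5, n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(0, 2) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [list(p) for p in pos]            # personal bests
    gbest = min(pbest, key=warpage)           # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if warpage(pos[i]) < warpage(pbest[i]):
                pbest[i] = list(pos[i])
        gbest = min(pbest, key=warpage)
    return gbest

print(warpage(pso()))
```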
Optimisation study of a vehicle bumper subsystem with fuzzy parameters
NASA Astrophysics Data System (ADS)
Farkas, L.; Moens, D.; Donders, S.; Vandepitte, D.
2012-10-01
This paper deals with the design and optimisation for crashworthiness of a vehicle bumper subsystem, which is a key scenario for vehicle component design. Automotive manufacturers and suppliers have to find optimal design solutions for such subsystems that comply with the conflicting requirements of the regulatory bodies regarding functional performance (safety and repairability) and regarding environmental impact (mass). For the bumper design challenge, an integrated methodology for multi-attribute design engineering of mechanical structures is set up. The integrated process captures the various tasks that are usually performed manually, thereby facilitating automated design iterations for optimisation. Subsequently, an optimisation process is applied that takes the effect of parametric uncertainties into account, such that the system-level possibility of failure is acceptable. This optimisation process is referred to as possibility-based design optimisation and integrates the fuzzy FE analysis applied for the uncertainty treatment in crash simulations. This process is the counterpart of the reliability-based design optimisation used in a probabilistic context with statistically defined parameters (variabilities).
Automatic Testcase Generation for Flight Software
NASA Technical Reports Server (NTRS)
Bushnell, David Henry; Pasareanu, Corina; Mackey, Ryan M.
2008-01-01
The TacSat3 project is applying Integrated Systems Health Management (ISHM) technologies to an Air Force spacecraft for operational evaluation in space. The experiment will demonstrate the effectiveness and cost of ISHM and vehicle systems management (VSM) technologies through onboard operation for extended periods. We present two approaches to automatic testcase generation for ISHM: 1) A blackbox approach that views the system as a blackbox, and uses a grammar-based specification of the system's inputs to automatically generate *all* inputs that satisfy the specifications (up to prespecified limits); these inputs are then used to exercise the system. 2) A whitebox approach that performs analysis and testcase generation directly on a representation of the internal behaviour of the system under test. The enabling technologies for both these approaches are model checking and symbolic execution, as implemented in the Ames' Java PathFinder (JPF) tool suite. Model checking is an automated technique for software verification. Unlike simulation and testing which check only some of the system executions and therefore may miss errors, model checking exhaustively explores all possible executions. Symbolic execution evaluates programs with symbolic rather than concrete values and represents variable values as symbolic expressions. We are applying the blackbox approach to generating input scripts for the Spacecraft Command Language (SCL) from Interface and Control Systems. SCL is an embedded interpreter for controlling spacecraft systems. TacSat3 will be using SCL as the controller for its ISHM systems. We translated the SCL grammar into a program that outputs scripts conforming to the grammars. Running JPF on this program generates all legal input scripts up to a prespecified size. Script generation can also be targeted to specific parts of the grammar of interest to the developers. These scripts are then fed to the SCL Executive. ICS's in-house coverage tools will be run to measure code coverage. Because the scripts exercise all parts of the grammar, we expect them to provide high code coverage. This blackbox approach is suitable for systems for which we do not have access to the source code. We are applying whitebox test generation to the Spacecraft Health INference Engine (SHINE) that is part of the ISHM system. In TacSat3, SHINE will execute an on-board knowledge base for fault detection and diagnosis. SHINE converts its knowledge base into optimized C code which runs onboard TacSat3. SHINE can translate its rules into an intermediate representation (Java) suitable for analysis with JPF. JPF will analyze SHINE's Java output using symbolic execution, producing testcases that can provide either complete or directed coverage of the code. Automatically generated test suites can provide full code coverage and be quickly regenerated when code changes. Because our tools analyze executable code, they fully cover the delivered code, not just models of the code. This approach also provides a way to generate tests that exercise specific sections of code under specific preconditions. This capability gives us more focused testing of specific sections of code.
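The blackbox approach can be miniaturised as exhaustive expansion of a toy grammar up to a depth bound. The grammar below is an invented stand-in for SCL's, and the real tool performs the enumeration via JPF model checking rather than plain recursion.

```python
# Exhaustive generation of all scripts a small grammar can produce, up to a
# depth bound (toy stand-in for the grammar-based blackbox approach).
GRAMMAR = {
    "script": [["cmd"], ["cmd", "script"]],
    "cmd": [["SET", "var", "val"], ["SEND", "val"]],
    "var": [["x"], ["y"]],
    "val": [["0"], ["1"]],
}

def expand(symbol, depth):
    if symbol not in GRAMMAR:                 # terminal token
        return [[symbol]]
    if depth == 0:                            # depth bound reached
        return []
    results = []
    for production in GRAMMAR[symbol]:
        partials = [[]]
        for part in production:               # cross-product over the production
            partials = [p + tail for p in partials
                        for tail in expand(part, depth - 1)]
        results.extend(partials)
    return results

scripts = expand("script", 4)
print(len(scripts), " ".join(scripts[0]))
```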
Modeling the small-scale dish-mounted solar thermal Brayton cycle
NASA Astrophysics Data System (ADS)
Le Roux, Willem G.; Meyer, Josua P.
2016-05-01
The small-scale dish-mounted solar thermal Brayton cycle (STBC) makes use of a sun-tracking dish reflector, solar receiver, recuperator and micro-turbine to generate power in the range of 1-20 kW. The modeling of such a system, using a turbocharger as micro-turbine, is required so that optimisation and further development of an experimental setup can be done. As a validation, an analytical model of the small-scale STBC in Matlab, where the net power output is determined from an exergy analysis, is compared with Flownex, an integrated systems CFD code. A 4.8 m diameter parabolic dish with open-cavity tubular receiver and plate-type counterflow recuperator is considered, based on previous work. A dish optical error of 10 mrad, a tracking error of 1° and a receiver aperture area of 0.25 m × 0.25 m are considered. Since the recuperator operates at a very high average temperature, the recuperator is modeled using an updated ɛ-NTU method which takes heat loss to the environment into consideration. Compressor and turbine maps from standard off-the-shelf Garrett turbochargers are used. The results show that for the calculation of the steady-state temperatures and pressures, there is good comparison between the Matlab and Flownex results (within 8%) except for the recuperator outlet temperature, which is due to the use of different ɛ-NTU methods. With the use of Matlab and Flownex, it is shown that the small-scale open STBC with an existing off-the-shelf turbocharger could generate a positive net power output with solar-to-mechanical efficiency of up to 12%, with much room for improvement.
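The recuperator model rests on the effectiveness-NTU relation for a counterflow exchanger, shown in its textbook form below; the paper's updated variant additionally models heat loss to the environment, which this sketch omits, and the numbers are illustrative rather than the study's.

```python
# Counterflow effectiveness-NTU relation (textbook form; the paper's updated
# method adds environmental heat loss, omitted here).
import math

def effectiveness_counterflow(ntu, cr):
    """Effectiveness of a counterflow exchanger; cr = Cmin/Cmax."""
    if abs(cr - 1.0) < 1e-9:                  # balanced-stream limit
        return ntu / (1.0 + ntu)
    e = math.exp(-ntu * (1.0 - cr))
    return (1.0 - e) / (1.0 - cr * e)

# Illustrative numbers only: UA and capacity rates are not the paper's values.
ua, c_min, c_max = 900.0, 300.0, 330.0        # W/K
eff = effectiveness_counterflow(ua / c_min, c_min / c_max)
q_max = c_min * (1050.0 - 600.0)              # hot-in minus cold-in, kelvin
print(eff, eff * q_max, "W recovered")
```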
LFRic: Building a new Unified Model
NASA Astrophysics Data System (ADS)
Melvin, Thomas; Mullerworth, Steve; Ford, Rupert; Maynard, Chris; Hobson, Mike
2017-04-01
The LFRic project, named for Lewis Fry Richardson, aims to develop a replacement for the Met Office Unified Model in order to meet the challenges which will be presented by the next generation of exascale supercomputers. This project, a collaboration between the Met Office, STFC Daresbury and the University of Manchester, builds on the earlier GungHo project to redesign the dynamical core, in partnership with NERC. The new atmospheric model aims to retain the performance of the current ENDGame dynamical core and associated subgrid physics, while also enabling far greater scalability and flexibility to accommodate future supercomputer architectures. The design of the model revolves around the principle of 'separation of concerns', whereby the natural-science aspects of the code can be developed without worrying about the underlying architecture, while machine-dependent optimisations can be carried out at a high level. These principles are put into practice through the development of an autogenerated parallel systems software layer (known as the PSy layer) using a domain-specific compiler called PSyclone. The prototype model includes a re-write of the dynamical core using a mixed finite element method, in which different function spaces are used to represent the various fields. It is able to run in parallel with MPI and OpenMP and has been tested on over 200,000 cores. In this talk an overview of both the natural science and computational science implementations of the model will be presented.
Food for patients at nutritional risk: a model of food sensory quality to promote intake.
Sorensen, Janice; Holm, Lotte; Frøst, Michael Bom; Kondrup, Jens
2012-10-01
The aim was to investigate food sensory quality as experienced and perceived by patients at nutritional risk, within the context of establishing a framework to develop foods to promote intake. Patients at nutritional risk (NRS-2002; food intake ≤ 75% of requirements) were observed at meals in hospital (food choice, hunger/fullness/appetite scores). This was followed by a semi-structured interview based on the observations and focusing on food sensory perception and eating ability as related to food quality. Two weeks post-discharge, a 3-day food record was taken and interviews were repeated by phone. Interviews were transcribed, coded, and analysed thematically. Patients (N = 22) from departments of gastrointestinal surgery, oncology, infectious medicine, cardiology, and hepatology were interviewed at meals (N = 65) in hospital (82%) and post-discharge (18%). Food sensory perception and eating ability dictated specific food sensory needs (i.e., appearance, aroma, taste, texture, temperature, and variety defining food sensory quality to promote intake) within the context of motivation to eat, including pleasure, comfort, and survival. Patients exhibited large inter- and intra-individual variability in their food sensory needs. The study generated a model for optimising food sensory quality and developing user-driven, innovative foods to promote intake in patients at nutritional risk. Copyright © 2012 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
Time for change: a roadmap to guide the implementation of the World Anti-Doping Code 2015
Dvorak, Jiri; Baume, Norbert; Botré, Francesco; Broséus, Julian; Budgett, Richard; Frey, Walter O; Geyer, Hans; Harcourt, Peter Rex; Ho, Dave; Howman, David; Isola, Victor; Lundby, Carsten; Marclay, François; Peytavin, Annie; Pipe, Andrew; Pitsiladis, Yannis P; Reichel, Christian; Robinson, Neil; Rodchenkov, Grigory; Saugy, Martial; Sayegh, Souheil; Segura, Jordi; Thevis, Mario; Vernec, Alan; Viret, Marjolaine; Vouillamoz, Marc; Zorzoli, Mario
2014-01-01
A medical and scientific multidisciplinary consensus meeting was held from 29 to 30 November 2013 on Anti-Doping in Sport at the Home of FIFA in Zurich, Switzerland, to create a roadmap for the implementation of the 2015 World Anti-Doping Code. The consensus statement and accompanying papers set out the priorities for the antidoping community in research, science and medicine. The participants achieved consensus on a strategy for the implementation of the 2015 World Anti-Doping Code. Key components of this strategy include: (1) sport-specific risk assessment, (2) prevalence measurement, (3) sport-specific test distribution plans, (4) storage and reanalysis, (5) analytical challenges, (6) forensic intelligence, (7) psychological approach to optimise the most deterrent effect, (8) the Athlete Biological Passport (ABP) and confounding factors, (9) data management system (Anti-Doping Administration & Management System (ADAMS)), (10) education, (11) research needs and necessary advances, (12) inadvertent doping and (13) management and ethics: biological data. True implementation of the 2015 World Anti-Doping Code will depend largely on the ability to align thinking around these core concepts and strategies. FIFA, jointly with all other engaged International Federations of sports (IFs), the International Olympic Committee (IOC) and the World Anti-Doping Agency (WADA), are ideally placed to lead transformational change with the unwavering support of the wider antidoping community. The outcome of the consensus meeting was the creation of the ad hoc Working Group charged with the responsibility of moving this agenda forward. PMID:24764550
Benavente, L; Villanueva, M J; Vega, P; Casado, I; Vidal, J A; Castaño, B; Amorín, M; de la Vega, V; Santos, H; Trigo, A; Gómez, M B; Larrosa, D; Temprano, T; González, M; Murias, E; Calleja, S
2016-04-01
Intravenous thrombolysis with alteplase is an effective treatment for ischaemic stroke when applied during the first 4.5 hours, but fewer than 15% of patients have access to this technique. Mechanical thrombectomy is more frequently able to recanalise proximal occlusions in large vessels, but the infrastructure it requires makes it even less available. We describe the implementation of code stroke in Asturias, as well as the process of adapting various existing resources for urgent stroke care in the region. By considering these resources, and the demographic and geographic circumstances of our region, we examine ways of reorganising the code stroke protocol that would optimise treatment times and provide the most appropriate treatment for each patient. We organised the 8 health districts in Asturias so as to permit referral of candidates for reperfusion therapies to either of the 2 hospitals with 24-hour stroke units and on-call neurologists providing IV fibrinolysis. Hospitals were assigned according to proximity and stroke severity; the most severe cases were immediately referred to the hospital with on-call interventional neurology care. Patient triage was provided by pre-hospital emergency services according to the NIHSS score. Modifications to code stroke in Asturias have allowed us to apply reperfusion therapies with good results, while emphasising equitable care and managing the severity-time ratio to offer the best and safest treatment for each patient as soon as possible. Copyright © 2015 Sociedad Española de Neurología. Published by Elsevier España, S.L.U. All rights reserved.
Comparison of three coding strategies for a low cost structure light scanner
NASA Astrophysics Data System (ADS)
Xiong, Hanwei; Xu, Jun; Xu, Chenxi; Pan, Ming
2014-12-01
Coded structured light is widely used for 3D scanning, and different coding strategies are adopted to suit different goals. In this paper, three coding strategies are compared, and one of them is selected to implement a low cost structured light scanner for under €100. To reach this goal, the projector and the video camera must be as cheap as possible, which leads to problems related to light coding: a very cheap projector cannot generate complex intensity patterns, and even if it could, a very cheap camera could not capture them. Based on Gray code, three different strategies, called phase-shift, line-shift and bit-shift, are implemented and compared. The bit-shift Gray code is the contribution of this paper, in which a simple, stable light pattern is used to generate dense (mean point distance < 0.4 mm) and accurate (mean error < 0.1 mm) results. Full algorithm details and some examples are presented in the paper.
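As a rough illustration of the Gray-code basis shared by all three strategies, the sketch below (with our own function names and an idealised projector, not the paper's implementation) generates the bit-plane stripe patterns and decodes an observed bit sequence back to a projector column; pattern shifting and all hardware detail are omitted.

    # Minimal sketch of Gray-code stripe pattern generation and decoding for
    # structured light, assuming an idealised projector with 2**n_bits columns.
    def gray_encode(index):
        """Binary stripe index -> Gray code (adjacent stripes differ in 1 bit)."""
        return index ^ (index >> 1)

    def gray_decode(gray):
        """Gray code -> binary stripe index."""
        index = gray
        while gray:
            gray >>= 1
            index ^= gray
        return index

    def stripe_patterns(n_bits):
        """One binary stripe image per bit plane (lists of 0/1 per column)."""
        width = 2 ** n_bits
        return [[(gray_encode(x) >> b) & 1 for x in range(width)]
                for b in reversed(range(n_bits))]

    # A camera pixel that observed bits 1,0,1,1 across four frames decodes to:
    observed = 0b1011
    print(gray_decode(observed))  # projector stripe column 13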
FY17 Status Report on NEAMS Neutronics Activities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, C. H.; Jung, Y. S.; Smith, M. A.
2017-09-30
Under the U.S. DOE NEAMS program, the high-fidelity neutronics code system has been developed to support the multiphysics modeling and simulation capability named SHARP. The neutronics code system includes the high-fidelity neutronics code PROTEUS, the cross section library and preprocessing tools, the multigroup cross section generation code MC2-3, the in-house mesh generation tool, the perturbation and sensitivity analysis code PERSENT, and post-processing tools. The main objectives of the NEAMS neutronics activities in FY17 are to continue development of an advanced nodal solver in PROTEUS for use in nuclear reactor design and analysis projects, implement a simplified sub-channel based thermal-hydraulic (T/H) capability into PROTEUS to efficiently compute the thermal feedback, improve the performance of PROTEUS-MOCEX using numerical acceleration and code optimization, improve the cross section generation tools including MC2-3, and continue to perform verification and validation tests for PROTEUS.
CAMELOT: Computational-Analytical Multi-fidElity Low-thrust Optimisation Toolbox
NASA Astrophysics Data System (ADS)
Di Carlo, Marilena; Romero Martin, Juan Manuel; Vasile, Massimiliano
2018-03-01
Computational-Analytical Multi-fidElity Low-thrust Optimisation Toolbox (CAMELOT) is a toolbox for the fast preliminary design and optimisation of low-thrust trajectories. It solves highly complex combinatorial problems to plan multi-target missions characterised by long spirals including different perturbations. To do so, CAMELOT implements a novel multi-fidelity approach combining analytical surrogate modelling and accurate computational estimations of the mission cost. Decisions are then made using two optimisation engines included in the toolbox, a single-objective global optimiser, and a combinatorial optimisation algorithm. CAMELOT has been applied to a variety of case studies: from the design of interplanetary trajectories to the optimal de-orbiting of space debris and from the deployment of constellations to on-orbit servicing. In this paper, the main elements of CAMELOT are described and two examples, solved using the toolbox, are presented.
Boundary element based multiresolution shape optimisation in electrostatics
NASA Astrophysics Data System (ADS)
Bandara, Kosala; Cirak, Fehmi; Of, Günther; Steinbach, Olaf; Zapletal, Jan
2015-09-01
We consider the shape optimisation of high-voltage devices subject to electrostatic field equations by combining fast boundary elements with multiresolution subdivision surfaces. The geometry of the domain is described with subdivision surfaces and different resolutions of the same geometry are used for optimisation and analysis. The primal and adjoint problems are discretised with the boundary element method using a sufficiently fine control mesh. For shape optimisation the geometry is updated starting from the coarsest control mesh with increasingly finer control meshes. The multiresolution approach effectively prevents the appearance of non-physical geometry oscillations in the optimised shapes. Moreover, there is no need for mesh regeneration or smoothing during the optimisation due to the absence of a volume mesh. We present several numerical experiments and one industrial application to demonstrate the robustness and versatility of the developed approach.
Tail mean and related robust solution concepts
NASA Astrophysics Data System (ADS)
Ogryczak, Włodzimierz
2014-01-01
Robust optimisation might be viewed as a multicriteria optimisation problem where objectives correspond to the scenarios although their probabilities are unknown or imprecise. The simplest robust solution concept represents a conservative approach focused on optimisation of the worst-case scenario results. A softer concept allows one to optimise the tail mean, thus combining performances under multiple worst scenarios. We show that, for robust models in which the probabilities may vary only within given intervals, the tail mean represents the robust solution only for upper-bounded probabilities. For arbitrary intervals of probabilities the corresponding robust solution may be expressed by the optimisation of appropriately combined mean and tail mean criteria, thus remaining easily implementable with auxiliary linear inequalities. Moreover, we use the tail mean concept to develop linear programming implementable robust solution concepts related to risk-averse optimisation criteria.
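To make the tail-mean criterion concrete, a minimal sketch follows (our notation and toy numbers, not the paper's LP formulation): for a list of scenario outcomes, the tail mean averages the k worst values, and a weighted combination with the overall mean gives the combined criterion.

    # Sketch of the tail mean (average of the k worst scenario outcomes) and a
    # combined mean / tail-mean criterion, assuming a minimisation setting
    # where larger outcome values are worse.
    def tail_mean(outcomes, k):
        worst = sorted(outcomes, reverse=True)[:k]   # k worst (largest) values
        return sum(worst) / k

    def combined_criterion(outcomes, k, lam):
        """lam = 1 recovers the pure tail mean; lam = 0 the plain mean."""
        mean = sum(outcomes) / len(outcomes)
        return lam * tail_mean(outcomes, k) + (1 - lam) * mean

    print(combined_criterion([3.0, 7.0, 5.0, 9.0], k=2, lam=0.5))  # 7.0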
Almén, Anja; Båth, Magnus
2016-06-01
The overall aim of the present work was to develop a conceptual framework for managing radiation dose in diagnostic radiology with the intention of supporting optimisation. An optimisation process was first derived. The framework for managing radiation dose, based on the derived optimisation process, was then outlined. The optimisation process is built on four stages: providing equipment, establishing methodology, performing examinations and ensuring quality. The optimisation process comprises a series of activities and actions at these stages. The current system of diagnostic reference levels is an activity in the last stage, ensuring quality. The system thus becomes a reactive activity that only to a certain extent engages the core activity of the radiology department, performing examinations. Three reference dose levels (possible, expected and established) were assigned to the three stages in the optimisation process, excluding ensuring quality. A reasonably achievable dose range is also derived, indicating an acceptable deviation from the established dose level. A reasonable radiation dose for a single patient is within this range. The suggested framework for managing radiation dose should be regarded as one part of the optimisation process. The optimisation process constitutes a variety of complementary activities, where managing radiation dose is only one part. This emphasises the need to take a holistic approach integrating the optimisation process into different clinical activities. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Automatic generation of user material subroutines for biomechanical growth analysis.
Young, Jonathan M; Yao, Jiang; Ramasubramanian, Ashok; Taber, Larry A; Perucchio, Renato
2010-10-01
The analysis of the biomechanics of growth and remodeling in soft tissues requires the formulation of specialized pseudoelastic constitutive relations. The nonlinear finite element analysis package ABAQUS allows the user to implement such specialized material responses through the coding of a user material subroutine called UMAT. However, hand coding UMAT subroutines is a challenge even for simple pseudoelastic materials and requires substantial time to debug and test the code. To resolve this issue, we develop an automatic UMAT code generation procedure for pseudoelastic materials using the symbolic mathematics package MATHEMATICA and extend the UMAT generator to include continuum growth. The performance of the automatically coded UMAT is tested by simulating the stress-stretch response of a material defined by a Fung-orthotropic strain energy function, subject to uniaxial stretching, equibiaxial stretching, and simple shear in ABAQUS. The MATHEMATICA UMAT generator is then extended to include continuum growth by adding a growth subroutine to the automatically generated UMAT. The MATHEMATICA UMAT generator correctly derives the variables required in the UMAT code, quickly providing a ready-to-use UMAT. In turn, the UMAT accurately simulates the pseudoelastic response. In order to test the growth UMAT, we simulate the growth-based bending of a bilayered bar with differing fiber directions in a nongrowing passive layer. The anisotropic passive layer, being topologically tied to the growing isotropic layer, causes the bending bar to twist laterally. The results of simulations demonstrate the validity of the automatically coded UMAT, used in both standardized tests of hyperelastic materials and for a biomechanical growth analysis.
HOMAR: A computer code for generating homotopic grids using algebraic relations: User's manual
NASA Technical Reports Server (NTRS)
Moitra, Anutosh
1989-01-01
A computer code for fast automatic generation of quasi-three-dimensional grid systems for aerospace configurations is described. The code employs a homotopic method to algebraically generate two-dimensional grids in cross-sectional planes, which are stacked to produce a three-dimensional grid system. Implementation of the algebraic equivalents of the homotopic relations for generating body geometries and grids is explained. Procedures for controlling grid orthogonality and distortion are described. Test cases with description and specification of inputs are presented in detail. The FORTRAN computer program and notes on implementation and use are included.
Construction of self-dual codes in the Rosenbloom-Tsfasman metric
NASA Astrophysics Data System (ADS)
Krisnawati, Vira Hari; Nisa, Anzi Lina Ukhtin
2017-12-01
Linear codes are among the most basic and useful codes in coding theory. Generally, a linear code is a code over a finite field equipped with the Hamming metric. Among the most interesting families of codes, the family of self-dual codes is particularly important, because it includes some of the best known error-correcting codes. The Hamming metric has been generalised to the Rosenbloom-Tsfasman metric (RT-metric). The inner product in the RT-metric differs from the Euclidean inner product that is used to define duality in the Hamming metric, and most codes that are self-dual in the Hamming metric are not self-dual in the RT-metric. A generator matrix is essential for constructing a code because it contains a basis of the code. Therefore, in this paper, we give some theorems and methods to construct self-dual codes in the RT-metric by considering properties of the inner product and the generator matrix. We also illustrate each kind of construction with examples.
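For background on the classical notion the paper generalises, the snippet below checks self-duality of the binary [8,4] extended Hamming code under the Euclidean inner product, where a code is self-dual iff its generator matrix satisfies G·Gᵀ = 0 over GF(2) and k = n/2; the RT-metric inner product used in the paper differs and is not reproduced here.

    import numpy as np

    # Generator matrix of the binary [8,4] extended Hamming code.
    G = np.array([[1,0,0,0, 0,1,1,1],
                  [0,1,0,0, 1,0,1,1],
                  [0,0,1,0, 1,1,0,1],
                  [0,0,0,1, 1,1,1,0]], dtype=int)

    # Self-dual in the Hamming metric iff every pair of rows of G is
    # orthogonal over GF(2), i.e. G @ G.T == 0 (mod 2), and k == n/2.
    print(np.all((G @ G.T) % 2 == 0) and G.shape[0] * 2 == G.shape[1])  # True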
Van Geit, Werner; Gevaert, Michael; Chindemi, Giuseppe; Rössert, Christian; Courcol, Jean-Denis; Muller, Eilif B; Schürmann, Felix; Segev, Idan; Markram, Henry
2016-01-01
At many scales in neuroscience, appropriate mathematical models take the form of complex dynamical systems. Parameterizing such models to conform to the multitude of available experimental constraints is a global non-linear optimisation problem with a complex fitness landscape, requiring numerical techniques to find suitable approximate solutions. Stochastic optimisation approaches, such as evolutionary algorithms, have been shown to be effective, but often the setting up of such optimisations and the choice of a specific search algorithm and its parameters is non-trivial, requiring domain-specific expertise. Here we describe BluePyOpt, a Python package targeted at the broad neuroscience community to simplify this task. BluePyOpt is an extensible framework for data-driven model parameter optimisation that wraps and standardizes several existing open-source tools. It simplifies the task of creating and sharing these optimisations, and the associated techniques and knowledge. This is achieved by abstracting the optimisation and evaluation tasks into various reusable and flexible discrete elements according to established best-practices. Further, BluePyOpt provides methods for setting up both small- and large-scale optimisations on a variety of platforms, ranging from laptops to Linux clusters and cloud-based compute infrastructures. The versatility of the BluePyOpt framework is demonstrated by working through three representative neuroscience specific use cases.
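To give a flavour of the kind of search BluePyOpt orchestrates, here is a self-contained evolutionary-optimisation sketch; it deliberately does not use BluePyOpt's actual API, and the toy fitness stands in for a comparison of model output against experimental target features.

    import random

    # Illustrative-only evolutionary parameter search of the kind BluePyOpt
    # wraps; the quadratic fitness is a stand-in for comparing simulated
    # responses against experimental features.
    def fitness(params, target=(1.0, -2.0)):
        return sum((p - t) ** 2 for p, t in zip(params, target))

    def evolve(bounds, pop_size=40, generations=50, sigma=0.1):
        pop = [[random.uniform(lo, hi) for lo, hi in bounds]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness)
            parents = pop[: pop_size // 2]            # truncation selection
            children = [[min(max(g + random.gauss(0, sigma), lo), hi)
                         for g, (lo, hi) in zip(random.choice(parents), bounds)]
                        for _ in range(pop_size - len(parents))]
            pop = parents + children                  # elitist replacement
        return min(pop, key=fitness)

    print(evolve([(-5, 5), (-5, 5)]))   # converges near (1.0, -2.0)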
Gordon, G T; McCann, B P
2015-01-01
This paper describes the basis of a stakeholder-based sustainable optimisation indicator (SOI) system to be developed for small-to-medium sized activated sludge (AS) wastewater treatment plants (WwTPs) in the Republic of Ireland (ROI). Key technical publications relating to best practice plant operation, performance audits and optimisation, and indicator and benchmarking systems for wastewater services are identified. Optimisation studies were developed at a number of Irish AS WwTPs and key findings are presented. A national AS WwTP manager/operator survey was carried out to verify the applied operational findings and identify the key operator stakeholder requirements for this proposed SOI system. It was found that most plants require more consistent operational data-based decision-making, monitoring and communication structures to facilitate optimised, sustainable and continuous performance improvement. The applied optimisation and stakeholder consultation phases form the basis of the proposed stakeholder-based SOI system. This system will allow for continuous monitoring and rating of plant performance, facilitate optimised operation and encourage the prioritisation of performance improvement through tracking key operational metrics. Plant optimisation has become a major focus due to the transfer of all ROI water services to a national water utility from individual local authorities and the implementation of the EU Water Framework Directive.
Ultra-High Capacity Networking Enabled By Optical Technologies
2003-08-01
interface that raises an interrupt regardless of what the kernel may be doing. Piglet also uses fast wakeup signals to optimise scheduling in certain...executed which switches context directly to the now-runnable application, i.e., without invoking the scheduler. Since the fast wakeup handler does not...and scheduling, the first one has been built and tested. 10 Gb/s data patterns have been generated with this board. A systems experiment has
pyPcazip: A PCA-based toolkit for compression and analysis of molecular simulation data
NASA Astrophysics Data System (ADS)
Shkurti, Ardita; Goni, Ramon; Andrio, Pau; Breitmoser, Elena; Bethune, Iain; Orozco, Modesto; Laughton, Charles A.
The biomolecular simulation community is currently in need of novel and optimised software tools that can analyse and process, in reasonable timescales, the large amounts of molecular simulation data generated. In light of this, we have developed and present here pyPcazip: a suite of software tools for compression and analysis of molecular dynamics (MD) simulation data. The software is compatible with trajectory file formats generated by most contemporary MD engines such as AMBER, CHARMM, GROMACS and NAMD, and is MPI parallelised to permit the efficient processing of very large datasets. pyPcazip is Unix-based, open-source software (BSD licensed) written in Python.
Kassem, Abdulsalam M; Ibrahim, Hany M; Samy, Ahmed M
2017-05-01
The objective of this study was to develop and optimise a self-nanoemulsifying drug delivery system (SNEDDS) of atorvastatin calcium (ATC) for improving its dissolution rate and eventually its oral bioavailability. Ternary phase diagrams were constructed on the basis of solubility and emulsification studies. The composition of ATC-SNEDDS was optimised using the Box-Behnken optimisation design. Optimised ATC-SNEDDS was characterised for various physicochemical properties. Pharmacokinetic, pharmacodynamic and histological studies were performed in rats. Optimised ATC-SNEDDS resulted in a droplet size of 5.66 nm, a zeta potential of -19.52 mV and a t90 of 5.43 min, and completely released ATC within 30 min irrespective of the pH of the medium. The area under the curve of optimised ATC-SNEDDS in rats was 2.34-fold higher than that of ATC suspension. Pharmacodynamic studies revealed a significant reduction in serum lipids of rats with fatty liver. Photomicrographs showed improvement in hepatocyte structure. In this study, we confirmed that ATC-SNEDDS would be a promising approach for improving the oral bioavailability of ATC.
NASA Astrophysics Data System (ADS)
Sheikholeslami, Ghazal; Griffiths, Jonathan; Dearden, Geoff; Edwardson, Stuart P.
Laser forming (LF) has been shown to be a viable alternative for forming automotive-grade advanced high strength steels (AHSS). Due to their high strength, heat sensitivity and low conventional formability, these steels show early fractures, large springback, batch-to-batch inconsistency and high tool wear. In this paper, optimisation of the LF process parameters has been conducted to further understand the impact of a surface heat treatment on DP1000. An FE numerical simulation has been developed to analyse the dynamic thermo-mechanical effects; this has been verified against empirical data. The goal of the optimisation has been to develop a usable process window for the LF of AHSS within strict metallurgical constraints. Results indicate it is possible to LF this material; however, a complex relationship has been found between the generation and maintenance of hardness values in the heated zone. A laser surface hardening effect has been observed that could be beneficial to the efficiency of the process.
Load optimised piezoelectric generator for powering battery-less TPMS
NASA Astrophysics Data System (ADS)
Blažević, D.; Kamenar, E.; Zelenika, S.
2013-05-01
The design of a piezoelectric device aimed at harvesting the kinetic energy of random vibrations on a vehicle's wheel is presented. The harvester is optimised for powering a Tire Pressure Monitoring System (TPMS). On-road experiments are performed in order to measure the frequencies and amplitudes of wheels' vibrations. It is hence determined that the highest amplitudes occur in an unperiodic manner. Initial tests of the battery-less TPMS are performed in laboratory conditions where tuning and system set-up optimization is achieved. The energy obtained from the piezoelectric bimorph is managed by employing the control electronics which converts AC voltage to DC and conditions the output voltage to make it compatible with the load (i.e. sensor electronics and transmitter). The control electronics also manages the sleep/measure/transmit cycles so that the harvested energy is efficiently used. The system is finally tested in real on-road conditions successfully powering the pressure sensor and transmitting the data to a receiver in the car cockpit.
Gulbin, Jason P; Croser, Morag J; Morley, Elissa J; Weissensteiner, Juanita R
2013-01-01
This paper introduces a new sport and athlete development framework that has been generated by multidisciplinary sport practitioners. By combining current theoretical research perspectives with extensive empirical observations from one of the world's leading sport agencies, the proposed FTEM (Foundations, Talent, Elite, Mastery) framework offers broad utility to researchers and sporting stakeholders alike. FTEM is unique in comparison with alternative models and frameworks, because it: integrates general and specialised phases of development for participants within the active lifestyle, sport participation and sport excellence pathways; typically doubles the number of developmental phases (n = 10) in order to better understand athlete transition; avoids chronological and training prescriptions; more optimally establishes a continuum between participation and elite; and allows full inclusion of many developmental support drivers at the sport and system levels. The FTEM framework offers a viable and more flexible alternative for those sporting stakeholders interested in managing, optimising, and researching sport and athlete development pathways.
H2/H∞ control for grid-feeding converter considering system uncertainty
NASA Astrophysics Data System (ADS)
Li, Zhongwen; Zang, Chuanzhi; Zeng, Peng; Yu, Haibin; Li, Shuhui; Fu, Xingang
2017-05-01
Three-phase grid-feeding converters (GFCs) are key components for integrating distributed generation and renewable power sources into the power utility. Conventionally, proportional-integral (PI) and proportional-resonant based control strategies are applied to control the output power or current of a GFC. However, those control strategies have poor transient performance and are not robust against uncertainties and volatilities in the system. This paper proposes an H2/H∞-based control strategy, which can mitigate the above restrictions. The uncertainty and disturbance are included in the formulation of the GFC state-space model, making it more accurate in reflecting practical system conditions. The paper uses a convex optimisation method to design the H2/H∞-based optimal controller. Instead of using a guess-and-check method, the paper uses particle swarm optimisation to search for an H2/H∞ optimal controller. Several case studies, implemented by both simulation and experiment, verify the superiority of the proposed control strategy over traditional PI control methods, especially under dynamic and variable system conditions.
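Since the paper searches for the controller with particle swarm optimisation, a generic PSO loop of the kind referred to is sketched below; the quadratic objective is a toy stand-in for the H2/H∞ cost, and all parameter values are illustrative defaults.

    import random

    # Generic particle swarm optimisation sketch; the objective is a toy
    # quadratic standing in for the H2/Hinf controller cost in the paper.
    def objective(x):
        return sum(xi ** 2 for xi in x)

    def pso(dim=3, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
        pos = [[random.uniform(-5, 5) for _ in range(dim)]
               for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]                 # personal bests
        gbest = min(pbest, key=objective)           # global best
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
                if objective(pos[i]) < objective(pbest[i]):
                    pbest[i] = pos[i][:]
            gbest = min(pbest, key=objective)
        return gbest

    print(pso())  # approaches the origin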
Optimized scalar promotion with load and splat SIMD instructions
Eichenberger, Alexander E; Gschwind, Michael K; Gunnels, John A
2013-10-29
Mechanisms for optimizing scalar code executed on a single instruction multiple data (SIMD) engine are provided. Placement of vector operation-splat operations may be determined based on an identification of scalar and SIMD operations in an original code representation. The original code representation may be modified to insert the vector operation-splat operations based on the determined placement of vector operation-splat operations to generate a first modified code representation. Placement of separate splat operations may be determined based on identification of scalar and SIMD operations in the first modified code representation. The first modified code representation may be modified to insert or delete separate splat operations based on the determined placement of the separate splat operations to generate a second modified code representation. SIMD code may be output based on the second modified code representation for execution by the SIMD engine.
Optimized scalar promotion with load and splat SIMD instructions
Eichenberger, Alexandre E [Chappaqua, NY; Gschwind, Michael K [Chappaqua, NY; Gunnels, John A [Yorktown Heights, NY
2012-08-28
Mechanisms for optimizing scalar code executed on a single instruction multiple data (SIMD) engine are provided. Placement of vector operation-splat operations may be determined based on an identification of scalar and SIMD operations in an original code representation. The original code representation may be modified to insert the vector operation-splat operations based on the determined placement of vector operation-splat operations to generate a first modified code representation. Placement of separate splat operations may be determined based on identification of scalar and SIMD operations in the first modified code representation. The first modified code representation may be modified to insert or delete separate splat operations based on the determined placement of the separate splat operations to generate a second modified code representation. SIMD code may be output based on the second modified code representation for execution by the SIMD engine.
NASA Technical Reports Server (NTRS)
Cross, James H., II; Morrison, Kelly I.; May, Charles H., Jr.; Waddel, Kathryn C.
1989-01-01
The first phase of a three-phase effort to develop a new graphically oriented specification language which will facilitate the reverse engineering of Ada source code into graphical representations (GRs) as well as the automatic generation of Ada source code is described. A simplified view of the three phases of Graphical Representations for Algorithms, Structure, and Processes for Ada (GRASP/Ada) with respect to three basic classes of GRs is presented. Phase 1 concentrated on the derivation of an algorithmic diagram, the control structure diagram (CSD) (CRO88a) from Ada source code or Ada PDL. Phase 2 includes the generation of architectural and system level diagrams such as structure charts and data flow diagrams and should result in a requirements specification for a graphically oriented language able to support automatic code generation. Phase 3 will concentrate on the development of a prototype to demonstrate the feasibility of this new specification language.
PARALLELISATION OF THE MODEL-BASED ITERATIVE RECONSTRUCTION ALGORITHM DIRA.
Örtenberg, A; Magnusson, M; Sandborg, M; Alm Carlsson, G; Malusek, A
2016-06-01
New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphical processing units (GPUs). Despite their obvious benefits, the parallelisation of existing computer programs is not an easy task. In this work, the use of the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks is considered for the parallelisation of the model-based iterative reconstruction algorithm DIRA with the aim of significantly shortening the code's execution time. Selected routines were parallelised using OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelisation of the code with OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelisation with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures. The resulting speedup was substantially lower than the theoretical peak performance of the GPU; the cause was explained. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
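As a language-neutral analogue of the OpenMP parallel-for used in the paper, the sketch below maps the per-element work of one reconstruction iteration over a process pool; the workload and function names are placeholders, not DIRA code.

    from multiprocessing import Pool

    # Analogous in spirit to an OpenMP parallel-for: the per-pixel work of
    # one iterative reconstruction step is distributed over worker processes.
    def update_pixel(args):
        pixel_value, correction = args
        return pixel_value * correction          # placeholder per-pixel update

    def parallel_iteration(image, corrections, workers=4):
        with Pool(workers) as pool:
            return pool.map(update_pixel, zip(image, corrections))

    if __name__ == "__main__":
        image = [1.0] * 10_000
        corrections = [0.5] * 10_000
        print(sum(parallel_iteration(image, corrections)))  # 5000.0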
WinTRAX: A raytracing software package for the design of multipole focusing systems
NASA Astrophysics Data System (ADS)
Grime, G. W.
2013-07-01
The software package TRAX was a simulation tool for modelling the path of charged particles through linear cylindrical multipole fields described by analytical expressions and was a development of the earlier OXRAY program (Grime and Watt, 1983; Grime et al., 1982) [1,2]. In a 2005 comparison of raytracing software packages (Incerti et al., 2005) [3], TRAX/OXRAY was compared with Geant4 and Zgoubi and was found to give close agreement with the more modern codes. TRAX was a text-based program which was only available for operation in a now rare VMS workstation environment, so a new program, WinTRAX, has been developed for the Windows operating system. This implements the same basic computing strategy as TRAX, and key sections of the code are direct translations from FORTRAN to C++, but the Windows environment is exploited to make an intuitive graphical user interface which simplifies and enhances many operations including system definition and storage, optimisation, beam simulation (including with misaligned elements) and aberration coefficient determination. This paper describes the program and presents comparisons with other software and real installations.
Validation of the WIMSD4M cross-section generation code with benchmark results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leal, L.C.; Deen, J.R.; Woodruff, W.L.
1995-02-01
The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment for Research and Test (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the procedure to generate cross-section libraries for reactor analyses and calculations utilizing the WIMSD4M code. To do so, the results of calculations performed with group cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected critical spheres, the TRX critical experiments, and calculations of a modified Los Alamos highly-enriched heavy-water moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.
Application of Three Existing Stope Boundary Optimisation Methods in an Operating Underground Mine
NASA Astrophysics Data System (ADS)
Erdogan, Gamze; Yavuz, Mahmut
2017-12-01
The underground mine planning and design optimisation process has received little attention because of the complexity and variability of problems in underground mines. Although a number of optimisation studies and software tools are available, and some of them in particular have been implemented effectively to determine the ultimate-pit limits in open pit mines, there is still a lack of studies on the optimisation of ultimate stope boundaries in underground mines. The proposed approaches for this purpose aim at maximizing the economic profit by selecting the best possible layout under operational, technical and physical constraints. In this paper, three existing heuristic techniques, the Floating Stope Algorithm, the Maximum Value Algorithm and the Mineable Shape Optimiser (MSO), are examined for optimisation of the stope layout in a case study. Each technique is assessed in terms of applicability, algorithm capabilities and limitations, considering the underground mine planning challenges. Finally, the results are evaluated and compared.
Design Optimisation of a Magnetic Field Based Soft Tactile Sensor
Raske, Nicholas; Kow, Junwai; Alazmani, Ali; Ghajari, Mazdak; Culmer, Peter; Hewson, Robert
2017-01-01
This paper investigates the design optimisation of a magnetic field based soft tactile sensor, composed of a magnet and Hall effect module separated by an elastomer. The aim was to minimise sensitivity of the output force with respect to the input magnetic field; this was achieved by varying the geometry and material properties. Finite element simulations determined the magnetic field and structural behaviour under load. Genetic programming produced phenomenological expressions describing these responses. Optimisation studies constrained by a measurable force and stable loading conditions were conducted; these produced Pareto sets of designs from which the optimal sensor characteristics were selected. The optimisation demonstrated a compromise between sensitivity and the measurable force; a fabricated version of the optimised sensor validated the improvements made using this methodology. The approach presented can be applied in general for optimising soft tactile sensor designs over a range of applications and sensing modes. PMID:29099787
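A minimal sketch of the Pareto filtering step described above, for two maximised objectives such as sensitivity versus measurable force range; the candidate design tuples are invented for illustration.

    # Keep only non-dominated designs for two maximised objectives, e.g.
    # (sensitivity, measurable force range); the candidate tuples are invented.
    def pareto_set(designs):
        front = []
        for d in designs:
            dominated = any(o[0] >= d[0] and o[1] >= d[1] and o != d
                            for o in designs)
            if not dominated:
                front.append(d)
        return front

    candidates = [(0.9, 2.0), (0.7, 3.5), (0.8, 1.0), (0.6, 3.0)]
    print(pareto_set(candidates))   # [(0.9, 2.0), (0.7, 3.5)]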
A CellML simulation compiler and code generator using ODE solving schemes
2012-01-01
Models written in description languages such as CellML are becoming a popular solution to the handling of complex cellular physiological models in biological function simulations. However, in order to fully simulate a model, boundary conditions and ordinary differential equation (ODE) solving schemes have to be combined with it. Though boundary conditions can be described in CellML, it is difficult to explicitly specify ODE solving schemes using existing tools. In this study, we define an ODE solving scheme description language based on XML and propose a code generation system for biological function simulations. In the proposed system, biological simulation programs using various ODE solving schemes can be easily generated. We designed a two-stage approach: in the first stage, the system generates the equation set associating the physiological model variable values at a certain time t with the values at t + Δt; the second stage generates the simulation code for the model. This approach enables the flexible construction of code generation modules that can support complex sets of formulas. We evaluate the relationship between models and their calculation accuracies by simulating complex biological models using various ODE solving schemes. Simulations of the FHN model showed good qualitative and quantitative correspondence with the theoretical predictions. Results for the Luo-Rudy 1991 model showed that only first order precision was achieved. In addition, running the generated code in parallel on a GPU made it possible to speed up the calculation time by a factor of 50. The CellML Compiler source code is available for download at http://sourceforge.net/projects/cellmlcompiler. PMID:23083065
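To illustrate the separation of model equations from ODE solving schemes that the system encodes, the sketch below integrates the FHN (FitzHugh-Nagumo) model with interchangeable Euler and RK4 steps; the parameter values are common textbook choices, not necessarily those used in the paper.

    # FitzHugh-Nagumo right-hand side with common textbook parameters.
    def fhn(state, t, a=0.7, b=0.8, eps=0.08, I=0.5):
        v, w = state
        return (v - v**3 / 3 - w + I, eps * (v + a - b * w))

    # Two interchangeable ODE solving schemes, mirroring the idea of
    # selecting a scheme independently of the model description.
    def euler_step(f, y, t, dt):
        dy = f(y, t)
        return tuple(yi + dt * di for yi, di in zip(y, dy))

    def rk4_step(f, y, t, dt):
        def add(y, k, h):
            return tuple(yi + h * ki for yi, ki in zip(y, k))
        k1 = f(y, t)
        k2 = f(add(y, k1, dt / 2), t + dt / 2)
        k3 = f(add(y, k2, dt / 2), t + dt / 2)
        k4 = f(add(y, k3, dt), t + dt)
        return tuple(yi + dt / 6 * (a + 2 * b + 2 * c + d)
                     for yi, a, b, c, d in zip(y, k1, k2, k3, k4))

    state, t, dt = (0.0, 0.0), 0.0, 0.1
    for _ in range(1000):            # swap in euler_step to compare schemes
        state = rk4_step(fhn, state, t, dt)
        t += dt
    print(state)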
NASA Astrophysics Data System (ADS)
Pang, Kar Mun; Jangi, Mehdi; Bai, Xue-Song; Schramm, Jesper
2015-05-01
In this work, a two-dimensional computational fluid dynamics study of an n-heptane combustion event and the associated soot formation process in a constant volume combustion chamber is reported. The key interest here is to evaluate the sensitivity of the chemical kinetics and submodels of a semi-empirical soot model in predicting the associated events. Numerical computation is performed using an open-source code and a chemistry coordinate mapping approach is used to expedite the calculation. A library consisting of various phenomenological multi-step soot models is constructed and integrated with the spray combustion solver. Prior to the soot modelling, combustion simulations are carried out. Numerical results show that the ignition delay times and lift-off lengths exhibit good agreement with the experimental measurements across a wide range of operating conditions, apart from those in cases with ambient temperatures lower than 850 K. The variation of the soot precursor production with respect to the change of ambient oxygen levels qualitatively agrees with that of the conceptual models when the skeletal n-heptane mechanism is integrated with a reduced pyrene chemistry. Subsequently, a comprehensive sensitivity analysis is carried out to appraise the existing soot formation and oxidation submodels. It is revealed that the soot formation is captured when the surface growth rate is calculated using a square root function of the soot specific surface area and when a pressure-dependent model constant is considered. An optimised soot model is then proposed based on the knowledge gained through this exercise. With the implementation of the optimised model, the simulated soot onset and transport phenomena before reaching quasi-steady state agree reasonably well with the experimental observations. Also, the variation of spatial soot distribution and soot mass produced at oxygen molar fractions ranging from 10.0 to 21.0%, for both low and high density conditions, is reproduced.
NASA Astrophysics Data System (ADS)
Selva Bhuvaneswari, K.; Geetha, P.
2017-05-01
Magnetic resonance imaging segmentation refers to the process of assigning labels to a set of pixels or multiple regions. It plays a major role in the field of biomedical applications, as it is widely used by radiologists to segment medical images into meaningful regions. In recent years, various brain tumour detection techniques have been presented in the literature. The segmentation process of our proposed work comprises three phases: a threshold generation with dynamic modified region growing phase, a texture feature generation phase and a region merging phase. In the first phase, dynamic modified region growing is performed on the input image by dynamically varying two thresholds, which are optimised by the firefly algorithm. After obtaining the region-grown segmented image, edges are detected with an edge detection algorithm. In the second phase, texture features are extracted from the input image using an entropy-based operation. In the region merging phase, the results obtained from the texture feature generation phase are combined with the results of the dynamic modified region growing phase, and similar regions are merged using a distance comparison between regions. After identifying the abnormal tissues, classification is done by a hybrid kernel-based SVM (Support Vector Machine). The performance analysis of the proposed method is carried out by k-fold cross-validation. The proposed method is implemented in MATLAB and evaluated on various images.
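A compact sketch of region growing driven by two thresholds, which the paper tunes with the firefly algorithm; what exactly each threshold bounds here (deviation from the seed and from the running region mean) is our simplification for illustration.

    from collections import deque

    # Simplified two-threshold region growing on a 2-D intensity grid; the
    # roles assigned to the two thresholds are an illustrative choice.
    def region_grow(img, seed, t_seed, t_mean):
        h, w = len(img), len(img[0])
        region = {seed}
        total = img[seed[0]][seed[1]]
        queue = deque([seed])
        while queue:
            r, c = queue.popleft()
            for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1)):
                if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in region:
                    val = img[nr][nc]
                    mean = total / len(region)
                    if (abs(val - img[seed[0]][seed[1]]) <= t_seed
                            and abs(val - mean) <= t_mean):
                        region.add((nr, nc))
                        total += val
                        queue.append((nr, nc))
        return region

    img = [[10, 11, 50],
           [12, 11, 52],
           [13, 12, 55]]
    print(sorted(region_grow(img, (0, 0), t_seed=5, t_mean=4)))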
NASA Astrophysics Data System (ADS)
Benkrid, K.; Belkacemi, S.; Sukhsawas, S.
2005-06-01
This paper proposes an integrated framework for the high level design of high performance signal processing algorithms' implementations on FPGAs. The framework emerged from a constant need to rapidly implement increasingly complicated algorithms on FPGAs while maintaining the high performance needed in many real time digital signal processing applications. This is particularly important for application developers who often rely on iterative and interactive development methodologies. The central idea behind the proposed framework is to dynamically integrate high performance structural hardware description languages with higher level hardware languages in order to help satisfy the dual requirement of high level design and high performance implementation. The paper illustrates this by integrating two environments: Celoxica's Handel-C language, and HIDE, a structural hardware environment developed at the Queen's University of Belfast. On the one hand, Handel-C has been proven to be very useful in the rapid design and prototyping of FPGA circuits, especially control intensive ones. On the other hand, HIDE has been used extensively, and successfully, in the generation of highly optimised parameterisable FPGA cores. In this paper, this is illustrated in the construction of a scalable and fully parameterisable core for image algebra's five core neighbourhood operations, where fully floorplanned efficient FPGA configurations, in the form of EDIF netlists, are generated automatically for instances of the core. In the proposed combined framework, highly optimised data paths are invoked dynamically from within Handel-C, and are synthesised using HIDE. Although the idea might seem simple prima facie, it could have serious implications for the design of future generations of hardware description languages.
a Framework for Distributed Mixed Language Scientific Applications
NASA Astrophysics Data System (ADS)
Quarrie, D. R.
The Object Management Group has defined an architecture (CORBA) for distributed object applications based on an Object Request Broker and Interface Definition Language. This project builds upon this architecture to establish a framework for the creation of mixed language scientific applications. A prototype compiler has been written that generates FORTRAN 90 or Eiffel stubs and skeletons and the required C++ glue code from an input IDL file that specifies object interfaces. This generated code can be used directly for non-distributed mixed language applications or in conjunction with the C++ code generated from a commercial IDL compiler for distributed applications. A feasibility study is presently underway to see whether a fully integrated software development environment for distributed, mixed-language applications can be created by modifying the back-end code generator of a commercial CASE tool to emit IDL.
QX MAN: Q and X file manipulation
NASA Technical Reports Server (NTRS)
Krein, Mark A.
1992-01-01
QX MAN is a grid and solution file manipulation program written primarily for the PARC code and the GRIDGEN family of grid generation codes. QX MAN combines many of the features frequently encountered in grid generation, grid refinement, the setting-up of initial conditions, and post processing. QX MAN allows the user to manipulate single block and multi-block grids (and their accompanying solution files) by splitting, concatenating, rotating, translating, re-scaling, and stripping or adding points. In addition, QX MAN can be used to generate an initial solution file for the PARC code. The code was written to provide several formats for input and output in order for it to be useful in a broad spectrum of applications.
Automatically generated code for relativistic inhomogeneous cosmologies
NASA Astrophysics Data System (ADS)
Bentivegna, Eloisa
2017-02-01
The applications of numerical relativity to cosmology are on the rise, contributing insight into such cosmological problems as structure formation, primordial phase transitions, gravitational-wave generation, and inflation. In this paper, I present the infrastructure for the computation of inhomogeneous dust cosmologies which was used recently to measure the effect of nonlinear inhomogeneity on the cosmic expansion rate. I illustrate the code's architecture, provide evidence for its correctness in a number of familiar cosmological settings, and evaluate its parallel performance for grids of up to several billion points. The code, which is available as free software, is based on the Einstein Toolkit infrastructure, and in particular leverages the automated code generation capabilities provided by its component Kranc.
NASA Astrophysics Data System (ADS)
Zou, Ding; Djordjevic, Ivan B.
2016-02-01
Forward error correction (FEC) is one of the key technologies enabling the next-generation high-speed fiber optical communications. In this paper, we propose a rate-adaptive scheme using a class of generalized low-density parity-check (GLDPC) codes with a Hamming code as the local code. We show that, with the proposed unified GLDPC decoder architecture, variable net coding gains (NCGs) can be achieved with no error floor at BERs down to 10^-15, making it a viable solution for the next-generation high-speed fiber optical communications.
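For reference, the local code named in the scheme: a Hamming(7,4) block encoder via one standard systematic generator matrix (the paper's exact Hamming code parameters and construction may differ).

    import numpy as np

    # Hamming(7,4) encoding with a standard systematic generator matrix;
    # GLDPC constructions use such a code as a local code, though the exact
    # parameters in the paper may differ.
    G = np.array([[1,0,0,0, 1,1,0],
                  [0,1,0,0, 1,0,1],
                  [0,0,1,0, 0,1,1],
                  [0,0,0,1, 1,1,1]], dtype=int)

    def hamming_encode(bits4):
        return (np.array(bits4) @ G) % 2

    print(hamming_encode([1, 0, 1, 1]))   # -> [1 0 1 1 0 1 0]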
Albadr, Musatafa Abbas Abbood; Tiun, Sabrina; Al-Dhief, Fahad Taha; Sammour, Mahmoud A M
2018-01-01
Spoken Language Identification (LID) is the process of determining and classifying natural language from given content and a dataset. Typically, data must be processed to extract useful features to perform LID. Feature extraction for LID is, based on the literature, a mature process: standard features have already been developed using Mel-Frequency Cepstral Coefficients (MFCC), Shifted Delta Cepstral (SDC) coefficients, the Gaussian Mixture Model (GMM) and, most recently, the i-vector based framework. However, the process of learning from the extracted features remains to be improved (i.e. optimised) to capture all the knowledge embedded in them. The Extreme Learning Machine (ELM) is an effective learning model used to perform classification and regression analysis and is extremely useful for training a single hidden layer neural network. Nevertheless, the learning process of this model is not entirely effective (i.e. optimised) due to the random selection of weights within the input hidden layer. In this study, the ELM is selected as a learning model for LID based on standard feature extraction. One of the optimisation approaches of the ELM, the Self-Adjusting Extreme Learning Machine (SA-ELM), is selected as the benchmark and improved by altering the selection phase of the optimisation process. The selection process is performed incorporating both the Split-Ratio and K-Tournament methods; the improved SA-ELM is named the Enhanced Self-Adjusting Extreme Learning Machine (ESA-ELM). Results are generated for LID with datasets created from eight different languages. The results showed the clear superiority of the Enhanced Self-Adjusting Extreme Learning Machine LID (ESA-ELM LID) over the SA-ELM LID, with ESA-ELM LID achieving an accuracy of 96.25%, compared with only 95.00% for SA-ELM LID.
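The altered selection phase is the core of ESA-ELM; a generic K-tournament selection step of the kind described might look like the following, with illustrative fitness values rather than the LID task itself.

    import random

    # Generic K-tournament selection of the kind ESA-ELM adds to SA-ELM's
    # optimisation loop: each pick runs a size-k tournament and keeps the
    # fittest entrant.  Fitness values here are illustrative only.
    def k_tournament(population, fitness, k=3, n_select=2):
        selected = []
        for _ in range(n_select):
            entrants = random.sample(range(len(population)), k)
            winner = max(entrants, key=lambda i: fitness[i])
            selected.append(population[winner])
        return selected

    pop = ["w1", "w2", "w3", "w4", "w5"]          # candidate weight sets
    fit = [0.91, 0.95, 0.88, 0.96, 0.90]          # e.g. validation accuracy
    print(k_tournament(pop, fit))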
Tiun, Sabrina; AL-Dhief, Fahad Taha; Sammour, Mahmoud A. M.
2018-01-01
Spoken Language Identification (LID) is the process of determining and classifying natural language from given content and a dataset. Typically, data must be processed to extract useful features to perform LID. Feature extraction for LID is, based on the literature, a mature process: standard features have already been developed using Mel-Frequency Cepstral Coefficients (MFCC), Shifted Delta Cepstral (SDC) coefficients, the Gaussian Mixture Model (GMM) and, most recently, the i-vector based framework. However, the process of learning from the extracted features remains to be improved (i.e. optimised) to capture all the knowledge embedded in them. The Extreme Learning Machine (ELM) is an effective learning model used to perform classification and regression analysis and is extremely useful for training a single hidden layer neural network. Nevertheless, the learning process of this model is not entirely effective (i.e. optimised) due to the random selection of weights within the input hidden layer. In this study, the ELM is selected as a learning model for LID based on standard feature extraction. One of the optimisation approaches of the ELM, the Self-Adjusting Extreme Learning Machine (SA-ELM), is selected as the benchmark and improved by altering the selection phase of the optimisation process. The selection process is performed incorporating both the Split-Ratio and K-Tournament methods; the improved SA-ELM is named the Enhanced Self-Adjusting Extreme Learning Machine (ESA-ELM). Results are generated for LID with datasets created from eight different languages. The results showed the clear superiority of the Enhanced Self-Adjusting Extreme Learning Machine LID (ESA-ELM LID) over the SA-ELM LID, with ESA-ELM LID achieving an accuracy of 96.25%, compared with only 95.00% for SA-ELM LID. PMID:29672546
DOE Office of Scientific and Technical Information (OSTI.GOV)
Begovich, C.L.; Eckerman, K.F.; Schlatter, E.C.
1981-08-01
The DARTAB computer code combines radionuclide environmental exposure data with dosimetric and health effects data to generate tabulations of the predicted impact of radioactive airborne effluents. DARTAB is independent of the environmental transport code used to generate the environmental exposure data and the codes used to produce the dosimetric and health effects data. Therefore human dose and risk calculations need not be added to every environmental transport code. Options are included in DARTAB to permit the user to request tabulations by various topics (e.g., cancer site, exposure pathway, etc.) to facilitate characterization of the human health impacts of the effluents. The DARTAB code was written at ORNL for the US Environmental Protection Agency, Office of Radiation Programs.
Development of an Automatic Differentiation Version of the FPX Rotor Code
NASA Technical Reports Server (NTRS)
Hu, Hong
1996-01-01
The ADIFOR2.0 automatic differentiator is applied to the FPX rotor code along with the grid generator GRGN3. FPX is an eXtended Full-Potential CFD code for rotor calculations. The automatic differentiation version of the code is obtained, which provides both non-geometry and geometry sensitivity derivatives. The sensitivity derivatives obtained via automatic differentiation are presented and compared with divided-difference generated derivatives. The study shows that the automatic differentiation method gives accurate derivative values in an efficient manner.
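Forward-mode differentiation, as ADIFOR applies it, propagates a derivative alongside every value through the chain rule; a dual-number sketch in Python conveys the idea (illustration only; ADIFOR itself transforms FORTRAN source).

    # Dual-number sketch of forward-mode automatic differentiation.
    class Dual:
        def __init__(self, val, dot=0.0):
            self.val, self.dot = val, dot          # value and derivative

        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.dot + o.dot)
        __radd__ = __add__

        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val,
                        self.dot * o.val + self.val * o.dot)  # product rule
        __rmul__ = __mul__

    def f(x):
        return 3 * x * x + 2 * x + 1               # f'(x) = 6x + 2

    x = Dual(2.0, 1.0)                             # seed dx/dx = 1
    y = f(x)
    print(y.val, y.dot)                            # 17.0 14.0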
Van Geit, Werner; Gevaert, Michael; Chindemi, Giuseppe; Rössert, Christian; Courcol, Jean-Denis; Muller, Eilif B.; Schürmann, Felix; Segev, Idan; Markram, Henry
2016-01-01
At many scales in neuroscience, appropriate mathematical models take the form of complex dynamical systems. Parameterizing such models to conform to the multitude of available experimental constraints is a global non-linear optimisation problem with a complex fitness landscape, requiring numerical techniques to find suitable approximate solutions. Stochastic optimisation approaches, such as evolutionary algorithms, have been shown to be effective, but often the setting up of such optimisations and the choice of a specific search algorithm and its parameters is non-trivial, requiring domain-specific expertise. Here we describe BluePyOpt, a Python package targeted at the broad neuroscience community to simplify this task. BluePyOpt is an extensible framework for data-driven model parameter optimisation that wraps and standardizes several existing open-source tools. It simplifies the task of creating and sharing these optimisations, and the associated techniques and knowledge. This is achieved by abstracting the optimisation and evaluation tasks into various reusable and flexible discrete elements according to established best-practices. Further, BluePyOpt provides methods for setting up both small- and large-scale optimisations on a variety of platforms, ranging from laptops to Linux clusters and cloud-based compute infrastructures. The versatility of the BluePyOpt framework is demonstrated by working through three representative neuroscience specific use cases. PMID:27375471
Demonstration of Automatically-Generated Adjoint Code for Use in Aerodynamic Shape Optimization
NASA Technical Reports Server (NTRS)
Green, Lawrence; Carle, Alan; Fagan, Mike
1999-01-01
Gradient-based optimization requires accurate derivatives of the objective function and constraints. These gradients may have previously been obtained by manual differentiation of analysis codes, symbolic manipulators, finite-difference approximations, or existing automatic differentiation (AD) tools such as ADIFOR (Automatic Differentiation in FORTRAN). Each of these methods has certain deficiencies, particularly when applied to complex, coupled analyses with many design variables. Recently, a new AD tool called ADJIFOR (Automatic Adjoint Generation in FORTRAN), based upon ADIFOR, was developed and demonstrated. Whereas ADIFOR implements forward-mode (direct) differentiation throughout an analysis program to obtain exact derivatives via the chain rule of calculus, ADJIFOR implements the reverse-mode counterpart of the chain rule to obtain exact adjoint form derivatives from FORTRAN code. Automatically-generated adjoint versions of the widely-used CFL3D computational fluid dynamics (CFD) code and an algebraic wing grid generation code were obtained with just a few hours processing time using the ADJIFOR tool. The codes were verified for accuracy and were shown to compute the exact gradient of the wing lift-to-drag ratio, with respect to any number of shape parameters, in about the time required for 7 to 20 function evaluations. The codes have now been executed on various computers with typical memory and disk space for problems with up to 129 x 65 x 33 grid points, and for hundreds to thousands of independent variables. These adjoint codes are now used in a gradient-based aerodynamic shape optimization problem for a swept, tapered wing. For each design iteration, the optimization package constructs an approximate, linear optimization problem, based upon the current objective function, constraints, and gradient values. The optimizer subroutines are called within a design loop employing the approximate linear problem until an optimum shape is found, the design loop limit is reached, or no further design improvement is possible due to active design variable bounds and/or constraints. The resulting shape parameters are then used by the grid generation code to define a new wing surface and computational grid. The lift-to-drag ratio and its gradient are computed for the new design by the automatically-generated adjoint codes. Several optimization iterations may be required to find an optimum wing shape. Results from two sample cases will be discussed. The reader should note that this work primarily represents a demonstration of use of automatically-generated adjoint code within an aerodynamic shape optimization. As such, little significance is placed upon the actual optimization results, relative to the method for obtaining the results.
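By contrast with the forward mode sketched earlier, the reverse mode that ADJIFOR generates records the computation and sweeps it backwards, yielding the full gradient at a cost comparable to a few function evaluations; a minimal tape sketch follows (ours, not ADJIFOR output).

    # Minimal tape-based reverse-mode AD sketch, the mode ADJIFOR generates
    # for FORTRAN code (this Python illustration is ours, not ADJIFOR output).
    class Var:
        tape = []                                  # records creation order

        def __init__(self, val, parents=()):
            self.val, self.parents, self.grad = val, parents, 0.0
            Var.tape.append(self)

        def __add__(self, other):
            return Var(self.val + other.val, [(self, 1.0), (other, 1.0)])

        def __mul__(self, other):
            return Var(self.val * other.val,
                       [(self, other.val), (other, self.val)])

    def backward(out):
        out.grad = 1.0
        for node in reversed(Var.tape):            # reverse topological sweep
            for parent, local in node.parents:
                parent.grad += local * node.grad

    x, y = Var(3.0), Var(4.0)
    z = x * y + x                                  # dz/dx = y + 1, dz/dy = x
    backward(z)
    print(x.grad, y.grad)                          # 5.0 3.0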
Gschwind, Michael K
2013-07-23
Mechanisms for aggressively optimizing computer code are provided. With these mechanisms, a compiler determines an optimization to apply to a portion of source code and determines if the optimization as applied to the portion of source code will result in unsafe optimized code that introduces a new source of exceptions being generated by the optimized code. In response to a determination that the optimization is an unsafe optimization, the compiler generates an aggressively compiled code version, in which the unsafe optimization is applied, and a conservatively compiled code version in which the unsafe optimization is not applied. The compiler stores both versions and provides them for execution. Mechanisms are provided for switching between these versions during execution in the event of a failure of the aggressively compiled code version. Moreover, predictive mechanisms are provided for predicting whether such a failure is likely.
NASA Astrophysics Data System (ADS)
Bozic, O.; Longo, J. M.; Giese, P.; Behren, J.
2005-02-01
The electromagnetic railgun technology appears to be an interesting alternative to launch small payloads into Low Earth Orbit (LEO), as this may introduce lower launch costs. A high-end solution, based upon present state of the art technology, has been investigated to derive the technical boundary conditions for the application of such a new system. This paper presents the main concept and the design aspects of such propelled projectile with special emphasis on flight mechanics, aero-/thermodynamics, materials and propulsion characteristics. Launch angles and trajectory optimisation analyses are carried out by means of 3 degree of freedom simulations (3DOF). The aerodynamic form of the projectile is optimised to provoke minimum drag and low heat loads. The surface temperature distribution for critical zones is calculated with DLR developed Navier-Stokes codes TAU, HOTSOSE, whereas the engineering tool HF3T is used for time dependent calculations of heat loads and temperatures on project surface and inner structures. Furthermore, competing propulsions systems are considered for the rocket engines of both stages. The structural mass is analysed mostly on the basis of carbon fibre reinforced materials as well as classical aerospace metallic materials. Finally, this paper gives a critical overview of the technical feasibility and cost of small rockets for such missions. Key words: micro-satellite, two-stage-rocket, railgun, rocket-engines, aero/thermodynamic, mass optimization
Genetic code, hamming distance and stochastic matrices.
He, Matthew X; Petoukhov, Sergei V; Ricci, Paolo E
2004-09-01
In this paper we use the Gray code representation of the genetic code C=00, U=10, G=11 and A=01 (C pairs with G, A pairs with U) to generate a sequence of genetic code-based matrices. In connection with these code-based matrices, we use the Hamming distance to generate a sequence of numerical matrices. We then further investigate the properties of the numerical matrices and show that they are doubly stochastic and symmetric. We determine the frequency distributions of the Hamming distances, building blocks of the matrices, decomposition and iterations of matrices. We present an explicit decomposition formula for the genetic code-based matrix in terms of permutation matrices, which provides a hypercube representation of the genetic code. It is also observed that there is a Hamiltonian cycle in a genetic code-based hypercube.
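The construction can be reproduced in a few lines (a sketch following the paper's Gray-code assignment; the matrix shown is the 1-plet case, and larger n-plet matrices follow the same pattern):

```python
# Gray-code alphabet C=00, U=10, G=11, A=01 and the Hamming-distance matrices.
from itertools import product

code = {'C': '00', 'U': '10', 'G': '11', 'A': '01'}
bases = ['C', 'U', 'G', 'A']

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def hamming_matrix(n):
    """Matrix indexed by n-letter words, entry = Hamming distance of bit codes."""
    words = [''.join(w) for w in product(bases, repeat=n)]
    bits = {w: ''.join(code[c] for c in w) for w in words}
    return [[hamming(bits[r], bits[c]) for c in words] for r in words]

M = hamming_matrix(1)
row_sums = {sum(row) for row in M}
col_sums = {sum(col) for col in zip(*M)}
print(M)                       # symmetric 4x4 matrix of pairwise distances
print(row_sums == col_sums)    # constant row/column sums: dividing by the
                               # common sum gives a doubly stochastic matrix
```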
Ribera, Esteban; Martínez-Sesmero, José Manuel; Sánchez-Rubio, Javier; Rubio, Rafael; Pasquau, Juan; Poveda, José Luis; Pérez-Mitru, Alejandro; Roldán, Celia; Hernández-Novoa, Beatriz
2018-03-01
The objective of this study is to estimate the economic impact associated with the optimisation of triple antiretroviral treatment (ART) in patients with undetectable viral load according to the recommendations from the GeSIDA/PNS (2015) Consensus and their applicability in the Spanish clinical practice. A pharmacoeconomic model was developed based on data from a National Hospital Prescription Survey on ART (2014) and the A-I evidence recommendations for the optimisation of ART from the GeSIDA/PNS (2015) consensus. The optimisation model took into account the willingness to optimise a particular regimen and other assumptions, and the results were validated by an expert panel in HIV infection (Infectious Disease Specialists and Hospital Pharmacists). The analysis was conducted from the NHS perspective, considering the annual wholesale price and accounting for deductions stated in the RD-Law 8/2010 and the VAT. The expert panel selected six optimisation strategies, and estimated that 10,863 (13.4%) of the 80,859 patients in Spain currently on triple ART would be candidates to optimise their ART, leading to savings of €15.9M/year (2.4% of total triple ART drug cost). The most feasible strategies (>40% of the patients who are candidates for optimisation, n=4,556) would be optimisations to ATV/r+3TC therapy. These would produce savings between €653 and €4,797 per patient per year depending on baseline triple ART. Implementation of the main optimisation strategies recommended in the GeSIDA/PNS (2015) Consensus into Spanish clinical practice would lead to considerable savings, especially those based on dual therapy with ATV/r+3TC, thus contributing to the control of pharmaceutical expenditure and NHS sustainability. Copyright © 2016 Elsevier España, S.L.U. and Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica. All rights reserved.
Refinetti, Paulo; Morgenthaler, Stephan; Ekstrøm, Per O
2016-07-01
Cycling temperature capillary electrophoresis has been optimised for mutation detection in 76% of the mitochondrial genome. The method was tested on a mixed sample and compared to mutation detection by next generation sequencing. Out of 152 fragments, 90 were concordant, 51 were discordant and 11 were semi-concordant. Dilution experiments show that cycling capillary electrophoresis has a detection limit of 1-3%. The detection limit of routine next generation sequencing was in the range of 15 to 30%. Cycling temperature capillary electrophoresis detects and accurately quantifies mutations at a fraction of the cost and time required to perform a next generation sequencing analysis. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Performance and Architecture Lab Modeling Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-06-19
Analytical application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult. Furthermore, models are frequently expressed in forms that are hard to distribute and validate. The Performance and Architecture Lab Modeling tool, or Palm, is a modeling tool designed to make application modeling easier. Palm provides a source code modeling annotation language. Not only does the modeling language divide the modeling task into subproblems, it formally links an application's source code with its model. This link is important because a model's purpose is to capture application behavior. Furthermore, this link makes it possible to define rules for generating models according to source code organization. Palm generates hierarchical models according to well-defined rules. Given an application, a set of annotations, and a representative execution environment, Palm will generate the same model. A generated model is an executable program whose constituent parts directly correspond to the modeled application. Palm generates models by combining top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. A model's hierarchy is defined by static and dynamic source code structure. Because Palm coordinates models and source code, Palm's models are 'first-class' and reproducible. Palm automates common modeling tasks. For instance, Palm incorporates measurements to focus attention, represent constant behavior, and validate models. Palm's workflow is as follows. The workflow's input is source code annotated with Palm modeling annotations. The most important annotation models an instance of a block of code. Given annotated source code, the Palm Compiler produces executables and the Palm Monitor collects a representative performance profile. The Palm Generator synthesizes a model based on the static and dynamic mapping of annotations to program behavior. The model -- an executable program -- is a hierarchical composition of annotation functions, synthesized functions, statistics for runtime values, and performance measurements.
Production Level CFD Code Acceleration for Hybrid Many-Core Architectures
NASA Technical Reports Server (NTRS)
Duffy, Austen C.; Hammond, Dana P.; Nielsen, Eric J.
2012-01-01
In this work, a novel graphics processing unit (GPU) distributed sharing model for hybrid many-core architectures is introduced and employed in the acceleration of a production-level computational fluid dynamics (CFD) code. The latest generation graphics hardware allows multiple processor cores to simultaneously share a single GPU through concurrent kernel execution. This feature has allowed the NASA FUN3D code to be accelerated in parallel with up to four processor cores sharing a single GPU. For codes to scale and fully use resources on these and the next generation machines, codes will need to employ some type of GPU sharing model, as presented in this work. Findings include the effects of GPU sharing on overall performance. A discussion of the inherent challenges that parallel unstructured CFD codes face in accelerator-based computing environments is included, with considerations for future generation architectures. This work was completed by the author in August 2010, and reflects the analysis and results of the time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arndt, S.A.
1997-07-01
The real-time reactor simulation field is currently at a crossroads in terms of the capability to perform real-time analysis using the most sophisticated computer codes. Current generation safety analysis codes are being modified to replace simplified codes that were specifically designed to meet the competing requirement for real-time applications. The next generation of thermo-hydraulic codes will need to have included in their specifications the specific requirement for use in a real-time environment. Use of the codes in real-time applications imposes much stricter requirements on robustness, reliability and repeatability than do design and analysis applications. In addition, the need for code use by a variety of users is a critical issue for real-time users, trainers and emergency planners who currently use real-time simulation, and PRA practitioners who will increasingly use real-time simulation for evaluating PRA success criteria in near real-time to validate PRA results for specific configurations and plant system unavailabilities.
Validation of the WIMSD4M cross-section generation code with benchmark results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deen, J.R.; Woodruff, W.L.; Leal, L.E.
1995-01-01
The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment Research and Test Reactor (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the WIMSD4M cross-section libraries for reactor modeling of fresh water moderated cores. The results of calculations performed with multigroup cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected HEU critical spheres, the TRX LEU critical experiments, and calculations of a modified Los Alamos HEU D2O-moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.
Optimisation of nano-silica modified self-compacting high-Volume fly ash mortar
NASA Astrophysics Data System (ADS)
Achara, Bitrus Emmanuel; Mohammed, Bashar S.; Fadhil Nuruddin, Muhd
2017-05-01
The effects of nano-silica amount and superplasticizer (SP) dosage on the compressive strength, porosity and slump flow of high-volume fly ash self-consolidating mortar were evaluated. A multiobjective optimisation technique using Design-Expert software was applied to obtain a solution based on a desirability function that simultaneously optimises the variables and the responses. A desirability value of 0.811 gave the optimised solution. The experimental and predicted results showed minimal errors in all the measured responses.
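The desirability-function idea can be sketched compactly (the response surfaces and limits below are invented placeholders, not the paper's fitted Design-Expert models):

```python
# Illustrative desirability-function optimisation over two coded factors.
import random

def d_larger(y, lo, hi):     # larger-is-better (e.g. compressive strength)
    return min(max((y - lo) / (hi - lo), 0.0), 1.0)

def d_smaller(y, lo, hi):    # smaller-is-better (e.g. porosity)
    return min(max((hi - y) / (hi - lo), 0.0), 1.0)

def overall_desirability(nano_silica, sp):
    # hypothetical quadratic response surfaces, factors coded to [0, 1]
    strength = 40 + 25 * nano_silica - 15 * nano_silica**2 + 5 * sp
    porosity = 18 - 8 * nano_silica + 6 * nano_silica**2 - 2 * sp
    d1 = d_larger(strength, 35.0, 60.0)
    d2 = d_smaller(porosity, 10.0, 20.0)
    return (d1 * d2) ** 0.5      # geometric mean of individual desirabilities

best = max(((random.random(), random.random()) for _ in range(10000)),
           key=lambda x: overall_desirability(*x))
print(best, overall_desirability(*best))   # factor settings and D value
```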
Jordan, Wolfgang; Adler, Lothar; Bleich, Stefan; von Einsiedel, Regina; Falkai, Peter; Grosskopf, Volker; Hauth, Iris; Steiner, Johann; Cohrs, Stefan
2011-11-01
The increasing need for treatment of psychiatric disorders, increased workload, changes in working-hour regulations, the nation-wide shortage of physicians, the efficiency principle and economisation can necessitate a reorganisation of medical services. The essential steps and instruments of process optimisation in medical services for a psychiatric clinic are elucidated and discussed in the context of demographic changes, generational change, and a new concept of values. © Georg Thieme Verlag KG Stuttgart · New York.
Multiobjective optimisation of bogie suspension to boost speed on curves
NASA Astrophysics Data System (ADS)
Milad Mousavi-Bideleh, Seyed; Berbyuk, Viktor
2016-01-01
To improve safety and maximum admissible speed in different operational scenarios, multiobjective optimisation of the bogie suspension components of a one-car railway vehicle model is considered. The vehicle model has 50 degrees of freedom and is developed in the multibody dynamics software SIMPACK. Track shift force, running stability, and risk of derailment are selected as safety objective functions. The improved maximum admissible speeds of the vehicle on curves are determined based on track plane accelerations up to 1.5 m/s2. To reduce the number of design parameters for optimisation and improve the computational efficiency, a global sensitivity analysis is accomplished using the multiplicative dimensional reduction method (M-DRM). A multistep optimisation routine based on a genetic algorithm (GA) and MATLAB/SIMPACK co-simulation is executed at three levels. The bogie conventional secondary and primary suspension components are chosen as the design parameters in the first two steps, respectively. The last step focuses on semi-active suspension: the input electrical current to magnetorheological yaw dampers is optimised to guarantee an appropriate safety level. Semi-active controllers are also applied and the respective effects on bogie dynamics are explored. The safety Pareto optimised results are compared with those associated with in-service values. The global sensitivity analysis and multistep approach significantly reduced the number of design parameters and improved the computational efficiency of the optimisation. Furthermore, using the optimised values of the design parameters makes it possible to run the vehicle up to 13% faster on curves while a satisfactory safety level is guaranteed. The results obtained can be used in Pareto optimisation and active bogie suspension design problems.
Advanced treatment planning using direct 4D optimisation for pencil-beam scanned particle therapy
NASA Astrophysics Data System (ADS)
Bernatowicz, Kinga; Zhang, Ye; Perrin, Rosalind; Weber, Damien C.; Lomax, Antony J.
2017-08-01
We report on the development of a new four-dimensional (4D) optimisation approach for scanned proton beams, which incorporates both irregular motion patterns and the delivery dynamics of the treatment machine into the plan optimiser. Furthermore, we assess the effectiveness of this technique to reduce dose to critical structures in proximity to moving targets, while maintaining effective target dose homogeneity and coverage. The proposed approach has been tested using both a simulated phantom and a clinical liver cancer case, and allows for realistic 4D calculations and optimisation using irregular breathing patterns extracted from e.g. 4DCT-MRI (4D computed tomography-magnetic resonance imaging). 4D dose distributions resulting from our 4D optimisation can achieve almost the same quality as static plans, independent of the studied geometry/anatomy or selected motion (regular and irregular). Additionally, the current implementation of the 4D optimisation approach requires less than 3 min to find the solution for a single field planned on the 4DCT of a liver cancer patient. Although 4D optimisation allows for realistic calculations using irregular breathing patterns, it is very sensitive to variations from the planned motion. Based on a sensitivity analysis, target dose homogeneity comparable to static plans (D5-D95 <5%) has been found only for differences in amplitude of up to 1 mm, for changes in respiratory phase <200 ms and for changes in the breathing period of <20 ms in comparison to the motions used during optimisation. As such, methods to robustly deliver 4D optimised plans employing 4D intensity-modulated delivery are discussed.
Nodal network generator for CAVE3
NASA Technical Reports Server (NTRS)
Palmieri, J. V.; Rathjen, K. A.
1982-01-01
A new extension of the CAVE3 code was developed that automates the creation of a finite difference math model in digital form ready for input to the CAVE3 code. The new software, the Nodal Network Generator, is broken into two segments. One segment generates the model geometry using a Tektronix Tablet Digitizer and the other generates the actual finite difference model and allows for graphic verification using a Tektronix 4014 Graphic Scope. Use of the Nodal Network Generator is described.
XSECT: A computer code for generating fuselage cross sections - user's manual
NASA Technical Reports Server (NTRS)
Ames, K. R.
1982-01-01
A computer code, XSECT, has been developed to generate fuselage cross sections from a given area distribution and wing definition. The cross sections are generated to match the wing definition while conforming to the area requirement. An iterative procedure is used to generate each cross section. Fuselage area balancing may be included in this procedure if desired. The code is intended as an aid for engineers who must first design a wing under certain aerodynamic constraints and then design a fuselage for the wing such that the constraints remain satisfied. This report contains the information necessary for accessing and executing the code, which is written in FORTRAN to execute on the Cyber 170 series computers (NOS operating system) and produces graphical output for a Tektronix 4014 CRT. The LRC graphics software is used in combination with the interface between this software and the PLOT 10 software.
Rekadwad, Bhagwan N; Khobragade, Chandrahasya N
2016-06-01
Microbiologists are routinely engaged in the isolation, identification and comparison of isolated bacteria to assess their novelty. 16S rRNA sequences of Bacillus pumilus were retrieved from the NCBI repository and QR codes were generated for the sequences (FASTA format and full GenBank information). The 16S rRNA sequences were used to generate quick response (QR) codes for Bacillus pumilus isolated from Lonar Crater Lake (19° 58' N; 76° 31' E), India. The Bacillus pumilus 16S rRNA gene sequences were also used to generate CGR, FCGR and PCA representations, which can be used for visual comparison and evaluation, respectively. The hyperlinked QR codes, CGR, FCGR and PCA of all the isolates are made available to users on the portal https://sites.google.com/site/bhagwanrekadwad/. The generated digital data help to evaluate and compare any Bacillus pumilus strain, minimise laboratory effort and avoid misinterpretation of the species.
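The QR-code step itself is straightforward; a minimal sketch, assuming the third-party qrcode package (pip install qrcode[pil]) and using a made-up placeholder sequence rather than an actual B. pumilus record:

```python
# Encode a FASTA record as a QR symbol (placeholder sequence, not real data).
import qrcode

fasta = (">B_pumilus_isolate_X 16S rRNA, partial sequence\n"
         "AGAGTTTGATCCTGGCTCAG...")          # truncated placeholder
img = qrcode.make(fasta)                      # build the QR image
img.save("b_pumilus_16S_qr.png")              # scannable image for sharing
```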
Bahia, Daljit; Cheung, Robert; Buchs, Mirjam; Geisse, Sabine; Hunt, Ian
2005-01-01
This report describes a method to culture insects cells in 24 deep-well blocks for the routine small-scale optimisation of baculovirus-mediated protein expression experiments. Miniaturisation of this process provides the necessary reduction in terms of resource allocation, reagents, and labour to allow extensive and rapid optimisation of expression conditions, with the concomitant reduction in lead-time before commencement of large-scale bioreactor experiments. This therefore greatly simplifies the optimisation process and allows the use of liquid handling robotics in much of the initial optimisation stages of the process, thereby greatly increasing the throughput of the laboratory. We present several examples of the use of deep-well block expression studies in the optimisation of therapeutically relevant protein targets. We also discuss how the enhanced throughput offered by this approach can be adapted to robotic handling systems and the implications this has on the capacity to conduct multi-parallel protein expression studies.
Mutual information-based LPI optimisation for radar network
NASA Astrophysics Data System (ADS)
Shi, Chenguang; Zhou, Jianjiang; Wang, Fei; Chen, Jun
2015-07-01
A radar network can offer significant performance improvement for target detection and information extraction by employing spatial diversity. For a fixed number of radars, the achievable mutual information (MI) for estimating the target parameters may extend beyond a predefined threshold with full power transmission. In this paper, an effective low probability of intercept (LPI) optimisation algorithm is presented to improve the LPI performance of a radar network. Based on the radar network system model, we first provide the Schleher intercept factor for the radar network as an optimisation metric for LPI performance. Then, a novel LPI optimisation algorithm is presented, where, for a predefined MI threshold, the Schleher intercept factor for the radar network is minimised by optimising the transmission power allocation among radars in the network such that enhanced LPI performance for the radar network can be achieved. The genetic algorithm based on nonlinear programming (GA-NP) is employed to solve the resulting nonconvex and nonlinear optimisation problem. Simulations demonstrate that the proposed algorithm is valuable and effective in improving the LPI performance of the radar network.
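The structure of the problem can be sketched with a simplified stand-in (hypothetical per-radar gains, total radiated power as a crude proxy for the Schleher intercept factor, and an off-the-shelf SLSQP solver instead of the paper's GA-NP):

```python
# Minimise total transmit power subject to a network mutual-information floor.
import numpy as np
from scipy.optimize import minimize

a = np.array([0.8, 1.2, 0.5, 1.0])    # per-radar effective SNR gains (made up)
MI_MIN, P_MAX = 2.0, 1.0              # MI threshold [nats], per-radar power cap

def total_power(p):                   # proxy objective for interceptability
    return p.sum()

def mi(p):                            # sum of per-radar channel informations
    return np.log1p(a * p).sum()

res = minimize(total_power, x0=np.full(4, 0.5), method="SLSQP",
               bounds=[(1e-6, P_MAX)] * 4,
               constraints=[{"type": "ineq", "fun": lambda p: mi(p) - MI_MIN}])
print(res.x, mi(res.x))   # more power goes to high-gain radars; MI sits at the floor
```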
A novel global Harmony Search method based on Ant Colony Optimisation algorithm
NASA Astrophysics Data System (ADS)
Fouad, Allouani; Boukhetala, Djamel; Boudjema, Fares; Zenger, Kai; Gao, Xiao-Zhi
2016-03-01
The Global-best Harmony Search (GHS) is a stochastic optimisation algorithm recently developed, which hybridises the Harmony Search (HS) method with the concept of swarm intelligence in the particle swarm optimisation (PSO) to enhance its performance. In this article, a new optimisation algorithm called GHSACO is developed by incorporating the GHS with the Ant Colony Optimisation algorithm (ACO). Our method introduces a novel improvisation process, which is different from that of the GHS in the following aspects. (i) A modified harmony memory (HM) representation and conception. (ii) The use of a global random switching mechanism to monitor the choice between the ACO and GHS. (iii) An additional memory consideration selection rule using the ACO random proportional transition rule with a pheromone trail update mechanism. The proposed GHSACO algorithm has been applied to various benchmark functions and constrained optimisation problems. Simulation results demonstrate that it can find significantly better solutions when compared with the original HS and some of its variants.
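For reference, the plain harmony search that GHS and GHSACO build upon fits in a few lines (parameter values and the sphere objective are illustrative only):

```python
# Bare-bones harmony search: memory consideration, pitch adjustment,
# random selection, then replacement of the worst harmony.
import random

def harmony_search(f, dim=5, lo=-5.0, hi=5.0, hms=10, hmcr=0.9, par=0.3,
                   bw=0.1, iters=5000):
    hm = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if random.random() < hmcr:              # memory consideration
                x = random.choice(hm)[d]
                if random.random() < par:           # pitch adjustment
                    x += random.uniform(-bw, bw)
            else:                                   # random selection
                x = random.uniform(lo, hi)
            new.append(min(max(x, lo), hi))
        worst = max(range(hms), key=lambda i: f(hm[i]))
        if f(new) < f(hm[worst]):                   # replace worst harmony
            hm[worst] = new
    return min(hm, key=f)

print(harmony_search(lambda x: sum(v * v for v in x)))   # ~ all-zero vector
```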
Formal Safety Certification of Aerospace Software
NASA Technical Reports Server (NTRS)
Denney, Ewen; Fischer, Bernd
2005-01-01
In principle, formal methods offer many advantages for aerospace software development: they can help to achieve ultra-high reliability, and they can be used to provide evidence of the reliability claims which can then be subjected to external scrutiny. However, despite years of research and many advances in the underlying formalisms of specification, semantics, and logic, formal methods are not much used in practice. In our opinion this is related to three major shortcomings. First, the application of formal methods is still expensive because they are labor- and knowledge-intensive. Second, they are difficult to scale up to complex systems because they are based on deep mathematical insights about the behavior of the systems (i.e., they rely on the "heroic proof"). Third, the proofs can be difficult to interpret, and typically stand in isolation from the original code. In this paper, we describe a tool for formally demonstrating safety-relevant aspects of aerospace software, which largely circumvents these problems. We focus on safety properties because it has been observed that safety violations such as out-of-bounds memory accesses or use of uninitialized variables constitute the majority of the errors found in the aerospace domain. In our approach, safety means that the program will not violate a set of rules that can range from simple memory access rules to high-level flight rules. These different safety properties are formalized as different safety policies in Hoare logic, which are then used by a verification condition generator along with the code and logical annotations in order to derive formal safety conditions; these are then proven using an automated theorem prover. Our certification system is currently integrated into a model-based code generation toolset that generates the annotations together with the code. However, this automated formal certification technology is not exclusively constrained to our code generator and could, in principle, also be integrated with other code generators such as RealTime Workshop or even applied to legacy code. Our approach circumvents the historical problems with formal methods by increasing the degree of automation on all levels. The restriction to safety policies (as opposed to arbitrary functional behavior) results in simpler proof problems that can generally be solved by fully automatic theorem provers. An automated linking mechanism between the safety conditions and the code provides some of the traceability mandated by process standards such as DO-178B. An automated explanation mechanism uses semantic markup added by the verification condition generator to produce natural-language explanations of the safety conditions and thus supports their interpretation in relation to the code. An automatically generated certification browser lets users inspect the (generated) code along with the safety conditions (including textual explanations), and uses hyperlinks to automate tracing between the two levels. Here, the explanations reflect the logical structure of the safety obligation but the mechanism can in principle be customized using different sets of domain concepts. The interface also provides some limited control over the certification process itself.
Our long-term goal is a seamless integration of certification, code generation, and manual coding that results in a "certified pipeline" in which specifications are automatically transformed into executable code, together with the supporting artifacts necessary for achieving and demonstrating the high level of assurance needed in the aerospace domain.
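As a schematic illustration (not the tool's actual policy syntax), an array-bounds safety policy applied to a generated loop of the form for i = 0..n-1: s := s + a(i), under the precondition n <= len(a), would yield a verification condition of roughly this shape:

```latex
% Illustrative array-bounds safety obligation emitted per array access
\[
\forall i.\; 0 \le i \;\wedge\; i \le n-1 \;\Longrightarrow\; 0 \le i \;\wedge\; i < \mathrm{len}(a)
\]
```

Obligations of this restricted form are exactly the kind that a fully automatic prover can discharge without a "heroic proof".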
NASA Astrophysics Data System (ADS)
Koechl, F.; Loarte, A.; Parail, V.; Belo, P.; Brix, M.; Corrigan, G.; Harting, D.; Koskela, T.; Kukushkin, A. S.; Polevoi, A. R.; Romanelli, M.; Saibene, G.; Sartori, R.; Eich, T.; Contributors, JET
2017-08-01
The dynamics for the transition from L-mode to a stationary high QDT H-mode regime in ITER is expected to be qualitatively different to present experiments. Differences may be caused by a low fuelling efficiency of recycling neutrals, that influence the post transition plasma density evolution on the one hand. On the other hand, the effect of the plasma density evolution itself both on the alpha heating power and the edge power flow required to sustain the H-mode confinement itself needs to be considered. This paper presents results of modelling studies of the transition to stationary high QDT H-mode regime in ITER with the JINTRAC suite of codes, which include optimisation of the plasma density evolution to ensure a robust achievement of high QDT regimes in ITER on the one hand and the avoidance of tungsten accumulation in this transient phase on the other hand. As a first step, the JINTRAC integrated models have been validated in fully predictive simulations (excluding core momentum transport which is prescribed) against core, pedestal and divertor plasma measurements in JET C-wall experiments for the transition from L-mode to stationary H-mode in partially ITER relevant conditions (highest achievable current and power, H98,y ~ 1.0, low collisionality, comparable evolution in Pnet/PL-H, but different ρ*, Ti/Te, Mach number and plasma composition compared to ITER expectations). The selection of transport models (core: NCLASS + Bohm/gyroBohm in L-mode/GLF23 in H-mode) was determined by a trade-off between model complexity and efficiency. Good agreement between code predictions and measured plasma parameters is obtained if anomalous heat and particle transport in the edge transport barrier are assumed to be reduced at different rates with increasing edge power flow normalised to the H-mode threshold; in particular the increase in edge plasma density is dominated by this edge transport reduction as the calculated neutral influx across the separatrix remains unchanged (or even slightly decreases) following the H-mode transition. JINTRAC modelling of H-mode transitions for the ITER 15 MA / 5.3 T high QDT scenarios with the same modelling assumptions as those being derived from JET experiments has been carried out. The modelling finds that it is possible to access high QDT conditions robustly for additional heating power levels of PAUX ⩾ 53 MW by optimising core and edge plasma fuelling in the transition from L-mode to high QDT H-mode. An initial period of low plasma density, in which the plasma accesses the H-mode regime and the alpha heating power increases, needs to be considered after the start of the additional heating, which is then followed by a slow density ramp. Both the duration of the low density phase and the density ramp-rate depend on boundary and operational conditions and can be optimised to minimise the resistive flux consumption in this transition phase. The modelling also shows that fuelling schemes optimised for a robust access to high QDT H-mode in ITER are also optimum for the prevention of the contamination of the core plasma by tungsten during this phase.
A simulation-optimization model for effective water resources management in the coastal zone
NASA Astrophysics Data System (ADS)
Spanoudaki, Katerina; Kampanis, Nikolaos
2015-04-01
Coastal areas are the most densely-populated areas in the world. Consequently water demand is high, posing great pressure on fresh water resources. Climatic change and its direct impacts on meteorological variables (e.g. precipitation) and indirect impact on sea level rise, as well as anthropogenic pressures (e.g. groundwater abstraction), are strong drivers causing groundwater salinisation and subsequently affecting coastal wetlands salinity with adverse effects on the corresponding ecosystems. Coastal zones are a difficult hydrologic environment to represent with a mathematical model due to the large number of contributing hydrologic processes and variable-density flow conditions. Simulation of sea level rise and tidal effects on aquifer salinisation and accurate prediction of interactions between coastal waters, groundwater and neighbouring wetlands requires the use of integrated surface water-groundwater mathematical models. In the past few decades several computer codes have been developed to simulate coupled surface and groundwater flow. However, most integrated surface water-groundwater models are based on the assumption of constant fluid density and therefore their applicability to coastal regions is questionable. Thus, most of the existing codes are not well-suited to represent surface water-groundwater interactions in coastal areas. To this end, the 3D integrated surface water-groundwater model IRENE (Spanoudaki et al., 2009; Spanoudaki, 2010) has been modified in order to simulate surface water-groundwater flow and salinity interactions in the coastal zone. IRENE, in its original form, couples the 3D shallow water equations to the equations describing 3D saturated groundwater flow of constant density. A semi-implicit finite difference scheme is used to solve the surface water flow equations, while a fully implicit finite difference scheme is used for the groundwater equations. Pollution interactions are simulated by coupling the advection-diffusion equation describing the fate and transport of contaminants introduced in a 3D turbulent flow field to the partial differential equation describing the fate and transport of contaminants in 3D transient groundwater flow systems. The model has been further developed to include the effects of density variations on surface water and groundwater flow, while the already built-in solute transport capabilities are used to simulate salinity interactions. The refined model is based on the finite volume method using a cell-centred structured grid, providing thus flexibility and accuracy in simulating irregular boundary geometries. For addressing water resources management problems, simulation models are usually externally coupled with optimisation-based management models. However this usually requires a very large number of iterations between the optimisation and simulation models in order to obtain the optimal management solution. As an alternative approach, for improved computational efficiency, an Artificial Neural Network (ANN) is trained as an approximate simulator of IRENE. The trained ANN is then linked to a Genetic Algorithm (GA) based optimisation model for managing salinisation problems in the coastal zone. The linked simulation-optimisation model is applied to a hypothetical study area for performance evaluation. 
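The externally coupled loop described above can be caricatured in a few lines (a cheap analytic stand-in replaces the IRENE run, a polynomial fit stands in for the ANN, and a toy GA searches the surrogate; everything shown is illustrative):

```python
# Surrogate-assisted optimisation sketch: sample the expensive "simulator"
# sparsely, fit a cheap surrogate, let a GA search the surrogate instead.
import numpy as np

rng = np.random.default_rng(0)

def simulator(q):            # stand-in for the full model: salinity vs pumping q
    return (q - 0.3) ** 2 + 0.05 * np.sin(25 * q)

# 1. sparse sampling of the simulator, surrogate fit (in place of the ANN)
q_train = rng.uniform(0, 1, 30)
coef = np.polyfit(q_train, simulator(q_train), deg=4)
surrogate = lambda q: np.polyval(coef, q)

# 2. GA minimises the surrogate rather than the costly simulation
pop = rng.uniform(0, 1, 40)
for _ in range(100):
    fit = surrogate(pop)
    parents = pop[np.argsort(fit)[:20]]                     # selection
    kids = (parents[rng.integers(0, 20, 40)] +
            parents[rng.integers(0, 20, 40)]) / 2           # crossover
    pop = np.clip(kids + rng.normal(0, 0.02, 40), 0, 1)     # mutation
best = pop[np.argmin(surrogate(pop))]
print(best, simulator(best))    # candidate re-checked against the full model
```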
Acknowledgement: The work presented in this paper has been funded by the Greek State Scholarships Foundation (IKY), Fellowships of Excellence for Postdoctoral Studies (Siemens Program), 'A simulation-optimization model for assessing the best practices for the protection of surface water and groundwater in the coastal zone', (2013-2015). References: Spanoudaki, K., Stamou, A.I. and Nanou-Giannarou, A. (2009). Development and verification of a 3-D integrated surface water-groundwater model. Journal of Hydrology, 375 (3-4), 410-427. Spanoudaki, K. (2010). Integrated numerical modelling of surface water groundwater systems (in Greek). Ph.D. Thesis, National Technical University of Athens, Greece.
Zarb, Francis; McEntee, Mark F; Rainford, Louise
2015-06-01
To evaluate visual grading characteristics (VGC) and ordinal regression analysis during head CT optimisation as a potential alternative to visual grading assessment (VGA), traditionally employed to score anatomical visualisation. Patient images (n = 66) were obtained using current and optimised imaging protocols from two CT suites: a 16-slice scanner at the national Maltese centre for trauma and a 64-slice scanner in a private centre. Local resident radiologists (n = 6) performed VGA followed by VGC and ordinal regression analysis. VGC alone indicated that optimised protocols had similar image quality as current protocols. Ordinal logistic regression analysis provided an in-depth evaluation, criterion by criterion allowing the selective implementation of the protocols. The local radiology review panel supported the implementation of optimised protocols for brain CT examinations (including trauma) in one centre, achieving radiation dose reductions ranging from 24 % to 36 %. In the second centre a 29 % reduction in radiation dose was achieved for follow-up cases. The combined use of VGC and ordinal logistic regression analysis led to clinical decisions being taken on the implementation of the optimised protocols. This improved method of image quality analysis provided the evidence to support imaging protocol optimisation, resulting in significant radiation dose savings. • There is a need for scientifically based image quality evaluation during CT optimisation. • VGC and ordinal regression analysis in combination led to better informed clinical decisions. • VGC and ordinal regression analysis led to dose reductions without compromising diagnostic efficacy.
GridMan: A grid manipulation system
NASA Technical Reports Server (NTRS)
Eiseman, Peter R.; Wang, Zhu
1992-01-01
GridMan is an interactive grid manipulation system. It operates on grids to produce new grids which conform to user demands. The input grids are not constrained to come from any particular source. They may be generated by algebraic methods, elliptic methods, hyperbolic methods, parabolic methods, or some combination of methods. The methods are included in the various available structured grid generation codes. These codes perform the basic assembly function for the various elements of the initial grid. For block structured grids, the assembly can be quite complex due to a large number of block corners, edges, and faces for which various connections and orientations must be properly identified. The grid generation codes are distinguished among themselves by their balance between interactive and automatic actions and by their modest variations in control. The basic form of GridMan provides a much more substantial level of grid control and will take its input from any of the structured grid generation codes. The communication link to the outside codes is a data file which contains the grid or section of grid.
Automatic finite element generators
NASA Technical Reports Server (NTRS)
Wang, P. S.
1984-01-01
The design and implementation of a software system for generating finite elements and related computations are described. Exact symbolic computational techniques are employed to derive strain-displacement matrices and element stiffness matrices. Methods for dealing with the excessive growth of symbolic expressions are discussed. Automatic FORTRAN code generation is described with emphasis on improving the efficiency of the resultant code.
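The flavour of such a system can be conveyed with a small SymPy analogue (a 2-node bar element rather than the paper's elements; all names are illustrative): the strain-displacement matrix and element stiffness are derived symbolically, then emitted as FORTRAN assignments.

```python
# Symbolic derivation of a bar-element stiffness matrix with Fortran emission.
import sympy as sp
from sympy import fcode

E, A, L, x = sp.symbols('E A L x', positive=True)
N = sp.Matrix([[1 - x / L, x / L]])              # linear shape functions
B = N.diff(x)                                    # strain-displacement row vector
k = (E * A * B.T * B).integrate((x, 0, L))       # element stiffness, 2x2
k = sp.simplify(k)                               # -> (E*A/L) * [[1, -1], [-1, 1]]

for i in range(2):                               # emit Fortran assignments
    for j in range(2):
        print(fcode(k[i, j], assign_to=f"k({i+1},{j+1})", standard=95))
```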
Users manual for coordinate generation code CRDSRA
NASA Technical Reports Server (NTRS)
Shamroth, S. J.
1985-01-01
Generation of a viable coordinate system represents an important component of an isolated airfoil Navier-Stokes calculation. The manual describes a computer code for generation of such a coordinate system. The coordinate system is a general nonorthogonal one in which high resolution normal to the airfoil is obtained in the vicinity of the airfoil surface, and high resolution along the airfoil surface is obtained in the vicinity of the airfoil leading edge. The method of generation is a constructive technique which leads to a C type coordinate grid. The method of construction as well as input and output definitions are contained herein. The computer code itself as well as a sample output is being submitted to COSMIC.
A narrowband CDMA communications payload for little LEOS applications
NASA Astrophysics Data System (ADS)
Michalik, H.; Hävecker, W.; Ginati, A.
1996-09-01
In recent years Code Division Multiple Access (CDMA) techniques have been investigated for application in Local Area Networks [J. A. Salehi, IEEE Trans. Commun. 37 (1989)] as well as in Mobile Communications [R. Kohno et al., IEEE Commun. Mag. Jan (1995)]. The main attraction of these techniques is the potentially higher throughput and capacity of such systems under certain conditions compared to conventional multi-access schemes like frequency and time division multiplexing. Mobile communication over a satellite link represents in some respects the "worst case" for operating a CDMA system. Considering e.g. the uplink case from mobile to satellite, the imperfections due to different and time-varying channel conditions will add to the well-known effects of Multiple Access Interference (MAI) between the simultaneously active users at the satellite receiver. In addition, bandwidth constraints due to the non-availability of large-bandwidth channels in the interesting frequency bands exist for small systems. As a result, for a given service in terms of user data rates, the practical code sequence lengths are limited, as is the available number of codes within a code set. In this paper a communications payload for Small Satellite Applications with CDMA uplink and C/TDMA downlink under the constraint of bandwidth limitations is proposed. To optimise the performance under the above-addressed imperfections the system provides the ability for power control and synchronisation for the CDMA uplink. The major objectives of this project are the studying, development and testing of such a system for educational purposes and technology development at Hochschule Bremen.
The Use of a Code-generating System for the Derivation of the Equations for Wind Turbine Dynamics
NASA Astrophysics Data System (ADS)
Ganander, Hans
2003-10-01
For many reasons the size of wind turbines on the rapidly growing wind energy market is increasing. Relations between the aeroelastic properties of these new large turbines change. Modifications of turbine designs and control concepts are also influenced by growing size. All these trends require the development of computer codes for design and certification. Moreover, there is a strong desire for design optimization procedures, which require fast codes. General codes, e.g. finite element codes, normally allow such modifications and improvements of existing wind turbine models. This is done relatively easily. However, the calculation times of such codes are unfavourably long, certainly for optimization use. The use of an automatic code-generating system is an alternative that addresses the two key issues, the code and the design optimization. This technique can be used for rapid generation of codes for particular wind turbine simulation models. These ideas have been followed in the development of new versions of the wind turbine simulation code VIDYN. The equations of the simulation model were derived according to the Lagrange equation and using Mathematica®, which was directed to output the results in Fortran code format. In this way the simulation code is automatically adapted to an actual turbine model, in terms of subroutines containing the equations of motion, definitions of parameters and degrees of freedom. Since the start in 1997, these methods, constituting a systematic way of working, have been used to develop specific efficient calculation codes. The experience with this technique has been very encouraging, inspiring the continued development of new versions of the simulation code as the need has arisen, and the interest in design optimization is growing.
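The Mathematica-to-Fortran workflow can be mimicked with SymPy (a single torsional degree of freedom as a toy stand-in for a drivetrain mode; symbols are illustrative, not VIDYN's):

```python
# Derive one equation of motion via the Lagrange equation, emit Fortran.
import sympy as sp
from sympy import fcode

t = sp.symbols('t')
J, k, Q = sp.symbols('J k Q')                  # inertia, stiffness, applied torque
phi = sp.Function('phi')(t)

Lag = J * phi.diff(t) ** 2 / 2 - k * phi ** 2 / 2            # T - V
eom = Lag.diff(phi.diff(t)).diff(t) - Lag.diff(phi) - Q      # Lagrange eq. = 0
phidd = sp.solve(sp.Eq(eom, 0), phi.diff(t, 2))[0]           # solve for phi''

phi_s = sp.Symbol('phi')                        # plain symbol for code output
print(fcode(phidd.subs(phi, phi_s), assign_to='phidd', standard=95))
# -> phidd = (Q - k*phi)/J, ready to drop into a simulation subroutine
```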
FISPACT-II: An Advanced Simulation System for Activation, Transmutation and Material Modelling
NASA Astrophysics Data System (ADS)
Sublet, J.-Ch.; Eastwood, J. W.; Morgan, J. G.; Gilbert, M. R.; Fleming, M.; Arter, W.
2017-01-01
Fispact-II is a code system and library database for modelling activation-transmutation processes, depletion-burn-up, time dependent inventory and radiation damage source terms caused by nuclear reactions and decays. The Fispact-II code, written in object-style Fortran, follows the evolution of material irradiated by neutrons, alphas, gammas, protons, or deuterons, and provides a wide range of derived radiological output quantities to satisfy most needs for nuclear applications. It can be used with any ENDF-compliant group library data for nuclear reactions, particle-induced and spontaneous fission yields, and radioactive decay (including but not limited to TENDL-2015, ENDF/B-VII.1, JEFF-3.2, JENDL-4.0u, CENDL-3.1 processed into fine-group-structure files, GEFY-5.2 and UKDD-16), as well as resolved and unresolved resonance range probability tables for self-shielding corrections and updated radiological hazard indices. The code has many novel features including: extension of the energy range up to 1 GeV; additional neutron physics including self-shielding effects, temperature dependence, thin and thick target yields; pathway analysis; and sensitivity and uncertainty quantification and propagation using full covariance data. The latest ENDF libraries such as TENDL encompass thousands of target isotopes. Nuclear data libraries for Fispact-II are prepared from these using processing codes PREPRO, NJOY and CALENDF. These data include resonance parameters, cross sections with covariances, probability tables in the resonance ranges, PKA spectra, kerma, dpa, gas and radionuclide production and energy-dependent fission yields, supplemented with all 27 decay types. All such data for the five most important incident particles are provided in evaluated data tables. The Fispact-II simulation software is described in detail in this paper, together with the nuclear data libraries. The Fispact-II system also includes several utility programs for code-use optimisation, visualisation and production of secondary radiological quantities. Included in the paper are summaries of results from the suite of verification and validation reports available with the code.
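The inventory problem at the heart of such codes reduces to stiff linear ODEs; a minimal sketch with invented cross-sections and half-lives (Fispact-II's libraries and solver are not used):

```python
# One production reaction feeding a two-step decay chain, integrated in time.
import numpy as np
from scipy.integrate import solve_ivp

phi_sigma = 1e-9          # production rate per parent atom [1/s] (flux * sigma)
lam1 = np.log(2) / 3600.0        # decay constant, 1 h half-life   [1/s]
lam2 = np.log(2) / 86400.0       # decay constant, 1 day half-life [1/s]

def rhs(t, n):            # n = [stable parent, activated nuclide, daughter]
    return [-phi_sigma * n[0],
            phi_sigma * n[0] - lam1 * n[1],
            lam1 * n[1] - lam2 * n[2]]

sol = solve_ivp(rhs, (0.0, 7 * 86400.0), [1e24, 0.0, 0.0], rtol=1e-8)
print(sol.y[:, -1])       # nuclide inventory after a week of irradiation
```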
On the design and optimisation of new fractal antenna using PSO
NASA Astrophysics Data System (ADS)
Rani, Shweta; Singh, A. P.
2013-10-01
An optimisation technique for a newly shaped fractal structure, using particle swarm optimisation with curve fitting, is presented in this article. The aim of the particle swarm optimisation is to find the geometry of the antenna for the required user-defined frequency. To assess the effectiveness of the presented method, a set of representative numerical simulations has been carried out and the results are compared with measurements from experimental prototypes built according to the design specifications coming from the optimisation procedure. The proposed fractal antenna resonates at the 5.8 GHz industrial, scientific and medical band, which is suitable for wireless telemedicine applications. The antenna characteristics have been studied using extensive numerical simulations and are experimentally verified. The antenna exhibits well-defined radiation patterns over the band.
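A bare-bones PSO of the kind used to drive such geometry searches is sketched below (the objective is a made-up surrogate for the distance of the simulated resonance from 5.8 GHz; the article's EM simulation and curve fitting are not reproduced):

```python
# Minimal particle swarm optimisation over a bounded design space.
import random

def pso(f, dim=2, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, lo=0.0, hi=1.0):
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pb = [x[:] for x in xs]                      # personal best positions
    gb = min(pb, key=f)                          # global best position
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * random.random() * (pb[i][d] - xs[i][d])
                            + c2 * random.random() * (gb[d] - xs[i][d]))
                xs[i][d] = min(max(xs[i][d] + vs[i][d], lo), hi)
            if f(xs[i]) < f(pb[i]):
                pb[i] = xs[i][:]
        gb = min(pb, key=f)
    return gb

# stand-in objective: squared distance of a surrogate resonance from 5.8 GHz
print(pso(lambda x: (5.8 - (4.0 + 3.0 * x[0] - 1.0 * x[1])) ** 2))
```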
NASA Astrophysics Data System (ADS)
Wang, W.; Liu, J.
2016-12-01
Forward modelling is the general way to obtain responses of geoelectrical structures. Field investigators might find it useful for planning surveys and choosing optimal electrode configurations with respect to their targets. During the past few decades much effort has been put into the development of numerical forward codes based on, for example, the integral equation method, the finite difference method and the finite element method. Nowadays, most researchers prefer the finite element method (FEM) for its flexible meshing scheme, which can handle models with complex geometry. Resistivity modelling with commercial software such as ANSYS and COMSOL is convenient, but it is like working with a black box, and modifying existing codes or developing new ones can take a long time. We present a new way to obtain resistivity forward-modelling codes quickly, based on the commercial software FEPG (Finite element Program Generator). With just a few problem-describing scripts, FEPG can generate a FORTRAN program framework that can easily be altered to suit our targets. By supposing the electric potential to be quadratic in each element of a two-layer model, we obtain quite accurate results with errors of less than 1%, while errors of more than 5% can appear with linear FE codes. The anisotropic half-space model is supposed to represent vertically distributed fractures. The apparent resistivities measured along the fractures are larger than the results from the orthogonal direction, which is the opposite of the true resistivities; interpretations could be mistaken if this anisotropic paradox is ignored. The technique we used can produce scientific codes in a short time. The generated FORTRAN codes reach accurate results through the higher-order assumption and can handle anisotropy, enabling better interpretations. The method could easily be extended to other domains where FE codes are needed.
Palkowski, Marek; Bielecki, Wlodzimierz
2017-06-02
RNA secondary structure prediction is a compute intensive task that lies at the core of several search algorithms in bioinformatics. Fortunately, the RNA folding approaches, such as the Nussinov base pair maximization, involve mathematical operations over affine control loops whose iteration space can be represented by the polyhedral model. Polyhedral compilation techniques have proven to be a powerful tool for optimization of dense array codes. However, classical affine loop nest transformations used with these techniques do not optimize effectively codes of dynamic programming of RNA structure predictions. The purpose of this paper is to present a novel approach allowing for generation of a parallel tiled Nussinov RNA loop nest exposing significantly higher performance than that of known related code. This effect is achieved due to improving code locality and calculation parallelization. In order to improve code locality, we apply our previously published technique of automatic loop nest tiling to all the three loops of the Nussinov loop nest. This approach first forms original rectangular 3D tiles and then corrects them to establish their validity by means of applying the transitive closure of a dependence graph. To produce parallel code, we apply the loop skewing technique to a tiled Nussinov loop nest. The technique is implemented as a part of the publicly available polyhedral source-to-source TRACO compiler. Generated code was run on modern Intel multi-core processors and coprocessors. We present the speed-up factor of generated Nussinov RNA parallel code and demonstrate that it is considerably faster than related codes in which only the two outer loops of the Nussinov loop nest are tiled.
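For reference, the untiled Nussinov recurrence that the polyhedral transformations target is the triply nested loop below (a plain Python rendering; the paper's generated code is tiled, skewed C):

```python
# Nussinov base-pair maximisation: the classic O(n^3) dynamic program.
def nussinov(seq, min_loop=1):
    pairs = {('A', 'U'), ('U', 'A'), ('G', 'C'), ('C', 'G'),
             ('G', 'U'), ('U', 'G')}              # Watson-Crick plus GU wobble
    n = len(seq)
    N = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):           # subsequence length
        for i in range(n - span):
            j = i + span
            best = max(N[i + 1][j], N[i][j - 1])  # i or j unpaired
            if (seq[i], seq[j]) in pairs:
                best = max(best, N[i + 1][j - 1] + 1)
            for k in range(i + 1, j):             # bifurcation: the third loop
                best = max(best, N[i][k] + N[k + 1][j])
            N[i][j] = best
    return N[0][n - 1]

print(nussinov("GGGAAAUCC"))   # maximum number of base pairs for the toy sequence
```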
Grid Generation Techniques Utilizing the Volume Grid Manipulator
NASA Technical Reports Server (NTRS)
Alter, Stephen J.
1998-01-01
This paper presents grid generation techniques available in the Volume Grid Manipulation (VGM) code. The VGM code is designed to manipulate existing line, surface and volume grids to improve the quality of the data. It embodies an easy-to-read, rich command language that enables such alterations as topology changes, grid adaption and smoothing. Additionally, the VGM code can be used to construct simplified straight lines, splines, and conic sections which are common curves used in the generation and manipulation of points, lines, surfaces and volumes (i.e., grid data). These simple geometric curves are essential in the construction of domain discretizations for computational fluid dynamic simulations. By comparison to previously established methods of generating these curves interactively, the VGM code provides control of slope continuity and grid point-to-point stretchings as well as quick changes in the controlling parameters. The VGM code offers the capability to couple the generation of these geometries with an extensive manipulation methodology in a scripting language. The scripting language allows parametric studies of a vehicle geometry to be efficiently performed to evaluate favorable trends in the design process. As examples of the powerful capabilities of the VGM code, a wake flow field domain will be appended to an existing X33 Venturestar volume grid; negative volumes resulting from grid expansions to enable flow field capture on a simple geometry will be corrected; and geometrical changes to a vehicle component of the X33 Venturestar will be shown.
Upgrades of Two Computer Codes for Analysis of Turbomachinery
NASA Technical Reports Server (NTRS)
Chima, Rodrick V.; Liou, Meng-Sing
2005-01-01
Major upgrades have been made in two of the programs reported in "Five Computer Codes for Analysis of Turbomachinery". The affected programs are: Swift -- a code for three-dimensional (3D) multiblock analysis; and TCGRID, which generates a 3D grid used with Swift. Originally utilizing only a central-differencing scheme for numerical solution, Swift was augmented by the addition of two upwind schemes that give greater accuracy but take more computing time. Other improvements in Swift include addition of a shear-stress-transport turbulence model for better prediction of adverse pressure gradients, addition of an H-grid capability for flexibility in modeling flows in pumps and ducts, and modification to enable simultaneous modeling of hub and tip clearances. Improvements in TCGRID include modifications to enable generation of grids for more complicated flow paths and addition of an option to generate grids compatible with the ADPAC code used at NASA and in industry. For both codes, new test cases were developed and documentation was updated. Both codes were converted to Fortran 90, with dynamic memory allocation. Both codes were also modified for ease of use in both UNIX and Windows operating systems.
Obstacle evasion in free-space optical communications utilizing Airy beams
NASA Astrophysics Data System (ADS)
Zhu, Guoxuan; Wen, Yuanhui; Wu, Xiong; Chen, Yujie; Liu, Jie; Yu, Siyuan
2018-03-01
A high speed free-space optical communication system capable of self-bending signal transmission around line-of-sight obstacles is proposed and demonstrated. Airy beams are generated and controlled to achieve different propagating trajectories, and the signal transmission characteristics of these beams around the obstacle are investigated. Our results confirm that, by optimising their ballistic trajectories, Airy beams are able to bypass obstacles with more signal energy and thus improve the communication performance compared with normal Gaussian beams.
Instrumental biosensors: new perspectives for the analysis of biomolecular interactions.
Nice, E C; Catimel, B
1999-04-01
The use of instrumental biosensors in basic research to measure biomolecular interactions in real time is increasing exponentially. Applications include protein-protein, protein-peptide, DNA-protein, DNA-DNA, and lipid-protein interactions. Such techniques have been applied to, for example, antibody-antigen, receptor-ligand, signal transduction, and nuclear receptor studies. This review outlines the principles of two of the most commonly used instruments and highlights specific operating parameters that will assist in optimising experimental design, data generation, and analysis.
Bryant, Maria; Burton, Wendy; Cundill, Bonnie; Farrin, Amanda J; Nixon, Jane; Stevens, June; Roberts, Kim; Foy, Robbie; Rutter, Harry; Hartley, Suzanne; Tubeuf, Sandy; Collinson, Michelle; Brown, Julia
2017-01-24
Family-based interventions to prevent childhood obesity depend upon parents' taking action to improve diet and other lifestyle behaviours in their families. Programmes that attract and retain high numbers of parents provide an enhanced opportunity to improve public health and are also likely to be more cost-effective than those that do not. We have developed a theory-informed optimisation intervention to promote parent engagement within an existing childhood obesity prevention group programme, HENRY (Health Exercise Nutrition for the Really Young). Here, we describe a proposal to evaluate the effectiveness of this optimisation intervention in regard to the engagement of parents and cost-effectiveness. The Optimising Family Engagement in HENRY (OFTEN) trial is a cluster randomised controlled trial being conducted across 24 local authorities (approximately 144 children's centres) which currently deliver HENRY programmes. The primary outcome will be parental enrolment and attendance at the HENRY programme, assessed using routinely collected process data. Cost-effectiveness will be presented in terms of primary outcomes using acceptability curves and through eliciting the willingness to pay for the optimisation from HENRY commissioners. Secondary outcomes include the longitudinal impact of the optimisation, parent-reported infant intake of fruits and vegetables (as a proxy to compliance) and other parent-reported family habits and lifestyle. This innovative trial will provide evidence on the implementation of a theory-informed optimisation intervention to promote parent engagement in HENRY, a community-based childhood obesity prevention programme. The findings will be generalisable to other interventions delivered to parents in other community-based environments. This research meets the expressed needs of commissioners, children's centres and parents to optimise the potential impact that HENRY has on obesity prevention. A subsequent cluster randomised controlled pilot trial is planned to determine the practicality of undertaking a definitive trial to robustly evaluate the effectiveness and cost-effectiveness of the optimised intervention on childhood obesity prevention. ClinicalTrials.gov identifier: NCT02675699. Registered on 4 February 2016.
On the symbolic manipulation and code generation for elasto-plastic material matrices
NASA Technical Reports Server (NTRS)
Chang, T. Y.; Saleeb, A. F.; Wang, P. S.; Tan, H. Q.
1991-01-01
A computerized procedure for symbolic manipulations and FORTRAN code generation of an elasto-plastic material matrix for finite element applications is presented. Special emphasis is placed on expression simplifications during intermediate derivations, optimal code generation, and interface with the main program. A systematic procedure is outlined to avoid redundant algebraic manipulations. Symbolic expressions of the derived material stiffness matrix are automatically converted to RATFOR code, which is then translated into FORTRAN statements through a preprocessor. To minimize the interface problem with the main program, a template file is prepared so that the translated FORTRAN statements can be merged into the file to form a subroutine (or a submodule). Three constitutive models, namely von Mises plasticity, the Drucker-Prager model, and a concrete plasticity model, are used as illustrative examples.
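The derive-simplify-emit workflow can be sketched with modern tools. The fragment below uses SymPy's Fortran printer in place of the paper's RATFOR pipeline (a swapped-in tool; the von Mises yield function in principal-stress form is only an illustrative stand-in for the full material matrix derivation):

```python
# Sketch of symbolic derivation followed by Fortran code generation, using
# SymPy instead of the paper's RATFOR pipeline. Symbols are illustrative.
import sympy as sp

s1, s2, s3, k = sp.symbols("s1 s2 s3 k")   # principal stresses, yield stress
f = sp.sqrt(((s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2) / 2) - k

# Plastic flow direction: gradient of the yield function
grad = [sp.simplify(sp.diff(f, s)) for s in (s1, s2, s3)]

# Common-subexpression elimination avoids redundant algebra in the output,
# mirroring the paper's emphasis on simplification before code generation.
replacements, reduced = sp.cse(grad)
for lhs, rhs in replacements:
    print(sp.fcode(rhs, assign_to=str(lhs), source_format="free"))
for i, expr in enumerate(reduced, start=1):
    print(sp.fcode(expr, assign_to=f"dfds({i})", source_format="free"))
```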
Making extreme computations possible with virtual machines
NASA Astrophysics Data System (ADS)
Reuter, J.; Chokoufe Nejad, B.; Ohl, T.
2016-10-01
State-of-the-art algorithms generate scattering amplitudes for high-energy physics at leading order for high-multiplicity processes as compiled code (in Fortran, C or C++). For complicated processes the size of these libraries can become tremendous (many GiB). We show that amplitudes can instead be translated to byte-code instructions, which reduces the size by an order of magnitude. The byte-code is interpreted by a virtual machine with runtimes comparable to compiled code and better scaling with additional legs. We study the properties of this algorithm, as an extension of the Optimizing Matrix Element Generator (O'Mega). The byte-code matrix elements are available as an alternative input for the event generator WHIZARD. The byte-code interpreter can be implemented very compactly, which will help with a future implementation on massively parallel GPUs.
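A minimal sketch of the interpretation idea (the opcodes and encoding below are invented for illustration and bear no relation to O'Mega's actual instruction set): an amplitude-like arithmetic expression is flattened into compact instructions that a small stack machine evaluates, so no large compiled library is needed.

```python
# Tiny stack-based byte-code interpreter; opcodes are hypothetical.
PUSH, ADD, MUL = 0, 1, 2

def run(bytecode, constants):
    """Interpret a stack-based byte-code program."""
    stack = []
    for op, arg in bytecode:
        if op == PUSH:
            stack.append(constants[arg])
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# (c0 + c1) * c2 encoded as five instructions
program = [(PUSH, 0), (PUSH, 1), (ADD, None), (PUSH, 2), (MUL, None)]
print(run(program, constants=[1.5, 2.5, 3.0]))   # -> 12.0
```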
Alejo, L; Corredoira, E; Sánchez-Muñoz, F; Huerga, C; Aza, Z; Plaza-Núñez, R; Serrada, A; Bret-Zurita, M; Parrón, M; Prieto-Areyano, C; Garzón-Moll, G; Madero, R; Guibelalde, E
2018-04-09
Objective: The new 2013/59 EURATOM Directive (ED) demands dosimetric optimisation procedures without undue delay. The aim of this study was to optimise paediatric conventional radiology examinations applying the ED without compromising the clinical diagnosis. Automatic dose management software (ADMS) was used to analyse 2678 studies of children from birth to 5 years of age, obtaining local diagnostic reference levels (DRLs) in terms of entrance surface air kerma. As the local DRL for infant chest examinations exceeded the European Commission (EC) DRL, an optimisation was performed by decreasing the kVp and applying automatic exposure control. To assess the image quality, an analysis of high-contrast spatial resolution (HCSR), signal-to-noise ratio (SNR) and figure of merit (FOM) was performed, as well as a blind test based on the generalised estimating equations method. For newborns and chest examinations, the local DRL exceeded the EC DRL by 113%. After the optimisation, a reduction of 54% was obtained. No significant differences were found in the image quality blind test. A decrease in SNR (-37%) and HCSR (-68%), and an increase in FOM (42%), was observed. ADMS allows the fast calculation of local DRLs and the performance of optimisation procedures in babies without delay. However, physical and clinical analyses of image quality are still needed to ensure the diagnostic integrity after the optimisation process. Advances in knowledge: ADMS are useful to detect radiation protection problems and to perform optimisation procedures in paediatric conventional imaging without undue delay, as the ED requires.
Singh, Anushikha; Dutta, Malay Kishore; Sharma, Dilip Kumar
2016-10-01
Identification of fundus images during transmission and storage in databases for tele-ophthalmology applications is an important issue in the modern era. The proposed work presents a novel, accurate method for the generation of a unique identification code for fundus images for tele-ophthalmology applications and storage in databases. Unlike existing methods of steganography and watermarking, this method does not tamper with the medical image, as nothing is embedded in this approach and there is no loss of medical information. A strategic combination of the unique blood vessel pattern and the patient ID is used to generate the unique identification code for the digital fundus images: the segmented blood vessel pattern near the optic disc is strategically combined with the patient ID to form the code for the image. The proposed method of medical image identification is tested on the publicly available DRIVE and MESSIDOR databases of fundus images and the results are encouraging. Experimental results indicate the uniqueness of the identification code and the lossless recovery of the patient identity from it for integrity verification of fundus images. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
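One way to realise such a code, sketched under loose assumptions (the paper's actual combination scheme is not reproduced; the hash truncation and separator format below are invented), is to fuse a one-way signature of the vessel pattern with a reversibly appended patient ID, so that the identity remains losslessly recoverable:

```python
# Hedged sketch: vessel-pattern signature + reversibly appended patient ID.
import hashlib
import numpy as np

def identification_code(vessel_mask: np.ndarray, patient_id: str) -> str:
    pattern_bits = np.packbits(vessel_mask.astype(np.uint8)).tobytes()
    signature = hashlib.sha256(pattern_bits).hexdigest()[:16]
    return f"{signature}-{patient_id.encode('utf-8').hex()}"

def recover_patient_id(code: str) -> str:
    """Lossless recovery of the identity part of the code."""
    return bytes.fromhex(code.split("-", 1)[1]).decode("utf-8")

mask = np.random.default_rng(0).integers(0, 2, size=(64, 64))  # stand-in mask
code = identification_code(mask, "PATIENT-0042")
assert recover_patient_id(code) == "PATIENT-0042"
```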
Asselineau, Charles-Alexis; Zapata, Jose; Pye, John
2015-06-01
A stochastic optimisation method adapted to illumination and radiative heat transfer problems involving Monte-Carlo ray-tracing is presented. A solar receiver shape optimisation case study illustrates the advantages of the method and its potential: efficient receivers are identified using a moderate computational cost.
Automatic NEPHIS Coding of Descriptive Titles for Permuted Index Generation.
ERIC Educational Resources Information Center
Craven, Timothy C.
1982-01-01
Describes a system for the automatic coding of most descriptive titles which generates Nested Phrase Indexing System (NEPHIS) input strings of sufficient quality for permuted index production. A series of examples and an 11-item reference list accompany the text. (JL)
A universal preconditioner for simulating condensed phase materials.
Packwood, David; Kermode, James; Mones, Letif; Bernstein, Noam; Woolley, John; Gould, Nicholas; Ortner, Christoph; Csányi, Gábor
2016-04-28
We introduce a universal sparse preconditioner that accelerates geometry optimisation and saddle point search tasks that are common in the atomic scale simulation of materials. Our preconditioner is based on the neighbourhood structure and we demonstrate the gain in computational efficiency in a wide range of materials that include metals, insulators, and molecular solids. The simple structure of the preconditioner means that the gains can be realised in practice not only when using expensive electronic structure models but also for fast empirical potentials. Even for relatively small systems of a few hundred atoms, we observe speedups of a factor of two or more, and the gain grows with system size. An open source Python implementation within the Atomic Simulation Environment is available, offering interfaces to a wide range of atomistic codes.
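The neighbourhood idea can be sketched compactly. The fragment below builds a sparse matrix that couples atoms within a cutoff radius and uses it to rescale a gradient step; the coefficients, cutoff, and the one-scalar-per-atom simplification are illustrative assumptions, not the published parameterisation or the ASE interface:

```python
# Sketch of a neighbourhood-based sparse preconditioner. Diagonal dominance
# keeps P positive definite; solving P step = gradient rescales the
# optimisation step by local connectivity.
import numpy as np
import scipy.sparse as sparse
import scipy.sparse.linalg as spla

def neighbourhood_preconditioner(positions, r_cut=3.0, mu=1.0, c_stab=0.1):
    n = len(positions)
    rows, cols, vals = [], [], []
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(positions[i] - positions[j]) < r_cut:
                rows += [i, j]; cols += [j, i]; vals += [-mu, -mu]
    P = sparse.coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsc()
    P += sparse.diags(-np.asarray(P.sum(axis=1)).ravel() + c_stab * mu)
    return P

rng = np.random.default_rng(1)
positions = rng.uniform(0.0, 10.0, size=(200, 3))
gradient = rng.standard_normal(200)            # stand-in energy gradient
step = spla.spsolve(neighbourhood_preconditioner(positions).tocsc(), gradient)
```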
Hansson, M G
2008-01-01
Biobank research has been the focus of great interest of scholars and regulatory bodies who have addressed different ethical issues. On the basis of a review of the literature it may be concluded that, regarding some major themes in this discussion, a consensus seems to emerge on the international scene after the regular exchange of arguments in scientific journals. Broad or general consent is emerging as the generally preferred solution for biobank studies and straightforward instructions for coding will optimise privacy while facilitating research that may result in new methods for the prevention of disease and for medical treatment. The difficult question regarding the return of information to research subjects is the focus of the current research, but a helpful analysis of some of the issues at stake and concrete recommendations have recently been suggested. PMID:19034276
Simulations On Pair Creation In Collision Of γ-Beams Produced With High Intensity Lasers
NASA Astrophysics Data System (ADS)
Jansen, Oliver; Ribeyre, Xavier; D'Humieres, Emmanuel; Lobet, Mathieu; Jequier, Sophie; Tikhonchuk, Vladimir
2016-10-01
Direct production of electron-positron pairs in two-photon collisions, the Breit-Wheeler process, is one of the most basic processes in the universe. However, this process has never been directly observed in the laboratory due to the lack of high intensity γ sources. For a feasibility study and for the optimisation of experimental set-ups we developed a high-performance tree-code. Different possible set-ups with MeV photon sources were discussed and compared, using collision detection for huge numbers of particles in a quantum-electrodynamic regime. The authors acknowledge the financial support from the French National Research Agency (ANR) in the framework of ''The Investments for the Future'' programme IdEx Bordeaux - LAPHIA (ANR-10IDEX-03-02)-Project TULIMA.
Test Generator for MATLAB Simulations
NASA Technical Reports Server (NTRS)
Henry, Joel
2011-01-01
MATLAB Automated Test Tool, version 3.0 (MATT 3.0) is a software package that provides automated tools that reduce the time needed for extensive testing of simulation models that have been constructed in the MATLAB programming language by use of the Simulink and Real-Time Workshop programs. MATT 3.0 runs on top of the MATLAB engine application-program interface to communicate with the Simulink engine. MATT 3.0 automatically generates source code from the models, generates custom input data for testing both the models and the source code, and generates graphs and other presentations that facilitate comparison of the outputs of the models and the source code for the same input data. Context-sensitive and fully searchable help is provided in HyperText Markup Language (HTML) format.
Topology optimisation for natural convection problems
NASA Astrophysics Data System (ADS)
Alexandersen, Joe; Aage, Niels; Andreasen, Casper Schousboe; Sigmund, Ole
2014-12-01
This paper demonstrates the application of the density-based topology optimisation approach for the design of heat sinks and micropumps based on natural convection effects. The problems are modelled under the assumptions of steady-state laminar flow using the incompressible Navier-Stokes equations coupled to the convection-diffusion equation through the Boussinesq approximation. In order to facilitate topology optimisation, the Brinkman approach is taken to penalise velocities inside the solid domain and the effective thermal conductivity is interpolated in order to accommodate differences in thermal conductivity of the solid and fluid phases. The governing equations are discretised using stabilised finite elements and topology optimisation is performed for two different problems using discrete adjoint sensitivity analysis. The study shows that topology optimisation is a viable approach for designing heat sink geometries cooled by natural convection and micropumps powered by natural convection.
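In sketch form, the governing system named above is the steady incompressible Navier-Stokes equations with a Brinkman penalisation term and Boussinesq buoyancy coupling (notation assumed; the paper's interpolation functions for the penalisation and conductivity are not reproduced):

```latex
% Steady-state Boussinesq-coupled system with Brinkman penalisation (sketch)
\begin{align}
  \rho_0\,(\mathbf{u}\cdot\nabla)\mathbf{u}
    &= -\nabla p + \mu\nabla^2\mathbf{u}
       - \alpha(\gamma)\,\mathbf{u}
       + \rho_0\beta\,(T - T_0)\,\mathbf{g},\\
  \nabla\cdot\mathbf{u} &= 0,\\
  \rho_0 c_p\,\mathbf{u}\cdot\nabla T &= \nabla\cdot\big(k(\gamma)\,\nabla T\big).
\end{align}
```

Here γ is the design field, α(γ) the Brinkman inverse permeability that penalises velocity inside solid regions, and k(γ) the interpolated effective conductivity.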
Automated encoding of clinical documents based on natural language processing.
Friedman, Carol; Shagina, Lyudmila; Lussier, Yves; Hripcsak, George
2004-01-01
The aim of this study was to develop a method based on natural language processing (NLP) that automatically maps an entire clinical document to codes with modifiers and to quantitatively evaluate the method. An existing NLP system, MedLEE, was adapted to automatically generate codes. The method involves matching of structured output generated by MedLEE consisting of findings and modifiers to obtain the most specific code. Recall and precision applied to Unified Medical Language System (UMLS) coding were evaluated in two separate studies. Recall was measured using a test set of 150 randomly selected sentences, which were processed using MedLEE. Results were compared with a reference standard determined manually by seven experts. Precision was measured using a second test set of 150 randomly selected sentences from which UMLS codes were automatically generated by the method and then validated by experts. Recall of the system for UMLS coding of all terms was .77 (95% CI .72-.81), and for coding terms that had corresponding UMLS codes recall was .83 (.79-.87). Recall of the system for extracting all terms was .84 (.81-.88). Recall of the experts ranged from .69 to .91 for extracting terms. The precision of the system was .89 (.87-.91), and precision of the experts ranged from .61 to .91. Extraction of relevant clinical information and UMLS coding were accomplished using a method based on NLP. The method appeared to be comparable to or better than six experts. The advantage of the method is that it maps text to codes along with other related information, rendering the coded output suitable for effective retrieval.
Hierarchical dispatch using two-stage optimisation for electricity markets in smart grid
NASA Astrophysics Data System (ADS)
Yang, Jie; Zhang, Guoshan; Ma, Kai
2016-11-01
This paper proposes a hierarchical dispatch method for the electricity markets consisting of wholesale markets and retail markets. In the wholesale markets, the generators and the retailers decide the generation and the purchase according to the market-clearing price. In the retail markets, the retailers set the retail price to adjust the electricity consumption of the consumers. Due to the two-way communications in smart grid, the retailers can decide the electricity purchase from the wholesale markets based on the information on electricity usage of consumers in the retail markets. We establish the hierarchical dispatch model for the wholesale markets and the retail markets and develop distributed algorithms to search for the optimal generation, purchase, and consumption. Numerical results show the balance between the supply and demand, the profits of the retailers, and the convergence of the distributed algorithms.
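A toy version of the price-mediated interplay can be written in a few lines. The quadratic generator costs, logarithmic consumer utilities, and the simple imbalance-driven price update below are illustrative assumptions standing in for the paper's market-clearing and distributed algorithms:

```python
# Toy two-level dispatch via price iteration (coefficients invented).
# Generator cost a_i g^2 gives best response g_i = p / (2 a_i); consumer
# utility b_j log(d) gives demand d_j = b_j / p.
a = [0.5, 0.8, 1.2]        # generator cost coefficients
b = [40.0, 25.0]           # consumer utility coefficients
p, gamma = 10.0, 0.02      # initial price, step size

for _ in range(2000):
    supply = sum(p / (2 * ai) for ai in a)
    demand = sum(bj / p for bj in b)
    p = max(1e-6, p + gamma * (demand - supply))   # market-clearing update

print(f"clearing price {p:.3f}, supply {supply:.2f}, demand {demand:.2f}")
```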
Inter-satellite optical communications: from SILEX to next generation systems
NASA Astrophysics Data System (ADS)
Laurent, Bernard; Planche, Gilles; Michel, Cyril
2004-06-01
The continuous growth in data rate demand, the importance of real-time commanding and real-time access to information for diverse civilian and military applications, as well as the in-orbit demonstration of optical communication, have boosted the interest in such systems for future applications. After a presentation of the different fields of application and their associated performance requirements, this paper presents the possible optical link candidates. Then, the architecture, the design and the performance of new optical terminal generations, which profit from the SILEX experience and the use of new technologies such as SiC and APS, are detailed. This new optimised generation, highly simplified with respect to the SILEX terminals and dimensioned to offer higher data rates, presents attractive mass, volume and power characteristics compatible with simple accommodation on the host vehicle.
Optimisation of SOA-REAMs for hybrid DWDM-TDMA PON applications.
Naughton, Alan; Antony, Cleitus; Ossieur, Peter; Porto, Stefano; Talli, Giuseppe; Townsend, Paul D
2011-12-12
We demonstrate how loss-optimised, gain-saturated SOA-REAM based reflective modulators can reduce the burst to burst power variations due to differential access loss in the upstream path in carrier distributed passive optical networks by 18 dB compared to fixed linear gain modulators. We also show that the loss optimised device has a high tolerance to input power variations and can operate in deep saturation with minimal patterning penalties. Finally, we demonstrate that an optimised device can operate across the C-Band and also over a transmission distance of 80 km. © 2011 Optical Society of America
Insertion of operation-and-indicate instructions for optimized SIMD code
Eichenberger, Alexander E; Gara, Alan; Gschwind, Michael K
2013-06-04
Mechanisms are provided for inserting indicated instructions for tracking and indicating exceptions in the execution of vectorized code. A portion of first code is received for compilation. The portion of first code is analyzed to identify non-speculative instructions performing designated non-speculative operations in the first code that are candidates for replacement by replacement operation-and-indicate instructions that perform the designated non-speculative operations and further perform an indication operation for indicating any exception conditions corresponding to special exception values present in vector register inputs to the replacement operation-and-indicate instructions. The replacement is performed and second code is generated based on the replacement of the at least one non-speculative instruction. The data processing system executing the compiled code is configured to store special exception values in vector output registers, in response to a speculative instruction generating an exception condition, without initiating exception handling.
Using Automatic Code Generation in the Attitude Control Flight Software Engineering Process
NASA Technical Reports Server (NTRS)
McComas, David; O'Donnell, James R., Jr.; Andrews, Stephen F.
1999-01-01
This paper presents an overview of the attitude control subsystem flight software development process, identifies how the process has changed due to automatic code generation, analyzes each software development phase in detail, and concludes with a summary of our lessons learned.
Mitchell, P; Korobelnik, J-F; Lanzetta, P; Holz, F G; Prünte, C; Schmidt-Erfurth, U; Tano, Y; Wolf, S
2010-01-01
Neovascular age-related macular degeneration (AMD) has a poor prognosis if left untreated, frequently resulting in legal blindness. Ranibizumab is approved for treating neovascular AMD. However, further guidance is needed to assist ophthalmologists in clinical practice to optimise treatment outcomes. An international retina expert panel assessed evidence available from prospective, multicentre studies evaluating different ranibizumab treatment schedules (ANCHOR, MARINA, PIER, SAILOR, SUSTAIN and EXCITE) and a literature search to generate evidence-based and consensus recommendations for treatment indication and assessment, retreatment and monitoring. Ranibizumab is indicated for choroidal neovascular lesions with active disease, the clinical parameters of which are outlined. Treatment initiation with three consecutive monthly injections, followed by continued monthly injections, has provided the best visual-acuity outcomes in pivotal clinical trials. If continued monthly injections are not feasible after initiation, a flexible strategy appears viable, with monthly monitoring of lesion activity recommended. Initiation regimens of fewer than three injections have not been assessed. Continuous careful monitoring with flexible retreatment may help avoid vision loss recurring. Standardised biomarkers need to be determined. Evidence-based guidelines will help to optimise treatment outcomes with ranibizumab in neovascular AMD.
Dual ant colony operational modal analysis parameter estimation method
NASA Astrophysics Data System (ADS)
Sitarz, Piotr; Powałka, Bartosz
2018-01-01
Operational Modal Analysis (OMA) is a common technique used to examine the dynamic properties of a system. Contrary to experimental modal analysis, the input signal is generated in the object's ambient environment. Operational modal analysis mainly aims at determining the number of pole pairs and at estimating modal parameters. Many methods are used for parameter identification. Some methods operate in the time domain, while others operate in the frequency domain. The former use correlation functions, the latter spectral density functions. However, while some methods require the user to select poles from a stabilisation diagram, others try to automate the selection process. The dual ant colony operational modal analysis parameter estimation method (DAC-OMA) presents a new approach to the problem, avoiding issues involved in the stabilisation diagram. The presented algorithm is fully automated. It uses deterministic methods to define the intervals of the estimated parameters, thus reducing the problem to an optimisation task which is conducted with dedicated software based on an ant colony optimisation algorithm. The combination of deterministic methods restricting parameter intervals and artificial intelligence yields very good results, also for closely spaced modes and significantly varied mode shapes within one measurement point.
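For readers unfamiliar with continuous ant colony optimisation, the sketch below shows the generic archive-based sampling scheme such methods build on (the parameter values and test function are invented; this is not the DAC-OMA code):

```python
# Minimal continuous ant-colony optimisation sketch: sample new solutions
# around an archive of good ones, weighted by rank, and keep the best.
import numpy as np

def aco_minimise(f, bounds, n_ants=20, archive=10, q=0.2, xi=0.85, iters=200):
    rng = np.random.default_rng(0)
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(archive, dim))          # solution archive
    F = np.apply_along_axis(f, 1, X)
    w = np.exp(-np.arange(archive)**2 / (2 * (q * archive)**2))
    w /= w.sum()                                          # rank weights
    for _ in range(iters):
        order = np.argsort(F); X, F = X[order], F[order]
        for _ in range(n_ants):
            k = rng.choice(archive, p=w)                  # pick a guide
            sigma = xi * np.abs(X - X[k]).mean(axis=0)    # spread from archive
            x = np.clip(rng.normal(X[k], sigma + 1e-12), lo, hi)
            fx = f(x)
            if fx < F[-1]:                                # replace the worst
                X[-1], F[-1] = x, fx
                order = np.argsort(F); X, F = X[order], F[order]
    return X[0], F[0]

best_x, best_f = aco_minimise(lambda x: np.sum((x - 1.0)**2),
                              bounds=[(-5, 5)] * 3)
```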
Sluggett, Janet K; Ilomäki, Jenni; Seaman, Karla L; Corlis, Megan; Bell, J Simon
2017-02-01
Eight percent of Australians aged 65 years and over receive residential aged care each year. Residents are increasingly older, frailer and have complex care needs on entry to residential aged care. Up to 63% of Australian residents of aged care facilities take nine or more medications regularly. Together, these factors place residents at high risk of adverse drug events. This paper reviews medication-related policies, practices and research in Australian residential aged care. Complex processes underpin prescribing, supply and administration of medications in aged care facilities. A broad range of policies and resources are available to assist health professionals, aged care facilities and residents to optimise medication management. These include national guiding principles, a standardised national medication chart, clinical medication reviews and facility accreditation standards. Recent Australian interventions have improved medication use in residential aged care facilities. Generating evidence for prescribing and deprescribing that is specific to residential aged care, health workforce reform, medication-related quality indicators and inter-professional education in aged care are important steps toward optimising medication use in this setting. Copyright © 2016 Elsevier Ltd. All rights reserved.
Advanced data management for optimising the operation of a full-scale WWTP.
Beltrán, Sergio; Maiza, Mikel; de la Sota, Alejandro; Villanueva, José María; Ayesa, Eduardo
2012-01-01
The lack of appropriate data management tools is presently a limiting factor for a broader implementation and a more efficient use of sensors and analysers, monitoring systems and process controllers in wastewater treatment plants (WWTPs). This paper presents a technical solution for advanced data management of a full-scale WWTP. The solution is based on an efficient and intelligent use of the plant data by a standard centralisation of the heterogeneous data acquired from different sources, effective data processing to extract adequate information, and a straightforward connection to other emerging tools focused on the operational optimisation of the plant such as advanced monitoring and control or dynamic simulators. A pilot study of the advanced data manager tool was designed and implemented in the Galindo-Bilbao WWTP. The results of the pilot study showed its potential for agile and intelligent plant data management by generating new enriched information combining data from different plant sources, facilitating the connection of operational support systems, and developing automatic plots and trends of simulated results and actual data for plant performance and diagnosis.
Verification and Validation in a Rapid Software Development Process
NASA Technical Reports Server (NTRS)
Callahan, John R.; Easterbrook, Steve M.
1997-01-01
The high cost of software production is driving development organizations to adopt more automated design and analysis methods such as rapid prototyping, computer-aided software engineering (CASE) tools, and high-level code generators. Even developers of safety-critical software systems have adopted many of these new methods while striving to achieve high levels of quality and reliability. While these new methods may enhance productivity and quality in many cases, we examine some of the risks involved in the use of new methods in safety-critical contexts. We examine a case study involving the use of a CASE tool that automatically generates code from high-level system designs. We show that while high-level testing on the system structure is highly desirable, significant risks exist in the automatically generated code and in re-validating releases of the generated code after subsequent design changes. We identify these risks and suggest process improvements that retain the advantages of rapid, automated development methods within the quality and reliability contexts of safety-critical projects.
Rekadwad, Bhagwan N.; Khobragade, Chandrahasya N.
2016-01-01
Microbiologists are routinely engaged in the isolation, identification and comparison of isolated bacteria to establish their novelty. 16S rRNA sequences of Bacillus pumilus isolated from Lonar Crater Lake (19° 58′ N; 76° 31′ E), India, were retrieved from the NCBI repository, and quick response (QR) codes were generated for the sequences (FASTA format and full GenBank information). The Bacillus pumilus 16S rRNA gene sequences were also used to generate CGR, FCGR and PCA, which can be used for visual comparison and evaluation, respectively. The hyperlinked QR codes, CGR, FCGR and PCA of all the isolates are made available to users on the portal https://sites.google.com/site/bhagwanrekadwad/. This digital data helps to evaluate and compare any Bacillus pumilus strain, minimises laboratory efforts and avoids misinterpretation of the species. PMID:27141529
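The QR-code step itself is straightforward to reproduce, for example with the third-party qrcode package (an assumption; the authors' exact tooling is not stated, and the sequence snippet below is a truncated placeholder, not real data):

```python
# Encode a FASTA record as a scannable QR symbol.
import qrcode

fasta_record = ">B_pumilus_isolate1_16S\nAGAGTTTGATCCTGGCTCAG..."
img = qrcode.make(fasta_record)      # encode the FASTA text in a QR symbol
img.save("b_pumilus_16s_qr.png")     # image for sharing and lookup
```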
The Integration of Environmental Constraints into Tidal Array Optimisation
NASA Astrophysics Data System (ADS)
du Feu, Roan; de Trafford, Sebastian; Culley, Dave; Hill, Jon; Funke, Simon W.; Kramer, Stephan C.; Piggott, Matthew D.
2015-04-01
It has been estimated by The Carbon Trust that the marine renewable energy sector, of which tidal stream turbines are projected to play a large part, could produce 20% of the UK's present electricity requirements. This has led to the important question of how this technology can be deployed in an economically and environmentally friendly manner. Work is currently under way to understand how the tidal turbines that constitute an array can be arranged to maximise the total power generated by that array. The work presented here continues this through the inclusion of environmental constraints. The benefits of the renewable energy sector to our environment at large are not in question. However, the question remains as to the effects this burgeoning sector will have on local environments, and how to mitigate these effects if they are detrimental. For example, the presence of tidal arrays can, through altering current velocity, drastically change the sediment transport into and out of an area along with re-suspending existing sediment. This can have the effects of scouring or submerging habitat, mobilising contaminants within the existing sediment, reducing food supply and altering the turbidity of the water. All of these greatly impact upon any fauna in the affected region. This work pays particular attention to the destruction of habitat of benthic fauna, as this is quantifiable as a direct result of change in the current speed, a primary factor in determining sediment accumulation on the sea floor. OpenTidalFarm is an open source tool that maximises the power generated by an array through repositioning the turbines within it. It currently uses a 2D shallow water model with turbines represented as bump functions of increased friction. The functional of interest, the power extracted by the array, is evaluated from the flow field, which is calculated at each iteration using a finite element method. A gradient-based local optimisation is then used, through solving the associated adjoint equations, and the turbines are repositioned accordingly. The use of local optimisation drastically reduces the number of iterations, therefore allowing each iteration to be more expensive. This means that this technique can be readily applied to large arrays and also that there is enough leeway in computational cost that additional constraints or functionals can be introduced without the model becoming impractical to apply. The work presented here utilises OpenTidalFarm and incorporates into it ecological and sedimentological constraints that limit the extent to which the array can alter the current speed in specified locations. The addition of these constraints will likely affect the total power generated by the array, and this work details our first steps in investigating the trade-off between the maximisation of power generation and the limitation of the array's impact upon its environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chakraborty, S.; Kroposki, B.; Kramer, W.
Integrating renewable energy and distributed generation into the Smart Grid architecture requires power electronics (PE) for energy conversion. The key to reaching successful Smart Grid implementation is to develop interoperable, intelligent, and advanced PE technology that improves and accelerates the use of distributed energy resource systems. This report describes the simulation, design, and testing of a single-phase DC-to-AC inverter developed to operate in both islanded and utility-connected mode. It provides results on both the simulations and the experiments conducted, demonstrating the ability of the inverter to provide advanced control functions such as power flow and VAR/voltage regulation. This report also analyzes two different techniques used for digital signal processor (DSP) code generation. Initially, the DSP code was written in the C programming language using Texas Instruments' Code Composer Studio. In a later stage of the research, the Simulink DSP toolbox was used to self-generate code for the DSP. The successful tests using Simulink self-generated DSP code show promise for fast prototyping of PE controls.
ERIC Educational Resources Information Center
Mooij, Ton
2004-01-01
Specific combinations of educational and ICT conditions including computer use may optimise learning processes, particularly for learners at risk. This position paper asks which curricular, instructional, and ICT characteristics can be expected to optimise learning processes and outcomes, and how to best achieve this optimization. A theoretical…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haslinger, K.H.
Tube-to-tube support interaction characteristics were determined experimentally on a single tube, multi-span geometry, representative of the Westinghouse Model 51 steam generator economizer design. Results, in part, became input for an autoclave type wear test program on steam generator tubes, performed by Kraftwerk Union (KWU). More importantly, the test data reported here have been used to validate two analytical wear prediction codes: the WECAN code, which was developed by Westinghouse, and the ABAQUS code, which has been enhanced for EPRI by Foster Wheeler to enable simulation of gap conditions (including fluid film effects) for various support geometries.
Bourne, Richard S; Shulman, Rob; Tomlin, Mark; Borthwick, Mark; Berry, Will; Mills, Gary H
2017-04-01
Objectives: to identify between- and within-profession rater reliability of clinical impact grading for common critical care prescribing error and optimisation cases, and to identify representative clinical impact grades for each individual case. Design: electronic questionnaire. Setting: 5 UK NHS Trusts. Participants: 30 critical care healthcare professionals (doctors, pharmacists and nurses), who graded the severity of clinical impact (5-point categorical scale) of 50 error and 55 optimisation cases. Main outcome measures: case between- and within-profession rater reliability and modal clinical impact grading. Between- and within-profession rater reliability analysis used a linear mixed model and intraclass correlation, respectively. The majority of error and optimisation cases (both 76%) had a modal clinical severity grade of moderate or higher. Error cases: doctors graded clinical impact significantly lower than pharmacists (-0.25; P < 0.001) and nurses (-0.53; P < 0.001), with nurses significantly higher than pharmacists (0.28; P < 0.001). Optimisation cases: doctors graded clinical impact significantly lower than nurses and pharmacists (-0.39 and -0.5; P < 0.001, respectively). Within-profession reliability grading was excellent for pharmacists (0.88 and 0.89; P < 0.001) and doctors (0.79 and 0.83; P < 0.001) but only fair to good for nurses (0.43 and 0.74; P < 0.001), for optimisation and error cases, respectively. Representative clinical impact grades for over 100 common prescribing error and optimisation cases are reported for potential clinical practice and research application. The between-professional variability highlights the importance of multidisciplinary perspectives in assessment of medication error and optimisation cases in clinical practice and research. © The Author 2017. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
PCG: A prototype incremental compilation facility for the SAGA environment, appendix F
NASA Technical Reports Server (NTRS)
Kimball, Joseph John
1985-01-01
A programming environment supports the activity of developing and maintaining software. New environments provide language-oriented tools such as syntax-directed editors, whose usefulness is enhanced because they embody language-specific knowledge. When syntactic and semantic analysis occur early in the cycle of program production, that is, during editing, the use of a standard compiler is inefficient, for it must re-analyze the program before generating code. Likewise, it is inefficient to recompile an entire file, when the editor can determine that only portions of it need updating. The pcg, or Pascal code generation, facility described here generates code directly from the syntax trees produced by the SAGA syntax directed Pascal editor. By preserving the intermediate code used in the previous compilation, it can limit recompilation to the routines actually modified by editing.
Fault-tolerant, high-level quantum circuits: form, compilation and description
NASA Astrophysics Data System (ADS)
Paler, Alexandru; Polian, Ilia; Nemoto, Kae; Devitt, Simon J.
2017-06-01
Fault-tolerant quantum error correction is a necessity for any quantum architecture destined to tackle interesting, large-scale problems. Its theoretical formalism has been well founded for nearly two decades. However, we still do not have an appropriate compiler to produce a fault-tolerant, error-corrected description from a higher-level quantum circuit for state-of-the-art hardware models. There are many technical hurdles, including dynamic circuit constructions that occur when constructing fault-tolerant circuits with commonly used error correcting codes. We introduce a package that converts high-level quantum circuits consisting of commonly used gates into a form employing all decompositions and ancillary protocols needed for fault-tolerant error correction. We call this form the (I)nitialisation, (C)NOT, (M)easurement form (ICM); it consists of an initialisation layer of qubits into one of four distinct states, a massive, deterministic array of CNOT operations and a series of time-ordered X- or Z-basis measurements. The form allows a more flexible approach towards circuit optimisation. At the same time, the package outputs a standard circuit or a canonical geometric description which is a necessity for operating current state-of-the-art hardware architectures using topological quantum codes.
Scrape off layer modelling studies for SST-I
NASA Astrophysics Data System (ADS)
Warrier, M.; Jaishankar, S.; Deshpande, S.; Coster, D.; Schneider, R.; Chaturvedi, S.; Srinivasan, R.; Braams, B. J.; SST Team
SOL modelling results for SST-1 (SST Team, Proceedings of the 16th IEEE/NPSS Symposium on Fusion Engineering, Champaign, IL, vol. II, 1995, p. 481) show a sheath limited flow regime. This is due to the low edge densities required by lower hybrid current drive (LHCD), coupled with high power input per unit volume. Coupled plasma-neutral transport studies using B2-Eirene [R. Schneider et al., J. Nucl. Mater. 196-198 (1992) 810] show significantly high charge exchange losses and radiated power from the core. It also shows that the heat flux to the inner divertor is higher than that to the outer divertor due to thinner inner SOL widths. The Monte-Carlo neutral transport code DEGAS [D. Heifitz et al., J. Comput. Phys. 46 (1982) 309] was used to optimise the baffle plate geometry and it was seen that a configuration where the baffle plate shields the main plasma from the divertor strike point results in reduced backflow of neutrals. The divertor erosion code DIVER (M. Warrier et al., SST Divertor Modelling Report, 1996-1997) was used to predict a steady state operating temperature for the SST divertor plate lying in the range 750-1000°C for which the erosion will be minimum.
Some User's Insights Into ADIFOR 2.0D
NASA Technical Reports Server (NTRS)
Giesy, Daniel P.
2002-01-01
Some insights are given that were gained by one user through experience with the ADIFOR 2.0D software for automatic differentiation of Fortran code. These insights are generally in the area of the user interface with the generated derivative code, particularly the actual form of the interface and the use of derivative objects, including "seed" matrices. Some remarks are given as to how to iterate application of ADIFOR in order to generate second derivative code.
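The role of a seed matrix can be illustrated compactly. In the sketch below, complex-step differentiation stands in for ADIFOR's generated derivative code (a swapped-in technique; the function and seeds are invented): for a seed S, the derivative object holds J S, the full Jacobian when S is the identity, or a single directional derivative when S is one column.

```python
# Seed-matrix illustration via complex-step differentiation.
import numpy as np

def f(x):
    return np.array([x[0]**2 * x[1], np.sin(x[1]) + x[2]])

def jacobian_times_seed(f, x, S, h=1e-30):
    x = np.asarray(x, dtype=complex)
    cols = []
    for s in S.T:                    # one complex-step sweep per seed column
        cols.append(np.imag(f(x + 1j * h * s)) / h)
    return np.column_stack(cols)

x0 = np.array([1.0, 2.0, 3.0])
print(jacobian_times_seed(f, x0, np.eye(3)))                      # full Jacobian
print(jacobian_times_seed(f, x0, np.array([[1.0, 0.0, 1.0]]).T))  # J @ v
```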
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grote, D. P.
Forthon generates links between Fortran and Python. Python is a high level, object oriented, interactive and scripting language that allows a flexible and versatile interface to computational tools. The Forthon package generates the necessary wrapping code which allows access to the Fortran database and to the Fortran subroutines and functions. This provides a development package where the computationally intensive parts of a code can be written in efficient Fortran, and the high level controlling code can be written in the much more versatile Python language.
Distributed optimisation problem with communication delay and external disturbance
NASA Astrophysics Data System (ADS)
Tran, Ngoc-Tu; Xiao, Jiang-Wen; Wang, Yan-Wu; Yang, Wu
2017-12-01
This paper investigates the distributed optimisation problem for multi-agent systems (MASs) with the simultaneous presence of external disturbance and communication delay. To solve this problem, a two-step design scheme is introduced. In the first step, based on the internal model principle, an internal model term is constructed to compensate the disturbance asymptotically. In the second step, a distributed optimisation algorithm is designed to solve the distributed optimisation problem for the MASs with the simultaneous presence of disturbance and communication delay. Moreover, in the proposed algorithm, each agent interacts with its neighbours through the connected topology and the delay occurs during the information exchange. By utilising a Lyapunov-Krasovskii functional, delay-dependent conditions are derived for both slowly and fast time-varying delay, respectively, to ensure the convergence of the algorithm to the optimal solution of the optimisation problem. Several numerical simulation examples are provided to illustrate the effectiveness of the theoretical results.
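The generic algorithm family behind such schemes is consensus plus local gradients, sketched below under simplifying assumptions (scalar decision variable, no delay or disturbance compensation; the costs and mixing matrix are invented): each agent i holds a private cost f_i, and the network jointly minimises their sum.

```python
# Distributed gradient descent with neighbour averaging on a 4-agent ring.
import numpy as np

targets = np.array([1.0, 3.0, 5.0, 7.0])       # f_i(x) = (x - target_i)^2 / 2
W = np.array([[0.5, 0.25, 0.0, 0.25],          # doubly stochastic mixing
              [0.25, 0.5, 0.25, 0.0],          # matrix of the ring topology
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])
x = np.zeros(4)                                 # local estimates
alpha = 0.1                                     # step size
for _ in range(300):
    grad = x - targets                          # local gradients
    x = W @ x - alpha * grad                    # mix with neighbours, descend

print(x)    # all agents approach the optimum mean(targets) = 4.0
```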
NASA Astrophysics Data System (ADS)
Xiao, Long; Liu, Xinggao; Ma, Liang; Zhang, Zeyin
2018-03-01
Dynamic optimisation problems with characteristic times, widely existing in many areas, are one of the frontiers and hotspots of dynamic optimisation research. This paper considers a class of dynamic optimisation problems with constraints that depend on interior points, either fixed or variable, and presents a novel direct pseudospectral method using Legendre-Gauss (LG) collocation points for solving these problems. The formula for the state at the terminal time of each subdomain is derived, which results in a linear combination of the state at the LG points in the subdomains so as to avoid a complex nonlinear integral. The sensitivities of the state at the collocation points with respect to the variable characteristic times are derived to improve the efficiency of the method. Three well-known characteristic time dynamic optimisation problems are solved, and the results are compared in detail with methods reported in the literature. The research results show the effectiveness of the proposed method.
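The standard LG quadrature identity behind such a terminal-state formula is (notation assumed, not copied from the paper):

```latex
\begin{equation}
  x(t_f) \approx x(t_0)
  + \frac{t_f - t_0}{2} \sum_{i=1}^{N} w_i\, f\big(x(\tau_i), \tau_i\big),
\end{equation}
```

with τ_i and w_i the Legendre-Gauss points and weights mapped to [t_0, t_f]: the terminal state is a linear combination of quantities evaluated at the LG points, so no separate nonlinear integration is needed.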
Medicines optimisation: priorities and challenges.
Kaufman, Gerri
2016-03-23
Medicines optimisation is promoted in a guideline published in 2015 by the National Institute for Health and Care Excellence. Four guiding principles underpin medicines optimisation: aim to understand the patient's experience; ensure evidence-based choice of medicines; ensure medicines use is as safe as possible; and make medicines optimisation part of routine practice. Understanding the patient experience is important to improve adherence to medication regimens. This involves communication, shared decision making and respect for patient preferences. Evidence-based choice of medicines is important for clinical and cost effectiveness. Systems and processes for the reporting of medicines-related safety incidents have to be improved if medicines use is to be as safe as possible. Ensuring safe practice in medicines use when patients are transferred between organisations, and managing the complexities of polypharmacy are imperative. A medicines use review can help to ensure that medicines optimisation forms part of routine practice.
An Analysis of Elliptic Grid Generation Techniques Using an Implicit Euler Solver.
1986-06-09
The code includes a three-dimensional algebraic grid generation system, based on transfinite interpolation, that is used to start the iterative solution of the flow, heat transfer, and combustion problems by the elliptic generation system. Current research is aimed primarily at the automatic determination of the control functions, from the elements of the covariant metric tensor, in the elliptic grid generation system of the computational fluid dynamics code.
On models of the genetic code generated by binary dichotomic algorithms.
Gumbel, Markus; Fimmel, Elena; Danielli, Alberto; Strüngmann, Lutz
2015-02-01
In this paper we introduce the concept of a BDA-generated model of the genetic code, which is based on binary dichotomic algorithms (BDAs). Such a BDA partitions the set of 64 codons into two disjoint classes of size 32 each and provides a generalization of known partitions like the Rumer dichotomy. We investigate what partitions can be generated when a set of different BDAs is applied sequentially to the set of codons. The search revealed that these models are able to generate code tables with very different numbers of classes, ranging from 2 to 64. We have analyzed whether there are models that map the codons to their amino acids. A perfect matching is not possible. However, we present models that describe the standard genetic code with only few errors. There are also models that map all 64 codons uniquely to 64 classes, showing that BDAs can be used to identify codons precisely. This could serve as a basis for further mathematical analysis using coding theory, for example. The hypothesis that BDAs might reflect a molecular mechanism taking place in the decoding center of the ribosome is discussed. The scan demonstrated that binary dichotomic partitions are able to model different aspects of the genetic code very well. The search was performed with our tool Beady-A. This software is freely available at http://mi.informatik.hs-mannheim.de/beady-a. It requires a JVM version 6 or higher. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
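The mechanism is easy to sketch: each BDA answers a yes/no question about a codon, and applying k BDAs in sequence yields up to 2^k classes. The question sets below are invented for illustration and are not the specific BDAs studied in the paper:

```python
# Sequential binary dichotomic partitions of the 64 codons.
from itertools import product

codons = ["".join(c) for c in product("UCAG", repeat=3)]

def bda(position, base_set):
    """Dichotomic question: is the base at `position` in `base_set`?"""
    return lambda codon: codon[position] in base_set

# Each two-base question splits the 64 codons into classes of 32 and 32.
bdas = [bda(0, {"G", "C"}), bda(1, {"A", "U"}), bda(2, {"C", "G"})]

classes = {}
for codon in codons:
    key = tuple(q(codon) for q in bdas)       # answer vector = class label
    classes.setdefault(key, []).append(codon)

print(len(classes))      # three sequential BDAs: up to 2**3 = 8 classes
```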
Rapid 3D bioprinting from medical images: an application to bone scaffolding
NASA Astrophysics Data System (ADS)
Lee, Daniel Z.; Peng, Matthew W.; Shinde, Rohit; Khalid, Arbab; Hong, Abigail; Pennacchi, Sara; Dawit, Abel; Sipzner, Daniel; Udupa, Jayaram K.; Rajapakse, Chamith S.
2018-03-01
Bioprinting of tissue has its applications throughout medicine. Recent advances in medical imaging allows the generation of 3-dimensional models that can then be 3D printed. However, the conventional method of converting medical images to 3D printable G-Code instructions has several limitations, namely significant processing time for large, high resolution images, and the loss of microstructural surface information from surface resolution and subsequent reslicing. We have overcome these issues by creating a JAVA program that skips the intermediate triangularization and reslicing steps and directly converts binary dicom images into G-Code. In this study, we tested the two methods of G-Code generation on the application of synthetic bone graft scaffold generation. We imaged human cadaveric proximal femurs at an isotropic resolution of 0.03mm using a high resolution peripheral quantitative computed tomography (HR-pQCT) scanner. These images, of the Digital Imaging and Communications in Medicine (DICOM) format, were then processed through two methods. In each method, slices and regions of print were selected, filtered to generate a smoothed image, and thresholded. In the conventional method, these processed images are converted to the STereoLithography (STL) format and then resliced to generate G-Code. In the new, direct method, these processed images are run through our JAVA program and directly converted to G-Code. File size, processing time, and print time were measured for each. We found that this new method produced a significant reduction in G-Code file size as well as processing time (92.23% reduction). This allows for more rapid 3D printing from medical images.
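The direct route can be illustrated in a few lines of Python (a hedged sketch of the idea only: the authors built a Java tool, and the file name, threshold, pixel size and extrusion factor below are invented). Each thresholded row of a slice becomes one extrusion move, with no STL triangulation or reslicing step in between.

```python
# Direct DICOM-to-G-Code raster sketch using pydicom.
import numpy as np
import pydicom

def slice_to_gcode(path, threshold=300, pixel_mm=0.03, z_mm=0.0):
    image = pydicom.dcmread(path).pixel_array
    lines = [f"G1 Z{z_mm:.3f}"]
    for r, row in enumerate(image > threshold):
        filled = np.flatnonzero(row)
        if filled.size:                 # one segment per row (ignores gaps)
            x0, x1 = filled[0] * pixel_mm, filled[-1] * pixel_mm
            lines.append(f"G0 X{x0:.3f} Y{r * pixel_mm:.3f}")
            lines.append(f"G1 X{x1:.3f} E{(x1 - x0) * 0.05:.4f}")
    return "\n".join(lines)

print(slice_to_gcode("femur_slice_0001.dcm")[:200])
```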
NASA Astrophysics Data System (ADS)
Olson, Richard F.
2013-05-01
Rendering of point scatterer based radar scenes for millimeter wave (mmW) seeker tests in real-time hardware-in-the-loop (HWIL) scene generation requires efficient algorithms and vector-friendly computer architectures for complex signal synthesis. New processor technology from Intel implements an extended 256-bit vector SIMD instruction set (AVX, AVX2) in a multi-core CPU design providing peak execution rates of hundreds of GigaFLOPS (GFLOPS) on one chip. Real world mmW scene generation code can approach peak SIMD execution rates only after careful algorithm and source code design. An effective software design will maintain high computing intensity emphasizing register-to-register SIMD arithmetic operations over data movement between CPU caches or off-chip memories. Engineers at the U.S. Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) applied two basic parallel coding methods to assess new 256-bit SIMD multi-core architectures for mmW scene generation in HWIL. These include use of POSIX threads built on vector library functions and more portable, high-level parallel code based on compiler technology (e.g. OpenMP pragmas and SIMD autovectorization). Since CPU technology is rapidly advancing toward high processor core counts and TeraFLOPS peak SIMD execution rates, it is imperative that coding methods be identified which produce efficient and maintainable parallel code. This paper describes the algorithms used in point scatterer target model rendering, the parallelization of those algorithms, and the execution performance achieved on an AVX multi-core machine using the two basic parallel coding methods. The paper concludes with estimates for scale-up performance on upcoming multi-core technology.
Shared prefetching to reduce execution skew in multi-threaded systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eichenberger, Alexandre E; Gunnels, John A
Mechanisms are provided for optimizing code to perform prefetching of data into a shared memory of a computing device that is shared by a plurality of threads that execute on the computing device. A memory stream of a portion of code that is shared by the plurality of threads is identified. A set of prefetch instructions is distributed across the plurality of threads. Prefetch instructions are inserted into the instruction sequences of the plurality of threads such that each instruction sequence has a separate sub-portion of the set of prefetch instructions, thereby generating optimized code. Executable code is generated based on the optimized code and stored in a storage device. The executable code, when executed, performs the prefetches associated with the distributed set of prefetch instructions in a shared manner across the plurality of threads.
Quality Scalability Aware Watermarking for Visual Content.
Bhowmik, Deepayan; Abhayaratne, Charith
2016-11-01
Scalable coding-based content adaptation poses serious challenges to traditional watermarking algorithms, which do not consider the scalable coding structure and hence cannot guarantee correct watermark extraction in the media consumption chain. In this paper, we propose a novel concept of scalable blind watermarking that ensures more robust watermark extraction at various compression ratios while not affecting the visual quality of the host media. The proposed algorithm generates a scalable and robust watermarked image code-stream that allows the user to constrain embedding distortion for target content adaptations. The watermarked image code-stream consists of hierarchically nested joint distortion-robustness coding atoms. The code-stream is generated by proposing a new wavelet domain blind watermarking algorithm guided by a quantization based binary tree. The code-stream can be truncated at any distortion-robustness atom to generate the watermarked image with the desired distortion-robustness requirements. A blind extractor is capable of extracting watermark data from the watermarked images. The algorithm is further extended to incorporate a bit-plane discarding-based quantization model used in scalable coding-based content adaptation, e.g., JPEG2000. This improves the robustness against quality scalability of JPEG2000 compression. The simulation results verify the feasibility of the proposed concept, its applications, and its improved robustness against quality scalable content adaptation. Our proposed algorithm also outperforms existing methods, showing a 35% improvement. In terms of robustness to quality scalable video content adaptation using Motion JPEG2000 and wavelet-based scalable video coding, the proposed method shows major improvement for video watermarking.
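The quantization-guided embedding idea can be illustrated with plain quantisation index modulation on wavelet coefficients (a generic sketch using the PyWavelets package; the step size, wavelet and single-level structure are assumptions, and the paper's nested distortion-robustness atoms are not reproduced):

```python
# QIM blind watermarking on the approximation subband of a Haar DWT.
import numpy as np
import pywt

def embed(image, bits, delta=8.0):
    cA, details = pywt.dwt2(image.astype(float), "haar")
    flat = cA.ravel()                          # view into cA
    for i, b in enumerate(bits):               # snap to the even/odd lattice
        flat[i] = 2 * delta * np.round((flat[i] - b * delta) / (2 * delta)) + b * delta
    return pywt.idwt2((cA, details), "haar")

def extract(image, n_bits, delta=8.0):
    cA, _ = pywt.dwt2(image.astype(float), "haar")
    return [int(np.round(cA.ravel()[i] / delta)) % 2 for i in range(n_bits)]

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64)).astype(float)
bits = [1, 0, 1, 1, 0, 0, 1, 0]
assert extract(embed(image, bits), len(bits)) == bits
```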
Preliminary Analysis of the Transient Reactor Test Facility (TREAT) with PROTEUS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Connaway, H. M.; Lee, C. H.
The neutron transport code PROTEUS has been used to perform preliminary simulations of the Transient Reactor Test Facility (TREAT). TREAT is an experimental reactor designed for the testing of nuclear fuels and other materials under transient conditions. It operated from 1959 to 1994, when it was placed on non-operational standby. The restart of TREAT to support the U.S. Department of Energy's resumption of transient testing is currently underway. Both single assembly and assembly-homogenized full core models have been evaluated. Simulations were performed using a historic set of WIMS-ANL-generated cross-sections as well as a new set of Serpent-generated cross-sections. To support this work, further analyses were also performed using additional codes in order to investigate particular aspects of TREAT modeling. DIF3D and the Monte-Carlo codes MCNP and Serpent were utilized in these studies. MCNP and Serpent were used to evaluate the effect of geometry homogenization on the simulation results and to support code-to-code comparisons. New meshes for the PROTEUS simulations were created using the CUBIT toolkit, with additional meshes generated via conversion of selected DIF3D models to support code-to-code verifications. All current analyses have focused on code-to-code verifications, with additional verification and validation studies planned. The analysis of TREAT with PROTEUS-SN is an ongoing project. This report documents the studies that have been performed thus far, and highlights key challenges to address in future work.
The Italian experience on T/H best estimate codes: Achievements and perspectives
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alemberti, A.; D`Auria, F.; Fiorino, E.
1997-07-01
Thermalhydraulic system codes are complex tools developed to simulate power plant behavior during off-normal conditions. Among the objectives of the code calculations, the evaluation of safety margins, operator training, and the optimization of the plant design and of the emergency operating procedures are mostly considered in the field of nuclear safety. The first generation of codes was developed in the United States at the end of the '60s. Since that time, different research groups all over the world started the development of their own codes. At the beginning of the '80s, the second generation codes were proposed; these differ from the first generation codes owing to the number of balance equations solved (six instead of three), the sophistication of the constitutive models and of the adopted numerics. The capabilities of available computers have been fully exploited during the years. The authors then summarize some of the major steps in the process of developing, modifying, and advancing the capabilities of the codes. They touch on the fact that Italian, and for that matter non-American, researchers have not been intimately involved in much of this work. They then describe the application of these codes in Italy, even though there are no operating or under construction nuclear power plants at this time. Much of this effort is directed at the general question of plant safety in the face of transient type events.
Development of a Gas Dynamic and Thermodynamic Simulation Model of the Lontra Blade Compressor™
NASA Astrophysics Data System (ADS)
Karlovsky, Jerome
2015-08-01
The Lontra Blade Compressor™ is a patented double acting, internally compressing, positive displacement rotary compressor of innovative design. The Blade Compressor is in production for waste-water treatment, and will soon be launched for a range of applications at higher pressure ratios. In order to aid the design and development process, a thermodynamic and gas dynamic simulation program has been written in house. The software has been successfully used to optimise geometries and running conditions of current designs, and is also being used to evaluate future designs for different applications and markets. The simulation code has three main elements. A positive displacement chamber model, a leakage model and a gas dynamic model to simulate gas flow through ports and to track pressure waves in the inlet and outlet pipes. All three of these models are interlinked in order to track mass and energy flows within the system. A correlation study has been carried out to verify the software. The main correlation markers used were mass flow, chamber pressure, pressure wave tracking in the outlet pipe, and volumetric efficiency. It will be shown that excellent correlation has been achieved between measured and simulated data. Mass flow predictions were to within 2% of measured data, and the timings and magnitudes of all major gas dynamic effects were well replicated. The simulation will be further developed in the near future to help with the optimisation of exhaust and inlet silencers.
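As a flavour of what the chamber element of such a simulation solves at each step, the toy update below combines an isentropic volume change with an orifice-style leakage flow (illustrative physics only, with invented geometry and coefficients; this is not Lontra's in-house model):

```python
# Toy positive-displacement chamber step: leakage plus isentropic compression.
import numpy as np

R, GAMMA = 287.0, 1.4                  # air: gas constant, heat-capacity ratio

def chamber_step(m, T, V, dV, A_leak, p_out, dt, Cd=0.7):
    """Advance chamber mass, temperature and volume by one time step."""
    p = m * R * T / V
    dp = p - p_out
    # simplified incompressible orifice leakage, with rho = p / (R T)
    mdot = Cd * A_leak * np.sqrt(2.0 * dp * p / (R * T)) if dp > 0 else 0.0
    m -= mdot * dt
    T *= (V / (V + dV)) ** (GAMMA - 1.0)      # isentropic volume change
    return m, T, V + dV

m, T, V = 1e-4, 300.0, 1e-4                   # kg, K, m^3
for _ in range(100):                          # one compression stroke
    m, T, V = chamber_step(m, T, V, dV=-5e-7, A_leak=1e-6, p_out=1e5, dt=1e-4)
print(f"mass {m:.6f} kg, T {T:.1f} K, p {m * R * T / V / 1e5:.2f} bar")
```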
FY16 Status Report on NEAMS Neutronics Activities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, C. H.; Shemon, E. R.; Smith, M. A.
2016-09-30
The goal of the NEAMS neutronics effort is to develop a neutronics toolkit for use on sodium-cooled fast reactors (SFRs) which can be extended to other reactor types. The neutronics toolkit includes the high-fidelity deterministic neutron transport code PROTEUS and many supporting tools such as the cross section generation code MC2-3, a cross section library generation code, alternative cross section generation tools, mesh generation and conversion utilities, and an automated regression test tool. The FY16 effort for NEAMS neutronics focused on supporting the release of the SHARP toolkit and existing and new users, continuing to develop PROTEUS functions necessary for performance improvement as well as the SHARP release, verifying PROTEUS against available existing benchmark problems, and developing new benchmark problems as needed. The FY16 research effort was focused on further updates of PROTEUS-SN and PROTEUS-MOCEX and cross section generation capabilities as needed.
A methodology for the optimisation of a mm-wave scanner
NASA Astrophysics Data System (ADS)
Stec, L. Zoë; Podd, Frank J. W.; Peyton, Anthony J.
2016-10-01
The need to detect non-metallic items hidden under clothing, to prevent terrorism at transport hubs, is becoming vital. Millimetre wave technology penetrates clothing yet interacts with objects concealed underneath. This paper considers active illumination using multiple transmitter and receiver antennas. The positioning of these antennas must achieve full body coverage whilst minimising the number of antenna elements and the number of required measurements. It sets out a rapid simulation methodology, based on the Kirchhoff equations, to explore different scenarios for scanner architecture optimisation. The paper assumes that the electromagnetic waves used are at lower frequencies (say, 10-30 GHz), where body temperature does not need to be considered. This range allows better penetration of clothing than higher frequencies, yet still provides adequate resolution. Since passengers vary greatly in shape and size, the system needs to work well with a range of body morphologies. Thus we have used two very differently shaped avatars to test the portal simulations. The simulation tool allows many different avatars to be generated quickly. Findings from these simulations indicated that the dimensions of the avatar did indeed have an effect on the pattern of illumination, and that the data for each antenna pair can easily be combined to compare different antenna geometries for a given portal architecture, resulting in useful insights into antenna placement. The data generated could be analysed both quantitatively and qualitatively, at various levels of scale.
NASA Astrophysics Data System (ADS)
Seiller, G.; Anctil, F.; Roy, R.
2017-09-01
This paper outlines the design and experimentation of an Empirical Multistructure Framework (EMF) for lumped conceptual hydrological modeling. This concept is inspired by modular frameworks, empirical model development, and multimodel applications, and encompasses the overproduce-and-select paradigm. The EMF concept aims to reduce subjectivity in conceptual hydrological modeling practice and includes model selection in the optimisation steps, reducing initial assumptions on the prior perception of the dominant rainfall-runoff transformation processes. EMF generates thousands of new modeling options from, for now, twelve parent models that share their functional components and parameters. Optimisation resorts to ensemble calibration, ranking and selection of individual child time series based on optimal bias and reliability trade-offs, as well as accuracy and sharpness improvement of the ensemble. Results on 37 snow-dominated Canadian catchments and 20 climatically diverse American catchments reveal the excellent potential of the EMF in generating new individual model alternatives, with high respective performance values, that may be pooled efficiently into ensembles of seven to sixty constitutive members, with low bias and high accuracy, sharpness, and reliability. A group of 1446 new models is highlighted to offer good potential on other catchments or applications, based on their individual and collective interests. An analysis of the preferred functional components reveals the importance of the production and total flow elements. Overall, results from this research confirm the added value of ensemble and flexible approaches for hydrological applications, especially in uncertain contexts, and open up new modeling possibilities.
High density plasmas and new diagnostics: An overview (invited).
Celona, L; Gammino, S; Mascali, D
2016-02-01
One of the limiting factors for the full understanding of the fundamental mechanisms of Electron Cyclotron Resonance Ion Sources (ECRISs) is the small number of diagnostic tools so far available for such compact machines. Microwave-to-plasma coupling optimisation, new methods of density overboost provided by plasma wave generation, and magnetostatic field tailoring for generating a proper electron energy distribution function, suitable for optimal ion beam formation, require diagnostic tools spanning the entire electromagnetic spectrum, from microwave interferometry to X-ray spectroscopy; these methods are going to be implemented, including high-resolution and spatially resolved X-ray spectroscopy performed by quasi-optical methods (pin-hole cameras). The optimisation of ion confinement also requires complete control of the displacement of cold electrons, which can be monitored by optical emission spectroscopy. Several diagnostic tools have been recently developed at INFN-LNS, including "volume-integrated" X-ray spectroscopy in the low-energy domain (2-30 keV, using silicon drift detectors) and in the high-energy regime (>30 keV, using high-purity germanium detectors). For the direct detection of the spatially resolved spectral distribution of the X-rays produced by the electronic motion, a "pin-hole camera" has been developed, also benefiting from previous experience in the ECRIS field. The paper will give an overview of the INFN-LNS strategy in terms of new microwave-to-plasma coupling schemes and advanced diagnostics supporting the design of new ion sources and the optimisation of the performance of existing ones, with the goal of a microwave-absorption oriented design of future machines.
Application of Mössbauer spectroscopy on corrosion products of NPP
NASA Astrophysics Data System (ADS)
Dekan, J.; Lipka, J.; Slugeň, V.
2013-04-01
The steam generator (SG) is generally one of the most important components at all nuclear power plants (NPPs), with a close impact on safe and long-term operation. Material degradation and corrosion/erosion processes are serious risks for long-term reliable operation. The steam generators of the four VVER-440 units at the V-1 and V-2 nuclear power plants in Jaslovske Bohunice (Slovakia) were gradually replaced with a new original "Bohunice" design in the period 1994-1998, in order to improve the corrosion resistance of the SGs. Corrosion processes before and after these design and material changes in the Bohunice secondary circuit have been studied using Mössbauer spectroscopy over the last 25 years. Innovations in the feed-water pipeline design as well as material composition improvements were evaluated positively. Mössbauer spectroscopy studies of the phase composition of corrosion products were performed on real specimens scraped from water pipelines or in the form of filter deposits. The newest results of our long-term corrosion study confirm good operational experience and suitable chemical regimes (reducing environment), which result mostly in the creation of magnetite (at a level of 70% or higher) and small portions of hematite, goethite or hydroxides. Regular observation of corrosion/erosion processes is essential for keeping NPP operation at a high safety level. The output of the performed material analyses influences the optimisation of operating chemical regimes, and it can be used in the optimisation of regimes for decontamination and passivation of pipelines or secondary circuit components. It can be concluded that a longer passivation time leads to a higher magnetite fraction in the corrosion product composition.
The World in a Tomato: Revisiting the Use of "Codes" in Freire's Problem-Posing Education.
ERIC Educational Resources Information Center
Barndt, Deborah
1998-01-01
Gives examples of the use of Freire's notion of codes or generative themes in problem-posing literacy education. Describes how these applications expand Freire's conceptions by involving students in code production, including multicultural perspectives, and rethinking codes as representations. (SK)
NASA Astrophysics Data System (ADS)
Hill, Ian; White, Toby; Owen, Sarah
2014-05-01
Extraction and processing of rock materials to produce aggregates is carried out at some 20,000 quarries across the EU. All stages of the processing and transport of hard and dense materials inevitably consume high levels of energy and have consequent significant carbon footprints. The FP7 project "the Energy Efficient Quarry" (EE-Quarry) has been addressing this problem and has devised strategies, supported by modelling software, to assist the quarrying industry to assess and optimise its energy use, and to minimise its carbon footprint. Aggregate quarries across Europe vary enormously in the scale of the quarrying operations, the nature of the worked mineral, and the processing to produce a final market product. Nevertheless, most quarries involve most or all of a series of essential stages: deposit assessment, drilling and blasting, loading and hauling, and crushing and screening. The process of determining the energy-efficiency of each stage is complex, but is broadly understood in principle, and there are numerous sources of information and guidance available in the literature and on-line. More complex still is the interaction between each of these stages. For example, using a little more energy in blasting to increase fragmentation may save much greater energy in later crushing and screening, but also generate more fines material which is discarded as waste, so that the embedded energy in this material is lost. Thus the calculation of the embedded energy in the waste material becomes an input to the determination of the blasting strategy. Such feedback loops abound in the overall quarry optimisation. The project has involved research and demonstration operations at a number of quarries distributed across Europe carried out by all partners in the EE-Quarry project, working in collaboration with many of the major quarrying companies operating in the EU. The EE-Quarry project is developing a sophisticated modelling tool, the "EE-Quarry Model", available to the quarrying industry on a web-based platform. This tool guides quarry managers and operators through the complex, multi-layered, iterative process of assessing the energy efficiency of their own quarry operation. They are able to evaluate the optimisation of the energy-efficiency of the overall quarry through examining both the individual stages of processing and the interactions between them. The project is also developing on-line distance learning modules designed for Continuous Professional Development (CPD) activities for staff across the quarrying industry in the EU and beyond. The presentation will describe the development of the model, and the format and scope of the resulting software tool and its user-support available to the quarrying industry.
DOT National Transportation Integrated Search
2006-07-01
This report describes the development of a new coding scheme to classify potentially distracting secondary tasks performed while driving, such as eating and using a cell phone. Compared with prior schemes (Stutts et al., first-generation UMTRI scheme...
Low-cost microwave plasma sources for science and industry applications
NASA Astrophysics Data System (ADS)
Tikhonov, V. N.; Aleshin, S. N.; Ivanov, I. A.; Tikhonov, A. V.
2017-11-01
Microwave plasma torches on the world market are built according to a scheme that can be called classical: power supply - magnetron head - microwave isolator with water load - reflected power meter - matching device - the plasma torch itself - sliding short circuit. The total cost of the devices in this list, with a 3 kW microwave generator, in the product line of, for example, SAIREM (France), is about 17,000 €. We have changed the classical scheme of the microwave plasma torch and optimised the design of the waveguide channel. As a result, we can supply simple and reliable sources of microwave plasma (complete with our low-budget microwave generator of up to 3 kW and a simple atmospheric-pressure plasma torch) at a price from 3,000 €.
NASA Astrophysics Data System (ADS)
Alipchenkov, V. M.; Anfimov, A. M.; Afremov, D. A.; Gorbunov, V. S.; Zeigarnik, Yu. A.; Kudryavtsev, A. V.; Osipov, S. L.; Mosunova, N. A.; Strizhov, V. F.; Usov, E. V.
2016-02-01
The conceptual fundamentals of the development of the new-generation system thermal-hydraulic computational code HYDRA-IBRAE/LM are presented. The code is intended to simulate the thermal-hydraulic processes that take place in the loops and the heat-exchange equipment of liquid-metal-cooled fast reactor systems under normal operation and anticipated operational occurrences and during accidents. The paper provides a brief overview of Russian and foreign system thermal-hydraulic codes for modeling liquid-metal coolants and justifies the need for the development of the new-generation HYDRA-IBRAE/LM code. Considering the specific engineering features of the nuclear power plants (NPPs) equipped with the BN-1200 and the BREST-OD-300 reactors, the processes and phenomena are singled out that require detailed analysis and the development of models in order to be correctly described by the system thermal-hydraulic code in question. Information on the functionality of the computational code is provided, viz., the thermal-hydraulic two-phase model, the properties of the sodium and the lead coolants, the closing equations for simulation of the heat-mass exchange processes, the models describing the processes that take place during a steam-generator tube rupture, etc. The article gives a brief overview of the usability of the computational code, including a description of the support documentation and the supply package, as well as the possibilities of taking advantage of modern computer technologies, such as parallel computations. The paper shows the current state of verification and validation of the computational code; it also presents information on the principles of constructing and populating the verification matrices for the BREST-OD-300 and the BN-1200 reactor systems. The prospects are outlined for further development of the HYDRA-IBRAE/LM code, the introduction of new models into it, and the enhancement of its usability. It is shown that the program of development and practical application of the code will make it possible in the near future to carry out computations to analyze the safety of potential NPP projects at a qualitatively higher level.
Recent Developments in Grid Generation and Force Integration Technology for Overset Grids
NASA Technical Reports Server (NTRS)
Chan, William M.; VanDalsem, William R. (Technical Monitor)
1994-01-01
Recent developments in algorithms and software tools for generating overset grids for complex configurations are described. These include the overset surface grid generation code SURGRD and version 2.0 of the hyperbolic volume grid generation code HYPGEN. The SURGRD code is in beta test mode; its new features include the capability to march over a collection of panel networks, a variety of ways to control the side boundaries and the marching step sizes and distance, a more robust projection scheme, and an interpolation option. New features in version 2.0 of HYPGEN include a wider range of boundary condition types. The code also allows the user to specify different marching step sizes and distances for each point on the surface grid. A scheme that takes into account the overlapped zones on the body surface for the purpose of computing forces and moments is also briefly described. The process involves the following two software modules: MIXSUR, a composite grid generation module that produces a collection of quadrilaterals and triangles on which pressure and viscous stresses are to be integrated, and OVERINT, a forces and moments integration module.
Lu, Jia-Yang; Cheung, Michael Lok-Man; Huang, Bao-Tian; Wu, Li-Li; Xie, Wen-Jia; Chen, Zhi-Jian; Li, De-Rui; Xie, Liang-Xi
2015-01-01
To assess the performance of a simple optimisation method for improving target coverage and organ-at-risk (OAR) sparing in intensity-modulated radiotherapy (IMRT) for cervical oesophageal cancer. For 20 selected patients, clinically acceptable original IMRT plans (Original plans) were created, and two optimisation methods were adopted to improve the plans: 1) a base dose function (BDF)-based method, in which the treatment plans were re-optimised based on the original plans, and 2) a dose-controlling structure (DCS)-based method, in which the original plans were re-optimised by assigning additional constraints for hot and cold spots. The Original, BDF-based and DCS-based plans were compared with regard to target dose homogeneity, conformity, OAR sparing, planning time and monitor units (MUs). Dosimetric verifications were performed and delivery times were recorded for the BDF-based and DCS-based plans. The BDF-based plans provided significantly superior dose homogeneity and conformity compared with both the DCS-based and Original plans. The BDF-based method further reduced the doses delivered to the OARs by approximately 1-3%. The re-optimisation time was reduced by approximately 28%, but the MUs and delivery time were slightly increased. All verification tests were passed and no significant differences were found. The BDF-based method for the optimisation of IMRT for cervical oesophageal cancer can achieve significantly better dose distributions with better planning efficiency at the expense of slightly more MUs.
Dual Coding, Reasoning and Fallacies.
ERIC Educational Resources Information Center
Hample, Dale
1982-01-01
Develops the theory that a fallacy is not a comparison of a rhetorical text to a set of definitions but a comparison of one person's cognition with another's. Reviews Paivio's dual coding theory, relates nonverbal coding to reasoning processes, and generates a limited fallacy theory based on dual coding theory. (PD)
Coding Issues in Grounded Theory
ERIC Educational Resources Information Center
Moghaddam, Alireza
2006-01-01
This paper discusses grounded theory as one of the qualitative research designs. It describes how grounded theory generates from data. Three phases of grounded theory--open coding, axial coding, and selective coding--are discussed, along with some of the issues which are the source of debate among grounded theorists, especially between its…
Method for rapid high-frequency seismogram calculation
NASA Astrophysics Data System (ADS)
Stabile, Tony Alfredo; De Matteis, Raffaella; Zollo, Aldo
2009-02-01
We present a method for rapid, high-frequency seismogram calculation that makes use of an algorithm to automatically generate an exhaustive set of seismic phases with an appreciable amplitude on the seismogram. The method uses a hierarchical order of ray and seismic-phase generation, taking into account existing constraints on ray paths as well as physical constraints. To compute synthetic seismograms, the COMRAD code (from the Italian "COdice Multifase per il RAy-tracing Dinamico") uses a dynamic ray-tracing code as its core. To validate the code, we have computed synthetic seismograms in a layered medium using both COMRAD and a code that computes the complete wave field by the discrete wave number method. The seismograms are compared according to a time-frequency misfit criterion based on the continuous wavelet transform of the signals. Although the number of phases is considerably reduced by the selection criteria, the results show that the loss in amplitude over the whole seismogram is negligible. Moreover, the computation time for the synthetics using the COMRAD code (truncating the ray series at the 10th generation) is 3-4-fold less than that needed by the AXITRA code (up to a frequency of 25 Hz).
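The comparison step described above can be sketched compactly. The paper uses CWT-based time-frequency misfit criteria; the snippet below substitutes a simpler Hilbert-transform envelope misfit to illustrate the same idea of a normalised, pointwise misfit between a reduced-phase synthetic and a reference wave field. Function names and test signals are illustrative, not taken from COMRAD or AXITRA.

```python
# A minimal envelope-misfit sketch standing in for the CWT-based criterion.
import numpy as np
from scipy.signal import hilbert

def envelope_misfit(s_test, s_ref):
    """Pointwise envelope misfit normalised by the reference maximum."""
    e_test = np.abs(hilbert(s_test))
    e_ref = np.abs(hilbert(s_ref))
    return (e_test - e_ref) / e_ref.max()

t = np.linspace(0.0, 1.0, 512)
ref = np.sin(2 * np.pi * 10 * t) * np.exp(-3 * t)   # reference seismogram
test = 0.95 * ref                                    # e.g. synthetic missing weak phases
print("max |misfit| = %.3f" % np.abs(envelope_misfit(test, ref)).max())
```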
NASA Astrophysics Data System (ADS)
Sobolev, V.; Uyttenhove, W.; Thetford, R.; Maschek, W.
2011-07-01
The neutronic and thermomechanical performances of two composite fuel systems, CERCER with (Pu,Np,Am,Cm)O2-x fuel particles in a ceramic MgO matrix and CERMET with a metallic Mo matrix, selected for the transmutation of minor actinides in the European Facility for Industrial Transmutation (EFIT), were analysed with a view to their optimisation. The ALEPH burnup code system, based on the MCNPX and ORIGEN codes and the JEFF3.1 nuclear data library, and the modern version of the fuel rod performance code TRAFIC were used for this analysis. Because experimental data on the properties of the mixed minor-actinide oxides are scarce, and the in-reactor behaviour of the T91 steel chosen as cladding, as well as of the corrosion-protective layer, is still not well known, a set of "best estimates" provided the properties used in the codes. The obtained results indicate that both fuel candidates, CERCER and CERMET, can satisfy the fuel design and safety criteria of EFIT. The residence time for both types of fuel elements can reach about 5 years with a reactivity swing within ±1000 pcm, and about 22% of the loaded MA is transmuted during this period. However, the fuel centreline temperature in the hottest CERCER fuel rod is close to the temperature above which the MgO matrix becomes chemically unstable. Moreover, weak PCMI can appear after about 3 years of operation. The CERMET fuel can provide larger safety margins: the fuel temperature is more than 1000 K below the permitted level of 2380 K and the pellet-cladding gap remains open until the end of operation.
New PAH gene promoter KLF1 and 3'-region C/EBPalpha motifs influence transcription in vitro.
Klaassen, Kristel; Stankovic, Biljana; Kotur, Nikola; Djordjevic, Maja; Zukic, Branka; Nikcevic, Gordana; Ugrin, Milena; Spasovski, Vesna; Srzentic, Sanja; Pavlovic, Sonja; Stojiljkovic, Maja
2017-02-01
Phenylketonuria (PKU) is a metabolic disease caused by mutations in the phenylalanine hydroxylase (PAH) gene. Although the PAH genotype remains the main determinant of PKU phenotype severity, genotype-phenotype inconsistencies have been reported. In this study, we focused on unanalysed sequences in non-coding PAH gene regions to assess their possible influence on the PKU phenotype. We transiently transfected HepG2 cells with various chloramphenicol acetyl transferase (CAT) reporter constructs which included PAH gene non-coding regions. Selected non-coding regions were indicated by in silico prediction to contain transcription factor binding sites. Furthermore, electrophoretic mobility shift assay (EMSA) and supershift assays were performed to identify which transcriptional factors were engaged in the interaction. We found novel KLF1 motif in the PAH promoter, which decreases CAT activity by 50 % in comparison to basal transcription in vitro. The cytosine at the c.-170 promoter position creates an additional binding site for the protein complex involving KLF1 transcription factor. Moreover, we assessed for the first time the role of a multivariant variable number tandem repeat (VNTR) region located in the 3'-region of the PAH gene. We found that the VNTR3, VNTR7 and VNTR8 constructs had approximately 60 % of CAT activity. The regulation is mediated by the C/EBPalpha transcription factor, present in protein complex binding to VNTR3. Our study highlighted two novel promoter KLF1 and 3'-region C/EBPalpha motifs in the PAH gene which decrease transcription in vitro and, thus, could be considered as PAH expression modifiers. New transcription motifs in non-coding regions will contribute to better understanding of the PKU phenotype complexity and may become important for the optimisation of PKU treatment.
Gómez-Romano, Fernando; Villanueva, Beatriz; Fernández, Jesús; Woolliams, John A; Pong-Wong, Ricardo
2016-01-13
Optimal contribution methods have proved to be very efficient for controlling the rates at which coancestry and inbreeding increase and therefore, for maintaining genetic diversity. These methods have usually relied on pedigree information for estimating genetic relationships between animals. However, with the large amount of genomic information now available such as high-density single nucleotide polymorphism (SNP) chips that contain thousands of SNPs, it becomes possible to calculate more accurate estimates of relationships and to target specific regions in the genome where there is a particular interest in maximising genetic diversity. The objective of this study was to investigate the effectiveness of using genomic coancestry matrices for: (1) minimising the loss of genetic variability at specific genomic regions while restricting the overall loss in the rest of the genome; or (2) maximising the overall genetic diversity while restricting the loss of diversity at specific genomic regions. Our study shows that the use of genomic coancestry was very successful at minimising the loss of diversity and outperformed the use of pedigree-based coancestry (genetic diversity even increased in some scenarios). The results also show that genomic information allows a targeted optimisation to maintain diversity at specific genomic regions, whether they are linked or not. The level of variability maintained increased when the targeted regions were closely linked. However, such targeted management leads to an important loss of diversity in the rest of the genome and, thus, it is necessary to take further actions to constrain this loss. Optimal contribution methods also proved to be effective at restricting the loss of diversity in the rest of the genome, although the resulting rate of coancestry was higher than the constraint imposed. The use of genomic matrices when optimising contributions permits the control of genetic diversity and inbreeding at specific regions of the genome through the minimisation of partial genomic coancestry matrices. The formula used to predict coancestry in the next generation produces biased results and therefore it is necessary to refine the theory of genetic contributions when genomic matrices are used to optimise contributions.
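The two computational ingredients this abstract relies on, a genomic relationship (coancestry) matrix and contributions that minimise average coancestry, can be sketched in a few lines. The following assumes VanRaden's first G matrix and the closed-form minimiser of c'Gc/2 subject only to the contributions summing to one; the study's actual optimal-contribution machinery additionally handles gain constraints, non-negativity, and partial (region-specific) coancestry matrices, which are omitted here.

```python
import numpy as np

def vanraden_G(M, p):
    """VanRaden's first genomic relationship matrix.
    M: (n_animals, n_snps) genotypes coded 0/1/2; p: allele frequencies."""
    Z = M - 2 * p                         # centre by twice the allele frequency
    return Z @ Z.T / (2 * np.sum(p * (1 - p)))

def min_coancestry_contributions(G):
    """Contributions minimising c' G c / 2 subject to sum(c) = 1
    (sign-unconstrained closed form: c proportional to G^-1 * 1)."""
    c = np.linalg.solve(G, np.ones(G.shape[0]))
    return c / c.sum()

rng = np.random.default_rng(0)
p = rng.uniform(0.1, 0.9, size=500)
M = rng.binomial(2, p, size=(20, 500)).astype(float)
G = vanraden_G(M, p)
c = min_coancestry_contributions(G + 0.01 * np.eye(20))  # small ridge for stability
print("mean coancestry achieved:", c @ G @ c / 2)
```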
Algorithme intelligent d'optimisation d'un design structurel de grande envergure [Intelligent optimisation algorithm for a large-scale structural design]
NASA Astrophysics Data System (ADS)
Dominique, Stephane
The implementation of an automated decision support system in the field of design and structural optimisation can give a significant advantage to any industry working on mechanical designs. Indeed, by providing solution ideas to a designer, or by upgrading existing design solutions while the designer is not at work, the system may reduce the project cycle time or allow more time to produce a better design. This thesis presents a new approach to automating a design process based on Case-Based Reasoning (CBR), in combination with a new genetic algorithm named Genetic Algorithm with Territorial core Evolution (GATE). This approach was developed in order to reduce the operating cost of the process. However, as the system implementation cost is quite high, the approach is better suited to large-scale design problems, and particularly to design problems that the designer plans to solve for many different specification sets. First, the CBR process uses a databank filled with every known solution to similar design problems. Then, the closest solutions to the current problem in terms of specifications are selected. After this, during the adaptation phase, an artificial neural network (ANN) interpolates amongst known solutions to produce an additional solution to the current problem, using the current specifications as inputs. Each solution produced and selected by the CBR is then used to initialise the population of an island of the genetic algorithm. The algorithm optimises the solution further during the refinement phase. Using progressive refinement, the algorithm starts with only the most important variables for the problem. Then, as the optimisation progresses, the remaining variables are gradually introduced, layer by layer. The genetic algorithm used is a new algorithm specifically created during this thesis to solve optimisation problems in the field of mechanical device structural design. The algorithm, named GATE, is essentially a real-number genetic algorithm that prevents new individuals from being born too close to previously evaluated solutions. The restricted area becomes smaller or larger during the optimisation to allow global or local search when necessary. Also, a new search operator named the Substitution Operator is incorporated in GATE. This operator allows an ANN surrogate model to guide the algorithm toward the most promising areas of the design space. The suggested CBR approach and GATE were tested on several simple test problems, as well as on the industrial problem of designing a gas turbine engine rotor disc. These results are compared to results obtained for the same problems by many other popular optimisation algorithms, such as (depending on the problem) gradient algorithms, a binary genetic algorithm, a real-number genetic algorithm, a genetic algorithm using multiple-parent crossovers, a differential evolution genetic algorithm, the Hooke & Jeeves generalised pattern search method, and POINTER from the software I-SIGHT 3.5. Results show that GATE is quite competitive, giving the best results for 5 of the 6 constrained optimisation problems. GATE also provided the best results of all on problems produced by a Maximum Set Gaussian landscape generator. Finally, GATE provided a disc 4.3% lighter than the best other tested algorithm (POINTER) for the gas turbine engine rotor disc problem. One drawback of GATE is its lower efficiency on highly multimodal unconstrained problems, for which it gave quite poor results with respect to its implementation cost.
To conclude, according to the preliminary results obtained during this thesis, the suggested CBR process, combined with GATE, seems to be a very good candidate to automate and accelerate the structural design of mechanical devices, potentially reducing significantly the cost of industrial preliminary design processes.
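GATE's central idea, a "territory" around every evaluated solution inside which offspring may not be born, with a radius that adapts over the run, can be sketched as follows. All parameter values, operators and helper names here are illustrative simplifications, not the thesis's actual implementation (no islands, no Substitution Operator, no surrogate model).

```python
import numpy as np

def gate_like_ga(f, bounds, pop=20, gens=100, radius=0.05, seed=0):
    """Toy real-coded GA with a GATE-like territory: candidate offspring
    within `radius` (normalised) of any previously evaluated point are
    rejected and resampled; the radius shrinks to shift to local search."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    span = hi - lo
    X = lo + rng.random((pop, len(lo))) * span
    archive = [x.copy() for x in X]                  # all evaluated points
    fit = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            for _attempt in range(50):
                a, b = X[rng.choice(pop, 2, replace=False)]
                child = a + rng.random() * (b - a)               # blend crossover
                child += 0.1 * span * rng.standard_normal(len(lo))  # mutation
                child = np.clip(child, lo, hi)
                d = np.min(np.linalg.norm((np.array(archive) - child) / span, axis=1))
                if d > radius:                        # outside every territory
                    break
            fc = f(child)
            archive.append(child.copy())
            if fc < fit[i]:
                X[i], fit[i] = child, fc
        radius *= 0.97                                # gradually allow local search
    return X[fit.argmin()], fit.min()

best_x, best_f = gate_like_ga(lambda x: np.sum(x**2), [(-5, 5)] * 3)
print(best_f)
```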
NASA Astrophysics Data System (ADS)
Parasyris, Antonios E.; Spanoudaki, Katerina; Kampanis, Nikolaos A.
2016-04-01
Groundwater level monitoring networks provide essential information for water resources management, especially in areas with significant groundwater exploitation for agricultural and domestic use. Given the high maintenance costs of these networks, the development of tools which regulators can use for efficient network design is essential. In this work, a monitoring network optimisation tool is presented. The tool couples geostatistical modelling based on the Spartan family variogram with a genetic algorithm method and is applied to the Mires basin in Crete, Greece, an area of high socioeconomic and agricultural interest which suffers from groundwater overexploitation leading to a dramatic decrease of groundwater levels. The purpose of the optimisation tool is to determine which wells to exclude from the monitoring network because they add little or no beneficial information to groundwater level mapping of the area. Unlike previous relevant investigations, the network optimisation tool presented here uses Ordinary Kriging with the recently established non-differentiable Spartan variogram for groundwater level mapping which, based on a previous geostatistical study in the area, leads to optimal groundwater level mapping. Seventy boreholes operate in the area for groundwater abstraction and water level monitoring. The Spartan variogram gives overall the most accurate groundwater level estimates, followed closely by the power-law model. The geostatistical model is coupled to an integer genetic algorithm programmed in MATLAB 2015a. The algorithm is used to find the set of wells whose removal leads to the minimum error between the original water level mapping using all the available wells in the network and the mapping using the reduced well network (the error is defined as the 2-norm of the difference between the original mapping matrix with 70 wells and the mapping matrix of the reduced well network). The solution to the optimisation problem (the best wells to retain in the monitoring network) depends on the total number of wells removed; this number is a management decision. The water level monitoring network of the Mires basin has been optimised 6 times, removing 5, 8, 12, 15, 20 and 25 wells from the original network. In order to achieve the optimum solution in the minimum possible computational time, a stall-generations criterion was set for each optimisation scenario. An improvement made to the classic genetic algorithm was to vary the mutation and crossover fractions with the change of the mean fitness value. This introduces randomness into reproduction when the solution converges, to avoid local minima, or more educated reproduction (a higher crossover ratio) when there is a larger change in the mean fitness value. The choice of the integer genetic algorithm in MATLAB 2015a imposes the restriction that custom selection and crossover-mutation functions must be added. Therefore, custom population and crossover-mutation-selection functions have been created to set the initial population type to custom and to allow the mutation and crossover probabilities to change with the convergence of the genetic algorithm, thus achieving higher accuracy. The application of the network optimisation tool to the Mires basin indicates that 25 wells can be removed with a relatively small deterioration of the groundwater level map.
The results indicate the robustness of the network optimisation tool: wells were removed from high well-density areas while preserving the spatial pattern of the original groundwater level map. Reference: Varouchakis, E. A. and D. T. Hristopulos (2013). "Improvement of groundwater level prediction in sparsely gauged basins using physical laws and local geographic features as auxiliary variables." Advances in Water Resources 52: 34-49.
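The well-removal objective above (2-norm between the full-network map and the reduced-network map) is easy to sketch. The snippet below substitutes inverse-distance weighting for the paper's Spartan-variogram Ordinary Kriging, and exhaustive enumeration for the genetic algorithm, which is only feasible because the toy network is tiny; all data are synthetic.

```python
import numpy as np
from itertools import combinations

def idw_map(wells_xy, levels, grid_xy, power=2.0):
    """Inverse-distance-weighted groundwater level map (a simple stand-in
    for the paper's Spartan-variogram Ordinary Kriging)."""
    d = np.linalg.norm(grid_xy[:, None, :] - wells_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return (w * levels).sum(axis=1) / w.sum(axis=1)

def best_wells_to_remove(wells_xy, levels, grid_xy, n_remove):
    """Exhaustively find the n_remove wells whose removal least changes the map."""
    full = idw_map(wells_xy, levels, grid_xy)
    best = (np.inf, None)
    for drop in combinations(range(len(levels)), n_remove):
        keep = np.setdiff1d(np.arange(len(levels)), drop)
        err = np.linalg.norm(full - idw_map(wells_xy[keep], levels[keep], grid_xy))
        best = min(best, (err, drop))
    return best

rng = np.random.default_rng(1)
xy = rng.random((15, 2)) * 10                        # 15 toy wells
z = 50 - 2 * xy[:, 0] + rng.normal(0, 0.5, 15)       # sloping water table + noise
grid = np.stack(np.meshgrid(np.linspace(0, 10, 20),
                            np.linspace(0, 10, 20)), -1).reshape(-1, 2)
err, drop = best_wells_to_remove(xy, z, grid, n_remove=2)
print("drop wells", drop, "map error %.3f" % err)
```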
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Blas, Alfredo; Tapia, Carlos; Riego, Albert
pGamma is a code developed by the NERG group of the Technical University of Catalonia - Barcelona Tech for the analysis of gamma spectra generated by the Equipment for the Continuous Measurement and Identification of Gamma Radioactivity on Aerosols with Paper Filter, developed by our group and the Raditel Services company. The code is currently being adapted for the monitors of the Environmental Radiological Surveillance Network of the Local Government of Catalonia (Generalitat of Catalonia), Spain. The code is a Spectrum Analysis System: it identifies the gamma emitters in the spectrum, determines their activity concentrations, generates alarms depending on the activity of the emitters, and generates a report. The Spectrum Analysis System includes a library of emitters of interest, both NORM and artificial. The code is being used at the three stations of the Network equipped with the aerosol monitor (Asco and Vandellos, near the two nuclear power plants, and Barcelona). (authors)
Engqvist, Martin K M; Nielsen, Jens
2015-08-21
The Ambiguous Nucleotide Tool (ANT) is a desktop application that generates and evaluates degenerate codons. Degenerate codons are used to represent DNA positions that have multiple possible nucleotide alternatives. This is useful for protein engineering and directed evolution, where primers specified with degenerate codons are used as a basis for generating libraries of protein sequences. ANT is intuitive and can be used in a graphical user interface or by interacting with the code through a defined application programming interface. ANT comes with full support for nonstandard, user-defined, or expanded genetic codes (translation tables), which is important because synthetic biology is being applied to an ever widening range of natural and engineered organisms. The Python source code for ANT is freely distributed so that it may be used without restriction, modified, and incorporated in other software or custom data pipelines.
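The core operation a degenerate-codon tool must perform can be shown in a few lines: expanding an IUPAC-degenerate codon into the concrete codons it represents and the amino acids they encode. This sketch is only the concept, not ANT's actual API; the genetic-code table is deliberately truncated to what the example needs.

```python
# Expand an IUPAC degenerate codon into its concrete codons and translations.
from itertools import product

IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "AG", "Y": "CT", "S": "GC", "W": "AT", "K": "GT", "M": "AC",
         "B": "CGT", "D": "AGT", "H": "ACT", "V": "ACG", "N": "ACGT"}

# Minimal slice of the standard genetic code, enough for the example below.
CODE = {"AAA": "K", "AAG": "K", "AAC": "N", "AAT": "N",
        "GAA": "E", "GAG": "E", "GAC": "D", "GAT": "D"}

def expand(degenerate_codon):
    """All concrete codons represented by a degenerate codon such as 'RAM'."""
    return ["".join(c) for c in product(*(IUPAC[b] for b in degenerate_codon))]

codons = expand("RAM")               # R = A/G, M = A/C -> 4 codons
print(codons)                        # ['AAA', 'AAC', 'GAA', 'GAC']
print({c: CODE[c] for c in codons})  # encodes K, N, E, D
```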
ASME Code Efforts Supporting HTGRs
DOE Office of Scientific and Technical Information (OSTI.GOV)
D.K. Morton
2010-09-01
In 1999, an international collaborative initiative for the development of advanced (Generation IV) reactors was started. The idea behind this effort was to bring nuclear energy closer to the needs of sustainability, to increase proliferation resistance, and to support concepts able to produce energy (both electricity and process heat) at competitive costs. The U.S. Department of Energy has supported this effort by pursuing the development of the Next Generation Nuclear Plant, a high temperature gas-cooled reactor. This support has included research and development of pertinent data, initial regulatory discussions, and engineering support of various codes and standards development. This report discusses the various applicable American Society of Mechanical Engineers (ASME) codes and standards that are being developed to support these high temperature gas-cooled reactors during construction and operation. ASME is aggressively pursuing these codes and standards to support an international effort to build the next generation of advanced reactors so that all can benefit.
An overview of new video coding tools under consideration for VP10: the successor to VP9
NASA Astrophysics Data System (ADS)
Mukherjee, Debargha; Su, Hui; Bankoski, James; Converse, Alex; Han, Jingning; Liu, Zoe; Xu, Yaowu
2015-09-01
Google started an open-source project, entitled the WebM Project, in 2010 to develop royalty-free video codecs for the web. The present-generation codec developed in the WebM project, called VP9, was finalized in mid-2013 and is currently being served extensively by YouTube, resulting in billions of views per day. Even though adoption of VP9 outside Google is still in its infancy, the WebM project has already embarked on an ambitious project to develop a next-edition codec, VP10, that achieves at least a generational bitrate reduction over the current-generation codec VP9. Although the project is still in its early stages, a set of new experimental coding tools have already been added to baseline VP9 to achieve modest coding gains over a large enough test set. This paper provides a technical overview of these coding tools.
NASA Astrophysics Data System (ADS)
Mariani, A.; Passard, C.; Jallu, F.; Toubon, H.
2003-11-01
The design of a specific nuclear assay system for a dedicated application begins with a development phase, which relies on information from the literature or on knowledge resulting from experience, and on specific experimental verifications. The latter may require experimental devices that are constraining in terms of deadlines, cost and safety. One way generally chosen to bypass these difficulties is to use simulation codes to study particular aspects. This paper deals with the potential offered by simulation in the case of a passive-active neutron (PAN) assay system for alpha low-level waste characterization; this system has been developed at the Nuclear Measurements Development Laboratory of the French Atomic Energy Commission. Owing to the high number of parameters to be taken into account in its development, this is a particularly sophisticated example. Since the PAN assay system, called PROMETHEE (prompt epithermal and thermal interrogation experiment), must have a detection efficiency of more than 20% and preserve a high level of modularity for various applications, an improved version has been studied using the MCNP4 (Monte Carlo N-Particle) transport code. Parameters such as the dimensions of the assay system, of the cavity and of the detection blocks, and the thicknesses of the nuclear materials of neutronic interest have been optimised. The number of necessary experiments was thereby reduced.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Femec, D.A.
This report describes two code-generating tools used to speed design and implementation of relational databases and user interfaces: CREATE-SCHEMA and BUILD-SCREEN. CREATE-SCHEMA produces the SQL commands that actually create and define the database. BUILD-SCREEN takes templates for data entry screens and generates the screen management system routine calls to display the desired screen. Both tools also generate the related FORTRAN declaration statements and precompiled SQL calls. Included with this report is the source code for a number of FORTRAN routines and functions used by the user interface. This code is broadly applicable to a number of different databases.
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1994-01-01
Brief summaries of research in the following areas are presented: (1) construction of optimum geometrically uniform trellis codes; (2) a statistical approach to constructing convolutional code generators; and (3) calculating the exact performance of a convolutional code.
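As context for item (2), a convolutional code is defined by its generator polynomials, which select shift-register taps for each output parity stream. The sketch below shows the standard structure of a rate-1/2 encoder; the constraint-length-7 generators (171, 133 octal) are a common illustrative choice, not necessarily those studied in this work.

```python
# Rate-1/2 convolutional encoder: two parity bits per input bit,
# each computed from the taps selected by one generator polynomial.
def conv_encode(bits, g1=0o171, g2=0o133, K=7):
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)   # shift the register
        out.append(bin(state & g1).count("1") % 2)    # parity of g1 taps
        out.append(bin(state & g2).count("1") % 2)    # parity of g2 taps
    return out

print(conv_encode([1, 0, 1, 1]))   # 8 coded bits for 4 input bits
```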
Coding for Single-Line Transmission
NASA Technical Reports Server (NTRS)
Madison, L. G.
1983-01-01
Digital transmission code combines data and clock signals into single waveform. MADCODE needs four standard integrated circuits in generator and converter plus five small discrete components. MADCODE allows simple coding and decoding for transmission of digital signals over single line.
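MADCODE's circuit details are not given in the abstract; the classic textbook example of merging data and clock into one waveform is Manchester coding, sketched here purely as an analogy for how a single line can carry both signals.

```python
def manchester_encode(bits):
    """IEEE 802.3 convention: 0 -> high-then-low, 1 -> low-then-high.
    Every data bit carries a guaranteed mid-bit transition, so the
    receiver can recover the clock from the waveform itself."""
    wave = []
    for b in bits:
        wave += [0, 1] if b else [1, 0]
    return wave

print(manchester_encode([1, 0, 1]))   # [0, 1, 1, 0, 0, 1]
```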
Subjective evaluation of next-generation video compression algorithms: a case study
NASA Astrophysics Data System (ADS)
De Simone, Francesca; Goldmann, Lutz; Lee, Jong-Seok; Ebrahimi, Touradj; Baroncini, Vittorio
2010-08-01
This paper describes the details and the results of the subjective quality evaluation performed at EPFL as a contribution to the effort of the Joint Collaborative Team on Video Coding (JCT-VC) for the definition of the next-generation video coding standard. The performance of 27 coding technologies has been evaluated with respect to two H.264/MPEG-4 AVC anchors, considering high definition (HD) test material. The test campaign involved a total of 494 naive observers and took place over a period of four weeks. While similar tests have been conducted as part of the standardization process of previous video coding technologies, the test campaign described in this paper is by far the most extensive in the history of video coding standardization. The obtained subjective quality scores show high consistency and support an accurate comparison of the performance of the different coding solutions.
NASA Astrophysics Data System (ADS)
Giorgino, Toni
2018-07-01
The proper choice of collective variables (CVs) is central to biased-sampling free energy reconstruction methods in molecular dynamics simulations. The PLUMED 2 library, for instance, provides several sophisticated CV choices, implemented in a C++ framework; however, developing new CVs is still time-consuming due to the need to provide code for the analytical derivatives of all functions with respect to atomic coordinates. We present two solutions to this problem, namely (a) symbolic differentiation and code generation, and (b) automatic code differentiation, in both cases leveraging open-source libraries (SymPy and Stan Math, respectively). The two approaches are demonstrated and discussed in detail implementing a realistic example CV, the local radius of curvature of a polymer. Users may use the code as a template to streamline the implementation of their own CVs using high-level constructs and automatic gradient computation.
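Approach (a) is easy to demonstrate with SymPy. The sketch below uses a deliberately simple CV, an interatomic distance, rather than the paper's radius-of-curvature example; `sp.ccode` would then emit C expressions of the kind a PLUMED CV implementation needs.

```python
# Define a CV symbolically, differentiate with SymPy, and emit callables.
import sympy as sp

x1, y1, z1, x2, y2, z2 = sp.symbols("x1 y1 z1 x2 y2 z2")
coords = [x1, y1, z1, x2, y2, z2]
cv = sp.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2 + (z2 - z1) ** 2)

grads = [sp.diff(cv, c) for c in coords]   # analytic derivatives, no hand coding
cv_fn = sp.lambdify(coords, cv)
grad_fn = sp.lambdify(coords, grads)

print(cv_fn(0, 0, 0, 1, 1, 1))             # sqrt(3)
print(grad_fn(0, 0, 0, 1, 1, 1))           # gradient w.r.t. all six coordinates
# sp.ccode(grads[0]) emits the C expression for the first derivative component.
```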
Life in the fast lane: high-throughput chemistry for lead generation and optimisation.
Hunter, D
2001-01-01
The pharmaceutical industry has come under increasing pressure due to regulatory restrictions on the marketing and pricing of drugs, competition, and the escalating costs of developing new drugs. These forces can be addressed by the identification of novel targets, reductions in the development time of new drugs, and increased productivity. Emphasis has been placed on identifying and validating new targets and on lead generation: the response from industry has been very evident in genomics and high throughput screening, where new technologies have been applied, usually coupled with a high degree of automation. The combination of numerous new potential biological targets and the ability to screen large numbers of compounds against many of these targets has generated the need for large diverse compound collections. To address this requirement, high-throughput chemistry has become an integral part of the drug discovery process.
NASA Astrophysics Data System (ADS)
Rahmani, Kianoosh; Kavousifard, Farzaneh; Abbasi, Alireza
2017-09-01
This article proposes a novel probabilistic Distribution Feeder Reconfiguration (DFR) method that takes uncertainty impacts into account with high accuracy. To achieve this aim, different scenarios are generated to represent the degree of uncertainty in the investigated elements, namely the active and reactive load consumption and the active power generation of the wind power units. Notably, a normal Probability Density Function (PDF) is divided into several class intervals for each uncertain parameter, according to the desired accuracy. Besides, a Weibull PDF is utilised for modelling the wind generators and capturing the variation of their power production. The proposed problem is solved with Fuzzy Adaptive Modified Particle Swarm Optimisation to find the optimal switching scheme during the multi-objective DFR. Moreover, this paper proposes two new mutation methods and adjusts the inertia weight of PSO by fuzzy rules to enhance its ability in global searching within the entire search space.
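The scenario-generation step can be sketched as follows: the load's normal PDF is discretised into class intervals with probability masses, and wind speeds drawn from a Weibull distribution are mapped through a piecewise turbine power curve. All numerical values (interval count, Weibull shape and scale, cut-in/rated/cut-out speeds) are illustrative, not the paper's.

```python
import numpy as np
from scipy import stats

def load_scenarios(mu, sigma, n_intervals=7):
    """Discretise a normal load PDF into class intervals with probabilities."""
    edges = np.linspace(mu - 3 * sigma, mu + 3 * sigma, n_intervals + 1)
    centres = 0.5 * (edges[:-1] + edges[1:])
    probs = np.diff(stats.norm.cdf(edges, mu, sigma))
    return centres, probs / probs.sum()          # renormalise the truncated tails

def wind_power(v, v_ci=3.0, v_r=12.0, v_co=25.0, p_r=2.0):
    """Piecewise wind-turbine power curve (MW): cut-in, rated, cut-out."""
    if v < v_ci or v >= v_co:
        return 0.0
    return p_r if v >= v_r else p_r * (v - v_ci) / (v_r - v_ci)

centres, probs = load_scenarios(mu=100.0, sigma=10.0)           # load in MW
v = stats.weibull_min.rvs(c=2.0, scale=8.0, size=5, random_state=0)
print(centres, probs)
print([round(wind_power(vi), 2) for vi in v])                   # wind scenarios
```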
A Method for Decentralised Optimisation in Networks
NASA Astrophysics Data System (ADS)
Saramäki, Jari
2005-06-01
We outline a method for distributed Monte Carlo optimisation of computational problems in networks of agents, such as peer-to-peer networks of computers. The optimisation and messaging procedures are inspired by gossip protocols and epidemic data dissemination, and are decentralised, i.e. no central overseer is required. In the outlined method, each agent follows simple local rules and searches for better solutions to the optimisation problem by Monte Carlo trials, as well as by querying other agents in its local neighbourhood. With a proper network topology, good solutions spread rapidly through the network for further improvement. Furthermore, the system retains its functionality even in realistic settings where agents are randomly switched on and off.
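A toy version of the scheme: each agent makes local Monte Carlo trials and, via gossip, adopts a better solution from a peer. For brevity this sketch queries a uniformly random peer, i.e. a complete graph, whereas the paper stresses the role of network topology; all parameters are illustrative.

```python
import numpy as np

def gossip_optimise(f, dim, n_agents=50, steps=2000, seed=0):
    """Agents hold candidate solutions; per step one agent makes a local
    Monte Carlo move and gossips with a random peer, copying the peer's
    solution if it is better (epidemic spreading of good solutions)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (n_agents, dim))
    fx = np.array([f(x) for x in X])
    for _ in range(steps):
        i = rng.integers(n_agents)
        trial = X[i] + rng.normal(0, 0.1, dim)    # local Monte Carlo trial
        ft = f(trial)
        if ft < fx[i]:
            X[i], fx[i] = trial, ft
        j = rng.integers(n_agents)                 # gossip with peer j
        if fx[j] < fx[i]:
            X[i], fx[i] = X[j].copy(), fx[j]
    return X[fx.argmin()], fx.min()

print(gossip_optimise(lambda x: np.sum(x**2), dim=3)[1])
```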
Thermal buckling optimisation of composite plates using firefly algorithm
NASA Astrophysics Data System (ADS)
Kamarian, S.; Shakeri, M.; Yas, M. H.
2017-07-01
Composite plates play a very important role in engineering applications, especially in the aerospace industry. The thermal buckling of such components is of great importance and must be known to achieve an appropriate design. This paper deals with the stacking sequence optimisation of laminated composite plates for maximising the critical buckling temperature using a powerful meta-heuristic called the firefly algorithm (FA), which is based on the flashing behaviour of fireflies. The main objective of the present work is to show the ability of FA in the optimisation of composite structures. The performance of FA is compared with the results reported in previously published works using other algorithms, which shows the efficiency of FA in the stacking sequence optimisation of laminated composite structures.
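The canonical (continuous) firefly update is sketched below on a toy objective; the paper's stacking-sequence problem would additionally need a discrete encoding of ply angles, which is omitted here, and all coefficients are Yang's textbook defaults rather than the paper's settings.

```python
import numpy as np

def firefly(f, lo, hi, n=25, iters=200, beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
    """Canonical firefly algorithm: each firefly moves toward every brighter
    one with attractiveness beta0*exp(-gamma*r^2), plus a random walk."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (n, len(lo)))
    I = np.array([f(x) for x in X])              # cost; lower cost = brighter
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if I[j] < I[i]:                  # firefly j is brighter
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    X[i] += (beta0 * np.exp(-gamma * r2) * (X[j] - X[i])
                             + alpha * (rng.random(len(lo)) - 0.5))
                    X[i] = np.clip(X[i], lo, hi)
                    I[i] = f(X[i])
        alpha *= 0.98                             # cool the random walk
    return X[I.argmin()], I.min()

lo, hi = np.array([-5.0] * 2), np.array([5.0] * 2)
print(firefly(lambda x: np.sum(x**2), lo, hi)[1])
```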
Distributed convex optimisation with event-triggered communication in networked systems
NASA Astrophysics Data System (ADS)
Liu, Jiayun; Chen, Weisheng
2016-12-01
This paper studies the distributed convex optimisation problem over directed networks. Motivated by practical considerations, we propose a novel distributed zero-gradient-sum optimisation algorithm with event-triggered communication. Communication and control updates therefore occur only at discrete instants when a predefined condition is satisfied. Thus, compared with time-driven distributed optimisation algorithms, the proposed algorithm has the advantages of lower energy consumption and lower communication cost. Based on Lyapunov approaches, we show that the proposed algorithm makes the system states asymptotically converge to the solution of the problem exponentially fast, and that Zeno behaviour is excluded. Finally, a simulation example is given to illustrate the effectiveness of the proposed algorithm.
ERIC Educational Resources Information Center
Mayer, Richard E.; Sims, Valerie K.
1994-01-01
In 2 experiments, 162 high- and low-spatial ability students viewed a computer-generated animation and heard a concurrent or successive explanation. The concurrent group generated more creative solutions to transfer problems and demonstrated a contiguity effect consistent with dual-coding theory. (SLD)
Size principle and information theory.
Senn, W; Wyler, K; Clamann, H P; Kleinle, J; Lüscher, H R; Müller, L
1997-01-01
The motor units of a skeletal muscle may be recruited according to different strategies. From all possible recruitment strategies nature selected the simplest one: in most actions of vertebrate skeletal muscles the recruitment of its motor units is by increasing size. This so-called size principle permits a high precision in muscle force generation since small muscle forces are produced exclusively by small motor units. Larger motor units are activated only if the total muscle force has already reached certain critical levels. We show that this recruitment by size is not only optimal in precision but also optimal in an information theoretical sense. We consider the motoneuron pool as an encoder generating a parallel binary code from a common input to that pool. The generated motoneuron code is sent down through the motoneuron axons to the muscle. We establish that an optimization of this motoneuron code with respect to its information content is equivalent to the recruitment of motor units by size. Moreover, maximal information content of the motoneuron code is equivalent to a minimal expected error in muscle force generation.
Table-driven software architecture for a stitching system
NASA Technical Reports Server (NTRS)
Thrash, Patrick J. (Inventor); Miller, Jeffrey L. (Inventor); Pallas, Ken (Inventor); Trank, Robert C. (Inventor); Fox, Rhoda (Inventor); Korte, Mike (Inventor); Codos, Richard (Inventor); Korolev, Alexandre (Inventor); Collan, William (Inventor)
2001-01-01
Native code for a CNC stitching machine is generated by generating a geometry model of a preform; generating tool paths from the geometry model, the tool paths including stitching instructions for making stitches; and generating additional instructions indicating thickness values. The thickness values are obtained from a lookup table. When the stitching machine runs the native code, it accesses a lookup table to determine a thread tension value corresponding to the thickness value. The stitching machine accesses another lookup table to determine a thread path geometry value corresponding to the thickness value.
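At run time the table-driven idea reduces to a band lookup: find the thickness band, return that band's parameter. A minimal sketch with invented breakpoints and tension values (the patent's actual tables are not reproduced in the abstract):

```python
import bisect

# Illustrative thickness -> tension table; breakpoints in mm, tensions in
# arbitrary controller units. Values are hypothetical, for demonstration only.
THICKNESS_BREAKS = [2.0, 4.0, 6.0, 8.0]       # upper edges of thickness bands
TENSION_VALUES   = [10,  14,  18,  24,  30]   # one entry per band (n+1 bands)

def thread_tension(thickness_mm):
    """Table-driven lookup: locate the thickness band, return its tension."""
    band = bisect.bisect_left(THICKNESS_BREAKS, thickness_mm)
    return TENSION_VALUES[band]

print(thread_tension(5.1))   # falls in the 4-6 mm band -> 18
```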
DOE Office of Scientific and Technical Information (OSTI.GOV)
MORIDIS, GEORGE
2016-05-02
MeshMaker v1.5 is a code that describes the system geometry and discretizes the domain in problems of flow and transport through porous and fractured media that are simulated using the TOUGH+ [Moridis and Pruess, 2014] or TOUGH2 [Pruess et al., 1999; 2012] families of codes. It is a significantly modified and drastically enhanced version of an earlier simpler facility that was embedded in the TOUGH2 codes [Pruess et al., 1999; 2012], from which it could not be separated. The code (MeshMaker.f90) is a stand-alone product written in FORTRAN 95/2003, is written according to the tenets of Object-Oriented Programming, has a modular structure and can perform a number of mesh generation and processing operations. It can generate two-dimensional radially symmetric (r,z) meshes, and one-, two-, and three-dimensional rectilinear (Cartesian) grids in (x,y,z). The code generates the file MESH, which includes all the elements and connections that describe the discretized simulation domain and conforming to the requirements of the TOUGH+ and TOUGH2 codes. Multiple-porosity processing for simulation of flow in naturally fractured reservoirs can be invoked by means of a keyword MINC, which stands for Multiple INteracting Continua. The MINC process operates on the data of the primary (porous medium) mesh as provided on disk file MESH, and generates a secondary mesh containing fracture and matrix elements with identical data formats on file MINC.
A universal preconditioner for simulating condensed phase materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Packwood, David; Ortner, Christoph, E-mail: c.ortner@warwick.ac.uk; Kermode, James, E-mail: j.r.kermode@warwick.ac.uk
2016-04-28
We introduce a universal sparse preconditioner that accelerates geometry optimisation and saddle point search tasks that are common in the atomic scale simulation of materials. Our preconditioner is based on the neighbourhood structure and we demonstrate the gain in computational efficiency in a wide range of materials that include metals, insulators, and molecular solids. The simple structure of the preconditioner means that the gains can be realised in practice not only when using expensive electronic structure models but also for fast empirical potentials. Even for relatively small systems of a few hundred atoms, we observe speedups of a factor of two or more, and the gain grows with system size. An open source Python implementation within the Atomic Simulation Environment is available, offering interfaces to a wide range of atomistic codes.
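In the spirit of the paper's neighbourhood-based construction, the sketch below assembles a sparse symmetric matrix with exponentially decaying off-diagonal couplings between neighbouring atoms and a diagonal chosen for strict diagonal dominance (hence positive definiteness). The actual preconditioner's coefficients and scaling differ, so treat this as a structural illustration only.

```python
import numpy as np
import scipy.sparse as sp

def neighbourhood_preconditioner(positions, r_cut=3.0, A=3.0, mu=1.0):
    """Sparse symmetric matrix with exponentially decaying couplings between
    atoms closer than r_cut; diagonals make each row strictly dominant."""
    n = len(positions)
    rows, cols, vals = [], [], []
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            if r < r_cut:
                w = -mu * np.exp(-A * (r / r_cut - 1.0))   # negative coupling
                rows += [i, j]; cols += [j, i]; vals += [w, w]
    P = sp.coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsr()
    diag = -np.asarray(P.sum(axis=1)).ravel() + 0.1 * mu   # strict dominance
    return P + sp.diags(diag)

pos = np.random.default_rng(0).random((50, 3)) * 5.0       # toy atomic positions
P = neighbourhood_preconditioner(pos)
print(P.shape, np.all(np.linalg.eigvalsh(P.toarray()) > 0))  # positive definite
```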
Kwag, Jeehyun; Jang, Hyun Jae; Kim, Mincheol; Lee, Sujeong
2014-01-01
Rate and phase codes are believed to be important in neural information processing. Hippocampal place cells provide a good example where both coding schemes coexist during spatial information processing. Spike rate increases in the place field, whereas spike phase precesses relative to the ongoing theta oscillation. However, what intrinsic mechanism allows a single neuron to generate spike output patterns that contain both neural codes is unknown. Using dynamic clamp, we impose in vivo-like subthreshold dynamics of place cells onto in vitro CA1 pyramidal neurons to establish an in vitro model of spike phase precession. Using this in vitro model, we show that membrane potential oscillation (MPO) dynamics is important in the emergence of spike phase codes: blocking the slowly activating, non-inactivating K+ current (IM), which is known to control subthreshold MPO, disrupts MPO and abolishes spike phase precession. We verify the importance of adaptive IM in the generation of phase codes using both an adaptive integrate-and-fire and a Hodgkin–Huxley (HH) neuron model. In particular, using the HH model, we further show that it is the perisomatically located IM with slow activation kinetics that is crucial for the generation of phase codes. These results suggest an important functional role of IM in single neuron computation, where IM serves as an intrinsic mechanism allowing for dual rate and phase coding in single neurons. PMID:25100320
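As a hedged illustration of the adaptive integrate-and-fire ingredient mentioned above, the Python sketch below adds a slow, IM-like adaptation current to a leaky integrate-and-fire neuron driven by a theta-modulated place-field ramp; all parameters are illustrative and not taken from the paper.

# Hedged sketch: adaptive leaky integrate-and-fire neuron with a slow,
# IM-like adaptation current w, driven by a theta-modulated depolarising
# ramp. Parameters are illustrative assumptions.
import numpy as np

dt, T = 0.1e-3, 2.0                      # time step and duration (s)
t = np.arange(0, T, dt)
theta = 1.0 + 0.5 * np.sin(2 * np.pi * 8.0 * t)      # 8 Hz theta modulation
ramp = np.clip((t - 0.5) / 1.0, 0, 1)                # place-field depolarising ramp
I = 0.4e-9 * theta * ramp                            # input current (A)

C, gL, EL = 200e-12, 10e-9, -70e-3       # capacitance, leak conductance, rest
Vth, Vreset = -50e-3, -60e-3
tau_w, a, b = 150e-3, 4e-9, 20e-12       # slow adaptation (IM-like) parameters

V, w, spikes = EL, 0.0, []
for i, ti in enumerate(t):
    dV = (-gL * (V - EL) - w + I[i]) / C
    dw = (a * (V - EL) - w) / tau_w      # slowly activating K+-like current
    V, w = V + dt * dV, w + dt * dw
    if V >= Vth:
        V, w = Vreset, w + b             # spike-triggered adaptation increment
        spikes.append(ti)

if spikes:
    print(f"{len(spikes)} spikes between {spikes[0]:.3f} s and {spikes[-1]:.3f} s")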
Standing your Ground to Exoribonucleases: Function of Flavivirus Long Non-coding RNAs
Charley, Phillida A.; Wilusz, Jeffrey
2015-01-01
Members of the Flaviviridae (e.g. Dengue virus, West Nile virus, and Hepatitis C virus) contain a positive-sense RNA genome that encodes a large polyprotein. It is now also clear that most, if not all, of these viruses also produce an abundant subgenomic long non-coding RNA. These non-coding RNAs, which are called subgenomic flavivirus RNAs (sfRNAs) or Xrn1-resistant RNAs (xrRNAs), are stable decay intermediates generated from the viral genomic RNA through the stalling of the cellular exoribonuclease Xrn1 at highly structured regions. Several functions of these flavivirus long non-coding RNAs have been revealed in recent years. The generation of these sfRNAs/xrRNAs from viral transcripts results in the repression of Xrn1 and the dysregulation of cellular mRNA stability. The abundant sfRNAs also serve directly as a decoy for important cellular protein regulators of the interferon and RNA interference antiviral pathways. Thus the generation of long non-coding RNAs from flaviviruses, hepaciviruses and pestiviruses likely disrupts aspects of innate immunity and may directly contribute to viral replication, cytopathology and pathogenesis. PMID:26368052
Blaschke, V; Brauns, B; Khaladj, N; Schmidt, C; Emmert, S
2018-02-27
Hospital revenues generated by diagnosis-related groups (DRGs) are in part dependent on the coding of secondary diagnoses. Therefore, more and more hospitals entrust specialized coders with this task, thereby relieving doctors of time-consuming administrative burdens and establishing a highly professionalized coding environment. However, it is largely unknown whether the revenues generated by the coders do indeed exceed their incurred costs. Coding data from the departments of dermatology, ophthalmology, and infectious diseases of Rostock University Hospital from 2007-2016 were analyzed for the effects of secondary diagnoses on the resulting DRG, i.e., hospital charges. Ophthalmological cases were highly resistant to the addition of secondary diagnoses. In contrast, adding secondary diagnoses to cases from infectious diseases resulted in 15% higher revenues. Although dermatological and infectious cases share the same sensitivity to secondary diagnoses, higher revenues could only rarely be realized in dermatology, probably owing to a younger, less multimorbid patient population. Except for ophthalmology, entrusting specialized coders with clinical coding generates additional revenues through the coding of secondary diagnoses which exceed the costs of employing these coders.
Automated Translation of Safety Critical Application Software Specifications into PLC Ladder Logic
NASA Technical Reports Server (NTRS)
Leucht, Kurt W.; Semmel, Glenn S.
2008-01-01
The numerous benefits of automatic application code generation are widely accepted within the software engineering community. A few of these benefits include raising the abstraction level of application programming, shorter product development time, lower maintenance costs, and increased code quality and consistency. Surprisingly, code generation concepts have not yet found wide acceptance and use in the field of programmable logic controller (PLC) software development. Software engineers at the NASA Kennedy Space Center (KSC) recognized the need for PLC code generation while developing their new ground checkout and launch processing system. They developed a process and a prototype software tool that automatically translates a high-level representation or specification of safety critical application software into ladder logic that executes on a PLC. This process and tool are expected to increase the reliability of the PLC code over that which is written manually, and may even lower life-cycle costs and shorten the development schedule of the new control system at KSC. This paper examines the problem domain and discusses the process and software tool that were prototyped by the KSC software engineers.
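As a hedged illustration of the translation idea (not the KSC tool itself), the Python sketch below compiles a toy boolean requirement into an IL-style instruction list that a PLC toolchain could render as a single ladder rung; the mini-specification format is an assumption.

# Hedged sketch: a high-level boolean requirement compiled into an
# instruction-list form equivalent to one ladder rung. The SPEC format and
# tag names are hypothetical, not the KSC tool's input language.
SPEC = {"output": "OPEN_VENT_VALVE",
        "when": ["TANK_PRESSURE_HIGH", "NOT MANUAL_OVERRIDE"]}

def to_instruction_list(spec):
    rung = []
    for i, cond in enumerate(spec["when"]):
        negated = cond.startswith("NOT ")
        tag = cond[4:] if negated else cond
        op = "LD" if i == 0 else "AND"          # load first contact, AND the rest
        rung.append(f"{op}{'N' if negated else ''} {tag}")
    rung.append(f"OUT {spec['output']}")        # rung output coil
    return "\n".join(rung)

print(to_instruction_list(SPEC))
# LD TANK_PRESSURE_HIGH / ANDN MANUAL_OVERRIDE / OUT OPEN_VENT_VALVE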
NASA Technical Reports Server (NTRS)
Valley, Lois
1989-01-01
The SPS product, Classic-Ada, is a software tool that supports object-oriented Ada programming with powerful inheritance and dynamic binding. Object Oriented Design (OOD) is an easy, natural development paradigm, but it is not supported by Ada. Following the DOD Ada mandate, SPS developed Classic-Ada to provide a tool which supports OOD and implements code in Ada. It consists of a design language, a code generator and a toolset. As a design language, Classic-Ada supports the object-oriented principles of information hiding, data abstraction, dynamic binding, and inheritance. It also supports natural reuse and incremental development through inheritance and code factoring, and allows Ada and Classic-Ada, and dynamic and static binding, to be mixed in the same program. Only nine new constructs were added to Ada to provide object-oriented design capabilities. The Classic-Ada code generator translates user application code into fully compliant, ready-to-run, standard Ada. The Classic-Ada toolset is fully supported by SPS and consists of an object generator, a builder, a dictionary manager, and a reporter. Demonstrations of Classic-Ada and the Classic-Ada Browser were given at the workshop.
NASA Technical Reports Server (NTRS)
Bidwell, Colin S.; Papadakis, Michael
2005-01-01
Collection efficiency and ice accretion calculations have been made for a series of business jet horizontal tail configurations using a three-dimensional panel code, an adaptive grid code, and the NASA Glenn LEWICE3D grid based ice accretion code. The horizontal tail models included two full scale wing tips and a 25 percent scale model. Flow solutions for the horizontal tails were generated using the PMARC panel code. Grids used in the ice accretion calculations were generated using the adaptive grid code ICEGRID. The LEWICE3D grid based ice accretion program was used to calculate impingement efficiency and ice shapes. Ice shapes typifying rime and mixed icing conditions were generated for a 30 minute hold condition. All calculations were performed on an SGI Octane computer. The results have been compared to experimental flow and impingement data. In general, the calculated flow and collection efficiencies compared well with experiment, and the ice shapes appeared representative of the rime and mixed icing conditions for which they were calculated.
Optimized nonorthogonal transforms for image compression.
Guleryuz, O G; Orchard, M T
1997-01-01
The transform coding of images is analyzed from a common standpoint in order to generate a framework for the design of optimal transforms. It is argued that all transform coders are alike in the way they manipulate the data structure formed by transform coefficients. A general energy compaction measure is proposed to generate optimized transforms with desirable characteristics particularly suited to the simple transform coding operations of scalar quantization and entropy coding. It is shown that the optimal linear decoder (inverse transform) must be an optimal linear estimator, independent of the structure of the transform generating the coefficients. A formulation that sequentially optimizes the transforms is presented, and design equations and algorithms for its computation are provided. The properties of the resulting transform systems are investigated. In particular, it is shown that the resulting bases are nonorthogonal and complete, producing energy-compaction-optimized, decorrelated transform coefficients. Quantization issues related to nonorthogonal expansion coefficients are addressed with a simple, efficient algorithm. Two implementations are discussed, and image coding examples are given. It is shown that the proposed design framework results in systems with superior energy compaction properties and excellent coding results.
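The decoder result can be illustrated numerically. In the hedged Python sketch below, a nonorthogonal analysis transform H produces scalar-quantised coefficients, and the least-squares-optimal linear decoder, estimated from sample statistics, is compared against the naive inverse transform; the signal model and quantiser step are illustrative assumptions.

# Hedged sketch: for a fixed (possibly nonorthogonal) analysis transform H
# and quantised coefficients y = Q(Hx), the optimal linear decoder is the
# linear least-squares estimator G = E[x y^T] (E[y y^T])^{-1}, independent
# of H's structure.
import numpy as np

rng = np.random.default_rng(0)
n, N, step = 8, 20000, 0.5
rho = 0.9                                                  # AR(1)-like toy image-row model
cov = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
X = rng.multivariate_normal(np.zeros(n), cov, size=N).T    # n x N samples

H = rng.normal(size=(n, n))                                # nonorthogonal analysis transform
Y = np.round(H @ X / step) * step                          # scalar quantisation

G = (X @ Y.T) @ np.linalg.inv(Y @ Y.T)                     # optimal linear decoder
mse_opt = np.mean((X - G @ Y) ** 2)
mse_inv = np.mean((X - np.linalg.inv(H) @ Y) ** 2)         # naive inverse transform
print(f"MSE optimal decoder: {mse_opt:.4f}  vs inverse transform: {mse_inv:.4f}")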
Microscale bioprocess optimisation.
Micheletti, Martina; Lye, Gary J
2006-12-01
Microscale processing techniques offer the potential to speed up the delivery of new drugs to the market, reducing development costs and increasing patient benefit. These techniques have application across both the chemical and biopharmaceutical sectors. The approach involves the study of individual bioprocess operations at the microlitre scale using either microwell or microfluidic formats. In both cases the aim is to generate quantitative bioprocess information early on, so as to inform bioprocess design and speed translation to the manufacturing scale. Automation can enhance experimental throughput and will facilitate the parallel evaluation of competing biocatalyst and process options.
Role of pump hydro in electric power systems
NASA Astrophysics Data System (ADS)
Bessa, R.; Moreira, C.; Silva, B.; Filipe, J.; Fulgêncio, N.
2017-04-01
This paper provides an overview of the expected role that variable speed hydro power plants can have in future electric power systems characterized by a massive integration of highly variable sources. The development of a methodology for optimising the operation of hydropower plants under an increasing contribution from new renewable energy sources is discussed, addressing the participation of a hydropower plant with variable speed pumping in reserve markets. Complementarily, the active role variable speed generators can have in the provision of advanced frequency regulation services is also discussed.
A Bayesian Approach for Sensor Optimisation in Impact Identification
Mallardo, Vincenzo; Sharif Khodaei, Zahra; Aliabadi, Ferri M. H.
2016-01-01
This paper presents a Bayesian approach for optimizing the position of sensors aimed at impact identification in composite structures under operational conditions. The uncertainty in the sensor data has been represented by statistical distributions of the recorded signals. An optimisation strategy based on the genetic algorithm is proposed to find the best sensor combination aimed at locating impacts on composite structures. A Bayesian-based objective function is adopted in the optimisation procedure as an indicator of the performance of meta-models developed for different sensor combinations to locate various impact events. To represent a real structure under operational load and to increase the reliability of the Structural Health Monitoring (SHM) system, the probability of malfunctioning sensors is included in the optimisation. The reliability and the robustness of the procedure are tested with experimental and numerical examples. Finally, the proposed optimisation algorithm is applied to a composite stiffened panel for both uniform and non-uniform probabilities of impact occurrence. PMID:28774064
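A hedged sketch of the genetic-algorithm layer is given below; the fitness function is a simple placeholder standing in for the paper's Bayesian objective (which scores meta-model impact-location performance and can include sensor-failure probabilities).

# Hedged sketch: a genetic algorithm over sensor subsets. The fitness
# function is a placeholder, not the paper's Bayesian objective.
import random

CANDIDATES = [(x, y) for x in range(10) for y in range(10)]  # candidate positions
K, POP, GENS = 8, 40, 60
IMPACTS = [(random.uniform(0, 9), random.uniform(0, 9)) for _ in range(50)]

def fitness(subset):
    # Placeholder: negative mean squared distance from each impact to its
    # nearest sensor (larger is better).
    return -sum(min((sx - ix) ** 2 + (sy - iy) ** 2 for sx, sy in subset)
                for ix, iy in IMPACTS) / len(IMPACTS)

def crossover(a, b):
    child = list(set(random.sample(a, K // 2) + random.sample(b, K - K // 2)))
    while len(child) < K:                      # top up to K positions
        child.append(random.choice(CANDIDATES))
    return child[:K]

def mutate(s):
    s = list(s)
    s[random.randrange(K)] = random.choice(CANDIDATES)
    return s

pop = [random.sample(CANDIDATES, K) for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:POP // 4]                     # elitist selection
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(POP - len(elite))]
print("best sensor layout:", sorted(max(pop, key=fitness)))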
Optimisation of active suspension control inputs for improved vehicle handling performance
NASA Astrophysics Data System (ADS)
Čorić, Mirko; Deur, Joško; Kasać, Josip; Tseng, H. Eric; Hrovat, Davor
2016-11-01
Active suspension is commonly considered under the framework of vertical vehicle dynamics control aimed at improvements in ride comfort. This paper uses a collocation-type control variable optimisation tool to investigate the extent to which the fully active suspension (FAS) application can be broadened to the task of vehicle handling/cornering control. The optimisation approach is first applied to FAS-only actuator configurations and three types of double lane-change manoeuvres. The obtained optimisation results are used to gain insights into the different control mechanisms that FAS uses to improve handling performance in terms of path-following error reduction. For the same manoeuvres the FAS performance is compared with the performance of different active steering and active differential actuators. The optimisation study is finally extended to combined FAS and active front- and/or rear-steering configurations to investigate whether they can use their complementary control authorities (over the vertical and lateral vehicle dynamics, respectively) to further improve the handling performance.
Andrighetto, Luke M; Stevenson, Paul G; Pearson, James R; Henderson, Luke C; Conlan, Xavier A
2014-11-01
In-silico optimised two-dimensional high performance liquid chromatographic (2D-HPLC) separations of a model methamphetamine seizure sample are described, where an excellent match between simulated and real separations was observed. Targeted separation of model compounds was completed with significantly reduced method development time. This separation was completed in the heart-cutting mode of 2D-HPLC, where C18 columns were used in both dimensions, taking advantage of the selectivity difference between methanol and acetonitrile as the mobile phases. This method development protocol is most significant when optimising the separation of chemically similar compounds, as it eliminates potentially hours of trial-and-error injections to identify the optimised experimental conditions. After only four screening injections, the gradient profile for both 2D-HPLC dimensions could be optimised via simulations, ensuring the baseline resolution of the diastereomers ephedrine and pseudoephedrine in 9.7 min. Depending on which diastereomer is present, the potential synthetic pathway can be categorized.
FlexibleSUSY-A spectrum generator generator for supersymmetric models
NASA Astrophysics Data System (ADS)
Athron, Peter; Park, Jae-hyeon; Stöckinger, Dominik; Voigt, Alexander
2015-05-01
We introduce FlexibleSUSY, a Mathematica and C++ package, which generates a fast, precise C++ spectrum generator for any SUSY model specified by the user. The generated code is designed with both speed and modularity in mind, making it easy to adapt and extend with new features. The model is specified by supplying the superpotential, gauge structure and particle content in a SARAH model file; specific boundary conditions e.g. at the GUT, weak or intermediate scales are defined in a separate FlexibleSUSY model file. From these model files, FlexibleSUSY generates C++ code for self-energies, tadpole corrections, renormalization group equations (RGEs) and electroweak symmetry breaking (EWSB) conditions and combines them with numerical routines for solving the RGEs and EWSB conditions simultaneously. The resulting spectrum generator is then able to solve for the spectrum of the model, including loop-corrected pole masses, consistent with user specified boundary conditions. The modular structure of the generated code allows for individual components to be replaced with an alternative if available. FlexibleSUSY has been carefully designed to grow as alternative solvers and calculators are added. Predefined models include the MSSM, NMSSM, E6SSM, USSM, R-symmetric models and models with right-handed neutrinos.
Shape Optimisation of Holes in Loaded Plates by Minimisation of Multiple Stress Peaks
2015-04-01
Witold Waldman and Manfred...
A method is presented for minimising the peak tangential stresses on multiple segments around the boundary of a hole in a uniaxially-loaded or biaxially-loaded plate.
NASA Astrophysics Data System (ADS)
Harré, Michael S.
2013-02-01
Two aspects of modern economic theory have dominated the recent discussion on the state of the global economy: Crashes in financial markets and whether or not traditional notions of economic equilibrium have any validity. We have all seen the consequences of market crashes: plummeting share prices, businesses collapsing and considerable uncertainty throughout the global economy. This seems contrary to what might be expected of a system in equilibrium where growth dominates the relatively minor fluctuations in prices. Recent work from within economics as well as by physicists, psychologists and computational scientists has significantly improved our understanding of the more complex aspects of these systems. With this interdisciplinary approach in mind, a behavioural economics model of local optimisation is introduced and three general properties are proven. The first is that under very specific conditions local optimisation leads to a conventional macro-economic notion of a global equilibrium. The second is that if both global optimisation and economic growth are required then under very mild assumptions market catastrophes are an unavoidable consequence. Third, if only local optimisation and economic growth are required then there is sufficient parametric freedom for macro-economic policy makers to steer an economy around catastrophes without overtly disrupting local optimisation.
Tuning iteration space slicing based tiled multi-core code implementing Nussinov's RNA folding.
Palkowski, Marek; Bielecki, Wlodzimierz
2018-01-15
RNA folding is an ongoing compute-intensive task of bioinformatics. Parallelization and improving code locality for this kind of algorithm is one of the most relevant areas in computational biology. Fortunately, RNA secondary structure approaches, such as Nussinov's recurrence, involve mathematical operations over affine control loops whose iteration space can be represented by the polyhedral model. This allows us to apply powerful polyhedral compilation techniques based on the transitive closure of dependence graphs to generate parallel tiled code implementing Nussinov's RNA folding. Such techniques are within the iteration space slicing framework - the transitive dependences are applied to the statement instances of interest to produce valid tiles. The main problem in generating parallel tiled code is defining a proper tile size and tile dimension, which impact the degree of parallelism and code locality. To choose the best tile size and tile dimension, we first construct parallel parametric tiled code (the parameters are variables defining the tile size). For this purpose, we generate two nonparametric tiled codes with different fixed tile sizes but with the same code structure, and then derive a general affine model that describes all integer factors available in the expressions of those codes. Using this model and the known integer factors present in those expressions (they define the left-hand side of the model), we solve for the unknown integers for each integer factor occurring at the same position in the fixed tiled code, and replace the expressions containing integer factors with ones containing parameters. Then we use this parallel parametric tiled code to implement the well-known tile size selection (TSS) technique, which allows us to discover, in a given search space, the best tile size and tile dimension maximizing target code performance. For a given search space, the presented approach allows us to choose the best tile size and tile dimension in parallel tiled code implementing Nussinov's RNA folding. Experimental results, obtained on modern Intel multi-core processors, demonstrate that this code outperforms known closely related implementations when the length of the RNA strands is greater than 2500.
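For reference, the untiled Nussinov recurrence whose affine loop nest is the target of the tiling above can be written in a few lines; the Python sketch below is the serial baseline only, not the parametric tiled code generated by the described framework.

# Hedged sketch: serial Nussinov recurrence (maximum base-pair count).
def pairs(a, b):
    return 1 if (a, b) in {("A","U"), ("U","A"), ("C","G"), ("G","C")} else 0

def nussinov(seq):
    n = len(seq)
    N = [[0] * n for _ in range(n)]
    for span in range(1, n):                 # affine loop nest over (span, i, k)
        for i in range(n - span):
            j = i + span
            best = max(N[i + 1][j],          # i unpaired
                       N[i][j - 1],          # j unpaired
                       N[i + 1][j - 1] + pairs(seq[i], seq[j]))  # i-j paired
            for k in range(i + 1, j):        # bifurcation into two subproblems
                best = max(best, N[i][k] + N[k + 1][j])
            N[i][j] = best
    return N[0][n - 1]

print(nussinov("GGGAAAUCC"))  # maximum number of base pairs for a toy strand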
Optimisation techniques in vaginal cuff brachytherapy.
Tuncel, N; Garipagaoglu, M; Kizildag, A U; Andic, F; Toy, A
2009-11-01
The aim of this study was to explore whether an in-house dosimetry protocol and optimisation method are able to produce a homogeneous dose distribution in the target volume, and how often optimisation is required in vaginal cuff brachytherapy. Treatment planning was carried out for 109 fractions in 33 patients who underwent high dose rate iridium-192 (Ir-192) brachytherapy using Fletcher ovoids. Dose prescription and normalisation were performed to catheter-oriented lateral dose points (dps) within a range of 90-110% of the prescribed dose. The in-house vaginal apex point (Vk), alternative vaginal apex point (Vk'), International Commission on Radiation Units and Measurements (ICRU) rectal point (Rg) and bladder point (Bl) doses were calculated. Time-position optimisations were made considering dps, Vk and Rg doses. The intention was to keep the Vk dose above 95% and the Rg dose below 85% of the prescribed dose. Target dose homogeneity, optimisation frequency and the relationship between prescribed dose, Vk, Vk', Rg and ovoid diameter were investigated. The mean target dose was 99+/-7.4% of the prescription dose. Optimisation was required in 92 out of 109 (83%) fractions. Ovoid diameter had a significant effect on Rg (p = 0.002), Vk (p = 0.018), Vk' (p = 0.034), minimum dps (p = 0.021) and maximum dps (p<0.001). Rg, Vk and Vk' doses with 2.5 cm diameter ovoids were significantly higher than with 2 cm and 1.5 cm ovoids. Catheter-oriented dose point normalisation provided a homogeneous dose distribution with a 99+/-7.4% mean dose within the target volume, requiring time-position optimisation.
"SMART": A Compact and Handy FORTRAN Code for the Physics of Stellar Atmospheres
NASA Astrophysics Data System (ADS)
Sapar, A.; Poolamäe, R.
2003-01-01
A new computer code SMART (Spectra from Model Atmospheres by Radiative Transfer) for computing stellar spectra forming in plane-parallel atmospheres has been compiled by us and A. Aret. To guarantee wide compatibility of the code with shell environments, we chose FORTRAN-77 as the programming language and tried to confine ourselves to the common part of its numerous versions under both WINDOWS and LINUX. SMART can be used for studies of several processes in stellar atmospheres. The current version of the programme is undergoing rapid changes due to our goal of elaborating a simple, handy and compact code. Instead of linearisation (a mathematical method of recurrent approximations) we propose to use physical evolutionary changes, in other words the relaxation of quantum state populations from LTE to NLTE, which has been studied using a small number of NLTE states. This computational scheme is essentially simpler and more compact than linearisation. The relaxation scheme makes it possible to use, instead of the Λ-iteration procedure, a physically changing emissivity (or source function) which incorporates the changing Menzel coefficients for NLTE quantum state populations. However, light scattering on free electrons is, in terms of Feynman graphs, a real second-order quantum process and cannot be reduced to consecutive processes of absorption and emission as in the case of radiative transfer in spectral lines. With duly chosen input parameters the code SMART enables computing the radiative acceleration of the matter of a stellar atmosphere in turbulence clumps. This also makes it possible to connect the model atmosphere in more detail with the problem of stellar wind triggering. Another problem incorporated into the computer code SMART is the diffusion of chemical elements and their isotopes in the atmospheres of chemically peculiar (CP) stars due to the usual radiative acceleration and the essential additional acceleration generated by the light-induced drift. As a special case, using duly chosen pixels on the stellar disk, the spectrum of a rotating star can be computed. No instrumental broadening has been incorporated in the code of SMART. To facilitate the study of stellar spectra, a GUI (Graphical User Interface) with selection of labels by ions has been compiled to study the spectral lines of different elements and ions in the computed emergent flux. An amazing feature of SMART is that its code is very short: it occupies only 4 two-sided two-column A4 sheets in landscape format. In addition, being well commented, it is quite easily readable and understandable. We have used the tactic of writing the comments on the right-side margin (columns starting from 73). Such a short code has been made possible by the widespread use of unified input physics (for example the ionisation cross-sections for bound-free transitions and the electron and ion collision rates). A current restriction on the application area of the present version of SMART is that molecules are so far ignored. Thus, it can be used only for lukewarm and hot stellar atmospheres. In the computer code we have tried to avoid bulky, often over-optimised methods primarily meant to save computation time. For instance, we compute the continuous absorption coefficient at every wavelength. Nevertheless, within an hour on the personal computer at our disposal (AMD Athlon XP 1700+, 512 MB DDRAM) a stellar spectrum with spectral step resolution λ/dλ = 100,000 for the spectral interval 700 -- 30,000 Å is computed.
The model input data and the line data used by us are both the ones computed and compiled by R. Kurucz. In order to track the presence and representability of quantum states and to enumerate them for NLTE studies, a C++ code transforming the needed data to LaTeX has been compiled. Thus we have composed a quantum state list for all neutrals and ions in the Kurucz file 'gfhyperall.dat'. The list enables a more adequate composition of the concept of super-states, including partly correlated super-states. We are grateful to R. Kurucz for making available, on CD-ROM and via the Internet, his computer codes ATLAS and SYNTHE, used by us as a starting point in composing the new computer code. We are also grateful to the Estonian Science Foundation for grant ESF-4701.
Holroyd, Kenneth A; Cottrell, Constance K; O'Donnell, Francis J; Cordingley, Gary E; Drew, Jana B; Carlson, Bruce W; Himawan, Lina
2010-09-29
To determine if the addition of preventive drug treatment (β blocker), brief behavioural migraine management, or their combination improves the outcome of optimised acute treatment in the management of frequent migraine. Randomised placebo controlled trial over 16 months from July 2001 to November 2005. Two outpatient sites in Ohio, USA. 232 adults (mean age 38 years; 79% female) with a diagnosis of migraine with or without aura according to International Headache Society classification of headache disorders criteria, who recorded at least three migraines with disability per 30 days (mean 5.5 migraines/30 days) during an optimised run-in of acute treatment. Addition of one of four preventive treatments to optimised acute treatment: β blocker (n=53), matched placebo (n=55), behavioural migraine management plus placebo (n=55), or behavioural migraine management plus β blocker (n=69). The primary outcome was change in migraines/30 days; secondary outcomes included change in migraine days/30 days and change in migraine specific quality of life scores. Mixed model analysis showed statistically significant (P≤0.05) differences in outcomes among the four added treatments for both the primary outcome (migraines/30 days) and the two secondary outcomes (change in migraine days/30 days and change in migraine specific quality of life scores). The addition of combined β blocker and behavioural migraine management (-3.3 migraines/30 days, 95% confidence interval -3.2 to -3.5), but not the addition of β blocker alone (-2.1 migraines/30 days, -1.9 to -2.2) or behavioural migraine management alone (-2.2 migraines/30 days, -2.0 to -2.4), improved outcomes compared with optimised acute treatment alone (-2.1 migraines/30 days, -1.9 to -2.2). For a clinically significant (≥50%) reduction in migraines/30 days, the number needed to treat for optimised acute treatment plus combined β blocker and behavioural migraine management was 3.1 compared with optimised acute treatment alone, 2.6 compared with optimised acute treatment plus β blocker, and 3.1 compared with optimised acute treatment plus behavioural migraine management. Results were consistent for the two secondary outcomes, and at both month 10 (the primary endpoint) and month 16. The addition of combined β blocker plus behavioural migraine management, but not the addition of β blocker alone or behavioural migraine management alone, improved outcomes of optimised acute treatment. Combined β blocker treatment and behavioural migraine management may improve outcomes in the treatment of frequent migraine. Clinical trials NCT00910689.
NASA Astrophysics Data System (ADS)
Nickless, A.; Rayner, P. J.; Erni, B.; Scholes, R. J.
2018-05-01
The design of an optimal network of atmospheric monitoring stations for the observation of carbon dioxide (CO2) concentrations can be obtained by applying an optimisation algorithm to a cost function based on minimising the posterior uncertainty in the CO2 fluxes obtained from a Bayesian inverse modelling solution. Two candidate optimisation methods were assessed: an evolutionary algorithm, the genetic algorithm (GA), and a deterministic algorithm, the incremental optimisation (IO) routine. This paper assessed the ability of the IO routine, in comparison with the more computationally demanding GA routine, to optimise the placement of a five-member network of CO2 monitoring sites located in South Africa. The comparison considered the reduction in uncertainty of the overall flux estimate, the spatial similarity of solutions, and computational requirements. Although the IO routine failed to find the solution with the global maximum uncertainty reduction, the resulting solution had only fractionally lower uncertainty reduction compared with the GA, and at only a quarter of the computational resources used by the smallest specified GA configuration. The GA solution set showed more inconsistency if the number of iterations or the population size was small, and more so for a complex prior flux covariance matrix. If the GA completed with a sub-optimal solution, these solutions were similar in fitness to the best available solution. Two additional scenarios were considered, with the objective of creating circumstances where the GA may outperform the IO. The first scenario considered an established network, where the optimisation was required to add five stations to an existing five-member network. In the second scenario the optimisation was based only on the uncertainty reduction within a subregion of the domain. The GA was able to find a better solution than the IO under both scenarios, but with only a marginal improvement in the uncertainty reduction. These results suggest that the best use of resources for the network design problem would be to improve the prior estimates of the flux uncertainties rather than to invest these resources in running a complex evolutionary optimisation algorithm. The authors recommend, if time and computational resources allow, that multiple optimisation techniques be used as part of a comprehensive suite of sensitivity tests when performing such an optimisation exercise. This will provide a selection of best solutions which can be ranked based on their utility and practicality.
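The structure of the IO routine, as described, is greedy: stations are added one at a time, each chosen to maximise the objective given the stations already placed. A hedged Python sketch of this structure follows; the toy objective is a placeholder for the Bayesian uncertainty-reduction cost function.

# Hedged sketch: incremental (greedy) network optimisation. The toy
# objective stands in for the posterior-uncertainty-reduction cost function.
def incremental_optimisation(candidates, n_stations, objective):
    network = []
    for _ in range(n_stations):
        best = max((c for c in candidates if c not in network),
                   key=lambda c: objective(network + [c]))
        network.append(best)                  # commit the best single addition
    return network

def toy_objective(network):
    # Diminishing-returns coverage of equally weighted 1-D flux regions.
    return sum(1.0 / (1 + min(abs(c - r) for c in network))
               for r in range(0, 100, 5))

stations = incremental_optimisation(list(range(100)), 5, toy_objective)
print("greedy five-member network:", stations)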
Modeling Guidelines for Code Generation in the Railway Signaling Context
NASA Technical Reports Server (NTRS)
Ferrari, Alessio; Bacherini, Stefano; Fantechi, Alessandro; Zingoni, Niccolo
2009-01-01
Modeling guidelines constitute one of the fundamental cornerstones for Model Based Development. Their relevance is essential when dealing with code generation in the safety-critical domain. This article presents the experience of a railway signaling systems manufacturer on this issue. The introduction of Model-Based Development (MBD) and code generation in the industrial safety-critical sector created a crucial paradigm shift in the development process of dependable systems. While traditional software development focuses on the code, with MBD practices the focus shifts to model abstractions. The change has fundamental implications for safety-critical systems, which still need to guarantee a high degree of confidence also at code level. Usage of the Simulink/Stateflow platform for modeling, which is a de facto standard in control software development, does not ensure by itself production of high-quality dependable code. This issue has been addressed by companies through the definition of modeling rules imposing restrictions on the usage of design tool components, in order to enable production of qualified code. The MAAB Control Algorithm Modeling Guidelines (MathWorks Automotive Advisory Board) [3] is a well established set of publicly available rules for modeling with Simulink/Stateflow. This set of recommendations has been developed by a group of OEMs and suppliers of the automotive sector with the objective of enforcing and easing the usage of the MathWorks tools within the automotive industry. The guidelines were published in 2001 and afterwards revised in 2007 in order to integrate some additional rules developed by the Japanese division of MAAB [5]. The scope of the current edition of the guidelines ranges from model maintainability and readability to code generation issues. The rules are conceived as a reference baseline and therefore they need to be tailored to comply with the characteristics of each industrial context. Customization of these recommendations has been performed for the automotive control systems domain in order to enforce code generation [7]. The MAAB guidelines have also been found profitable in the aerospace/avionics sector [1] and they have been adopted by the MathWorks Aerospace Leadership Council (MALC). General Electric Transportation Systems (GETS) is a well known railway signaling systems manufacturer leading in Automatic Train Protection (ATP) systems technology. As part of an effort to adopt formal methods within its own development process, GETS decided to introduce system modeling by means of the MathWorks tools [2], and in 2008 chose to move to code generation. This article reports the experience of GETS in developing its own modeling standard through customizing the MAAB rules for the railway signaling domain and shows the result of this experience with a successful product development story.
ART/Ada design project, phase 1. Task 3 report: Test plan
NASA Technical Reports Server (NTRS)
Allen, Bradley P.
1988-01-01
The plan is described for the integrated testing and benchmarking of the Phase 1 Ada-based ESBT Design Research Project. The integration testing is divided into two phases: (1) the modules that do not rely on the Ada code generated by the Ada Generator are tested before the Ada Generator is implemented; and (2) all modules are integrated and tested with the Ada code generated by the Ada Generator. Its performance and size, as well as its functionality, are verified in this phase. The target platform is a DEC Ada compiler on VAX mini-computers and VAX stations running the VMS operating system.
User's manual for PRESTO: A computer code for the performance of regenerative steam turbine cycles
NASA Technical Reports Server (NTRS)
Fuller, L. C.; Stovall, T. K.
1979-01-01
Standard turbine cycles for baseload power plants, as well as cycles with additional features such as process steam extraction and induction and feedwater heating by external heat sources, may be modeled. Peaking and high back pressure cycles are also included. The code's methodology is to use the expansion line efficiencies, exhaust loss, leakages, mechanical losses, and generator losses to calculate the heat rate and generator output. A general description of the code is given as well as the instructions for input data preparation. Appended are two complete example cases.
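The heat-balance arithmetic described above can be illustrated as follows; the structure and all numbers in this Python sketch are illustrative assumptions and do not reproduce PRESTO's actual procedure.

# Hedged sketch: generator output from stage works, efficiencies and loss
# terms, and the resulting heat rate. Values are illustrative only.
def generator_output(stage_ideal_works, expansion_line_eff, exhaust_loss,
                     leakage_loss, mechanical_loss, generator_eff):
    shaft = sum(w * e for w, e in zip(stage_ideal_works, expansion_line_eff))
    shaft -= exhaust_loss + leakage_loss + mechanical_loss
    return shaft * generator_eff                 # kW at the generator terminals

def heat_rate(heat_input_kw, output_kw):
    return heat_input_kw * 3600.0 / output_kw    # kJ/kWh

out = generator_output([40000, 35000, 30000], [0.88, 0.90, 0.86],
                       exhaust_loss=1500, leakage_loss=800,
                       mechanical_loss=600, generator_eff=0.985)
print(f"output: {out:.0f} kW, heat rate: {heat_rate(250000, out):.0f} kJ/kWh")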
Preliminary results of 3D dose calculations with MCNP-4B code from a SPECT image.
Rodríguez Gual, M; Lima, F F; Sospedra Alfonso, R; González González, J; Calderón Marín, C
2004-01-01
Interface software was developed to generate the input file to run the Monte Carlo MCNP-4B code from a medical image in Interfile format version 3.3. The software was tested using a spherical phantom of tomography slices with a known cumulated activity distribution in Interfile format, generated with the IMAGAMMA medical image processing system. The 3D dose calculation obtained with the Monte Carlo MCNP-4B code was compared with the voxel S factor method. The results show a relative error between the two methods of less than 1%.
NASA Astrophysics Data System (ADS)
Favata, Antonino; Micheletti, Andrea; Ryu, Seunghwa; Pugno, Nicola M.
2016-10-01
An analytical benchmark and a simple consistent Mathematica program are proposed for graphene and carbon nanotubes, which may serve to test any molecular dynamics code implemented with REBO potentials. By exploiting the benchmark, we checked the results produced by LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) when adopting the second generation Brenner potential, showed that this code in its current implementation produces results offset from those of the benchmark by a significant amount, and provided evidence of the reason.
Loft: An Automated Mesh Generator for Stiffened Shell Aerospace Vehicles
NASA Technical Reports Server (NTRS)
Eldred, Lloyd B.
2011-01-01
Loft is an automated mesh generation code that is designed for aerospace vehicle structures. From user input, Loft generates meshes for wings, noses, tanks, fuselage sections, thrust structures, and so on. As a mesh is generated, each element is assigned properties to mark the part of the vehicle with which it is associated. This property assignment is an extremely powerful feature that enables detailed analysis tasks, such as load application and structural sizing. This report is presented in two parts. The first part is an overview of the code and its applications. The modeling approach that was used to create the finite element meshes is described. Several applications of the code are demonstrated, including a Next Generation Launch Technology (NGLT) wing-sizing study, a lunar lander stage study, a launch vehicle shroud shape study, and a two-stage-to-orbit (TSTO) orbiter. Part two of the report is the program user manual. The manual includes in-depth tutorials and a complete command reference.
Engineering High Assurance Distributed Cyber Physical Systems
2015-01-15
...decisions: the number of interacting agents and co-dependent decisions made in real time without causing interference. To engineer a high assurance DART system, ...environment specification, architecture definition, domain-specific languages, design patterns, code generation, analysis, test generation, and simulation... include synchronization between the models and source code, debugging at the model level, expression of the design intent, and quality of service.
Evaluation of the efficiency and reliability of software generated by code generators
NASA Technical Reports Server (NTRS)
Schreur, Barbara
1994-01-01
There are numerous studies which show that CASE Tools greatly facilitate software development. As a result of these advantages, an increasing amount of software development is done with CASE Tools. As more software engineers become proficient with these tools, their experience and feedback lead to further development with the tools themselves. What has not been widely studied, however, is the reliability and efficiency of the actual code produced by the CASE Tools. This investigation considered these matters. Three segments of code generated by MATRIXx, one of many commercially available CASE Tools, were chosen for analysis: ETOFLIGHT, a portion of the Earth to Orbit Flight software, and ECLSS and PFMC, modules for Environmental Control and Life Support System and Pump Fan Motor Control, respectively.
Hypersonic code efficiency and validation studies
NASA Technical Reports Server (NTRS)
Bennett, Bradford C.
1992-01-01
Renewed interest in hypersonic and supersonic flows spurred the development of the Compressible Navier-Stokes (CNS) code. Originally developed for external flows, CNS was modified to enable it to be applied to internal high speed flows as well. In the initial phase of this study, CNS was applied to internal flow problems and fellow researchers were taught to run CNS. The second phase of this research was the development of surface grids over various aircraft configurations for the High Speed Research Program (HSRP). The complex nature of these configurations required the development of improved surface grid generation techniques. A significant portion of the grid generation effort was devoted to testing and recommending modifications to early versions of the S3D surface grid generation code.
Mr.CAS-A minimalistic (pure) Ruby CAS for fast prototyping and code generation
NASA Astrophysics Data System (ADS)
Ragni, Matteo
There are Computer Algebra Systems (CAS) on the market with complete solutions for the manipulation of analytical models. However, exporting a model that implements specific algorithms on specific platforms, for target languages or for a particular numerical library, is often a rigid procedure that requires manual post-processing. This work presents a Ruby library that exposes core CAS capabilities, i.e. simplification, substitution, evaluation, etc. The library aims at programmers who need to rapidly prototype and generate numerical code for different target languages, while keeping the mathematical expressions separate from the code-generation rules, where best practices for numerical conditioning are implemented. The library is written in pure Ruby and is compatible with most Ruby interpreters.
Optimisation of wire-cut EDM process parameter by Grey-based response surface methodology
NASA Astrophysics Data System (ADS)
Kumar, Amit; Soota, Tarun; Kumar, Jitendra
2018-03-01
Wire electric discharge machining (WEDM) is one of the advanced machining processes. Response surface methodology coupled with the Grey relational analysis method has been proposed and used to optimise the machining parameters of WEDM. A face centred cubic design is used for conducting experiments on high speed steel (HSS) M2 grade workpiece material. The regression model of significant factors such as pulse-on time, pulse-off time, peak current and wire feed is considered for optimising the response variables material removal rate (MRR), surface roughness and kerf width. The optimal condition of the machining parameters was obtained using the Grey relational grade. ANOVA is applied to determine the significance of the input parameters for optimising the Grey relational grade.
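The Grey relational grade computation referred to above is standard and can be sketched briefly; the response data in the Python sketch below are illustrative, not the paper's measurements.

# Hedged sketch of Grey relational analysis: normalise each response,
# compute Grey relational coefficients, average them into a grade.
import numpy as np

# rows = experimental runs; columns = MRR (larger-better), surface
# roughness and kerf width (smaller-better); values are made up
Y = np.array([[12.1, 2.8, 0.31],
              [15.4, 3.2, 0.35],
              [10.8, 2.1, 0.28],
              [14.0, 2.5, 0.30]])
larger_better = [True, False, False]

norm = np.empty_like(Y, dtype=float)
for j, lb in enumerate(larger_better):
    col = Y[:, j]
    norm[:, j] = ((col - col.min()) / (col.max() - col.min()) if lb
                  else (col.max() - col) / (col.max() - col.min()))

delta = 1.0 - norm                      # deviation from the ideal sequence
zeta = 0.5                              # distinguishing coefficient
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
grade = grc.mean(axis=1)                # Grey relational grade per run
print("best run:", int(np.argmax(grade)), "grades:", np.round(grade, 3))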
NASA Technical Reports Server (NTRS)
Houston, Johnny L.
1990-01-01
Program EAGLE (Eglin Arbitrary Geometry Implicit Euler) is a multiblock grid generation and steady-state flow solver system. This system combines a boundary conforming surface generation scheme, a composite block structure grid generation scheme, and a multiblock implicit Euler flow solver algorithm. The three codes are intended to be used sequentially, from the definition of the configuration under study to the flow solution about the configuration. EAGLE was specifically designed to aid in the analysis of both freestream and interference flow field configurations. These configurations can be comprised of single or multiple bodies ranging from simple axisymmetric airframes to complex aircraft shapes with external weapons. Each body can be arbitrarily shaped with or without multiple lifting surfaces. Program EAGLE is written to compile and execute efficiently on any CRAY machine with or without Solid State Disk (SSD) devices. Also, the code uses namelist inputs, which are supported by all CRAY machines using the FORTRAN compiler CF77. The use of namelist inputs makes it easier for the user to understand the inputs and to operate Program EAGLE. Recently, the code was modified to operate on other computers, especially the Sun SPARC 4 workstation. Several two-dimensional grid configurations were completely and successfully developed using EAGLE. Currently, EAGLE is being used for three-dimensional grid applications.
Medical reliable network using concatenated channel codes through GSM network.
Ahmed, Emtithal; Kohno, Ryuji
2013-01-01
Although the 4th generation (4G) of the global mobile communication network, i.e. Long Term Evolution (LTE), coexisting with the 3rd generation (3G), has successfully started, the 2nd generation (2G), i.e. the Global System for Mobile communication (GSM), is still playing an important role in many developing countries. Without any other reliable network infrastructure, GSM can be applied for tele-monitoring applications, where high mobility and low cost are necessary. A core objective of this paper is to introduce the design of a more reliable and dependable Medical Network Channel Code system (MNCC) through the GSM network. The MNCC design is based on a simple concatenated channel code, a cascade of an inner code (GSM) and an extra outer code (a convolutional code), in order to protect medical data more robustly against channel errors than other data using the existing GSM network. In this paper, the MNCC system provides a Bit Error Rate (BER) equivalent to the BER required for medical tele-monitoring of physiological signals, which is 10^-5 or less. The performance of the MNCC has been proven and investigated using computer simulations under different channel conditions, such as Additive White Gaussian Noise (AWGN), Rayleigh noise and burst noise. In general, the MNCC system provides better performance than GSM alone.
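The extra outer code can be illustrated with a short sketch. The following Python fragment implements a rate-1/2, constraint-length-3 convolutional encoder; the generator polynomials are a common textbook choice and are an assumption, since the abstract does not fix them.

# Hedged sketch: rate-1/2 convolutional encoder (generators 7 and 5 octal)
# applied to medical data before it enters the GSM (inner-code) channel.
def conv_encode(bits, g1=0b111, g2=0b101, K=3):
    state = 0
    out = []
    for b in bits + [0] * (K - 1):                  # flush with tail bits
        state = ((state << 1) | b) & ((1 << K) - 1)
        out.append(bin(state & g1).count("1") % 2)  # parity for generator 1
        out.append(bin(state & g2).count("1") % 2)  # parity for generator 2
    return out

payload = [1, 0, 1, 1, 0, 0, 1]                     # e.g. digitised ECG bits
print(conv_encode(payload))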
Recent applications of the transonic wing analysis computer code, TWING
NASA Technical Reports Server (NTRS)
Subramanian, N. R.; Holst, T. L.; Thomas, S. D.
1982-01-01
An evaluation of the transonic-wing-analysis computer code TWING is given. TWING utilizes a fully implicit approximate factorization iteration scheme to solve the full potential equation in conservative form. A numerical elliptic-solver grid-generation scheme is used to generate the required finite-difference mesh. Several wing configurations were analyzed, and the limits of applicability of this code were evaluated. Comparisons of computed results were made with available experimental data. Results indicate that the code is robust, accurate (when significant viscous effects are not present), and efficient. TWING generally produces solutions an order of magnitude faster than other conservative full potential codes using successive-line overrelaxation. The present method is applicable to a wide range of isolated wing configurations, including high-aspect-ratio transport wings and low-aspect-ratio, high-sweep fighter configurations.
User's manual for Axisymmetric Diffuser Duct (ADD) code. Volume 1: General ADD code description
NASA Technical Reports Server (NTRS)
Anderson, O. L.; Hankins, G. B., Jr.; Edwards, D. E.
1982-01-01
This User's Manual contains a complete description of the computer codes known as the AXISYMMETRIC DIFFUSER DUCT code or ADD code. It includes a list of references which describe the formulation of the ADD code and comparisons of calculation with experimental flows. The input/output and general use of the code is described in the first volume. The second volume contains a detailed description of the code including the global structure of the code, list of FORTRAN variables, and descriptions of the subroutines. The third volume contains a detailed description of the CODUCT code which generates coordinate systems for arbitrary axisymmetric ducts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arai, Kenji; Ebata, Shigeo
1997-07-01
This paper summarizes the current and anticipated use of thermal-hydraulic and neutronic codes for BWR transient and accident analyses in Japan. The codes may be categorized into licensing codes and best estimate codes for BWR transient and accident analyses. Most of the licensing codes were originally developed by General Electric. Some codes have been updated based on the technical knowledge obtained in thermal-hydraulic studies in Japan, and according to BWR design changes. The best estimate codes have been used to support the licensing calculations and to obtain a phenomenological understanding of the thermal-hydraulic phenomena during a BWR transient or accident. The best estimate codes can also be applied to a design study for a next generation BWR to which the current licensing model may not be directly applied. In order to rationalize the margin included in the current BWR design and develop a next generation reactor with an appropriate design margin, it will be required to improve the accuracy of the thermal-hydraulic and neutronic models. In addition, regarding the current best estimate codes, improvements in the user interface and the numerics will be needed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walker, Andrew; Lawrence, Earl
The Response Surface Modeling (RSM) Tool Suite is a collection of three codes used to generate an empirical interpolation function for a collection of drag coefficient calculations computed with Test Particle Monte Carlo (TPMC) simulations. The first code, "Automated RSM", automates the generation of a drag coefficient RSM for a particular object to a single command. "Automated RSM" first creates a Latin Hypercube Sample (LHS) of 1,000 ensemble members to explore the global parameter space. For each ensemble member, a TPMC simulation is performed and the object drag coefficient is computed. In the next step of the "Automated RSM" code, a Gaussian process is used to fit the TPMC simulations. In the final step, Markov Chain Monte Carlo (MCMC) is used to evaluate the non-analytic probability distribution function from the Gaussian process. The second code, "RSM Area", creates a look-up table for the projected area of the object based on input limits on the minimum and maximum allowed pitch and yaw angles and pitch and yaw angle intervals. The projected area from the look-up table is used to compute the ballistic coefficient of the object based on its pitch and yaw angle. An accurate ballistic coefficient is crucial in accurately computing the drag on an object. The third code, "RSM Cd", uses the RSM generated by the "Automated RSM" code and the projected area look-up table generated by the "RSM Area" code to accurately compute the drag coefficient and ballistic coefficient of the object. The user can modify the object velocity, object surface temperature, the translational temperature of the gas, the species concentrations of the gas, and the pitch and yaw angles of the object. Together, these codes allow for the accurate derivation of an object's drag coefficient and ballistic coefficient under any conditions with only knowledge of the object's geometry and mass.
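The "Automated RSM" pipeline shape (Latin hypercube sampling, an expensive simulation per member, a Gaussian-process fit) can be sketched as follows; a toy function stands in for the TPMC drag computation, and the scikit-learn/SciPy APIs are assumptions about tooling, not the suite's actual implementation.

# Hedged sketch: LHS over the parameter space, one expensive run per
# ensemble member, and a Gaussian-process response surface fit.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def tpmc_drag_stub(x):
    # Placeholder for a Test Particle Monte Carlo drag-coefficient run.
    return 2.2 + 0.3 * np.sin(3 * x[:, 0]) + 0.1 * x[:, 1] ** 2

sampler = qmc.LatinHypercube(d=2, seed=0)
X = qmc.scale(sampler.random(n=1000), [0, -1], [1, 1])  # 1,000-member LHS
y = tpmc_drag_stub(X)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-4)
gp.fit(X, y)                                            # response surface
mean, std = gp.predict(qmc.scale(sampler.random(n=5), [0, -1], [1, 1]),
                       return_std=True)
print(np.round(mean, 3), np.round(std, 4))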
Optimising the efficiency of pulsed diode pumped Yb:YAG laser amplifiers for ns pulse generation.
Ertel, K; Banerjee, S; Mason, P D; Phillips, P J; Siebold, M; Hernandez-Gomez, C; Collier, J C
2011-12-19
We present a numerical model of a pulsed, diode-pumped Yb:YAG laser amplifier for the generation of high energy ns-pulses. This model is used to explore how optical-to-optical efficiency depends on factors such as pump duration, pump spectrum, pump intensity, doping concentration, and operating temperature. We put special emphasis on finding ways to achieve high efficiency within the practical limitations imposed by real-world laser systems, such as limited pump brightness and limited damage fluence. We show that a particularly advantageous way of improving efficiency within those constraints is operation at cryogenic temperature. Based on the numerical findings we present a concept for a scalable amplifier based on an end-pumped, cryogenic, gas-cooled multi-slab architecture.
Pulsed source of spectrally uncorrelated and indistinguishable photons at telecom wavelengths.
Bruno, N; Martin, A; Guerreiro, T; Sanguinetti, B; Thew, R T
2014-07-14
We report on the generation of indistinguishable photon pairs at telecom wavelengths based on a type-II parametric down-conversion process in a periodically poled potassium titanyl phosphate (PPKTP) crystal. The phase matching, pump laser characteristics and coupling geometry are optimised to obtain spectrally uncorrelated photons with high coupling efficiencies. Four photons are generated by a counter-propagating pump in the same crystal and analysed via two-photon interference experiments between photons from each pair source, as well as joint spectral and g(2) measurements. We obtain a spectral purity of 0.91 and coupling efficiencies around 90% for all four photons without any filtering. These pure indistinguishable photon sources at telecom wavelengths are perfectly adapted for quantum network demonstrations and other multi-photon protocols.
NASA Technical Reports Server (NTRS)
Anderson, B. H.; Putt, C. W.; Giamati, C. C.
1981-01-01
Color coding techniques used in the processing of remote sensing imagery were adapted and applied to the fluid dynamics problems associated with turbofan mixer nozzles. The computer generated color graphics were found to be useful in reconstructing the measured flow field from low resolution experimental data to give more physical meaning to this information and in scanning and interpreting the large volume of computer generated data from the three dimensional viscous computer code used in the analysis.
IGGy: An interactive environment for surface grid generation
NASA Technical Reports Server (NTRS)
Prewitt, Nathan C.
1992-01-01
A graphically interactive derivative of the EAGLE boundary code is presented. This code allows the user to interactively build and execute commands and immediately see the results. Strong ties with a batch oriented script language are maintained. A generalized treatment of grid definition parameters allows a more generic definition of the grid generation process and allows the generation of command scripts which can be applied to topologically similar configurations. The use of the graphical user interface is outlined and example applications are presented.
Hanjabam, Mandakini Devi; Kannaiyan, Sathish Kumar; Kamei, Gaihiamngam; Jakhar, Jitender Kumar; Chouksey, Mithlesh Kumar; Gudipati, Venkateshwarlu
2015-02-01
Physical properties of gelatin extracted from Unicorn leatherjacket (Aluterus monoceros) skin, which is generated as a waste from fish processing industries, were optimised using Response Surface Methodology (RSM). A Box-Behnken design was used to study the combined effects of three independent variables, namely phosphoric acid (H3PO4) concentration (0.15-0.25 M), extraction temperature (40-50 °C) and extraction time (4-12 h), on different responses such as yield, gel strength and melting point of gelatin. The optimum conditions derived by RSM for the yield (10.58%) were 0.2 M H3PO4, 9.01 h of extraction time and hot water extraction at 45.83 °C. The maximum achieved gel strength and melting point were 138.54 g and 22.61 °C, respectively. Extraction time was found to be the most influential variable and had a positive coefficient on yield and a negative coefficient on gel strength and melting point. The results indicated that Unicorn leatherjacket skins can be a source of gelatin with mild gel strength and melting point.
Garcia, Justine; Yang, ZhiLin; Mongrain, Rosaire; Leask, Richard L; Lachapelle, Kevin
2018-01-01
3D printing is a new technology in constant evolution. It has rapidly expanded and is now being used in health education. Patient-specific models with anatomical fidelity, created from imaging datasets, have the potential to significantly improve the knowledge and skills of a new generation of surgeons. This review outlines the five technical steps required to complete a printed model: (1) selecting the anatomical area of interest, (2) creating the 3D geometry, (3) optimising the file for printing, and the appropriate selection of (4) the 3D printer and (5) materials. All of these steps require time, expertise and money. A thorough understanding of educational needs is therefore essential in order to optimise educational value. At present, most of the available printing materials are rigid and therefore not optimal for flexibility and elasticity, unlike biological tissue. We believe that the manipulation and tuning of material properties, through the creation of composites and/or the blending of materials, will eventually allow for the creation of patient-specific models which have both anatomical and tissue fidelity. PMID:29354281
Optimal Earth's reentry disposal of the Galileo constellation
NASA Astrophysics Data System (ADS)
Armellin, Roberto; San-Juan, Juan F.
2018-02-01
Nowadays there is international consensus that space activities must be managed to minimise debris generation and risk. This paper presents a method for the end-of-life (EoL) disposal of spacecraft in Medium Earth Orbit (MEO). The problem is formulated as a multiobjective optimisation problem, which is solved with an evolutionary algorithm. An impulsive manoeuvre is optimised to reenter the spacecraft into Earth's atmosphere within 100 years. Pareto optimal solutions are obtained using the manoeuvre Δv and the time-to-reentry as the objective functions to be minimised. To explore the search space as effectively as possible, a semi-analytical orbit propagator, which can propagate an orbit for 100 years in a few seconds, is adopted. An in-depth analysis of the results is carried out to understand the conditions leading to a fast reentry with minimum propellant. For this purpose a new way of representing the disposal solutions is introduced: with a single 2D plot we are able to fully describe the time evolution of all the relevant orbital parameters and to identify the conditions that enable the eccentricity build-up. The EoL disposal of the Galileo constellation is used as a test case.
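The Pareto step can be made concrete with a small helper that filters a set of candidate (Δv, time-to-reentry) pairs down to the non-dominated front, both objectives being minimised. This is a generic sketch, not the paper's evolutionary algorithm.

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated rows of an (n, 2) array of
    (delta_v, time_to_reentry) pairs, both to be minimised."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        # p is dominated if some q is <= p in both objectives and < in one.
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return pts[keep]

# Illustrative candidates: (delta_v in km/s, time to reentry in years).
cands = np.array([[0.10, 80.0], [0.20, 40.0], [0.15, 90.0], [0.30, 25.0]])
print(pareto_front(cands))  # [0.15, 90.0] is dominated by [0.10, 80.0]
```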
Pizzolato, Claudio; Lloyd, David G.; Sartori, Massimo; Ceseracciu, Elena; Besier, Thor F.; Fregly, Benjamin J.; Reggiani, Monica
2015-01-01
Personalized neuromusculoskeletal (NMS) models can represent the neurological, physiological, and anatomical characteristics of an individual and can be used to estimate the forces generated inside the human body. Currently, publicly available software to calculate muscle forces is restricted to static and dynamic optimisation methods, or limited to isometric tasks only. We have created and made freely available to the research community the Calibrated EMG-Informed NMS Modelling Toolbox (CEINMS), an OpenSim plug-in that enables investigators to predict different neural control solutions for the same musculoskeletal geometry and measured movements. CEINMS comprises EMG-driven and EMG-informed algorithms that have been previously published and tested. It operates on dynamic skeletal models possessing any number of degrees of freedom and musculotendon units and can be calibrated to the individual to predict measured joint moments and EMG patterns. In this paper we describe the components of CEINMS and its integration with OpenSim. We then analyse how EMG-driven, EMG-assisted, and static optimisation neural control solutions affect the estimated joint moments, muscle forces, and muscle excitations, including muscle co-contraction. PMID:26522621
Wu, Zujian; Pang, Wei; Coghill, George M
2015-01-01
Both qualitative and quantitative model learning frameworks for biochemical systems have been studied in computational systems biology. In this research, after introducing two forms of pre-defined component patterns to represent biochemical models, we propose an integrative qualitative and quantitative modelling framework for inferring biochemical systems. In the proposed framework, interactions between reactants in the candidate models for a target biochemical system are evolved and eventually identified by the application of a qualitative model learning approach with an evolution strategy. Kinetic rates of the models generated from qualitative model learning are then further optimised by employing a quantitative approach with simulated annealing. Experimental results indicate that our proposed integrative framework can learn the relationships between biochemical reactants qualitatively and make the models replicate the behaviours of the target system by optimising the kinetic rates quantitatively. Moreover, potential reactants of a target biochemical system can be discovered by hypothesising complex reactants in the synthetic models. Based on the biochemical models learned from the proposed framework, biologists can then perform experimental studies in the wet laboratory. In this way, natural biochemical systems can be better understood.
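The quantitative stage can be sketched as generic simulated annealing over a vector of kinetic rates. The `simulate` function and target behaviour below are toy placeholders for the paper's model simulator and observed time-series, purely to show the control flow.

```python
import math
import random

def anneal(cost, x0, step=0.1, t0=1.0, cooling=0.95, iters=2000, seed=0):
    """Minimise cost(x) over positive rate vectors by simulated annealing."""
    random.seed(seed)
    x, fx, t = list(x0), cost(x0), t0
    best, fbest = list(x), fx
    for _ in range(iters):
        y = list(x)
        i = random.randrange(len(y))
        y[i] = max(1e-9, y[i] + random.gauss(0.0, step))  # keep rates positive
        fy = cost(y)
        # Accept improvements always, worsenings with Boltzmann probability.
        if fy < fx or random.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling
    return best, fbest

# Toy stand-in for "run the model and compare with the target behaviour".
target = [1.0, 0.5]
simulate = lambda k: [k[0] * k[1], k[0] - k[1]]
cost = lambda k: sum((s - t) ** 2 for s, t in zip(simulate(k), target))
print(anneal(cost, [0.5, 0.5]))
```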
Capacity-optimized mp2 audio watermarking
NASA Astrophysics Data System (ADS)
Steinebach, Martin; Dittmann, Jana
2003-06-01
Today a number of audio watermarking algorithms have been proposed, some of them at a quality making them suitable for commercial applications. The focus of most of these algorithms is copyright protection. Therefore, transparency and robustness are the most discussed and optimised parameters. But other applications for audio watermarking can also be identified stressing other parameters like complexity or payload. In our paper, we introduce a new mp2 audio watermarking algorithm optimised for high payload. Our algorithm uses the scale factors of an mp2 file for watermark embedding. They are grouped and masked based on a pseudo-random pattern generated from a secret key. In each group, we embed one bit. Depending on the bit to embed, we change the scale factors by adding 1 where necessary until it includes either more even or uneven scale factors. An uneven group has a 1 embedded, an even group a 0. The same rule is later applied to detect the watermark. The group size can be increased or decreased for transparency/payload trade-off. We embed 160 bits or more in an mp2 file per second without reducing perceived quality. As an application example, we introduce a prototypic Karaoke system displaying song lyrics embedded as a watermark.
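The embedding rule lends itself to a compact sketch: within each key-selected group, scale factors are incremented by 1 until the parity majority encodes the desired bit. The key-driven grouping and the mp2 bitstream handling are omitted; this shows only the parity logic described above.

```python
def embed_bit(group, bit):
    """Push a group of integer scale factors toward an odd-majority (bit=1)
    or even-majority (bit=0) by adding 1 where necessary."""
    group = list(group)
    while True:
        odd = sum(v % 2 for v in group)
        even = len(group) - odd
        if (bit == 1 and odd > even) or (bit == 0 and even > odd):
            return group
        # Adding 1 flips a value's parity toward the wanted majority.
        unwanted = 0 if bit == 1 else 1
        i = next(j for j, v in enumerate(group) if v % 2 == unwanted)
        group[i] += 1

def extract_bit(group):
    odd = sum(v % 2 for v in group)
    return 1 if odd > len(group) - odd else 0

g = embed_bit([17, 4, 9, 22, 3, 8], 1)   # toy scale-factor group
assert extract_bit(g) == 1
```

Increasing the group size lowers the payload, which is the trade-off mentioned above.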
Pender, Alexandra; Garcia-Murillas, Isaac; Rana, Sareena; Cutts, Rosalind J.; Kelly, Gavin; Fenwick, Kerry; Kozarewa, Iwanka; Gonzalez de Castro, David; Bhosle, Jaishree; O’Brien, Mary; Turner, Nicholas C.; Popat, Sanjay; Downward, Julian
2015-01-01
Droplet digital PCR (ddPCR) can be used to detect low frequency mutations in oncogene-driven lung cancer. The range of KRAS point mutations observed in NSCLC necessitates a multiplex approach to efficient mutation detection in circulating DNA. Here we report the design and optimisation of three discriminatory ddPCR multiplex assays investigating nine different KRAS mutations using PrimePCR™ ddPCR™ Mutation Assays and the Bio-Rad QX100 system. Together these mutations account for 95% of the nucleotide changes found in KRAS in human cancer. Multiplex reactions were optimised on genomic DNA extracted from KRAS mutant cell lines and tested on DNA extracted from fixed tumour tissue from a cohort of lung cancer patients without prior knowledge of the specific KRAS genotype. The multiplex ddPCR assays had a limit of detection of better than 1 mutant KRAS molecule in 2,000 wild-type KRAS molecules, which compared favourably with a limit of detection of 1 in 50 for next generation sequencing and 1 in 10 for Sanger sequencing. Multiplex ddPCR assays thus provide a highly efficient methodology to identify KRAS mutations in lung adenocarcinoma. PMID:26413866
NASA Astrophysics Data System (ADS)
Miotk, R.; Jasiński, M.; Mizeraczyk, J.
2018-03-01
This paper presents the partial electromagnetic optimisation of a 2.45 GHz cylindrical-type microwave plasma source (MPS) operated at atmospheric pressure. The device is designed for hydrogen production from liquid fuels, e.g. hydrocarbons and alcohols. Due to industrial requirements for low-cost hydrogen production, previous testing indicated that the electromagnetic performance of the MPS required improvement. The MPS has a duct discontinuity region, which results from the cylindrical structure located within the device; the microwave plasma is generated in this discontinuity region. Rigorous analysis of the region requires solving a set of Maxwell equations, which is burdensome for complicated structures, and the presence of the microwave plasma further increases the complexity of this task. To avoid solving the full Maxwell equations, we use the equivalent circuit method, based on the idea of using a Weissfloch circuit to characterise the region of the duct discontinuity and the plasma. The resulting MPS equivalent circuit allowed the design of a capacitive metallic diaphragm through which an improvement in the electromagnetic performance of the plasma source was obtained.
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1999-01-01
The paper identifies speed, agility, human interface, generation of sensitivity information, task decomposition, and data transmission (including storage) as important attributes for a computer environment to have in order to support engineering design effectively. It is argued that, when examined in terms of these attributes, the presently available environment is inadequate. A radical improvement is needed, and it may be achieved by combining new methods that have recently emerged from multidisciplinary design optimisation (MDO) with massively parallel processing computer technology. The caveat is that, for successful use of that technology in engineering computing, new paradigms for computing will have to be developed - specifically, innovative algorithms that are intrinsically parallel, so that their performance scales up linearly with the number of processors. The idea of simulating a complex behaviour by the interaction of a large number of very simple models may be an inspiration for such algorithms; cellular automata are an example. Because of the long lead time needed to develop and mature new paradigms, development should begin now, even though the widespread availability of massively parallel processing is still a few years away.
NASA Astrophysics Data System (ADS)
Wu, C. Z.; Huang, G. H.; Yan, X. P.; Cai, Y. P.; Li, Y. P.
2010-05-01
Large crowds are increasingly common at political, social, economic, cultural and sports events in urban areas. This has drawn attention to the management of evacuations in such situations. In this study, we optimise an approximation method for vehicle allocation and route planning in case of an evacuation. This method, based on an interval-parameter multi-objective optimisation model, has potential for use in a flexible decision support system for evacuation management. The modelling solutions are obtained by sequentially solving two sub-models corresponding to lower- and upper-bounds for the desired objective function value. The interval solutions are feasible and stable in the given decision space, and this may reduce the negative effects of uncertainty, thereby improving decision makers' estimates under different conditions. The resulting model can be used for a systematic analysis of the complex relationships among evacuation time, cost and environmental considerations. The results of a case study used to validate the proposed model show that the model does generate useful solutions for planning evacuation management and practices. Furthermore, these results are useful for evacuation planners, not only in making vehicle allocation decisions but also for providing insight into the tradeoffs among evacuation time, environmental considerations and economic objectives.
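A minimal sketch of the two-sub-model idea: with interval cost coefficients on a toy vehicle-allocation linear programme, solving once with the optimistic (lower-bound) and once with the pessimistic (upper-bound) coefficients brackets the objective value. All numbers, and the use of scipy's linprog, are illustrative assumptions, not the paper's model.

```python
import numpy as np
from scipy.optimize import linprog

# Toy allocation: x = buses on two evacuation routes; minimise total cost.
c_lo = np.array([4.0, 6.0])   # lower-bound cost per bus on each route
c_hi = np.array([5.0, 9.0])   # upper-bound cost per bus on each route

A_ub = [[-30.0, -20.0]]       # buses carry 30 / 20 evacuees; need >= 600 total
b_ub = [-600.0]
bounds = [(0, 25), (0, 25)]   # fleet limits per route

lo = linprog(c_lo, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
hi = linprog(c_hi, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("objective interval:", [round(lo.fun, 1), round(hi.fun, 1)])
```

The decision maker then works with the resulting interval [f-, f+] rather than a single point estimate.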
Sensewheel: an adjunct to wheelchair skills training
Taylor, Stephen J.G.; Holloway, Catherine
2016-01-01
The purpose of this Letter was to investigate the influence of real-time verbal feedback to optimise push arc during over ground manual wheelchair propulsion. Ten healthy non-wheelchair users pushed a manual wheelchair for a distance of 25 m on level paving, initially with no feedback and then with real-time verbal feedback aimed at controlling push arc within a range of 85°–100°. The real-time feedback was provided by a physiotherapist walking behind the wheelchair, viewing real-time data on a tablet personal computer received from the Sensewheel, a lightweight instrumented wheelchair wheel. The real-time verbal feedback enabled the participants to significantly increase their push arc. This increase in push arc resulted in a non-significant reduction in push rate and a significant increase in peak force application. The intervention enabled participants to complete the task at a higher mean velocity using significantly fewer pushes. This was achieved via a significant increase in the power generated during the push phase. This Letter identifies that a lightweight instrumented wheelchair wheel such as the Sensewheel is a useful adjunct to wheelchair skills training. Targeting the optimisation of push arc resulted in beneficial changes in propulsion technique. PMID:28008362
Systemic solutions for multi-benefit water and environmental management.
Everard, Mark; McInnes, Robert
2013-09-01
The environmental and financial costs of inputs to, and unintended consequences arising from narrow consideration of outputs from, water and environmental management technologies highlight the need for low-input solutions that optimise outcomes across multiple ecosystem services. Case studies examining the inputs and outputs associated with several ecosystem-based water and environmental management technologies reveal a range from those that differ little from conventional electro-mechanical engineering techniques through methods, such as integrated constructed wetlands (ICWs), designed explicitly as low-input systems optimising ecosystem service outcomes. All techniques present opportunities for further optimisation of outputs, and hence for greater cumulative public value. We define 'systemic solutions' as "…low-input technologies using natural processes to optimise benefits across the spectrum of ecosystem services and their beneficiaries". They contribute to sustainable development by averting unintended negative impacts and optimising benefits to all ecosystem service beneficiaries, increasing net economic value. Legacy legislation addressing issues in a fragmented way, associated 'ring-fenced' budgets and established management assumptions represent obstacles to implementing 'systemic solutions'. However, flexible implementation of legacy regulations recognising their primary purpose, rather than slavish adherence to detailed sub-clauses, may achieve greater overall public benefit through optimisation of outcomes across ecosystem services. Systemic solutions are not a panacea if applied merely as 'downstream' fixes, but are part of, and a means to accelerate, broader culture change towards more sustainable practice. This necessarily entails connecting a wider network of interests in the formulation and design of mutually-beneficial systemic solutions, including for example spatial planners, engineers, regulators, managers, farming and other businesses, and researchers working on ways to quantify and optimise delivery of ecosystem services. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Munk, David J.; Kipouros, Timoleon; Vio, Gareth A.; Steven, Grant P.; Parks, Geoffrey T.
2017-11-01
Recently, the study of microfluidic devices has gained much interest in various fields, from biology to engineering. In the constant development cycle, the need to optimise the topology of the interior of these devices, where there are two or more optimality criteria, is always present. In this work, twin physical situations are considered, whereby optimal fluid mixing, in the form of vorticity maximisation, is accompanied by the requirement that the casing in which the mixing takes place has the best structural performance in terms of the greatest specific stiffness. In the steady state of mixing, this also means that the stresses in the casing are as uniform as possible, thus giving a desired operating life with minimum weight. The ultimate aim of this research is to couple two key disciplines, fluids and structures, into a topology optimisation framework that shows fast convergence for multidisciplinary optimisation problems. This is achieved by developing a bi-directional evolutionary structural optimisation (BESO) algorithm that is directly coupled to the lattice Boltzmann method, used for simulating the flow in the microfluidic device, for the objectives of minimum compliance and maximum vorticity. The need to explore larger design spaces and to produce innovative designs makes meta-heuristic algorithms, such as genetic algorithms, particle swarms and Tabu searches, less efficient for this task. The multidisciplinary topology optimisation framework presented in this article is shown to increase the stiffness of the structure from the datum case and to produce physically acceptable designs. Furthermore, the topology optimisation method outperforms a Tabu search algorithm in designing the baffle to maximise the mixing of the two fluids.
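The evolutionary update at the core of a BESO scheme can be sketched as a ranking step: given per-element sensitivities (combined upstream from the compliance and vorticity objectives), the design moves a small step toward the target volume fraction by keeping the highest-sensitivity elements. This is an illustrative skeleton; published BESO algorithms also include sensitivity filtering and history averaging.

```python
import numpy as np

def beso_update(density, sensitivity, target_frac, evo_rate=0.02):
    """One BESO step: retain the k most 'valuable' elements, moving the
    solid volume fraction toward target_frac by at most evo_rate."""
    n = density.size
    frac = density.mean()
    step = np.clip(target_frac - frac, -evo_rate, evo_rate)
    k = max(1, int(round((frac + step) * n)))     # number of solid elements
    threshold = np.sort(sensitivity.ravel())[-k]  # k-th largest sensitivity
    return (sensitivity >= threshold).astype(float)

rng = np.random.default_rng(0)
rho = np.ones(100)        # start from a fully solid design
alpha = rng.random(100)   # stand-in for combined element sensitivities
for _ in range(30):
    rho = beso_update(rho, alpha, target_frac=0.5)
print(rho.mean())         # volume fraction approaches the 0.5 target
```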
Person-centred medicines optimisation policy in England: an agenda for research on polypharmacy.
Heaton, Janet; Britten, Nicky; Krska, Janet; Reeve, Joanne
2017-01-01
Aim To examine how patient perspectives and person-centred care values have been represented in documents on medicines optimisation policy in England. There has been growing support in England for a policy of medicines optimisation as a response to the rise of problematic polypharmacy. Conceptually, medicines optimisation differs from the medicines management model of prescribing in being based around the patient rather than processes and systems. This critical examination of current official and independent policy documents questions how central the patient is in them and whether relevant evidence has been utilised in their development. A documentary analysis was conducted of reports on medicines optimisation published by the Royal Pharmaceutical Society (RPS), The King's Fund and the National Institute for Health and Care Excellence since 2013. The analysis draws on a non-systematic review of research on patient experiences of using medicines. Findings The reports varied in their inclusion of patient perspectives and person-centred care values, and in the extent to which they drew on evidence from research on patients' experiences of polypharmacy and medicines use. In the RPS report, medicines optimisation is represented as being a 'step change' from medicines management, in contrast to the other documents, which suggest that it is facilitated by the systems and processes that comprise the latter model. Only The King's Fund report considered evidence from qualitative studies of people's use of medicines. However, these studies are not without their limitations. We suggest five ways in which researchers could improve this evidence base and so inform the development of future policy: by facilitating reviews of existing research; conducting studies of patient experiences of polypharmacy and multimorbidity; evaluating medicines optimisation interventions; making better use of relevant theories, concepts and tools; and improving patient and public involvement in research and in guideline development.
Assay optimisation and technology transfer for multi-site immuno-monitoring in vaccine trials
Harris, Stephanie A.; Satti, Iman; Bryan, Donna; Walker, K. Barry; Dockrell, Hazel M.; McShane, Helen; Ho, Mei Mei
2017-01-01
Cellular immunological assays are important tools for monitoring responses to T-cell-inducing vaccine candidates. As these bioassays are often technically complex and require considerable experience, careful technology transfer between laboratories is critical if high-quality, reproducible data that allow comparison between sites are to be generated. The aim of this study, funded by the European Union Framework Programme 7 TRANSVAC project, was to optimise Standard Operating Procedures and the technology transfer process to maximise the reproducibility of three bioassays for interferon-gamma responses: enzyme-linked immunosorbent assay (ELISA), ex-vivo enzyme-linked immunospot and intracellular cytokine staining. We found that the initial variability in results generated across three different laboratories was reduced by a combination of Standard Operating Procedure harmonisation and side-by-side training sessions in which assay operators performed each assay in the presence of an assay 'lead' operator. Mean inter-site coefficients of variation reduced following this training session when compared with the pre-training values, most notably for the ELISA assay. There was a trend towards increased inter-site variability at lower response magnitudes for the ELISA and intracellular cytokine staining assays. In conclusion, we recommend that on-site operator training is an essential component of the assay technology transfer process and, combined with harmonised Standard Operating Procedures, will improve the quality, reproducibility and comparability of data produced across different laboratories. These data may be helpful in ongoing discussions of the potential risks/benefits of centralised immunological assay strategies for large clinical trials versus decentralised units. PMID:29020010
"Any other comments?" Open questions on questionnaires – a bane or a bonus to research?
O'Cathain, Alicia; Thomas, Kate J
2004-01-01
Background The habitual "any other comments" general open question at the end of structured questionnaires has the potential to increase response rates, elaborate responses to closed questions, and allow respondents to identify new issues not captured in the closed questions. However, we believe that many researchers have collected such data and failed to analyse or present it. Discussion General open questions at the end of structured questionnaires can present a problem because of their uncomfortable status of being strictly neither qualitative nor quantitative data, the consequent lack of clarity around how to analyse and report them, and the time and expertise needed to do so. We suggest that the value of these questions can be optimised if researchers start with a clear understanding of the type of data they wish to generate from such a question, and employ an appropriate strategy when designing the study. The intention can be to generate depth data or 'stories' from purposively defined groups of respondents for qualitative analysis, or to produce quantifiable data, representative of the population sampled, as a 'safety net' to identify issues which might complement the closed questions. Summary We encourage researchers to consider developing a more strategic use of general open questions at the end of structured questionnaires. This may optimise the quality of the data and the analysis, reduce dilemmas regarding whether and how to analyse such data, and result in a more ethical approach to making best use of the data which respondents kindly provide. PMID:15533249
NASA Astrophysics Data System (ADS)
Desnijder, Karel; Hanselaer, Peter; Meuret, Youri
2016-04-01
A key requirement for obtaining a uniform luminance from a side-lit LED backlight is an optimised spatial pattern of the structures on the light guide that extract the light. Such a scatter pattern is usually generated with an iterative approach: in each iteration, the luminance distribution of the backlight with a particular scatter pattern is analysed. This is typically done with a brute-force ray-tracing algorithm, which makes the optimisation process time-consuming. In this study, the Adding-Doubling method is explored as an alternative way of evaluating the luminance of a backlight. Because light propagating in a backlight with extraction structures behaves much like light scattering in a cloud of scatterers, the Adding-Doubling method used to model the latter can also be used to model the light distribution in a backlight. The backlight problem is translated into a form to which the Adding-Doubling method is directly applicable. The luminance calculated with the Adding-Doubling method for a simple uniform extraction pattern matches the luminance generated by a commercial ray tracer very well. Although successful, the method realises no clear computational advantage over ray tracers. However, the description of light propagation in a light guide used in the Adding-Doubling method also allows the efficiency of brute-force ray-tracing algorithms to be enhanced. The performance of this enhanced ray-tracing approach for the simulation of backlights is also evaluated against a typical brute-force ray-tracing approach.
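The doubling idea at the heart of the method can be sketched for a homogeneous, symmetric slab: discretising over propagation angles turns reflection and transmission into matrices, and two identical layers combine through the geometric series of inter-layer bounces. A generic illustration of the classic doubling step, not the authors' implementation.

```python
import numpy as np

def double_layer(R, T):
    """Combine two identical layers (reflection R, transmission T; n x n
    operators over discretised angles) into one layer of twice the
    thickness, summing all multiple reflections between them."""
    n = R.shape[0]
    S = np.linalg.inv(np.eye(n) - R @ R)  # sum of the bounce series
    return R + T @ S @ R @ T, T @ S @ T   # (R_doubled, T_doubled)

# Start from a thin, weakly scattering layer and double 10 times
# to build a slab 2**10 times as thick.
n = 8
R = np.full((n, n), 0.01 / n)
T = 0.98 * np.eye(n) + np.full((n, n), 0.01 / n)
for _ in range(10):
    R, T = double_layer(R, T)
```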
Schutyser, M A I; Straatsma, J; Keijzer, P M; Verschueren, M; De Jong, P
2008-11-30
In the framework of a cooperative EU research project (MILQ-QC-TOOL), a web-based modelling tool (WebSim-MILQ) was developed for the optimisation of thermal treatments in the dairy industry. The web-based tool enables optimisation of thermal treatments with respect to product safety, quality and costs. It can be applied to existing products and processes but also to reduce time to market for new products. Important aspects of the tool are its user-friendliness and its specifications, customised to the needs of small dairy companies. To challenge the web-based tool, it was applied to the optimisation of thermal treatments in 16 dairy companies producing yoghurt, fresh cream, chocolate milk and cheese. Optimisation with WebSim-MILQ resulted in concrete improvements with respect to risk of microbial contamination, cheese yield, fouling and production costs. In this paper we illustrate the use of WebSim-MILQ for the optimisation of a cheese milk pasteurisation process, where we could increase the cheese yield (1 extra cheese per 100 cheeses produced from the same amount of milk) and reduce the risk of contamination of pasteurised cheese milk with thermoresistant streptococci from critical to negligible. In another case we demonstrate the advantage of changing from an indirect to a direct heating method for a UHT process, resulting in 80% less fouling while improving product quality and maintaining product safety.
Das, Anup Kumar; Mandal, Vivekananda; Mandal, Subhash C
2014-01-01
Extraction is the most basic step in research on natural products for drug discovery, and a poorly optimised and planned extraction methodology can jeopardise the entire mission. The objective is to provide a vivid picture of the different chemometric tools and the planning required for process optimisation and method development in the extraction of botanical material, with emphasis on microwave-assisted extraction (MAE). A review of studies involving the application of chemometric tools in combination with MAE of botanical materials was undertaken in order to identify the significant extraction factors. Experimental design, or statistical design of experiments (DoE), a core area of study in chemometrics, was then used for statistical analysis and interpretation, optimising a response by fine-tuning those factors. In this review, a brief explanation of the different aspects and methodologies related to MAE of botanical materials that have been subjected to experimental design is presented, along with some general chemometric tools and the steps involved in the practice of MAE. A detailed study of the various factors and responses involved in the optimisation is also presented. This article will assist in obtaining better insight into the chemometric strategies of process optimisation and method development, which will in turn improve the decision-making process in selecting influential extraction parameters. Copyright © 2013 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Brantut, Nicolas
2018-02-01
Acoustic emission and active ultrasonic wave velocity monitoring are often performed during laboratory rock deformation experiments, but are typically processed separately to yield homogenised wave velocity measurements and approximate source locations. Here I present a numerical method, and its implementation in free software, to perform a joint inversion for acoustic emission locations together with the three-dimensional, anisotropic P-wave structure of laboratory samples. The data used are the P-wave first arrivals obtained from acoustic emissions and active ultrasonic measurements. The model parameters are the source locations and the P-wave velocity and anisotropy parameter (assuming transverse isotropy) at discrete points in the material. The forward problem is solved using the fast marching method, and the inverse problem is solved by the quasi-Newton method. The algorithms are implemented in an integrated free software package called FaATSO (Fast Marching Acoustic Emission Tomography using Standard Optimisation). The code is employed to study the formation of compaction bands in a porous sandstone. During deformation, a front of acoustic emissions progresses from one end of the sample, associated with the formation of a sequence of horizontal compaction bands. Behind the active front, only sparse acoustic emissions are observed, but the tomography reveals that the P-wave velocity has dropped by up to 15%, with an increase in anisotropy of up to 20%. Compaction bands in sandstones are therefore shown to produce sharp changes in seismic properties. This result highlights the potential of the methodology to image temporal variations of elastic properties in complex geomaterials, including the dramatic, localised changes associated with microcracking and damage generation.
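For a feel of the forward problem, first arrivals on a gridded velocity model can be approximated with a Dijkstra sweep over grid neighbours. This is a deliberately simple stand-in for FaATSO's fast marching eikonal solver (and it ignores anisotropy), but it has the same input/output structure: a velocity field in, a travel-time field out.

```python
import heapq
import math
import numpy as np

def first_arrivals(velocity, src):
    """Approximate first-arrival times from src on a 2-D velocity grid
    using Dijkstra over 8-connected neighbours."""
    ny, nx = velocity.shape
    t = np.full((ny, nx), np.inf)
    t[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        ti, (i, j) = heapq.heappop(heap)
        if ti > t[i, j]:
            continue  # stale heap entry
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (di or dj) and 0 <= ni < ny and 0 <= nj < nx:
                    # Segment length times the mean local slowness.
                    dt = math.hypot(di, dj) * 0.5 * (1.0 / velocity[i, j]
                                                     + 1.0 / velocity[ni, nj])
                    if ti + dt < t[ni, nj]:
                        t[ni, nj] = ti + dt
                        heapq.heappush(heap, (ti + dt, (ni, nj)))
    return t

v = np.full((50, 50), 3.5)   # background P-wave speed (arbitrary units)
v[20:30, :] *= 0.85          # slow layer, e.g. a compacted band
times = first_arrivals(v, (0, 25))
```

In the real inversion, such forward solves are repeated for every trial model while the quasi-Newton loop updates velocities, anisotropy and source locations.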
NASA Astrophysics Data System (ADS)
Wang, Zhi-peng; Zhang, Shuai; Liu, Hong-zhao; Qin, Yi
2014-12-01
Based on a phase retrieval algorithm and QR codes, a new optical encryption technique that needs to record only one intensity distribution is proposed. In the encryption process, a QR code is first generated from the information to be encrypted; the generated QR code is then placed in the input plane of a 4-f system and subjected to double random phase encryption. Because only one intensity distribution in the output plane is recorded as the ciphertext, the encryption process is greatly simplified. In the decryption process, the corresponding QR code is retrieved using a phase retrieval algorithm. A priori information about the QR code is used as a support constraint in the input plane, which helps solve the stagnation problem. The original information can be recovered without distortion by scanning the QR code. The encryption process can be implemented either optically or digitally, and the decryption process uses a digital method. In addition, the security of the proposed optical encryption technique is analysed. Theoretical analysis and computer simulations show that this optical encryption system is invulnerable to various attacks and suitable for harsh transmission conditions.
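The double random phase step is easy to sketch numerically: two random phase masks, one in the input plane and one in the Fourier plane of the 4-f system, with only the output intensity kept as ciphertext. The QR content here is a stand-in random binary array.

```python
import numpy as np

rng = np.random.default_rng(42)
qr = rng.integers(0, 2, size=(64, 64)).astype(float)  # stand-in for a QR code

# The two keys: random phase masks in the input and Fourier planes.
m1 = np.exp(2j * np.pi * rng.random(qr.shape))
m2 = np.exp(2j * np.pi * rng.random(qr.shape))

# 4-f system: forward FFT, Fourier-plane mask, inverse FFT.
field = np.fft.ifft2(np.fft.fft2(qr * m1) * m2)
ciphertext = np.abs(field) ** 2   # only this intensity is recorded
```

Decryption would then run an iterative phase retrieval (e.g. alternating projections) that enforces the known ciphertext modulus in the output plane and the binary QR support in the input plane.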
A Plastic Temporal Brain Code for Conscious State Generation
Dresp-Langley, Birgitta; Durup, Jean
2009-01-01
Consciousness is known to be limited in processing capacity and often described in terms of a unique processing stream across a single dimension: time. In this paper, we discuss a purely temporal pattern code, functionally decoupled from spatial signals, for conscious state generation in the brain. Arguments in favour of such a code include Dehaene et al.'s long-distance reverberation postulate, Ramachandran's remapping hypothesis, evidence for a temporal coherence index and coincidence detectors, and Grossberg's Adaptive Resonance Theory. A time-bin resonance model is developed, where temporal signatures of conscious states are generated on the basis of signal reverberation across large distances in highly plastic neural circuits. The temporal signatures are delivered by neural activity patterns which, beyond a certain statistical threshold, activate, maintain, and terminate a conscious brain state like a bar code would activate, maintain, or inactivate the electronic locks of a safe. Such temporal resonance would reflect a higher level of neural processing, independent from sensorial or perceptual brain mechanisms. PMID:19644552
Bar-Code System for a Microbiological Laboratory
NASA Technical Reports Server (NTRS)
Law, Jennifer; Kirschner, Larry
2007-01-01
A bar-code system has been assembled for a microbiological laboratory that must examine a large number of samples. The system includes a commercial bar-code reader, computer hardware and software components, plus custom-designed database software. The software generates a user-friendly, menu-driven interface.
Critical roles for a genetic code alteration in the evolution of the genus Candida.
Silva, Raquel M; Paredes, João A; Moura, Gabriela R; Manadas, Bruno; Lima-Costa, Tatiana; Rocha, Rita; Miranda, Isabel; Gomes, Ana C; Koerkamp, Marian J G; Perrot, Michel; Holstege, Frank C P; Boucherie, Hélian; Santos, Manuel A S
2007-10-31
During the last 30 years, several alterations to the standard genetic code have been discovered in various bacterial and eukaryotic species. Sense and nonsense codons have been reassigned or reprogrammed to expand the genetic code to selenocysteine and pyrrolysine. These discoveries highlight unexpected flexibility in the genetic code, but do not elucidate how the organisms survived the proteome chaos generated by codon identity redefinition. In order to shed new light on this question, we have reconstructed a Candida genetic code alteration in Saccharomyces cerevisiae and used a combination of DNA microarrays, proteomics and genetics approaches to evaluate its impact on gene expression, adaptation and sexual reproduction. This genetic manipulation blocked mating, locked yeast in a diploid state, remodelled gene expression and created stress cross-protection that generated adaptive advantages under environmentally challenging conditions. This study highlights unanticipated roles for codon identity redefinition during the evolution of the genus Candida, and strongly suggests that genetic code alterations create genetic barriers that speed up speciation.
NASA Astrophysics Data System (ADS)
Hadia, Sarman K.; Thakker, R. A.; Bhatt, Kirit R.
2016-05-01
This study proposes an application of evolutionary algorithms, specifically an artificial bee colony (ABC) algorithm, a variant ABC and particle swarm optimisation (PSO), to extract the parameters of a metal oxide semiconductor field effect transistor (MOSFET) model. These algorithms are applied to the MOSFET parameter extraction problem using the Pennsylvania surface potential model. MOSFET parameter extraction procedures involve reducing the error between measured and modelled data. This study shows that the ABC algorithm optimises the parameter values based on the intelligent activities of honey bee swarms; some modifications have also been applied to the basic ABC algorithm. Particle swarm optimisation is a population-based stochastic optimisation method based on bird flocking activities. The performances of these algorithms are compared with respect to the quality of the solutions. The simulation results of this study show that the PSO algorithm performs better than the variant ABC and basic ABC algorithms for the parameter extraction of the MOSFET model; the implementation of the ABC algorithm is also shown to be simpler than that of the PSO algorithm.
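A compact PSO sketch for this kind of extraction task: minimise the squared error between measured and modelled currents over a bounded parameter vector. The two-parameter `model` below is a hypothetical stand-in, not the Pennsylvania surface potential model.

```python
import numpy as np

def pso(cost, bounds, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise cost over box bounds with a basic global-best PSO."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n, len(lo)))
    v = np.zeros_like(x)
    pbest, pcost = x.copy(), np.array([cost(p) for p in x])
    g = pbest[pcost.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[pcost.argmin()]
    return g, pcost.min()

# Hypothetical square-law I-V "model" with parameters (gain, threshold).
vg = np.linspace(1.0, 3.0, 20)
model = lambda p: p[0] * np.maximum(vg - p[1], 0.0) ** 2
measured = 0.5 * np.maximum(vg - 0.7, 0.0) ** 2   # synthetic "measurement"
cost = lambda p: float(np.sum((model(p) - measured) ** 2))
print(pso(cost, [(0.1, 2.0), (0.0, 1.5)]))   # recovers ~(0.5, 0.7)
```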
NASA Astrophysics Data System (ADS)
Jian, Le; Cao, Wang; Jintao, Yang; Yinge, Wang
2018-04-01
This paper describes the design of a dynamic voltage restorer (DVR) that can simultaneously protect several sensitive loads from voltage sags in a region of an MV distribution network. A novel reference voltage calculation method based on zero-sequence voltage optimisation is proposed for this DVR to optimise cost-effectiveness in compensating voltage sags with different characteristics in an ungrounded neutral system. Based on a detailed analysis of the characteristics of voltage sags caused by different types of faults, and of the effect of the transformer wiring mode on these characteristics, the optimisation target of the reference voltage calculation is formulated with several constraints. The reference voltages under all types of voltage sags are calculated by optimising the zero-sequence component, which reduces the swell in the phase-to-ground voltage after compensation as far as possible and improves the symmetry of the DVR output voltages, thereby effectively increasing the compensation ability. The validity and effectiveness of the proposed method are verified by simulation and experimental results.
Codes over infinite family of rings: Equivalence and invariant ring
NASA Astrophysics Data System (ADS)
Irwansyah, Muchtadi-Alamsyah, Intan; Muchlis, Ahmad; Barra, Aleams; Suprijanto, Djoko
2016-02-01
In this paper, we study codes over the ring B_k = 𝔽_{p^r}[v_1, …, v_k]/(v_i² = v_i, for all i = 1, …, k). In particular, we focus on two topics: a characterisation of the equivalence condition between two codes over B_k using a Gray map into codes over the finite field 𝔽_{p^r}, and finding generators for the invariant ring of the Hamming weight enumerator of Euclidean self-dual codes over B_k.
Auto-Coding UML Statecharts for Flight Software
NASA Technical Reports Server (NTRS)
Benowitz, Edward G; Clark, Ken; Watney, Garth J.
2006-01-01
Statecharts have been used as a means to communicate behaviors in a precise manner between system engineers and software engineers. Hand-translating a statechart to code, as done on some previous space missions, introduces the possibility of errors in the transformation from chart to code. To improve auto-coding, we have developed a process that generates flight code from UML statecharts. Our process is being used for the flight software on the Space Interferometer Mission (SIM).
Integrated circuit test-port architecture and method and apparatus of test-port generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teifel, John
A method and apparatus are provided for generating RTL code for a test-port interface of an integrated circuit. In an embodiment, a test-port table is provided as input data. A computer automatically parses the test-port table into data structures and analyzes it to determine input, output, local, and output-enable port names. The computer generates address-detect and test-enable logic constructed from combinational functions. The computer generates one-hot multiplexer logic for at least some of the output ports. The one-hot multiplexer logic for each port is generated so as to enable the port to toggle between data signals and test signals. The computer then completes the generation of the RTL code.
Practice patterns of academic general thoracic and adult cardiac surgeons.
Ingram, Michael T; Wisner, David H; Cooke, David T
2014-10-01
We hypothesized that academic adult cardiac surgeons (CSs) and general thoracic surgeons (GTSs) would have distinct practice patterns, not just in case-mix, but also in time devoted to outpatient care, involvement in critical care, and work relative value unit (wRVU) generation for the procedures they perform. We queried the University Health System Consortium-Association of American Medical Colleges Faculty Practice Solution Center database for fiscal years 2007-2008, 2008-2009, and 2009-2010 for the frequency of inpatient and outpatient Current Procedural Terminology (CPT) coding and wRVU data of academic GTSs and CSs. The Faculty Practice Solution Center database is a compilation of productivity and payer data from 86 academic institutions. The greatest wRVU-generating CPT codes for CSs were, in order, coronary artery bypass grafting, aortic valve replacement, and mitral valve replacement. In contrast, open lobectomy, video-assisted thoracic surgery wedge resection, and video-assisted thoracic surgery lobectomy were the greatest for GTSs. The 10 greatest wRVU-generating procedures for CSs generated more wRVUs than those for GTSs (P<.001). Although CSs generated significantly more hospital inpatient evaluation and management (E & M) wRVUs than did GTSs (P<.001), only 2.5% of the total wRVUs generated by CSs were from E & M codes versus 18.8% for GTSs. Critical care codes accounted for 1.5% of total E & M billing for both CSs and GTSs. Academic CSs and GTSs have distinct practice patterns. CSs receive greater reimbursement for services because of the greater wRVUs of the procedures they perform compared with GTSs, and E & M coding is a more important wRVU generator for GTSs. The results of our study could guide academic CS and GTS practice structure and time prioritization. Copyright © 2014 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.
Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets.
Scharfe, Michael; Pielot, Rainer; Schreiber, Falk
2010-01-11
Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. We evaluate the CBE-driven PlayStation 3 as a high performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch reducing and loop unrolling techniques with special attention to 32-bit multiplies and limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. The results demonstrate that the CBE processor in a PlayStation 3 accelerates computational intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3 as a low cost CBE-based platform offers an efficient option to conventional hardware to solve computational problems in image processing and bioinformatics.
Incompressible SPH (ISPH) with fast Poisson solver on a GPU
NASA Astrophysics Data System (ADS)
Chow, Alex D.; Rogers, Benedict D.; Lind, Steven J.; Stansby, Peter K.
2018-05-01
This paper presents a fast incompressible SPH (ISPH) solver implemented to run entirely on a graphics processing unit (GPU), capable of simulating several million particles in three dimensions on a single GPU. The ISPH algorithm is implemented by converting the highly optimised open-source weakly-compressible SPH (WCSPH) code DualSPHysics to run ISPH on the GPU, combining it with the open-source linear algebra library ViennaCL for fast solution of the pressure Poisson equation (PPE). Several challenges are addressed in this research: constructing a PPE matrix every timestep on the GPU for moving particles, optimising the limited GPU memory, and exploiting fast matrix solvers. The ISPH pressure projection algorithm is implemented as 4 separate stages, each with a particle sweep, including an algorithm for the population of the PPE matrix suitable for the GPU, and mixed-precision storage methods. An accurate and robust ISPH boundary condition ideal for parallel processing is also established by adapting an existing WCSPH boundary condition for ISPH. A variety of validation cases are presented: an impulsively started plate, incompressible flow around a moving square in a box, and dam breaks (2-D and 3-D), which demonstrate the accuracy, flexibility, and speed of the methodology. Fragmentation of the free surface is shown to influence the performance of matrix preconditioners and therefore the PPE matrix solution time; the Jacobi preconditioner demonstrates robustness and reliability in the presence of fragmented flows. For a dam-break simulation, GPU speed-ups of 10-18 times over single-threaded and 1.1-4.5 times over 16-threaded CPU run times are demonstrated.
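The PPE stage in isolation reduces to a sparse linear solve. A hedged sketch on a structured grid (standing in for the particle-based matrix), pairing a Jacobi, i.e. diagonal, preconditioner with conjugate gradients via scipy, mirroring the preconditioner choice discussed above:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg

# 2-D Poisson matrix on an n x n grid, a stand-in for the ISPH PPE matrix.
n = 64
one_d = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.identity(n), one_d) + sp.kron(one_d, sp.identity(n))).tocsr()

# Right-hand side standing in for the velocity-divergence source term.
b = np.random.default_rng(0).standard_normal(n * n)

# Jacobi preconditioner: multiply by the inverse of the matrix diagonal.
d_inv = 1.0 / A.diagonal()
M = LinearOperator(A.shape, matvec=lambda x: d_inv * x)

p, info = cg(A, b, M=M)   # pressure solve; info == 0 signals convergence
```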
Teachers’ perceptions of aspects affecting seminar learning: a qualitative study
2013-01-01
Background Many medical schools have embraced small group learning methods in their undergraduate curricula. Given increasing financial constraints on universities, active learning groups like seminars (with 25 students a group) are gaining popularity. To enhance the understanding of seminar learning and to determine how seminar learning can be optimised it is important to investigate stakeholders’ views. In this study, we qualitatively explored the views of teachers on aspects affecting seminar learning. Methods Twenty-four teachers with experience in facilitating seminars in a three-year bachelor curriculum participated in semi-structured focus group interviews. Three focus groups met twice with an interval of two weeks led by one moderator. Sessions were audio taped, transcribed verbatim and independently coded by two researchers using thematic analysis. An iterative process of data reduction resulted in emerging aspects that influence seminar learning. Results Teachers identified seven key aspects affecting seminar learning: the seminar teacher, students, preparation, group functioning, seminar goals and content, course coherence and schedule and facilities. Important components of these aspects were: the teachers’ role in developing seminars (‘ownership’), the amount and quality of preparation materials, a non-threatening learning climate, continuity of group composition, suitability of subjects for seminar teaching, the number and quality of seminar questions, and alignment of different course activities. Conclusions The results of this study contribute to the unravelling of the ‘the black box’ of seminar learning. Suggestions for ways to optimise active learning in seminars are made regarding curriculum development, seminar content, quality assurance and faculty development. PMID:23399475
ERIC Educational Resources Information Center
Vu, Hai Ha
2017-01-01
As the younger generation in Vietnam increasingly switches between the English and the Vietnamese languages, numerous linguistic and sociocultural strictures arise. Foregrounding the preservice English-language teachers of this generation, this article locates them in a dilemma between the discourse of globalization and their code-switching…
NASA Technical Reports Server (NTRS)
Spradley, L.; Pearson, M.
1979-01-01
The General Interpolants Method (GIM), a three-dimensional, time-dependent, hybrid procedure for generating numerical analogs of the conservation laws, is described. The Navier-Stokes equations written for an Eulerian system are considered. The conversion of the GIM code to the STAR-100 computer and the implementation of 'GIM-ON-STAR' are discussed.
Muharam, Yuswan; Warnatz, Jürgen
2007-08-21
A mechanism generator code that automatically generates mechanisms for the oxidation of large hydrocarbons has been successfully modified and considerably expanded in this work. The modifications comprised (1) improvement of existing rules, such as those for cyclic-ether reactions and aldehyde reactions, (2) inclusion of additional rules in the code, such as ketone reactions, hydroperoxy cyclic-ether formations and additional reactions of alkenes, and (3) inclusion of small oxygenates, produced by the code but not previously covered, in the handwritten C1-C4 sub-mechanism. In order to evaluate mechanisms generated by the code, simulations of observed results in different experimental environments have been carried out. Experimentally derived and numerically predicted ignition delays of n-heptane-air and n-decane-air mixtures in high-pressure shock tubes agree very well over a wide range of temperatures, pressures and equivalence ratios. Concentration profiles of the main products and intermediates of n-heptane and n-decane oxidation in jet-stirred reactors over a wide range of temperatures and equivalence ratios are generally well reproduced. In addition, the ignition delay times of different normal alkanes were numerically studied.
Green's function methods in heavy ion shielding
NASA Technical Reports Server (NTRS)
Wilson, John W.; Costen, Robert C.; Shinn, Judy L.; Badavi, Francis F.
1993-01-01
An analytic solution to heavy ion transport in terms of a Green's function is used to generate a highly efficient computer code for space applications. The efficiency of the computer code is achieved by a nonperturbative technique extending the Green's function over the solution domain. The computer code can also be applied with accelerator boundary conditions to allow code validation in laboratory experiments.
NASA Astrophysics Data System (ADS)
Astley, R. J.; Sugimoto, R.; Mustafi, P.
2011-08-01
Novel techniques are presented to reduce noise from turbofan aircraft engines by optimising the acoustic treatment in engine ducts. The application of Computational Aero-Acoustics (CAA) to predict acoustic propagation and absorption in turbofan ducts is reviewed and a critical assessment of performance indicates that validated and accurate techniques are now available for realistic engine predictions. A procedure for integrating CAA methods with state of the art optimisation techniques is proposed in the remainder of the article. This is achieved by embedding advanced computational methods for noise prediction within automated and semi-automated optimisation schemes. Two different strategies are described and applied to realistic nacelle geometries and fan sources to demonstrate the feasibility of this approach for industry scale problems.
Subotin, Michael; Davis, Anthony R
2016-09-01
Natural language processing methods for medical auto-coding, or the automatic generation of medical billing codes from electronic health records, generally assign each code independently of the others. They may thus assign codes for closely related procedures or diagnoses to the same document, even when they do not tend to occur together in practice, simply because the right choice can be difficult to infer from the clinical narrative. We propose a method that injects awareness of the propensities for code co-occurrence into this process. First, a model is trained to estimate the conditional probability that one code is assigned by a human coder, given that another code is known to have been assigned to the same document. Then, at runtime, an iterative algorithm is used to apply this model to the output of an existing statistical auto-coder to modify the confidence scores of the codes. We tested this method in combination with a primary auto-coder for International Statistical Classification of Diseases-10 procedure codes, achieving a 12% relative improvement in F-score over the primary auto-coder baseline. The proposed method can be used, with appropriate features, in combination with any auto-coder that generates codes with different levels of confidence. The promising results obtained for International Statistical Classification of Diseases-10 procedure codes suggest that the proposed method may have wider applications in auto-coding. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
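The runtime adjustment can be sketched as a small fixed-point iteration: blend each code's primary confidence with co-occurrence evidence from the codes currently above threshold. The blending rule, threshold and example codes below are illustrative assumptions, not the published model.

```python
def rescore(base, cond_prob, threshold=0.5, rounds=5, alpha=0.3):
    """base: {code: primary auto-coder confidence}.
    cond_prob: {(i, j): estimated P(i assigned | j assigned)}.
    Returns confidences adjusted for code co-occurrence."""
    scores = dict(base)
    for _ in range(rounds):
        assigned = {c for c, s in scores.items() if s >= threshold}
        new = {}
        for i in scores:
            evidence = [cond_prob.get((i, j), base[i])
                        for j in assigned if j != i]
            support = max(evidence) if evidence else base[i]
            new[i] = (1 - alpha) * base[i] + alpha * support
        scores = new
    return scores

base = {"PROC_A": 0.62, "PROC_B": 0.58, "PROC_C": 0.40}  # toy codes
cond = {("PROC_B", "PROC_A"): 0.05}  # A and B rarely co-assigned in practice
print(rescore(base, cond))           # PROC_B's confidence is pulled down
```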
Progress on China nuclear data processing code system
NASA Astrophysics Data System (ADS)
Liu, Ping; Wu, Xiaofei; Ge, Zhigang; Li, Songyang; Wu, Haicheng; Wen, Lili; Wang, Wenming; Zhang, Huanyu
2017-09-01
China is developing the nuclear data processing code Ruler, which can be used for producing multigroup cross sections and related quantities from evaluated nuclear data in the ENDF format [1]. Ruler includes modules for reconstructing cross sections over the full energy range, generating Doppler-broadened cross sections for a given temperature, producing effective self-shielded cross sections in the unresolved energy range, calculating scattering cross sections in the thermal energy range, generating group cross sections and matrices, and preparing WIMS-D format data files for the reactor physics code WIMS-D [2]. The programming language of Ruler is Fortran-90. Ruler has been tested on 32-bit computers with Windows-XP and Linux operating systems. Verification of Ruler has been performed by comparison with calculation results obtained by the NJOY99 [3] processing code. Validation has been performed using the WIMSD5B code.
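The group-averaging module's core operation — collapsing a pointwise cross section onto a coarse group structure with a weighting flux — reduces to flux-weighted integrals over each group interval. A generic numerical sketch with a 1/E weighting spectrum; the cross section and group bounds are illustrative, not Ruler's implementation.

```python
import numpy as np

def trapezoid(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def collapse(energy, sigma, flux, group_bounds):
    """sigma_g = int(sigma*flux dE) / int(flux dE) over each group."""
    sig_g = []
    for lo, hi in zip(group_bounds[:-1], group_bounds[1:]):
        m = (energy >= lo) & (energy <= hi)
        sig_g.append(trapezoid(sigma[m] * flux[m], energy[m])
                     / trapezoid(flux[m], energy[m]))
    return np.array(sig_g)

E = np.logspace(0, 6, 5000)        # pointwise energy grid (eV)
sigma = 10.0 / np.sqrt(E) + 2.0    # toy 1/v-like cross section (barns)
phi = 1.0 / E                      # 1/E weighting spectrum
bounds = np.logspace(0, 6, 7)      # six coarse groups
print(collapse(E, sigma, phi, bounds))
```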
From Physics Model to Results: An Optimizing Framework for Cross-Architecture Code Generation
Blazewicz, Marek; Hinder, Ian; Koppelman, David M.; ...
2013-01-01
Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of variables and thousands of terms.
A grid generation system for multi-disciplinary design optimization
NASA Technical Reports Server (NTRS)
Jones, William T.; Samareh-Abolhassani, Jamshid
1995-01-01
A general multi-block three-dimensional volume grid generator is presented which is suitable for Multi-Disciplinary Design Optimization. The code is timely, robust, highly automated, and written in ANSI 'C' for platform independence. Algebraic techniques are used to generate and/or modify block face and volume grids to reflect geometric changes resulting from design optimization. Volume grids are generated/modified in a batch environment and controlled via an ASCII user input deck. This allows the code to be incorporated directly into the design loop. Generated volume grids are presented for a High Speed Civil Transport (HSCT) Wing/Body geometry as well a complex HSCT configuration including horizontal and vertical tails, engine nacelles and pylons, and canard surfaces.
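The algebraic technique typically underlying such a generator is transfinite interpolation (TFI), which blends a grid from its boundary curves. A 2-D sketch under the assumption of four parameterised boundaries with matching corners (the production code works block-by-block in 3-D):

```python
import numpy as np

def tfi_2d(bottom, top, left, right):
    """Transfinite interpolation of an (m, n, 2) grid of x,y points from
    four boundary curves: bottom/top of shape (n, 2), left/right (m, 2)."""
    m, n = left.shape[0], bottom.shape[0]
    xi = np.linspace(0.0, 1.0, n)[None, :, None]
    eta = np.linspace(0.0, 1.0, m)[:, None, None]
    return ((1 - eta) * bottom[None, :, :] + eta * top[None, :, :]
            + (1 - xi) * left[:, None, :] + xi * right[:, None, :]
            - (1 - xi) * (1 - eta) * bottom[0] - xi * (1 - eta) * bottom[-1]
            - (1 - xi) * eta * top[0] - xi * eta * top[-1])

# Example: a unit channel whose lower wall carries a sinusoidal bump.
n, m = 41, 21
s = np.linspace(0.0, 1.0, n)
t = np.linspace(0.0, 1.0, m)
bottom = np.stack([s, 0.1 * np.sin(np.pi * s)], axis=1)
top = np.stack([s, np.ones_like(s)], axis=1)
left = np.stack([np.zeros_like(t), t], axis=1)
right = np.stack([np.ones_like(t), t], axis=1)
grid = tfi_2d(bottom, top, left, right)   # shape (21, 41, 2)
```

Because the interior follows the boundaries algebraically, regenerating a volume grid after a design change only requires updated face grids, which is what makes this class of method suitable for an automated optimisation loop.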
Support for Systematic Code Reviews with the SCRUB Tool
NASA Technical Reports Server (NTRS)
Holzmann, Gerald J.
2010-01-01
SCRUB is a code review tool that supports both large, team-based software development efforts (e.g., for mission software) and individual tasks. The tool was developed at JPL to support a new, streamlined code review process that combines human-generated review reports with program-generated review reports from a customizable range of state-of-the-art source code analyzers. The leading commercial tools include Codesonar, Coverity, and Klocwork, each of which can achieve a reasonably low rate of false positives in the warnings that they generate. The time required to analyze code with these tools can vary greatly. In each case, however, the tools produce results that would be difficult to realize with human code inspections alone. There is little overlap in the results produced by the different analyzers, and each analyzer used generally increases the effectiveness of the overall effort. The SCRUB tool allows all reports to be accessed through a single, uniform interface that facilitates browsing code and reports. Improvements over existing software include significant simplification and the leveraging of a range of commercial static source code analyzers in a single, uniform framework. The tool runs as a small stand-alone application, avoiding the security problems related to tools based on Web browsers. A developer or reviewer, for instance, must have already obtained access rights to a code base before that code can be browsed and reviewed with the SCRUB tool. The tool cannot open any files or folders to which the user does not already have access, so it does not need to enforce or administer any additional security policies. The analysis results presented through the SCRUB tool's user interface are always computed off-line, given that, especially for larger projects, this computation can take longer than appropriate for interactive tool use. The recommended code review process supported by the SCRUB tool consists of three phases: Code Review, Developer Response, and Closeout Resolution. In the Code Review phase, all tool-based analysis reports are generated, and specific comments from expert code reviewers are entered into the SCRUB tool. In the second phase, Developer Response, the developer is asked to respond to each comment and tool report that was produced, either agreeing or disagreeing to provide a fix that addresses the issue that was raised. In the third phase, Closeout Resolution, all disagreements are discussed in a meeting of all parties involved, and a resolution is reached for each. The first two phases generally take one week each, and the third phase is concluded in a single closeout meeting.
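The "single, uniform interface" idea can be sketched as a normalisation of heterogeneous reports into one record type (hypothetical field names; SCRUB's actual data model is not described in this abstract):

    from dataclasses import dataclass

    # Hypothetical sketch: reports from different analyzers and human
    # reviewers are normalised into one record type so they can be
    # browsed and dispositioned together.
    @dataclass
    class ReviewItem:
        source: str      # e.g. "coverity", "klocwork", "reviewer:gh"
        path: str
        line: int
        message: str
        response: str = ""   # filled in during the Developer Response phase

    def merge(*report_lists):
        items = [i for reports in report_lists for i in reports]
        return sorted(items, key=lambda i: (i.path, i.line))

    tool = [ReviewItem("coverity", "nav.c", 120, "possible null deref")]
    human = [ReviewItem("reviewer:gh", "nav.c", 98, "unclear invariant")]
    for item in merge(tool, human):
        print(f"{item.path}:{item.line} [{item.source}] {item.message}")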
Coset Codes Viewed as Terminated Convolutional Codes
NASA Technical Reports Server (NTRS)
Fossorier, Marc P. C.; Lin, Shu
1996-01-01
In this paper, coset codes are considered as terminated convolutional codes. Based on this approach, three new general results are presented. First, it is shown that the iterative squaring construction can equivalently be defined from a convolutional code whose trellis terminates. This convolutional code determines a simple encoder for the coset code considered, and the state and branch labelings of the associated trellis diagram become straightforward. Also, from the generator matrix of the code in its convolutional code form, much information about the trade-off between the state connectivity and complexity at each section, and the parallel structure of the trellis, is directly available. Based on this generator matrix, it is shown that the parallel branches in the trellis diagram of the convolutional code represent the same coset code C(sub 1), of smaller dimension and shorter length. Utilizing this fact, a two-stage optimum trellis decoding method is devised. The first stage decodes C(sub 1), while the second stage decodes the associated convolutional code, using the branch metrics delivered by stage 1. Finally, a bidirectional decoding of each received block, starting at both ends, is presented. Although it requires about the same number of computations, this approach remains very attractive from a practical point of view, as it roughly doubles the decoding speed. This is particularly interesting whenever the second half of the trellis is the mirror image of the first half, since the same decoder can be implemented for both parts.
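For readers unfamiliar with trellis decoding of terminated convolutional codes, the following minimal hard-decision Viterbi decoder for the standard rate-1/2 code with generators (7, 5) octal illustrates the underlying machinery (the paper's two-stage and bidirectional schemes build on the same trellis but are not reproduced here):

    # Minimal hard-decision Viterbi decoder for the terminated rate-1/2
    # convolutional code with generators (7, 5) octal, constraint length 3.
    G = (0b111, 0b101)

    def encode(bits):
        state, out = 0, []
        for b in list(bits) + [0, 0]:        # two zero bits terminate the trellis
            reg = (b << 2) | state           # reg holds (u[n], u[n-1], u[n-2])
            out += [bin(reg & g).count("1") & 1 for g in G]
            state = reg >> 1                 # next state = (u[n], u[n-1])
        return out

    def viterbi(received):
        metrics, paths = {0: 0}, {0: []}     # survivor metric and bits per state
        for t in range(0, len(received), 2):
            r = received[t:t + 2]
            new_m, new_p = {}, {}
            for state, m in metrics.items():
                for b in (0, 1):
                    reg = (b << 2) | state
                    out = [bin(reg & g).count("1") & 1 for g in G]
                    ns = reg >> 1
                    cost = m + sum(o != x for o, x in zip(out, r))
                    if cost < new_m.get(ns, float("inf")):
                        new_m[ns], new_p[ns] = cost, paths[state] + [b]
            metrics, paths = new_m, new_p
        return paths[0][:-2]                 # terminated codes end in state 0

    msg = [1, 0, 1, 1, 0, 0, 1]
    rx = encode(msg)
    rx[3] ^= 1                               # inject a single channel error
    print(viterbi(rx) == msg)                # True: the error is corrected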
Automatic code generation in SPARK: Applications of computer algebra and compiler-compilers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nataf, J.M.; Winkelmann, F.
We show how computer algebra and compiler-compilers are used for automatic code generation in the Simulation Problem Analysis and Research Kernel (SPARK), an object-oriented environment for modeling complex physical systems that can be described by differential-algebraic equations. After a brief overview of SPARK, we describe the use of computer algebra in SPARK's symbolic interface, which generates solution code for equations that are entered in symbolic form. We also describe how the Lex/Yacc compiler-compiler is used to achieve important extensions to the SPARK simulation language, including parametrized macro objects and steady-state resetting of a dynamic simulation. The application of these methods to solving the partial differential equations for two-dimensional heat flow is illustrated.
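The symbolic-interface idea can be sketched with a modern computer algebra system (SymPy here, purely as an analogy; this is not SPARK's actual interface): an equation entered symbolically is solved for each of its variables, and executable solution code is generated from the inverses:

    import sympy as sp

    # SPARK-like idea sketched with SymPy: an equation entered
    # symbolically is solved for each variable, and executable
    # "solution code" is generated for the solver graph.
    x, y, z = sp.symbols("x y z")
    eq = sp.Eq(x + sp.sin(y) - z, 0)         # the symbolic model equation

    inverses = {v: sp.solve(eq, v)[0] for v in (x, y, z)}
    print(inverses[x])                       # z - sin(y)

    # Emit a callable for one inverse, as a code generator would.
    solve_for_x = sp.lambdify((y, z), inverses[x])
    print(solve_for_x(0.5, 2.0))             # 2.0 - sin(0.5) ~= 1.5206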
Vector Potential Generation for Numerical Relativity Simulations
NASA Astrophysics Data System (ADS)
Silberman, Zachary; Faber, Joshua; Adams, Thomas; Etienne, Zachariah; Ruchlin, Ian
2017-01-01
Many different numerical codes are employed in studies of highly relativistic magnetized accretion flows around black holes. Based on the formalisms each uses, some codes evolve the magnetic field vector B, while others evolve the magnetic vector potential A, the two being related by the curl: B=curl(A). Here, we discuss how to generate vector potentials corresponding to specified magnetic fields on staggered grids, a surprisingly difficult task on finite cubic domains. The code we have developed solves this problem in two ways: a brute-force method, whose scaling is nearly linear in the number of grid cells, and a direct linear algebra approach. We discuss the success both algorithms have in generating smooth vector potential configurations and how both may be extended to more complicated cases involving multiple mesh-refinement levels. NSF ACI-1550436
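The staggered-grid setting can be made concrete with a small sketch (illustrative layout, not the authors' solver): with A components on cell edges and B components on cell faces, the discrete identity div(curl A) = 0 holds to round-off, which is precisely why evolving A preserves the solenoidal constraint:

    import numpy as np

    # Sketch of the staggered-grid relation B = curl(A): A components
    # sit on cell edges, B components on cell faces, so the discrete
    # divergence of B vanishes to round-off.
    def curl(Ax, Ay, Az, h):
        Bx = (np.diff(Az, axis=1) - np.diff(Ay, axis=2)) / h
        By = (np.diff(Ax, axis=2) - np.diff(Az, axis=0)) / h
        Bz = (np.diff(Ay, axis=0) - np.diff(Ax, axis=1)) / h
        return Bx, By, Bz

    n, h = 16, 1.0 / 16
    rng = np.random.default_rng(0)
    Ax = rng.standard_normal((n, n + 1, n + 1))   # x-edges
    Ay = rng.standard_normal((n + 1, n, n + 1))   # y-edges
    Az = rng.standard_normal((n + 1, n + 1, n))   # z-edges
    Bx, By, Bz = curl(Ax, Ay, Az, h)
    divB = (np.diff(Bx, axis=0) + np.diff(By, axis=1)
            + np.diff(Bz, axis=2)) / h
    print(np.max(np.abs(divB)))                   # ~1e-13: div(curl A) = 0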
An Infrastructure for UML-Based Code Generation Tools
NASA Astrophysics Data System (ADS)
Wehrmeister, Marco A.; Freitas, Edison P.; Pereira, Carlos E.
The use of Model-Driven Engineering (MDE) techniques in the domain of distributed embedded real-time systems is gaining importance as a means to cope with the increasing design complexity of such systems. This paper discusses an infrastructure created to build GenERTiCA, a flexible tool that supports an MDE approach and uses aspect-oriented concepts to handle non-functional requirements from the embedded and real-time systems domain. GenERTiCA generates source code from UML models, and also performs weaving of aspects that have been specified within the UML model. Additionally, this paper discusses the Distributed Embedded Real-Time Compact Specification (DERCS), a Platform-Independent Model (PIM) created to support UML-based code generation tools. Some heuristics to transform UML models into DERCS, which have been implemented in GenERTiCA, are also discussed.
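The core of script- or template-based code generation from a platform-independent model can be sketched in a few lines (hypothetical model fields and template; not GenERTiCA's actual mapping scripts):

    # Hypothetical sketch of template-driven code generation from a
    # DERCS-like platform-independent model element.
    CLASS_TEMPLATE = """class {name} {{
    {fields}
    }};"""

    def generate(element):
        fields = "\n".join(f"    {t} {n};" for n, t in element["attributes"])
        return CLASS_TEMPLATE.format(name=element["name"], fields=fields)

    model = {"name": "SpeedSensor",
             "attributes": [("value", "float"), ("periodMs", "int")]}
    print(generate(model))   # emits a C++-style class declaration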
Increasing the information rates of optical communications via coded modulation: a study of transceiver performance
NASA Astrophysics Data System (ADS)
Maher, Robert; Alvarado, Alex; Lavery, Domaniç; Bayvel, Polina
2016-02-01
Optical fibre underpins the global communications infrastructure and has experienced an astonishing evolution over the past four decades, with current commercial systems transmitting data rates in excess of 10 Tb/s over a single fibre core. The continuation of this dramatic growth in throughput has become constrained by a power-dependent nonlinear distortion arising from a phenomenon known as the Kerr effect. The mitigation of fibre nonlinearities is an area of intense research. However, even in the absence of nonlinear distortion, the practical limit on the transmission throughput of a single fibre core is dominated by the finite signal-to-noise ratio (SNR) afforded by current state-of-the-art coherent optical transceivers. Therefore, the key to maximising the number of information bits that can be reliably transmitted over a fibre channel hinges on the simultaneous optimisation of the modulation format and code rate, based on the SNR achieved at the receiver. In this work, we use an information theoretic approach based on the mutual information and the generalised mutual information to characterise a state-of-the-art dual-polarisation m-ary quadrature amplitude modulation transceiver, and subsequently apply this methodology to a 15-carrier super-channel to achieve the highest throughput (1.125 Tb/s) ever recorded using a single coherent receiver.
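The central quantity in this optimisation, the mutual information of an m-QAM constellation over an additive white Gaussian noise channel at a given SNR, can be estimated by Monte Carlo as sketched below (a simplification that ignores fibre effects and transceiver impairments):

    import numpy as np

    # Monte Carlo sketch of the mutual information of uniform m-QAM over
    # an AWGN channel: the quantity used to match modulation format and
    # code rate to the received SNR.
    def qam_mi(m, snr_db, n=100_000, seed=0):
        k = int(np.sqrt(m))
        pam = 2.0 * np.arange(k) - k + 1
        const = (pam[:, None] + 1j * pam[None, :]).ravel()
        const /= np.sqrt(np.mean(np.abs(const) ** 2))   # unit average power
        rng = np.random.default_rng(seed)
        x = rng.choice(const, n)
        sigma2 = 10.0 ** (-snr_db / 10)
        noise = rng.normal(0, np.sqrt(sigma2 / 2), (n, 2))
        y = x + noise[:, 0] + 1j * noise[:, 1]
        # MI = E[log2(p(y|x) / mean over x' of p(y|x'))] for equiprobable
        # symbols; the Gaussian normalisation cancels, so unnormalised
        # log-likelihoods suffice.
        logp = -np.abs(y[:, None] - const[None, :]) ** 2 / sigma2
        num = -np.abs(y - x) ** 2 / sigma2
        mx = logp.max(axis=1)
        den = np.log(np.mean(np.exp(logp - mx[:, None]), axis=1)) + mx
        return np.mean(num - den) / np.log(2)

    print(qam_mi(16, snr_db=10.0))   # bits/symbol; approaches log2(m) at high SNR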